Okay, get ready to pick those nits! Attached is a patch for a new memory allocator. Please review and tear it apart. If you hate what I have done, then please propose something different (preferably with patches!). Read on for my motivation and a description of what's going on here.
What was wrong with the old allocator you say? Glad you asked.
The old memory allocator worked on a static region of memory set aside by the linker - in the case of libpayload it was 16k, slightly above the .data segment of the payload. This was so that we didn't need to worry about figuring out how much system memory we had, among other concerns.
Well, 16k goes quickly. We are outgrowing our little sandbox. As an example, I wanted to change the menu system in coreinfo to allow for more items. This would have involved a "popup" window that the user could navigate to select the next area to examine. Unfortunately, the current implementation uses a static allocation for windows, which limits us to 3 total windows, and guess what - we're already using them. If I allocate them dynamically, then that will reduce the amount of memory usable for other components, and like I said - 16k goes fast.
So, this new allocator reads the memory tables and sets up a block of allocatable memory at the top of system RAM. Right now we are hardcoding 8MB, but that will eventually be configurable both through Kconfig and at runtime.
So here is how the allocator works - right below the allocatable memory is room for an array of slots. Each slot has an entry for an offset into memory and a size, plus prev and next pointers for linked list purposes. The number of slots is currently hardcoded to 1024.
There are three lists. The unallocated list contains all of the unused slots; at init time, there are 1023 of them. The free list contains slots describing all of the free areas in the memory; at init time, this list contains a single entry with offset 0 and a size of 8MB. The final list is actually a hash of linked lists - it will contain the allocated pointers as they are created.
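To make that layout concrete, here is a minimal sketch of the slot bookkeeping and init state. All the names here (struct slot, slot_init, the 16-bucket hash) are mine for illustration - the patch itself may spell things differently:

```c
#include <stddef.h>

#define NUM_SLOTS 1024
#define MEM_SIZE  (8 * 1024 * 1024)
#define HASH_SIZE 16  /* bucket count is a guess, not from the patch */

struct slot {
	size_t offset;            /* offset into the managed memory block */
	size_t size;              /* length of the region this slot covers */
	struct slot *prev, *next; /* linkage for whichever list holds it */
};

/* The slot array lives just below the allocatable memory itself. */
static struct slot slots[NUM_SLOTS];
static struct slot *free_list;            /* free regions */
static struct slot *unallocated;          /* unused slots */
static struct slot *allocated[HASH_SIZE]; /* hash of live allocations */

static void slot_init(void)
{
	/* Slot 0: one free region spanning all managed memory. */
	slots[0] = (struct slot){ .offset = 0, .size = MEM_SIZE };
	free_list = &slots[0];

	/* The other 1023 slots go on the unallocated list. */
	unallocated = NULL;
	for (int i = 1; i < NUM_SLOTS; i++) {
		slots[i].prev = NULL;
		slots[i].next = unallocated;
		if (unallocated)
			unallocated->prev = &slots[i];
		unallocated = &slots[i];
	}

	/* The allocated hash starts out empty. */
	for (int i = 0; i < HASH_SIZE; i++)
		allocated[i] = NULL;
}
```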
So, we're set up, and the user calls malloc(). The free list is searched for a block large enough for the allocation. The slot containing the former free block is re-adjusted to the allocated size and moved to the allocated hash table, and a new slot containing the remainder of the previous free block is put on the free list. Put simply, if the original free block is 8MB and you allocate 1MB, then a new 7MB slot goes on the free list, and the 1MB allocated slot is added to the allocated hash table.
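Here is roughly what that first-fit search could look like. Again, this is my own sketch against the hypothetical structures above (names and the hash function are invented), returning an offset rather than a pointer to keep it self-contained:

```c
#include <stddef.h>

#define NUM_SLOTS 1024
#define MEM_SIZE  (8 * 1024 * 1024)
#define HASH_SIZE 16  /* bucket count is a guess, not from the patch */

struct slot {
	size_t offset, size;
	struct slot *prev, *next;
};

static struct slot slots[NUM_SLOTS];
static struct slot *free_list, *unallocated;
static struct slot *allocated[HASH_SIZE];

static size_t hash_off(size_t off) { return (off >> 12) % HASH_SIZE; }

static void list_remove(struct slot **head, struct slot *s)
{
	if (s->prev) s->prev->next = s->next; else *head = s->next;
	if (s->next) s->next->prev = s->prev;
	s->prev = s->next = NULL;
}

static void list_push(struct slot **head, struct slot *s)
{
	s->prev = NULL;
	s->next = *head;
	if (*head) (*head)->prev = s;
	*head = s;
}

static void slot_init(void)
{
	slots[0] = (struct slot){ .offset = 0, .size = MEM_SIZE };
	free_list = &slots[0];
	unallocated = NULL;
	for (int i = 1; i < NUM_SLOTS; i++)
		list_push(&unallocated, &slots[i]);
}

/* First fit: returns an offset into the managed region, or (size_t)-1. */
static size_t alloc_offset(size_t size)
{
	for (struct slot *f = free_list; f; f = f->next) {
		if (f->size < size)
			continue;
		if (f->size > size) {
			/* A fresh slot takes the remainder and stays free. */
			struct slot *rem = unallocated;
			list_remove(&unallocated, rem);
			rem->offset = f->offset + size;
			rem->size = f->size - size;
			list_push(&free_list, rem);
		}
		/* The old free slot shrinks to the request and moves
		 * over to the allocated hash table. */
		f->size = size;
		list_remove(&free_list, f);
		list_push(&allocated[hash_off(f->offset)], f);
		return f->offset;
	}
	return (size_t)-1;  /* no free block large enough */
}
```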
When the entry is free()d, its slot is moved back to the free list. We also run a consolidate function that merges consecutive free blocks and returns the unused slots to the unallocated list.
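The consolidation pass could be as dumb as this sketch: repeatedly look for a free region that ends exactly where another begins and merge them. This is illustrative only (the real patch also has to find the slot in the allocated hash first, which I skip here):

```c
#include <stddef.h>

struct slot {
	size_t offset, size;
	struct slot *prev, *next;
};

static struct slot *free_list, *unallocated;

static void list_remove(struct slot **head, struct slot *s)
{
	if (s->prev) s->prev->next = s->next; else *head = s->next;
	if (s->next) s->next->prev = s->prev;
	s->prev = s->next = NULL;
}

static void list_push(struct slot **head, struct slot *s)
{
	s->prev = NULL;
	s->next = *head;
	if (*head) (*head)->prev = s;
	*head = s;
}

/* Merge any free region that ends exactly where another begins; the
 * slot that described the second region goes back to the unallocated
 * list. Repeat until nothing more merges. */
static void consolidate(void)
{
	int merged;
	do {
		merged = 0;
		for (struct slot *a = free_list; a && !merged; a = a->next) {
			for (struct slot *b = free_list; b; b = b->next) {
				if (b != a && a->offset + a->size == b->offset) {
					a->size += b->size;
					list_remove(&free_list, b);
					list_push(&unallocated, b);
					merged = 1;
					break;
				}
			}
		}
	} while (merged);
}

/* free() in miniature: hand the slot back to the free list, tidy up. */
static void free_slot(struct slot *s)
{
	list_push(&free_list, s);
	consolidate();
}
```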
The advantage of this implementation is that it is pretty simple and easy to code, and interestingly enough, it allows us to maintain multiple chunks of dynamic memory. Why is this useful? One of the problems with reentrant payloads such as bayou is that if they need their memory to stick around, then they have to exclude it from the tables they pass to their child payloads. This is cool if you are running coreinfo, but not so cool if you are calling FILO and then the kernel, because the kernel won't return and you've wasted that memory. One solution would be to use both the original static memory block _and_ the extended memory block, and then restrict bayou so that it only uses the static memory. This implementation lets us do that, and it will just work (TM).