Okay, let me see if I can make this clearer:
You are feeding ELF binaries into LAR instead of binary blobs.
So call it what it is - a useful option to LAR to cut out the objcopy middleman. Which is fine. I'm happy with that.
Also, I would abandon the use of .o naming in the LAR - that will lead others (like me) astray into thinking an object file is living in the LAR, which is not the case.
Jordan
On 27/11/07 08:18 -0800, ron minnich wrote:
On Nov 27, 2007 8:04 AM, Jordan Crouse jordan.crouse@amd.com wrote:
I'm not saying that the new method isn't a good one, but Stefan has a point. This will be difficult to explain to people. I'll start with the most obvious question:
How many bytes is it costing me to have N ELF files in the LAR instead of N blobs?
Remember, there are no ELF files in the LAR. I removed those with the earlier fix, when I put ELF processing into LAR for the payloads. Recall that before I made this change we took ELF files and, with no processing, put them in the LAR. That was a mess! Sometimes you'd flash and boot and only then find out the ELF files were no good.
Now there are only LAR files, and they're all the same thing: LAR header + name. All I did a while ago was move ELF *payload* parsing out of linuxbios and into lar, so we would never again find out, after having flashed a new BIOS, that the ELF file we flashed was invalid ... we turned a runtime check into a build-time check. In the process, we removed a significant source of error and made the startup far more efficient -- no need to create an intermediate copy of the data, no need for the horrible 'relocate my code' stuff from v2, and one whole copy removed from the startup path. This change was good for data that got moved to memory.
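To make "LAR header + name" concrete, here is a rough sketch of the kind of per-member layout I mean. The struct and field names below are illustrative only, not the actual v3 lar_header definition:

#include <stdint.h>

/* Illustrative only: a LAR member is a small fixed header, then the
 * member's name, then the data.  These field names are examples, not
 * the real v3 struct lar_header. */
struct lar_member_header {
	char     magic[8];     /* marks the start of a member */
	uint32_t len;          /* size of the data as stored */
	uint32_t reallen;      /* size of the data once expanded */
	uint32_t checksum;     /* lets the build verify the member */
	uint32_t offset;       /* where the data begins, past the name */
	uint64_t loadaddress;  /* where a code segment wants to live */
	uint64_t entry;        /* real entry point for ELF-derived members;
	                          the old binary blobs just said '0' here */
};
/* ... followed by the NUL-terminated member name, e.g.
 * "initram.o/segment0", then the data itself. */

The point is simply that once ELF parsing happens at build time, fields like entry and loadaddress can be filled in correctly instead of being guessed or left at 0.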
But the ELF parsing in LAR applied to payloads only. It did not apply to our own code segments, such as the initram binary blob. We still had a lot of weird processing for turning ELF files into binary blobs and then putting those into LAR, complete with a LAR header that was full of misinformation (like the entry point, which was '0'). Now why is this? It's historical. It's how we did it in v2. That's the only reason I can see.
There's no other reason for binary blobs that I can think of.
Result? We had two different ways of processing executable files, one in which we parsed ELF, and one in which we did not. In retrospect, that doesn't make a lot of sense.
So all this change is really saying is, "you have an ELF parser in LAR, and it made life better for payloads, so why not use it for your other .o stuff too and get rid of binary blob processing? Why not do everything the same way?"
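As a sketch of what the build-time ELF parsing amounts to (simplified, 32-bit only, and not the actual lar source -- emit_segments and write_lar_entry are made-up names here), the tool just walks the program headers and emits one LAR entry per loadable segment:

#include <elf.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the real archive writer: just report what would be written. */
static void write_lar_entry(const char *name, const uint8_t *data, size_t len,
			    uint64_t loadaddr, uint64_t entry)
{
	(void)data;
	printf("lar entry %s: %zu bytes, load 0x%llx, entry 0x%llx\n",
	       name, len, (unsigned long long)loadaddr,
	       (unsigned long long)entry);
}

/* Simplified: given an ELF image already in memory, emit one LAR entry
 * per PT_LOAD segment, taking load address and entry point straight
 * from the ELF headers instead of guessing. */
static int emit_segments(const char *name, const uint8_t *image)
{
	const Elf32_Ehdr *ehdr = (const Elf32_Ehdr *)image;
	const Elf32_Phdr *phdr = (const Elf32_Phdr *)(image + ehdr->e_phoff);
	int n = 0;

	for (int i = 0; i < ehdr->e_phnum; i++) {
		if (phdr[i].p_type != PT_LOAD)
			continue;
		char entryname[64];
		snprintf(entryname, sizeof(entryname), "%s/segment%d", name, n++);
		write_lar_entry(entryname, image + phdr[i].p_offset,
				phdr[i].p_filesz, phdr[i].p_paddr,
				ehdr->e_entry);
	}
	return n;
}

Because the entry point and load addresses come from the ELF headers at build time, nothing has to be fixed up or relocated at boot.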
For the binary blob, the extra cost is 0 for the header, and the name grew by the bytes in ".o/segment0", so about 11 bytes. There was a LAR header already for the binary blob; it's just that that header was totally wrong, because you can't find the entry point from a binary blob.
I am worried that this seems to be confusing people. From my point of view we did the following:
1. removed the objcopy -O binary parts (and remember, we've had trouble with even objcopy over the years)
2. made the processing for all LAR files identical -- always parse ELF and produce LAR entries
3. don't have to fight to make gcc order functions in a file correctly
4. don't have to add switches to lar to pass entry point information to it
5. don't have to figure out how to add a 'jmp to main' instruction at the front of the binary blob
So it seems to me we've reduced the number of variables in the equation. To me it's less complex. I hope it seems that way once you look at it more, but we'll see ...
Thanks. Remember, this is v3; it's not out yet, and nothing is cast in stone.
ron