The problem is that the loader code will not change: when the loader reads the type value, the result will differ depending on endianness. As an example, 'CODE' will read back as 0x45444F43 on a little-endian machine, and as 0x434F4445 on a big-endian machine. This causes obvious issues in the code. By dictating that all the header values be little-endian we avoid this problem.
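To make that concrete, here is a minimal sketch (not actual loader code; the names are made up) of what the loader would see:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The four bytes 'C' 'O' 'D' 'E' as they sit in the file. */
    uint8_t tag[4] = { 'C', 'O', 'D', 'E' };
    uint32_t type;

    memcpy(&type, tag, sizeof(type));

    /* Prints 0x45444f43 on a little-endian machine,
     * 0x434f4445 on a big-endian one. */
    printf("0x%08" PRIx32 "\n", type);
    return 0;
}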
Either the code will use u32 everywhere, or it will use u8[4] (or char[4]) everywhere. Either way it works fine; the second option is better, of course, since the binary will be more readable in a hexdump.
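For illustration, the two layouts might look like this (hypothetical field names, not the actual SELF header definition):

#include <stdint.h>

/* Option 1: the type as a 32-bit integer; its byte order in the
 * file depends on whatever endianness the format dictates. */
struct header_u32 {
    uint32_t type;
    uint32_t len;
};

/* Option 2: everything as individual bytes; "CODE" appears
 * literally in a hexdump on any machine. */
struct header_bytes {
    uint8_t type[4];   /* 'C' 'O' 'D' 'E' */
    uint8_t len[4];    /* length, stored little-endian byte by byte */
};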
You have to be careful in your tools, so that cross-builds work correctly.
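For example, the build tool can emit every header field byte by byte, so the endianness of the build host never enters into it; a sketch, where put_le32 is a made-up helper name:

#include <stdint.h>
#include <stdio.h>

/* Write a 32-bit value in little-endian order, regardless of the
 * endianness of the machine the build tool runs on. */
void put_le32(FILE *f, uint32_t v)
{
    fputc(v & 0xff, f);
    fputc((v >> 8) & 0xff, f);
    fputc((v >> 16) & 0xff, f);
    fputc((v >> 24) & 0xff, f);
}

The loader then does the mirror image of this when it reads the field back.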
That's true. But again, as I said, what concerns us is that endianness affects the order in which the bytes will be stored in the header.
That depends on how *exactly* you define the SELF format. Writing a *proper* specification is a lot of work, indeed!
The endianness is specified by the processor, not the software.
Not true. There are *many* processors that can run either little-endian or correct-endian: ARM, MIPS, PowerPC, ...
An x86 is little-endian, forevermore.
Yeah, poor x86.
And since v3 only works on an x86,
I hope to change that soon.
we could ignore it for now.
No, you cannot, if you're defining this new binary format. Yet another reason why you really shouldn't.
But we know that the problem exists, and we might as well account for it now. By specifying it as little-endian, we do put other architectures at a disadvantage. But none of those other architectures will be in play for a very long time. You have to play to your strengths.
???
What are you saying here?
Segher