On 10/04/12 18:41, Blue Swirl wrote:
> Yes, but again, image size savings are not very interesting. Savings
> in image loading time are easily lost during decompression.
Well, the use case I'm considering at the moment is that if you build
OpenBIOS SPARC32 with -O0 -g and then try to load the result into QEMU,
QEMU reports that the image is too large. So I'm guessing this is a
limitation of the machine's memory layout rather than of the space
occupied by the binary itself.
>> Secondly, if we're copying the data to an OFMEM-allocated area then why
>> don't we just compress it at build time and then decompress it during the
>> copy using DEFLATE (see my previous email)? Then we can further reduce the
>> dictionary payload size from ~180K to ~37K, although as you rightly
>> point out there could be a small delay on startup. Given the small size
>> involved (and the fact that we can lock the TLB entry as we do with the
>> Forth machine), I don't think the penalty will be too bad.
>
> But only the decompressed result matters wrt memory usage, doesn't it?
> After Forth has started, there shouldn't be any difference
> in RAM use, except that the decompressor code takes more space.
Again, for the use case above, this would not be a problem if we were to
decompress and relocate the dictionary into RAM.
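
To make that a bit more concrete, below is a rough sketch (in C, like the
rest of OpenBIOS) of what the load-time side could look like. It assumes
puff.c from zlib's contrib directory has been imported, that something like
ofmem_malloc() is the right OFMEM entry point, and that the
_compressed_dict_* symbols and size bound stand in for whatever the build
system would actually emit:

/* Load-time sketch only: inflate a raw-DEFLATE dictionary blob into an
 * OFMEM-allocated buffer. puff() is zlib's contrib/puff decoder; the
 * _compressed_dict_* symbols, DICT_MAX_UNCOMPRESSED and ofmem_malloc()
 * are assumptions standing in for whatever we actually wire up. */
#include <stddef.h>
#include "puff.h"

extern unsigned char _compressed_dict_start[];  /* emitted by the build */
extern unsigned char _compressed_dict_end[];
extern void *ofmem_malloc(size_t size);         /* assumed OFMEM allocator */

#define DICT_MAX_UNCOMPRESSED (256 * 1024)      /* placeholder upper bound */

static unsigned char *load_dictionary(unsigned long *dict_size)
{
    unsigned long destlen = DICT_MAX_UNCOMPRESSED;
    unsigned long srclen = _compressed_dict_end - _compressed_dict_start;
    unsigned char *dict = ofmem_malloc(destlen);

    if (!dict)
        return NULL;

    /* puff() returns 0 on success; anything else means a corrupt stream */
    if (puff(dict, &destlen, _compressed_dict_start, &srclen) != 0)
        return NULL;

    *dict_size = destlen;   /* actual decompressed size */
    return dict;
}

The allocated area could then be covered by a locked TLB entry exactly as
we already do for the Forth machine, so the startup cost is really just one
puff() pass over ~37K of input.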
>> I'm fairly confident I can come up with an experimental patch for this -
>> would people mind if we added zlib as a build-time dependency, and puff.c to
>> the OpenBIOS codebase? At the very least, if the decompression appears too
>> expensive, the first stage on its own would still be a good idea.
>
> For maximum compression, a bzImage-style approach (still remember
> those?) could be used for the whole image.
I can't say I've ever looked at the bzImage approach, but if you're happy
to at least consider a patch then I'd like to invest some time in it.
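
For reference, the build-time side could be a small host tool along these
lines (again just a sketch, assuming the tool is linked against zlib; the
negative windowBits is what makes it emit a raw DEFLATE stream that the
header-less puff() decoder can read, and how the output blob gets linked
into the image is left open):

/* Build-time sketch: compress the dictionary into a raw DEFLATE stream
 * (no zlib/gzip header) so that puff() can decode it at boot. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <dict.in> <dict.z>\n", argv[0]);
        return 1;
    }

    FILE *in = fopen(argv[1], "rb");
    if (!in)
        return 1;
    fseek(in, 0, SEEK_END);
    long insize = ftell(in);
    rewind(in);

    unsigned char *src = malloc(insize);
    if (!src || fread(src, 1, insize, in) != (size_t)insize)
        return 1;
    fclose(in);

    z_stream s;
    memset(&s, 0, sizeof(s));
    /* negative windowBits => raw DEFLATE, which is what puff() expects */
    if (deflateInit2(&s, Z_BEST_COMPRESSION, Z_DEFLATED, -MAX_WBITS, 8,
                     Z_DEFAULT_STRATEGY) != Z_OK)
        return 1;

    unsigned long outsize = deflateBound(&s, insize);
    unsigned char *dst = malloc(outsize);
    if (!dst)
        return 1;

    s.next_in = src;
    s.avail_in = insize;
    s.next_out = dst;
    s.avail_out = outsize;
    if (deflate(&s, Z_FINISH) != Z_STREAM_END)
        return 1;

    FILE *out = fopen(argv[2], "wb");
    if (!out || fwrite(dst, 1, s.total_out, out) != s.total_out)
        return 1;
    fclose(out);
    deflateEnd(&s);
    return 0;
}

That should keep the in-image decoder down to puff.c's very small footprint
while the dictionary payload shrinks to roughly the ~37K mentioned above.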
ATB,
Mark.