Generally, if the new code is just plain better I'm all for upgrading. We should just make sure that it is really better from coreboot's perspective... the needs of a userspace decompression tool may be quite different from ours. In particular, we should make sure that memory requirements (that 16KB scratchpad) do not grow, that code size stays reasonable, and that it doesn't use any fancy CPU features we might not be able to support early (e.g. SSE... or does that work out of the box?).
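To make that concrete, here is a minimal sketch (all names and sizes are placeholders, not the actual commonlib interface, and it assumes a C11 toolchain for static_assert) of how we could pin the scratch requirement at build time so an upgraded SDK can't silently outgrow the budget:

/* Hypothetical example: fail the build if the new SDK's reported scratch
 * requirement no longer fits the early-boot budget mentioned above. */
#include <assert.h>   /* static_assert (C11) */
#include <stdint.h>

#define LZMA_SCRATCHPAD_BUDGET (16 * 1024)   /* current coreboot budget */
#define NEW_SDK_SCRATCH_SIZE   (15 * 1024)   /* placeholder: whatever the new SDK needs */

static_assert(NEW_SDK_SCRATCH_SIZE <= LZMA_SCRATCHPAD_BUDGET,
              "LZMA scratch buffer no longer fits the early-boot budget");

/* The scratch buffer itself would live in .bss -- no heap in early stages. */
static uint8_t lzma_scratchpad[LZMA_SCRATCHPAD_BUDGET];

void *lzma_get_scratchpad(void) { return lzma_scratchpad; }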
Note that for LZ4 we use the exact same decompression code (from src/commonlib/) in coreboot and in cbfstool. cbfstool always checks that a compressed file can be decompressed again after adding it, so by checking that with the very same code you can be pretty certain that you won't suddenly run into unfortunate surprises during boot. If we did the same thing with the LZMA code (currently we don't), switching out the implementation under the hood would become a lot less scary.
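Roughly what that round-trip check looks like, with hypothetical my_compress()/my_decompress() wrappers standing in for the real cbfstool and commonlib entry points (this is only a sketch of the idea, not the actual cbfstool code):

#include <stdlib.h>
#include <string.h>

/* Assumed wrappers around the host-side compressor and the shared
 * (commonlib) decompressor; names are illustrative only. */
extern size_t my_compress(const void *in, size_t in_len, void *out, size_t out_cap);
extern size_t my_decompress(const void *in, size_t in_len, void *out, size_t out_cap);

/* Returns 0 if the data survives a compress->decompress round trip. */
static int verify_roundtrip(const void *data, size_t len)
{
    size_t comp_cap = len + len / 16 + 64;   /* rough worst-case bound; real code
                                                should use the compressor's own bound */
    void *comp = malloc(comp_cap);
    void *back = malloc(len);
    int rc = -1;

    if (comp && back) {
        size_t comp_len = my_compress(data, len, comp, comp_cap);
        size_t back_len = my_decompress(comp, comp_len, back, len);
        if (back_len == len && memcmp(data, back, len) == 0)
            rc = 0;   /* exactly the same bytes came back out */
    }
    free(comp);
    free(back);
    return rc;
}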
On Thu, Jan 24, 2019 at 1:08 AM Ivan Ivanov qmastery16@gmail.com wrote:
Igor Pavlov (7z/LZMA SDK author) told me a few months ago that
Decompression speed for lzma: lzma sdk 18.03 C code - about +120% speed increase from 4.42. lzma sdk 18.03 asm-x64 code - about +180% speed increase from 4.42.
Although he didn't mention any compression ratio changes, maybe because they aren't big enough to matter, such a huge speed increase alone should be a good reason to upgrade these LZMA libraries.
Wed, 4 Apr 2018 at 11:19, Carl-Daniel Hailfinger c-d.hailfinger.devel.2006@gmx.net:
Hi Ivan,
On 03.04.2018 20:03, Ivan Ivanov wrote:
I have noticed that both coreboot and SeaBIOS are using very old versions of the LZMA SDK.
True. I introduced the lzma code in coreboot (back when it was called LinuxBIOS) when we were working on OLPC XO-1 support.
If we upgrade our LZMA libraries from the 12-years-outdated version 4.42 to the current version 18.04, speed and compression ratio should improve, and maybe a few bugs will be fixed.
Do you have any numbers for this? An improved compression ratio and improved speed would be nice indeed, but how does the size of the decompression code change? If the decompression code grows more than the size reduction from better compression, it would be a net loss. A significantly reduced decompression speed would also be a problem. Decompression speed would have to be measured both for stream decompression (i.e. the decompressor gets the compressed data in single-byte or multi-byte chunks) and for full-size decompression (i.e. the decompressor can access all compressed data at once). We also have to make sure that stream decompression still works after the change.
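Something along these lines would cover both measurements; decode_all(), decode_begin() and decode_chunk() are placeholder names for whichever one-shot and streaming entry points the new SDK provides, not the actual API:

#include <stdio.h>
#include <time.h>
#include <stddef.h>

extern size_t decode_all(const void *in, size_t in_len, void *out, size_t out_cap);
extern void  *decode_begin(void);
extern size_t decode_chunk(void *state, const void *in, size_t in_len,
                           void *out, size_t out_cap);

static double seconds(clock_t a, clock_t b) { return (double)(b - a) / CLOCKS_PER_SEC; }

static void bench(const void *comp, size_t comp_len, void *out, size_t out_cap)
{
    clock_t t0 = clock();
    decode_all(comp, comp_len, out, out_cap);   /* full-size: all input visible at once */
    clock_t t1 = clock();

    void *st = decode_begin();
    const size_t chunk = 64;                    /* small chunks to mimic streaming */
    size_t off = 0, produced = 0;
    while (off < comp_len) {
        size_t n = comp_len - off < chunk ? comp_len - off : chunk;
        produced += decode_chunk(st, (const char *)comp + off, n,
                                 (char *)out + produced, out_cap - produced);
        off += n;
    }
    clock_t t2 = clock();

    printf("full-size: %.3fs  stream: %.3fs\n", seconds(t0, t1), seconds(t1, t2));
}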
Do you think it should be done, or are you OK with using such an outdated version?
A size benefit for the resulting image is a good reason to switch.
Regards, Carl-Daniel