Julius Werner has posted comments on this change. ( https://review.coreboot.org/c/coreboot/+/38421 )
Change subject: commonlib/bsd: Add new CBFS core implementation
......................................................................
Patch Set 5:
(16 comments)
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/cbfs_priv... File src/commonlib/bsd/cbfs_private.c:
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/cbfs_priv... PS4, Line 10: cbfs_dev_size(dev)
Shouldn't we just call this once? The size of dev shouldn't change.
I mean... it's the cost of what's probably a small inline function, compared to the cost of a SPI read in the same loop. But okay, I can factor it out.
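Roughly what I'm going to do (just a sketch using the names from this patch):

  /* Read the device size once up front instead of calling cbfs_dev_size()
     on every loop iteration. */
  const size_t devsize = cbfs_dev_size(dev);
  size_t offset = 0;
  while (offset + sizeof(struct cbfs_file) <= devsize) {
          /* ... read one file header, hand it to walker(), advance offset ... */
  }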
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/cbfs_priv... PS4, Line 29: ENABLE_HASHING
This should probably be namespaced better.
Changed to CBFS_ENABLE_HASHING.
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/cbfs_priv... PS4, Line 54: goto next_file;
Shouldn't this be a major error? If we're wanting to ensure we touch every file this algo just bypas […]
I was just trying to continue where possible and skip over broken files rather than abort completely. This just checks that the metadata fits in my arbitrarily-defined CBFS_METADATA_MAX_SIZE. Eventually cbfstool should ensure it doesn't write headers larger than that, but it doesn't do that yet... and just in case someone somehow manages to write a file with a larger header, I think skipping over it makes more sense than aborting completely. In the hashing case you'll likely fail anyway, of course, but in the non-hashing case you might still find what you were looking for.
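For illustration, the skip is roughly this (sketch only, the exact field access and logging may differ):

  /* If the header claims more metadata than fits in CBFS_METADATA_MAX_SIZE,
     treat the file as broken and skip it rather than aborting the walk. */
  const uint32_t data_offset = be32toh(mdata.h.offset);
  if (data_offset > CBFS_METADATA_MAX_SIZE)
          goto next_file;   /* continue at the next aligned file header */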
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/cbfs_priv... PS4, Line 56: if (offset + data_offset >= cbfs_dev_size(dev)) {
Wouldn't this fail on the read()? But if you are wanting to check this shouldn't you be ensuring the […]
Yeah, I don't remember why I added this. Will take it out.
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/cbfs_priv... PS4, Line 77: (uint8_t *)&mdata
Shouldn't we be reading into the byte array here and above? Then interpret through the union for the […]
Good point, that's cleaner.
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/cbfs_priv... PS4, Line 88: For others, finish the hash first if needed.
Could you please elaborate on why?
To make the API more flexible, so that callers can use return codes for things that aren't catastrophic errors. For example, the mcache code can return CB_CBFS_CACHE_FULL out of its walker() function when it runs out of memory to cache things in. Then the function that called cbfs_walk() can recognize that the cache wasn't fully built, but it can still continue booting with a half-usable cache. However, if the hash didn't verify it still needs to know that, so CB_CBFS_HASH must have priority over the errors returned from walker(). And the only way to check if the hash verified is to walk all the way to the end.
For IO errors the assumption is that the cbfs_dev is busted anyway and any further reading would be pointless (and earlier parts of this function could have already returned that anyway). So the return code priority is CB_CBFS_IO -> CB_CBFS_HASH -> other errors/CB_SUCCESS -> CB_CBFS_NOT_FOUND. Clarified this in the header file comment.
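In other words, the tail end of cbfs_walk() ends up looking roughly like this (sketch; do_hash, hash_mismatch and walker_ret are just placeholder names for illustration):

  /* CB_CBFS_IO is returned directly from wherever a read failed, so it
     always wins. After the loop, a metadata hash mismatch comes next, and
     only then whatever walker() reported (or CB_CBFS_NOT_FOUND). */
  if (do_hash && hash_mismatch)
          return CB_CBFS_HASH;
  return walker_ret;   /* walker() error code, CB_SUCCESS, or CB_CBFS_NOT_FOUND */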
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/cbfs_priv... PS4, Line 102: sizeof(real_hash)
sorry for not being familiar w/ vb2 APIs, but it seems weird to pass in sizeof(real_hash) when it's […]
I can pass hash_size too, doesn't really make a difference.
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/cbfs_priv... PS4, Line 123:
There doesn't appear to be any checks against the size of the structures. […]
cbfs_walk() checks (data_offset > sizeof(mdata)) and (data_offset < attr_offset), so I think these cases should be handled? The idea is that cbfs_walk() will do these basic sanity checks on stuff it passes to walker() so we don't have to do it here anymore.
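Concretely, the checks in cbfs_walk() are roughly (sketch):

  /* Done before walker() is ever called, so callbacks don't have to repeat it: */
  if (data_offset > sizeof(mdata) ||               /* metadata fits the buffer */
      (attr_offset && data_offset < attr_offset))  /* attrs end before data    */
          goto next_file;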
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/include/c... File src/commonlib/bsd/include/commonlib/bsd/cbfs_private.h:
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/include/c... PS4, Line 61: header, filename and attributes
Shouldn't we be passing along the metadata size? Why is the callback needing the logic to figure out […]
You mean pass the size of the metadata that was already read, rather than a boolean? Yeah, actually, it looks like that would work out pretty well. Rewritten.
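Something along these lines (sketch of the reworked prototype; the exact parameter list in the next patch set may still differ):

  cb_err_t cbfs_walk(cbfs_dev_t dev,
                     cb_err_t (*walker)(cbfs_dev_t dev, size_t offset,
                                        const union cbfs_mdata *mdata,
                                        size_t already_read, void *arg),
                     void *arg, struct vb2_hash *metadata_hash);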
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/include/c... PS4, Line 69: * the remaining CBFS (but not call |walker| anymore).
This function seems a bit overloaded. […]
This is already supposed to be the underlying workhorse function; the wrappers are things like cbfs_lookup() and cbfs_mcache_build(). coreboot is never supposed to call into this directly (cbfstool might, or I might create more wrappers for those use cases when I get there; haven't looked into that part much yet).
I agree it's a bit complex but I tried to come up with something that can support all the use cases (normal lookup, mcache, cbfstool stuff) from one piece of core code. I think that will be better in the end than e.g. implementing the cbfstool stuff completely separately (because the hashing is tied through everything and it always needs to calculate it in exactly the same way).
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/include/c... PS4, Line 72: union cbfs_mdata *mdata
Why isn't this const?
Done
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/include/c... PS4, Line 80: to copy behind that
I don't understand what this is saying. […]
I meant "load additional metadata, which is the attributes". It's never splitting attributes: if do_hash is true, all attributes are already loaded; if it is false, no attributes have been loaded yet.
Changed this whole thing according to the suggestion above which I think resolves this confusion as well (you just tell it exactly how many bytes are already valid in |src| now).
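A walker that needs the full metadata can then do roughly this (sketch; the helper name is made up for illustration):

  /* Reuse the |already_read| bytes cbfs_walk() has in |src| and fetch the
     remaining metadata (i.e. the attributes) straight from the device. */
  static cb_err_t fill_remaining_metadata(cbfs_dev_t dev, size_t offset,
                                          const union cbfs_mdata *src,
                                          size_t already_read,
                                          union cbfs_mdata *dst)
  {
          const size_t msize = be32toh(src->h.offset);  /* total metadata size */
          memcpy(dst, src, already_read);
          if (already_read < msize) {
                  const size_t left = msize - already_read;
                  if (cbfs_dev_read(dev, (uint8_t *)dst + already_read,
                                    offset + already_read, left) != left)
                          return CB_CBFS_IO;
          }
          return CB_SUCCESS;
  }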
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/include/c... PS4, Line 83: union cbfs_mdata *src
const
Done
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/include/c... PS4, Line 87: bit shorter than this
'This' being the size returned?
Yes, but I decided to remove this function anyway
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/include/c... PS4, Line 91: offset to the file data
relative or absolute within |dev| ? […]
Not sure what you mean, relative to what? This is absolute. (Absolute within the cbfs_dev_t, that is, which may of course be relative to the whole SPI flash -- but this API doesn't have knowledge of that layer.)
"master_hash" is always supposed to be the metadata hash (I'll make sure to call the individual file hashes something else when I add that part). The point of this function is to return the (verified) metadata and data offset for a given file. The file data itself is not touched in any way, loading and verifying the file will be up to the caller.
Rewrote a bit, let me know if it's clearer now.
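To illustrate the "absolute" part, a caller would end up doing roughly this (sketch, not part of this CL; the file name and prototype details are just examples):

  static cb_err_t load_my_file(cbfs_dev_t dev, struct vb2_hash *metadata_hash,
                               void *buffer, size_t buffer_size)
  {
          union cbfs_mdata mdata;
          size_t data_offset;
          cb_err_t err = cbfs_lookup(dev, "fallback/payload", &mdata,
                                     &data_offset, metadata_hash);
          if (err != CB_SUCCESS)
                  return err;
          /* |data_offset| is absolute within |dev| -- reading, verifying and
             decompressing the file data itself is up to the caller. */
          const size_t size = be32toh(mdata.h.len);
          if (size > buffer_size ||
              cbfs_dev_read(dev, buffer, data_offset, size) != (ssize_t)size)
                  return CB_CBFS_IO;
          return CB_SUCCESS;
  }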
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/include/c... File src/commonlib/bsd/include/commonlib/bsd/cbfs_serialized.h:
https://review.coreboot.org/c/coreboot/+/38421/4/src/commonlib/bsd/include/c... PS4, Line 131: struct vb2_hash hash;
Yes, this struct is supposed to have a somewhat dynamic length. The 'bytes' union shall only be valid up to the length of the current hash_type. I wanted something that can be easily allocated on the stack (then it would just be the length of the longest compiled-in hash) but is also compatible with the existing serialized attribute format (so you can just pass a pointer into a metadata blob to vboot without having to copy it out). cbfstool would have to make sure to write out the right number of bytes for the hash_type when serializing it, and coreboot functions using the raw attribute data should sanity-check that the attribute size fits vb2_digest_size(hash.algo) before passing it around.
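e.g. something like this on the consumer side (sketch; the exact field and size names might end up different):

  /* The 'bytes' union is only meaningful up to the digest length of the
     current algorithm, so check that the serialized attribute covers it.
     (The 4 accounts for the algo byte plus the 3 reserved bytes that
     precede the digest.) */
  const size_t digest_size = vb2_digest_size(hash->algo);
  if (!digest_size || hash_attr_data_size < 4 + digest_size)
          return false;   /* unknown algorithm, or attribute truncated */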
The other thing that you've done here is changed the serialized format by changing hash_type from a 32-bit value to a single byte. Shouldn't we add a new type and tag so things can be backwards compatible?
Oh crap, I'm a moron... this was meant to be backwards compatible, but I just realized I screwed it up. The idea was to change the 4 bytes into 1 byte plus 3 reserved bytes so that it still matches the old bit representation but isn't going to cause any endianness problems going forward. But of course the correct way to do that for a 4-byte big-endian field where we've only ever used the lowest byte is
uint8_t reserved[3];
uint8_t algo;
not the way I did it. I uploaded CL:2019952 to fix that in vboot.
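To spell out the byte layout (sketch):

  /* Old serialized form: a 4-byte big-endian hash_type of which only the
     lowest byte was ever used, i.e. the bytes on flash are 00 00 00 <type>. */
  uint32_t hash_type;   /* big endian */

  /* Backwards-compatible replacement (what CL:2019952 does): keep the three
     high bytes first so the algorithm still lands on the same flash byte. */
  uint8_t reserved[3];
  uint8_t algo;         /* enum vb2_hash_algorithm */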