Hi
As more and more x86 platforms move to C_ENVIRONMENT_BOOTBLOCK and therefore no longer use a romcc-compiled bootblock, a certain question arises. With the romcc bootblock there was a normal/fallback mechanism.
It works the following way: a bit in RTC CMOS selects between the normal and the fallback boot paths. Depending on that bit, the bootblock loads either normal/romstage or fallback/romstage, which in turn loads the postcar stage, ramstage and payload with the same prefix. There is also a reboot counter which makes sure the boot actually gets to the point where it can load the payload; depending on CONFIG_SKIP_MAX_REBOOT_CNT_CLEAR, that counter is reset.
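To illustrate, the selection boils down to something like this (a rough sketch only, not the actual coreboot code; the helper names and the exact CMOS byte layout are made up, though MAX_REBOOT_CNT is an existing Kconfig option):

    /*
     * Illustrative sketch: a normal/fallback bit plus a reboot counter in
     * RTC CMOS decide which prefix the bootblock uses. cmos_read_boot_byte()
     * and cmos_write_boot_byte() are hypothetical helpers.
     */
    static const char *select_boot_prefix(void)
    {
        unsigned char boot_byte = cmos_read_boot_byte();  /* hypothetical */
        int want_normal = boot_byte & 1;                  /* normal/fallback bit */
        int reboot_count = (boot_byte >> 4) & 0xf;        /* attempts so far */

        if (want_normal && reboot_count < CONFIG_MAX_REBOOT_CNT) {
            cmos_write_boot_byte(boot_byte + 0x10);       /* count this attempt */
            return "normal";      /* bootblock loads normal/romstage */
        }
        return "fallback";        /* bootblock loads fallback/romstage */
    }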
This mechanism is not very robust and is mostly intended for testing things without needing a hardware programmer to reflash the image in case something goes wrong. I use it, for instance, to test changes on laptops which take a long time to disassemble.
Currently C_ENVIRONMENT_BOOTBLOCK lacks such a generic mechanism on x86 platforms. At first sight it looks like VBOOT, with verstage running after the bootblock, might be able to achieve a similar boot scheme. VBOOT seems to lack documentation, and while it is not that hard to get working, it looks like it does not fall back when there is a problem on an RW_A/B boot path (I called die() somewhere in the ramstage to test). Also, the tools around vboot (crossystem) are quite ChromeOS-specific, requiring ChromeOS-specific ACPI code exposing the VBNV variables and also a Linux kernel exposing those ACPI methods via sysfs.
My understanding of VBOOT might be incorrect or incomplete, so it would be great if someone more knowledgeable could fill in here.
So at the moment it looks like VBOOT does not fit the bill for quickly testing things while still having a fallback mechanism.
Now, being able to run GCC-compiled code in the bootblock has the advantage of allowing much more flexibility than romcc-compiled code. So it would be possible to simply reimplement the same behavior with different prefixes for the boot paths, but it would also be possible to do something similar to what vboot does, namely use separate FMAP regions for the boot paths. This would require a simple cbfs_locator. Upstream flashrom master now supports using FMAP as a layout, so it would be rather easy to use.
Using FMAP requires a little bit more work (generating a proper default FMAP, populating the CBFS FMAP regions, implementing a cbfs_locator) but does allow for nice features like locking the fallback CBFS region to make sure the fallback can't be erased by accident.
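A locator for that could look roughly like the sketch below. fmap_locate_area_as_rdev() is the existing call; the region names NORMAL_CBFS/FALLBACK_CBFS and the select_normal_path() helper are hypothetical:

    #include <fmap.h>

    /*
     * Sketch of a cbfs_locator that picks the CBFS region by boot path.
     * select_normal_path() would e.g. read the CMOS bit and reboot counter.
     */
    static int locate_bootpath_cbfs(struct region_device *rdev)
    {
        const char *region = select_normal_path() ? "NORMAL_CBFS"
                                                  : "FALLBACK_CBFS";

        /* Returns 0 on success, non-zero if the region is not in the FMAP. */
        return fmap_locate_area_as_rdev(region, rdev);
    }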
Any thoughts or suggestions?
Kind regards
Arthur Heymans
On Tue, Jan 22, 2019 at 6:45 AM Arthur Heymans arthur@aheymans.xyz wrote:
Hi
As more and more x86 platforms move to C_ENVIRONMENT_BOOTBLOCK and therefore no longer use a romcc-compiled bootblock, a certain question arises. With the romcc bootblock there was a normal/fallback mechanism.
It works the following way: a bit in RTC CMOS selects between the normal and the fallback boot paths. Depending on that bit, the bootblock loads either normal/romstage or fallback/romstage, which in turn loads the postcar stage, ramstage and payload with the same prefix. There is also a reboot counter which makes sure the boot actually gets to the point where it can load the payload; depending on CONFIG_SKIP_MAX_REBOOT_CNT_CLEAR, that counter is reset.
This mechanism is not very robust and is mostly intended for testing things without needing a hardware programmer to reflash the image in case something goes wrong. I use it, for instance, to test changes on laptops which take a long time to disassemble.
Currently C_ENVIRONMENT_BOOTBLOCK lacks such a generic mechanism on x86 platforms. At first sight it looks like VBOOT, with verstage running after the bootblock, might be able to achieve a similar boot scheme. VBOOT seems to lack documentation, and while it is not that hard to get working, it looks like it does not fall back when there is a problem on an RW_A/B boot path (I called die() somewhere in the ramstage to test). Also, the tools around vboot (crossystem) are quite ChromeOS-specific, requiring ChromeOS-specific ACPI code exposing the VBNV variables and also a Linux kernel exposing those ACPI methods via sysfs.
My understanding of VBOOT might be incorrect or incomplete, so it would be great if someone more knowledgeable could fill in here.
There's a try count that I think defaults to 10. After 10 failed tries it should switch slots. There's also a 'set good firmware' notion which signals to the firmware that one was able to boot. This is picked up on the next reboot, and the information is passed through vboot's non-volatile storage. I can dig up specific pointers in the code, but you may not have tried enough times to see things switch.
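Roughly, the slot selection amounts to something like this (illustrative only, not the actual vboot code; the nv_* helpers are made-up names for accesses to the vboot non-volatile data):

    /* Illustrative sketch of the try-count / 'set good firmware' idea. */
    enum fw_slot { SLOT_A, SLOT_B };

    static enum fw_slot select_rw_slot(void)
    {
        enum fw_slot slot = nv_read_try_next();   /* slot we are trying */
        int tries = nv_read_try_count();          /* remaining attempts */

        if (nv_read_slot_marked_good(slot))
            return slot;                          /* 'set good firmware' seen */

        if (tries > 0) {
            nv_write_try_count(tries - 1);        /* burn one attempt */
            return slot;
        }

        /* Out of tries without ever being marked good: switch slots. */
        return slot == SLOT_A ? SLOT_B : SLOT_A;
    }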
That said, if you want to implement fallback stuff, it should be fairly straightforward to do -- and not rely on vboot.
So at the moment it looks like VBOOT does not fit the bill for quickly testing things while still having a fallback mechanism.
Now, being able to run GCC-compiled code in the bootblock has the advantage of allowing much more flexibility than romcc-compiled code. So it would be possible to simply reimplement the same behavior with different prefixes for the boot paths, but it would also be possible to do something similar to what vboot does, namely use separate FMAP regions for the boot paths. This would require a simple cbfs_locator. Upstream flashrom master now supports using FMAP as a layout, so it would be rather easy to use.
Using FMAP requires a little bit more work (generating a proper default FMAP, populating the CBFS FMAP regions, implementing a cbfs_locator) but does allow for nice features like locking the fallback CBFS region to make sure the fallback can't be erased by accident.
Any thoughts or suggestions?
FWIW, my opinion is that we'll need to start splitting CBFS into smaller ones. This isn't specific to this situation, but splitting slots into multiple CBFSes (rw-a-1, rw-a-2, etc.) allows one to chain/group resources as they are used, along with more flexibility for signing/verification methods. What you wrote above seems sane, but I think you'll run into build limitations that don't allow one to target FMAP regions for different assets. It's a lot of Make with some special casing currently, because people didn't want another tool at the time -- however, once you need more CBFS regions of different granularity, I think better tooling around targeting the final destination for assets is required.
Kind regards
Arthur Heymans
There's a try count that I think defaults to 10. After 10 failed tries it should switch slots. There's also a 'set good firmware' notion which signals to the firmware that one was able to boot. This is picked up on the next reboot, and the information is passed through vboot's non-volatile storage. I can dig up specific pointers in the code, but you may not have tried enough times to see things switch.
Try counters rely on userspace controlling them. You need to have the crossystem utility (part of vboot) available in userspace and everything hooked up correctly so that coreboot and crossystem read and write the same NVRAM (I think this sort of works by default with x86 CMOS; on Arm not so much). Then you can run something like "crossystem fw_try_next=B fw_try_count=1", and it should boot RW_B once and boot RW_A again on the next boot (assuming your hang happens after verstage, of course). (If you set fw_try_count=0, it will just keep booting the current slot forever. Note that there's no way to automatically fall back to recovery mode (RO) with this; that only happens when explicitly triggered by code.)
However, vboot isn't really trying to solve this kind of problem and I don't think we should try to bend it to be something else. I agree that it would be better to implement this separately, and that separate FMAP sections would be a cleaner implementation. (It would be nice if we could eventually retire CONFIG_CBFS_PREFIX then, the whole fallback/... thing in all our images is just confusing to most people.)
FWIW, my opinion is that we'll need to start splitting CBFS into smaller ones. This isn't specific to this situation, but splitting slots into multiple CBFSes (rw-a-1, rw-a-2, etc.) allows one to chain/group resources as they are used, along with more flexibility for signing/verification methods. What you wrote above seems sane, but I think you'll run into build limitations that don't allow one to target FMAP regions for different assets. It's a lot of Make with some special casing currently, because people didn't want another tool at the time -- however, once you need more CBFS regions of different granularity, I think better tooling around targeting the final destination for assets is required.
Are we abandoning the idea to verify individual files instead (once someone has time to implement it) then? I'd still think that would be a nicer solution to get more flexible verification.
On Tue, Jan 22, 2019 at 6:21 PM Julius Werner jwerner@chromium.org wrote:
FWIW, my opinion is that we'll need to start splitting CBFS into smaller ones. This isn't specific to this situation, but splitting slots into multiple CBFSes (rw-a-1, rw-a-2, etc.) allows one to chain/group resources as they are used, along with more flexibility for signing/verification methods. What you wrote above seems sane, but I think you'll run into build limitations that don't allow one to target FMAP regions for different assets. It's a lot of Make with some special casing currently, because people didn't want another tool at the time -- however, once you need more CBFS regions of different granularity, I think better tooling around targeting the final destination for assets is required.
Are we abandoning the idea to verify individual files instead (once someone has time to implement it) then? I'd still think that would be a nicer solution to get more flexible verification.
I'm just expressing my opinion on how I think we should move forward. I'm a little concerned about multiple things when it comes to doing per-file signature/hashing:
1. Time-of-check/time-of-use (TOCTOU) issues.
2. Passing metadata forward for validation.
3. Complexity.
For 1, this is attempting to protect against physical attack. Obviously this particular problem can't be solved in isolation, but it's something to think about. When discussing 2 as a practical matter, we need to pass the metadata information across stages to help mitigate 1 and ensure the integrity of the hashes. Similarly, limiting complexity is also important. If we can group together the assets that are tied to the boot flow, then it's conceptually easier to limit access to regions that haven't been checked yet or shouldn't be accessed. I do think per-file metadata hashing brings a lot of complications in implementation. In limited-resource environments, chunking CBFS into multiple regions lends itself well to accomplishing that, while also restricting access to data/regions that aren't needed yet, which helps with limiting #1.
-Aaron
For 1, this is attempting to protect against physical attack. Obviously this particular problem can't be solved in isolation, but it's something to think about.
But isn't this something that per-file hashing would probably make easier to protect against, not harder? I mean, right now we just hash the whole region once and then assume it stays good -- there is no protection. I doubt this would really become easier to solve if you split the CBFS up into chunks... you'd have to somehow build a system where a whole chunk is loaded into memory at once, verified there, and then every file we may want to access from it is accessed in that same stage from that cached version in memory.
The file is the natural unit which is loaded at a time, so I'd think scoping the verification to that would make it easiest to verify on load. I mean, on Arm we already always load whole files at a time anyway, so it's really just a matter of inserting the verification step between load and decompression on the buffer we already have. (On x86 it may be a bit different, but it should still be easier to find space to load a file at a time than to load a larger CBFS chunk at a time.)
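Sketched out, that ordering would look something like this (illustration only; cbfs_load_raw(), hash_buffer(), metadata_hash_for() and decompress_in_place() are made-up names, not existing coreboot functions):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /*
     * Per-file verification between load and decompression: load the raw
     * file, hash what was actually read, compare against the hash recorded
     * in the (already verified) CBFS metadata, and only then decompress.
     */
    static void *load_and_verify(const char *name, size_t *size)
    {
        void *raw = cbfs_load_raw(name, size);   /* read file data as stored */
        if (!raw)
            return NULL;

        uint8_t digest[DIGEST_SIZE];             /* DIGEST_SIZE: made up */
        hash_buffer(raw, *size, digest);         /* hash the loaded buffer */

        if (memcmp(digest, metadata_hash_for(name), DIGEST_SIZE))
            return NULL;                         /* reject before decompressing */

        return decompress_in_place(raw, size);
    }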
When discussing 2 as a practical matter, we need to pass the metadata information across stages to help mitigate 1 and ensure the integrity of the hashes.
This is true -- but isn't this the same for all solutions? No matter how you scope verification, if you want to verify things at load time then every stage needs to be able to run verification, and it needs some kind of trust anchor passed in from a previous stage for that. I also don't think this should be a huge hurdle... we're already passing vboot workbuffer metadata, this just means passing something more in there. (For per-file hashing, I'd assume you'd just pass the single hash covering all the metadata, and then all the metadata is verified again on every CBFS walk.)
Similarly, limiting complexity is also important. If we can group together the assets that are tied to the boot flow, then it's conceptually easier to limit access to regions that haven't been checked yet or shouldn't be accessed. I do think per-file metadata hashing brings a lot of complications in implementation. In limited-resource environments, chunking CBFS into multiple regions lends itself well to accomplishing that, while also restricting access to data/regions that aren't needed yet, which helps with limiting #1.
Fair enough. I agree limiting complexity is always important, I'm just not ad-hoc convinced that a solution like you describe would really be less complex than per-file hashing. I think that while it makes the CBFS access code itself a bit more complex, it would save complexity in other areas (e.g. if you have arbitrary chunks then something must decide which chunk to use at what time and which file to pack into which chunk, all of which is extra code). Assuming we can "get it right once", I think per-file hashing should be a solution that will "just work" for whatever future platform ports and general features want to put in CBFS (whereas a solution where everyone who wants to put another file into CBFS must understand the verification solution well enough to make an informed decision on which chunk to place something into may end up pushing more complexity onto more people).
Anyway, I didn't want to derail this thread into discussing CBFS verification, I just wanted to mention that I still think the per-file hashing is a good idea and worth discussing. We should have a larger discussion about the pros and cons of possible approaches before we decide what we're planning to do (and then someone still needs to find time to do it, of course ;) ).
Currently, with the one CBFS containing all files, it is easy and simple to access every file in every stage. Wouldn't this be harder if we chose to split the CBFS into several stand-alone CBFSes? Or, on the other hand, wouldn't we end up duplicating CBFS files just to have them around handy?
I have the case in mind where we need access to data other than stage-code like Siemens' HWInfo. We need to access this configuration data from different stages.
Werner
On Wed, Jan 23, 2019 at 11:17 PM Zeh, Werner werner.zeh@siemens.com wrote:
Currently, with the one CBFS containing all files, it is easy and simple to access every file in every stage. Wouldn't this be harder if we chose to split the CBFS into several stand-alone CBFSes? Or, on the other hand, wouldn't we end up duplicating CBFS files just to have them around handy?
I have the case in mind where we need access to data other than stage-code like Siemens' HWInfo. We need to access this configuration data from different stages.
Such a thing would need to be loaded once and maintained in-core, but that would be true regardless of the CBFS layout. However, if one doesn't care about protecting against physical attacks on the boot media, it's not really relevant. There's also more than one way to do things. For example, if we implemented namespacing that mapped to a CBFS region, then the call site would just work as-is with a name change, e.g. 'file1' -> 'ns/file1', where 'ns' would map to a specific region. That's more like a vfs implementation, but it's not too complicated to implement if we wanted to do such a thing.
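Something along these lines, as a hypothetical sketch (the table and ns_locate() are made up; fmap_locate_area_as_rdev() is the existing call for mapping an FMAP region to a region_device):

    #include <string.h>
    #include <fmap.h>

    /* A path prefix picks the FMAP region the lookup goes to, so call
       sites only change the file name. */
    struct cbfs_ns {
        const char *prefix;    /* e.g. "rw-a/" */
        const char *region;    /* e.g. "FW_MAIN_A" */
    };

    static const struct cbfs_ns ns_map[] = {
        { "ro/",   "COREBOOT"  },
        { "rw-a/", "FW_MAIN_A" },
    };

    static int ns_locate(const char *name, struct region_device *rdev,
                         const char **rest)
    {
        for (size_t i = 0; i < ARRAY_SIZE(ns_map); i++) {
            size_t len = strlen(ns_map[i].prefix);
            if (!strncmp(name, ns_map[i].prefix, len)) {
                *rest = name + len;   /* file name within that region */
                return fmap_locate_area_as_rdev(ns_map[i].region, rdev);
            }
        }
        return -1;   /* no prefix match: fall back to the default CBFS */
    }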
On Wed, Jan 23, 2019 at 4:00 PM Julius Werner jwerner@chromium.org wrote:
For 1, this is attempting to protect against physical attack. Obviously this particular problem can't be solved in isolation, but it's something to think about.
But isn't this something that per-file hashing would probably make easier to protect against, not harder? I mean, right now we just hash the whole region once and then assume it stays good -- there is no protection. I doubt this would really become easier to solve if you split the CBFS up into chunks... you'd have to somehow build a system where a whole chunk is loaded into memory at once, verified there, and then every file we may want to access from it is accessed in that same stage from that cached version in memory.
The file is the natural unit which is loaded at a time, so I'd think scoping the verification to that would make it easiest to verify on load. I mean, on Arm we already always load whole files at a time anyway, so it's really just a matter of inserting the verification step between load and decompression on the buffer we already have. (On x86 it may be a bit different, but it should still be easier to find space to load a file at a time than to load a larger CBFS chunk at a time.)
I don't believe the per-file approach necessarily makes things easier to protect. In fact, the re-walk with validation might make it easier to exploit (depending on the complexity of the implementation). For time-of-check/time-of-use scenarios the easier thing is to load data that will be used in-core, i.e. not going back to the boot media. Platform specifics with resource constraints would inherently leave these attacks open. Your suggestion of loading the file, verifying, then using it is valid, but my concern is all the re-walking of CBFS (comment below).
When discussing 2 as a practical matter, we need to pass the metadata information across stages to help mitigate 1 and ensure the integrity of the hashes.
This is true -- but isn't this the same for all solutions? No matter how you scope verification, if you want to verify things at load time then every stage needs to be able to run verification, and it needs some kind of trust anchor passed in from a previous stage for that. I also don't think this should be a huge hurdle... we're already passing vboot workbuffer metadata, this just means passing something more in there. (For per-file hashing, I'd assume you'd just pass the single hash covering all the metadata, and then all the metadata is verified again on every CBFS walk.)
What does that practically look like? Every time we have to re-walk, we have to re-verify the integrity of the metadata. Designing on the fly, to me that suggests we need to carry a copy of the metadata, including offset & size, after verifying it, and not go back to the boot media for it. That way it stays in-core. Otherwise, one has to walk all of the CBFS only to rewind and find the file again. There are variants of how big a span is covered by the metadata hash (i.e. how many entries), but one shouldn't rely upon the existing entries every time we walk; it should rely upon the previously verified and cached metadata. Assets and access patterns very much are a part of the puzzle. Some platforms have very few assets while others have quite a bit.
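In other words, something along these lines (layout and names made up for illustration):

    #include <stdint.h>
    #include <stddef.h>

    /*
     * Verified metadata cached in-core once per stage; later lookups
     * consult this cache instead of walking the boot media again.
     */
    struct verified_file {
        char name[32];       /* CBFS file name */
        uint32_t offset;     /* offset of the data within the region */
        uint32_t size;       /* size of the data */
        uint8_t hash[32];    /* per-file hash recorded at verification time */
    };

    /* MAX_VERIFIED_FILES is a made-up limit; filled after the metadata
       hash check passes. */
    static struct verified_file verified_cache[MAX_VERIFIED_FILES];
    static size_t verified_count;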
Similarly, limiting complexity is also important. If we can group together the assets that are tied to the boot flow, then it's conceptually easier to limit access to regions that haven't been checked yet or shouldn't be accessed. I do think per-file metadata hashing brings a lot of complications in implementation. In limited-resource environments, chunking CBFS into multiple regions lends itself well to accomplishing that, while also restricting access to data/regions that aren't needed yet, which helps with limiting #1.
Fair enough. I agree limiting complexity is always important, I'm just not ad-hoc convinced that a solution like you describe would really be less complex than per-file hashing. I think that while it makes the CBFS access code itself a bit more complex, it would save complexity in other areas (e.g. if you have arbitrary chunks then something must decide which chunk to use at what time and which file to pack into which chunk, all of which is extra code). Assuming we can "get it right once", I think per-file hashing should be a solution that will "just work" for whatever future platform ports and general features want to put in CBFS (whereas a solution where everyone who wants to put another file into CBFS must understand the verification solution well enough to make an informed decision on which chunk to place something into may end up pushing more complexity onto more people).
The decision on where files go is a static one made at build time. When the CBFS chunks follow the boot flow, the decision to switch to a new one follows those same boundaries. I agree, though, that one needs to understand their boot flow to make informed decisions on asset location.
Anyway, I didn't want to derail this thread into discussing CBFS verification, I just wanted to mention that I still think the per-file hashing is a good idea and worth discussing. We should have a larger discussion about the pros and cons of possible approaches before we decide what we're planning to do (and then someone still needs to find time to do it, of course ;) ).
Agreed.
What does that practically look like? Every time we have to re-walk, we have to re-verify the integrity of the metadata.
I mean, that is exactly what we're doing right now anyway (unless something significantly changed in the CBFS code since the last time I checked). For every single CBFS file access we start at the root and walk the chain until we find the file we're trying to load (and all of these are real SPI transfers; it's not cached anywhere). Hashing times are generally insignificant compared to SPI access times IIRC (especially since this should be a raw hash, not an RSA signature like we use in current vboot). Validating the hash means we have to walk the whole CBFS directory rather than stopping at the file when we find it, but since we generally don't really pay attention to CBFS file placement for performance right now, that's presumably not a big deal (it would only double the cost on average).
And there are certainly still ways to improve this with the right caching, and ways to do that in a safe way even if verification is involved, which we could explore if we wanted to. I'm just saying as a baseline that the cost of CBFS walks seems to have never bothered us in the past, so the comparable cost of reverifying metadata probably shouldn't be a major criterion to reject the per-file approach.
On Thu, Jan 24, 2019 at 6:24 PM Julius Werner jwerner@chromium.org wrote:
What does that practically look like? Every time we have to re-walk, we have to re-verify the integrity of the metadata.
I mean, that is exactly what we're doing right now anyway (unless something significantly changed in the CBFS code since the last time I checked). For every single CBFS file access we start at the root and walk the chain until we find the file we're trying to load (and all of these are real SPI transfers; it's not cached anywhere). Hashing times are generally insignificant compared to SPI access times IIRC (especially since this should be a raw hash, not an RSA signature like we use in current vboot). Validating the hash means we have to walk the whole CBFS directory rather than stopping at the file when we find it, but since we generally don't really pay attention to CBFS file placement for performance right now, that's presumably not a big deal (it would only double the cost on average).
And there are certainly still ways to improve this with the right caching, and ways to do that in a safe way even if verification is involved, which we could explore if we wanted to. I'm just saying as a baseline that the cost of CBFS walks seems to have never bothered us in the past, so the comparable cost of reverifying metadata probably shouldn't be a major criterion to reject the per-file approach.
I wasn't considering just the perf costs. I was including the notion of re-checking and re-visiting the boot media repeatedly with TOCTOU in mind. I think we're on the same page, though, with the two approaches on what both would entail. We collectively need to align on the pros/cons of both and ultimately decide on an approach.