On Wed, Jan 23, 2019 at 11:17 PM Zeh, Werner <werner.zeh(a)siemens.com> wrote:
Currently, with the one CBFS containing all files, it is simple to
access every file in every stage.
Wouldn't this be harder if we chose to split the CBFS into several
regions? Or, on the other hand, wouldn't we end up duplicating CBFS
files just to have them around handy?
I have in mind the case where we need access to data other than
stage code, like Siemens' HWInfo.
We need to access this configuration data from different stages.
Such a thing would need to be loaded once and maintained in-core, but that
would be true regardless of cbfs layout. However, if one doesn't care about
protecting against physical attacks on boot media, it's not really
relevant. There's also more than one way to do things. For example, if we
implemented namespacing that mapped to a cbfs region, then the call site
would also just work as-is with a name change, e.g. 'file1' -> 'ns/file1',
where 'ns' would map to a specific region. That's more like a vfs
implementation, but it's not too complicated to implement if we wanted to
do such a thing.
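To make the namespacing idea concrete, here is a minimal sketch of how a
name prefix could select a CBFS region. All names, the mapping table, and
the lookup helper are hypothetical illustrations, not actual coreboot API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical table mapping a namespace prefix to an FMAP region
 * that holds its own CBFS. */
struct ns_map {
	const char *prefix;   /* e.g. "ro" */
	const char *region;   /* FMAP region holding that CBFS */
};

static const struct ns_map maps[] = {
	{ "ro", "COREBOOT" },
	{ "rw", "FW_MAIN_A" },
};

/* Split a name like "ns/file1" into a region and a bare file name.
 * Falls back to a default region when no prefix matches, so existing
 * call sites keep working unchanged. */
static const char *resolve_region(const char *name, const char **file)
{
	const char *slash = strchr(name, '/');
	if (slash) {
		for (size_t i = 0; i < sizeof(maps) / sizeof(maps[0]); i++) {
			size_t len = strlen(maps[i].prefix);
			if ((size_t)(slash - name) == len &&
			    !strncmp(name, maps[i].prefix, len)) {
				*file = slash + 1;
				return maps[i].region;
			}
		}
	}
	*file = name;
	return "COREBOOT";  /* default region for un-namespaced names */
}
```

The point being that only the name parsing is new; the rest of the file
lookup would stay the same per region.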
From: Julius Werner [mailto:email@example.com]
Sent: Thursday, January 24, 2019 00:01
To: Aaron Durbin
Cc: Julius Werner; Arthur Heymans; Coreboot
Subject: [coreboot] Re: Fallback mechanisms on x86 with
> For 1, this is attempting to protect against physical attack. Obviously
> this particular problem can't be solved in isolation, but it's something
> to think about.
But isn't this something that per-file hashing would probably make
easier to protect against, not harder? I mean, right now we just hash the
whole region once and then assume it stays good -- there is no
protection after that. I doubt this would really become easier to solve
if you split CBFS up into chunks... you'd have to somehow build a system
where a whole chunk is loaded into memory at once, verified there, and
then every file we may want to access from it is accessed in that same
stage from that cached version in memory.
The file is the natural unit which is loaded at a time, so I'd think
tying the verification to that would make it easiest to verify on load. I
mean, on Arm we already always load whole files at a time anyway, so
it's really just a matter of inserting the verification step between load
and decompression on the buffer we already have.
On x86 it may be a bit different, but it should still be easier to find
space to load a file at a time than to load a larger CBFS chunk at once.
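A rough sketch of that insertion point -- hash the raw, still-compressed
buffer right after load and refuse to decompress on mismatch. The toy
FNV-1a digest and the function names are placeholders (a real
implementation would use e.g. SHA-256 via vboot), not actual coreboot code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for a real cryptographic digest (FNV-1a). */
static uint32_t toy_hash(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t h = 2166136261u;
	while (len--)
		h = (h ^ *p++) * 16777619u;
	return h;
}

/* Verify a just-loaded file buffer against the hash stored in its
 * metadata before handing it to the decompressor.
 * Returns 0 on success, -1 on mismatch. */
static int cbfs_verify_then_decompress(const void *raw, size_t raw_len,
				       uint32_t expected_hash)
{
	if (toy_hash(raw, raw_len) != expected_hash)
		return -1;   /* tampered: refuse to decompress */
	/* decompress(raw, raw_len, out); would go here */
	return 0;
}
```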
> When discussing 2 from a practical matter, we need to pass on the
> information across stages to help mitigate 1 and ensure the integrity
> of the hashes.
This is true -- but isn't this the same for all solutions? No matter how
you scope verification, if you want to verify things at load time then
every stage needs to be able to run verification, and it needs some kind
of trust anchor passed in from a previous stage for that. I don't
think this should be a huge hurdle... we're already passing vboot
workbuffer metadata, this just means passing something more.
(For per-file hashing, I'd assume you'd just pass the single hash
covering all the metadata, and then all the metadata is verified again on
every CBFS walk.)
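A minimal sketch of that last idea: the anchor handed across stages is one
digest over all the file headers, and each stage re-hashes the metadata
during its walk and compares. The header layout and digest are simplified
illustrations, not the real CBFS format:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for a CBFS file header. */
struct cbfs_meta {
	char name[16];
	uint32_t data_hash;   /* per-file hash of the file contents */
};

/* Toy digest (FNV-1a); a real design would use a cryptographic hash. */
static uint32_t toy_hash(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t h = 2166136261u;
	while (len--)
		h = (h ^ *p++) * 16777619u;
	return h;
}

/* Re-hash all file headers during a CBFS walk and check against the
 * root hash passed in from the previous stage. Any tampering with a
 * header (including its per-file hash) breaks the root hash. */
static int cbfs_walk_verified(const struct cbfs_meta *meta, size_t count,
			      uint32_t root_hash)
{
	return toy_hash(meta, count * sizeof(*meta)) == root_hash ? 0 : -1;
}
```

With this shape, individual file contents are then checked against their
per-file hash at load time, as described above.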
> Similarly, limiting complexity is also important. If we can group the
> files together that are tied to the boot flow then it's conceptually
> easier to limit access to regions that haven't been checked yet or
> shouldn't be accessed. I do think per-file metadata hashing brings a
> lot of complications in implementation. When in limited environments,
> chunking cbfs into multiple regions lends itself well to accomplishing
> that while also restricting access to data/regions that aren't needed
> yet when thinking about limiting #1.
Fair enough. I agree limiting complexity is always important, I'm just
not convinced that a solution like you describe would really be
less complex than per-file hashing. I think that while per-file hashing
makes the CBFS access code itself a bit more complex, it would save
complexity in other areas (e.g. if you have arbitrary chunks then
something must decide which chunk to use at what time and which file to
pack into which chunk, all of which is extra code).
Assuming we can "get it right once", I think per-file hashing should be a
solution that will "just work" for whatever future platform ports and
general features want to put in CBFS (whereas a solution where everyone
who wants to put a file into CBFS must understand the verification
solution well enough to make an informed decision on which chunk to place
something in may end up pushing more complexity onto more people).
Anyway, I didn't want to derail this thread into discussing CBFS
verification. I just wanted to mention that I still think the per-file
hashing is a good idea and worth discussing. We should have a
larger discussion about the pros and cons of possible approaches before
we decide what we're planning to do (and then someone still needs to find
time to do it, of course ;) ).
coreboot mailing list -- coreboot(a)coreboot.org