Hi Brian,
On Wed, May 3, 2023 at 9:27 AM Brian Milliron <Brian.Milliron@foresite.com> wrote:
Sorry all, the formatting came out badly. I'm forced to use Outlook for work and it doesn't do quoting or inline well. I'm resending with better formatting.
It probably makes sense to switch to direct messages going forward to avoid spamming coreboot.org with these details further. So, moving everybody else to Bcc.
I think the community is interested in these discussions and it is not spam, so I'll continue forwarding to the list unless this is all Top Secret or something.
No, I don't plan to share any top secret details :) Just don't like spamming wide groups of people with niche technical details.
Thanks for the reply Andrey and thank you Ron for forwarding the message. So, only the bootloader portion is not open source, but everything else is? How big is the bootloader portion in either bytes or lines of code?
Up to 16K.
That's a bigger black box than I'm comfortable with. Why can't it be open sourced so people can see what this code is doing?
The chip is used in other Google products, external and internal. The "application logic" stage, i.e. cr50, is a very specific firmware that implements the logic needed by ChromeOS, and its sources are open as part of the ChromeOS project. The bootloader logic is (mostly) shared between several of those Google products; it is not specific to ChromeOS, and the decision to open the hardware details and the details of that stage is not controlled by ChromeOS alone. The code and hardware details are closed by design as part of the security story for those products. Closing the source becomes even more important in the newer version of the chip, where this stage is even larger and contains crypto primitives used by our "application logic" stage.
OpenTitan, already mentioned in the thread, went a different route and opened as much as possible. But that was a conscious decision and one of the main differentiators of that project. This choice affected and affects its entire lifecycle, including its security and certification stories and design decisions. Though we generally move towards more open secure products, it is not yet a typical approach even for modern security chips/engines, since it is harder to implement and maintain. And it definitely wasn't typical in 2016.
I don't think anyone here is eager to have another Intel ME situation, where there is a closed-source chip which can't be disabled, doing mysterious things no one can know about.
1) That question is probably better suited for a discussion somewhere on chromium.org (or some other ChromeOS/chromebook-related forum) than coreboot.org: in any case, coreboot or any BIOS has limited ability to control what previous boot stages and other device controllers do, and what debug/update/verification mechanisms they employ. It's always possible to work around any boot-chain verification rooted in coreboot if coreboot itself can be replaced. But that's a question for the design of the product that uses coreboot, not the design of coreboot itself. E.g., using a different secure enclave/storage instead of the Google security chip for verified-boot attestation, anti-rollback information, or device policies doesn't affect how easily coreboot can be replaced.
2) I'm also a bit curious about the context for these questions. Google security chips have been used by chromebooks since 2016. Are you evaluating the ability to use chromebooks for your use case, and are you concerned about the presence of the only partially open Google security chip there? All valid questions, but similarly it seems like they should be targeted at ChromeOS, not coreboot.
3) In my opinion, a better comparison is with TPM chips, which also run closed-source firmware and don't disclose hardware details. Not an ideal situation, but that's a pretty typical compromise between openness and practical security/certification considerations at this time.
My understanding of the chip is that there is no networking capability. It has no internal NIC and isn't wired into the main PCI bus, so it can't communicate with the NIC on the mainboard.
Correct.
This is important for security, as we don't want the risk of anyone remotely flashing the BIOS with malware, for example.
Are you talking about chromebooks? You won't be able to use a Google security chip with cr50 on your own product that is not a chromebook (or, in some rare cases, some other Google device that uses ChromeOS).
Whatever hardware that ships with it. Chromebooks certainly. I'm not interested in any custom hardware at the moment.
However, I want to be sure my understanding is correct, because the documentation here https://chromium.googlesource.com/chromiumos/platform/ec/+/cr50_stab/docs/ca... lists a capability of "OpenNoLongPP IfOpened Allow opening GSC without physical presence". Opening the GSC without physical presence implies some remote operation capability.
The case-closed debugging capabilities that you are looking at in that doc are accessible (after proper authorization) over a USB-C port, which is directly attached to the Google security chip on chromebooks. "Opening GSC without physical presence" in this context refers to the "CCD Open" procedure described in that doc. The owner can change the configuration, including what authorization is needed to change those policies.
So if the only way to access the CCD is through physical presence, why is there a setting to disable the physical presence proof?
The only way to access CCD is through physical access to the USB port, regardless of the setting you mention above. However, besides software attacks from the AP side, which are eliminated by requiring a physical connection that the main SoC doesn't have, there are other types of attacks: e.g. malicious USB appliances that the user is tricked into inserting into their USB port. Requiring physical access to the USB port to use CCD doesn't stop those.

For this reason, the default configuration of CCD doesn't allow you to do things like flashing coreboot, or even turning off write protect for the flash area that contains coreboot RO, with just physical access to the USB port. To get access to such features, a user first needs to "open" or "unlock" CCD. And opening CCD requires additional authorization; the authorization requirements depend on the type of device ownership. This is what this setting is about.
Without going into too many options: the device owner can completely disable "opening" CCD, or require going through dev mode and proving physical presence through a mechanism separate from USB (the power button) to prevent attacks through the local USB vector. However, going through a lengthy dance of pressing the power button at the required moments every time they need to open CCD may be inconvenient for device owners who do it frequently (typically developers who want to flash coreboot and other firmware through it). So, there's an option to disable the extra physical-presence requirements for advanced users who "know what they are doing" in their lab setting. By default, CCD opening policies do require a physical-presence check using the power button, and turning this policy off is possible only after "opening" CCD once through the full default procedure.
Is there any way for the end user to verify the loaded CR50 firmware is correct and hasn't been tampered with?
The bootloader performs firmware verification on every Google security chip reset and won't boot a firmware image that has been tampered with. And after it is verified, the active firmware image is protected from modification at runtime. So, if it runs, it is not tampered with.

And if you care about physical attacks in your model, you can establish a secure comm channel to the cr50 firmware and verify that you are not talking to an interposer the usual way you would do it with a TPM: read and verify the EK certificate, use ActivateCredential with the EK to certify a restricted TPM signing key, then, for example, use Certify with that restricted key to ensure that a salting key that you created for your auth session is actually a TPM key, and encrypt your session salt with it.
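For illustration, that interposer check looks roughly like this, sketched with tpm2-tools command names (the command names are real tpm2-tools commands, but all flags are elided and the exact invocations depend on your setup):

```text
tpm2_getekcertificate   # 1. read the EK certificate from NV storage,
                        #    then verify it against the manufacturer CA
tpm2_createek           # 2. recreate the EK from its template
tpm2_createak           # 3. create a restricted signing key (AK)
tpm2_makecredential     # 4. build a credential challenge for the AK...
tpm2_activatecredential #    ...and prove it lives in the same TPM as the EK
tpm2_create             # 5. create a decrypt key to use as the session salting key
tpm2_certify            # 6. certify the salting key with the restricted AK
tpm2_startauthsession   # 7. start an auth session, encrypting the session salt
                        #    to the certified salting key
```

If every step checks out, the salted session terminates in the genuine chip behind the verified EK certificate, not in an interposer.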
So the chip verifies itself. That's all well and good, but I'd really like to be able to verify it myself, manually: something like deterministic builds which I can compile on my local system and then verify that what's running on the chip matches what's in the source repository, or at the very least some kind of checksum that matches a checksum Google has published.
Note that requesting even an attested (signed) checksum for the active firmware from the chip itself won't help with what you want to achieve. The active firmware is stored on the internal flash, and there's no path (for update or verification purposes) that bypasses the currently running firmware and can modify/examine the internal flash directly. So, without a DICE-like attestation rooted in something on the chip that you can trust (and besides cr50 you have just the closed-source bootloader and boot ROM), to trust a self-reported checksum you already have to trust the running firmware.
There's no independent device root key, available only to the previous boot stage and with a certificate you can independently check, that could certify the checksum of the running cr50 firmware. However, as I said above, the fact that cr50 is running means that the previous boot stage did perform verification, so you can trust the self-reported version that cr50 provides to the same degree that you trust the previous stage.
Without that, all that is left is verifying the update images before performing the update (though they would be rejected during the update anyway if they are not signed correctly, attempt rollback, or don't match the device). For that part: you can do that already. You can do your own build and compare it with the cr50 update image which comes inside the ChromeOS image (except for the RO part). The build is only mostly deterministic, though, as it contains the version information, including the time of build, so the comparison will be a bit more involved. ChromeOS also provides a way to establish "known good" checksums for the archived official update images, though it doesn't publish them in an advertised location. You can find them in https://chromium.googlesource.com/chromiumos/overlays/chromiumos-overlay/+/r..., though that contains the checksum only for the latest update; look at the history for earlier versions. One can verify the archive checksum against the manifest, then retrieve the update image from the verified archive and store its calculated checksum in a list of "known good" values. Google doesn't do that, since we rely on the on-chip verification mechanism to prevent updating to arbitrary firmware.
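The comparison step itself is just a checksum match. A minimal shell sketch (the file name and "known good" value are placeholders: in practice the image is the cr50 update image extracted from a ChromeOS image, and the known-good value comes from the overlay manifest; here an empty stand-in file with a well-known SHA-256 lets the sketch run end to end):

```shell
# Placeholders standing in for real artifacts:
IMAGE=./cr50.bin.prod
: > "$IMAGE"   # empty stand-in file; its SHA-256 is well known
KNOWN_GOOD=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

# Compute the checksum of the (locally built or extracted) image
# and compare it against the recorded known-good value.
ACTUAL=$(sha256sum "$IMAGE" | awk '{print $1}')
if [ "$ACTUAL" = "$KNOWN_GOOD" ]; then
    echo "checksum OK"        # the stand-in matches by construction
else
    echo "checksum MISMATCH: $ACTUAL"
fi
```

Remember that, per the above, a locally built image will differ in the embedded version/build-time fields, so a real comparison needs to mask those out first.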
If the AP or EC firmware is overwritten, does that clear/reset the TPM registers?
Sorry, I didn't quite understand the question. If you overwrite AP/EC firmware, you can trigger a TPM owner clear, but that depends more on what the AP firmware does with the TPM. Cr50 (mostly) follows what a TPM would do to handle resets and the TPM2_Startup and TPM2_Clear commands, where the latter is available with lockout/platform auth. There is some additional protection for certain configuration data even across an owner clear, but overall the owner-clear flow is very similar to what a spec-compliant TPM would use. Opening CCD also usually clears the TPM state.
My question was regarding secure state. If I have some keys stored in TPM registers which are required to unlock the encrypted hard drive, I want to unlock only if the computer is in a verified secure state. If someone has flashed the EC or AP firmware without my knowing about it, that would no longer be a secure state, and I would hope the CR50 would force a reset of the TPM if the firmware has been modified, so I don't try to boot a computer in an insecure state and possibly expose passwords or other encryption keys. So when you say opening the CCD clears the TPM state, does that clear the TPM registers that are used for verified boot and/or store HD encryption keys?
With the default CCD policies, "opening" CCD does clear the TPM state. Flashing-coreboot-over-CCD itself doesn't clear the TPM state. There is a separate mechanism in the newer versions of the chip that prevents booting of the device (but doesn't affect the TPM state) if the AP-side root of trust for verified boot didn't pass verification.
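The pattern you describe, releasing the disk key only when the measured firmware matches, is essentially standard PCR sealing. Roughly, sketched with tpm2-tools command names (the names are real tpm2-tools commands, flags are elided, and ChromeOS's actual disk-encryption flow differs in detail):

```text
tpm2_createprimary  # create a primary storage key
tpm2_createpolicy   # build a policy bound to the firmware-measurement PCRs
tpm2_create         # seal the disk-unlock secret under that PCR policy
# later, at boot:
tpm2_unseal         # succeeds only while the PCRs still match; reflashed
                    # firmware changes the measurements and unseal fails
```

With that construction you don't need the TPM to be cleared on firmware modification: the changed measurements by themselves make the sealed secret unavailable.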