This series of patches adds TPM 2 support to SeaBIOS in the way previously
proposed.
v2->v3:
- Converted TPM_VERSION_* from enums to #defines and removed unnecessary
  TPM_VERSION_NONE cases from switch statements (see the sketch below the
  changelog).
- Converted the log_entry internal representation to the TPM 2 native format.
- Added a patch that looks at the command tags in the TPM_Passthrough API
  call and returns an error code in case of a TPM version mismatch.
v1->v2:
- Addressed most of Kevin's comments.
- Added patch for writing logs in TPM 2 format
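For reference, a minimal sketch of what the enum-to-#define conversion
might look like; the names follow the TPM_VERSION_* identifiers mentioned
above, but the exact values are assumptions, not the committed definitions:

/* Sketch only -- values are assumptions, not the committed definitions. */
#define TPM_VERSION_NONE 0   /* no TPM detected */
#define TPM_VERSION_1_2  1   /* TPM 1.2 device found */
#define TPM_VERSION_2    2   /* TPM 2 device found */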
Stefan
Stefan Berger (11):
tpm: Extend TPM TIS with TPM 2 support.
tpm: Factor out tpm_extend
tpm: Prepare code for TPM 2 functions
tpm: Implement tpm20_startup and tpm20_s3_resume
tpm: Implement tpm20_set_timeouts
tpm: Implement tpm20_prepboot
tpm: Implement tpm20_extend
tpm: Implement tpm20_menu
tpm: Implement TPM 2's tpm_set_failure part
tpm: Write logs in TPM 2 format
Filter TPM commands in passthrough API
src/hw/tpm_drivers.c | 38 ++-
src/hw/tpm_drivers.h | 26 +-
src/std/tcg.h | 147 +++++++++
src/tcgbios.c | 900 ++++++++++++++++++++++++++++++++++++++++++---------
4 files changed, 961 insertions(+), 150 deletions(-)
--
2.4.3
On Tue, Feb 02, 2016 at 09:56:20AM +0100, Gerd Hoffmann wrote:
> Hi,
>
> > > I'd have qemu copy the data on 0xfc write then, so things continue to
> > > work without updating seabios. So, the firmware has to allocate space,
> > > reserve it etc., and program the 0xfc register. Qemu has to make
> > > sure the opregion appears at the address written by the firmware, by
> > > whatever method it prefers.
> >
> > Yup. It's Qemu's responsibility to expose opregion content.
> >
> > btw, I prefer to do the copying here. It's pointless to allow writes
> > from the guest side. One write example is the SWSCI mailbox, through
> > which the gfx driver can trigger an SCI event to communicate with the
> > BIOS (specifically ACPI methods here), mostly for some monitor
> > operations. However, it's not right for the guest to trigger a host SCI
> > and thus kick off host ACPI methods.
>
> Thanks.
>
> So, the question again is how we do that best. Option one being the mmap
> way, i.e. basically what the patches posted by Alex are doing. Option two
> being the fw_cfg way, i.e. place an opregion copy in fw_cfg and have
> seabios not only set 0xfc, but also store the opregion there by copying
> from fw_cfg.
What about option 2a - SeaBIOS copies from fw_cfg to memory and then
programs 0xfc. QEMU can detect the write to 0xfc and choose to map
that ram (thus completely ignoring the contents that were just copied
in), or it can choose not to map that ram (thus the guest uses the
contents just copied in).
The advantage of this approach is that it is a bit simpler in the
firmware (no size probing is needed as the size comes from fw_cfg) and
it allows for future flexibility as the choice of mapping can be
deferred.
Totally untested seabios code below as an example.
As an aside, if this type of "program a pci register with a memory
address" pattern becomes common, we could enhance the acpi-style "linker
script" system to automate this.
-Kevin
static void intel_igd_opregion_setup(struct pci_device *dev, void *arg)
{
    /* Look for an OpRegion copy exposed via fw_cfg. */
    struct romfile_s *file = romfile_find("etc/igd-opregion");
    if (!file)
        return;
    /* Allocate a page-aligned buffer for the whole OpRegion. */
    void *data = memalign_high(PAGE_SIZE, file->size);
    if (!data) {
        warn_noalloc();
        return;
    }
    /* Copy the OpRegion contents out of fw_cfg. */
    int ret = file->copy(file, data, file->size);
    if (ret < 0) {
        free(data);
        return;
    }
    /* Program the ASL Storage register with the buffer address. */
    pci_config_writel(dev->bdf, 0xFC, (u32)data);
}
The proposed IGD OpRegion support in QEMU via vfio maps the host
OpRegion into VM system memory at the address written to the ASL
Storage register (0xFC). The OpRegion contains a 16-byte signature
followed by a 4-byte size field. SeaBIOS can therefore allocate a
buffer of the typical size (8KB), which results in a matching e820
reserved entry for the range, then write the buffer address to the ASL
Storage register, causing the OpRegion to appear in the VM address
space. At that point SeaBIOS can validate the signature and size and
remap if necessary. If the OpRegion is larger than 8KB, any
conflicting ranges would be temporarily mapped over by the OpRegion;
what that could break probably needs further discussion.
Other options might be to use the same algorithm with an obscenely
sized initial buffer to make sure we do not overlap anything, to always
re-allocate a properly sized buffer, or perhaps to pass the OpRegion
itself, or information about it, through fw_cfg.
With the posted kernel series[1] and QEMU series[2] (on top of Gerd's
IGD patches[3]), the OpRegion becomes available to the VM, and tracing
shows that the guest driver does access it.
[1] https://lkml.org/lkml/2016/2/1/884
[2] https://lists.gnu.org/archive/html/qemu-devel/2016-02/msg00202.html
[3] https://lists.gnu.org/archive/html/qemu-devel/2016-01/msg00244.html
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---
src/fw/pciinit.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 51 insertions(+)
diff --git a/src/fw/pciinit.c b/src/fw/pciinit.c
index c31c2fa..4f3251e 100644
--- a/src/fw/pciinit.c
+++ b/src/fw/pciinit.c
@@ -257,6 +257,53 @@ static void ich9_smbus_setup(struct pci_device *dev, void *arg)
     pci_config_writeb(bdf, ICH9_SMB_HOSTC, ICH9_SMB_HOSTC_HST_EN);
 }
 
+static void intel_igd_opregion_setup(struct pci_device *dev, void *arg)
+{
+    u16 bdf = dev->bdf;
+    u32 orig;
+    void *opregion;
+    int size = 8;
+
+    if (!CONFIG_QEMU)
+        return;
+
+    orig = pci_config_readl(bdf, 0xFC);
+
+realloc:
+    opregion = malloc_high(size * 1024);
+    if (!opregion) {
+        warn_noalloc();
+        return;
+    }
+
+    /*
+     * QEMU maps the OpRegion into system memory at the address written here;
+     * this overlaps our malloc, which marks the range e820 reserved.
+     */
+    pci_config_writel(bdf, 0xFC, cpu_to_le32((u32)opregion));
+
+    if (memcmp(opregion, "IntelGraphicsMem", 16)) {
+        pci_config_writel(bdf, 0xFC, orig);
+        free(opregion);
+        return; /* the opregion didn't magically appear, not supported */
+    }
+
+    if (size == le32_to_cpu(*(u32 *)(opregion + 16))) {
+        dprintf(1, "Intel IGD OpRegion enabled on %02x:%02x.%x\n",
+                pci_bdf_to_bus(bdf), pci_bdf_to_dev(bdf), pci_bdf_to_fn(bdf));
+        return; /* success! */
+    }
+
+    pci_config_writel(bdf, 0xFC, orig);
+
+    if (size == 8) { /* try once more with the size the OpRegion reports */
+        size = le32_to_cpu(*(u32 *)(opregion + 16));
+        free(opregion);
+        goto realloc;
+    }
+    free(opregion);
+}
+
 static const struct pci_device_id pci_device_tbl[] = {
     /* PIIX3/PIIX4 PCI to ISA bridge */
     PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_0,
@@ -290,6 +337,10 @@ static const struct pci_device_id pci_device_tbl[] = {
     PCI_DEVICE_CLASS(PCI_VENDOR_ID_APPLE, 0x0017, 0xff00, apple_macio_setup),
     PCI_DEVICE_CLASS(PCI_VENDOR_ID_APPLE, 0x0022, 0xff00, apple_macio_setup),
 
+    /* Intel IGD OpRegion setup */
+    PCI_DEVICE_CLASS(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, PCI_CLASS_DISPLAY_VGA,
+                     intel_igd_opregion_setup),
+
     PCI_DEVICE_END,
 };
On Fri, Jan 22, 2016 at 03:27:28PM -0500, Stefan Berger wrote:
> "Kevin O'Connor" <kevin(a)koconnor.net> wrote on 01/21/2016 05:37:29 PM:
> > Thanks Stefan. In general it looks good to me. I have a few
> > comments, which I'll send separately. All of my comments could be
> > addressed after committing this series if desired.
>
> I can address those comments and repost a V2 with the 10th patch adding
> the part for the logging.
Hi Stefan. Sorry for the delay in responding. I have a couple of
comments on the new patch series, which I will send separately.
> > How does one test and/or use this support? Does QEMU have support, or
> > is there hardware available on coreboot with the tpm2 hardware?
>
> I did all the testing of these patches with the vTPM with CUSE interface
> integrated into QEMU. Unfortunately, the vTPM-QEMU integration train seems
> to be a wreck now, following comments on the QEMU mailing list. So I don't
> know of any TPM 2 hardware out there, much less hardware where coreboot
> runs. So that's probably currently the number one problem.
Normally, I prefer to wait until upstream has committed the equivalent
patches. I think there is some leeway here, however, because this
series could be considered as adding support for additional hardware.
That said, if you don't know of any TPM2 hardware that is shipping, it
does raise the possibility that the specs might change by the time
actual hardware does ship. What is your feel for the trade-off
between merging now and merging after actual implementations exist?
> You know the TPM 1.2 PC BIOS specification, right? I think we can say that
> many of the functions implemented in this series for TPM 2 are necessary
> both because of how it's done for TPM 1.2 and because of properties of the
> TPM 2 device. This includes the TPM initialization, S3 support, setting of
> timeouts, menu items, etc. The problem with TPM 2 is that there's no
> official TPM 2 specification for a BIOS. So it's not quite clear to me how
> much leeway we have in the areas of ACPI tables for logging and the API.
> Regarding these topics:
>
> ACPI tables for logging: The (U)EFI specification for TPM 2 doesn't require
> a TCPA table with the logging area, because there seems to be an API through
> which the OS retrieves the log. UEFI seems to log into just some buffer,
> not connected to any ACPI table. For the BIOS we would still need that
> TCPA table. QEMU currently provides it. The Linux kernel (and all other
> OSes -- uuuh) would then have to allow a TCPA table for logging for TPM 2,
> even though we cannot point to a spec for that. I'm not sure whether we can
> create a standard for this little gap here...
It sounds like the creators of the spec assumed only EFI machines
would have a TPM2 device. Unless there is evidence that OSes will
accept the ACPI/TCPA table in the new format, I'd be inclined to leave
it in the old format.
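For context, here is a rough sketch of the "old format" TCPA client table
being discussed, following the TCG ACPI specification for TPM 1.2; the
struct and field names are illustrative, not SeaBIOS's actual definitions:

/* Illustrative sketch -- layout per the TCG ACPI spec, names assumed. */
struct tcpa_table {
    struct acpi_table_header header;  /* signature "TCPA" */
    u16 platform_class;               /* 0 == client platform */
    u32 laml;                         /* Log Area Minimum Length in bytes */
    u64 lasa;                         /* Log Area Start Address (physical) */
} PACKED;

The OS finds the measurement log by locating this table and reading the
lasa/laml pair, which is why a BIOS without the (U)EFI log-retrieval API
still needs the table.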
> BIOS API: Some functions pass the log entry to be written directly via the
> API call. Patch 10 handles that and transforms the entry into the log entry
> format required for TPM 1.2 or TPM 2 (log entries are formatted differently
> for TPM 1.2 and for TPM 2). So the only remaining problem I know of is the
> function that allows one to pass TPM commands through to the TPM. This may
> end up causing problems in an application that was written for TPM 1.2 and
> now has a TPM 2 running underneath, which doesn't understand the TPM 1.2
> commands. I would say this is likely the smaller of the problems, also
> considering that there are not many applications out there that use that
> API call. It would certainly be possible to simply shut down that function
> call.
I'd say returning an error code for pass-through command requests is
the safest solution.
-Kevin
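For illustration, a minimal sketch of such a tag check, assuming the
hypothetical helper name below and the TPM_VERSION_* constants from the
cover letter; the command tag values come from the TPM 1.2 and TPM 2
specifications:

/* Sketch only -- reject pass-through commands whose tag doesn't match
 * the detected TPM version. TPM 1.2 requests begin with tag 0x00c1
 * (TPM_TAG_RQU_COMMAND), 0x00c2 or 0x00c3 (the auth variants); TPM 2
 * requests begin with 0x8001 (TPM2_ST_NO_SESSIONS) or 0x8002
 * (TPM2_ST_SESSIONS).
 */
static int tpm_command_tag_matches(int tpm_version, const u8 *cmd, u32 len)
{
    if (len < 2)
        return 0;
    u16 tag = (cmd[0] << 8) | cmd[1]; /* tags are big-endian on the wire */
    switch (tpm_version) {
    case TPM_VERSION_1_2:
        return tag == 0x00c1 || tag == 0x00c2 || tag == 0x00c3;
    case TPM_VERSION_2:
        return tag == 0x8001 || tag == 0x8002;
    }
    return 0;
}

A passthrough request that fails this check would get an error code back
instead of being forwarded to the device.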