[SeaBIOS] Saving a few bytes across a reboot

Igor Mammedov imammedo at redhat.com
Thu Jan 11 13:40:53 CET 2018


On Wed, 10 Jan 2018 17:45:52 +0100
Laszlo Ersek <lersek at redhat.com> wrote:

> On 01/10/18 16:19, Marc-André Lureau wrote:
> > Hi
> >
> > ----- Original Message -----  
> >>
> >> BTW, from the "TCG PC Client Platform TPM Profile (PTP)
> >> Specification", it seems like the FIFO (TIS) interface is hard-coded
> >> *in the spec* at FED4_0000h through FED4_4FFFh. So we don't even have
> >> to make that dynamic.
> >>
> >> Regarding CRB (as an alternative to TIS+Cancel), I'm trying to wrap
> >> my brain around the exact resources that the CRB interface requires.
> >> Marc-André, can you summarize those?  
> >
> > The device is a relatively simple MMIO-only device on the sysbus:
> > https://github.com/stefanberger/qemu-tpm/commit/2f9d06f93b285d4b39966a80867584c487035db9#diff-1ef22a0d46031cf2701a185aed8ae40eR282
> >
> > The region is registered at the same address as TIS (it's not entirely
> > clear from the spec that it is supposed to be there, but my laptop's TPM
> > uses the same). And it uses a size of 0x1000, although it's also unclear
> > to me what the size of the command buffer should be (that size can also
> > be defined at run-time now, IIRC; I should adapt the code).
> 
> Thank you -- so the "immediate" register block is in MMIO space, and
> (apparently) we can hard-code its physical address too.
A fixed mapping is fine for real hardware, since such systems tend to have
a more or less fixed configuration and the firmware is built specifically
for the board in question. That isn't necessarily true for QEMU though,
which is why we have fw_cfg and similar interfaces.
 
> My question is if we need to allocate guest RAM in addition to the
> register block, for the command buffer(s) that will transmit the
> requests/responses. I see the code you quote above says,
> 
> +    /* allocate ram in bios instead? */
> +    memory_region_add_subregion(get_system_memory(),
> +        TPM_CRB_ADDR_BASE + sizeof(struct crb_regs), &s->cmdmem);
Michael used to reject any patches with explicitly mapped memory regions
(I recall NVDIMM tried to use something like this for the DSM buffer),
since it's not migration friendly (in practice it becomes impossible to
move the region when the need arises). With the linker approach the memory
is allocated by the guest, its address is migrated as part of the device
state, and no memory layout changes are needed on the QEMU side.
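
Roughly, on the QEMU side that would look something like the sketch below
(not compile-tested; it assumes it runs in the ACPI table build code where
'linker' is in scope, and the "etc/tpm/cmd-buffer"/"etc/tpm/cmd-addr"
fw_cfg file names and tpm2_cmd_area_offset are made up for illustration,
only the bios_linker_loader_* helpers are the existing API):

    #include "qemu/osdep.h"
    #include "hw/acpi/aml-build.h"           /* ACPI_BUILD_TABLE_FILE */
    #include "hw/acpi/bios-linker-loader.h"

    /* blob that the guest firmware will allocate for us */
    GArray *cmd_buf = g_array_new(false, true, 1);
    g_array_set_size(cmd_buf, 0x1000);

    /* ALLOCATE: guest fw places the blob, page aligned, outside FSEG */
    bios_linker_loader_alloc(linker, "etc/tpm/cmd-buffer", cmd_buf,
                             4096, false);

    /* ADD_POINTER: patch the blob's guest address into the TPM2 table */
    bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
                                   tpm2_cmd_area_offset, sizeof(uint64_t),
                                   "etc/tpm/cmd-buffer", 0);

    /* WRITE_POINTER: guest fw writes the same address back into a writable
     * fw_cfg file, so the device model learns where the buffer ended up;
     * that address then travels with the device state on migration */
    bios_linker_loader_write_pointer(linker, "etc/tpm/cmd-addr", 0,
                                     sizeof(uint64_t),
                                     "etc/tpm/cmd-buffer", 0);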


> ... and AFAICS your commit message poses the exact same question :)
> 
> Option 1: If we have enough room in MMIO space above the register block
> at 0xFED40000, then we could simply dump the CRB there too.
>
> Option 2: If not (or we want to avoid Option 1 for another reason), then
> the linker/loader script has to make the guest fw allocate RAM, write
> the allocation address to the TPM2 table with an ADD_POINTER command,
> and write the address back to QEMU with a WRITE_POINTER command. Is my
> understanding correct?
> 
> I wonder why we'd want to bother with Option 2, since we have to place
> the register block at a fixed MMIO address anyway.
> 
> (My understanding is that the guest has to populate the CRB, and then
> kick the hypervisor, so at least the register used for kicking must be
> in MMIO (or IO) space. And firmware cannot allocate MMIO or IO space
> (for platform devices). Thus, the register block must reside at a
> QEMU-determined GPA. Once we do that, why bother about RAM allocation?)

The MMIO region doesn't have to be fixed, nor does it have to exist at
all; we could use the linker's write-to-file operation in the firmware for
switching from the guest to QEMU. That's obviously more intrusive work for
the firmware and QEMU compared to a hardcoded address in both, but the
benefit is that changes to QEMU and the firmware don't have to be tightly
coupled, and the layout can be changed whenever the need arises.
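
On the firmware side that means handling one more command in the
"etc/table-loader" file. From memory (the authoritative layout is the one
documented in QEMU's hw/acpi/bios-linker-loader.c, so please double-check
the field sizes before relying on this), a WRITE_POINTER entry looks
roughly like:

    #include <stdint.h>

    /* one entry in "etc/table-loader", padded to 128 bytes;
     * command 0x4 == WRITE_POINTER (sketch, offsets not verified here) */
    struct write_pointer_entry {
        uint32_t command;        /* 0x4 */
        char     dest_file[56];  /* writable fw_cfg file QEMU watches,
                                  * e.g. the hypothetical "etc/tpm/cmd-addr" */
        char     src_file[56];   /* blob the fw allocated, "etc/tpm/cmd-buffer" */
        uint32_t dst_offset;     /* where in dest_file to store the address */
        uint32_t src_offset;     /* offset inside the allocated blob */
        uint8_t  size;           /* pointer width: 1, 2, 4 or 8 bytes */
    } __attribute__((packed));

The firmware executes it by writing the blob's guest-physical address into
dest_file at dst_offset through the fw_cfg DMA interface; the write
callback QEMU has on that file is the guest-to-QEMU "switch" I meant above.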


> > My experiments so far running some Windows tests indicate that for
> > TPM2, CRB+UEFI is required (and I managed to get an ovmf build with
> > TPM2 support).  
> 
> Awesome!
> 
> > A few tests failed; it seems the "Physical Presence Interface" (PPI) is
> > also required.  
> 
> Required for what goal, exactly?
> 
> > I think that ACPI interface allows running TPM commands during reboot,
> > with the firmware taking care of the security aspects.
> 
> Ugh :/ I mentioned those features in my earlier write-up, under points
> (2f2b) and (2f2c). I'm very unhappy about them. They are a *huge* mess
> for OVMF.
> 
> - They would require including (at least a large part of) the
>   Tcg2Smm/Tcg2Smm.inf driver, with all the complications I described
>   earlier as counter-arguments,
> 
> - they'd require including the MemoryOverwriteControl/TcgMor.inf driver,
> 
> - and they'd require some real difficult platform code in OVMF (e.g.
>   PEI-phase access to non-volatile UEFI variables, which I've by now
>   failed to upstream twice; PEI-phase access to all RAM; and more).
> 
> My personal opinion is that we should determine what goals require what
> TPM features, and then we should aim at a minimal set. If I understand
> correctly, PCRs and measurements already work (although the patches are
> not upstream yet) -- is that correct?
> 
> Personally I think the SSDT/_DSM-based features (TCG Hardware
> Information, TCG Memory Clear Interface, TCG Physical Presence
> Interface) are very much out of scope for "TPM Enablement".
> 
> > I think that's what Stefan is working on for Seabios and the safe
> > memory region (sorry I haven't read the whole discussion, as I am not
> > working on TPM atm)  
> 
> Yeah, with e.g. the "TCG Memory Clear Interface" feature pulled into the
> context -- from the "Platform Reset Attack Mitigation Specification" --,
> I do understand Stefan's question. Said feature is about the OS setting
> a flag in NVRAM, for the firmware to act upon, at next boot. "Saving a
> few bytes across a reboot" maps to that.
> 
> (And, as far as I understand this spec, it tells traditional BIOS
> implementors, "do whatever you want for implementing this NVRAM thingy",
> while to UEFI implementors, it says, "use exactly this and that
> non-volatile UEFI variable". Given this, I don't know how much
> commonality would be possible between SeaBIOS and OVMF.)
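
(Side note: the non-volatile UEFI variable the spec mandates there is the
MOR one. A minimal sketch of the write that the OS, or a UEFI test app,
ends up doing, assuming the names from edk2's Guid/MemoryOverwriteControl.h,
would be something like:

    #include <Uefi.h>
    #include <Library/UefiRuntimeServicesTableLib.h>   /* gRT */
    #include <Guid/MemoryOverwriteControl.h>           /* variable name + GUID */

    EFI_STATUS Status;
    UINT8      Mor = 0x01;   /* bit 0: request memory clear on next boot */

    /* MEMORY_ONLY_RESET_CONTROL_NAME is L"MemoryOverwriteRequestControl" */
    Status = gRT->SetVariable(MEMORY_ONLY_RESET_CONTROL_NAME,
                              &gEfiMemoryOverwriteControlDataGuid,
                              EFI_VARIABLE_NON_VOLATILE |
                              EFI_VARIABLE_BOOTSERVICE_ACCESS |
                              EFI_VARIABLE_RUNTIME_ACCESS,
                              sizeof Mor, &Mor);

The firmware then has to notice that bit at the next boot and wipe RAM
before handing control to anything else, which is where TcgMor.inf and the
PEI-phase variable access Laszlo mentions come in.)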
> 
> Similarly, about "TCG Physical Presence Interface" -- defined in the TCG
> Physical Presence Interface Specification --, I had written, "The OS can
> queue TPM operations (?) that require Physical Presence, and at next
> boot, [the firmware] would have to dispatch those pending operations."
> 
> That "queueing" maps to the same question (and NVRAM) again, yes.
> 
> 
> Again, I'm unclear about any higher level goals / requirements here, but
> I think these "extras" from the Trusted Computing Group are way beyond
> TPM enablement.
> 
> Thanks
> Laszlo



