When my Skylake system comes out of S3 it fails to resume and instead falls back through the normal boot path. Console output during the attempted resume:
coreboot-4.4-1781-g2fcabb8-heads Wed Oct 5 01:45:23 UTC 2016 ramstage starting...
FSP_INFO_HEADER not set!
Enumerating buses...
Enabling Common Clock Configuration
ASPM: Enabled L1
done
Allocating resources...
Reading resources...
CPU_CLUSTER: 0 missing read_resources
Done reading resources.
Setting resources...
PNP: 0c09.0 missing set_resources
Done setting resources.
Done allocating resources.
(and then it reboots)
My guess is that this is due to the hacks to disable the relocatable ramstage:
CONFIG_RELOCATABLE_RAMSTAGE: The relocatable ramstage support allows for the ramstage to be built as a relocatable module. The stage loader can identify a place out of the OS way so that copying memory is unnecessary during an S3 wake. When selecting this option the romstage is responsible for determining a stack location to use for loading the ramstage.
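For reference, the hack boils down to turning that option off in the board's .config; a minimal illustration (only the symbol name is taken from coreboot's Kconfig, the surrounding changes are not shown):

    # CONFIG_RELOCATABLE_RAMSTAGE is not set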
I filed an issue on the tracker related to the ramstage problem and am trying to debug it with Aaron:
https://ticket.coreboot.org/issues/78
On Mon, Oct 10, 2016 at 09:40:49AM -0600, Trammell Hudson wrote:
> [...] I filed an issue on the tracker related to the ramstage problem and am trying to debug it with Aaron:
And it appears to be a bug of my own creation...
Earlier I ran into a problem where the SMM region was not zeroed, leading to incorrect measurements when the startup code extended one of the PCRs with its hash. So I added code to rmodule_copy_payload() to zero everything outside the bounds of payload_size.
This fixed the SMM bug, but for a relocatable ramstage it also zeroes the .reloc section.
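To make the failure mode concrete, here is a minimal sketch of the problematic logic, reconstructed from the description above; the rmodule_* names mirror coreboot's rmodule loader, but the exact fields and bounds arithmetic here are my reconstruction, not the actual patch:

    #include <string.h>
    #include <rmodule.h>

    static void rmodule_copy_payload(const struct rmodule *module)
    {
            memcpy(module->location, module->payload, module->payload_size);

            /* Zero the tail of the load region so the SMM measurement
             * is deterministic.  The bug: for a relocatable ramstage
             * this tail also contains the .reloc section, so the
             * relocation records are wiped before they can be applied
             * on the resume path. */
            memset((char *)module->location + module->payload_size, 0,
                   rmodule_memory_size(module) - module->payload_size);
    }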
So, I need to close coreboot issue #78 and re-examine my logic in this patch:
https://github.com/osresearch/coreboot/commit/f8d2344e172c0f201df791cb1513b5...
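For when I revisit it, one direction the fix could take (purely illustrative, not the actual change): keep the deterministic zeroing for the SMM loader but skip it for modules that still need their relocation records. A hedged sketch, where the zero_tail parameter is a hypothetical addition rather than coreboot API:

    /* Hypothetical variant: the caller decides whether the tail may be
     * zeroed.  The SMM loader would pass 1 for reproducible
     * measurements; the relocatable ramstage loader would pass 0 so
     * the .reloc section survives until the relocation step has run. */
    static void rmodule_copy_payload(const struct rmodule *module,
                                     int zero_tail)
    {
            memcpy(module->location, module->payload, module->payload_size);

            if (zero_tail)
                    memset((char *)module->location + module->payload_size,
                           0,
                           rmodule_memory_size(module) - module->payload_size);
    }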