On Fri, Jun 3, 2016 at 7:04 AM, Patrick Rudolph <siro@das-labor.org> wrote:
Hello, I want to start a discussion about an issue with the PCI MMIO size that has hit me a couple of times while using coreboot. I'm focused on Intel Sandy Bridge, but I guess this topic applies to all x86 systems.
On most Intel systems the PCI MMIO size is hard-coded in coreboot to 1024 MiB, and the value is hidden deep inside the raminit code. The MMIO space available for dynamic resources is limited at the top by PCIEXBAR, the IOAPIC, ME stolen memory, and so on, which take 128 MiB, and at the other end by graphics stolen memory and TSEG, which require at least 40 MiB. Due to alignment constraints, that leaves space for only two 256 MiB PCI BARs in total. That's enough for systems that only have an Intel GPU, but it fails as soon as PCI devices use more than a single 256 MiB BAR. The PCI MMIO size is set in romstage, but PCI BARs are configured in ramstage.
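To make the alignment arithmetic concrete, here is a small standalone sketch (not coreboot code; the helper name and exact window bounds are mine, following the accounting above) that counts how many naturally aligned BARs of a given size fit into the dynamic window:

#include <stdint.h>
#include <stdio.h>

#define MiB (1ULL << 20)

/* Count how many naturally aligned BARs of bar_size fit in [base, limit). */
static unsigned int count_aligned_bars(uint64_t base, uint64_t limit,
				       uint64_t bar_size)
{
	/* A PCI memory BAR must be aligned to its own size. */
	uint64_t slot = (base + bar_size - 1) & ~(bar_size - 1);
	unsigned int n = 0;

	while (slot + bar_size <= limit) {
		n++;
		slot += bar_size;
	}
	return n;
}

int main(void)
{
	const uint64_t top = 4096 * MiB;		/* 4 GiB */
	const uint64_t hole_base = top - 1024 * MiB;	/* hole starts at 3 GiB */
	const uint64_t fixed_top = 128 * MiB;	/* PCIEXBAR, IOAPIC, ME, ... */
	const uint64_t fixed_bottom = 40 * MiB;	/* graphics stolen + TSEG */

	uint64_t base = hole_base + fixed_bottom;
	uint64_t limit = top - fixed_top;

	printf("%u aligned 256 MiB BARs fit in [0x%llx, 0x%llx)\n",
	       count_aligned_bars(base, limit, 256 * MiB),
	       (unsigned long long)base, (unsigned long long)limit);
	return 0;
}

It prints 2: the window [0xC2800000, 0xF8000000) is 856 MiB, but the only 256 MiB-aligned slots inside it are 0xD0000000 and 0xE0000000.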
The following questions came to my mind:
- How does the MRC handle this?
- Should the user be able to modify the PCI MMIO size?
- How do we pass the required PCI MMIO size to romstage? A good place seems to be the MRC cache, but ramstage doesn't know about its structure.
- How is this solved on AMD systems?
- Should romstage scan PCI devices and count the required BAR sizes? (A minimal sketch of such a scan follows this list.)
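To make the last question concrete, here is a minimal, hypothetical sketch of such a romstage scan. The sizing logic is the standard write-ones-and-read-back probe; the accessor and header names follow today's coreboot tree (the romstage PCI config API has been spelled differently over the years). It deliberately ignores I/O BARs, 64-bit upper halves, buses behind bridges and alignment, so it only yields a lower bound:

#include <stdint.h>
#include <device/pci_ops.h>	/* pci_read_config32()/pci_write_config32() */
#include <device/pci_def.h>	/* PCI_VENDOR_ID, PCI_BASE_ADDRESS_0 */

/*
 * Probe the size of one 32-bit memory BAR: write all ones, read back
 * which address bits are writable, then restore the original value.
 * A real implementation would disable memory decode in PCI_COMMAND
 * around the probe. Returns 0 for I/O BARs and unimplemented BARs.
 */
static uint32_t probe_bar_size(pci_devfn_t dev, unsigned int index)
{
	unsigned int reg = PCI_BASE_ADDRESS_0 + 4 * index;
	uint32_t orig = pci_read_config32(dev, reg);
	uint32_t mask;

	if (orig & 1)	/* I/O space BAR, not interesting here */
		return 0;

	pci_write_config32(dev, reg, 0xffffffff);
	mask = pci_read_config32(dev, reg);
	pci_write_config32(dev, reg, orig);

	mask &= ~0xf;	/* strip the memory BAR flag bits */
	return mask ? ~mask + 1 : 0;
}

/* Sum the BAR sizes of all functions on bus 0 (no bridge recursion). */
static uint64_t required_mmio_on_bus0(void)
{
	uint64_t total = 0;

	for (unsigned int devfn = 0; devfn < 32 * 8; devfn++) {
		pci_devfn_t dev = PCI_DEV(0, devfn >> 3, devfn & 7);

		if (pci_read_config32(dev, PCI_VENDOR_ID) == 0xffffffff)
			continue;	/* no device/function here */
		for (unsigned int bar = 0; bar < 6; bar++)
			total += probe_bar_size(dev, bar);
	}
	return total;
}

Because each BAR must also be naturally aligned, a real implementation would have to round the running total up to each BAR's alignment, which is exactly why two 256 MiB BARs already exhaust an 856 MiB window.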
Regards, Patrick

In the past (I'm not sure whether it's still true), Intel UEFI systems would do a reboot after calculating the MMIO space requirements, so that the memory init code knows how much to allocate for MMIO resource allocation. I always found that distasteful, and I have used a 2 GiB I/O hole ever since. I never cared about 32-bit kernels, so I always found that to be a decent tradeoff. It also makes the MTRRs and the address space easy to digest when looking at things.
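For illustration, a hypothetical snapshot of why a 2 GiB hole keeps the MTRRs simple: assuming 4 GiB of installed DRAM with low memory ending at 2 GiB and the remainder remapped above 4 GiB, two variable MTRRs cover everything, because both DRAM ranges are single power-of-two blocks. wrmsr64() is a stand-in for the platform's MSR write helper; the MSR numbers and type encodings are the architectural IA32 ones, and the 36-bit physical address space is an assumption.

#include <stdint.h>

/* Architectural MTRR MSRs and memory types (Intel SDM vol. 3). */
#define IA32_MTRR_PHYSBASE(n)	(0x200 + 2 * (n))
#define IA32_MTRR_PHYSMASK(n)	(0x201 + 2 * (n))
#define MTRR_TYPE_WB		6
#define MTRR_PHYSMASK_VALID	(1ULL << 11)

extern void wrmsr64(uint32_t msr, uint64_t val);	/* platform-provided */

/*
 * With a 2 GiB I/O hole, all low DRAM lives in [0, 2 GiB) and the
 * remapped DRAM in [4 GiB, 6 GiB), so one variable MTRR per range
 * suffices (default memory type assumed UC, 36-bit address space).
 */
static void setup_mtrrs_2gib_hole(void)
{
	const uint64_t addr_mask = (1ULL << 36) - 1;
	const uint64_t size_2g = 2ULL << 30;

	/* MTRR0: [0, 2 GiB) write-back. */
	wrmsr64(IA32_MTRR_PHYSBASE(0), 0 | MTRR_TYPE_WB);
	wrmsr64(IA32_MTRR_PHYSMASK(0),
		(~(size_2g - 1) & addr_mask) | MTRR_PHYSMASK_VALID);

	/* MTRR1: [4 GiB, 6 GiB) write-back for the remapped 2 GiB. */
	wrmsr64(IA32_MTRR_PHYSBASE(1), (4ULL << 30) | MTRR_TYPE_WB);
	wrmsr64(IA32_MTRR_PHYSMASK(1),
		(~(size_2g - 1) & addr_mask) | MTRR_PHYSMASK_VALID);
}

With a 1 GiB hole and the same 4 GiB of DRAM, low memory becomes 3 GiB, which is not a power of two and already needs two MTRRs (2 GiB + 1 GiB) for the low range alone.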
--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot