[coreboot] Discussion about dynamic PCI MMIO size on x86

Aaron Durbin adurbin at google.com
Mon Jun 6 04:07:22 CEST 2016


On Sat, Jun 4, 2016 at 12:00 PM, Zoran Stojsavljevic
<zoran.stojsavljevic at gmail.com> wrote:
> Hello to all,
>
> If I remember correctly: PCIe configuration space addressing consists of 3
> parts: bus (8 bits), device (5 bits) and function (3 bits). This gives in
> total 8+5+3 = 16 bits, thus 2^16 (65536) functions. With the 256 bytes of
> legacy config space per function, that gives a maximum of 16MB of
> configuration address space (just below TOM). With 4KB per function
> (extended config space), it gives 256MB (0x10000000). Here it is:
> https://en.wikipedia.org/wiki/PCI_configuration_space

For extended config space it's 256 MiB total.
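As a quick illustration of where those numbers come from, here is a minimal
sketch of the ECAM offset arithmetic (the helper name is only illustrative,
not an existing coreboot function):

#include <stdint.h>

/*
 * 8-bit bus + 5-bit device + 3-bit function = 2^16 functions.
 * Legacy config space:   65536 functions * 256 bytes = 16 MiB.
 * Extended config space: 65536 functions * 4 KiB     = 256 MiB,
 * which is why PCIEXBAR needs a 256 MiB window to cover all 256 buses.
 */
static inline uint32_t ecam_offset(uint8_t bus, uint8_t dev, uint8_t fn)
{
	return ((uint32_t)bus << 20) |
	       ((uint32_t)(dev & 0x1f) << 15) |
	       ((uint32_t)(fn & 0x7) << 12);
}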
>
> The question about system memory beyond the PCIe bridge is a separate
> question. It seems to me that 2GB is too much (not compliant with IA32/x86,
> only with the x86_64 architecture). Thus I agree with Aaron. Hard to comply
> with IA32/x86.
>
> But in the past I saw similar behavior with BIOSes (double reboot). Not sure
> if this is the real reason (Aaron writes: Intel UEFI systems do a reboot
> after calculating the MMIO space requirements so that the memory init code
> knows how much to allocate for MMIO resource allocation.). Hard to imagine
> for 32-bit UEFI-compliant BIOSes; all 64-bit BIOSes are UEFI-compliant.
>
> If Aaron is correct, I've learned something new. ;-)

It's a trade-off of accessible memory (DRAM) vs. simplification. If the
kernel you are booting is 64-bit, using a 2GiB I/O hole doesn't matter at
all. If you care about 32-bit kernels, then you care more about the size
of the I/O hole. As I noted previously, I haven't been concerned with
32-bit kernels for over a decade, but that's just what I'm used to and am
concerned about. Obviously, if 32-bit kernels are a concern, then you
wouldn't use a 2GiB hole when trying to maximize DRAM access.
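To make the MTRR point a bit more concrete, here is a rough sketch (not
coreboot code; the DRAM size is just an example) of how a fixed 2GiB hole
lays out the 32-bit address map:

#include <stdint.h>
#include <stdio.h>

/*
 * With a fixed 2 GiB I/O hole, low DRAM ends exactly at 2 GiB, the hole
 * occupies 2 GiB - 4 GiB, and the remaining DRAM is remapped above 4 GiB.
 */
int main(void)
{
	const uint64_t GiB = 1ULL << 30;
	uint64_t dram = 8 * GiB;	/* total installed DRAM (example) */
	uint64_t io_hole = 2 * GiB;	/* fixed hole below 4 GiB */

	uint64_t top_of_low_dram = 4 * GiB - io_hole;	/* 0x80000000 */
	uint64_t dram_above_4g = dram - top_of_low_dram;

	printf("WB MTRR:  0x0 - 0x%llx (low DRAM)\n",
	       (unsigned long long)top_of_low_dram);
	printf("I/O hole: 0x%llx - 0x100000000 (UC by default)\n",
	       (unsigned long long)top_of_low_dram);
	printf("WB MTRR:  0x100000000 - 0x%llx (remapped DRAM)\n",
	       (unsigned long long)(4 * GiB + dram_above_4g));
	return 0;
}

Because every boundary lands on a power of two, only a couple of variable
MTRRs are needed to describe the whole map.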


>
> Thank you,
> Zoran
>
>
> On Sat, Jun 4, 2016 at 6:49 PM, ron minnich <rminnich at gmail.com> wrote:
>>
>> Another Kconfig option? How many people will really understand what it
>> means and whether to use it?
>>
>> Has just reserving 2 GiB as a hard and fast rule hurt anyone yet?
>>
>> thanks
>>
>> ron
>>
>> On Fri, Jun 3, 2016 at 11:25 PM Patrick Rudolph <siro at das-labor.org>
>> wrote:
>>>
>>> On 2016-06-03 05:41 PM, Aaron Durbin via coreboot wrote:
>>> > On Fri, Jun 3, 2016 at 7:04 AM, Patrick Rudolph <siro at das-labor.org>
>>> > wrote:
>>> >> Hello,
>>> >> I want to start a discussion about PCI MMIO size that hit me a couple
>>> >> of
>>> >> times using coreboot.
>>> >> I'm focused on Intel Sandybridge, but I guess this topic applies to
>>> >> all
>>> >> x86 systems.
>>> >>
> >>> >> On most Intel systems the PCI MMIO size is hard-coded in coreboot to
> >>> >> 1024Mbyte, but the value is hidden deep inside the raminit code.
> >>> >> The MMIO size for dynamic resources is limited at the top by PCIEXBAR,
> >>> >> IOAPIC, ME stolen, ... which take 128Mbyte, and at the other end it is
> >>> >> limited by graphics stolen and TSEG, which require at least 40Mbyte.
> >>> >> In total there's only space for two 256Mbyte PCI BARs, due to
> >>> >> alignment.
> >>> >> That's enough for systems that only have an Intel GPU, but it fails
> >>> >> as soon as PCI devices use more than a single 256Mbyte BAR.
> >>> >> The PCI MMIO size is set in romstage, but PCI BARs are configured in
> >>> >> ramstage.
>>> >>
> >>> >> The following questions came to my mind:
> >>> >> * How does the MRC handle this?
> >>> >> * Should the user be able to modify the PCI MMIO size?
> >>> >> * How to pass the required PCI MMIO size to romstage?
> >>> >>   A good place seems to be the mrc cache, but ramstage doesn't know
> >>> >>   about its structure.
> >>> >> * How is this solved on AMD systems?
> >>> >> * Should the romstage scan PCI devices and count the required BAR size?
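(For that last question: the usual way to discover a BAR's size is the
write-all-ones probe. A minimal sketch, assuming generic config-space
accessors rather than coreboot's exact romstage API:)

#include <stdint.h>

/* Assumed generic config accessors -- not coreboot's exact romstage API. */
uint32_t cfg_read32(unsigned int bdf, unsigned int reg);
void cfg_write32(unsigned int bdf, unsigned int reg, uint32_t val);

/*
 * Standard BAR sizing probe: save the BAR, write all-ones, read back,
 * mask the attribute bits, invert and add one to get the size/alignment.
 * (A 64-bit memory BAR would also need its high dword probed.)
 */
static uint64_t pci_bar_size(unsigned int bdf, unsigned int bar_reg)
{
	uint32_t orig = cfg_read32(bdf, bar_reg);
	uint32_t probed;

	if (orig & 1)			/* I/O BAR: not part of the MMIO budget */
		return 0;

	cfg_write32(bdf, bar_reg, 0xffffffff);
	probed = cfg_read32(bdf, bar_reg);
	cfg_write32(bdf, bar_reg, orig);	/* restore the original value */

	probed &= ~0xfu;		/* drop type/prefetchable bits */
	if (!probed)
		return 0;		/* BAR not implemented */

	return (uint64_t)(~probed) + 1;
}

Romstage would have to walk the devices and sum these sizes (with alignment)
to know the real requirement before raminit, which is the early PCI scan
being asked about above.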
>>> >
>>> > In the past (not sure if it's still true), Intel UEFI systems do a
>>> > reboot after calculating the MMIO space requirements so that the
>>> > memory init code knows how much to allocate for MMIO resource
>>> > allocation. I always found that distasteful and I have always used a
>>> > 2GiB I/O hole ever since. I never cared about 32-bit kernels so I
>>> > always found that to be a decent tradeoff. It also makes MTRR and
>>> > address space easy to digest when looking at things.
>>> >
>>>
>>> I like the idea of a reboot. It only has to be done after hardware
>>> changes that affect the PCI MMIO size.
>>> With the mrc cache in place it shouldn't be noticeable at all.
>>>
>>> On the other hand, hard-coding the limit is much simpler.
>>> What do you think about a Kconfig option
>>> "Optimize PCI mmio size for x86_64 OS" ?
>>>
>>> It would increase the size to 2GiB. Of course it would work on i386,
>>> but you might see less usable DRAM than before.
>>>
>>> >>
>>> >> Regards,
>>> >> Patrick
>>> >>
>>> >> --
>>> >> coreboot mailing list: coreboot at coreboot.org
>>> >> https://www.coreboot.org/mailman/listinfo/coreboot
>>>
>>> --
>>> coreboot mailing list: coreboot at coreboot.org
>>> https://www.coreboot.org/mailman/listinfo/coreboot
>>
>>
>> --
>> coreboot mailing list: coreboot at coreboot.org
>> https://www.coreboot.org/mailman/listinfo/coreboot
>
>


