On Sat, Jun 4, 2016 at 12:00 PM, Zoran Stojsavljevic
<zoran.stojsavljevic(a)gmail.com> wrote:
> Hello to all,
>
> If I remember correctly: PCIe configuration space addressing consists of 3
> parts: bus (8 bits), device (5 bits) and function (3 bits). This gives in
> total 8+5+3 = 16 bits, thus 2^16 (65536) functions. With 256 bytes of
> legacy config space per function, that gives a maximum of 16MB of
> configuration address space (just below TOM). With 4KB per function
> (extended config space), that gives 256MB (0x10000000). Here it is:
> https://en.wikipedia.org/wiki/PCI_configuration_space
For extended config space it's 256 MiB total.
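A quick sanity check of the arithmetic (plain C, nothing
coreboot-specific):

  #include <stdio.h>

  int main(void)
  {
      /* 8-bit bus + 5-bit device + 3-bit function = 16 bits */
      unsigned long functions = 1UL << (8 + 5 + 3);   /* 65536 */
      unsigned long legacy    = functions * 256;      /* 256 B per function */
      unsigned long extended  = functions * 4096;     /* 4 KiB per function */

      printf("legacy:   %lu MiB\n", legacy >> 20);    /* prints 16 */
      printf("extended: %lu MiB\n", extended >> 20);  /* prints 256 */
      return 0;
  }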
>
> The question about system memory beyond the PCIe bridge is another
> question. It seems to me that 2GB is too much (not compatible with
> IA32/x86, only with the x86_64 architecture). Thus I agree with Aaron:
> it is hard to comply with IA32/x86.
>
> But in the past I saw similar behavior with BIOSes (double reboot). I am
> not sure if this is the real reason (Aaron writes: Intel UEFI systems do a
> reboot after calculating the MMIO space requirements so that the memory
> init code knows how much to allocate for MMIO resource allocation). It is
> hard to imagine for 32-bit UEFI-compliant BIOSes; all 64-bit BIOSes are
> UEFI-compliant.
>
> If Aaron is correct, I've learned something new. ;-)
It's a trade-off of accessible memory (DRAM) vs. simplification. If the
kernel you are booting is 64-bit, a 2GiB I/O hole costs you nothing at
all. If you care about 32-bit kernels, then you care more about the size
of the I/O hole. As I noted previously, I haven't been concerned with
32-bit kernels for over a decade, but that's just what I'm used to and
am concerned about. Obviously, if 32-bit kernels are a concern, you
wouldn't use a 2GiB hole when trying to maximize DRAM access.
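To make the trade-off concrete, here's a rough sketch of the address
space with a 2GiB hole (illustrative numbers, not actual coreboot code):

  /* With a 2 GiB I/O hole, TOLUD (top of low usable DRAM) sits at
   * 0x80000000. A 32-bit non-PAE kernel only sees DRAM below 4 GiB,
   * so it gets at most 2 GiB; a 64-bit kernel gets the remainder
   * remapped above 4 GiB and loses nothing. The low DRAM is also
   * coverable by a single 2 GiB write-back MTRR, which is what keeps
   * the MTRR setup easy to digest. */
  #define IO_HOLE_SIZE  (2ULL << 30)                   /* 2 GiB */
  #define TOLUD         ((1ULL << 32) - IO_HOLE_SIZE)  /* 0x80000000 */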
>
> Thank you,
> Zoran
>
>
> On Sat, Jun 4, 2016 at 6:49 PM, ron minnich <rminnich(a)gmail.com> wrote:
>>
>> Another Kconfig option? How many people will really understand what it
>> means and whether to use it?
>>
>> Has just reserving 2 GiB as a hard and fast rule hurt anyone yet?
>>
>> thanks
>>
>> ron
>>
>> On Fri, Jun 3, 2016 at 11:25 PM Patrick Rudolph <siro(a)das-labor.org>
>> wrote:
>>>
>>> On 2016-06-03 05:41 PM, Aaron Durbin via coreboot wrote:
>>> > On Fri, Jun 3, 2016 at 7:04 AM, Patrick Rudolph <siro(a)das-labor.org>
>>> > wrote:
>>> >> Hello,
>>> >> I want to start a discussion about the PCI MMIO size, which has
>>> >> hit me a couple of times while using coreboot.
>>> >> I'm focused on Intel Sandybridge, but I guess this topic applies to
>>> >> all
>>> >> x86 systems.
>>> >>
>>> >> On most Intel systems the PCI mmio size is hard-coded in coreboot to
>>> >> 1024Mbyte, but the value is hidden deep inside the raminit code.
>>> >> The mmio size for dynamic resources is limited at the top by PCIEXBAR,
>>> >> IOAPIC, ME stolen memory, etc., which take 128Mbyte, and at the other
>>> >> end it's limited by graphics stolen memory and TSEG, which require at
>>> >> least 40Mbyte.
>>> >> In total there's only space for two 256Mbyte PCI BARs, due to
>>> >> alignment.
>>> >> That's enough for systems that only have an Intel GPU, but it fails
>>> >> as soon as PCI devices use more than a single 256Mbyte BAR.
>>> >> The PCI mmio size is set in romstage, but PCI BARs are configured in
>>> >> ramstage.
>>> >>
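For reference, the budget described above works out roughly like this
(illustrative C comment, not actual coreboot code):

  /*  1024 MiB MMIO window
   *  - 128 MiB fixed at the top (PCIEXBAR, IOAPIC, ME stolen, ...)
   *  -  40 MiB at the bottom (graphics stolen, TSEG)
   *  = ~856 MiB left for dynamic resources.
   * A 256 MiB BAR must be 256 MiB-aligned, so only two such BARs fit
   * into that remainder. */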
>>> >> Following questions came to my mind:
>>> >> * How does the MRC handle this ?
>>> >> * Should the user be able to modify PCI mmio size ?
>>> >> * How to pass the required PCI mmio size to romstage ?
>>> >> A good place seems to be the mrc cache, but ramstage doesn't know
>>> >> about its structure.
>>> >> * How is this solved on AMD systems ?
>>> >> * Should the romstage scan PCI devices and count required BAR size ?
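On the last question: romstage could size BARs the standard PCI way,
writing all-ones and reading back the mask. A minimal sketch, assuming
the usual early config accessors (exact names and types vary across
coreboot versions):

  /* Sketch: size a 32-bit memory BAR the standard PCI way. */
  static uint32_t bar_size(pci_devfn_t dev, unsigned int idx)
  {
      unsigned int reg = 0x10 + 4 * idx;      /* PCI_BASE_ADDRESS_0 + idx */
      uint32_t orig = pci_read_config32(dev, reg);
      pci_write_config32(dev, reg, 0xffffffff);
      uint32_t val = pci_read_config32(dev, reg);
      pci_write_config32(dev, reg, orig);     /* restore original value */

      if (val == 0 || (val & 1))              /* unimplemented or I/O BAR */
          return 0;
      return ~(val & ~0xfu) + 1;              /* mask flag bits; size is 2^n */
  }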
>>> >
>>> > In the past (not sure if it's still true), Intel UEFI systems do a
>>> > reboot after calculating the MMIO space requirements so that the
>>> > memory init code knows how much to allocate for MMIO resource
>>> > allocation. I always found that distasteful and I have always used a
>>> > 2GiB I/O hole ever since. I never cared about 32-bit kernels so I
>>> > always found that to be a decent tradeoff. It also makes MTRR and
>>> > address space easy to digest when looking at things.
>>> >
>>>
>>> I like the idea of a reboot. It only has to be done after hardware
>>> changes that affect the PCI mmio size.
>>> With the mrc cache in place it shouldn't be noticeable at all.
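Something along these lines, then - a sketch of the handshake, with all
helper names hypothetical, not existing coreboot APIs:

  /* romstage */
  uint32_t mmio = load_saved_mmio_size();  /* e.g. stored next to the mrc cache */
  if (mmio == 0)
      mmio = DEFAULT_MMIO_SIZE;            /* today's hard-coded 1 GiB */
  raminit(mmio);

  /* ramstage, after resource allocation */
  uint32_t needed = total_pci_mmio_needed();
  if (needed > mmio) {
      save_mmio_size(needed);
      hard_reset();                        /* one extra reboot; cached afterwards */
  }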
>>>
>>> On the other hand, hard-coding the limit is much simpler.
>>> What do you think about a Kconfig option
>>> "Optimize PCI mmio size for x86_64 OS" ?
>>>
>>> It would increase the size to 2GiB. It would of course still work on
>>> i386, but you might see less usable DRAM than before.
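Something like this, perhaps (a hypothetical Kconfig sketch; name and
default up for discussion):

  config PCI_MMIO_SIZE_2G
      bool "Optimize PCI mmio size for x86_64 OS"
      default n
      help
        Reserve a 2GiB I/O hole below 4GiB for PCI MMIO. 32-bit
        kernels will still boot, but will see less usable DRAM.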
>>>
>>> >>
>>> >> Regards,
>>> >> Patrick
Thanks Martin,
I will definitely check it out.
-----Original Message-----
From: Martin Roth [mailto:gaumless@gmail.com]
Sent: Monday, 6 June 2016 12:34 AM
To: Naveed Ghori
Cc: coreboot
Subject: Re: [coreboot] Add bootorder file for Seabios
Hi Naveed,
Currently, the bootorder file would need to be manually added after the build.
https://www.coreboot.org/SeaBIOS#Co…
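For reference, the manual step is a single cbfstool invocation along
these lines ("bootorder" is the file name SeaBIOS looks for in CBFS):

  cbfstool build/coreboot.rom add -f bootorder -n bootorder -t raw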
I just pushed a patch to allow the bootorder to be added during the build process.
https://review.coreboot.org/15076
Martin
On Tue, May 31, 2016 at 2:28 AM, Naveed Ghori <naveed.ghori(a)dti.com.au> wrote:
> What is the easiest way to do this?
>
> Is there already a way to do it cleanly and have the bootorder file in
> the mainboard area, or will I have to add a patch to add it to CBFS?
>
>
>
> Regards,
>
> Naveed Ghori | Lead Firmware & Driver Engineer
>
> DTI Group Ltd | Transit Security & Surveillance
>
> 31 Affleck Road, Perth Airport, Western Australia 6105, AU
>
> P +61 8 9373 2905,151 | F +61 8 9479 1190 | naveed.ghori(a)dti.com.au
>
>
>
> Visit our website: https://www.dti.com.au
On Fri, Jun 3, 2016 at 10:35 AM, ron minnich <rminnich(a)gmail.com> wrote:
> Given its name (SPD) it sure seems like SPD.
>
> As to your not being able to address it: a lot of times this can happen
> when there is an SMBUS mux on the board and it isn't set up at POR in a
> way that lets you see that device.
>
It's likely at address 0x50. There's always the 7-bit vs 8-bit designation
that confuses people.
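To illustrate the address math (plain C; the mux address below is
hypothetical and board-specific):

  /* The 0xA0 in the datasheet is the 8-bit form, which includes the
   * R/W bit; software uses the 7-bit form with that bit dropped. */
  unsigned int spd_addr = 0xA0 >> 1;   /* == 0x50 */

  /* If the EEPROM sits behind an SMBus mux (e.g. a PCA954x-type part,
   * commonly at 0x70), a channel-select write to the mux is needed
   * before anything answers at 0x50. */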
>
> ron
>
> On Thu, Jun 2, 2016 at 6:53 PM 김유석 <poplinux0(a)gmail.com> wrote:
>
>> Dear Sir,
>>
>> My EVB is an ADI SG-2440.
>>
>> I found an eeprom on my EVB.
>>
>> This eeprom (CAT24C0EYI-GT3) stores the config of the SDRAM, I guess.
>>
>> So I searched the coreboot source tree for ADDR 0xA0 and for the
>> procedure that configures the sdram.
>>
>> But I can't find anything that touches ADDR 0xA0. I did find a
>> structure for the sdram config, but this structure is not used.
>>
>> Could you explain to me the role of this eeprom?
>>
>> Thank you.