Subrata Banik has posted comments on this change. ( https://review.coreboot.org/c/coreboot/+/40722 )
Change subject: [WIP] Add Multiple Segment support
......................................................................
Patch Set 3:
Just my $0.02, but given the complexity of the AML code we have, wouldn't it make sense to generate it with acpigen?
We should definitely look into that. But I wouldn't start with a dynamic version before we get the static one right.
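Once we get there, emitting one host bridge dynamically could look roughly like this. Untested sketch: acpigen_write_pci_host() is just an illustrative name, the include path is today's, and the _CRS generation is left out.

#include <acpi/acpigen.h>

/* Sketch: emit Device (name) with _HID/_CID/_SEG/_UID for one segment. */
static void acpigen_write_pci_host(const char *name, unsigned int seg)
{
        acpigen_write_device(name);
        acpigen_write_name("_HID");
        acpigen_emit_eisaid("PNP0A08");         /* PCI Express bus */
        acpigen_write_name("_CID");
        acpigen_emit_eisaid("PNP0A03");         /* PCI bus */
        acpigen_write_name_integer("_SEG", seg);
        acpigen_write_name_integer("_UID", seg);
        /* The _CRS for this segment's resources would be generated here. */
        acpigen_pop_len();                      /* Device */
}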
I'm not sure if we have to report these resources again. Maybe we don't need the `extrahostbridge.asl`?
I was referring to Intel RC code for customers, where I could find the usage of "extrahostbridge.asl" for an additional segment.
And do the authors of that RC code have more experience with multiple PCI segment groups than we have? I don't think anybody gets such things right on the first try.
I have looked further into this and discovered the ACPI "Module" device. It groups multiple devices together and can specify shared resources for them. Coincidentally, after learning about this, it turned out that the ACPI spec uses it in the example code for _SEG ;)
Please have a look at ACPI spec 6.3, `6.5.6.1 Example`. I think this is how we should do it. Basically:
Device (ND0)
{
    Name (_HID, "ACPI0004")  /* Module Device */
    Method (_CRS, ...)
    Device (PCI0) { ... }
    Device (PCI1) { ... }
}
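And if we end up generating this with acpigen, the grouping could be emitted along the same lines. Again an untested sketch; it reuses the hypothetical acpigen_write_pci_host() from my earlier sketch and omits the shared _CRS.

#include <acpi/acpigen.h>

/* Sketch: group both host bridges under an ACPI0004 module device,
   following the _SEG example in the spec. */
static void acpigen_write_pci_domains(void)
{
        acpigen_write_scope("\\_SB");
        acpigen_write_device("ND0");
        acpigen_write_name_string("_HID", "ACPI0004");  /* Module device */
        /* A _CRS with the resources shared by both segments goes here. */
        acpigen_write_pci_host("PCI0", 0);      /* _SEG = 0 */
        acpigen_write_pci_host("PCI1", 1);      /* _SEG = 1 */
        acpigen_pop_len();                      /* Device (ND0) */
        acpigen_pop_len();                      /* Scope (\_SB) */
}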
Thanks for the pointer. Yes, those references look good. But I was adhering to the Intel RC code, which has been tested with this feature:
https://github.com/otcshare/CCG-TGL-Generic-SiC/tree/master/ClientOneSilicon...
In the current coreboot PCI tree, it appears as below:
Scope (_SB)
{
    Device (PCI0)
    {
        Name (_HID, EisaId ("PNP0A08") /* PCI Express Bus */)  // _HID: Hardware ID
        Name (_CID, EisaId ("PNP0A03") /* PCI Bus */)  // _CID: Compatible ID
        Name (_SEG, Zero)  // _SEG: PCI Segment
        Name (_UID, Zero)  // _UID: Unique ID
        Device (MCHC)
        {
            Name (_ADR, Zero)  // _ADR: Address
            OperationRegion (MCHP, PCI_Config, Zero, 0x0100)
            Field (MCHP, DWordAcc, NoLock, Preserve)
            {
                Offset (0x40),
                EPEN,   1,
                    ,   11,
                EPBR,   20,
                Offset (0x48),
                MHEN,   1,
                    ,   14,
                MHBR,   17,
                Offset (0x60),
                PXEN,   1,
                PXSZ,   2,
                    ,   23,
                PXBR,   6,
                Offset (0x68),
                DIEN,   1,
                    ,   11,
                DIBR,   20,
                Offset (0xA0),
                TOM,    64,
                TUUD,   64,
                Offset (0xBC),
                TLUD,   32
            }
        }
        Method (_CRS, 0, Serialized)  // _CRS: Current Resource Settings
        {
            Name (MCRS, ResourceTemplate ()
            {
                .....
            }
            ....
        }
    }
    Device (PCI1)
    {
        Name (_HID, EisaId ("PNP0A08") /* PCI Express Bus */)  // _HID: Hardware ID
        Name (_CID, EisaId ("PNP0A03") /* PCI Bus */)  // _CID: Compatible ID
        Name (_SEG, One)  // _SEG: PCI Segment
        Name (_UID, One)  // _UID: Unique ID
        Method (_CRS, 0, Serialized)  // _CRS: Current Resource Settings
        {
            Name (MCRS, ResourceTemplate ()
            {
                .....
            }
            ....
        }
    }
}
Now we can optimize the common resources between PCI0 and PCI1 later and keep aside common space for sharing.
I think we can take up the optimization pieces later?