Hi all,

Thanks a lot for the input.

I looked a bit further into this, and it looks like only the resource allocation code assumes a single downstream bus
under link_list. The rest of coreboot seems to properly account for sibling buses, so maybe making the allocator loop
over ->next in buses is not so bad after all. https://review.coreboot.org/c/coreboot/+/62967 implements this.
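
For illustration, the change boils down to a loop like the following (just a sketch against coreboot's struct device/struct bus; the function name is made up and this is not the actual allocator code):

  #include <device/device.h>

  /* Sketch: walk every sibling bus under a device instead of assuming
   * a single downstream bus at dev->link_list. */
  static void walk_all_links(struct device *dev)
  {
          for (struct bus *bus = dev->link_list; bus; bus = bus->next) {
                  for (struct device *child = bus->children; child;
                       child = child->sibling) {
                          /* resource allocation for 'child' would happen here */
                  }
          }
  }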

OTOH, I'm under the impression that the sconfig tool currently does not easily allow statically defining multi-bus domains.

Kind regards

Arthur


On Fri, Mar 18, 2022 at 3:20 PM Nico Huber <nico.h@gmx.de> wrote:
Hi Lance,

On 18.03.22 05:06, Lance Zhao wrote:
> Stack idea is from
> https://www.intel.com/content/www/us/en/developer/articles/technical/utilizing-the-intel-xeon-processor-scalable-family-iio-performance-monitoring-events.html

thank you very much! The diagrams are enlightening. I always assumed
Intel calls these "stacks" because there are multiple components involved
that matter for software/firmware development. Turns out these stacks
are rather black boxes to us and we don't need to know what components
compose a stack, is that right?

Looking at these diagrams, I'd say the IIO stacks are PCI host bridges
from our point of view.

> In Linux, sometimes "domain" is the same as "segment". I am not sure
> current coreboot on xeon_sp covers the case of multiple segments yet.

These terms are highly ambiguous. We always need to be careful not to
confuse them, e.g. "domain" in one project can mean something very
different from our "domain device".

Not sure if you are referring to "PCI bus segments". These are very
different from our "domain" term. I assume coreboot supports multiple
PCI bus segments. At least it looks like one just needs to initialize
`.secondary` and `.subordinate` of the downstream link of a PCI host
bridge accordingly.
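
Roughly like this, I'd guess (just a sketch; the scan hook name and the
bus numbers are made up for illustration):

  #include <device/device.h>
  #include <device/pci.h>

  /* Sketch: put a host bridge's downstream link on its own bus range
   * before scanning. The numbers 0x80/0xff are arbitrary examples. */
  static void host_bridge_scan(struct device *dev)
  {
          struct bus *link = dev->link_list;

          link->secondary = 0x80;   /* first bus number behind the bridge */
          link->subordinate = 0xff; /* highest bus number reachable */
          pci_scan_bus(link, PCI_DEVFN(0, 0), 0xff);
  }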

There is also the term "PCI segment group". This refers to PCI bus
segments that share a space of 256 buses, e.g. one PCI bus segment
could occupy buses 0..15 and another 16..31 in the same group. Multiple
PCI segment groups are currently not explicitly supported. Might work,
though, if the platform has a single, consecutive ECAM/MMCONF region to
access more than the first group.
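To illustrate the arithmetic: in ECAM each bus occupies 1 MiB of config
space, so a single consecutive region simply lets a flat bus number run
past 255 into the next group (sketch only; the base address would be
platform-specific):

  #include <stdint.h>

  /* ECAM: 4 KiB per function, 8 functions per device, 32 devices per
   * bus -> 1 MiB per bus. With one consecutive region, "bus" 256 would
   * be bus 0 of PCI segment group 1, and so on. */
  static inline uintptr_t ecam_addr(uintptr_t base, unsigned int bus,
                                    unsigned int dev, unsigned int func,
                                    unsigned int offset)
  {
          return base + ((uintptr_t)bus << 20) + (dev << 15)
                 + (func << 12) + offset;
  }
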

Nico