Nico Huber has posted comments on this change. ( https://review.coreboot.org/c/coreboot/+/44512 )
Change subject: soc/intel/common: Call pci_dev_request_bus_master() from .final ops
......................................................................
Patch Set 6:
(1 comment)
https://review.coreboot.org/c/coreboot/+/44512/2//COMMIT_MSG
Commit Message:

https://review.coreboot.org/c/coreboot/+/44512/2//COMMIT_MSG@15
PS2, Line 15: access
> My understanding is that we currently enable PCI_COMMAND_MASTER for most internal controllers within coreboot - I2C, GSPI, Fast SPI, UART, integrated graphics, etc. These are primarily the controllers that get utilized within coreboot/payload.
Yes, but mostly by accident. To my knowledge, there are historically three cases when we enable bus mastering:

* When documentation tells us to for the initialization sequence. This is for instance the case for Intel's IGD device (I guess the sequence happens in FSP nowadays anyway), albeit for unknown reasons (why would the configuration need it if we don't trigger DMA transfers?).
* When we are going to hide a device and don't have a chance to enable it later (e.g. "ACPI mode" of some devices, if they really need bus mastering, idk).
* When people confuse enabling of MMIO resources with bus mastering (this seems to be the majority of quirks in coreboot today).
* When we want compatibility with a broken payload driver. This is rather new.

While some of us try to get rid of quirks, others currently add new ones *shrug*
> However, in a lot of cases, we rely on FSP to do this configuration for us. Example: Early I2C configuration does the PCI_COMMAND_MASTER setting, but late I2C configuration doesn't do this. Also, I don't think FSP is consistent about performing this configuration for all controllers.
Bus mastering and configuration are traditionally two separate things. With rare exceptions, it's up to the driver that wants a device to perform DMA to enable bus mastering. It doesn't belong in coreboot if it can be done later, IMHO.
> The most recent issue that we ran into was SATA configuration being performed by FSP but not really setting PCI_COMMAND_MASTER, which resulted in issues in the payload.
I read about it. What payload was that actually?
> I have thought of coreboot as performing the initialization of controllers so that the payload can use them without having to redo that initialization. Thus, my comment to Subrata that we should ensure that PCI_COMMAND_MASTER is set for all controllers consistently within coreboot.
If we want an option to enable it globally, ok. But why are we adding quirks for individual devices?
> I understand concerns about doing this for hotpluggable buses (e.g. USB4 controllers). But, other than that, I believe it should be okay to do the setting of PCI_COMMAND_MASTER within coreboot.
There may be more than a single chip on the mainboard. And not everything soldered to a mainboard has the same level of trust.
> I might be wrong with my understanding here. So if this understanding is not correct, I think we should at least ensure that the assumptions are properly documented so that the payloads can take appropriate actions to handle this.
Yes, documentation please :) If I look at recently added quirks, they look like accidents, e.g. 41934bfe says no word about payload compatibility. For somebody who knows that bus mastering doesn't belong in coreboot, it looks like something to revert.
Our current goal at secunet is to make the current spurious bus-master settings optional (if somebody really needs them for compatibility with broken payloads, that's ok for me), and then probably take care that no new quirks are added. Adding more cases where it gets enabled seems completely wrong, IMHO, because it just hides errors somewhere else. For the same reason, I think all optional cases should be disabled by default: enabling them by default just hides errors, and then we need them for compatibility.
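A sketch of what such an opt-in could look like as a Kconfig symbol. The symbol name and help text are purely illustrative, not an existing coreboot option:

```
# Hypothetical symbol, for illustration only.
config PCI_QUIRK_BUS_MASTER
	bool "Keep legacy bus-master quirks (for broken payload drivers)"
	default n
	help
	  Some payload drivers expect coreboot to have enabled PCI bus
	  mastering on their behalf. Select this only if your payload
	  needs it; leaving it disabled keeps DMA off until a driver
	  enables it itself.
```

Defaulting to `n` matches the argument above: compatibility quirks stay available, but they no longer hide payload bugs by default.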
And then there are security concerns. I can write something about that too. Maybe it suffices to say, the current state in coreboot makes us look like amateurs ;)
> If we don't need to enable Bus Master at all until we're ready to jump to the payload, we can postpone setting it as much as possible to mitigate DMA attacks.
That is roughly my thought as well. For a spec this should suffice:
By default, all PCI devices are left with bus mastering disabled unless

* a device already performs DMA (e.g. graphics framebuffer), or
* a device may need DMA later but it can't be enabled later (e.g. ACPI mode).
What is missing is the handling of bridge devices which currently get bus mastering enabled in generic code. That aside, we have already tested minimal bus master settings. It works well with FILO and, IIRC, SeaBIOS too.