On Sun, Feb 06, 2011 at 03:41:45PM +0200, Eduard - Gabriel Munteanu wrote:
> On Sun, Feb 06, 2011 at 01:47:57PM +0200, Michael S. Tsirkin wrote:
> > On Fri, Feb 04, 2011 at 01:24:14AM +0200, Eduard - Gabriel Munteanu wrote:
> > > Hi,
[snip]
> > > +/*
> > > + * IVRS (I/O Virtualization Reporting Structure) table.
> > > + * Describes the AMD IOMMU, as per:
> > > + * "AMD I/O Virtualization Technology (IOMMU) Specification", rev 1.26
> > > + */
> > > +struct ivrs_ivhd
> > > +{
> > > +    u8 type;
> > > +    u8 flags;
> > > +    u16 length;
> > > +    u16 devid;
> > > +    u16 capab_off;
> > > +    u32 iommu_base_low;
> > > +    u32 iommu_base_high;
> > > +    u16 pci_seg_group;
> > > +    u16 iommu_info;
> > > +    u32 reserved;
> > > +    u8 entry[0];
> > > +} PACKED;
> > > +
> > > +struct ivrs_table
> > > +{
> > > +    ACPI_TABLE_HEADER_DEF     /* ACPI common table header. */
> > > +    u32 iv_info;
> > > +    u32 reserved[2];
> > > +    struct ivrs_ivhd ivhd;
> > > +} PACKED;
> > prefix with amd_iommu_ or amd_ then?
> This should be standard nomenclature already, even if IVRS is AMD
> IOMMU-specific.
Yes, but the specific structure is AMD-specific, isn't it?
#include "acpi-dsdt.hex"
static void @@ -579,6 +609,59 @@ build_srat(void) return srat; }
+#define IVRS_SIGNATURE 0x53525649 // IVRS +#define IVRS_MAX_DEVS 32 +static void * +build_ivrs(void) +{
- int iommu_bdf, iommu_cap;
- int bdf, max, i;
- struct ivrs_table *ivrs;
- struct ivrs_ivhd *ivhd;
- /* Note this currently works for a single IOMMU! */
> > Meant to be a FIXME? How hard is it to fix? Just stick this in a loop?
> I suspect a real BIOS would have these values hardcoded anyway, according to
> the topology of the PCI bus and which IOMMUs sit where.
Which values exactly?
> You already mentioned the possibility of multiple IOMMU capabilities in the
> same function/bus, in which case there's probably no easy way to guess it
> from SeaBIOS.
It's easy enough to enumerate capabilities and pci devices, isn't it?
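Roughly like the following, I'd guess -- a completely untested sketch, reusing
the ivrs structs and pci_find_capability() from this patch plus the usual
SeaBIOS helpers (foreachpci(), malloc_high(), build_header(),
pci_config_readl()), and leaving the flags/iommu_info fields and the 4-byte
device entries out:

/* Untested sketch: emit one IVHD per IOMMU found on the bus instead of
 * assuming a single one. */
static void *
build_ivrs(void)
{
    int bdf, max, count = 0;

    /* First pass: count IOMMUs so we know how much to allocate. */
    foreachpci(bdf, max) {
        if (pci_find_capability(bdf, PCI_CAP_ID_SEC) >= 0)
            count++;
    }
    if (!count)
        return NULL;

    /* Upper bound: table header plus one IVHD (plus entries) per IOMMU. */
    int size = sizeof(struct ivrs_table)
               + count * (sizeof(struct ivrs_ivhd) + 4 * IVRS_MAX_DEVS);
    struct ivrs_table *ivrs = malloc_high(size);
    if (!ivrs)
        return NULL;
    memset(ivrs, 0, size);

    /* Second pass: append one IVHD per IOMMU capability found. */
    int len = offsetof(struct ivrs_table, ivhd);
    foreachpci(bdf, max) {
        int cap = pci_find_capability(bdf, PCI_CAP_ID_SEC);
        if (cap < 0)
            continue;
        struct ivrs_ivhd *ivhd = (void *)ivrs + len;
        ivhd->type = 0x10;                  /* IVHD type 10h block */
        ivhd->devid = bdf;                  /* BDF of the IOMMU itself */
        ivhd->capab_off = cap;
        /* Base address was programmed by amd_iommu_init(); strip the
         * enable/reserved bits from the low register. */
        ivhd->iommu_base_low = pci_config_readl(bdf, cap + 0x04) & ~0x3fff;
        ivhd->iommu_base_high = pci_config_readl(bdf, cap + 0x08);
        /* ... device entries would be appended here, growing length ... */
        ivhd->length = sizeof(*ivhd);
        len += ivhd->length;
    }

    build_header((void *)ivrs, IVRS_SIGNATURE, len, 1);
    return ivrs;
}

The per-IOMMU device entries still need generating, but the loop itself is
trivial.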
[snip]
> > > +static void amd_iommu_init(u16 bdf, void *arg)
> > > +{
> > > +    int cap;
> > > +    u32 base_addr;
> > > +
> > > +    cap = pci_find_capability(bdf, PCI_CAP_ID_SEC);
> > > +    if (cap < 0) {
> > > +        return;
> > > +    }
> > There actually can be multiple instances of this capability according to
> > spec. Do we care?
> Hm, perhaps we should at least assign a base address there, that's easy. As
> for QEMU/KVM usage we probably don't need it.
I expect assigning multiple domains will be useful. I'm guessing multiple devices is what systems have in this case? If so, I'm not really sure why there would be a need for multiple IOMMU capabilities per device.
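That said, if we do want to program all of them it's not hard -- untested
sketch, open-coding the capability list walk with the standard offsets from
pci_regs.h, since a find-capability helper only returns the first match:

/* Untested sketch: assign a base address to every Secure Device capability
 * block a function exposes, not just the first one. */
static void amd_iommu_init(u16 bdf, void *arg)
{
    if (!(pci_config_readw(bdf, PCI_STATUS) & PCI_STATUS_CAP_LIST))
        return;

    int cap = pci_config_readb(bdf, PCI_CAPABILITY_LIST);
    int ttl = 48;                       /* guard against malformed lists */
    while (cap && ttl--) {
        if (pci_config_readb(bdf, cap + PCI_CAP_LIST_ID) == PCI_CAP_ID_SEC) {
            if (amd_iommu_addr >= BUILD_AMD_IOMMU_END)
                return;
            u32 base_addr = amd_iommu_addr;
            amd_iommu_addr += 0x4000;
            /* Same programming as below: clear the range and high base
             * registers, then set the low base address + enable bit. */
            pci_config_writel(bdf, cap + 0x0C, 0);
            pci_config_writel(bdf, cap + 0x08, 0);
            pci_config_writel(bdf, cap + 0x04, base_addr | 1);
        }
        cap = pci_config_readb(bdf, cap + PCI_CAP_LIST_NEXT);
    }
}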
> > > +    if (amd_iommu_addr >= BUILD_AMD_IOMMU_END) {
> > > +        return;
> > > +    }
> > > +
> > > +    base_addr = amd_iommu_addr;
> > > +    amd_iommu_addr += 0x4000;
> > > +
> > > +    pci_config_writel(bdf, cap + 0x0C, 0);
> > > +    pci_config_writel(bdf, cap + 0x08, 0);
> > > +    pci_config_writel(bdf, cap + 0x04, base_addr | 1);
> > > +}
> > >  static const struct pci_device_id pci_class_tbl[] = {
> > >      /* STORAGE IDE */
> > >      PCI_DEVICE_CLASS(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_1,
> > > @@ -279,6 +302,10 @@ static const struct pci_device_id pci_class_tbl[] = {
> > >      PCI_DEVICE_CLASS(PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_BRIDGE_PCI,
> > >                       pci_bios_init_device_bridge),
> > > +    /* AMD IOMMU */
> > Makes sense to limit to AMD vendor id?
> I don't think so, I assume any PCI_CLASS_SYSTEM_IOMMU device would implement
> the same specification, considering these ids have been assigned by PCI-SIG.
This hasn't been the case in the past, e.g. with PCI_CLASS_NETWORK_ETHERNET, so I see no reason to assume it here.
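If we do limit it, it's just a matter of the match in the class table, e.g.
(assuming PCI_VENDOR_ID_AMD is already in pci_ids.h, and that the emulated
IOMMU really advertises AMD's vendor id):

    /* AMD IOMMU: require AMD's vendor id in addition to the class code. */
    PCI_DEVICE_CLASS(PCI_VENDOR_ID_AMD, PCI_ANY_ID, PCI_CLASS_SYSTEM_IOMMU,
                     amd_iommu_init),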
> > > +    PCI_DEVICE_CLASS(PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_SYSTEM_IOMMU,
> > > +                     amd_iommu_init),
> > > +
> > >      /* default */
> > >      PCI_DEVICE(PCI_ANY_ID, PCI_ANY_ID, pci_bios_allocate_regions),
> > > @@ -408,6 +435,8 @@ pci_setup(void)
> > >      pci_region_init(&pci_bios_prefmem_region,
> > >                      BUILD_PCIPREFMEM_START, BUILD_PCIPREFMEM_END - 1);
> > >
> > > +    amd_iommu_addr = BUILD_AMD_IOMMU_START;
> > > +
> > >      pci_bios_init_bus();
> > >
> > >      int bdf, max;
> > >
> > > --
> > > 1.7.3.4
> Thanks for your review, I read your other comments and will resubmit once I
> fix those issues.
>
> Eduard