On Wed, Aug 02, 2017 at 12:33:12AM +0300, Alexander Bezzubikov wrote:
2017-08-01 23:31 GMT+03:00 Laszlo Ersek lersek@redhat.com:
(Whenever my comments conflict with Michael's or Marcel's, I defer to them.)
On 07/29/17 01:37, Aleksandr Bezzubikov wrote:
Signed-off-by: Aleksandr Bezzubikov zuban32s@gmail.com
 docs/pcie.txt            |  46 ++++++++++--------
 docs/pcie_pci_bridge.txt | 121 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 147 insertions(+), 20 deletions(-)
 create mode 100644 docs/pcie_pci_bridge.txt
diff --git a/docs/pcie.txt b/docs/pcie.txt
index 5bada24..338b50e 100644
--- a/docs/pcie.txt
+++ b/docs/pcie.txt
@@ -46,7 +46,7 @@ Place only the following kinds of devices directly on the Root Complex:
   (2) PCI Express Root Ports (ioh3420), for starting exclusively PCI Express
       hierarchies.
- (3) DMI-PCI Bridges (i82801b11-bridge), for starting legacy PCI
+ (3) PCIE-PCI Bridge (pcie-pci-bridge), for starting legacy PCI
      hierarchies.

  (4) Extra Root Complexes (pxb-pcie), if multiple PCI Express Root Buses
When reviewing previous patches modifying / adding this file, I requested that we spell out "PCI Express" every single time. I'd like to see the same in this patch, if possible.
OK, I didn't know it.
@@ -55,18 +55,18 @@ Place only the following kinds of devices directly on the Root Complex:
    pcie.0 bus
  ----------------------------------------------------------------------------
       |                |                    |                  |
- -----------  ------------------  ------------------   --------------
- | PCI Dev |  | PCIe Root Port |  | DMI-PCI Bridge |   | pxb-pcie |
- -----------  ------------------  ------------------   --------------
+ -----------  ------------------  -------------------  --------------
+ | PCI Dev |  | PCIe Root Port |  | PCIE-PCI Bridge |  | pxb-pcie |
+ -----------  ------------------  -------------------  --------------
2.1.1 To plug a device into pcie.0 as a Root Complex Integrated Endpoint use:
          -device <dev>[,bus=pcie.0]
2.1.2 To expose a new PCI Express Root Bus use:
          -device pxb-pcie,id=pcie.1,bus_nr=x[,numa_node=y][,addr=z]
-     Only PCI Express Root Ports and DMI-PCI bridges can be connected
+     Only PCI Express Root Ports, PCIE-PCI bridges and DMI-PCI bridges can be connected
It would be nice if we could keep the flowing text wrapped to 80 chars.
Also, here you add the "PCI Express-PCI" bridge to the list of allowed controllers (and you keep DMI-PCI as permitted), but above DMI was replaced. I think these should be made consistent -- we should make up our minds if we continue to recommend the DMI-PCI bridge or not. If not, then we should eradicate all traces of it. If we want to keep it at least for compatibility, then it should remain as fully documented as it is now.
Now I'm beginning to think that we shouldn't keep the DMI-PCI bridge even for compatibility, and may want to use the new PCIE-PCI bridge everywhere (except, of course, in cases where users are sure they need exactly the DMI-PCI bridge for some reason).
Can dmi-pci support shpc? why doesn't it? For compatibility?
      to the pcie.1 bus:
          -device ioh3420,id=root_port1[,bus=pcie.1][,chassis=x][,slot=y][,addr=z] \
-         -device i82801b11-bridge,id=dmi_pci_bridge1,bus=pcie.1
+         -device pcie-pci-bridge,id=pcie_pci_bridge1,bus=pcie.1
2.2 PCI Express only hierarchy

@@ -130,21 +130,25 @@ Notes:
 Legacy PCI devices can be plugged into pcie.0 as Integrated Endpoints,
 but, as mentioned in section 5, doing so means the legacy PCI device in
 question will be incapable of hot-unplugging.
-Besides that use DMI-PCI Bridges (i82801b11-bridge) in combination
+Besides that use PCIE-PCI Bridges (pcie-pci-bridge) in combination
 with PCI-PCI Bridges (pci-bridge) to start PCI hierarchies.
+Instead of the PCIE-PCI Bridge DMI-PCI one can be used,
+but it doens't support hot-plug, is not crossplatform and since that
s/doens't/doesn't/
s/since that/therefore it/
+is obsolete and deprecated. Use the PCIE-PCI Bridge if you're not
+absolutely sure you need the DMI-PCI Bridge.
-Prefer flat hierarchies. For most scenarios a single DMI-PCI Bridge
+Prefer flat hierarchies. For most scenarios a single PCIE-PCI Bridge
 (having 32 slots) and several PCI-PCI Bridges attached to it
 (each supporting also 32 slots) will support hundreds of legacy devices.
-The recommendation is to populate one PCI-PCI Bridge under the DMI-PCI Bridge
+The recommendation is to populate one PCI-PCI Bridge under the PCIE-PCI Bridge
 until is full and then plug a new PCI-PCI Bridge...
   pcie.0 bus
  ----------------------------------------------
       |                            |
  -----------                ------------------
- | PCI Dev |                | DMI-PCI BRIDGE |
+ | PCI Dev |                | PCIE-PCI BRIDGE |
  -----------                ------------------
                                 |           |
                 ------------------   ------------------
                 | PCI-PCI Bridge |   | PCI-PCI Bridge |   ...
@@ -157,11 +161,11 @@ until is full and then plug a new PCI-PCI Bridge...

2.3.1 To plug a PCI device into pcie.0 as an Integrated Endpoint use:
          -device <dev>[,bus=pcie.0]
2.3.2 Plugging a PCI device into a PCI-PCI Bridge:
-         -device i82801b11-bridge,id=dmi_pci_bridge1[,bus=pcie.0] \
-         -device pci-bridge,id=pci_bridge1,bus=dmi_pci_bridge1[,chassis_nr=x][,addr=y] \
+         -device pcie-pci-bridge,id=pcie_pci_bridge1[,bus=pcie.0] \
+         -device pci-bridge,id=pci_bridge1,bus=pcie_pci_bridge1[,chassis_nr=x][,addr=y] \
          -device <dev>,bus=pci_bridge1[,addr=x]
      Note that 'addr' cannot be 0 unless shpc=off parameter is passed to
-     the PCI Bridge.
+     the PCI Bridge, and can never be 0 when plugging into the PCIE-PCI Bridge.
IO space issues
===================

@@ -219,25 +223,27 @@
 do not support hot-plug, so any devices plugged into Root Complexes cannot be
 hot-plugged/hot-unplugged:
     (1) PCI Express Integrated Endpoints
     (2) PCI Express Root Ports
-    (3) DMI-PCI Bridges
+    (3) PCIE-PCI Bridges
     (4) pxb-pcie
Be aware that PCI Express Downstream Ports can't be hot-plugged into an existing PCI Express Upstream Port.
-PCI devices can be hot-plugged into PCI-PCI Bridges. The PCI hot-plug is ACPI
-based and can work side by side with the PCI Express native hot-plug.
+PCI devices can be hot-plugged into PCIE-PCI and PCI-PCI Bridges.
+The PCI hot-plug into PCI-PCI bridge is ACPI based, whereas hot-plug into
+PCIE-PCI bridges is SHPC-base. They both can work side by side with the
+PCI Express native hot-plug.
s/SHPC-base/SHPC-based/
And, I don't understand the difference between "ACPI based" and "SHPC based". I thought the PCI Express - PCI bridge reused the same hotplug mechanism of the PCI-PCI bridge. That is, "ACPI based" and "SHPC based" were the same thing, and they were both used by the PCI Express-PCI bridge, and the PCI-PCI bridge.
I'm basing this on the fact that the "shpc=off" property was always mentioned for PCI-PCI bridges in this document (in a different context, see above), which implies that the PCI-PCI bridge has a SHPC too. And, we've always called that "ACPI based" hotplug in this section.
Now we don't have ACPI hotplug support for Q35 machines, whose variants are the only valid machines to use PCIE-PCI bridges with.
 PCI Express devices can be natively hot-plugged/hot-unplugged into/from
-PCI Express Root Ports (and PCI Express Downstream Ports).
+PCI Express Root Ports (and PCI Express Downstream Ports) and PCIExpress-to-PCI Bridges.
I don't think this is right; PCI Express endpoints should be plugged into PCI Express downstream and root ports only. (... See the last sentence of "2. Device placement strategy".)
Agreed.
5.1 Planning for hot-plug:
    (1) PCI hierarchy
        Leave enough PCI-PCI Bridge slots empty or add one
-       or more empty PCI-PCI Bridges to the DMI-PCI Bridge.
+       or more empty PCI-PCI Bridges to the PCIE-PCI Bridge.
        For each such PCI-PCI Bridge the Guest Firmware is expected to reserve
        4K IO space and 2M MMIO range to be used for all devices behind it.
+       Appropriate PCI capability is designed, see pcie_pci_bridge.txt.
        Because of the hard IO limit of around 10 PCI Bridges (~ 40K space)
        per system don't use more than 9 PCI-PCI Bridges, leaving 4K for the
diff --git a/docs/pcie_pci_bridge.txt b/docs/pcie_pci_bridge.txt
new file mode 100644
index 0000000..ad392ad
--- /dev/null
+++ b/docs/pcie_pci_bridge.txt
@@ -0,0 +1,121 @@
+Generic PCIExpress-to-PCI Bridge
+================================
+
+Description
+===========
+PCIE-to-PCI bridge is a new method for legacy PCI
+hierarchies creation on Q35 machines.
+Previously Intel DMI-to-PCI bridge was used for this purpose.
+But due to its strict limitations - no support of hot-plug,
+no cross-platform and cross-architecture support - a new generic
+PCIE-to-PCI bridge should now be used for any legacy PCI device usage
+with PCI Express machine.
+
+This generic PCIE-PCI bridge is a cross-platform device,
+can be hot-plugged into appropriate root port (requires additional actions,
+see 'PCIE-PCI bridge hot-plug' section),
+and supports devices hot-plug into the bridge itself
+(with some limitations, see below).
+
+Hot-plug of legacy PCI devices into the bridge
+is provided by bridge's built-in Standard hot-plug Controller.
+Though it still has some limitations, see 'Limitations' below.
+PCIE-PCI bridge hot-plug
+=======================
+As opposed to Windows, Linux guest requires extra efforts to
+enable PCIE-PCI bridge hot-plug.
+Motivation - now on init any PCI Express root port which doesn't have
+any device plugged in, has no free buses reserved to provide any of them
+to a hot-plugged devices in future.
+
+To solve this problem we reserve additional buses on a firmware level.
+Currently only SeaBIOS is supported.
+The way of bus number to reserve delivery is special
+Red Hat vendor-specific PCI capability, added to the root port
+that is planned to have PCIE-PCI bridge hot-plugged in.
+Capability layout (defined in include/hw/pci/pci_bridge.h):
+
+    uint8_t  id;           Standard PCI capability header field
+    uint8_t  next;         Standard PCI capability header field
+    uint8_t  len;          Standard PCI vendor-specific capability header field
+    uint8_t  type;         Red Hat vendor-specific capability type
+                           List of currently existing types:
+                               QEMU = 1
+    uint16_t non_pref_16;  Non-prefetchable memory limit
+    uint8_t  bus_res;      Minimum number of buses to reserve
+    uint8_t  io_8;         IO space limit in case of 8-bit value
+    uint32_t io_32;        IO space limit in case of 32-bit value
+                           These two values are mutually exclusive,
+                           i.e. they can't both be >0.
+    uint32_t pref_32;      Prefetchable memory limit in case of 32-bit value
+    uint64_t pref_64;      Prefetchable memory limit in case of 64-bit value
+                           These two values are mutually exclusive (just as the
+                           IO limit), i.e. they can't both be >0.
+Memory limits are unused now, in future they are planned
+to be used for providing similar hints to the firmware.
+
+At the moment this capability is used only in
+QEMU generic PCIE root port (-device pcie-root-port).
+Capability construction function takes bus range value
+from root ports' common property 'bus_reserve'.
+By default it is set to 0 to leave root port's default
+behavior unchanged.
This paragraph looks too narrow. Please fill it / wrap it to 80 chars or so (this should apply to all flowing text paragraphs).
+Usage
+=====
+A detailed command line would be:
+
+[qemu-bin + storage options]
+-m 2G
+-device ioh3420,bus=pcie.0,id=rp1
+-device ioh3420,bus=pcie.0,id=rp2
+-device pcie-root-port,bus=pcie.0,id=rp3,bus-reserve=1
+-device pcie-pci-bridge,id=br1,bus=rp1
+-device pcie-pci-bridge,id=br2,bus=rp2
+-device e1000,bus=br1,addr=8
Backslashes seem to be missing.
I took pci_expander_bridge.txt as an example, and there I found no backslashes. I understand we need them if we plan to copy-paste this to the command-line directly, and if so, I will add them.
Thanks Laszlo
+Then in monitor it's OK to do:
+device_add pcie-pci-bridge,id=br3,bus=rp3
+device_add e1000,bus=br2,addr=1
+device_add e1000,bus=br3,addr=1
+Here you have:
+ (1) Cold-plugged:
+     - Root ports: 1 QEMU generic root port with the capability mentioned
+       above, 2 ioh3420 root ports;
+     - 2 PCIE-PCI bridges plugged into 2 different root ports;
+     - e1000 plugged into the first bridge.
+ (2) Hot-plugged:
+     - PCIE-PCI bridge, plugged into QEMU generic root port;
+     - 2 e1000 cards, one plugged into the cold-plugged PCIE-PCI bridge,
+       another plugged into the hot-plugged bridge.
+Limitations
+===========
+The PCIE-PCI bridge can be hot-plugged only into pcie-root-port that
+has proper 'bus_reserve' property value to provide secondary bus for the
+hot-plugged bridge.
+
+Windows 7 and older versions don't support hot-plug of devices into the
+PCIE-PCI bridge.
+To enable device hot-plug into the bridge on Linux there are 3 ways:
+1) Build the shpchp module with this patch:
+   http://www.spinics.net/lists/linux-pci/msg63052.html
+2) Wait until the kernel patch mentioned above gets merged into upstream -
+   it's expected to happen in 4.14.
+3) Set the 'msi_enable' property to false - this forces the bridge to use
+   legacy INTx, which allows the bridge to notify the OS about hot-plug
+   events without having BUSMASTER set.
+Implementation
+==============
+The PCIE-PCI bridge is based on PCI-PCI bridge,
+but also accumulates PCI Express features
+as a PCI Express device (is_express=1).
-- Aleksandr Bezzubikov
On 08/01/17 23:39, Michael S. Tsirkin wrote:
Can dmi-pci support shpc? why doesn't it? For compatibility?
I don't know why, but the fact that it doesn't is the reason libvirt settled on auto-creating a dmi-pci bridge and a pci-pci bridge under that for Q35. The reasoning was (IIRC Laine's words correctly) that the dmi-pci bridge cannot receive hotplugged devices, while the pci-pci bridge cannot be connected to the root complex. So both were needed.
Thanks Laszlo
On 02/08/2017 1:23, Laszlo Ersek wrote:
Can dmi-pci support shpc? why doesn't it? For compatibility?
Yes, mainly because, as far as I know, the Intel device doesn't have an SHPC controller. It may be possible to make it work with one, but we don't have a reason to now, since we have the PCIE-PCI bridge.
I don't know why, but the fact that it doesn't is the reason libvirt settled on auto-creating a dmi-pci bridge and a pci-pci bridge under that for Q35.
And hotplug doesn't work even for this configuration! (last time I checked)
Thanks, Marcel
On Wed, Aug 02, 2017 at 12:23:46AM +0200, Laszlo Ersek wrote:
OK. Is it true that dmi-pci + pci-pci under it will allow hotplug on Q35 if we just flip the bit in _OSC?
On 08/02/17 15:47, Michael S. Tsirkin wrote:
OK. Is it true that dmi-pci + pci-pci under it will allow hotplug on Q35 if we just flip the bit in _OSC?
Marcel, what say you?... :)
On 02/08/2017 17:16, Laszlo Ersek wrote:
OK. Is it true that dmi-pci + pci-pci under it will allow hotplug on Q35 if we just flip the bit in _OSC?
Marcel, what say you?... :)
Will test and get back to you (it may actually work)
Thanks, Marcel
On 02/08/2017 17:21, Marcel Apfelbaum wrote:
On 02/08/2017 17:16, Laszlo Ersek wrote:
On 08/02/17 15:47, Michael S. Tsirkin wrote:
On Wed, Aug 02, 2017 at 12:23:46AM +0200, Laszlo Ersek wrote:
On 08/01/17 23:39, Michael S. Tsirkin wrote:
On Wed, Aug 02, 2017 at 12:33:12AM +0300, Alexander Bezzubikov wrote:
2017-08-01 23:31 GMT+03:00 Laszlo Ersek lersek@redhat.com: > (Whenever my comments conflict with Michael's or Marcel's, I > defer to them.) > > On 07/29/17 01:37, Aleksandr Bezzubikov wrote: >> Signed-off-by: Aleksandr Bezzubikov zuban32s@gmail.com >> --- >> docs/pcie.txt | 46 ++++++++++-------- >> docs/pcie_pci_bridge.txt | 121 >> +++++++++++++++++++++++++++++++++++++++++++++++ >> 2 files changed, 147 insertions(+), 20 deletions(-) >> create mode 100644 docs/pcie_pci_bridge.txt >> >> diff --git a/docs/pcie.txt b/docs/pcie.txt >> index 5bada24..338b50e 100644 >> --- a/docs/pcie.txt >> +++ b/docs/pcie.txt >> @@ -46,7 +46,7 @@ Place only the following kinds of devices >> directly on the Root Complex: >> (2) PCI Express Root Ports (ioh3420), for starting >> exclusively PCI Express >> hierarchies. >> >> - (3) DMI-PCI Bridges (i82801b11-bridge), for starting legacy >> PCI >> + (3) PCIE-PCI Bridge (pcie-pci-bridge), for starting legacy PCI >> hierarchies. >> >> (4) Extra Root Complexes (pxb-pcie), if multiple PCI >> Express Root Buses > > When reviewing previous patches modifying / adding this file, I > requested that we spell out "PCI Express" every single time. I'd > like to > see the same in this patch, if possible.
OK, I didn't know it.
> >> @@ -55,18 +55,18 @@ Place only the following kinds of devices >> directly on the Root Complex: >> pcie.0 bus >> >> ---------------------------------------------------------------------------- >> >> | | >> | | >> - ----------- ------------------ ------------------ >> -------------- >> - | PCI Dev | | PCIe Root Port | | DMI-PCI Bridge | | >> pxb-pcie | >> - ----------- ------------------ ------------------ >> -------------- >> + ----------- ------------------ ------------------- >> -------------- >> + | PCI Dev | | PCIe Root Port | | PCIE-PCI Bridge | | >> pxb-pcie | >> + ----------- ------------------ ------------------- >> -------------- >> >> 2.1.1 To plug a device into pcie.0 as a Root Complex >> Integrated Endpoint use: >> -device <dev>[,bus=pcie.0] >> 2.1.2 To expose a new PCI Express Root Bus use: >> -device >> pxb-pcie,id=pcie.1,bus_nr=x[,numa_node=y][,addr=z] >> - Only PCI Express Root Ports and DMI-PCI bridges can be >> connected >> + Only PCI Express Root Ports, PCIE-PCI bridges and DMI-PCI >> bridges can be connected > > It would be nice if we could keep the flowing text wrapped to 80 > chars. > > Also, here you add the "PCI Express-PCI" bridge to the list of > allowed > controllers (and you keep DMI-PCI as permitted), but above DMI was > replaced. I think these should be made consistent -- we should > make up > our minds if we continue to recommend the DMI-PCI bridge or not. > If not, > then we should eradicate all traces of it. If we want to keep it at > least for compatibility, then it should remain as fully > documented as it > is now.
Now I'm beginning to think that we shouldn't keep the DMI-PCI bridge even for compatibility, and may want to use the new PCIE-PCI bridge everywhere (except, of course, in cases where users are sure they need exactly the DMI-PCI bridge for some reason).
Can dmi-pci support shpc? Why doesn't it? For compatibility?
I don't know why, but the fact that it doesn't is the reason libvirt settled on auto-creating a dmi-pci bridge and a pci-pci bridge under that for Q35. The reasoning was (IIRC Laine's words correctly) that the dmi-pci bridge cannot receive hotplugged devices, while the pci-pci bridge cannot be connected to the root complex. So both were needed.
Thanks Laszlo
OK. Is it true that dmi-pci + pci-pci under it will allow hotplug on Q35 if we just flip the bit in _OSC?
Marcel, what say you?... :)
Good news, works with:
    -device i82801b11-bridge,id=b1
    -device pci-bridge,id=b2,bus=b1,chassis_nr=1,msi=off
Notice the bridge's msi=off, needed until the following kernel fix is merged: https://www.spinics.net/lists/linux-pci/msg63052.html
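For reference, the two -device options above slot into a complete Q35 command line along these lines (a sketch only; the e1000 NIC and the disk image path are illustrative, not from this thread):

    qemu-system-x86_64 -machine q35 -m 2G \
        -drive file=guest.img,format=raw \
        -device i82801b11-bridge,id=b1 \
        -device pci-bridge,id=b2,bus=b1,chassis_nr=1,msi=off \
        -device e1000,bus=b2,addr=0x1

The legacy e1000 then sits behind the pci-bridge (bus b2), which is also where a hotplugged legacy PCI device would land.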
Thanks, Marcel
Will test and get back to you (it may actually work)
Thanks, Marcel
On Wed, Aug 02, 2017 at 06:36:29PM +0300, Marcel Apfelbaum wrote:
Can dmi-pci support shpc? why doesn't it? For compatibility?
I don't know why, but the fact that it doesn't is the reason libvirt settled on auto-creating a dmi-pci bridge and a pci-pci bridge under that for Q35. The reasoning was (IIRC Laine's words correctly) that the dmi-pci bridge cannot receive hotplugged devices, while the pci-pci bridge cannot be connected to the root complex. So both were needed.
Thanks Laszlo
OK. Is it true that dmi-pci + pci-pci under it will allow hotplug on Q35 if we just flip the bit in _OSC?
Marcel, what say you?... :)
Good news, works with: -device i82801b11-bridge,id=b1 -device pci-bridge,id=b2,bus=b1,chassis_nr=1,msi=off
And presumably it works for modern Windows? OK, so it looks like patch 1 is merely a bugfix, I'll merge it for 2.10.
Notice bridge's msi=off until the following kernel bug will be merged: https://www.spinics.net/lists/linux-pci/msg63052.html
Does libvirt support msi=off as a work-around?
Thanks, Marcel
On 02/08/2017 19:26, Michael S. Tsirkin wrote:
On Wed, Aug 02, 2017 at 06:36:29PM +0300, Marcel Apfelbaum wrote:
> Can dmi-pci support shpc? why doesn't it? For compatibility?
I don't know why, but the fact that it doesn't is the reason libvirt settled on auto-creating a dmi-pci bridge and a pci-pci bridge under that for Q35. The reasoning was (IIRC Laine's words correctly) that the dmi-pci bridge cannot receive hotplugged devices, while the pci-pci bridge cannot be connected to the root complex. So both were needed.
Thanks Laszlo
OK. Is it true that dmi-pci + pci-pci under it will allow hotplug on Q35 if we just flip the bit in _OSC?
Marcel, what say you?... :)
Good news, works with: -device i82801b11-bridge,id=b1 -device pci-bridge,id=b2,bus=b1,chassis_nr=1,msi=off
And presumably it works for modern windows? OK, so it looks like patch 1 is merely a bugfix, I'll merge it for 2.10.
Tested with Win10; I think it is OK to merge it for 2.10.
Notice bridge's msi=off until the following kernel bug will be merged: https://www.spinics.net/lists/linux-pci/msg63052.html
Does libvirt support msi=off as a work-around?
Adding Laine, maybe he has the answer.
Thanks, Marcel
Thanks, Marcel
On 08/02/2017 01:58 PM, Marcel Apfelbaum wrote:
On 02/08/2017 19:26, Michael S. Tsirkin wrote:
On Wed, Aug 02, 2017 at 06:36:29PM +0300, Marcel Apfelbaum wrote:
>> Can dmi-pci support shpc? why doesn't it? For compatibility?
>
> I don't know why, but the fact that it doesn't is the reason libvirt
> settled on auto-creating a dmi-pci bridge and a pci-pci bridge under
> that for Q35. The reasoning was (IIRC Laine's words correctly) that the
> dmi-pci bridge cannot receive hotplugged devices, while the pci-pci
> bridge cannot be connected to the root complex. So both were needed.
At least that's what I was told :-) (seriously, 50% of the convoluted rules encoded into libvirt's PCI bus topology construction and connection rules come from trial and error, and the other 50% come from advice and recommendations from others who (unlike me) actually know something about PCI.)
Of course the whole setup of plugging a pci-bridge into a dmi-to-pci-bridge was (at the time at least) an exercise in futility, since hotplug didn't work properly on pci-bridge+Q35 anyway (that initially wasn't explained to me; it was only after I had constructed the odd bus topology and it was in released code that someone told me "Oh, by the way, hotplug to pci-bridge doesn't work on Q35". At first it was described as a bug, then later reclassified as a future feature.)
(I guess the upside is that all of the horrible complex/confusing code needed to automatically add two controllers just to plug in a single endpoint is now already in the code, and will "just work" if/when needed).
Now that I go back to look at this thread (qemu-devel is just too much for me to try and read unless something has been Cc'ed to me - I really don't know how you guys manage it!), I see that pcie-pci-bridge has been implemented, and we (libvirt) will want to use that instead of dmi-to-pci-bridge when available. And pcie-pci-bridge itself can have endpoints hotplugged into it, correct? This means there will need to be patches for libvirt that check for the presence of pcie-pci-bridge, and if it's found they will replace any auto-added dmi-to-pci-bridge+pci-bridge with a lone pcie-pci-bridge.
>
> Thanks
> Laszlo
OK. Is it true that dmi-pci + pci-pci under it will allow hotplug on Q35 if we just flip the bit in _OSC?
Marcel, what say you?... :)
Good news, works with: -device i82801b11-bridge,id=b1 -device pci-bridge,id=b2,bus=b1,chassis_nr=1,msi=off
And presumably it works for modern windows? OK, so it looks like patch 1 is merely a bugfix, I'll merge it for 2.10.
Tested with Win10, I think is OK to merge if for 2.10.
Notice bridge's msi=off until the following kernel bug will be merged: https://www.spinics.net/lists/linux-pci/msg63052.html
Does libvirt support msi=off as a work-around?
We have no explicit setting for msi on pci controllers. The only place we explicitly set that is on the ivshmem device.
That doesn't mean that we couldn't add it. However, if we were going to do it manually, that would mean adding another knob that we have to support forever. And even if we wanted to do it automatically, we would not only need to find something in qemu to key off of when deciding whether or not to set it, but we would *still* have to explicitly store the setting in the config so that migrations between hosts using differing versions of qemu would preserve guest ABI. Are there really enough people demanding (with actual concrete plans of *using*) hotplug of legacy PCI devices on Q35 guests *immediately* that we want to permanently pollute libvirt's code in this manner just for an interim workaround?
I didn't have enough time/energy to fully parse all the rest of this thread - is msi=off currently required for pcie-pci-bridge hotplug as well? (Not that it changes my opinion - just as we can tell people "upgrade to a new qemu and libvirt if you want to hotplug legacy PCI devices on Q35 guests", we can also tell them "Oh, and wait X weeks and upgrade to a new kernel too".)
On 03/08/2017 5:41, Laine Stump wrote:
On 08/02/2017 01:58 PM, Marcel Apfelbaum wrote:
On 02/08/2017 19:26, Michael S. Tsirkin wrote:
On Wed, Aug 02, 2017 at 06:36:29PM +0300, Marcel Apfelbaum wrote:
>>> Can dmi-pci support shpc? why doesn't it? For compatibility?
>>
>> I don't know why, but the fact that it doesn't is the reason libvirt
>> settled on auto-creating a dmi-pci bridge and a pci-pci bridge under
>> that for Q35. The reasoning was (IIRC Laine's words correctly) that
>> the dmi-pci bridge cannot receive hotplugged devices, while the
>> pci-pci bridge cannot be connected to the root complex. So both were
>> needed.
Hi Laine,
At least that's what I was told :-) (seriously, 50% of the convoluted rules encoded into libvirt's PCI bus topology construction and connection rules come from trial and error, and the other 50% come from advice and recommendations from others who (unlike me) actually know something about PCI.)
Of course the whole setup of plugging a pci-bridge into a dmi-to-pci-bridge was (at the time at least) an exercise in futility, since hotplug didn't work properly on pci-bridge+Q35 anyway (that initially wasn't explained to me; it was only after I had constructed the odd bus topology and it was in released code that someone told me "Oh, by the way, hotplug to pci-bridge doesn't work on Q35". At first it was described as a bug, then later reclassified as a future feature.)
(I guess the upside is that all of the horrible complex/confusing code needed to automatically add two controllers just to plug in a single endpoint is now already in the code, and will "just work" if/when needed).
Now that I go back to look at this thread (qemu-devel is just too much for me to try and read unless something has been Cc'ed to me - I really don't know how you guys manage it!), I see that pcie-pci-bridge has been implemented, and we (libvirt) will want to use that instead of dmi-to-pci-bridge when available. And pcie-pci-bridge itself can have endpoints hotplugged into it, correct?
Yes.
This means there will need to be patches for libvirt that check for the presence of pcie-pci-bridge, and if it's found they will replace any auto-added dmi-to-pci-bridge+pci-bridge with a long pcie-pci-bridge.
The PCIe-PCI bridge is to be plugged into a PCIe Root Port and then you can add PCI devices to it. The devices can be hot-plugged into it (see below the limitations) and even the bridge itself can be hot-plugged (old OSes might not support it).
So the device will replace the dmi-pci-bridge + pci-pci bridge completely.
libvirt will have 2 options:
1. Start with a pcie-pci bridge attached to a PCIe Root Port; all
   legacy PCI devices should land there (or on bus 0). (You can use
   "auto" device addressing: add PCI devices automatically to this
   bridge until it is full, then use the last slot to add a pci bridge,
   or use another pcie-pci bridge.)
2. Leave a PCIe Root Port empty and configure it with hints for the
   firmware that we might want to hotplug a pcie-pci bridge into it.
   If a PCI device is needed, hotplug the pcie-pci bridge first, then
   the device.
The above model gives you enough elasticity, so if you:
1. don't need PCI devices -> create the machine with no PCI controllers
2. need PCI devices -> add a pcie-pci bridge and you get a legacy PCI
   bus supporting hotplug
3. might need PCI devices -> leave a PCIe Root Port empty (+ hints)
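As a sketch, assuming the pcie-pci-bridge device from this series and the generic pcie-root-port (device IDs here are illustrative), option 1 could look like:

    -device pcie-root-port,id=rp1,bus=pcie.0,chassis=1
    -device pcie-pci-bridge,id=br1,bus=rp1
    -device e1000,bus=br1,addr=0x1

and option 2 would leave a root port empty at startup:

    -device pcie-root-port,id=rp2,bus=pcie.0,chassis=2

then, when a legacy PCI device is needed, hotplug the bridge first and the device second, e.g. from the HMP monitor:

    device_add pcie-pci-bridge,id=br2,bus=rp2
    device_add e1000,id=nic1,bus=br2,addr=0x1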
>>
>> Thanks
>> Laszlo
>
> OK. Is it true that dmi-pci + pci-pci under it will allow hotplug
> on Q35 if we just flip the bit in _OSC?
Marcel, what say you?... :)
Good news, works with: -device i82801b11-bridge,id=b1 -device pci-bridge,id=b2,bus=b1,chassis_nr=1,msi=off
And presumably it works for modern windows? OK, so it looks like patch 1 is merely a bugfix, I'll merge it for 2.10.
Tested with Win10, I think is OK to merge if for 2.10.
Notice bridge's msi=off until the following kernel bug will be merged: https://www.spinics.net/lists/linux-pci/msg63052.html
Does libvirt support msi=off as a work-around?
We have no explicit setting for msi on pci controllers. The only place we explicitly set that is on the ivshmem device.
We need msi=off because of a bug in the Linux kernel. Even if the bug is fixed (there is already a patch upstream), we don't know when it will get in (actually 4.14) and what versions will include it.
That doesn't mean that we couldn't add it. However, if we were going to do it manually, that would mean adding another knob that we have to support forever. And even if we wanted to do it automatically, we would not only need to find something in qemu to key off of when deciding whether or not to set it, but we would *still* have to explicitly store the setting in the config so that migrations between hosts using differing versions of qemu would preserve guest ABI.
It is not even something QEMU can be queried about. It depends on the guest OS.
Are there really enough people demanding (with actual concrete plans of *using*) hotplug of legacy PCI devices on Q35 guests *immediately* that we want to permanently pollute libvirt's code in this manner just for an interim workaround?
If/when Q35 becomes the default machine, we want feature parity, so users can keep an (almost) exact setup on Q35. PCI hotplug is part of it.
I didn't have enough time/energy to fully parse all the rest of this thread - is msi=off currently required for pcie-pci-bridge hotplug as well?
Yes.
(not that it changes my opinion - just as we can tell people
"upgrade to a new qemu and libvirt if you want to hotplug legacy PCI devices on Q35 guests", we can also tell them "Oh, and wait X weeks and upgrade to a new kernel too".
I agree it will be hard to manage such a flag in libvirt automatically, but would exposing an msi property on the pcie-pci-bridge, with a comment "switch to off if PCI hotplug doesn't work", be OK?
An alternative is to not expose "msi" to libvirt and default it to off. In the future, if the feature proves valuable, we can ask libvirt to help with the transition to "on".
Thanks, Marcel
On 08/03/2017 06:29 AM, Marcel Apfelbaum wrote:
On 03/08/2017 5:41, Laine Stump wrote:
On 08/02/2017 01:58 PM, Marcel Apfelbaum wrote:
On 02/08/2017 19:26, Michael S. Tsirkin wrote:
On Wed, Aug 02, 2017 at 06:36:29PM +0300, Marcel Apfelbaum wrote:
>>>> Can dmi-pci support shpc? why doesn't it? For compatibility?
>>>
>>> I don't know why, but the fact that it doesn't is the reason
>>> libvirt settled on auto-creating a dmi-pci bridge and a pci-pci
>>> bridge under that for Q35. The reasoning was (IIRC Laine's words
>>> correctly) that the dmi-pci bridge cannot receive hotplugged
>>> devices, while the pci-pci bridge cannot be connected to the root
>>> complex. So both were needed.
Hi Laine,
At least that's what I was told :-) (seriously, 50% of the convoluted rules encoded into libvirt's PCI bus topology construction and connection rules come from trial and error, and the other 50% come from advice and recommendations from others who (unlike me) actually know something about PCI.)
Of course the whole setup of plugging a pci-bridge into a dmi-to-pci-bridge was (at the time at least) an exercise in futility, since hotplug didn't work properly on pci-bridge+Q35 anyway (that initially wasn't explained to me; it was only after I had constructed the odd bus topology and it was in released code that someone told me "Oh, by the way, hotplug to pci-bridge doesn't work on Q35". At first it was described as a bug, then later reclassified as a future feature.)
(I guess the upside is that all of the horrible complex/confusing code needed to automatically add two controllers just to plug in a single endpoint is now already in the code, and will "just work" if/when needed).
Now that I go back to look at this thread (qemu-devel is just too much for me to try and read unless something has been Cc'ed to me - I really don't know how you guys manage it!), I see that pcie-pci-bridge has been implemented, and we (libvirt) will want to use that instead of dmi-to-pci-bridge when available. And pcie-pci-bridge itself can have endpoints hotplugged into it, correct?
Yes.
This means there will need to be patches for libvirt that check for the presence of pcie-pci-bridge, and if it's found they will replace any auto-added dmi-to-pci-bridge+pci-bridge with a long pcie-pci-bridge.
The PCIe-PCI bridge is to be plugged into a PCIe Root Port and then you can add PCI devices to it. The devices can be hot-plugged into it (see below the limitations) and even the bridge itself can be hot-plugged (old OSes might not support it).
So the device will replace the dmi-pci-bridge + pci-pci bridge completely.
libvirt will have 2 options:
- Start with a pcie-pci bridge attached to a PCIe Root Port and all legacy PCI devices should land there (or on bus 0) (You can use the "auto" device addressing, add PCI devices automatically to this device until the bridge is full, then use the last slot to add a pci brigde or use another pcie-pci bridge)
- Leave a PCIe Root Port empty and configure with hints for the fw that we might want to hotplug a pcie-pci bridge into it. If a PCI device is needed, hotplug the pcie-pci bridge first, then the device.
The above model gives you enough elasticity so if you:
- don't need PCI devices -> create the machine with no pci controllers
- need PCI devices -> add a pcie-pci bridge and you get a legacy PCI bus supporting hotplug.
- might need PCI devices -> leave a PCIe Root Port empty (+ hints)
I'm not sure what to do in libvirt about (3). Right now if an unused root port is found in the config when adding a new endpoint device with no PCI address, the new endpoint will be attached to that existing root port. In order for one of the "save it for later" root ports to work, I guess we will need to count that root port as unavailable when setting PCI addresses on an inactive guest, but then allow hotplugging into it. But what if someone wants to hotplug a PCI Express endpoint, and the only root-port that's available is this one that's marked to allow plugging in a pcie-pci-bridge? Do we fail the endpoint hotplug (even though it could have succeeded)? Or do we allow it, and then later potentially fail an attempt to hotplug a pcie-pci-bridge? (To be clear - I don't think there's really anything better that qemu could do to help this situation; I'm just thinking out loud about how libvirt can best deal with it)
>>>
>>> Thanks
>>> Laszlo
>>
>> OK. Is it true that dmi-pci + pci-pci under it will allow hotplug
>> on Q35 if we just flip the bit in _OSC?
>
> Marcel, what say you?... :)
Good news, works with: -device i82801b11-bridge,id=b1 -device pci-bridge,id=b2,bus=b1,chassis_nr=1,msi=off
And presumably it works for modern windows? OK, so it looks like patch 1 is merely a bugfix, I'll merge it for 2.10.
Tested with Win10, I think is OK to merge if for 2.10.
Notice bridge's msi=off until the following kernel bug will be merged: https://www.spinics.net/lists/linux-pci/msg63052.html
Does libvirt support msi=off as a work-around?
We have no explicit setting for msi on pci controllers. The only place we explicitly set that is on the ivshmem device.
We need msi=off because of a bug in Linux Kernel. Even if the bug is fixed (there is already a patch upstream), we don't know when will get in (actually 4.14) and what versions will include it.
That doesn't mean that we couldn't add it. However, if we were going to do it manually, that would mean adding another knob that we have to support forever. And even if we wanted to do it automatically, we would not only need to find something in qemu to key off of when deciding whether or not to set it, but we would *still* have to explicitly store the setting in the config so that migrations between hosts using differing versions of qemu would preserve guest ABI.
It is not even something QEMU can be queried about. It depends on the guest OS.
Right, so libvirt has no way of detecting whether or not it's needed. And if we provide the setting and publish documentation telling people that they need to set it off to support hotplug, then we'll get people still setting msi=off years from now, even if they aren't doing any legacy PCI hotplug ("cargo cult" sysadminning).
Are there really enough people demanding (with actual concrete plans of *using*) hotplug of legacy PCI devices on Q35 guests *immediately* that we want to permanently pollute libvirt's code in this manner just for an interim workaround?
If/when Q35 would become the default machine, we want feature parity, so the users can keep the exact (almost) setup on q35. PCI hotplug is part of it.
Sure. But proper operation is coming in the kernel. And Q35 isn't the default yet. The world has already waited several years for all of this. If it's just going to be a matter of a couple months more before the final piece is in place, why add code to support a workaround that will only be needed by a very small number of people (early adopters and testers who will anyway be testing the workaround rather than the completed feature in the full stack) for a very short time?
If the kernel fix is something that can't be backported into stable kernels being used on downstream distros that *will* get the qemu and libvirt features backported/rebased in, then maybe we should think about supporting a workaround. Otherwise, I think we should just let it all settle and it will work itself out.
I didn't have enough time/energy to fully parse all the rest of this thread - is msi=off currently required for pcie-pci-bridge hotplug as well?
Yes.
(not that it changes my opinion - just as we can tell people
"upgrade to a new qemu and libvirt if you want to hotplug legacy PCI devices on Q35 guests", we can also tell them "Oh, and wait X weeks and upgrade to a new kernel too".
I agree it will be hard to manage such a flag on libvirt automatically, but exposing an msi property to the pcie-pci-bridge and adding a comment: "switch to off if pci-hotplug doesn't work" would be ok?
An alternative is to not expose "msi" to libvirt and default it to off. In the future, if the feature proves valuable, we can ask libvirt to help for transition to "on".
msi=off vs. msi=on is a guest abi difference, right? If that's the case, then the only way we could "transition to 'on'" in the future would be if we keep track of the msi setting in the config from the beginning (or alternately, we default to setting msi=off *only for pcie-pci-bridge* when building the qemu commandline, and then at some later time we add support to the config for msi=on, then figure out some way that libvirt can decide to add that to the config *in new definitions only*). This latter is a bad plan, because we would know from the outset that we'll need to add the msi attribute to the config at some time in the future, and by not adding it immediately we create the need for more complex code in the future (to deal with making sure that "msi=off" is the same as "no msi specified" for existing configs, but that "no msi specified" can mean "set msi to whatever is appropriate" for new configs). There's already code like that in libvirt, and it's a pain to keep it straight and explain it to other people - it's a regression trap waiting for someone unfamiliar with the code to come in and accidentally break it when they think they're just "cleaning up ugly code".
So if we do it at all, we should just add the msi attribute right away and allow people to manually set it off (your first suggestion). But again, what is the time window and number of users this will actually be helping? It sounds like it's working itself out anyway.
(Is there any other use for being able to set msi=off?)
On 03/08/2017 16:58, Laine Stump wrote:
On 08/03/2017 06:29 AM, Marcel Apfelbaum wrote:
On 03/08/2017 5:41, Laine Stump wrote:
On 08/02/2017 01:58 PM, Marcel Apfelbaum wrote:
On 02/08/2017 19:26, Michael S. Tsirkin wrote:
On Wed, Aug 02, 2017 at 06:36:29PM +0300, Marcel Apfelbaum wrote:
>>>>> Can dmi-pci support shpc? why doesn't it? For compatibility?
>>>>
>>>> I don't know why, but the fact that it doesn't is the reason
>>>> libvirt settled on auto-creating a dmi-pci bridge and a pci-pci
>>>> bridge under that for Q35. The reasoning was (IIRC Laine's words
>>>> correctly) that the dmi-pci bridge cannot receive hotplugged
>>>> devices, while the pci-pci bridge cannot be connected to the root
>>>> complex. So both were needed.
Hi Laine,
At least that's what I was told :-) (seriously, 50% of the convoluted rules encoded into libvirt's PCI bus topology construction and connection rules come from trial and error, and the other 50% come from advice and recommendations from others who (unlike me) actually know something about PCI.)
Of course the whole setup of plugging a pci-bridge into a dmi-to-pci-bridge was (at the time at least) an exercise in futility, since hotplug didn't work properly on pci-bridge+Q35 anyway (that initially wasn't explained to me; it was only after I had constructed the odd bus topology and it was in released code that someone told me "Oh, by the way, hotplug to pci-bridge doesn't work on Q35". At first it was described as a bug, then later reclassified as a future feature.)
(I guess the upside is that all of the horrible complex/confusing code needed to automatically add two controllers just to plug in a single endpoint is now already in the code, and will "just work" if/when needed).
Now that I go back to look at this thread (qemu-devel is just too much for me to try and read unless something has been Cc'ed to me - I really don't know how you guys manage it!), I see that pcie-pci-bridge has been implemented, and we (libvirt) will want to use that instead of dmi-to-pci-bridge when available. And pcie-pci-bridge itself can have endpoints hotplugged into it, correct?
Yes.
This means there will need to be patches for libvirt that check for the presence of pcie-pci-bridge, and if it's found they will replace any auto-added dmi-to-pci-bridge+pci-bridge with a long pcie-pci-bridge.
The PCIe-PCI bridge is to be plugged into a PCIe Root Port and then you can add PCI devices to it. The devices can be hot-plugged into it (see below the limitations) and even the bridge itself can be hot-plugged (old OSes might not support it).
So the device will replace the dmi-pci-bridge + pci-pci bridge completely.
libvirt will have 2 options:
- Start with a pcie-pci bridge attached to a PCIe Root Port and all legacy PCI devices should land there (or on bus 0) (You can use the "auto" device addressing, add PCI devices automatically to this device until the bridge is full, then use the last slot to add a pci brigde or use another pcie-pci bridge)
- Leave a PCIe Root Port empty and configure with hints for the fw that we might want to hotplug a pcie-pci bridge into it. If a PCI device is needed, hotplug the pcie-pci bridge first, then the device.
The above model gives you enough elasticity so if you:
- don't need PCI devices -> create the machine with no pci controllers
- need PCI devices -> add a pcie-pci bridge and you get a legacy PCI bus supporting hotplug.
- might need PCI devices -> leave a PCIe Root Port empty (+ hints)
I'm not sure what to do in libvirt about (3). Right now if an unused root port is found in the config when adding a new endpoint device with no PCI address, the new endpoint will be attached to that existing root port. In order for one of the "save it for later" root ports to work, I guess we will need to count that root port as unavailable when setting PCI addresses on an inactive guest, but then allow hotplugging into it.
For Q35 you need such a policy anyway. The only way to allow PCI Express hotplug (I am not referring now to our legacy PCI hotplug) is to leave a few PCIe Root Ports empty. How many is an interesting question; maybe a domain property (free-slots=x)? For our scenario the only difference is that the empty Root Port has a hint/a few hints for the firmware. Maybe all free Root Ports should behave the same.
But what if someone wants to hotplug a PCI Express endpoint, and the only root-port that's available is this one that's marked to allow plugging in a pcie-pci-bridge? Do we fail the endpoint hotplug (even though it could have succeeded)?
First come, first served.
Or do we allow it, and then later potentially fail an attempt to hotplug a pcie-pci-bridge? (To be clear - I don't think there's really anything better that qemu could do to help this situation; I'm just thinking out loud about how libvirt can best deal with it)
I think it would be very difficult to differentiate between the free Root Ports. The user should leave enough empty Root Ports, all of them should have the hint for the firmware to allow the pcie-pci bridge.
>>>>
>>>> Thanks
>>>> Laszlo
>>>
>>> OK. Is it true that dmi-pci + pci-pci under it will allow hotplug
>>> on Q35 if we just flip the bit in _OSC?
>>
>> Marcel, what say you?... :)
Good news, works with: -device i82801b11-bridge,id=b1 -device pci-bridge,id=b2,bus=b1,chassis_nr=1,msi=off
And presumably it works for modern windows? OK, so it looks like patch 1 is merely a bugfix, I'll merge it for 2.10.
Tested with Win10, I think is OK to merge if for 2.10.
Notice bridge's msi=off until the following kernel bug will be merged: https://www.spinics.net/lists/linux-pci/msg63052.html
Does libvirt support msi=off as a work-around?
We have no explicit setting for msi on pci controllers. The only place we explicitly set that is on the ivshmem device.
We need msi=off because of a bug in Linux Kernel. Even if the bug is fixed (there is already a patch upstream), we don't know when will get in (actually 4.14) and what versions will include it.
That doesn't mean that we couldn't add it. However, if we were going to do it manually, that would mean adding another knob that we have to support forever. And even if we wanted to do it automatically, we would not only need to find something in qemu to key off of when deciding whether or not to set it, but we would *still* have to explicitly store the setting in the config so that migrations between hosts using differing versions of qemu would preserve guest ABI.
It is not even something QEMU can be queried about. It depends on the guest OS.
Right, so libvirt has no way of detecting whether or not it's needed. And if we provide the setting and publish documentation telling people that they need to set it off to support hotplug, then we'll get people still setting msi=off years from now, even if they aren't doing any legacy PCI hotplug ("cargo cult" sysadminning).
Are there really enough people demanding (with actual concrete plans of *using*) hotplug of legacy PCI devices on Q35 guests *immediately* that we want to permanently pollute libvirt's code in this manner just for an interim workaround?
If/when Q35 would become the default machine, we want feature parity, so the users can keep the exact (almost) setup on q35. PCI hotplug is part of it.
Sure. But proper operation is coming in the kernel. And Q35 isn't the default yet. The world has already waited several years for all of this. If it's just going to be a matter of a couple months more before the final piece is in place, why add code to support a workaround that will only be needed by a very small number of people (early adopters and testers who will anyway be testing the workaround rather than the completed feature in the full stack) for a very short time?
I see your point.
If the kernel fix is something that can't be backported into stable kernels being used on downstream distros that *will* get the qemu and libvirt features backported/rebased in, then maybe we should think about supporting a workaround. Otherwise, I think we should just let it all settle and it will work itself out.
We've just got confirmation the fix will be in all stable versions: https://patchwork.kernel.org/patch/9848431/
I didn't have enough time/energy to fully parse all the rest of this thread - is msi=off currently required for pcie-pci-bridge hotplug as well?
Yes.
(not that it changes my opinion - just as we can tell people
"upgrade to a new qemu and libvirt if you want to hotplug legacy PCI devices on Q35 guests", we can also tell them "Oh, and wait X weeks and upgrade to a new kernel too".
I agree it will be hard to manage such a flag on libvirt automatically, but exposing an msi property to the pcie-pci-bridge and adding a comment: "switch to off if pci-hotplug doesn't work" would be ok?
An alternative is to not expose "msi" to libvirt and default it to off. In the future, if the feature proves valuable, we can ask libvirt to help for transition to "on".
msi=off vs. msi=on is a guest abi difference, right?
I am not sure about that; a hardware vendor can sell two otherwise identical cards, one without MSI support and the other with it. We should be able to interchange the cards. Of course, changing the card because of the kernel sounds crazy enough.
If that's the case,
then the only way we could "transition to 'on'" in the future would be if we keep track of the msi setting in the config from the beginning (or alternately, we default to setting msi=off *only for pcie-pci-bridge* when building the qemu commandline, and then at some later time we add support to the config for msi=on, then figure out some way that libvirt can decide to add that to the config *in new definitions only*).
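For illustration, tracking the setting in the config from the beginning might look like the following domain XML fragment. This is purely hypothetical: the msi attribute and its placement are invented here to illustrate the idea, and do not correspond to an existing libvirt schema element:

```xml
<!-- Hypothetical sketch only: the msi attribute shown here is not part of
     the libvirt domain XML schema; it illustrates how the setting could be
     recorded explicitly from the outset, so "off" vs. "unspecified" never
     becomes ambiguous for existing configs. -->
<controller type='pci' model='pcie-to-pci-bridge'>
  <target msi='off'/>
</controller>
```

Recording the value explicitly in every definition is what avoids the "no msi specified" ambiguity described above.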
This is what I meant.
The latter is a bad plan,
My bad :)
because we would know from the outset that we'll need to add the msi attribute to the config at some point in the future, and by not adding it immediately we create the need for more complex code later (to deal with making sure that "msi=off" is the same as "no msi specified" for existing configs, but that "no msi specified" can mean "set msi to whatever is appropriate" for new configs). There's already code like that in libvirt, and it's a pain to keep it straight and explain to other people - it's just a regression trap waiting for someone unfamiliar with the code to come in and accidentally break it when they think they're just "cleaning up ugly code".
Understood
So if we do it at all, we should just add the msi attribute right away and allow people to manually set it off (your first suggestion). But again, what is the time window and number of users this will actually be helping? It sounds like it's working itself out anyway.
It will work itself out, since the patch will be merged into the stable versions. My *only* concern is that we are going to announce legacy PCI hotplug support and some early adopter will ask how he can do that with libvirt. Will it be reasonable to ask him to update/patch the guest kernel?
(Is there any other use for being able to set msi=off?)
Not that I know of.
Thanks, Marcel