On 24/07/2017 7:53, Kinsella, Ray wrote:
Hi Ray,

Thank you for the details,
So as it turns out, at 512 devices it has nothing to do with SeaBIOS; it was the kernel again. It takes quite a while to start up, a little over two hours (7489 seconds). The main culprits appear to be enumerating/initializing the PCI Express Root Ports and enabling interrupts.
The PCI Express Root Ports are taking a long time to enumerate/initialize: about 43 minutes in total (2579 seconds) across 64 ports, so roughly 40 seconds each.
Even though I don't know how long it takes to initialize a bare-metal PCIe Root Port, this seems excessive.
[   50.612822] pci_bus 0000:80: root bus resource [bus 80-c1]
[  172.345361] pci 0000:80:00.0: PCI bridge to [bus 81]
...
[ 2724.734240] pci 0000:80:08.0: PCI bridge to [bus c1]
[ 2751.154702] ACPI: Enabled 2 GPEs in block 00 to 3F
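As a sanity check, the per-port cost can be read straight off the dmesg timestamps above (a quick back-of-the-envelope calculation, using the two "PCI bridge to" timestamps from the quoted log):

```python
# Estimate per-port enumeration cost from the dmesg timestamps above.
first_bridge = 172.345361   # first "PCI bridge to" message (seconds since boot)
last_bridge = 2724.734240   # last "PCI bridge to" message
ports = 64                  # root ports behind bus 0000:80, buses 0x81..0xc1

total = last_bridge - first_bridge
per_port = total / ports
print(f"{total:.0f} s total, {per_port:.1f} s per root port")
```

which comes out close to the ~40 seconds per port quoted above.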
I assume the 1 hour (3827 seconds) below is being spent enabling interrupts.
Assuming you are referring to legacy interrupts, it may be possible to disable them and use only MSI/MSI-X for the PCIe Root Ports (based on user input, since we can't disable INTx for all the ports).
[ 2899.394288] ACPI: PCI Interrupt Link [GSIG] enabled at IRQ 22
[ 2899.531324] ACPI: PCI Interrupt Link [GSIH] enabled at IRQ 23
[ 2899.534778] ACPI: PCI Interrupt Link [GSIE] enabled at IRQ 20
[ 6726.914388] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[ 6726.937932] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[ 6726.964699] Linux agpgart interface v0.103
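The ~1 hour figure is simply the timestamp gap between the last GSI link message and the next message in the log (same back-of-the-envelope method as for the root ports, using the timestamps quoted above):

```python
# The interrupt-enabling time is the dmesg timestamp gap between the last
# "PCI Interrupt Link ... enabled" message and the next message in the log.
last_gsi = 2899.534778      # ACPI: PCI Interrupt Link [GSIE] enabled at IRQ 20
next_msg = 6726.914388      # Serial: 8250/16550 driver, 4 ports, ...
gap = next_msg - last_gsi
print(f"{gap:.0f} s, about {gap / 3600:.2f} hours")
```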
Finally, there is another ~20 minutes still to be accounted for in the boot.
[ 7489.202589] virtio_net virtio515 enp193s0f0: renamed from eth513
Poky (Yocto Project Reference Distro) 2.3 qemux86-64 ttyS0
qemux86-64 login: root
I will remove the virtio-net-pci devices and hotplug them instead. In theory this should improve boot time, at the expense of incurring some of these costs at runtime.
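For reference, hotplugging a virtio-net-pci device into a pre-provisioned root port would look roughly like this via the QEMU monitor (the `rp513`/`vnet513` ids and chassis number are illustrative, not from this thread):

```
# Guest started with spare hotplug-capable root ports, e.g.:
#   -device pcie-root-port,id=rp513,chassis=513
# Then, at runtime, in the HMP monitor:
(qemu) device_add virtio-net-pci,bus=rp513,id=vnet513
# ... and to unplug again:
(qemu) device_del vnet513
```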
I would appreciate it if you could share the results.
Thanks, Marcel
Ray K
-----Original Message-----
From: Kevin O'Connor [mailto:kevin@koconnor.net]
Sent: Sunday, July 23, 2017 1:05 PM
To: Marcel Apfelbaum marcel@redhat.com; Kinsella, Ray ray.kinsella@intel.com
Cc: qemu-devel@nongnu.org; seabios@seabios.org; Gerd Hoffmann kraxel@redhat.com; Michael Tsirkin mst@redhat.com
Subject: Re: >256 Virtio-net-pci hotplug Devices
On Sun, Jul 23, 2017 at 07:28:01PM +0300, Marcel Apfelbaum wrote:
On 22/07/2017 2:57, Kinsella, Ray wrote:
When scaling up to 512 Virtio-net devices, SeaBIOS appears to slow down considerably when configuring PCI config space - I haven't managed to get this to work yet.
If there is a slowdown in SeaBIOS, it would help to produce a log with timing information - see: https://www.seabios.org/Debugging#Timing_debug_messages
It may also help to increase the debug level in SeaBIOS to get more fine grained timing reports.
-Kevin
Hi Marcel,
On 24/07/2017 00:14, Marcel Apfelbaum wrote:
On 24/07/2017 7:53, Kinsella, Ray wrote:
Even though I don't know how long it takes to initialize a bare-metal PCIe Root Port, this seems excessive.
So I repeated the testing for 64, 128, 256 and 512 ports. I ensured the configuration was sane: the 128-device setup had twice the number of root ports and virtio-net-pci devices as the 64-device one, and so on.
I got the following results (shown in seconds). As you can see, the growth is non-linear but not exponential; something is not scaling well.
                   64    128    256    512
PCIe Root Ports    14     72    430   2672
ACPI                4     35    342   3863
Loading Drivers     1      1     31    621
Total Boot         34    137    890   7516
( I did try to test 1024 devices, but it just dies silently )
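One way to characterize the "non-linear but not exponential" observation is to look at the growth ratio per doubling of the device count: a roughly constant ratio of ~5-6x suggests polynomial (around n^2.4-2.6) rather than exponential scaling. This is my reading of the numbers in the table, not a fitted model:

```python
import math

# "PCIe Root Ports" row from the results table above (seconds).
devices    = [64, 128, 256, 512]
root_ports = [14, 72, 430, 2672]

# A roughly constant ratio r per doubling implies polynomial scaling of
# degree log2(r); exponential scaling would make the ratio itself grow.
for n, prev, cur in zip(devices[1:], root_ports, root_ports[1:]):
    r = cur / prev
    print(f"{n:4d} devices: x{r:.1f} per doubling, ~O(n^{math.log2(r):.1f})")
```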
Ray K
On 25/07/2017 21:00, Kinsella, Ray wrote:
Hi Marcel,
Hi Ray,
It is an issue worth looking into. One more question: are all the measurements from OS boot? Do you use SeaBIOS? No problems with the firmware?
Thanks, Marcel
Hi Marcel,
Yup - I am using SeaBIOS by default. I took all the measurements from the kernel time reported in syslog, as SeaBIOS wasn't exhibiting any obvious scaling problem.
Ray K
-----Original Message-----
From: Marcel Apfelbaum [mailto:marcel@redhat.com]
Sent: Wednesday, August 2, 2017 5:43 AM
To: Kinsella, Ray ray.kinsella@intel.com; Kevin O'Connor kevin@koconnor.net
Cc: Tan, Jianfeng jianfeng.tan@intel.com; seabios@seabios.org; Michael Tsirkin mst@redhat.com; qemu-devel@nongnu.org; Gerd Hoffmann kraxel@redhat.com
Subject: Re: [Qemu-devel] >256 Virtio-net-pci hotplug Devices