I did a very simple test:

1. Modify qemu/hw/block/virtio-blk.c to let virtio-blk use 256 queues:

   s->vq = virtio_add_queue(vdev, 128, virtio_blk_handle_output);
   -->
   s->vq = virtio_add_queue(vdev, 256, virtio_blk_handle_output);

2. Compile QEMU and run a VM that boots from a virtio-blk disk.

The VM can NOT boot, because SeaBIOS defines the macro MAX_QUEUE_NUM as 128. So, does virtio-pci support more queues? Looking forward to your reply!
On Wed, Apr 19, 2017 at 2:08 PM, Zhenwei Pi firstname.lastname@example.org wrote:
> I did a very simple test:
> 1. Modify qemu/hw/block/virtio-blk.c to let virtio-blk use 256 queues:
>    s->vq = virtio_add_queue(vdev, 128, virtio_blk_handle_output);
>    -->
>    s->vq = virtio_add_queue(vdev, 256, virtio_blk_handle_output);
This change does *not* make virtio-blk use 256 queues. It increases the descriptor ring size for the one and only virtqueue.
You are using an old version of QEMU. Recent QEMU releases support virtio-blk multiqueue using the -device virtio-blk-pci,num-queues=N parameter.
Keep in mind that QEMU's block layer does not support true multiqueue yet: I/O requests from all virtqueues of a device are processed by a single thread inside QEMU. There is work in progress to remove this limitation and make the QEMU block layer truly multiqueue.
Regarding SeaBIOS: recent SeaBIOS versions that support VIRTIO 1.0 clamp the descriptor ring size to 128. The BIOS does not issue many parallel I/O requests, so this doesn't matter. When the guest OS boots, its drivers reset the virtio-blk device and can use the full descriptor ring size offered by the hypervisor. Therefore you don't need to worry about SeaBIOS. The default machine type in recent QEMU releases enables VIRTIO 1.0 automatically.