On 03/11/17 19:42, Jean-Philippe Brucker wrote:
On 03/11/17 15:34, Gerd Hoffmann wrote:
On Fri, 2017-11-03 at 13:49 +0000, Jean-Philippe Brucker wrote:
On 02/11/17 15:50, Gerd Hoffmann wrote:
Queues have 256 entries on kvmtool, so support that. The virtqueues need more memory now, but with the move to 32-bit drivers for virtio this should not be much of an issue any more.
Known problems (probably kvmtool bugs):
* Must bump MAX_QUEUE_NUM to 260 entries to make things actually work, otherwise kvmtool segfaults. Oops.
You mean setting MAX_QUEUE_NUM to 256 instead of 260 in seabios causes a kvmtool crash?
yes.
Do you have any more detail on the segfault?
Ok, let's have a look with gdb ...
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f81caf3c700 (LWP 20234)]
virt_queue__get_head_iov (vq=vq@entry=0x7f82576be0a0,
    iov=iov@entry=0x7f826770aae0, out=out@entry=0x7f826770bae0,
    in=in@entry=0x7f826770bae2, head=65104, kvm=kvm@entry=0x246eee0)
    at virtio/core.c:105
105             *out = *in = 0;
(gdb) bt
#0  0x000000000040c91b in virt_queue__get_head_iov (vq=vq@entry=0x7f82576be0a0,
    iov=iov@entry=0x7f826770aae0, out=out@entry=0x7f826770bae0,
    in=in@entry=0x7f826770bae2, head=65104, kvm=kvm@entry=0x246eee0)
    at virtio/core.c:105
#1  0x000000000040bbf7 in virtio_blk_thread (bdev=0x7f82576be010,
    vq=0x7f82576be0a0, kvm=0x246eee0) at virtio/blk.c:134
#2  0x000000000040bbf7 in virtio_blk_thread (dev=0x7f82576be010)
    at virtio/blk.c:208
#3  0x00007f82571c6e25 in start_thread () at /lib64/libpthread.so.0
#4  0x00007f82543b134d in clone () at /lib64/libc.so.6
(gdb) print *vq
$1 = {vring = {num = 256, desc = 0x7f824cf3e000, avail = 0x7f824cf3f000,
    used = 0x7f824cf40000}, pfn = 524285, last_avail_idx = 263,
    last_used_signalled = 1, endian = 1}
last_avail_idx looks bogus ...
It follows avail->idx, which wraps naturally at 65536 (regardless of the ring size). But head=65104 seems bogus, it should be an index into the descriptor table. So either seabios puts that value in the avail ring, or kvmtool reads some uninitialized ring entry. I haven't found how we can get into this situation yet.
When MAX_QUEUE_NUM equals the queue size, SeaBIOS doesn't reserve space for the used/avail event fields, so when kvmtool writes to what it thinks is the avail event (a u16 located right after the used ring), it actually overwrites the next member of SeaBIOS' vring_virtqueue, which is vring->num. This results in further corruption as soon as the value written into vring->num exceeds the queue size.
Thanks for reporting it, I'll send a fix for kvmtool.
Thanks, Jean