The LBA Format Data structure is dword-sized, but struct nvme_lba_format
erroneously contains an additional member, misaligning all LBAF
descriptors after the first and causing them to be misinterpreted.
Remove it.
Signed-off-by: Florian Larysch <fl(a)n621.de>
---
src/hw/nvme-int.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
index a4c1555..5779203 100644
--- a/src/hw/nvme-int.h
+++ b/src/hw/nvme-int.h
@@ -156,7 +156,6 @@ struct nvme_lba_format {
u16 ms;
u8 lbads;
u8 rp;
- u8 res;
};
struct nvme_identify_ns {
--
2.34.1
Dear SeaBIOS folks,
with the latest NVMe fixes, would it make sense to tag a 1.15.1 release?
The 1.16.0 release is planned for end of February, as far as I
understand it. Maybe that could be moved up two weeks?
Kind regards,
Paul
Commit b68f313c9139 ("nvme: Record maximum allowed request size")
introduced a use of "identify" after it has been passed to free(). Latch
the value of interest into a local variable.
Reported-by: Coverity (ID 1497613)
Signed-off-by: Jan Beulich <jbeulich(a)suse.com>
---
It was a Xen Project Coverity run which reported this after we updated
to 1.15.0.
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -620,6 +620,7 @@
identify->nn, (identify->nn == 1) ? "" : "s");
ctrl->ns_count = identify->nn;
+ u8 mdts = identify->mdts;
free(identify);
if ((ctrl->ns_count == 0) || nvme_create_io_queues(ctrl)) {
@@ -631,7 +632,7 @@
/* Populate namespace IDs */
int ns_idx;
for (ns_idx = 0; ns_idx < ctrl->ns_count; ns_idx++) {
- nvme_probe_ns(ctrl, ns_idx, identify->mdts);
+ nvme_probe_ns(ctrl, ns_idx, mdts);
}
dprintf(3, "NVMe initialization complete!\n");
The variable rx_bytes is marked VARLOW, but one access to rx_bytes was
missing a GET_LOW(). Fix this by copying rx_bytes to a local variable,
which also avoids the repetitive segment memory accesses.
Reported-by: Gabe Black <gabe.black(a)gmail.com>
Signed-off-by: Volker Rümelin <vr_qemu(a)t-online.de>
Signed-off-by: Kevin O'Connor <kevin(a)koconnor.net>
---
src/sercon.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/src/sercon.c b/src/sercon.c
index 66a1f24..3019d9b 100644
--- a/src/sercon.c
+++ b/src/sercon.c
@@ -626,7 +626,7 @@ sercon_check_event(void)
u16 addr = GET_LOW(sercon_port);
u16 keycode;
u8 byte, count = 0;
- int seq, chr;
+ int seq;
// check to see if there is a active serial port
if (!addr)
@@ -640,20 +640,23 @@ sercon_check_event(void)
// read all available data
while (inb(addr + SEROFF_LSR) & 0x01) {
byte = inb(addr + SEROFF_DATA);
- if (GET_LOW(rx_bytes) < sizeof(rx_buf)) {
- SET_LOW(rx_buf[rx_bytes], byte);
- SET_LOW(rx_bytes, GET_LOW(rx_bytes) + 1);
+ u8 rb = GET_LOW(rx_bytes);
+ if (rb < sizeof(rx_buf)) {
+ SET_LOW(rx_buf[rb], byte);
+ SET_LOW(rx_bytes, rb + 1);
count++;
}
}
for (;;) {
// no (more) input data
- if (!GET_LOW(rx_bytes))
+ u8 rb = GET_LOW(rx_bytes);
+ if (!rb)
return;
// lookup escape sequences
- if (GET_LOW(rx_bytes) > 1 && GET_LOW(rx_buf[0]) == 0x1b) {
+ u8 next_char = GET_LOW(rx_buf[0]);
+ if (rb > 1 && next_char == 0x1b) {
seq = findseq();
if (seq >= 0) {
enqueue_key(GET_GLOBAL(termseq[seq].keycode));
@@ -664,12 +667,11 @@ sercon_check_event(void)
// Seems we got a escape sequence we didn't recognise.
// -> If we received data wait for more, maybe it is just incomplete.
- if (GET_LOW(rx_buf[0]) == 0x1b && count)
+ if (next_char == 0x1b && count)
return;
// Handle input as individual char.
- chr = GET_LOW(rx_buf[0]);
- keycode = ascii_to_keycode(chr);
+ keycode = ascii_to_keycode(next_char);
if (keycode)
enqueue_key(keycode);
shiftbuf(1);
--
2.31.1
This is a resend of the previous series. Changes since last time:
* Patch 1: Fix function comment on nvme_io_xfer()
* Patch 3-5: Added "goto bounce" to nvme_io_xfer() as suggested by Alex
* Patch 6: Added separate "single dma bounce buffer" patch to this series
* Patch 6: Add checking for malloc failure
-Kevin
Kevin O'Connor (6):
nvme: Rework nvme_io_readwrite() to return -1 on error
nvme: Add nvme_bounce_xfer() helper function
nvme: Convert nvme_build_prpl() to nvme_prpl_xfer()
nvme: Pass prp1 and prp2 directly to nvme_io_xfer()
nvme: Build the page list in the existing dma buffer
nvme: Only allocate one dma bounce buffer for all nvme drives
src/hw/nvme-int.h | 10 ---
src/hw/nvme.c | 163 ++++++++++++++++++++++------------------------
2 files changed, 78 insertions(+), 95 deletions(-)
--
2.31.1
Hi Alex,
I was looking at your recent fix for SeaBIOS NVME. I agree that it is
not valid to write to the f-segment during runtime. However, I was
struggling to understand the PRP list management in the nvme.c code.
I came up with a different implementation that avoids allocating
another buffer in the "high zone". Instead, the code in this patch
series builds the page list in the existing "dma bounce buffer".
I don't have a good way to test this on real hardware. It passes
simple qemu tests (both with and without kvm). For example:
qemu-system-x86_64 -k en-us -snapshot -L test -chardev stdio,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -m 512 -drive file=dos-drivec,if=none,id=drive0 -device nvme,drive=drive0,serial=nvme-1 -boot menu=on
Thanks,
-Kevin
Kevin O'Connor (5):
nvme: Rework nvme_io_readwrite() to return -1 on error
nvme: Add nvme_bounce_xfer() helper function
nvme: Convert nvme_build_prpl() to nvme_prpl_xfer()
nvme: Pass prp1 and prp2 directly to nvme_io_xfer()
nvme: Build the page list in the existing dma buffer
src/hw/nvme-int.h | 7 ---
src/hw/nvme.c | 143 ++++++++++++++++++++--------------------------
2 files changed, 61 insertions(+), 89 deletions(-)
--
2.31.1
On Wed, 19 Jan 2022 15:48:20 +0000
Peter Maydell <peter.maydell(a)linaro.org> wrote:
> On Wed, 19 Jan 2022 at 14:44, Godmar Back <gback(a)cs.vt.edu> wrote:
> > after upgrading to 6.2.0, I observed that code such as MIT's xv6 (see
> > [1]) is no longer able to detect multiple CPUs. Their code works in
> > 6.1.1, however.
>
> Hi; this isn't a great place for reporting QEMU bugs, because
> it's more of a user-to-user discussion list. Not all QEMU
> developers read it. I've cc'd the ACPI maintainers, who
> hopefully may have an idea about what's happening here.
> You could also file a bug at
> https://gitlab.com/qemu-project/qemu/-/issues
>
> > I built 6.1.1 from source and 6.2.0 from source and I have also tested
> > with CentOS stream's 6.1.1 qemu-kvm and was able to pinpoint this
> > change to these 2 versions of qemu. I am using qemu-system-i386
> > specifically.
> >
> > I tried to go through the ChangeLog to see if the `-smp` option was
> > deprecated or changed. I found this note [2] about invalid topologies
> > having been removed in 5.2. Here's what I found after long
> > experimentation:
> >
> > The legacy MP tables appear to work only if you specify the longform
> > `-smp cpus=4,cores=1,threads=1,sockets=4` in 6.2.0. If you specify
> > `-smp 4` or `-smp cpus=4` it will not work in 6.2.0 (it worked in
> > 6.1.1). I am guessing that perhaps the MP tables add entries for each
> > socket, but that perhaps the behavior of the shortcuts `-smp n` and
> > `-smp cpus=n` was changed to influence the number of cores rather than
> > sockets.
> >
> > In other words, `-smp cpus=n` now means `-smp
> > cpus=n,cores=n,threads=1,sockets=1` whereas in 6.1.1 and before it
> > meant `-smp cpus=n,cores=1,threads=1,sockets=n`. I note that
> > specifying `-smp cpus=4,cores=4,threads=1,sockets=1` in 6.1.1 also
> > does not create 4 entries in the legacy MP tables.
Thanks for reporting the issue in such a detailed way.
QEMU doesn't generate the legacy MP tables, and as reported,
the above issue is also present in earlier versions when
cores are used.
Well, SeaBIOS has a comment:
/* Only populate the MPS tables with the first logical CPU in
each package */
So I'd guess it has never worked for anything but sockets.
With QEMU starting to prefer cores over sockets by default,
I'd suggest either
* explicitly providing the desired topology (i.e. sockets)
* using an older machine type which still prefers sockets by default
(e.g. up to 6.1 machine types)
If anybody cares about the legacy tables + cores/threads use case,
I suggest investigating what can be done on the SeaBIOS side, which
generates the MP tables (assuming anything can be done at all).
CCing the SeaBIOS mailing list.
> > Can someone confirm/deny this? If so, it's a breaking change that
> > perhaps could be mentioned in the Changelog to save others the time
> > when they upgrade. Affected educational OS include MIT's xv6 and
> > Stanford's pintos OS.
The legacy MP table is not an actively maintained part of the code;
hence it's a configuration which is not tested.
However, if someone is interested in maintaining this, they
should contribute at least a testcase that will warn developers
early if the use case is broken. We can't promise never to break
it, but at least we would be able to document any breaking
changes in the release notes.
> > Thanks for all the work you do on qemu!
> >
> > - Godmar
> >
> > [1] https://github.com/mit-pdos/xv6-public/blob/eeb7b415dbcb12cc362d0783e41c3d1…
> > [2] https://qemu-project.gitlab.io/qemu/about/removed-features.html#smp-invalid…
> >
> > (I'm typing this email in gmail using the plaintext setting, hopefully
> > it'll preserve line breaks.)
>
> thanks
> -- PMM
>