qemu ramfb allows placing a boot framebuffer in normal ram. The ramfb
vgabios needs a bigger chunk of ram for that, so increase the amount of
memory reserved for high tables.
The obvious drawback is that we waste the memory in case ramfb isn't used.
Better ideas are welcome.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
src/config.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/config.h b/src/config.h
index 93c8dbc2d5..5912c0dd29 100644
@@ -17,7 +17,7 @@
// Maximum number of map entries in the e820 map
#define BUILD_MAX_E820 32
// Space to reserve in high-memory for tables
-#define BUILD_MAX_HIGHTABLE (256*1024)
+#define BUILD_MAX_HIGHTABLE (4*1024*1024)
// Largest supported externaly facing drive id
#define BUILD_MAX_EXTDRIVE 16
// Number of bytes the smbios may be and still live in the f-segment
KVM Forum 2018: Call For Participation
October 24-26, 2018 - Edinburgh International Conference Centre - Edinburgh, UK
(All submissions must be received before midnight June 14, 2018)
KVM Forum is an annual event that presents a rare opportunity
for developers and users to meet, discuss the state of Linux
virtualization technology, and plan for the challenges ahead.
We invite you to lead part of the discussion by submitting a speaking
proposal for KVM Forum 2018.
At this highly technical conference, developers driving innovation
in the KVM virtualization stack (Linux, KVM, QEMU, libvirt) can
meet users who depend on KVM as part of their offerings, or to
power their data centers and clouds.
KVM Forum will include sessions on the state of the KVM
virtualization stack, planning for the future, and many
opportunities for attendees to collaborate. After more than ten
years of development in the Linux kernel, KVM continues to be a
critical part of the FOSS cloud infrastructure.
This year, KVM Forum is joining Open Source Summit in Edinburgh, UK. Selected
talks from KVM Forum will be presented on Wednesday October 24 to the full
audience of the Open Source Summit. Also, attendees of KVM Forum will have
access to all of the talks from Open Source Summit on Wednesday.
* Scaling, latency optimizations, performance tuning, real-time guests
* Hardening and security
* New features
KVM and the Linux kernel:
* Nested virtualization
* Resource management (CPU, I/O, memory) and scheduling
* VFIO: IOMMU, SR-IOV, virtual GPU, etc.
* Networking: Open vSwitch, XDP, etc.
* virtio and vhost
* Architecture ports and new processor features
QEMU:
* Management interfaces: QOM and QMP
* New devices, new boards, new architectures
* Graphics, desktop virtualization and virtual GPU
* New storage features
* High availability, live migration and fault tolerance
* Emulation and TCG
* Firmware: ACPI, UEFI, coreboot, U-Boot, etc.
Management and infrastructure:
* Managing KVM: Libvirt, OpenStack, oVirt, etc.
* Storage: Ceph, Gluster, SPDK, etc.
* Network Function Virtualization: DPDK, OPNFV, OVN, etc.
SUBMITTING YOUR PROPOSAL
Abstracts due: June 14, 2018
Please submit a short abstract (~150 words) describing your presentation
proposal. Slots vary in length up to 45 minutes. Also include the proposal
type -- one of:
- technical talk
- end-user talk
Submit your proposal here: http://events.linuxfoundation.org/cfp
Please only use the categories "presentation" and "panel discussion"
You will receive a notification whether or not your presentation proposal
was accepted by August 10, 2018.
Speakers will receive a complimentary pass for the event. In the case that
your submission has multiple presenters, only the primary speaker for a
proposal will receive a complimentary event pass. For panel discussions, all
panelists will receive a complimentary event pass.
A good technical talk should not just report on what has happened over
the last year; it should present a concrete problem and how it impacts
the user and/or developer community. Whenever applicable, focus on
work that needs to be done, difficulties that haven't yet been solved,
and on decisions that other developers should be aware of. Summarizing
recent developments is okay but it should not be more than a small
portion of the overall talk.
One of the big challenges as developers is to know what, where and how
people actually use our software. We will reserve a few slots for end
users talking about their deployment challenges and achievements.
If you are using KVM in production you are encouraged to submit a speaking
proposal. Simply mark it as an end-user talk. As an end user, this is a
unique opportunity to get your input to developers.
HANDS-ON / BOF SESSIONS
We will reserve some time for people to get together and discuss
strategic decisions as well as other topics that are best solved within
smaller groups. These sessions will be announced during the event. If you
are interested
in organizing such a session, please add it to the list at
Let people who you think might be interested know about your BOF, and encourage
them to add their names to the wiki page as well. Please try to
add your ideas to the list before KVM Forum starts.
If you are proposing a panel discussion, please make sure that you list
all of your potential panelists in your abstract. We will request full
biographies if a panel is accepted.
HOTEL / TRAVEL
This year's event will take place at the Edinburgh International Conference Centre.
For information about the discounted hotel room rate for conference attendees
at the nearby Sheraton Grand Hotel & Spa, Edinburgh, please visit
Submission deadline: June 14, 2018
Notification: August 10, 2018
Schedule announced: August 16, 2018
Event dates: October 24-26, 2018
Thank you for your interest in KVM. We're looking forward to your
submissions and seeing you at the KVM Forum 2018 in October!
-your KVM Forum 2018 Program Committee
Please contact us with any questions or comments at
> The change  itself is rather old, so I wondered if I'm missing that this
> was implemented in a totally different way. Do I have to switch/set options
> these days instead of using that patch?
It should just work. qemu passes ram regions to the firmware using
fw_cfg (etc/e820) these days, and both seabios and ovmf have supported
this for quite a while.
> So the question becomes why the change is not upstream yet?
> Was it maybe discussed in the past and Nack'ed for some reason?
RHEL/CentOS 6 implemented the extra cmos byte, but this was not accepted
upstream in favor of the more flexible e820 solution.
RHEL/CentOS 7 supports both etc/e820 (upstream) and cmos (el6), but the
latter is only there for backward compatibility reasons.
On 11/06/2018 15:21, Christian Ehrhardt wrote:
> I was asked about x86 guests of >1TB in size. And while some discussions
> were around qemu/libvirt and host-phys-bits, I realized that in
> SeaBIOS I need exactly what is already in CentOS/RHEL to get the
> phys-bits passed on.
> The change  itself is rather old, so I wondered if I'm missing that
> this was implemented in a totally different way. Do I have to switch/set
> options these days instead of using that patch?
That patch is not needed anymore. It is only there to support old
machine types. In newer versions of QEMU, QEMU builds the e820 memory
map for SeaBIOS, and that is enough to support >=1TB guests nicely.
> But I saw that it is still applied even to rather recent versions.
> So the question becomes why the change is not upstream yet?
> Was it maybe discussed in the past and Nack'ed for some reason?
> I didn't find the discussion if that is the case and would appreciate
> the pointer.
> We are closing in on 1TB guests becoming common rather quickly, so I
> wonder if really nothing speaks against it - would it be reasonable to
> consider committing that upstream to SeaBIOS these days?
> : https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1769053
> : https://git.centos.org/blob/rpms!seabios.git/14f0fd75785bc5f1468fa84fbd3a16…
> P.S. Subscribing people acking the original patch as they might have old
> context to provide on this.
> Christian Ehrhardt
> Software Engineer, Ubuntu Server
> Canonical Ltd
On Tue, Jun 05, 2018 at 06:09:54PM +0300, Nikolay Ivanets wrote:
> > As far as I know, QEMU should be able to tell SeaBIOS the exact
> > geometry to use (eg, via qemu -hdachs c,h,s[,t] option). If you're
> > not getting the expected behavior, be sure to include the seabios log
> > file - see: https://www.seabios.org/Debugging
> Here is the geometry I have with the patched BIOS (Windows boots):
> drive 0x000f57f0: PCHS=16383/16/63 translation=lba
> LCHS=1024/255/32 s=143305920
> Here is the geometry I have with the original SeaBIOS (boot fails):
> drive 0x000f0bd0: PCHS=16383/16/63 translation=lba
> LCHS=1024/255/63 s=143305920
> Qemu limits what we can specify in CHS to 16383, 16 and 63
> respectively and I don't see any combinations which will perform the
> necessary translation from PCHS=16383/16/63 to LCHS=1024/255/32.
If I understand it correctly, you're looking for LCHS=1024/255/32. I
don't know why QEMU won't let you specify heads=255, but short of doing
that, I don't see a way for SeaBIOS to obtain that logical mapping.
The disk geometry translation stuff is arcane - the SeaBIOS code is
translated from the original Bochs bios code. If it needs to be
enhanced, it would require both a qemu patch (to specify the desired
translation/geometry) and a seabios patch (to implement it).
On Tue, Jun 05, 2018 at 12:51:54PM +0300, Nikolay Ivanets wrote:
> Well, I've played with disk geometry and translation mode options for
> the disk attached to Qemu but it didn't help.
> The most I could do was to make the correct VBR sector load at boot
> by setting secs=32,trans=large or secs=32,trans=none, but then the
> bootstrap code from the VBR fails with "Disk read error". It reads the
> next N sectors into memory trying to find the bootloader (NTLDR) and
> fails. I also noticed the Volume Boot Record (first partition sector)
> contains a "sectors per track" record at offset 0x18 (from the
> beginning of the sector).
> Finally I replaced all occurrences of '63' with '32' in src/block.c.
> Now the number of sectors/track becomes 32 instead of the hard-coded
> 63 in LBA translation mode. I compiled the BIOS and supplied it to
> Qemu. Now Windows boots successfully even without manually specifying
> the disk geometry.
> Definitely it is not a permanent fix, but it might be considered as a
> workaround for disks with 32 sectors/track.
> p.s. It seems HP servers have an option for 32/63 sectors per track
> and 32 is the default choice.
> Has anyone faced the 32/63 sectors-per-track problem? How did you
> solve it, if at all?
> Maybe I'm missing something and the SeaBIOS developers can point me in
> the right direction?
As far as I know, QEMU should be able to tell SeaBIOS the exact
geometry to use (eg, via qemu -hdachs c,h,s[,t] option). If you're
not getting the expected behavior, be sure to include the seabios log
file - see: https://www.seabios.org/Debugging
On Sat, Jun 2, 2018 at 17:49 Nikolay Ivanets <stenavin@gmail.com> wrote:
> I have a bootable disk image taken from a physical machine. The original BIOS is unknown. The operating system installed is Windows 2003 R2.
> I'm trying to boot it with Qemu and SeaBIOS and get "Error loading operating system".
> After investigation I found:
> - the image contains an MBR partition table and a single NTFS partition
> - the partition starts at CHS (0,1,1) and its absolute offset is 0x20 (32 sectors)
> Debugging the MBR bootstrap code I found it makes several INT13 calls to the BIOS: AH=08 and AH=02, to get the disk parameters and load the first partition sector.
> It appeared the BIOS returns CHS as (x,y,63) and thus the MBR bootstrap loads the 63rd sector instead of the 32nd, and "Error loading operating system" happens.
> It seems to be a well-known BIOS incompatibility issue.
> Is there any way to work around this in SeaBIOS?
> I would appreciate any tips.
> Mykola Ivanets