On Tue, Jun 11, 2024 at 04:15:06PM GMT, John Levon wrote:
On Mon, Feb 26, 2024 at 04:00:34PM +0100, Gerd Hoffmann wrote:
On Mon, Feb 26, 2024 at 10:56:05AM +0000, Max Tottenham wrote:
On 02/26, Gerd Hoffmann wrote:
Recommended action: turn off 64-bit support (long mode) in the CPU:
qemu -cpu host,lm=off
Hi Gerd
Thanks for the response,
That gets the VM booting. Unfortunately we have many customers who may be running 32-bit distro kernels, and we won't know ahead of time, before launching the VM, whether they need this compatibility flag or not, so I don't think we can use this as a suitable work-around.
You can turn off the 64-bit PCI window padding completely this way:
+#if 0
     if (CPUPhysBits >= 36 && CPULongMode && RamSizeOver4G)
         pci_pad_mem64 = 1;
+#endif

     dprintf(1, "=== PCI bus & bridge init ===\n");
     if (pci_probe_host() != 0) {
Another option would be to tweak the condition which turns on pci_pad_mem64. The obvious candidate would be to raise the memory limit, i.e. turn this on only when memory is present above 64G (beyond the PAE-addressable physical address space), or choose some value between 4G and 64G. See the sketch below.
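A minimal sketch of what that tweak could look like, assuming RamSizeOver4G counts the bytes of RAM above the 4G boundary (so the top of RAM is 4G + RamSizeOver4G) and using SeaBIOS's u64 type; the 64G threshold here is just the PAE limit and could be replaced by any value between 4G and 64G:

/* Sketch, not a tested patch: enable the 64-bit PCI window padding
 * only when the top of RAM lies above the PAE-addressable 64G limit,
 * so 32-bit PAE guests with moderate amounts of memory keep booting. */
u64 ram_top = 0x100000000ull + RamSizeOver4G;
if (CPUPhysBits >= 36 && CPULongMode && ram_top > (64ull << 30))
    pci_pad_mem64 = 1;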
I'm wondering how widespread it is in 2024 to run 32-bit kernels with a lot of memory?
Apologies for resurrecting this old thread; we have just hit this case. I see that no change has been made to SeaBIOS for this regression. Is it the position of the maintainers that such guest VMs are no longer supported by SeaBIOS, and anyone doing so is responsible for patching as necessary? Or would there still be interest in fixing this up in master?
Well, the discussion simply died while I was hoping for some feedback to figure out how the heuristic can best be adjusted ...
Managing more physical memory than the virtual address space can cover comes with a performance penalty, because it is not possible to keep all of that memory mapped permanently. So a 32-bit kernel has to do more page table updates than a 64-bit kernel, because any access to HIGHMEM requires mapping changes.
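To make "mapping changes" concrete: on a 32-bit Linux kernel, code cannot dereference a HIGHMEM page directly; it has to map the page into a small LOWMEM window first and unmap it again afterwards. A rough sketch of that pattern, using the classic kmap_atomic()/kunmap_atomic() kernel API (the helper function here is just an illustration, not code from this thread):

#include <linux/highmem.h>
#include <linux/string.h>

/* Illustrative helper: zero one page that may live in HIGHMEM.
 * A HIGHMEM page has no permanent kernel mapping, so it gets mapped
 * into a temporary LOWMEM slot and unmapped again, i.e. a page table
 * update for every such access. */
static void zero_one_page(struct page *page)
{
    void *vaddr = kmap_atomic(page);   /* create temporary mapping */
    memset(vaddr, 0, PAGE_SIZE);       /* touch the page contents */
    kunmap_atomic(vaddr);              /* tear the mapping down */
}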
The more memory a 32-bit kernel has to manage, the higher the performance penalty is. Not only due to more HIGHMEM mapping operations, but also because the amount of LOWMEM (permanently mapped) memory needed to manage that memory (i.e. the 'struct page' array in the case of Linux) goes up, increasing the memory pressure in LOWMEM.
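Some back-of-envelope numbers, assuming 4K pages and roughly 32 bytes per 'struct page' on a 32-bit kernel (the exact size depends on the kernel config):

  64G RAM / 4K page size  = 16M pages
  16M pages * 32 bytes    = 512M of 'struct page' array

All of that has to sit in LOWMEM, which with the usual 3G/1G split is only about 896M in total on 32-bit Linux.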
So my naive expectation would be that 32-bit guests have relatively small amounts of memory assigned, where the performance hit isn't too much of a problem. I have no idea whether this is actually the case, though.
So, in short: can you (or anyone else running into this) share some information on what the typical / maximal amount of memory is for 32-bit guests in real-world deployments?
thanks & take care,
Gerd