On Thu, Jun 20, 2024 at 02:48:03PM GMT, Kevin O'Connor wrote:
> On Thu, Jun 20, 2024 at 07:08:38PM +0100, John Levon wrote:
> > On Thu, Jun 20, 2024 at 01:29:11PM -0400, Kevin O'Connor wrote:
> > > I'm leery of moving this heuristic to 64G of ram.  I can understand
> > > the logic of >4G of ram indicating support for >4G pci.  However,
> > > it seems strange to me that there would be guests with 50G of ram
> > > that can't handle >4G pci, but not similar guests with 70G of ram.
> >
> > Such a guest cannot possibly address that much, so it doesn't seem at
> > all unreasonable to require a config change there.  That is, by
> > definition there can't be a workload in the OS that's relying on 70G
> > of RAM.
>
> Thanks.  I missed that PAE is limited to 36 bits (64GB).

I had assumed that too, but it's not.  That was the case for 32-bit
processors with PAE support.  64-bit processors have the same physical
address space limit in both long mode and PAE paging mode, and for
early Intel processors that limit happened to be 64GB as well.
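
For reference, the effective limit on a given CPU can be read from
CPUID leaf 0x80000008, where EAX[7:0] reports the physical address
bits.  A minimal userspace sketch (assuming GCC/clang and <cpuid.h>,
not anything from the SeaBIOS tree):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 0x80000008: EAX bits 7:0 = physical address bits. */
    if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 0x80000008 not available\n");
        return 1;
    }

    unsigned int phys_bits = eax & 0xff;

    /* 36 bits -> 64GB, 40 bits -> 1TB, 46 bits -> 64TB. */
    printf("physical address bits: %u (max %llu GB addressable)\n",
           phys_bits, 1ULL << (phys_bits - 30));
    return 0;
}

An early Intel 64-bit part prints 36 bits (64GB) there; more recent
CPUs typically report somewhere between 39 and 52 bits.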

> Is the problem that SeaBIOS created PCI mappings >4G or is the problem
> that SeaBIOS created PCI mappings >64G?

Good question.  Older Linux kernels have known problems with mappings
above 64TB (aka 46 phys-bits).  This is addressed by commit a6ed6b701f0a
("limit address space used for pci devices.").

John, can you check with your guests?  Try reducing pci_mem64_top and
see if things start working at some point.
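
To make that concrete, the kind of clamping the commit does can be
sketched like this (illustrative only; choose_pci_mem64_top() and
PCI_HOLE64_MAX_BITS are made-up names, not the actual QEMU or SeaBIOS
code):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical cap at 46 phys-bits (64TB), per the commit's rationale. */
#define PCI_HOLE64_MAX_BITS 46

/*
 * Sketch: derive the top of the 64-bit PCI window from the CPU's
 * physical address bits, but never go above the 64TB cap so older
 * Linux guests keep working.
 */
static uint64_t choose_pci_mem64_top(unsigned int phys_bits)
{
    unsigned int bits = phys_bits;

    if (bits > PCI_HOLE64_MAX_BITS)
        bits = PCI_HOLE64_MAX_BITS;

    return UINT64_C(1) << bits;
}

int main(void)
{
    /* e.g. a host reporting 52 phys-bits still gets a 64TB window top. */
    printf("top = 0x%" PRIx64 "\n", choose_pci_mem64_top(52));
    return 0;
}

Stepping that cap down (46 -> 44 -> 40 -> 36) and retesting should show
at which point the affected guests start working, which is what the
"try reducing pci_mem64_top" suggestion amounts to.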

take care,
  Gerd