On Mon, Oct 14, 2013 at 02:16:23PM +0200, Gerd Hoffmann wrote:
Hi,
And there is a slight difference between the PCI holes and the PCI address space mappings represented by MemoryRegion-s in QEMU.
Basically we only need to inform the BIOS where the PCI address spaces start, and simple "etc/pcimem64-start" + "etc/pcimem32-start" entries are just fine for that. And for now (memory hotplug) we need only the first one; the second one is pretty much hardcoded in SeaBIOS/QEMU.
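(For reference, a rough sketch of how a guest BIOS could pick up such an entry through QEMU's fw_cfg file interface. The ports (0x510/0x511) and the file-directory layout are the standard fw_cfg ones; the helper names and the lack of error handling are mine, and this is not the actual SeaBIOS code, which has its own romfile helpers for this:)

    /* Sketch: look up a named fw_cfg file (e.g. "etc/pcimem64-start")
     * and read its u64 payload, falling back to a default if absent.
     * Assumes an x86 (little-endian) guest. */
    #include <stdint.h>
    #include <string.h>

    #define FW_CFG_CTL      0x510   /* selector port (16-bit write) */
    #define FW_CFG_DATA     0x511   /* data port (byte-wide reads)  */
    #define FW_CFG_FILE_DIR 0x19    /* selector for the file directory */

    static inline void outw(uint16_t port, uint16_t val) {
        asm volatile("outw %0, %1" : : "a"(val), "Nd"(port));
    }
    static inline uint8_t inb(uint16_t port) {
        uint8_t val;
        asm volatile("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    struct fw_cfg_file {     /* directory entry; size/select are big-endian */
        uint32_t size;
        uint16_t select;
        uint16_t reserved;
        char     name[56];
    };

    static void fw_cfg_read(void *buf, int len) {
        uint8_t *p = buf;
        while (len--)
            *p++ = inb(FW_CFG_DATA);   /* data port streams the selected item */
    }

    static uint64_t fw_cfg_loadint(const char *name, uint64_t defval) {
        uint32_t count;
        outw(FW_CFG_CTL, FW_CFG_FILE_DIR);
        fw_cfg_read(&count, sizeof(count));
        count = __builtin_bswap32(count);          /* directory is big-endian */
        for (uint32_t i = 0; i < count; i++) {
            struct fw_cfg_file f;
            fw_cfg_read(&f, sizeof(f));
            if (!strcmp(f.name, name) && __builtin_bswap32(f.size) == 8) {
                uint64_t val;
                outw(FW_CFG_CTL, __builtin_bswap16(f.select));
                fw_cfg_read(&val, sizeof(val));
                return val;    /* integer payloads are little-endian */
            }
        }
        return defval;
    }

The BIOS would then do something like fw_cfg_loadint("etc/pcimem64-start", some_builtin_default) early in PCI setup.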
Yes, the 32-bit hole is basically the [ end-of-memory -> ioapic ] range. I can't see any reason to make that configurable; it is quite unlikely that this will ever change.
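(To spell that out in code: with the ioapic sitting at the usual 0xfec00000, the window is fully determined by the amount of low RAM. The sketch below is illustrative; only the BUILD_IOAPIC_ADDR constant matches what SeaBIOS actually uses, the struct and function names are invented:)

    /* Illustrative only: the fixed 32-bit PCI window. */
    #include <stdint.h>

    #define BUILD_IOAPIC_ADDR 0xfec00000u   /* where the hole ends */

    struct pci_window { uint64_t start, end; };

    static struct pci_window pci_hole32(uint32_t ram_below_4g)
    {
        /* [ end of low RAM .. ioapic ) -- nothing to configure */
        return (struct pci_window){ .start = ram_below_4g,
                                    .end   = BUILD_IOAPIC_ADDR };
    }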
For the 64-bit window it makes sense to make it configurable, to give qemu a little more control over the address space layout.
Hmm, okay, but the only reason we want to control this is that it conflicts with memory hotplug, no?
I vaguely recall we discussed just passing in the amount of hot-pluggable memory in the past, but I don't remember why we decided against it. Could you refresh my memory?
To me it makes more sense to just go the direct route and say "please put the 64-bit BARs at this location" rather than the indirect "we might want to hotplug $thatmuch memory" and then expect the BIOS to leave that much room.
Only if the $newfeature address is not under BIOS control. I know that the BIOS is simplistic, so all it cares about ATM is the PCI window, but I can't shake the impression that we are better off telling the guest what's going on rather than what it should do.
In particular, the issue that was discussed (what to do if the PCI start is set by the host to below the end of RAM) will simply go away if we pass in an incremental value: there will be no invalid configurations.
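(In code, the point is easy to see: if the host hands the BIOS "keep this much room above RAM" instead of an absolute address, the BIOS derives the start itself and can't be handed a value below the end of RAM. A minimal sketch, all names invented:)

    /* Hypothetical "incremental" variant: the host passes the amount of
     * address space to keep free above RAM (e.g. for memory hotplug)
     * instead of an absolute etc/pcimem64-start.  By construction the
     * result can never fall below the end of RAM. */
    #include <stdint.h>

    static uint64_t pcimem64_start(uint64_t ram_end_above_4g,
                                   uint64_t hotplug_reserve)
    {
        uint64_t start = ram_end_above_4g + hotplug_reserve;
        /* align up to 1G so the window stays nicely mappable */
        return (start + (1ULL << 30) - 1) & ~((1ULL << 30) - 1);
    }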
If we ever want to reserve more address space for other reasons, it'll be easier with the direct variant, as the BIOS doesn't need to be aware that we'll need some address space for $newfeature too.
We can call it "reserved memory space" rather than hotplug memory if you prefer.
A more flexible approach would be a separate table (like the e820 table, but only for PCI ranges), but it seems a bit of overkill for now (I can't picture a need for more than two PCI address space mappings).
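(If it ever came to that, such a table could be as simple as an array of entries like the following; a hypothetical layout for illustration, not an existing fw_cfg file:)

    /* Hypothetical e820-style table of PCI windows, passed as one
     * fw_cfg file containing an array of these entries. */
    #include <stdint.h>

    struct pci_range {
        uint64_t start;
        uint64_t size;
        uint32_t flags;    /* e.g. 32-bit vs. 64-bit window */
    } __attribute__((packed));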
It's not unthinkable. Multiple ECAM regions (for multi-root systems) can make holes in the address space.
Sounds pretty theoretical ...
What? Multiple PCI roots?
Also, we just ignore everything above the ioapic, but that's not a must; we could maybe use the address space above the ioapic.
Any reason why we should do that?
cheers, Gerd
The 32-bit address space is constrained, so using it is preferable for 32-bit guests ...