Hi,

On Tue, Apr 24, 2012 at 10:52:24AM +0300, Gleb Natapov wrote:
> On Mon, Apr 23, 2012 at 02:31:15PM +0200, Vasilis Liaskovitis wrote:
> > The 440FX spec mentions: "The address range from the top of main DRAM to
> > 4 Gbytes (top of physical memory space supported by the 440FX PCIset) is
> > normally mapped to PCI. The PMC forwards all accesses within this address
> > range to PCI."
> >
> > What we probably want is for the initial memory map creation to take into
> > account all dimms specified (both populated and unpopulated).
> Yes.
> > So "-m 1G -device dimm,size=1G,populated=true -device dimm,size=1G,populated=false"
> > would create a system map with the top of memory and the start of the
> > PCI hole at 2G.
> What does -m 1G mean on this command line? Isn't it redundant?
Yes, this was redundant in the original concept.
> Maybe we should make -m create an unpluggable, populated slot starting at
> address 0. Then your config above would specify 3G of memory, with 2G
> populated (the first 1G of which is not removable) and 1G unpopulated. The
> PCI hole starts above 3G.
I agree: -m should mean one big unpluggable slot.
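
If I read your proposal right, the example above would then lay out as
follows (just a sketch; "slot0" is an illustrative label, not a real
device name):

-m 1G -device dimm,size=1G,populated=true -device dimm,size=1G,populated=false

slot0 (-m):  [0, 1G)   populated, not hot-removable
dimm0:       [1G, 2G)  populated, hot-removable
dimm1:       [2G, 3G)  unpopulated
PCI hole:    starts at 3G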
So in the new proposal, "-device dimm,populated=true" means a hot-removable
dimm that has already been hotplugged.
A question here is when exactly the initial hot-add event for such a dimm
should be delivered. If the relevant OSPM has not been initialized yet
(e.g. the acpi_memhotplug module in a Linux guest still needs to be loaded),
the guest may not see the event. This is a general issue of course, but with
initially populated hot-removable dimms it may be a bigger one. Can OSPM
ACPI initialization be detected?

Or maybe you are suggesting that "populated=true" is part of initial memory
(i.e. not hot-added, but still hot-removable). In that case, though, the
guest OS may use it for bootmem allocations, making hot-remove more likely
to fail at the memory-offlining stage.
> > This may require some shifting of physical address offsets around
> > 3.5GB-4GB. Is this the minimum PCI hole allowed?
> Currently it is 1G in QEMU code.
ok
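(So with a 1G minimum hole, the top of low memory is capped at
4G - 1G = 3G; any memory beyond that has to live above 4G.)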
> > E.g. if we specify 4x1GB DIMMs (only the first initially populated):
> >
> > -m 1G -device dimm,size=1G,populated=true -device dimm,size=1G,populated=false
> > -device dimm,size=1G,populated=false -device dimm,size=1G,populated=false
> >
> > we create the following memory map:
> >
> > dimm0: [0, 1G)  dimm1: [1G, 2G)  dimm2: [2G, 3G)  dimm3: [4G, 5G)
> >
> > or dimm3 is split into [3G, 3.5G) and [4G, 4.5G).
> >
> > Does either of these options sound reasonable?
> We shouldn't split dimms, IMO; that's just unnecessary complication.
> Better to make a bigger PCI hole.
ok
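
For what it's worth, here is a minimal sketch of the placement rule we seem
to be converging on, in C (hypothetical names like DimmCfg and
layout_memory; this is not actual QEMU code): place -m and then each dimm
in order below the hole while they fit, and push any dimm that would cross
the hole boundary up above 4G as a whole.

#include <inttypes.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define GB(x)         ((uint64_t)(x) << 30)
#define PCI_HOLE_MIN  GB(1)                   /* current QEMU minimum hole */
#define LOW_MEM_LIMIT (GB(4) - PCI_HOLE_MIN)  /* 3G: top of low memory */

typedef struct DimmCfg {
    uint64_t size;
    bool populated;  /* affects hotplug state, not placement */
} DimmCfg;

/* Lay out -m as one non-removable slot at 0, then each dimm after the
 * previous one.  A dimm that would cross LOW_MEM_LIMIT is placed above
 * 4G as a whole (no splitting).  Fills base[] with dimm start addresses
 * and returns the start of the PCI hole. */
static uint64_t layout_memory(uint64_t m_size, const DimmCfg *dimms,
                              size_t n, uint64_t *base)
{
    uint64_t next = m_size;       /* assumes -m itself fits below the hole */
    uint64_t hole_start = m_size;

    for (size_t i = 0; i < n; i++) {
        if (next + dimms[i].size <= LOW_MEM_LIMIT) {
            base[i] = next;
            next += dimms[i].size;
            hole_start = next;    /* hole begins at top of low memory */
        } else {
            if (next < GB(4)) {
                next = GB(4);     /* jump over the hole instead of splitting */
            }
            base[i] = next;
            next += dimms[i].size;
        }
    }
    return hole_start;
}

int main(void)
{
    DimmCfg dimms[] = { { GB(1), true }, { GB(1), false }, { GB(1), false } };
    uint64_t base[3];
    uint64_t hole = layout_memory(GB(1), dimms, 3, base);

    printf("PCI hole starts at %" PRIu64 "M\n", hole >> 20);
    for (size_t i = 0; i < 3; i++) {
        printf("dimm%zu: [%" PRIu64 "M, %" PRIu64 "M)\n", i,
               base[i] >> 20, (base[i] + dimms[i].size) >> 20);
    }
    return 0;
}

For -m 1G plus three 1G dimms (the equivalent of the 4x1G example above,
with -m taking the place of the first populated dimm), this prints a PCI
hole starting at 3G and the last dimm at [4G, 5G).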
thanks,
- Vasilis