On 03/16/2012 03:09 PM, Vasilis Liaskovitis wrote:
Hi,
On Thu, Mar 15, 2012 at 02:01:38PM +0200, Gleb Natapov wrote:
Commenting a little bit late, but since you've said that you are working on a new version of the patch... better late than never.
On Thu, Aug 11, 2011 at 04:39:38PM +0200, Vasilis Liaskovitis wrote:
Hi,
I am testing a set of experimental patches for memory-hotplug on x86_64 host / guest combinations. I have implemented this in a similar way to cpu-hotplug.
A dynamic SSDT table with all memory devices is created at boot time. This table calls static methods from the DSDT. A byte array indicates which memory devices are online. This array is kept in sync with a qemu-kvm bitmap through ioport 0xaf20. Qemu-kvm updates the bitmap on a "mem_set" command and triggers an ACPI event.
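Roughly, the qemu-kvm side looks like this (simplified sketch, not the exact patch; the handler and variable names here are just for illustration, and the old-style ioport API is assumed):

    #define MEM_HOTPLUG_BASE 0xaf20          /* ioport the guest ASL reads */
    #define MAX_MEM_DEVS     255

    /* one status bit per 128MB memory device, mirrored into the guest */
    static uint8_t mem_sts[(MAX_MEM_DEVS + 7) / 8];

    static uint32_t mem_sts_readb(void *opaque, uint32_t addr)
    {
        return mem_sts[addr - MEM_HOTPLUG_BASE];
    }

    static void mem_hotplug_init(void)
    {
        /* mem_set flips a bit in mem_sts and then raises the ACPI GPE so
           the guest re-evaluates the memory devices' _STA methods */
        register_ioport_read(MEM_HOTPLUG_BASE, sizeof(mem_sts), 1,
                             mem_sts_readb, NULL);
    }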
Memory devices are 128MB in size (to match /sys/devices/system/memory/block_size_bytes on x86_64). They are constructed dynamically in src/ssdt-mem.asl, similarly to hotpluggable CPUs. The _CRS memstart-memend range for each memory device is defined accordingly, skipping the PCI hole at 0xe0000000 - 0x100000000. Hotpluggable memory is always located above 4GB.
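The per-device start address is derived along these lines (illustrative sketch of the SSDT generation, not the exact code; the helper name is made up):

    #define MEM_DEV_SIZE   (128ULL << 20)          /* 128MB per memory device */
    #define PCI_HOLE_START 0xe0000000ULL
    #define PCI_HOLE_END   0x100000000ULL

    /* base address of memory device i, skipping the PCI hole */
    static uint64_t memdev_base(int i)
    {
        uint64_t base = (uint64_t)i * MEM_DEV_SIZE;
        if (base >= PCI_HOLE_START)
            base += PCI_HOLE_END - PCI_HOLE_START;
        return base;
    }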
What is the reason for this limitation?
We currently model a PCI hole from below_4g_mem_size up to 4GB, see the i440fx_init call in pc_init1. The decision was discussed here: http://patchwork.ozlabs.org/patch/105892/ - afaict because there was no clear resolution on using a top-of-memory register. So hotplugging will start at 4GB + above_4g_mem_size, unless we can model the PCI hole more accurately, the way real hardware does.
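In other words, with the current layout the first hotpluggable address would simply be (sketch only; above_4g_mem_size is the existing variable from pc_init1):

    uint64_t hotplug_start = 0x100000000ULL + above_4g_mem_size;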
Qemu-kvm sets the upper bound of hotpluggable memory with "maxmem = [total memory in MB]" on the command line. maxmem is an argument to "-m", similar to maxcpus for -smp. E.g. "-m 1024,maxmem=2048" on the qemu command line will create memory devices for 2GB of RAM, enabling only 1GB initially.
The qemu monitor triggers a memory hotplug with: (qemu) mem_set [memory range in MBs] online
As far as I can see, mem_set does not take a memory range as a parameter. The parameter is the amount of memory to add/remove, and memory is added to/removed from the top.
This is not flexible enough. Fine-grained control over memory slots is needed. What about exposing the memory slot configuration on the command line like this:
-memslot mem=size,populated=yes|no
adding one of those for each slot.
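For example (purely illustrative, since the option does not exist yet), a guest with 1GB populated out of 2GB worth of slots could be described as:

    qemu-system-x86_64 -memslot mem=512,populated=yes \
                       -memslot mem=512,populated=yes \
                       -memslot mem=512,populated=no \
                       -memslot mem=512,populated=no ...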
Yes, I agree we need this. Is the idea to model all physical DIMMs? For initial system RAM, does it make sense to explicitly specify slots at the command line, or to infer them?
I think we can allocate a new qemu RAM MemoryRegion for each newly hotplugged slot/DIMM, so there will be a 1-1 mapping between newly populated slots and qemu RAM memory regions. Perhaps we want the initial memory allocation to also comply with physical slot/DIMM modeling; initial (cold) RAM is currently created as a single MemoryRegion, pc.ram.
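Roughly along these lines (minimal sketch, assuming the current MemoryRegion API without an owner argument; the function and region names are made up):

    /* create a RAM region for one hotplugged DIMM and map it into the
       system address space at 'base' */
    static void hotplug_dimm(uint64_t base, uint64_t size, const char *name)
    {
        MemoryRegion *mr = g_malloc0(sizeof(*mr));

        memory_region_init_ram(mr, name, size);
        memory_region_add_subregion(get_system_memory(), base, mr);
    }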
Also, in kvm we can easily run out of kvm_memory_slots (10 slots per VM and 32 system-wide, I think).
mem_set will get a slot id to populate/depopulate, just like cpu_set gets a cpu slot number to remove, rather than just yanking the cpus with the highest slot ids.
Right, but I think for upstream qemu people would like to eventually use device_add instead of a new mem_set command. Pretty much the same way as cpu hotplug?
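For illustration only (neither interface exists in this form yet), the two styles would look something like:

    (qemu) mem_set 3 online              # populate slot 3 via a monitor command
    (qemu) device_add dimm,id=dimm3      # hypothetical qdev route, once a dimm device type exists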
For this to happen, memory devices should be modeled in QOM/qdev. Are we planning on keeping a CPUSocket structure for CPUs? Or perhaps modelling a memory controller
I'd rather dump the CPUSocket structure unless it's really required; it was introduced just to provide a hotplug-able ICC bus for cpus, since hot-plug on sysbus was disabled.
is the right way? What type should the memory controller/devices be a child of?
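Just to sketch what I have in mind (illustrative only, assuming a plain TYPE_DEVICE-derived type; the "dimm" name and its properties are assumptions, not part of the posted patches):

    typedef struct DimmDevice {
        DeviceState qdev;
        uint32_t    slot;        /* which physical slot this DIMM occupies */
        uint64_t    size;        /* size in bytes */
    } DimmDevice;

    static Property dimm_properties[] = {
        DEFINE_PROP_UINT32("slot", DimmDevice, slot, 0),
        DEFINE_PROP_UINT64("size", DimmDevice, size, 0),
        DEFINE_PROP_END_OF_LIST(),
    };

    static void dimm_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);
        dc->props = dimm_properties;
    }

    static TypeInfo dimm_info = {
        .name          = "dimm",
        .parent        = TYPE_DEVICE,
        .instance_size = sizeof(DimmDevice),
        .class_init    = dimm_class_init,
    };

    static void dimm_register_types(void)
    {
        type_register_static(&dimm_info);
    }
    type_init(dimm_register_types)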
I'll try to resubmit in a few weeks' time, though depending on feedback, the QOM/qdev modeling of memory devices will probably take longer.
thanks,
- Vasilis