Commenting a little bit late, but since you've said that you are working on a new version of the patch... better late than never.
On Thu, Aug 11, 2011 at 04:39:38PM +0200, Vasilis Liaskovitis wrote:
> Hi,
>
> I am testing a set of experimental patches for memory hotplug on x86_64 host/guest combinations. I have implemented this similarly to cpu-hotplug.
>
> A dynamic SSDT table with all memory devices is created at boot time. This table calls static methods from the DSDT. A byte array indicates which memory device is online or not. This array is kept in sync with a qemu-kvm bitmap array through ioport 0xaf20. Qemu-kvm updates this bitmap on a "mem_set" command and triggers an ACPI event.
>
> Memory devices are 128MB in size (to match /sys/devices/memory/block_size_bytes on x86_64). They are constructed dynamically in src/ssdt-mem.asl, similarly to hotpluggable CPUs. The _CRS memstart-memend attribute for each memory device is defined accordingly, skipping the hole at 0xe0000000-0x100000000. Hotpluggable memory is always located above 4GB.
What is the reason for this limitation?
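Also, to make sure I understand the mechanism: as I read the description, the guest-visible side boils down to one status byte per 128MB device, read through ioport 0xaf20 and flipped by mem_set. Roughly like this (a sketch of my reading, not the patch's actual code; the names, bounds and exact register indexing are guesses):

#include <stdint.h>

#define MEM_DEV_SIZE  (128ULL << 20)  /* 128MB per hotpluggable device */
#define MAX_MEM_DEVS  255             /* guessed upper bound on slots */

static uint8_t mem_sts[MAX_MEM_DEVS]; /* 1 = online, 0 = offline */

/* Per the description, hotpluggable memory sits above 4GB, so device N
 * would cover [4GB + N*128MB, 4GB + (N+1)*128MB). */
static uint64_t memdev_base(unsigned dev)
{
    return (4ULL << 30) + (uint64_t)dev * MEM_DEV_SIZE;
}

/* What a guest read at 0xaf20 + dev (or however the array is indexed)
 * would return: the status byte the SSDT's per-device method checks. */
static uint8_t memhp_sts_read(unsigned dev)
{
    return dev < MAX_MEM_DEVS ? mem_sts[dev] : 0;
}

/* What mem_set would do on the host side: flip the status byte and (in
 * the real patch) raise the ACPI event so the guest re-evaluates. */
static void memhp_set(unsigned dev, int online)
{
    if (dev < MAX_MEM_DEVS)
        mem_sts[dev] = online ? 1 : 0;
}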
> Qemu-kvm sets the upper bound of hotpluggable memory with "maxmem=[total memory in MB]" on the command line. maxmem is an argument to "-m", similar to maxcpus for -smp. E.g. "-m 1024,maxmem=2048" on the qemu command line will create memory devices for 2GB of RAM, enabling only 1GB initially.
>
> The qemu monitor triggers a memory hotplug with: (qemu) mem_set [memory range in MBs] online
As far as I can see, mem_set does not take a memory range as a parameter. The parameter is the amount of memory to add or remove, and memory is always added to or removed from the top.
This is not flexible enough; fine-grained control over memory slots is needed. What about exposing the memory slot configuration on the command line, like this:
-memslot mem=size,populated=yes|no
adding one of these for each slot.
mem_set would then take a slot id to populate or depopulate, just as cpu_set takes a cpu slot number to remove instead of simply yanking the cpu with the highest slot id.
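For example (hypothetical syntax, only to illustrate the idea):

  qemu ... -memslot mem=512,populated=yes -memslot mem=512,populated=no

and then in the monitor:

  (qemu) mem_set 1 online

would populate the second slot (slot ids counted from 0 here, just for the example), and "mem_set 1 offline" would take exactly that slot out again, regardless of what the other slots hold.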
-- Gleb.