Hi,
On Fri, Sep 21, 2012 at 04:03:26PM -0600, Eric Blake wrote:
> On 09/21/2012 05:17 AM, Vasilis Liaskovitis wrote:
> > Guest can respond to ACPI hotplug events, e.g. with the _EJ or _OST method. This patch implements a tail queue to store guest notifications for memory hot-add and hot-remove requests.
> > Guest responses to memory hotplug commands can be detected on a per-dimm basis with the new hmp command "info memhp" or the new qmp command "query-memhp".
> Naming doesn't match the QMP code.
will fix.
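
For context, the queueing works roughly like this (simplified sketch, not the actual patch code; names are illustrative):

#include <stdbool.h>
#include <glib.h>
#include "qemu-queue.h"          /* QTAILQ_* macros; "qemu/queue.h" in newer trees */

typedef struct MemHpResult {
    char *dimmname;              /* id of the dimm the guest responded about */
    bool is_add;                 /* hot-add vs. hot-remove request */
    bool success;                /* definite result reported by the guest */
    QTAILQ_ENTRY(MemHpResult) next;
} MemHpResult;

static QTAILQ_HEAD(, MemHpResult) mem_hp_results =
    QTAILQ_HEAD_INITIALIZER(mem_hp_results);

/* Called once the guest acks or rejects a hotplug operation via _OST/_EJ. */
static void mem_hp_record_result(const char *dimm, bool is_add, bool success)
{
    MemHpResult *r = g_malloc0(sizeof(*r));

    r->dimmname = g_strdup(dimm);
    r->is_add = is_add;
    r->success = success;
    QTAILQ_INSERT_TAIL(&mem_hp_results, r, next);
}

The query command then just walks the queue with QTAILQ_FOREACH and reports each entry.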
> > Examples:
> >
> > (qemu) device_add dimm,id=ram0
> >
> > These notification items should probably be part of migration state (not yet implemented).
> In the case of libvirt driving migration, you already said in 10/19 that libvirt has to start the destination with the populated=on|off fields correct for each dimm according to the state it was in at the time the
That patch actually relaxes this restriction for the off->on direction, i.e. it allows the target VM to start without updated args after a dimm hot-add. (E.g. say the source was started with a dimm that is initially off. The dimm is hot-plugged and the VM is then migrated. With patch 10/19, the populated arg doesn't have to be updated on the target.) The other direction (on->off, i.e. hot-remove) still needs the correct arg change.
If libvirt/management layers guarantee that the dimm arguments are correctly updated, I don't see that we need patch 10/19 eventually.
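
Concretely, with the populated= property from 10/19 (option syntax illustrative, per the earlier patches in this series):

    source started with:   -device dimm,id=dimm0,populated=off
    (dimm0 is hot-added at runtime, then migration starts)
    destination needs:     -device dimm,id=dimm0,populated=on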
What I think is needed is another hmp/qmp command that reports which dimms are on/off at any given time, e.g.:

(monitor) info memory-hotplug
dimm0: off
dimm1: on
...
dimmN: off

This can be used on the source by libvirt / other management layers to find out which dimms are populated, and to construct the correct command line on the destination. Does this make sense to you?
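
As a rough sketch, the schema for such a state query could look something like this (names hypothetical, not part of this series):

{ 'type': 'DimmStateInfo',
  'data': { 'dimm': 'str', 'populated': 'bool' } }

{ 'command': 'query-memory-hotplug',
  'returns': ['DimmStateInfo'] }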
The current patch only deals with success/failure event notifications (not the on/off state of dimms), so it should probably be renamed to "query-memory-hotplug-events".
> host started the update. Can the host hot unplug memory after migration has started?
Good testcase. I would rather not allow any hotplug operations while a migration is in progress.
What do we currently do with PCI hotplug during migration? I found a discussion from a year ago suggesting the same thing as the simplest solution, but I don't know what is currently implemented: http://lists.nongnu.org/archive/html/qemu-devel/2011-07/msg01204.html
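
If we go with that simplest policy, the guard in the dimm hotplug path would be something like this (sketch only; migration_in_progress() is a placeholder name, not an existing QEMU helper):

#include <stdbool.h>

/* Placeholder: would be wired to the actual migration state. */
static bool migration_in_progress(void)
{
    return false;
}

static int dimm_hotplug_request(const char *dimm, bool add)
{
    if (migration_in_progress()) {
        return -1;   /* simplest solution: fail the request outright */
    }
    /* ... otherwise raise the ACPI hotplug event to the guest ... */
    return 0;
}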
> > +##
> > +# @MemHpInfo:
> > +#
> > +# Information about status of a memory hotplug command
> > +#
> > +# @dimm: the Dimm associated with the result
> > +#
> > +# @result: the result of the hotplug command
> > +#
> > +# Since: 1.3
> > +#
> > +##
> > +{ 'type': 'MemHpInfo',
> > +  'data': {'dimm': 'str', 'request': 'str', 'result': 'str'} }
> Should 'result' be a bool (true for success, false for still pending) or an enum, instead of a free-form string? Likewise, isn't 'request' going to be exactly one of two values (plug or unplug)?
agreed on 'request'.
'result' is also effectively a boolean, but with the values 'success' and 'failure' (rather than 'pending'). Items are only queued once the guest has given us a definite _OST or _EJ result, which is either success or failure. If an operation is still pending, nothing is queued here.
Perhaps queueing pending operations also has a use case, but that isn't addressed in this patch.
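
For the next version, the schema could then look something like this (sketch only):

{ 'enum': 'MemHpRequest',
  'data': [ 'hot-add', 'hot-remove' ] }

{ 'type': 'MemHpInfo',
  'data': { 'dimm': 'str', 'request': 'MemHpRequest', 'result': 'bool' } }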
thanks,
- Vasilis