(2011/10/12 2:00), Lai Jiangshan wrote:
From: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Currently, an NMI is blindly sent to all the vCPUs when an NMI button
event happens. This doesn't properly emulate real hardware, on which an
NMI button event triggers LINT1. Because of this, the NMI is sent to the
processor even when LINT1 is masked in the LVT. For example, this causes
the problem that kdump initiated by NMI sometimes doesn't work on KVM,
because kdump assumes NMI is masked on CPUs other than CPU0.
With this patch, the KVM_NMI ioctl is handled as follows.

- When the in-kernel irqchip is enabled, the KVM_NMI ioctl is handled
  as a request to trigger LINT1 on the processor. LINT1 is emulated in
  the in-kernel irqchip.

- When the in-kernel irqchip is disabled, the KVM_NMI ioctl is handled
  as a request to inject an NMI into the processor. This assumes LINT1
  is already emulated in userland.
(laijs) Changed from v1:
- Add KVM_NMI API document
- Add KVM_CAP_USER_NMI
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Tested-by: Lai Jiangshan <laijs@cn.fujitsu.com>
 Documentation/virtual/kvm/api.txt |   20 ++++++++++++++++++++
 arch/x86/kvm/irq.h                |    1 +
 arch/x86/kvm/lapic.c              |    7 +++++++
 arch/x86/kvm/x86.c                |   12 ++++++++++++
 include/linux/kvm.h               |    3 +++
 5 files changed, 43 insertions(+), 0 deletions(-)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index b0e4b9c..5c24cc3 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1430,6 +1430,26 @@ is supported; 2 if the processor requires all virtual
 machines to have an RMA, or 1 if the processor can use an RMA but
 doesn't require it, because it supports the Virtual RMA (VRMA) facility.
+4.64 KVM_NMI
+
+Capability: KVM_CAP_USER_NMI
+Architectures: x86
+Type: vcpu ioctl
+Parameters: none
+Returns: 0 on success, -1 on error
+
+This ioctl injects an NMI into the vcpu.
+
+With the capability KVM_CAP_LAPIC_NMI, the KVM_NMI ioctl is handled as
+follows:
+
+- When the in-kernel irqchip is enabled, the KVM_NMI ioctl is handled
+  as a request to trigger LINT1 on the processor. LINT1 is emulated in
+  the in-kernel lapic irqchip.
+
+- When the in-kernel irqchip is disabled, the KVM_NMI ioctl is handled
+  as a request to inject an NMI into the processor. This assumes LINT1
+  is already emulated in the userland lapic.
+
 5. The kvm_run structure

 Application code obtains a pointer to the kvm_run structure by
diff --git a/arch/x86/kvm/irq.h b/arch/x86/kvm/irq.h
index 53e2d08..0c96315 100644
--- a/arch/x86/kvm/irq.h
+++ b/arch/x86/kvm/irq.h
@@ -95,6 +95,7 @@ void kvm_pic_reset(struct kvm_kpic_state *s);
 void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu);
 void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu);
 void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu);
+void kvm_apic_lint1_deliver(struct kvm_vcpu *vcpu);
 void __kvm_migrate_apic_timer(struct kvm_vcpu *vcpu);
 void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu);
 void __kvm_migrate_timers(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 57dcbd4..87fe36a 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1039,6 +1039,13 @@ void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu)
 	kvm_apic_local_deliver(apic, APIC_LVT0);
 }

+void kvm_apic_lint1_deliver(struct kvm_vcpu *vcpu)
+{
+	struct kvm_lapic *apic = vcpu->arch.apic;
+
+	kvm_apic_local_deliver(apic, APIC_LVT1);
+}
+
 static struct kvm_timer_ops lapic_timer_ops = {
 	.is_periodic = lapic_is_periodic,
 };
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 84a28ea..6862ef7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2729,12 +2729,24 @@ static int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
 	return 0;
 }

+#ifdef KVM_CAP_LAPIC_NMI
+static int kvm_vcpu_ioctl_nmi(struct kvm_vcpu *vcpu)
+{
+	if (irqchip_in_kernel(vcpu->kvm))
+		kvm_apic_lint1_deliver(vcpu);
+	else
+		kvm_inject_nmi(vcpu);
+
+	return 0;
+}
+#else
 static int kvm_vcpu_ioctl_nmi(struct kvm_vcpu *vcpu)
 {
 	kvm_inject_nmi(vcpu);

 	return 0;
 }
+#endif
I don't think we need to keep the old kvm_vcpu_ioctl_nmi() behavior,
because it's clearly a bug.
Regards, Kenji Kaneshige