(2011/10/10 19:26), Avi Kivity wrote:
On 10/10/2011 08:06 AM, Lai Jiangshan wrote:
From: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Currently, an NMI interrupt is blindly sent to all vCPUs when an NMI button event happens. This does not properly emulate real hardware, on which an NMI button event triggers LINT1. Because of this, the NMI is delivered to the processor even when LINT1 is masked in the LVT. For example, this causes kdump initiated by NMI to sometimes fail on KVM, because kdump assumes NMI is masked on CPUs other than CPU0.
With this patch, the KVM_NMI ioctl is handled as follows.
- When the in-kernel irqchip is enabled, the KVM_NMI ioctl is handled as a
request to trigger LINT1 on the processor. LINT1 is emulated by the in-kernel irqchip (see the sketch after this list).
- When the in-kernel irqchip is disabled, the KVM_NMI ioctl is handled as a
request to inject an NMI into the processor. This assumes LINT1 is already emulated in userland.
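For reference, here is a rough sketch of what the LINT1 delivery helper could look like if built on top of the existing LVT delivery code in arch/x86/kvm/lapic.c. This is a reconstruction for illustration, not the hunk from this series; it assumes the in-tree kvm_apic_local_deliver() helper and the APIC_LVT1 register offset.

/*
 * Sketch only: assumes it sits in arch/x86/kvm/lapic.c next to the
 * existing helpers.  kvm_apic_local_deliver() already refuses to
 * deliver when APIC_LVT_MASKED is set in the LVT entry, which is the
 * behaviour the changelog asks for.
 */
void kvm_apic_lint1_deliver(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (apic)
		kvm_apic_local_deliver(apic, APIC_LVT1);
	else
		/* No local APIC at all: fall back to a plain NMI. */
		kvm_inject_nmi(vcpu);
}

With this shape, a guest that masks LVT LINT1 on CPUs other than CPU0 (as kdump does) never sees the button NMI there.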
Please add a KVM_NMI section to Documentation/virtual/kvm/api.txt.
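To make such a documentation entry concrete, the userspace side is just a plain vcpu ioctl. A minimal sketch, with all error handling, guest memory setup and the run loop omitted:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int nmi_button_demo(void)
{
	int kvm  = open("/dev/kvm", O_RDWR);
	int vm   = ioctl(kvm, KVM_CREATE_VM, 0);

	/* Create the in-kernel irqchip before any vCPU ... */
	ioctl(vm, KVM_CREATE_IRQCHIP);

	int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

	/*
	 * ... so that, with this patch, KVM_NMI is routed through the
	 * vCPU's LVT LINT1 entry instead of being injected blindly.
	 */
	return ioctl(vcpu, KVM_NMI);
}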
-static int kvm_vcpu_ioctl_nmi(struct kvm_vcpu *vcpu)
-{
-	kvm_inject_nmi(vcpu);
-	return 0;
-}
 static int vcpu_ioctl_tpr_access_reporting(struct kvm_vcpu *vcpu,
 					    struct kvm_tpr_access_ctl *tac)
 {
@@ -3038,9 +3031,10 @@ long kvm_arch_vcpu_ioctl(struct file *fi
 		break;
 	}
 	case KVM_NMI: {
-		r = kvm_vcpu_ioctl_nmi(vcpu);
-		if (r)
-			goto out;
+		if (irqchip_in_kernel(vcpu->kvm))
+			kvm_apic_lint1_deliver(vcpu);
+		else
+			kvm_inject_nmi(vcpu);
 		r = 0;
 		break;
 	}
Why did you drop kvm_vcpu_ioctl_nmi()?
Please add (and document) a KVM_CAP flag that lets userspace know the new behaviour is supported.
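Something like the probe below is what userspace would do with such a flag; KVM_CAP_LINT1_NMI is a placeholder name and number, nothing in this series defines it yet:

#include <sys/ioctl.h>
#include <linux/kvm.h>

#ifndef KVM_CAP_LINT1_NMI
#define KVM_CAP_LINT1_NMI 9999	/* hypothetical: number would be assigned upstream */
#endif

/* Returns non-zero if KVM_NMI is delivered through LINT1 (new behaviour). */
static int nmi_goes_through_lint1(int kvm_fd)
{
	/* KVM_CHECK_EXTENSION on /dev/kvm returns > 0 when a capability exists. */
	return ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_LINT1_NMI) > 0;
}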
Sorry for the delayed response.
I don't understand why a new KVM_CAP flag is needed.
I think the old behavior was clearly a bug, and the new behavior is not a new capability. Furthermore, the kvm patch and the qemu patch in this patchset can be applied independently. If only the kvm patch is applied, the NMI bug in the in-kernel irqchip is fixed and qemu's NMI behavior is unchanged. If only the qemu patch is applied, the qemu NMI bug is fixed and the NMI behavior in the in-kernel irqchip is unchanged.
Regards,
Kenji Kaneshige