[coreboot] AMD platform: IO-APIC => Local APIC delivery modes

Andriy Gapon avg at FreeBSD.org
Fri Oct 7 16:59:15 CEST 2016

I apologize that my questions are not about coreboot and I am sort of hijacking
the forum.  But I think that this is a place where people with good knowledge of
the hardware can be found.  Maybe my issue will be of interest to you as well.
And I hope to get some help.  Maybe even some secrets shared :-)

First, the hardware that I am talking about: it's a typical consumer system with
a Family 10h AMD processor, an SB700 southbridge and a 780G northbridge.

What I want to achieve is to have an interrupt generated by a SuperIO chip
(external to the chipset) delivered to a CPU as an NMI.
The interrupt is IRQ3, connected to pin 3 of the IO-APIC, and everything just
works if I forget about the NMI:
device -> IO-APIC pin 3 -> (fixed mode interrupt message) -> Local APIC ->
interrupt handler at the configured vector.
NMI is what makes it interesting.
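In case the exact programming matters, here is roughly what setting up a
redirection-table entry looks like in C (the conventional 0xFEC00000 base and
the IOREGSEL/IOWIN indirection from the IO-APIC datasheet; the helper names are
mine, just for illustration):

```c
#include <stdint.h>

/* IO-APIC indirect register access: write a register index to
 * IOREGSEL (base + 0x00), then read/write the data through IOWIN
 * (base + 0x10).  Redirection entry N occupies registers
 * 0x10 + 2*N (low dword) and 0x11 + 2*N (high dword). */
#define IOAPIC_BASE  0xFEC00000UL
#define IOREGSEL     0x00
#define IOWIN        0x10

/* Delivery-mode field (bits 10:8 of the low dword). */
enum {
	DELMODE_FIXED  = 0x0,	/* normal vectored interrupt       */
	DELMODE_NMI    = 0x4,	/* what I am trying to get         */
	DELMODE_EXTINT = 0x7,	/* "act as if an 8259 is attached" */
};

/* Build a 64-bit redirection-table entry: edge-triggered,
 * active-high, physical destination mode. */
uint64_t ioapic_rte(uint8_t vector, unsigned delmode,
		    int masked, uint8_t dest_apic_id)
{
	uint64_t e = vector;			/* bits 7:0   */
	e |= (uint64_t)(delmode & 7) << 8;	/* bits 10:8  */
	e |= (uint64_t)(masked ? 1 : 0) << 16;	/* bit 16     */
	e |= (uint64_t)dest_apic_id << 56;	/* bits 63:56 */
	return e;
}

/* MMIO write of one entry (not called here; needs ring 0 and a
 * mapping of the IO-APIC page). */
void ioapic_write_rte(volatile uint32_t *ioapic, int pin, uint64_t e)
{
	ioapic[IOREGSEL / 4] = 0x10 + 2 * pin;
	ioapic[IOWIN / 4]    = (uint32_t)e;
	ioapic[IOREGSEL / 4] = 0x11 + 2 * pin;
	ioapic[IOWIN / 4]    = (uint32_t)(e >> 32);
}
```

The working fixed-mode setup is ioapic_rte(vector, DELMODE_FIXED, 0,
bsp_apic_id) written to pin 3; the failing experiment below is the same entry
with DELMODE_NMI (the vector field is ignored for NMI delivery).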

So, the first thing I tried was simply to set the NMI delivery mode for the pin.
Unfortunately, that does not work: the system gets reset as soon as the
interrupt is generated.
So, my first question is: can that be made to work at all?
Perhaps, there are some registers that need to be correctly programmed in the
chipset or in the processor for that to work.
Or maybe it cannot work at all.  For example, for the Intel ICH9R southbridge it
is documented that SMI, NMI and INIT must not be used.  I couldn't find any such
restriction explicitly stated for the AMD chipsets.

So, I decided not to give up and to try using the legacy interrupt mode to get
what I want.  I think that that's how the Linux NMI watchdog driver used to work.
So, I programmed LINT0 and LINT1 for the NMI delivery mode (on all cores, all two
of them), enabled the legacy PIC (I guess that it's built into the chipset) and
made sure that the interrupt was unmasked.  But absolutely nothing happened when
the interrupt was generated.
From this I concluded that the PIC is not connected to the CPUs.
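In code, the LINT programming and the PIC unmasking amount to something like
this (the local APIC base assumed at the default 0xFEE00000; the 0x350/0x360
offsets are the APIC350/APIC360 registers from the BKDG; the helper names are
mine):

```c
#include <stdint.h>

/* Local-APIC LVT entries for LINT0/LINT1 live at offsets 0x350 and
 * 0x360 from the local-APIC MMIO base (0xFEE00000 by default).
 * Bits 10:8 select the delivery mode, 0x4 = NMI; bit 16 is the mask
 * bit; the vector field is ignored for NMI delivery. */
#define LAPIC_BASE      0xFEE00000UL
#define LAPIC_LVT_LINT0 0x350
#define LAPIC_LVT_LINT1 0x360

uint32_t lvt_nmi_entry(void)
{
	return (uint32_t)0x4 << 8;	/* delivery mode NMI, unmasked */
}

/* Unmasking IRQ3 in the legacy 8259A master PIC means clearing bit 3
 * of its interrupt-mask register (I/O port 0x21). */
uint8_t pic_unmask_irq3(uint8_t current_imr)
{
	return current_imr & (uint8_t)~(1u << 3);
}
```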

Just to be sure that I hadn't made any mistake with the PIC programming I
decided to check that the only other possibility worked, that is, that the PIC
is connected to pin 0 of the IO-APIC.  By default the pin was masked (by the OS,
I guess), so I programmed it to ExtINT delivery mode with the BSP as the
physical destination (also edge-triggered, active high).
What I observed next was a bit surprising.  Every time I generated the interrupt
the target CPU would set bit 6 (received illegal vector) in its Local APIC's
error status register.
I concluded that the interrupt got routed from the PIC to the pin 0 of the
IO-APIC, but then there was a problem delivering the ExtINT message.

I looked for mentions of ExtINT in the Family 10 BKDG and stumbled upon the
LintEn bit in F0x68 register.  The bit is described as such:

> LintEn: local interrupt conversion enable. Read-write. 1=Enables the conversion of broadcast
> ExtInt and NMI interrupt requests to LINT0 and LINT1 local interrupts, respectively, before
> delivering to the local APIC. This conversion only takes place if the local APIC is hardware
> enabled. LINT0 and LINT1 are controlled by APIC350 and APIC360. 0=ExtInt/NMI interrupts
> delivered unchanged.

The bit was unset.  I decided to set it and see what happens.  Much to my
surprise I got NMIs delivered to both cores.  Then I remembered that I still had
NMI delivery mode set for LINT0 on both of them.  And this happened despite the
destination being programmed to zero (the BSP's APIC ID), not the broadcast address.
Another weird thing was that the 'received illegal vector' bit was still getting
set.

So, I got what I wanted but in a complex configuration:
device -> PIC -> IO-APIC pin 0 -> (ExtINT) -> (converted to LINT0) -> NMI
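For the record, setting that bit is just a read-modify-write of F0x68, i.e.
register 0x68 of the node's HT configuration function at 0:18.0, e.g. via the
classic 0xCF8/0xCFC configuration mechanism.  A sketch; note that I am assuming
that LintEn is bit 16, so please double-check the bit position against the
F0x68 (Link Transaction Control) description in the BKDG:

```c
#include <stdint.h>

/* Config-mechanism-1 address: bit 31 enable, bits 23:16 bus,
 * 15:11 device, 10:8 function, 7:2 register (dword-aligned).
 * Written to port 0xCF8; data is then read/written at 0xCFC. */
uint32_t pci_cfg_addr(unsigned bus, unsigned dev, unsigned fn, unsigned reg)
{
	return 0x80000000u | (bus << 16) | (dev << 11) | (fn << 8)
	       | (reg & 0xFC);
}

/* F0x68 lives in the northbridge's HT configuration function:
 * bus 0, device 0x18 (+ node number), function 0. */
#define NB_DEV 0x18
#define F0X68  0x68

/* Assumed position of LintEn within F0x68 -- verify against the
 * Link Transaction Control register description in the BKDG. */
#define LINT_EN (1u << 16)

uint32_t set_lint_en(uint32_t f0x68_value)
{
	return f0x68_value | LINT_EN;
}
```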

I decided to simplify that scheme a little bit to this:
device -> IO-APIC pin 3 -> (ExtINT) -> (converted to LINT0) -> NMI
so that I didn't have to enable the PIC.  That is, I reprogrammed pin 3 to the
ExtINT delivery mode, masked pin 0 and disabled the PIC.
But that didn't work.  Regardless of how LintEn was set, I was getting just the
bad-vector errors and no NMI.

Then I decided to incrementally go back to the working configuration.
I enabled PIC - still no NMI.
I unmasked IRQ3 - got NMI!  Pin 0 was still masked.
So, this working configuration was slightly different from the original working one:
device -> PIC [IRQ3 unmasked] -> IO-APIC pin 0 -> (Masked)
      \-> IO-APIC pin 3 -> (ExtINT) -> (converted to LINT0) -> NMI
So, the interrupt was going through the APIC route, but the legacy stuff had to
be enabled for ExtINT to be converted to LINT0.

I think that this is an interesting discovery: PIC's configuration affects how
the IO-APIC communicates to the Local APICs.

I went over the BKDG and the SB7x0 documentation (RRG, RPG), and only PCI_Reg 62h
of device 20, function 0 in the SB7x0 caught my eye.  Namely, the K8_INTR, MT3_Set
and MT3_Auto bits.  They are all about the K8 INTR [NMI] message.
On my system the register is set to 0x24, that is K8_INTR is set, but MT3_Set
and MT3_Auto are not...
In the coreboot source code I see that the register is set up with exactly the
same value in src/southbridge/amd/sb700/early_setup.c.
And, for what it's worth, bit 5 (0x20 mask) is documented as "reserved".
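For anyone who wants to check their own system: given the observed value 0x24
and the fact that bit 5 is reserved, K8_INTR has to be bit 2.  A tiny decoding
sketch (the MT3_Set/MT3_Auto positions would have to come from the RRG, so I
left them out):

```c
#include <stdint.h>

/* SB7x0 PCI_Reg 62h of device 20 (0x14), function 0.  From the
 * observed value 0x24 with bit 5 documented as reserved, K8_INTR
 * must be bit 2; the MT3_Set/MT3_Auto positions are not inferred
 * here and would need the RRG. */
#define SB_DEV      0x14
#define SB_REG62    0x62
#define K8_INTR_BIT (1u << 2)

int k8_intr_enabled(uint8_t reg62)
{
	return (reg62 & K8_INTR_BIT) != 0;
}
```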

I would greatly appreciate it if anyone could tell me how to configure the system
to get NMIs working in the simplest possible fashion, that is, without enabling
the PIC, and preferably without requiring the message-to-LINTn conversion via LintEn.

I will be happy with just a magic setting.
But if there is also an explanation of what the settings do and how interrupt
routing works within the chipset and between the chipset and the CPU, that
would be terrific!

I hope that this was not a boring read.
Thank you!

Andriy Gapon
