[coreboot] AMD platform: IO-APIC => Local APIC delivery modes

Andriy Gapon avg at FreeBSD.org
Sat Oct 8 21:42:45 CEST 2016


There was a bit of incorrect information in my previous email.

I performed tests on a similar system but with SB850 southbridge:
https://www.asus.com/ae-en/Motherboards/M4A89GTD_PRO/specifications/
This system has a Phenom II X4 955 processor installed.
BTW, the system that I tested earlier has an Athlon II X2 250 processor.
And then I retested the original system.

So, I was wrong that there was no PIC -> LINT0 connection.  On both systems that
connection exists and works perfectly well.  To be clear, my tests show that the PIC
is connected to both LINT0 and I/O APIC pin 0.
As I've just written, programming the LINT0 LVT works as expected.
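
To be clear about what I mean by "programming the LINT0 LVT", it is essentially
just this (a minimal sketch rather than my actual code; it assumes ring 0, xAPIC
mode with the local APIC at its default 0xFEE00000 base, and the APIC350/APIC360
register offsets mentioned in the BKDG):

  #include <stdint.h>

  #define LAPIC_BASE      0xFEE00000u   /* default xAPIC MMIO base */
  #define LAPIC_LVT_LINT0 0x350u        /* "APIC350" in the BKDG */
  #define LAPIC_LVT_LINT1 0x360u        /* "APIC360" in the BKDG */
  #define LVT_DM_NMI      (4u << 8)     /* delivery mode field, bits 10:8 = 100b */
  #define LVT_MASKED      (1u << 16)

  static inline void lapic_write(uint32_t reg, uint32_t val)
  {
          *(volatile uint32_t *)(uintptr_t)(LAPIC_BASE + reg) = val;
  }

  /* Deliver whatever arrives on LINT0 as an NMI; keep LINT1 masked. */
  static void route_lint0_as_nmi(void)
  {
          lapic_write(LAPIC_LVT_LINT0, LVT_DM_NMI);
          lapic_write(LAPIC_LVT_LINT1, LVT_DM_NMI | LVT_MASKED);
  }

This has to be done on every core that should see the NMI.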

But routing interrupts via I/O APIC pin 0 has exactly the same problems as the
other I/O interrupts.

When the delivery mode is set to ExtINT, the interrupts seem to be delivered just as
if they were normal (fixed) vectored interrupts.  So, e.g., setting the vector bits
to zero produces RcvdIllegalVector on the target core, but setting a valid vector
results in a call to its handler.

When the delivery mode is set to NMI, on the SB700+Athlon system I get an
immediate reset.  On the SB850+Phenom system it seems that only the target core
hangs.  Other cores keep working and they report SendAcceptError when trying to
IPI the affected core.
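
And by "routing via an I/O APIC pin" I mean the usual redirection table
programming, roughly like this (again a minimal sketch, assuming the I/O APIC at
its default 0xFEC00000 base behind the IOREGSEL/IOWIN register pair; physical
destination mode, edge-triggered, active high):

  #include <stdint.h>

  #define IOAPIC_BASE 0xFEC00000u
  #define IOREGSEL    0x00u
  #define IOWIN       0x10u

  /* Delivery mode field of a redirection entry, bits 10:8. */
  #define DM_FIXED    (0u << 8)
  #define DM_NMI      (4u << 8)
  #define DM_EXTINT   (7u << 8)
  #define RTE_MASKED  (1u << 16)

  static inline void ioapic_write(uint8_t reg, uint32_t val)
  {
          *(volatile uint32_t *)(uintptr_t)(IOAPIC_BASE + IOREGSEL) = reg;
          *(volatile uint32_t *)(uintptr_t)(IOAPIC_BASE + IOWIN) = val;
  }

  /*
   * The redirection entry for 'pin' lives at indices 0x10 + 2*pin (low dword:
   * delivery mode, vector, mask) and 0x11 + 2*pin (high dword: destination
   * APIC ID in bits 31:24).
   */
  static void ioapic_route_pin(unsigned int pin, uint32_t mode, uint8_t vector,
                               uint8_t dest_apic_id)
  {
          ioapic_write(0x11 + 2 * pin, (uint32_t)dest_apic_id << 24);
          ioapic_write(0x10 + 2 * pin, mode | vector);
  }

Masking a pin is just a matter of writing RTE_MASKED into the low dword.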

There is one quirk that caused my confusion during the original tests.
On the SB700+Athlon system clearing the LintEn bit results in LINT0 ignoring PIC
interrupts.  On the SB850+Phenom that bit does not affect the PIC -> LINT0 route.
The latter seems correct; the former looks like a bug.
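
Since LintEn keeps coming up: it lives in F0x68 of the node's northbridge
function 0, so flipping it is an ordinary PCI config read-modify-write on bus 0,
device 0x18 (node 0), function 0.  A minimal sketch using config mechanism #1;
note that the LINTEN bit position below is only my guess, please check it against
the F0x68 description in the BKDG:

  #include <stdint.h>

  #define PCI_CFG_ADDR 0xCF8
  #define PCI_CFG_DATA 0xCFC

  /* ASSUMPTION: LintEn bit position in F0x68; verify against the BKDG. */
  #define LINTEN       (1u << 16)

  static inline void outl(uint16_t port, uint32_t val)
  {
          __asm__ __volatile__("outl %0, %1" : : "a"(val), "Nd"(port));
  }

  static inline uint32_t inl(uint16_t port)
  {
          uint32_t val;

          __asm__ __volatile__("inl %1, %0" : "=a"(val) : "Nd"(port));
          return val;
  }

  static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
  {
          outl(PCI_CFG_ADDR, 0x80000000u | ((uint32_t)bus << 16) |
              ((uint32_t)dev << 11) | ((uint32_t)fn << 8) | (off & 0xFC));
          return inl(PCI_CFG_DATA);
  }

  static void pci_cfg_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off,
                              uint32_t val)
  {
          outl(PCI_CFG_ADDR, 0x80000000u | ((uint32_t)bus << 16) |
              ((uint32_t)dev << 11) | ((uint32_t)fn << 8) | (off & 0xFC));
          outl(PCI_CFG_DATA, val);
  }

  /* Set or clear LintEn in F0x68 of node 0 (bus 0, device 0x18, function 0). */
  static void set_linten(int enable)
  {
          uint32_t v = pci_cfg_read32(0, 0x18, 0, 0x68);

          v = enable ? (v | LINTEN) : (v & ~LINTEN);
          pci_cfg_write32(0, 0x18, 0, 0x68, v);
  }

The same two helpers also work for the SB7x0 side, e.g. for looking at PCI_Reg
62h of device 20 (0x14) function 0 mentioned in the quoted message below.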

So, as I wrote in the previous message, I can get things to work.
But I am still curious whether it's possible to get the NMI delivery mode to work
for I/O APIC interrupts.  It's clear that the southbridges do send something.  But
either those messages are wrong, or the processors are not properly accepting them.
I wonder if anyone has gotten that to work.


On 07/10/2016 17:59, Andriy Gapon wrote:
> 
> I apologize that my questions are not about coreboot and I am sort of hijacking
> the forum.  But I think that this is a place where people with good knowledge of
> the hardware can be found.  Maybe my issue will be of interest to you as well.
> And I hope to get some help.  Maybe even some secrets shared :-)
> 
> First, the hardware that I am talking about: it's a typical consumer system with
> a Family 10h AMD processor, SB700 southbridge and 780G northbridge:
> http://www.gigabyte.com/products/product-page.aspx?pid=3004#sp
> 
> What I want to achieve is to get an interrupt generated by a SuperIO chip
> (external to the chipset) delivered to a CPU as an NMI.
> The interrupt is IRQ3 connected to pin 3 of the IO-APIC and everything just
> works if I forget about NMI:
> device -> IO-APIC pin 3 -> (fixed mode interrupt message) -> Local APIC ->
> interrupt handler at the configured vector.
> NMI is what makes it interesting.
> 
> So, the first thing I tried was simply to set the NMI delivery mode for the pin.
> Unfortunately, that does not work: the system gets reset as soon as the
> interrupt is generated.
> So, my first question is: can that be made to work at all?
> Perhaps, there are some registers that need to be correctly programmed in the
> chipset or in the processor for that to work.
> Or maybe it cannot work at all.  For example, for the Intel ICH9R southbridge it is
> documented that SMI, NMI and INIT must not be used.  I couldn't find any such
> restriction explicitly stated for the AMD chipsets.
> 
> So, I decided not to give up and to try to use the legacy interrupt mode to get
> what I want.  I think that that's how the Linux NMI watchdog driver used to work.
> So, I programmed LINT0 and LINT1 for NMI delivery mode (on all cores, all two of
> them), enabled the legacy PIC (I guess that it's built into the chipset) and made
> sure that the interrupt was unmasked.  But absolutely nothing happened when the
> interrupt was generated.
> From this I concluded that the PIC is not connected to the CPUs.
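> 
> To spell out what "enabled the legacy PIC and made sure that the interrupt was
> unmasked" amounts to, it is basically just clearing the IRQ3 bit in the master
> 8259's interrupt mask register (a minimal sketch; it assumes the PIC at the
> standard 0x20/0x21 ports and leaves the ICW initialization to whatever the
> firmware already did):
> 
>   #include <stdint.h>
> 
>   #define PIC1_DATA 0x21   /* master 8259 data port; the IMR once initialized */
> 
>   static inline void outb(uint16_t port, uint8_t val)
>   {
>           __asm__ __volatile__("outb %0, %1" : : "a"(val), "Nd"(port));
>   }
> 
>   static inline uint8_t inb(uint16_t port)
>   {
>           uint8_t val;
> 
>           __asm__ __volatile__("inb %1, %0" : "=a"(val) : "Nd"(port));
>           return val;
>   }
> 
>   /* Clear the IMR bit for IRQ3 so the master PIC can raise it. */
>   static void pic_unmask_irq3(void)
>   {
>           outb(PIC1_DATA, inb(PIC1_DATA) & (uint8_t)~(1u << 3));
>   }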
> 
> Just to be sure that I didn't make any mistake with the PIC programming I
> decided to check that the only other possibility worked, that is, that the PIC
> is connected to pin 0 of the IO-APIC.  By default the pin was masked (by the OS,
> I guess), so I programmed it to ExtINT delivery mode with the BSP as the
> physical destination (also edge-triggered, active high).
> What I observed next was a bit surprising.  Every time I generated the interrupt
> the target CPU would set bit 6 (received illegal vector) in its Local APIC's
> error status register.
> I concluded that the interrupt got routed from the PIC to pin 0 of the
> IO-APIC, but then there was a problem delivering the ExtINT message.
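> 
> The check itself is trivial (a minimal sketch, assuming the local APIC at its
> default 0xFEE00000 base; the 'received illegal vector' condition is bit 6 of
> the error status register):
> 
>   #include <stdint.h>
> 
>   #define LAPIC_BASE 0xFEE00000u   /* default xAPIC MMIO base */
>   #define LAPIC_ESR  0x280u        /* error status register */
> 
>   static inline uint32_t lapic_read(uint32_t reg)
>   {
>           return *(volatile uint32_t *)(uintptr_t)(LAPIC_BASE + reg);
>   }
> 
>   static inline void lapic_write(uint32_t reg, uint32_t val)
>   {
>           *(volatile uint32_t *)(uintptr_t)(LAPIC_BASE + reg) = val;
>   }
> 
>   /* The ESR is latched: write to it first to update it, then read it. */
>   static int lapic_received_illegal_vector(void)
>   {
>           lapic_write(LAPIC_ESR, 0);
>           return (lapic_read(LAPIC_ESR) >> 6) & 1;
>   }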
> 
> I looked for mentions of ExtINT in the Family 10h BKDG and stumbled upon the
> LintEn bit in the F0x68 register.  The bit is described as follows:
> 
>> LintEn: local interrupt conversion enable. Read-write. 1=Enables the conversion of broadcast
>> ExtInt and NMI interrupt requests to LINT0 and LINT1 local interrupts, respectively, before
>> delivering to the local APIC. This conversion only takes place if the local APIC is hardware
>> enabled. LINT0 and LINT1 are controlled by APIC350 and APIC360. 0=ExtInt/NMI interrupts
>> delivered unchanged.
> 
> The bit was unset.  I decided to set it and see what happens.  Much to my
> surprise I got NMIs delivered to both cores.  Then I remembered that I still had
> NMI delivery mode set for LINT0 on both of them.  And this happened despite the
> destination being programmed to zero (the BSP's APIC ID), not the broadcast address.
> Another weird thing was that the 'received illegal vector' bit was still getting
> set.
> 
> So, I got what I wanted but in a complex configuration:
> device -> PIC -> IO-APIC pin 0 -> (ExtINT) -> (converted to LINT0) -> NMI
> 
> I decided to simplify that scheme a little bit to this:
> device -> IO-APIC pin 3 -> (ExtINT) -> (converted to LINT0) -> NMI
> so that I didn't have to enable the PIC.  That is, I reprogrammed pin 3 to the
> ExtINT delivery mode, masked pin 0 and disabled the PIC.
> But that didn't work.  Regardless of how LintEn was set, I was getting just the
> bad vector errors without an NMI.
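> 
> In redirection-entry terms the simplified scheme amounts to just this (a minimal
> sketch, assuming the I/O APIC at its default 0xFEC00000 base; disabling the PIC
> is left out):
> 
>   #include <stdint.h>
> 
>   #define IOAPIC_BASE 0xFEC00000u
> 
>   static inline void ioapic_write(uint8_t reg, uint32_t val)
>   {
>           *(volatile uint32_t *)(uintptr_t)(IOAPIC_BASE + 0x00) = reg;  /* IOREGSEL */
>           *(volatile uint32_t *)(uintptr_t)(IOAPIC_BASE + 0x10) = val;  /* IOWIN */
>   }
> 
>   static void simplified_config(void)
>   {
>           ioapic_write(0x10 + 2 * 0, 1u << 16);   /* pin 0: masked */
>           ioapic_write(0x11 + 2 * 3, 0u << 24);   /* pin 3: destination = BSP, APIC ID 0 */
>           ioapic_write(0x10 + 2 * 3, 7u << 8);    /* pin 3: ExtINT, edge, active high, unmasked */
>   }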
> 
> Then I decided to incrementally go back to the working configuration.
> I enabled the PIC - still no NMI.
> I unmasked IRQ3 - got an NMI!  Pin 0 was still masked.
> So, this working configuration was slightly different from the original working
> configuration:
> device -> PIC [IRQ3 unmasked] -> IO-APIC pin 0 -> (Masked)
>       \-> IO-APIC pin 3 -> (ExtINT) -> (converted to LINT0) -> NMI
> So, the interrupt was going through the APIC route, but the legacy stuff had to
> be enabled for ExtINT to be converted to LINT0.
> 
> I think that this is an interesting discovery: the PIC's configuration affects how
> the IO-APIC communicates with the Local APICs.
> 
> I went over the BKDG and the SB7x0 documentation (RRG, RPG) and only PCI_Reg 62h of
> device 20 function 0 in SB7x0 caught my eye.  Namely, the K8_INTR, MT3_Set and
> MT3_Auto bits.  They are all about the K8 INTR [NMI] message.
> On my system the register is set to 0x24, that is, K8_INTR is set, but MT3_Set
> and MT3_Auto are not...
> In the coreboot source code I see that the register is set up with exactly the
> same value in src/southbridge/amd/sb700/early_setup.c.
> And, for what it's worth, bit 5 (0x20 mask) is documented as "reserved".
> 
> I would greatly appreciate it if anyone could tell me how to configure the system
> to get NMIs working in the simplest possible fashion.  That is, without enabling the
> PIC.  And preferably without requiring the message-to-LINTn conversion with LintEn.
> 
> I will be happy with just a magic setting.
> But if there is also an explanation of what the settings do and how interrupt
> routing works within the chipset and between the chipset and the CPU, then that
> would be terrific!
> 
> I hope that this was not a boring read.
> Thank you!
> 
> 


-- 
Andriy Gapon


