Hi Rudolf,
thanks for your comments... I will try to work them into the code. But here is some more discussion, maybe I am still confused.
* Rudolf Marek r.marek@assembler.cz [101214 23:54]:
Hi,
I will be back on the weekend. Just a comment to this:
+void smm_init(void)
+{
- msr_t msr;
- msr = rdmsr(HWCR_MSR);
- if (msr.lo & (1 << 0)) {
// This sounds like a bug... ?
Well the lock survives all resets except power on.
Yes, and that might be a bug in how we keep state across resets.
Also, we were trying to initialize SMM several times on each boot, once per CPU and then once in the southbridge code. So you should actually have seen this message on every boot, with locking enabled. Not just on reboots.
printk(BIOS_DEBUG, "SMM is still locked from last boot, using old handler.\n");
return;
- }
- /* Only copy SMM handler once, not once per CPU */
- if (!smm_handler_copied) {
msr_t syscfg_orig, mtrr_aseg_orig;
smm_handler_copied = 1;
// XXX Really?
Yes, if you mess with the MTRRs you need to do that; otherwise the behavior is undefined (check the AMD system programming manual...)
I know about the cache disable requirement for MTRR changes
The comment was supposed to be about whether it's necessary to modify SYSCFG and the fixed ASEG MTRR at all. The changes you are doing should be done through the SMM_MASK MSR instead, and there is no mention of SYSCFG changes for SMM in the BKDG. I keep thinking we should not do that at all.
disable_cache();
syscfg_orig = rdmsr(SYSCFG_MSR);
mtrr_aseg_orig = rdmsr(MTRRfix16K_A0000_MSR);
// XXX Why?
This is because there are AMD extensions which tell whether an MTRR range is MMIO or memory, and we need them ON; check the AMD programming manual.
Which one, where? I am referring to 32559, and I can't find any such requirement in the SMM chapter. Check SMM_MASK MSR on page 282/283 in 32559.
msr = syscfg_orig;
/* Allow changes to MTRR extended attributes */
msr.lo |= SYSCFG_MSR_MtrrFixDramModEn;
/* Turn the extended attributes off until we have fixed them, so A0000 is routed to memory */
msr.lo &= ~SYSCFG_MSR_MtrrFixDramEn;
wrmsr(SYSCFG_MSR, msr);
/* set DRAM access to 0xa0000 */
// XXX but why?
And we tell it that A0000 is memory. This holds until SMM is enabled; then the SMM logic takes over. We use the extended attributes for that.
That's what we do with the /* enable the SMM memory window */ chunk below.
msr.lo = 0x18181818;
msr.hi = 0x18181818;
wrmsr(MTRRfix16K_A0000_MSR, msr);
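As a side note on the 0x18181818 value: my reading of the fixed-MTRR extended attributes is sketched below (bit names follow the BKDG; the helper function is illustrative, not patch code):

```c
/* With SYSCFG.MtrrFixDramModEn set, each byte of MTRRfix16K_A0000 (one
 * 16 KiB slice of A0000-BFFFF) gains two extended attribute bits:
 *   bit 3 (WrMem): writes to this slice go to DRAM instead of MMIO
 *   bit 4 (RdMem): reads from this slice come from DRAM
 * 0x18 sets both, with memory type UC (0) in the low bits, so every
 * slice of the range is routed to DRAM. */
#include <stdint.h>

#define MTRR_WR_MEM  (1 << 3)
#define MTRR_RD_MEM  (1 << 4)
#define MTRR_TYPE_UC 0x00

static uint32_t fix16k_dram_pattern(void)
{
	uint8_t b = MTRR_WR_MEM | MTRR_RD_MEM | MTRR_TYPE_UC; /* 0x18 */
	/* replicate into all four byte lanes (same value goes in msr.hi) */
	return b * 0x01010101u;
}
```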
+#if 0 // obviously wrong stuff from Rudolf's patch
msr.lo |= SYSCFG_MSR_MtrrFixDramEn;
wrmsr(SYSCFG_MSR, msr);
This needs to be fixed; I want to write back SYSCFG_MSR to disable the extended features.
+#endif
Up to now we enabled memory access to A0000, which until now is not used as SMRAM.
/* enable the SMM memory window */
msr = rdmsr(SMM_MASK_MSR);
msr.lo |= (1 << 0); // Enable ASEG SMRAM Range
msr.lo &= ~(1 << 2); // Open ASEG SMRAM Range
wrmsr(SMM_MASK_MSR, msr);
We need to COPY FIRST and then ENABLE SMM, because until the ASEG enable is done we can write to that memory as normal memory. (This is kind of the equivalent of D_OPEN on Intel.)
No, I don't think so. The above does NOT ENABLE SMM but opens the SMM RAM area to be accessible. This is the equivalent to D_OPEN on Intel.
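For comparison with the Intel side mentioned here, a sketch of the classic SMRAMC bit layout (82443BX-era northbridge; the register offset and bits vary by chipset, so treat these values as illustrative):

```c
/* Classic Intel SMRAMC control register bits (illustrative values;
 * verify against the specific chipset datasheet):
 *   G_SMRAME: global SMRAM enable
 *   D_LCK:    lock; once set, only a reset clears it
 *   D_OPEN:   SMRAM visible outside SMM (the "open ASEG" analog) */
#define SMRAMC_G_SMRAME (1 << 3)
#define SMRAMC_D_LCK    (1 << 4)
#define SMRAMC_D_OPEN   (1 << 6)
```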
/* copy the real SMM handler */
memcpy((void *)SMM_BASE, &_binary_smm_start, (size_t)&_binary_smm_size);
wbinvd();
This needs to be a bit further up.
It needs to be after the SMM_MASK write.
msr = rdmsr(SMM_MASK_MSR);
msr.lo |= (1 << 2); // Close ASEG SMRAM Range
wrmsr(SMM_MASK_MSR, msr);
From now on the SMM restrictions apply; no access to ASEG is possible outside SMM.
Yes.
+#if 0 // obviously wrong stuff from Rudolf's patch
msr.lo &= ~SYSCFG_MSR_MtrrFixDramEn;
This just disables the extended attribs, but allows the modification of them
wrmsr(SYSCFG_MSR, msr);
+#endif
// XXX But why?
This turns off the magic extra MTRR access types; we disable them and restore what was there before.
wrmsr(MTRRfix16K_A0000_MSR, mtrr_aseg_orig);
wrmsr(SYSCFG_MSR, syscfg_orig);
enable_cache();
- }
- /* But set SMM base address on all CPUs/cores */
- msr = rdmsr(SMM_BASE_MSR);
- msr.lo = SMM_BASE;
- wrmsr(SMM_BASE_MSR, msr);
+}
And yes we need to call SMM set base addr, together with SMM lock...
depending on what you mean by SMM lock. The smm_lock() function must be called after all CPU cores have had a chance to set SMBASE to the correct value, so doing it in the same function is not really an option. It needs to happen some time later (in lpc.c, while the SMBASE setup happens in src/cpu/x86/lapic/lapic_cpu_init.c).
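To illustrate the per-core SMBASE point, a hypothetical sketch: each core needs its own save state area, so SMBASE must differ per core. A common scheme staggers the bases downward by the save state size (the 0x400 stride here is an assumption, not taken from the patch):

```c
#include <stdint.h>

#define SMM_BASE              0xa0000
#define SMM_SAVE_STATE_STRIDE 0x400   /* per-core spacing (assumption) */

/* Compute a distinct SMBASE for each core so the save state
 * areas do not overlap. */
static uint32_t smm_base_for_core(unsigned int core)
{
	return SMM_BASE - core * SMM_SAVE_STATE_STRIDE;
}
```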
More on that later. I think we at least need to solve setting the SMM base on the other cores and locking them.
My patch does exactly that. It does however set the same address on all cores, which is a bug. Will post an updated patch.
Check my TODO list for details.
Ok.. here:
- only enabled board is m2v-mx se (trivial to enable other VIA 8237 based boards)
Someone else has got to do this.
- PCIe access is hardcoded to 0xe0000000
PCIe access removed and PCI register saved in the latest patch
a) change the SMM handler to save the PCIcfg/PCIdata??? regs
b) make MMCONFIG static at 0xe0000000
c) some clever way to find out where it is without PCI access
a) is implemented, as we cannot always rely on PCIe being there, and people would run into the problem later on.
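A sketch of what option a) amounts to: save and restore the legacy PCI config address port (0xcf8) around SMM work, so an interrupted OS config cycle is not corrupted. The inl()/outl() helpers below are stand-ins backed by a fake port latch so the sketch is self-contained; real firmware code would use actual port I/O.

```c
#include <stdint.h>

static uint32_t fake_cf8;  /* simulated 0xcf8 latch, for illustration only */

static uint32_t inl(uint16_t port)              { (void)port; return fake_cf8; }
static void     outl(uint32_t val, uint16_t port) { (void)port; fake_cf8 = val; }

static uint32_t saved_cf8;

/* Call on SMI entry: remember whatever config address the OS had set. */
void smi_save_pci_index(void)    { saved_cf8 = inl(0xcf8); }

/* Call on SMI exit: put the OS's config address back. */
void smi_restore_pci_index(void) { outl(saved_cf8, 0xcf8); }
```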
- Dualcore/more cpus
I hardcoded that for a single CPU. You most likely need to change model_fxx_init.c and call smm_init from there (except that the copy can be done by one CPU).
smm_init is called from initialize_cpus(), see above.
All CPUs need to set the ASEG enable and SMM_BASE (to different addresses).
done in the latest patch, though with the same address everywhere. This needs a fix.
No smm_relocate.S is used because we don't need it on AMD; we have the MSRs...
Done in the latest patch.
- remove HACK_SMM
The header file dislikes the include; someone needs to fix that.
done in latest patch.
- SMM locking
Should be done (correctly) in the latest patch. Needs testing though.
Missing from your TODO are items 3 and 4. The Intel SMM handler can do that stuff, so implementing it is hopefully mostly a matter of finding a diligent soul to grab the data sheets and give it a go:
- poweroff without 4s delay before ACPI is there
- acpi enable/disable
- IO traps
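For the ACPI enable/disable item, the usual shape is an SMI dispatcher keyed on the command written to the APM command port (0xb2). The command values below follow common coreboot conventions but are assumptions here; check the target board's ACPI code:

```c
#include <stdint.h>

#define APM_CNT_ACPI_DISABLE 0x1e  /* assumed value, coreboot convention */
#define APM_CNT_ACPI_ENABLE  0xe1  /* assumed value, coreboot convention */

enum smi_action { SMI_NONE, SMI_ACPI_ENABLE, SMI_ACPI_DISABLE };

/* Decide what the SMI handler should do for a given APM command byte. */
static enum smi_action dispatch_apm(uint8_t cmd)
{
	switch (cmd) {
	case APM_CNT_ACPI_ENABLE:  return SMI_ACPI_ENABLE;  /* hand SCIs to the OS */
	case APM_CNT_ACPI_DISABLE: return SMI_ACPI_DISABLE; /* back to legacy SMI mode */
	default:                   return SMI_NONE;
	}
}
```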
Hope it is more clear now. Please check following:
BKDG for Opteron/Athlon, section 13.2.1.2 (SYSCFG Register), and the AMD system programming manual for the MTRR extended attributes.
Stefan