Hi,
I will be back on the weekend. Just a comment on this:
+void smm_init(void)
+{
- msr_t msr;
- msr = rdmsr(HWCR_MSR);
- if (msr.lo & (1 << 0)) {
// This sounds like a bug... ?
Well, the lock survives all resets except power-on.
printk(BIOS_DEBUG, "SMM is still locked from last boot, using old handler.\n");
return;
- }
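For reference, the lock tested above lives in HWCR. A sketch of the definitions (the SMM_LOCK bit name is mine; values per the BKDG):

#define HWCR_MSR	0xC0010015	/* Hardware Configuration Register */
#define HWCR_SMM_LOCK	(1 << 0)	/* set once, survives everything but power-on */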
- /* Only copy SMM handler once, not once per CPU */
- if (!smm_handler_copied) {
msr_t syscfg_orig, mtrr_aseg_orig;
smm_handler_copied = 1;
// XXX Really?
Yes. If you mess with the MTRRs, you need to do that; otherwise the behavior is undefined (check the AMD system programming manual...).
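For reference, disable_cache()/enable_cache() boil down to roughly this in coreboot (a sketch; the real implementation may differ in details):

static inline void disable_cache(void)
{
	unsigned long cr0;
	asm volatile ("mov %%cr0, %0" : "=r" (cr0));
	cr0 |= (1UL << 30);			/* CR0.CD: disable caching */
	asm volatile ("mov %0, %%cr0" : : "r" (cr0) : "memory");
	asm volatile ("wbinvd" : : : "memory");	/* flush and invalidate the caches */
}

static inline void enable_cache(void)
{
	unsigned long cr0;
	asm volatile ("mov %%cr0, %0" : "=r" (cr0));
	cr0 &= ~(1UL << 30);			/* clear CR0.CD again */
	asm volatile ("mov %0, %%cr0" : : "r" (cr0) : "memory");
}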
disable_cache();
syscfg_orig = rdmsr(SYSCFG_MSR);
mtrr_aseg_orig = rdmsr(MTRRfix16K_A0000_MSR);
// XXX Why?
This is because there are AMD extensions which tell whether a fixed-MTRR range is MMIO or memory, and we need them ON. Check the AMD programming manual.
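For reference, the SYSCFG bits involved (values per BKDG 13.2.1.2; these are the defines the code uses):

#define SYSCFG_MSR			0xC0010010
#define SYSCFG_MSR_MtrrFixDramEn	(1 << 18)	/* honor the RdMem/WrMem attributes */
#define SYSCFG_MSR_MtrrFixDramModEn	(1 << 19)	/* allow reading/writing the attributes */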
msr = syscfg_orig;
msr.lo |= SYSCFG_MSR_MtrrFixDramModEn;
This allows changes to the MTRR extended attributes.
msr.lo &= ~SYSCFG_MSR_MtrrFixDramEn;
This turns the extended attributes off until we have fixed them up, so that A0000 is routed to memory.
wrmsr(SYSCFG_MSR, msr);
/* set DRAM access to 0xa0000 */
// XXX but why?
And here we say that on A0000 there is memory. This is true until SMM is enabled; then the SMM logic takes over. We use the extended attributes for this.
msr.lo = 0x18181818;
msr.hi = 0x18181818;
wrmsr(MTRRfix16K_A0000_MSR, msr);
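To spell out the magic constant: each byte of MTRRfix16K_A0000 covers one 16 KB slice of A0000-BFFFF, and with MtrrFixDramModEn set every byte gains the extended bits RdMem (bit 4) and WrMem (bit 3). The macro names below are mine:

#define MTRR_RDMEM	(1 << 4)	/* reads to this range go to DRAM, not MMIO */
#define MTRR_WRMEM	(1 << 3)	/* writes to this range go to DRAM, not MMIO */
/* 0x18 == MTRR_RDMEM | MTRR_WRMEM, replicated over all eight 16K slices */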
+#if 0 // obviously wrong stuff from Rudolf's patch
msr.lo |= SYSCFG_MSR_MtrrFixDramEn;
wrmsr(SYSCFG_MSR, msr);
This needs to be fixed; I want to write back SYSCFG_MSR here to disable the extended features.
+#endif
Up to now we have enabled memory access to A0000, which is not yet used as SMRAM.
/* enable the SMM memory window */
msr = rdmsr(SMM_MASK_MSR);
msr.lo |= (1 << 0); // Enable ASEG SMRAM Range
msr.lo &= ~(1 << 2); // Open ASEG SMRAM Range
wrmsr(SMM_MASK_MSR, msr);
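For reference, the SMM_MASK bits flipped here (bit names are mine, values per the K8 BKDG):

#define SMM_MASK_MSR	0xC0010113
#define SMM_MASK_AVALID	(1 << 0)	/* AValid: the ASEG SMRAM range is enabled */
#define SMM_MASK_ACLOSE	(1 << 2)	/* AClose: close the ASEG range */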
We need to COPY FIRST and then ENABLE SMM, because until ASEG is enabled we can write to that memory as normal memory. (This is kind of the equivalent of D_OPEN on Intel.)
/* copy the real SMM handler */
memcpy((void *)SMM_BASE, &_binary_smm_start, (size_t)&_binary_smm_size);
wbinvd();
This needs to be moved a bit further up.
msr = rdmsr(SMM_MASK_MSR);
msr.lo |= (1 << 2); // Close ASEG SMRAM Range
wrmsr(SMM_MASK_MSR, msr);
From now on the SMM restrictions apply; no access to ASEG is possible outside of SMM.
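As an aside, the _binary_smm_start/_binary_smm_size symbols used by the memcpy above are the usual objcopy-style linker symbols for the embedded SMM handler blob; roughly:

extern unsigned char _binary_smm_start;
extern unsigned char _binary_smm_size;	/* absolute symbol: its address is the size */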
+#if 0 // obviously wrong stuff from Rudolf's patch
msr.lo &= ~SYSCFG_MSR_MtrrFixDramEn;
This just disables the extended attributes, but still allows modifying them.
wrmsr(SYSCFG_MSR, msr);
+#endif
// XXX But why?
This turns off the magic extra MTRR access types; we disable them and restore what was there before.
wrmsr(MTRRfix16K_A0000_MSR, mtrr_aseg_orig);
wrmsr(SYSCFG_MSR, syscfg_orig);
enable_cache();
- }
- /* But set SMM base address on all CPUs/cores */
- msr = rdmsr(SMM_BASE_MSR);
- msr.lo = SMM_BASE;
- wrmsr(SMM_BASE_MSR, msr);
+}
And yes, we need to call the SMM set-base code, together with the SMM lock...
More on that later. I think we at least need to solve setting the SMM base on the other cores and locking them. Check my TODO list for details.
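A rough sketch of what that per-core step could look like (the smm_lock() name is just my placeholder; this has to run on every core):

static void smm_lock(void)
{
	msr_t msr;

	/* Set the per-core SMM base address... */
	msr = rdmsr(SMM_BASE_MSR);
	msr.lo = SMM_BASE;
	wrmsr(SMM_BASE_MSR, msr);

	/* ...then set HWCR.SMM_LOCK so the SMM configuration can no
	   longer be changed until the next power-on reset. */
	msr = rdmsr(HWCR_MSR);
	msr.lo |= (1 << 0);
	wrmsr(HWCR_MSR, msr);
}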
I hope it is clearer now. Please check the following:
The BKDG for Opteron/Athlon, section 13.2.1.2 (SYSCFG Register), and the AMD system programming manual for the MTRR extended attributes.
Thanks, Rudolf