Interesting discussion... Thank you Patrick for making me aware of some "things". ;-)
Here is what I read about this topic from my searches on the net:
The processor switches to System Management Mode (SMM) from protected or real-address mode upon receiving a System Management Interrupt (SMI) from one of various internal or external devices, or one generated by software. In response to the SMI it executes a special SMI handler located in the System Management RAM (SMRAM) region, which the BIOS reserves from the Operating System for its SMI handlers. SMRAM consists of several regions contiguous in physical memory: the compatibility segment (CSEG), fixed at addresses 0xA0000 - 0xBFFFF below 1MB, and the top segment (TSEG), which can reside anywhere in physical memory. *If the CPU accesses CSEG while not in SMM (i.e. from regular protected mode code), the memory controller forwards the access to video memory instead of DRAM. Similarly, non-SMM access to TSEG memory is not allowed by the hardware. Consequently, access to the SMRAM regions is allowed only while the processor is executing code in SMM. At boot time, the system BIOS firmware initializes SMRAM, decompresses the SMI handlers stored in the BIOS ROM and copies them to SMRAM. The BIOS firmware should then "lock" SMRAM to enable the protection guaranteed by the chipset, so that the SMI handlers cannot be modified by the OS or any other code from that point on.*
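On Intel chipsets that locking step boils down to setting the D_LCK bit in the memory controller's SMRAM control register, after which the register itself becomes read-only until reset. A minimal coreboot-style sketch, assuming the i945-era register layout (offset 0x9D on D0:F0; offsets and bit positions vary per chipset):

    #include <device/pci_ops.h>

    #define SMRAMC     0x9d        /* SMRAM control reg, D0:F0 (i945-era offset) */
    #define C_BASE_SEG (0x2 << 0)  /* place SMRAM at the legacy A-segment, 0xA0000 */
    #define G_SMRAME   (1 << 3)    /* global SMRAM enable */
    #define D_LCK      (1 << 4)    /* lock: SMRAMC is read-only until platform reset */

    static void lock_smram(void)
    {
            /* enable SMRAM and lock it, so D_OPEN can never be set again
             * and non-SMM code can no longer map the A-segment to DRAM */
            pci_write_config8(PCI_DEV(0, 0, 0), SMRAMC, D_LCK | G_SMRAME | C_BASE_SEG);
    }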
Upon receiving an SMI the CPU starts fetching SMI handler instructions from SMRAM in big real mode, with a predefined CPU state. *Shortly after that, the SMI code on modern systems initializes and loads a Global Descriptor Table (GDT) and transitions the CPU to protected mode without paging. SMI handlers can then access 4GB of physical memory.* Operating System execution is suspended for the entire time the SMI handler is executing, until the handler executes RSM and the CPU resumes OS execution from the point where it was interrupted by the SMI.
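The GDT such a handler loads can be as small as a null descriptor plus flat 4GB code and data segments; that is all "protected mode without paging" needs. A sketch of such a table (the descriptor encodings are the standard flat ones; the table itself is illustrative, not coreboot's actual one):

    #include <stdint.h>

    /* flat segments: base 0, limit 0xFFFFF with 4KB granularity = 4GB */
    static const uint64_t smm_gdt[] = {
            0x0000000000000000ULL, /* 0x00: mandatory null descriptor */
            0x00cf9b000000ffffULL, /* 0x08: 32-bit code, base 0, limit 4GB */
            0x00cf93000000ffffULL, /* 0x10: 32-bit data, base 0, limit 4GB */
    };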
The default treatment of SMIs by a processor that supports virtual machine extensions (for example, Intel VMX) is to leave virtual machine mode upon receiving an SMI, for the entire time the SMI handler is executing [intel_man]. Nothing can cause the CPU to exit to virtual machine root (host) mode while in SMM, meaning that the Virtual Machine Monitor (VMM) does not control/virtualize SMI handlers.
Quite obviously, SMM represents an isolated and "privileged" environment and is therefore a target for malware/rootkits. Once malicious code is injected into SMRAM, no OS kernel or VMM based anti-virus software can protect the system, nor can it remove the malicious code from SMRAM.
It should be noted that the first study of SMM based rootkits was provided in the paper [smm], followed by [phrack_smm] and a proof of concept implementation of an SMM based keylogger and network backdoor in [smm_rkt].
Very interesting... I have never in my life analyzed SMM mode (too lazy and too ignorant, I admit), but after reading this thread I started searching and, much more importantly, I finally started thinking (racking my brains)! And a few questions actually popped into my mind.
It is obvious that there are dedicated HW extensions in the x86 CPU/PCH architecture which enable CPU SMM mode upon an SMI. From then on, the whole show is done automatically by HW. The legacy 0xA0000 - 0xBFFFF space gets overlaid by 128KB of SMRAM during DXE (the UEFI stage where SMRAM, in the first 4GB of DRAM, is defined and initialized, then populated with the SMI handlers decompressed from the BIOS SPI flash). This 128KB of SMRAM is then locked, and for non-SMM accesses the range is shadowed by the real 0xA0000 - 0xBFFFF video memory, just as in real 16-bit mode.
And now, the questions... marked with [#]!
[1] Why shouldn't the BIOS flash be read and the SMI handler decompressed outside the executable environment? OK, I do get it: the BIOS flash is normally locked. [2] So, to program 2MB/4MB of coreboot on top of the BIOS, the BIOS flash has to be unlocked, correct? Or signed?
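(For reference, once the flash descriptor and protected-range registers do permit writes, an internal reflash is typically a single flashrom invocation; the image path here is just an example:

    flashrom -p internal -w build/coreboot.rom

If the flash is still locked, the write will fail.)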
OK, but even then, let's say I overlooked something... Now I see. Since it is impossible to access the 128KB of the SMI handler (shadowed by HW), YES, somewhere in its code it accesses the PCIe config function uint32_t pci_read_config32(pci_devfn_t dev, *unsigned int where*), and *where* can easily be changed in other modes. A weak point for a rootkit/malicious attack!
The new question here is: [3] Why should the SMI handler access this function? For exploring/testing ACPI with platform PCIe devices? Or for something else???
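For what it's worth, coreboot's SMM code does touch PCI config space for mundane reasons, e.g. to locate chipset power-management registers while handling power-button or ACPI-enable SMIs. A sketch, assuming an ICH-style southbridge where PMBASE sits at config offset 0x40 of the LPC bridge (D31:F0); other chipsets use other offsets:

    #include <stdint.h>
    #include <device/pci_ops.h>

    static uint16_t get_pmbase(void)
    {
            /* ACPI PM I/O base from the LPC bridge (bus 0, dev 0x1f, fn 0);
             * offset 0x40 is the ICH-era PMBASE location, assumed here */
            return pci_read_config32(PCI_DEV(0, 0x1f, 0), 0x40) & 0xff80;
    }

Note that in code like this the *where* argument is a compile-time constant register offset, not a value taken from the OS.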
Also, I see that the SMI handler actually puts the whole HW/platform into protected non-paged mode (loading/enforcing its own GDT). Is that to have enough memory for the data it requires (it can access everything below TOLUD, up to 4GB, or am I mistaken)???
And anyway: [4] How much truly complex logic could be put into the SMI handler (text segment), in a total of 128KB of code? A lot, if the whole thing is written in x86 asm, which I assume it truly is?! Or have I overlooked something!?
Thank you, Zoran
On Wed, Apr 12, 2017 at 7:17 AM, ron minnich <rminnich@gmail.com> wrote:
On Tue, Apr 11, 2017 at 7:18 PM Taiidan@gmx.com <Taiidan@gmx.com> wrote:
I was under the impression that coreboot's native init boards disabled SMM post-init and that this issue only applies to Intel's FSP blobbed stuff, am I incorrect?
we held the line on smm until about 2006, but the i945 pushed us over the edge. I personally don't like it but what can you do?
-- coreboot mailing list: coreboot@coreboot.org https://mail.coreboot.org/mailman/listinfo/coreboot