<p>Patrick Rudolph has uploaded this change for <strong>review</strong>.</p><p><a href="https://review.coreboot.org/c/coreboot/+/30118">View Change</a></p><pre style="font-family: monospace,monospace; white-space: pre-wrap;">arch/x86/boot: Call payload in protected mode<br><br>On ARCH_RAMSTAGE_X86_64, call the payload in protected mode.<br>Add a helper function to call arbitrary code in protected mode,<br>similar to the real mode call handler.<br><br>Tested using SeaBIOS as payload.<br>Untested for anything else.<br><br>Change-Id: I6552ac30f1b6205e08e16d251328e01ce3fbfd14<br>Signed-off-by: Patrick Rudolph <siro@das-labor.org><br>---<br>M src/arch/x86/boot.c<br>M src/arch/x86/c_start.S<br>M src/include/program_loading.h<br>3 files changed, 186 insertions(+), 10 deletions(-)<br><br></pre><pre style="font-family: monospace,monospace; white-space: pre-wrap;">git pull ssh://review.coreboot.org:29418/coreboot refs/changes/18/30118/1</pre><pre style="font-family: monospace,monospace; white-space: pre-wrap;"><span>diff --git a/src/arch/x86/boot.c b/src/arch/x86/boot.c</span><br><span>index 2967cf6..cb9f34b 100644</span><br><span>--- a/src/arch/x86/boot.c</span><br><span>+++ b/src/arch/x86/boot.c</span><br><span>@@ -32,13 +32,20 @@</span><br><span> </span><br><span> void arch_prog_run(struct prog *prog)</span><br><span> {</span><br><span style="color: hsl(0, 100%, 40%);">- __asm__ volatile (</span><br><span style="color: hsl(0, 100%, 40%);">-#ifdef __x86_64__</span><br><span style="color: hsl(0, 100%, 40%);">- "jmp *%%rdi\n"</span><br><span style="color: hsl(120, 100%, 40%);">+#ifdef __ARCH_x86_64__</span><br><span style="color: hsl(120, 100%, 40%);">+#if ENV_RAMSTAGE</span><br><span style="color: hsl(120, 100%, 40%);">+ protected_mode_call(false, (uintptr_t)prog_entry(prog),</span><br><span style="color: hsl(120, 100%, 40%);">+ (uintptr_t)prog_entry_arg(prog));</span><br><span> #else</span><br><span style="color: hsl(0, 100%, 40%);">- "jmp *%%edi\n"</span><br><span 
style="color: hsl(0, 100%, 40%);">-#endif</span><br><span style="color: hsl(0, 100%, 40%);">-</span><br><span style="color: hsl(120, 100%, 40%);">+ __asm__ volatile (</span><br><span style="color: hsl(120, 100%, 40%);">+ "jmp *%%rdi\n"</span><br><span> :: "D"(prog_entry(prog))</span><br><span> );</span><br><span style="color: hsl(120, 100%, 40%);">+#endif</span><br><span style="color: hsl(120, 100%, 40%);">+#else</span><br><span style="color: hsl(120, 100%, 40%);">+ __asm__ volatile (</span><br><span style="color: hsl(120, 100%, 40%);">+ "jmp *%%edi\n"</span><br><span style="color: hsl(120, 100%, 40%);">+ :: "D"(prog_entry(prog))</span><br><span style="color: hsl(120, 100%, 40%);">+ );</span><br><span style="color: hsl(120, 100%, 40%);">+#endif</span><br><span> }</span><br><span>diff --git a/src/arch/x86/c_start.S b/src/arch/x86/c_start.S</span><br><span>index 6426ef3..8148e58 100644</span><br><span>--- a/src/arch/x86/c_start.S</span><br><span>+++ b/src/arch/x86/c_start.S</span><br><span>@@ -50,7 +50,7 @@</span><br><span> movl %eax, %gs</span><br><span> #ifdef __x86_64__</span><br><span> mov $0x48, %ecx</span><br><span style="color: hsl(0, 100%, 40%);">- call SetCodeSelector</span><br><span style="color: hsl(120, 100%, 40%);">+ call SetCodeSelector64</span><br><span> #endif</span><br><span> </span><br><span> post_code(POST_ENTRY_C_START) /* post 13 */</span><br><span>@@ -207,7 +207,7 @@</span><br><span> </span><br><span> .section ".text._start", "ax", @progbits</span><br><span> #ifdef __x86_64__</span><br><span style="color: hsl(0, 100%, 40%);">-SetCodeSelector:</span><br><span style="color: hsl(120, 100%, 40%);">+SetCodeSelector64:</span><br><span> # save rsp because iret will align it to a 16 byte boundary</span><br><span> mov %rsp, %rdx</span><br><span> </span><br><span>@@ -219,14 +219,14 @@</span><br><span> push %rsp</span><br><span> pushfq</span><br><span> push %rcx # cx is code segment selector from caller</span><br><span style="color: hsl(0, 100%, 40%);">- 
mov $setCodeSelectorLongJump, %rax</span><br><span style="color: hsl(120, 100%, 40%);">+ mov $setCodeSelectorLongJump64, %rax</span><br><span> push %rax</span><br><span> </span><br><span> # the iret will continue at next instruction, with the new cs value</span><br><span> # loaded</span><br><span> iretq</span><br><span> </span><br><span style="color: hsl(0, 100%, 40%);">-setCodeSelectorLongJump:</span><br><span style="color: hsl(120, 100%, 40%);">+setCodeSelectorLongJump64:</span><br><span> # restore rsp, it might not have been 16-byte aligned on entry</span><br><span> mov %rdx, %rsp</span><br><span> ret</span><br><span>@@ -237,3 +237,161 @@</span><br><span> .previous</span><br><span> .code32</span><br><span> #endif</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+#ifdef __ARCH_x86_64__</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /*</span><br><span style="color: hsl(120, 100%, 40%);">+ * Functions to handle mode switches from longmode to protected</span><br><span style="color: hsl(120, 100%, 40%);">+ * mode. 
Similar to realmode switches.</span><br><span style="color: hsl(120, 100%, 40%);">+ */</span><br><span style="color: hsl(120, 100%, 40%);">+ .section .bss, "aw", @nobits</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ .section ".text._mode_switch", "ax", @progbits</span><br><span style="color: hsl(120, 100%, 40%);">+.code64</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ .globl protected_mode_call</span><br><span style="color: hsl(120, 100%, 40%);">+protected_mode_call:</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ push %rbp</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %rsp, %rbp</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Preserve registers */</span><br><span style="color: hsl(120, 100%, 40%);">+ push %rbx</span><br><span style="color: hsl(120, 100%, 40%);">+ push %r12</span><br><span style="color: hsl(120, 100%, 40%);">+ push %r13</span><br><span style="color: hsl(120, 100%, 40%);">+ push %r14</span><br><span style="color: hsl(120, 100%, 40%);">+ push %r15</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Arguments to stack */</span><br><span style="color: hsl(120, 100%, 40%);">+ push %rdi</span><br><span style="color: hsl(120, 100%, 40%);">+ push %rsi</span><br><span style="color: hsl(120, 100%, 40%);">+ push %rdx</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Ensure cache is clean. */</span><br><span style="color: hsl(120, 100%, 40%);">+ invd</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Running in compatibility mode? 
*/</span><br><span style="color: hsl(120, 100%, 40%);">+ mov -48(%rbp), %rax</span><br><span style="color: hsl(120, 100%, 40%);">+ test %rax, %rax</span><br><span style="color: hsl(120, 100%, 40%);">+ jne 1f</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Disable paging */</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %cr0, %rax</span><br><span style="color: hsl(120, 100%, 40%);">+ and $0x7FFFFFFF, %rax</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %rax, %cr0</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Disable long mode */</span><br><span style="color: hsl(120, 100%, 40%);">+ mov $0xC0000080, %rcx</span><br><span style="color: hsl(120, 100%, 40%);">+ rdmsr</span><br><span style="color: hsl(120, 100%, 40%);">+ and $(~0x100), %rax</span><br><span style="color: hsl(120, 100%, 40%);">+ wrmsr</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Disable PAE */</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %cr4, %rax</span><br><span style="color: hsl(120, 100%, 40%);">+ and $(~0x20), %rax</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %rax, %cr4</span><br><span style="color: hsl(120, 100%, 40%);">+1:</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Set 32-bit code segment and ss */</span><br><span style="color: hsl(120, 100%, 40%);">+ mov $0x10, %rcx</span><br><span style="color: hsl(120, 100%, 40%);">+ call SetCodeSelector32</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+.code32</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Use flat 32-bit data segment. 
*/</span><br><span style="color: hsl(120, 100%, 40%);">+ movl $0x18, %eax</span><br><span style="color: hsl(120, 100%, 40%);">+ movl %eax, %ds</span><br><span style="color: hsl(120, 100%, 40%);">+ movl %eax, %es</span><br><span style="color: hsl(120, 100%, 40%);">+ movl %eax, %ss</span><br><span style="color: hsl(120, 100%, 40%);">+ movl %eax, %fs</span><br><span style="color: hsl(120, 100%, 40%);">+ movl %eax, %gs</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ movl -56(%ebp), %eax /* Function to call */</span><br><span style="color: hsl(120, 100%, 40%);">+ movl -64(%ebp), %ebx /* Argument 0 */</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Align the stack */</span><br><span style="color: hsl(120, 100%, 40%);">+ andl $0xFFFFFFF0, %esp</span><br><span style="color: hsl(120, 100%, 40%);">+ subl $12, %esp</span><br><span style="color: hsl(120, 100%, 40%);">+ pushl %ebx /* Argument 0 */</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ call *%eax</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ pushl %eax</span><br><span style="color: hsl(120, 100%, 40%);">+ pushl $0</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Running in compatibility mode? 
*/</span><br><span style="color: hsl(120, 100%, 40%);">+ mov -48(%ebp), %eax</span><br><span style="color: hsl(120, 100%, 40%);">+ test %eax, %eax</span><br><span style="color: hsl(120, 100%, 40%);">+ jne 1f</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Enable PAE */</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %cr4, %eax</span><br><span style="color: hsl(120, 100%, 40%);">+ or $0x20, %eax</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %eax, %cr4</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Enable long mode */</span><br><span style="color: hsl(120, 100%, 40%);">+ mov $0xC0000080, %ecx</span><br><span style="color: hsl(120, 100%, 40%);">+ rdmsr</span><br><span style="color: hsl(120, 100%, 40%);">+ or $0x100, %eax</span><br><span style="color: hsl(120, 100%, 40%);">+ wrmsr</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Enable paging */</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %cr0, %eax</span><br><span style="color: hsl(120, 100%, 40%);">+ or $0x80000000, %eax</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %eax, %cr0</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Just to be sure ... 
*/</span><br><span style="color: hsl(120, 100%, 40%);">+ lgdt %cs:gdtaddr</span><br><span style="color: hsl(120, 100%, 40%);">+1:</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Back to long mode */</span><br><span style="color: hsl(120, 100%, 40%);">+ ljmp $0x48, $1f</span><br><span style="color: hsl(120, 100%, 40%);">+.code64</span><br><span style="color: hsl(120, 100%, 40%);">+1: movl $0x18, %eax</span><br><span style="color: hsl(120, 100%, 40%);">+ movl %eax, %ds</span><br><span style="color: hsl(120, 100%, 40%);">+ movl %eax, %es</span><br><span style="color: hsl(120, 100%, 40%);">+ movl %eax, %ss</span><br><span style="color: hsl(120, 100%, 40%);">+ movl %eax, %fs</span><br><span style="color: hsl(120, 100%, 40%);">+ movl %eax, %gs</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Place return value in rax */</span><br><span style="color: hsl(120, 100%, 40%);">+ pop %rax</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Restore registers */</span><br><span style="color: hsl(120, 100%, 40%);">+ pop %r15</span><br><span style="color: hsl(120, 100%, 40%);">+ pop %r14</span><br><span style="color: hsl(120, 100%, 40%);">+ pop %r13</span><br><span style="color: hsl(120, 100%, 40%);">+ pop %r12</span><br><span style="color: hsl(120, 100%, 40%);">+ pop %rbx</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ /* Restore stack pointer */</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %rbp, %rsp</span><br><span style="color: hsl(120, 100%, 40%);">+ pop %rbp</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ ret</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+.code64</span><br><span style="color: hsl(120, 100%, 40%);">+SetCodeSelector32:</span><br><span 
style="color: hsl(120, 100%, 40%);">+ # save rsp because iret will align it to a 16 byte boundary</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %rsp, %rdx</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ # use iret to jump to a 32-bit offset in a new code segment</span><br><span style="color: hsl(120, 100%, 40%);">+ # iret will pop cs:rip, flags, then ss:rsp</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %ss, %ax # need to push ss..</span><br><span style="color: hsl(120, 100%, 40%);">+ push %rax # push ss instuction not valid in x64 mode,</span><br><span style="color: hsl(120, 100%, 40%);">+ # so use ax</span><br><span style="color: hsl(120, 100%, 40%);">+ push %rsp</span><br><span style="color: hsl(120, 100%, 40%);">+ pushfq</span><br><span style="color: hsl(120, 100%, 40%);">+ push %rcx # cx is code segment selector from caller</span><br><span style="color: hsl(120, 100%, 40%);">+ mov $setCodeSelectorLongJump32, %rax</span><br><span style="color: hsl(120, 100%, 40%);">+ push %rax</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ # the iret will continue at next instruction, with the new cs value</span><br><span style="color: hsl(120, 100%, 40%);">+ # loaded</span><br><span style="color: hsl(120, 100%, 40%);">+ iretq</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+.code32</span><br><span style="color: hsl(120, 100%, 40%);">+setCodeSelectorLongJump32:</span><br><span style="color: hsl(120, 100%, 40%);">+ # restore esp, it might not have been 16-byte aligned on entry</span><br><span style="color: hsl(120, 100%, 40%);">+ mov %edx, %esp</span><br><span style="color: hsl(120, 100%, 40%);">+ ret</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+ .previous</span><br><span style="color: hsl(120, 100%, 
40%);">+#endif</span><br><span>diff --git a/src/include/program_loading.h b/src/include/program_loading.h</span><br><span>index 468f0b3..84e5194 100644</span><br><span>--- a/src/include/program_loading.h</span><br><span>+++ b/src/include/program_loading.h</span><br><span>@@ -182,6 +182,17 @@</span><br><span> * if ramstage overwrites low memory. */</span><br><span> void backup_ramstage_section(uintptr_t base, size_t size);</span><br><span> </span><br><span style="color: hsl(120, 100%, 40%);">+/* Run function in protected mode.</span><br><span style="color: hsl(120, 100%, 40%);">+ * @arg compatibility_mode Use 32-bit compatibility mode instead of</span><br><span style="color: hsl(120, 100%, 40%);">+ * protected mode</span><br><span style="color: hsl(120, 100%, 40%);">+ * @arg func_ptr Function to call in protected mode</span><br><span style="color: hsl(120, 100%, 40%);">+ * @arg Argument to pass to called function</span><br><span style="color: hsl(120, 100%, 40%);">+ *</span><br><span style="color: hsl(120, 100%, 40%);">+ * @return The called function returned value</span><br><span style="color: hsl(120, 100%, 40%);">+*/</span><br><span style="color: hsl(120, 100%, 40%);">+uint32_t protected_mode_call(bool compatibility_mode, uintptr_t func_ptr,</span><br><span style="color: hsl(120, 100%, 40%);">+ uint32_t argument);</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span> /***********************</span><br><span> * PAYLOAD LOADING *</span><br><span> ***********************/</span><br><span></span><br></pre><p>To view, visit <a href="https://review.coreboot.org/c/coreboot/+/30118">change 30118</a>. 
To unsubscribe, or for help writing mail filters, visit <a href="https://review.coreboot.org/settings">settings</a>.</p>