Author: rminnich
Date: 2009-03-05 06:48:43 +0100 (Thu, 05 Mar 2009)
New Revision: 1143
Modified:
   coreboot-v3/arch/x86/secondary.S

Log:
This is working up to the ljmpl to protected mode. It has all the debugging in, using locations 0 and _secondary_start as POST codes.
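For readers who prefer C, here is a minimal sketch of the same debugging idea: write a small progress marker to a known scratch location before each risky step, so the BSP (or a hardware debugger) can see how far the AP got before it hung. The names below are illustrative assumptions, not coreboot symbols; the commit itself does this in 16-bit assembly with "movw $N, 0", where 0 is ds:0 in the AP's own segment.

#include <stdint.h>

/* Illustrative: a scratch word the boot CPU can poll. In the .S code the
 * marker goes to ds:0, i.e. offset 0 of the segment the AP runs in; here
 * it is just a global so the sketch stays portable. */
static volatile uint16_t ap_progress;

static inline void ap_post(uint16_t code)
{
	ap_progress = code;	/* leave a breadcrumb before the next risky step */
}

/* e.g. ap_post(5) right after loading the GDT, mirroring the
 * "movw $5, 0" in the patch below. */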
Calling this from initram did not work out, as we have to disable_car in initram to make such a call work (on core2). For now, I am calling it from stage1 phase 3, before stage2 is called, but that increases the code size of stage1, which is not a great idea.
What I am thinking we ought to do: call this from stage2, before phase 1, so that the CPUs are nicely set up and quiet. Provide phase2 with an SMP-safe printk.
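To make the last point concrete, a minimal sketch of one way to do an SMP-safe printk: a thin wrapper that serializes access to the existing printer with a test-and-set spinlock built on the GCC __sync builtins. smp_printk, print_lock, and the vprintk declaration are assumptions for illustration, not existing v3 symbols.

#include <stdarg.h>

static volatile int print_lock;

static void console_lock(void)
{
	/* __sync_lock_test_and_set is an atomic exchange, so only one
	 * CPU sees the old value 0 and proceeds; the rest spin here. */
	while (__sync_lock_test_and_set(&print_lock, 1))
		;
}

static void console_unlock(void)
{
	__sync_lock_release(&print_lock);
}

int vprintk(const char *fmt, va_list args);	/* assumed existing single-CPU printer */

int smp_printk(const char *fmt, ...)
{
	va_list args;
	int ret;

	console_lock();
	va_start(args, fmt);
	ret = vprintk(fmt, args);
	va_end(args);
	console_unlock();
	return ret;
}

Keeping the lock in a wrapper like this leaves the single-CPU console path untouched and gives the later phases one obvious entry point to use.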
This is here so others may see it and correct my work. The good news is that SMP startup on core2 on v3 is now starting to go. But the better news is that the way this is working is pretty generic and ought to apply to much more than just core2.
To really look at the object code, you might want to get ndisasm.
Signed-off-by: Ronald G. Minnich <rminnich@gmail.com>
Acked-by: Ronald G. Minnich <rminnich@gmail.com>
Modified: coreboot-v3/arch/x86/secondary.S
===================================================================
--- coreboot-v3/arch/x86/secondary.S	2009-03-01 19:23:02 UTC (rev 1142)
+++ coreboot-v3/arch/x86/secondary.S	2009-03-05 05:48:43 UTC (rev 1143)
@@ -28,28 +28,52 @@
 	.code16
 	.balign 4096
 	cli
-	movl	$1b, %ebx
+	movw	$0xdead, 0
+	movw	$0xbeef, 2
 	xorl	%eax, %eax
 	movl	%eax, %cr3    /* Invalidate TLB*/
 	/* On hyper threaded cpus, invalidating the cache here is
 	 * very very bad. Don't.
 	 */
+	movw	$0, 0
+	movl	$1b, %ebx
+	movw	$1, 0
+	movw	$2, 0
 
 	/* setup the data segment */
 	movw	%cs, %ax
+	movw	%ax, 2
+	movw	$3, 0
 	movw	%ax, %ds
+	movw	$4, 0
+	/* past this point, "0" means ds:0, i.e. cs:0, or the
+	 * segment part of the address.
+	 */
 
 	data32 lgdt gdtaddr - _secondary_start
+//	data32 lgdt %cs:gdtptr
+	movw	$5, 0
 
 	movl	%cr0, %eax
+	movw	$6, 0
 	andl	$0x7FFAFFD1, %eax /* PG,AM,WP,NE,TS,EM,MP = 0 */
+	movw	$7, 0
 	orl	$0x60000001, %eax /* CD, NW, PE = 1 */
+	movw	$8, 0
 	movl	%eax, %cr0
+	movw	$9, 0
+	hlt
+	/* tested to this point but not past it */
 
-	ljmpl	$0x10, $1f
+	/* I am pretty sure this just jumps back into
+	 * ROM; it's an abs jump
+	 */
+	data32 ljmp	$0x10, $secondary32
+	movw	$0xa, 0
 1:
 	.code32
 secondary32:
+	hlt
 	movw	$0x18, %ax
 	movw	%ax, %ds
 	movw	%ax, %es
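For anyone checking the CR0 arithmetic in the hunk above: the andl/orl constants match the named bits. The bit positions are the standard ones from the Intel SDM; the little program below is only an illustrative check, not part of the commit.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define CR0_PE (1u << 0)	/* protection enable */
#define CR0_MP (1u << 1)
#define CR0_EM (1u << 2)
#define CR0_TS (1u << 3)
#define CR0_NE (1u << 5)
#define CR0_WP (1u << 16)
#define CR0_AM (1u << 18)
#define CR0_NW (1u << 29)
#define CR0_CD (1u << 30)
#define CR0_PG (1u << 31)

int main(void)
{
	uint32_t clear = CR0_PG | CR0_AM | CR0_WP | CR0_NE | CR0_TS | CR0_EM | CR0_MP;
	uint32_t set   = CR0_CD | CR0_NW | CR0_PE;

	/* andl $0x7FFAFFD1, %eax clears PG, AM, WP, NE, TS, EM, MP */
	assert((uint32_t)~clear == 0x7FFAFFD1u);
	/* orl $0x60000001, %eax sets CD, NW, PE: caching disabled,
	 * no write-through, protected mode enabled */
	assert(set == 0x60000001u);
	printf("CR0 masks check out\n");
	return 0;
}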