Attention is currently required from: Kapil Porwal, Subrata Banik.
9 comments:
Patchset:
Why duplicate the x86 code into x64? Why not capture the differences with preprocessor as done in coreboot?
I just submitted something for your review, and I felt we were going back and forth about needing to implement the ABI first.
I really wish to submit clean code, but I pulled this from my downstream repo where I'm working to enable this support.
Alright. No worries. I suppose review can happen later.
File payloads/libpayload/Kconfig:
Patch Set #20, Line 168: default 0x80100000 if ARCH_X86_64
Why use a different address than the 32-bit code?
File payloads/libpayload/arch/x86/exec_64.S:
Patch Set #20, Line 41: i386_do_exec
I would expect 32-bit code under this name.
File payloads/libpayload/arch/x86/gdb.c:
This should be a larger size if you do movq. Maybe use uintptr_t for it?
File payloads/libpayload/arch/x86/head_64.S:
/* Enable special x86 functions if present. */
push %rdi
push %rsi
push %rdx
push %rcx
push %r8
push %r9
Why save/restore registers that you don't use but not the ones you actually use like rax, rbx, rcx, rdx?
File payloads/libpayload/arch/x86/libpayload_64.ldscript:
OUTPUT_FORMAT(elf64-x86-64)
OUTPUT_ARCH(x86_64)
Any reason for a separate linker script with nearly identical content? AFAIK you can add multiple OUTPUT_FORMAT and OUTPUT_ARCH entries in here?
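One way to fold the two scripts together, assuming the ldscript is run through the C preprocessor as is done elsewhere in coreboot (a sketch, not the actual change):

```
/* Single preprocessed ldscript covering both targets. */
#if CONFIG(LP_ARCH_X86_64)
OUTPUT_FORMAT(elf64-x86-64)
OUTPUT_ARCH(i386:x86-64)
#else
OUTPUT_FORMAT(elf32-i386)
OUTPUT_ARCH(i386)
#endif
```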
File payloads/libpayload/arch/x86/multiboot.c:
*
*
* Copyright (C) 2008 Advanced Micro Devices, Inc.
*
Multiboot predates long mode. I don't think this compilation unit should be added with long mode.
Patch Set #20, Line 117: if (loader_rdi != MULTIBOOT_MAGIC)
Looking at how you intend to boot, this will simply never be the case, as RDI contains a pointer to the coreboot tables.
File payloads/libpayload/arch/x86/string.c:
asm volatile(
"rep ; movsl\n\t"
#if CONFIG(LP_ARCH_X86_64)
"movq %4,%%rcx\n\t"
#else
"movl %4,%%ecx\n\t"
#endif
"rep ; movsb\n\t"
: "=&c" (d0), "=&D" (d1), "=&S" (d2)
: "0" (n >> 2), "g" (n & 3), "1" (dest), "2" (src)
: "memory"
);
Increment by 8 bytes instead of 4.
#if CONFIG(LP_ARCH_X86_64)
asm volatile(
"rep ; movsq\n\t"
"mov %4,%%rcx\n\t"
"rep ; movsb\n\t"
: "=&c" (d0), "=&D" (d1), "=&S" (d2)
: "0" (n >> 3), "g" (n & 7), "1" (dest), "2" (src)
: "memory"
);
#else
asm volatile(
"rep ; movsl\n\t"
"movl %4,%%ecx\n\t"
"rep ; movsb\n\t"
: "=&c" (d0), "=&D" (d1), "=&S" (d2)
: "0" (n >> 2), "g" (n & 3), "1" (dest), "2" (src)
: "memory"
);
#endif
To view, visit change 81968.