Attention is currently required from: Arthur Heymans, Kapil Porwal.
9 comments:
Patchset:
> > Why duplicate the x86 code into x64? Why not capture the differences with preprocessor as done i […]
Acknowledged
File payloads/libpayload/Kconfig:
Patch Set #20, Line 168: default 0x80100000 if ARCH_X86_64
Why use a different address than the 32-bit code?
No reason; it should use the same address.
File payloads/libpayload/arch/x86/exec_64.S:
Patch Set #20, Line 41: i386_do_exec
I would expect 32-bit code under this name.
Acknowledged
File payloads/libpayload/arch/x86/gdb.c:
should be larger size if you do movq. […]
Acknowledged
File payloads/libpayload/arch/x86/head_64.S:
/* Enable special x86 functions if present. */
push %rdi
push %rsi
push %rdx
push %rcx
push %r8
push %r9
Why save/restore registers that you don't use but not the ones you actually use like rax, rbx, rcx, rdx?
My bad, I mixed up the caller-saved and callee-saved registers of the 64-bit ABI.
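For reference, a minimal sketch of what saving the registers the reviewer is pointing at might look like. The register list follows the SysV AMD64 ABI's call-clobbered set; the surrounding code and the clobbering instruction are assumptions, not taken from the patch:

```asm
	/* SysV AMD64 caller-saved (call-clobbered) registers:
	 * rax, rcx, rdx, rsi, rdi, r8-r11. The callee-saved
	 * registers (rbx, rbp, r12-r15) are preserved by any
	 * called function and need no explicit save here. */
	push	%rax
	push	%rcx
	push	%rdx
	push	%rsi
	push	%rdi
	push	%r8
	push	%r9
	push	%r10
	push	%r11

	/* ... code that clobbers registers, e.g. cpuid ... */

	pop	%r11
	pop	%r10
	pop	%r9
	pop	%r8
	pop	%rdi
	pop	%rsi
	pop	%rdx
	pop	%rcx
	pop	%rax
```

Note that `cpuid` in particular clobbers rax, rbx, rcx and rdx, so rbx would additionally need saving around that one instruction even though it is callee-saved.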
File payloads/libpayload/arch/x86/libpayload_64.ldscript:
OUTPUT_FORMAT(elf64-x86-64)
OUTPUT_ARCH(x86_64)
Any reason for a separate linker script with the same content? You can add multiple OUTPUT_FORMAT and OUTPUT_ARCH entries in here, AFAIK?
As you know, coreinfo/nvramcui doesn't support x86_64 yet (it can only run in i386, i.e. 32-bit mode), so we're temporarily using a separate linker script to avoid compilation issues. Once 64-bit support is added, we can consolidate into a single linker script using OUTPUT_ARCH(i386:x86_64) for multi-arch support.
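For illustration, the consolidated script mentioned above could look roughly like this. This is a sketch only: the section layout is elided, and the exact format/emulation names would need verifying against the coreboot toolchain's binutils:

```ld
/* Hypothetical consolidated libpayload linker script.
 * OUTPUT_ARCH(i386:x86_64) selects the multi-arch BFD
 * architecture so both 32-bit and 64-bit objects are accepted. */
OUTPUT_FORMAT(elf64-x86-64)
OUTPUT_ARCH(i386:x86_64)

SECTIONS
{
	/* ... existing libpayload section layout unchanged ... */
}
```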
File payloads/libpayload/arch/x86/multiboot.c:
*
*
* Copyright (C) 2008 Advanced Micro Devices, Inc.
*
Multiboot2 predates long mode. I don't think this compilation unit should be added with long mode.
Acknowledged
Patch Set #20, Line 117: if (loader_rdi != MULTIBOOT_MAGIC)
Looking at how you intend to boot, this will simply never be the case, since RDI contains a pointer to the coreboot tables.
marking resolved based on https://review.coreboot.org/c/coreboot/+/81968/comment/9ec1549f_6d1a9277/
File payloads/libpayload/arch/x86/string.c:
asm volatile(
"rep ; movsl\n\t"
#if CONFIG(LP_ARCH_X86_64)
"movq %4,%%rcx\n\t"
#else
"movl %4,%%ecx\n\t"
#endif
"rep ; movsb\n\t"
: "=&c" (d0), "=&D" (d1), "=&S" (d2)
: "0" (n >> 2), "g" (n & 3), "1" (dest), "2" (src)
: "memory"
);
Increment by 8 bytes instead of 4.
#if CONFIG(LP_ARCH_X86_64)
asm volatile(
"rep ; movsq\n\t"
"mov %4,%%rcx\n\t"
"rep ; movsb\n\t"
: "=&c" (d0), "=&D" (d1), "=&S" (d2)
: "0" (n >> 3), "g" (n & 7), "1" (dest), "2" (src)
: "memory"
);
#else
asm volatile(
"rep ; movsl\n\t"
"movl %4,%%ecx\n\t"
"rep ; movsb\n\t"
: "=&c" (d0), "=&D" (d1), "=&S" (d2)
: "0" (n >> 2), "g" (n & 3), "1" (dest), "2" (src)
: "memory"
);
#endif
Thanks for identifying the mistake.
To view, visit change 81968.