Patrick Georgi (pgeorgi@google.com) just uploaded a new patch set to gerrit, which you can find at http://review.coreboot.org/11395
-gerrit
commit a829c2059a63f84ba5e977e81d937c9f4605f93a
Author: Jimmy Huang <jimmy.huang@mediatek.com>
Date:   Thu Aug 13 10:48:49 2015 +0800
arm64: declare do_dcsw_op as function
do_dcsw_op is coded as a plain label, so it is possible that the linker will place do_dcsw_op at an unaligned address. To avoid this, we declare do_dcsw_op as a function. Also explicitly set the 2nd argument of ENTRY_WITH_ALIGN(name, bits) to 2.
do_dcsw_op:
	cbz	x3, exit
   c103d:	b40003e3 	cbz	x3, c10b9 <exit>
	mov	x10, xzr
   c1041:	aa1f03ea 	mov	x10, xzr
	adr	x14, dcsw_loop_table		// compute inner loop address
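(Note the odd addresses c103d/c1041 in the disassembly above: the label landed on an unaligned byte. For illustration, the patched macros expand roughly as sketched below. Only `.align bits` and the `name:` label appear in the diff itself; the `.global` and `.size` lines are assumptions about the surrounding ENTRY/END macro bodies in asm.h.)

```asm
	/* Hypothetical expansion of ENTRY(do_dcsw_op) ... ENDPROC(do_dcsw_op)
	 * after this patch. .global is an assumed part of ENTRY_WITH_ALIGN. */
	.global do_dcsw_op
	.align 2			/* pad to 2^2 = 4 bytes, the A64 instruction size */
do_dcsw_op:
	cbz	x3, exit		/* function body is unchanged by the patch */
	/* ... */
	ret
	.size do_dcsw_op, .-do_dcsw_op	/* assumed from END(name): records the
					 * symbol's extent so tools treat it as
					 * a proper function symbol */
```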
BRANCH=none BUG=none TEST=build and check do_dcsw_op in elf file
Change-Id: Ieb5f4188d6126ac9f6ddb0bfcc67452f79de94ad
Signed-off-by: Patrick Georgi <patrick@georgi-clan.de>
Original-Commit-Id: 4ee26b76089fab82cf4fb9b21c9f15b29e57b453
Original-Change-Id: Id331e8ecab7ea8782e97c10b13e8810955747a51
Original-Signed-off-by: Jimmy Huang <jimmy.huang@mediatek.com>
Original-Reviewed-on: https://chromium-review.googlesource.com/293660
Original-Reviewed-by: Julius Werner <jwerner@chromium.org>
Original-Commit-Queue: Yidi Lin <yidi.lin@mediatek.com>
Original-Tested-by: Yidi Lin <yidi.lin@mediatek.com>
---
 src/arch/arm64/armv8/cache_helpers.S | 3 ++-
 src/arch/arm64/include/arch/asm.h    | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/src/arch/arm64/armv8/cache_helpers.S b/src/arch/arm64/armv8/cache_helpers.S
index dc74dad..b94bc30 100644
--- a/src/arch/arm64/armv8/cache_helpers.S
+++ b/src/arch/arm64/armv8/cache_helpers.S
@@ -54,7 +54,7 @@
 	b	do_dcsw_op
 .endm
-do_dcsw_op:
+ENTRY(do_dcsw_op)
 	cbz	x3, exit
 	mov	x10, xzr
 	adr	x14, dcsw_loop_table	// compute inner loop address
@@ -92,6 +92,7 @@
 level_done:
 	isb
 exit:
 	ret
+ENDPROC(do_dcsw_op)
 .macro dcsw_loop _op
 loop2_\_op:
diff --git a/src/arch/arm64/include/arch/asm.h b/src/arch/arm64/include/arch/asm.h
index 851f3f9..878509e 100644
--- a/src/arch/arm64/include/arch/asm.h
+++ b/src/arch/arm64/include/arch/asm.h
@@ -30,7 +30,7 @@
 	.align bits; \
 	name:
-#define ENTRY(name) ENTRY_WITH_ALIGN(name, 0)
+#define ENTRY(name) ENTRY_WITH_ALIGN(name, 2)
 #define END(name) \
 	.size name, .-name