We will need writeback_range() to implement S3 suspend properly on K8, Geode LX and QEMU. C7 and Core 2 are not affected. Either way, writeback_range() is a good thing to have, so start implementing it on K8. I uncovered a bug in the AMD manuals; it is annotated in the code as well, so that nobody tries to "fix" this code.
Compile-tested on DBM690T and QEMU.
Signed-off-by: Carl-Daniel Hailfinger <c-d.hailfinger.devel.2006@gmx.net>
Index: corebootv3-writeback_range/include/arch/x86/cpu.h
===================================================================
--- corebootv3-writeback_range/include/arch/x86/cpu.h	(Revision 1046)
+++ corebootv3-writeback_range/include/arch/x86/cpu.h	(working copy)
@@ -329,5 +329,6 @@
 void setup_resource_map(const struct rmap *rm, u32 max);
 EXPORT_SYMBOL(setup_resource_map);
+void writeback_range(unsigned long start, unsigned long len);
 #endif /* ARCH_X86_CPU_H */
Index: corebootv3-writeback_range/arch/x86/amd/k8/stage1.c
===================================================================
--- corebootv3-writeback_range/arch/x86/amd/k8/stage1.c	(Revision 1046)
+++ corebootv3-writeback_range/arch/x86/amd/k8/stage1.c	(working copy)
@@ -1,7 +1,7 @@
 /*
  * This file is part of the coreboot project.
  *
- * Copyright (C) 2007 Advanced Micro Devices, Inc.
+ * Copyright (C) 2008 Carl-Daniel Hailfinger
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
@@ -39,6 +39,33 @@
 }
 /**
+ * This function is processor specific.
+ */
+void writeback_range(unsigned long start, unsigned long len)
+{
+	unsigned long cachelinesize, i;
+	/* Number of quadwords flushed by CLFLUSH in bit 8..15.
+	 * NOTE: The AMD64 Architecture Programmer's Manual Volume 3 rev 3.14
+	 * CLFLUSH section contradicts the AMD CPUID Specification rev 2.28
+	 * about where to find CLFLUSH size.
+	 * I'm using the CPUID spec definition.
+	 */
+	cachelinesize = (cpuid_ebx(0x00000001) & 0xff00) >> 8;
+	/* Convert quadwords to bytes. */
+	cachelinesize <<= 3;
+	/* MFENCE before and after CLFLUSH is needed according to
+	 * AMD64 Architecture Programmer's Manual Volume 3: CLFLUSH
+	 */
+	__asm__ volatile ("mfence");
+	for (i = start; i < start + len; i += cachelinesize) {
+		__asm__ volatile (
+			"clflush (%0)\n"
+			:: "a" (i));
+	}
+	__asm__ volatile ("mfence");
+}
+
+/**
  * Disable Cache As RAM (CAR) after memory is setup.
  *
  * Unknown how to do this just yet.