On 21.11.2010, at 20:07, Alexander Graf wrote:
On 21.11.2010, at 18:50, Andreas Färber wrote:
Clean up use of SDR1 fields.
IBM's "The Programming Environments for 32-Bit Microprocessors": The HTABMASK field in SDR1 contains a mask value that determines how many bits from the hash are used in the page table index. This mask must be of the form 0b00...011...1; that is, a string of 0 bits followed by a string of 1bits. The 1 bits determine how many additional bits (at least 10) from the hash are used in the index; HTABORG must have this same number of low-order bits equal to 0.
IBM's "The Programming Environments Manual for 64-bit Microprocessors" 3.0: The HTABSIZE field in SDR1 contains an integer value that determines how many bits from the hash are used in the page table index. This number must not exceed 28. HTABSIZE is used to generate a mask of the form 0b00...011...1; that is, a string of 0 bits followed by a string of 1-bits. The 1-bits determine how many additional bits (beyond the minimum of 11) from the hash are used in the index. The HTABORG must have this same number of low-order bits equal to 0.
Adjust alignment mask and hash size for ppc64.
Note that the page table does not yet scale with the memory size, and the secondary hash is not yet used.
v3:
- Fix shifts. Thanks to Segher and Alex.
- Fix HTABSIZE for 32-bit ppc64: ((2 << 15) - 1) >> 16 happened to evaluate
  to 0, so it appeared to work as expected. Avoid the underflow, spotted by
  Alex. Make this fix available for legacy 64-bit support as well.
- Drop get_hash_size(). It was only used in hash_page_32() and caused
unnecessary conversions between HTABMASK and size.
- Introduce {mfsdr1,mtsdr1} as suggested by Alex (see the sketch after this list).
  Note that this changes the volatility for hash_page_64().
- Conditionalize 64-bit support, drop 32-bit support for ppc64.
- Adjust hash size for ppc64.
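(Not part of the patch: a rough sketch of what such SDR1 accessors might look like. SDR1 is SPR 25, and mfsdr1/mtsdr1 are its extended mnemonics; the exact form and volatility in the patch may differ.)

static inline unsigned long mfsdr1(void)
{
    unsigned long sdr1;
    asm volatile("mfsdr1 %0" : "=r"(sdr1));
    return sdr1;
}

static inline void mtsdr1(unsigned long sdr1)
{
    asm volatile("mtsdr1 %0" : : "r"(sdr1));
}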
v2:
- Use HTABSIZE for ppc64, fix HTABMASK usage for ppc.
Error spotted by Segher.
Cc: Alexander Graf <agraf@suse.de>
Cc: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Andreas Färber <andreas.faerber@web.de>
diff --git a/arch/ppc/qemu/ofmem.c b/arch/ppc/qemu/ofmem.c
index 72694b3..40a23ff 100644
--- a/arch/ppc/qemu/ofmem.c
+++ b/arch/ppc/qemu/ofmem.c
@@ -228,6 +223,11 @@ ea_to_phys( ucell ea, ucell *mode )
     return phys;
 }
+#if defined(__powerpc64__) || defined(CONFIG_PPC_64BITSUPPORT)
+#ifndef CONFIG_PPC_64BITSUPPORT
+#define hash_page_64 hash_page
+#endif
I'm not sure why, but this doesn't feel good :). Too much preprocessor magic. Maybe explicitly call hash_page_64 or hash_page_32 depending on the outcome of is_ppc64() and not preprocess things?
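Something along these lines, maybe (untested; the hash_page_32/hash_page_64 signatures are only assumed here):

/* Sketch only: dispatch at runtime on is_ppc64() instead of
   #define'ing hash_page_64 to hash_page. Signatures assumed. */
static void hash_page(ucell ea, ucell phys, ucell mode)
{
    if (is_ppc64())
        hash_page_64(ea, phys, mode);
    else
        hash_page_32(ea, phys, mode);
}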
Actually, I am serious about getting rid of hash_page_32() in real ppc64 code. I have an idea to simplify this. On second thought, the preprocessor magic might be confusing when debugging.
Andreas