On 21.11.2010, at 20:19, Andreas Färber wrote:
On 21.11.2010, at 20:14, Alexander Graf wrote:
On 21.11.2010, at 19:53, Andreas Färber wrote:
Set up SLB entries with slbmte instead of mtsrin, as suggested by Alex.
v2:
- Don't initialize 64 SLB entries and then invalidate them, as in IBM's
  application note for the 970. Use slbia instead, as recommended by Alex.
- Conditionalize when to use SLB or SR.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <andreas.faerber@web.de>
arch/ppc/qemu/ofmem.c |   25 ++++++++++++++++++++++---
1 files changed, 22 insertions(+), 3 deletions(-)
diff --git a/arch/ppc/qemu/ofmem.c b/arch/ppc/qemu/ofmem.c
index 72694b3..24f3a25 100644
--- a/arch/ppc/qemu/ofmem.c
+++ b/arch/ppc/qemu/ofmem.c
@@ -387,7 +387,7 @@ void
 setup_mmu( unsigned long ramsize )
 {
     ofmem_t *ofmem;
-    unsigned long sdr1, sr_base;
+    unsigned long sdr1;
     unsigned long hash_base;
     unsigned long hash_mask = 0xfff00000; /* alignment for ppc64 */
     int i;
@@ -399,13 +399,32 @@ setup_mmu( unsigned long ramsize )
     sdr1 = hash_base | ((HASH_SIZE-1) >> 16);
     asm volatile("mtsdr1 %0" :: "r" (sdr1) );
 
+#if defined(__powerpc64__) || defined(CONFIG_PPC_64BITSUPPORT)
+#ifdef CONFIG_PPC_64BITSUPPORT
Phew - too much ifdef for my taste. How about the idea I mentioned in the previous mail, to just make is_ppc64() always return 0 on ppc32 hosts without the compat config option? Then you could protect the slbia, slbmte and whatever other ppc64-specific pieces with #ifdefs, but not care about the rest :). That would hopefully make this code a lot more readable!
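Roughly something like this, maybe (untested sketch - the runtime branch is just a placeholder for whatever PVR check is_ppc64() already does):

static inline int is_ppc64(void)
{
#if defined(__powerpc64__)
    return 1;                 /* 64-bit build: always ppc64 */
#elif defined(CONFIG_PPC_64BITSUPPORT)
    return cpu_is_ppc64();    /* placeholder for the existing runtime check */
#else
    return 0;                 /* plain ppc32: compiler can discard ppc64 paths */
#endif
}

With the constant 0, the whole if (is_ppc64()) body becomes dead code on plain ppc32 builds, so the outer #if can go away.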
+    if (is_ppc64()) {
+#endif
+        /* Segment Lookaside Buffer */
+        asm volatile("slbia" ::: "memory");
Inline function please :)
+        for (i = 0; i < 16; i++) {
+            unsigned long rs = ((0x400 + i) << 12) | (0x10 << 7);
+            unsigned long rb = ((unsigned long)i << 28) | (1 << 27) | i;
+            asm volatile("slbmte %0,%1" :: "r" (rs), "r" (rb) : "memory");
Inline function again
I don't see the advantage here - these two are used nowhere else. But I'll do it.
Yeah, it just makes the code easier to read. Sorry to make you write more code :). We can also later on do things like
static inline void slbmte(unsigned long rs, unsigned long rb)
{
#if defined(CONFIG_PPC64) || defined(CONFIG_PPC64_COMPAT)
    asm volatile("slbmte %0,%1" :: "r" (rs), "r" (rb) : "memory");
#endif
}
at which point the assembler doesn't have to know about ppc64 instructions unless we're building to possibly run on ppc64 :).
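The same pattern would work for the others (again just a sketch, reusing the hypothetical config names from above; note that isync is a valid 32-bit instruction, so only the slb* ones need guarding):

static inline void slbia(void)
{
#if defined(CONFIG_PPC64) || defined(CONFIG_PPC64_COMPAT)
    asm volatile("slbia" ::: "memory");
#endif
}

static inline void isync(void)
{
    /* isync exists on 32-bit PowerPC too, so no #ifdef needed */
    asm volatile("isync" ::: "memory");
}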
+        }
+        asm volatile("isync" ::: "memory");
And this would be awesome to get as inline function too! :)
Is this one necessary at all?
If we're running the code with IR=1 and are modifying any SLB entry that might be related to the segment we're running in, then yes.
I suspect we're in real mode here though, so the isync should happen before the mtmsr(mfmsr() | MSR_IR), or through an rfi, which would be context-synchronizing again :).
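Roughly this ordering, i.e. (illustrative sketch only - MSR_IR is the instruction relocate bit, 0x20, and the mtmsr/mfmsr sequence is spelled out by hand here):

/* Still in real mode (MSR[IR] = 0) while the SLB is set up: */
asm volatile("slbia" ::: "memory");
/* ... slbmte loop ... */

/* No isync needed yet - turning translation on is itself a
 * context-synchronizing point (mtmsr + isync, or an rfi): */
unsigned long msr;
asm volatile("mfmsr %0" : "=r" (msr));
asm volatile("mtmsr %0; isync" :: "r" (msr | 0x20UL /* MSR_IR */) : "memory");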
Alex