On 16.11.2010, at 00:52, Andreas Färber wrote:
On 16.11.2010, at 00:41, Alexander Graf wrote:
On 16.11.2010, at 00:39, Andreas Färber wrote:
Set up SLBs with slbmte instead of mtsrin, as suggested by Alex. Adapt the SLB example code from an IBM application note.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <andreas.faerber@web.de>
---
 arch/ppc/qemu/ofmem.c |   35 ++++++++++++++++++++++++++++++++---
 1 files changed, 32 insertions(+), 3 deletions(-)
diff --git a/arch/ppc/qemu/ofmem.c b/arch/ppc/qemu/ofmem.c
index e8b0b24..85b9956 100644
--- a/arch/ppc/qemu/ofmem.c
+++ b/arch/ppc/qemu/ofmem.c
@@ -393,7 +393,7 @@ void
 setup_mmu( unsigned long ramsize )
 {
     ofmem_t *ofmem;
-    unsigned long sdr1, sr_base, msr;
+    unsigned long sdr1, msr;
     unsigned long hash_base;
     unsigned long hash_mask = 0xfff00000; /* alignment for ppc64 */
     int i;
@@ -405,13 +405,42 @@ setup_mmu( unsigned long ramsize )
     sdr1 = hash_base | ((HASH_SIZE-1) >> 16);
     asm volatile("mtsdr1 %0" :: "r" (sdr1) );
 
+#ifdef __powerpc64__
+#define SLB_SIZE 64
+#else
+#define SLB_SIZE 16
+#endif
+#if 1//def __powerpc64__
+#if 1
+    /* Initialize SLBs */
+    for (i = 0; i < SLB_SIZE; i++) {
+        unsigned long rs = (i << 12) | (0 << 7);
+        unsigned long rb = ((unsigned long)i << 28) | (0 << 27) | i;
+        asm volatile("slbmte %0,%1" :: "r" (rs), "r" (rb) : "memory");
+    }
+
+    /* Invalidate SLBs */
+    for (i = 1; i < SLB_SIZE; i++) {
+        unsigned long rb = ((unsigned long)i << 28) | (0 << 27);
+        asm volatile("slbie %0" :: "r" (rb) : "memory");
+    }
+#endif
+
+    /* Set SLBs */
+    for (i = 0; i < 16; i++) {
+        unsigned long rs = ((0x400 + i) << 12) | (0x10 << 7);
+        unsigned long rb = ((unsigned long)i << 28) | (1 << 27) | i;
+        asm volatile("slbmte %0,%1" :: "r" (rs), "r" (rb) : "memory");
+    }
PPC32 doesn't have an SLB, only segment registers (SRs) :). So there you still need mtsrin (or mtsr).
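For reference, the 32-bit path being referred to looks roughly like this (a sketch, not the exact OpenBIOS code; the VSID values are illustrative):

    /* Sketch: 32-bit segment-register setup via mtsrin (ppc32 only).
     * Each of the 16 SRs maps one 256 MB effective segment; mtsrin
     * selects the SR from the top 4 bits of its second operand.
     * The VSIDs (0x400 + i) are illustrative, not mandated. */
    static void setup_segment_registers(void)
    {
        int i;
        for (i = 0; i < 16; i++) {
            unsigned long sr = 0x400 + i;               /* T=Ks=Kp=N=0 */
            unsigned long ea = (unsigned long)i << 28;  /* EA in segment i */
            asm volatile("mtsrin %0,%1" :: "r" (sr), "r" (ea) : "memory");
        }
    }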
Thanks for the reminder. Can we agree then that OpenBIOS/ppc64 only needs to care about SLBs? What I've been testing here though is that OpenBIOS/ppc on ppc64-softmmu isn't broken by my changes. And on ppc, unsigned long is only 32 bits. :) This obviously needs cleanup; it's just an RFC.
Well, then why don't you keep the mtsrin code path there? If you're running without MSR_SF, you don't need any segments higher than 0xf :).
I'd be interested to hear whether this code that I ported is really necessary or correct (first setting all entries up so that they don't refer to the same *SID, then explicitly invalidating them, and then setting up the 16 entries we care about for real).
I don't think you need to go through all that effort. For starters, you can just use "slbia" to invalidate all SLB entries except for entry 0; that one you can slbie manually. And on RESET you can usually assume that all segments are clear anyway :).
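Concretely, that simpler sequence would look something like this (a sketch following the description above, not tested; the slbie operand encodes ESID 0, class 0, matching the rb layout used in the patch):

    /* Sketch: flush the whole SLB (ppc64 only). slbia invalidates
     * every entry except entry 0, which an explicit slbie on ESID 0
     * then clears. isync makes the invalidation visible before any
     * subsequent storage access. */
    static void flush_slb(void)
    {
        asm volatile("slbia" ::: "memory");
        asm volatile("slbie %0" :: "r" (0UL) : "memory");
        asm volatile("isync" ::: "memory");
    }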
I usually prefer to read code instead of specs when it comes to the PPC MMU. So if you like, check out kvm.git: the ppc64 MMU implementation is in arch/powerpc/kvm/book3s_64_mmu.c. Look at kvmppc_mmu_book3s_64_slbmte and you'll quickly see which bits belong where :).
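For the archive, here is roughly how those operands break down (an annotation matching the values encoded in the patch above; double-check against the kvm code):

    /* Sketch: field layout of the slbmte operands as used in the patch.
     *
     * rb: (esid << 28) | (valid << 27) | index
     *     rb >> 28        ESID (effective segment id)
     *     rb & (1 << 27)  V (valid)
     *     rb & 0xfff      SLB entry index
     *
     * rs: (vsid << 12) | (flags << 7)
     *     rs >> 12        VSID (virtual segment id)
     *     rs & (1 << 11)  Ks (supervisor key)
     *     rs & (1 << 10)  Kp (problem-state key)
     *     rs & (1 << 9)   N  (no-execute)
     *     rs & (1 << 8)   L  (large page)
     *     rs & (1 << 7)   C  (class)
     */
    static unsigned long slb_rb(unsigned long esid, int valid, unsigned int index)
    {
        return (esid << 28) | ((unsigned long)valid << 27) | index;
    }
    static unsigned long slb_rs(unsigned long vsid, unsigned long flags)
    {
        return (vsid << 12) | (flags << 7);
    }

By that reading, the patch's (0x10 << 7) sets Ks, and ((unsigned long)i << 28) | (1 << 27) | i means "ESID i, valid, entry i".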
Also, is SLB_SIZE = 64 a universal number? It comes from a document on the 970 MMU, and it's supposedly implementation-specific. Sounds scary.
It is implementation-specific. I don't think it gets smaller than 64 though, so you're good.
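(For reference: the count is also discoverable at run time rather than hard-coded; Linux, for instance, reads the CPU node's "slb-size" or "ibm,slb-size" device-tree property and falls back to 64 when it is absent. A hypothetical sketch of that pattern, where dt_get_u32() is an invented helper and not a real OpenBIOS API:

    /* Sketch: prefer a device-tree-provided SLB size over a
     * hard-coded 64. dt_get_u32() is hypothetical. */
    unsigned int slb_size = 64;   /* fallback, per the discussion above */
    unsigned int prop;
    if (dt_get_u32("/cpus/cpu@0", "slb-size", &prop))
        slb_size = prop;
)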
Alex