Laszlo Ersek <lersek@redhat.com> writes:
Is it possible that the current barrier() is not sufficient for the intended purpose in an L2 guest?
What happens if you drop your current patch, but replace
__asm__ __volatile__("": : :"memory")
in the barrier() macro definition, with a real, heavy-weight barrier, such as
__asm__ __volatile__("mfence": : :"memory")
(See mb() in "arch/x86/include/asm/barrier.h" in the kernel.)
Thanks for the suggestion,
unfortunately, it doesn't change anything :-(
... I think running in L2 could play a role here; see "Documentation/memory-barriers.txt", section "VIRTUAL MACHINE GUESTS"; from kernel commit 6a65d26385bf ("asm-generic: implement virt_xxx memory barriers", 2016-01-12).
See also the commit message.
I see, thank you.
It seems, however, that the issue here is not about barriers: first, it is 100% reproducible, and second, surrounding '*(volatile u32 *)addr = val' with all sorts of barriers doesn't help. I *think* this is some sort of mis-assumption about this memory, which is handled with vmexits, so both the L0 and L1 hypervisors get involved. More debugging ...