Attention is currently required from: Raul Rangel, Furquan Shaikh, Tim Wawrzynczak, Julius Werner, Rob Barnes, Karthik Ramasubramanian.

Kyösti Mälkki has posted comments on this change. ( https://review.coreboot.org/c/coreboot/+/59021 )
Change subject: arch/x86/smp/spinlock: Fix threading when !STAGE_HAS_SPINLOCKS
......................................................................
Patch Set 2:
(1 comment)
File src/arch/x86/include/arch/smp/spinlock.h:
https://review.coreboot.org/c/coreboot/+/59021/comment/f4585196_d61553d1
PS2, Line 50: thread_coop_disable();
Can you explain why this __always_inline is needed? (Git says it has been here since 2005 and was ad […]
For this particular __always_inline, it might be unnecessary; I just don't know. The AGESA codebase had a (poor) structure where stack pointers were manipulated in a way that made it impossible to retrieve the return address from the stack, so inlining was mandatory. The AMD family10 codebase has (well, had) other issues with spinlocks, and there is some ongoing effort to bring asus/kgpe-d16 back upstream. This was mostly criticism of allowing a non-inlined function call to be added into a function carrying the __always_inline attribute in the first place.
A mutex with threading disabled has no other context to switch to, and there it behaves exactly like a spinlock. I believe Raul's proposal 1 in CB:59094 was to replace spinlocks with mutexes.
For a simple sequence, though, like spin_lock(cmos_lock) guarding register access that is not atomic but goes through an index/data pair, I see no reason to switch to a mutex.