Attention is currently required from: Anastasia Klimchuk, Brian Norris, Chen, Nikolai Artemiev, Stefan Reinauer.
Peter Marheine has posted comments on this change. ( https://review.coreboot.org/c/flashrom/+/80806?usp=email )
Change subject: udelay: clock_getres() does not provide clock_gettime() resolution
......................................................................
Patch Set 3:
(1 comment)
Commit Message:
https://review.coreboot.org/c/flashrom/+/80806/comment/66216798_cba81099 : PS2, Line 10: Linux makes this clearer
I think `#ifdef __linux__` is a good plan B, but before we come to it let me try invite few more pe […]
I looked at what FreeBSD does:
1. `sys_clock_getres` calls `kern_clock_getres`
2. `kern_clock_getres(CLOCK_MONOTONIC)` computes the clock period using `tc_getfrequency`
3. `tc_getfrequency` uses the declared frequency of the first `struct timehands` in the system
4. The primary timecounter is chosen in `tc_init` based on each counter's declared quality, where larger values are better
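For reference, this is what that chain ends up answering from userspace; a standalone probe (plain POSIX, not flashrom code) that just prints the reported `CLOCK_MONOTONIC` resolution:

```c
/* Standalone probe: prints the resolution the OS reports for
 * CLOCK_MONOTONIC, i.e. the value FreeBSD derives from tc_getfrequency(). */
#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec res;

	if (clock_getres(CLOCK_MONOTONIC, &res) != 0) {
		perror("clock_getres");
		return 1;
	}
	printf("reported CLOCK_MONOTONIC resolution: %ld.%09ld s\n",
	       (long)res.tv_sec, res.tv_nsec);
	return 0;
}
```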
The RISC-V timecounter driver (`sys/riscv/riscv/timer.c`) declares a constant high quality (1000), and its frequency comes from the `timebase-frequency` device tree property shared by all CPUs in the system. Among the device trees in `sys/contrib/device-tree/src/riscv/`, it looks like this varies between 1 MHz and 24 MHz, which would correspond to a reported resolution between 1 µs and ~42 ns.
On the `clock_gettime` side, the call path goes through `nanouptime` and eventually `tc_delta` on the active timecounter, which calls its `tc_get_timecount`. In the RISC-V driver that ultimately executes a `rdtime` instruction, reading the hardware counter.
---
So at least FreeBSD on RISC-V reports an accurate resolution for `CLOCK_MONOTONIC`, and other platforms look similar: the x86 PIIX timer driver, for instance, has a known fixed frequency of ~3.57 MHz. I think Linux has an unusual interpretation here.
The other side of this is that Linux is conservative because it can't depend on a stable high-resolution timer existing. On some systems it may end up falling back to an i8253 timer whose period is exactly one tick (i.e. the reported resolution in low-resolution mode), in which case the calibrated delay loop is genuinely useful.
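One way to sanity-check this on a given machine is to compare what `clock_getres` reports against the smallest step actually observed between back-to-back `clock_gettime` reads; a rough standalone probe (not flashrom code):

```c
/* Standalone probe: compares the resolution clock_getres() reports with
 * the smallest nonzero step observed between consecutive clock_gettime()
 * reads of CLOCK_MONOTONIC. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static int64_t ts_to_ns(const struct timespec *ts)
{
	return (int64_t)ts->tv_sec * 1000000000 + ts->tv_nsec;
}

int main(void)
{
	struct timespec res, prev, cur;
	int64_t min_step = INT64_MAX;

	clock_getres(CLOCK_MONOTONIC, &res);
	clock_gettime(CLOCK_MONOTONIC, &prev);

	for (int i = 0; i < 1000000; i++) {
		clock_gettime(CLOCK_MONOTONIC, &cur);
		int64_t step = ts_to_ns(&cur) - ts_to_ns(&prev);
		if (step > 0 && step < min_step)
			min_step = step;
		prev = cur;
	}

	printf("reported resolution:    %lld ns\n", (long long)ts_to_ns(&res));
	printf("smallest observed step: %lld ns\n", (long long)min_step);
	return 0;
}
```

On a machine where the smallest observed step is much finer than the reported resolution, skipping calibration would have been safe even though the reported value says otherwise.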
I wonder if it might be more helpful to make this a user-visible option, maybe with a compile-time default? There's no harm in using OS timers even when they're slower than desired; the cost is just that chip delays may end up longer than necessary, so programming may be slower than it needs to be. If it's exposed as an option, users can judge whether skipping calibration saves them time, and distributors can choose a default according to expected typical use.
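To make the shape of that concrete, here's a rough sketch of a compile-time default plus a runtime switch. All names here (`CONFIG_DEFAULT_OS_DELAY`, `delay_set_use_os_timer`, etc.) are made up for illustration and are not existing flashrom interfaces:

```c
/* Hypothetical sketch only: a distributor-chosen build default plus a
 * runtime override that could be wired to a user-visible option. */
#include <stdbool.h>
#include <time.h>

/* Build-time default: 1 = trust OS sleep, 0 = use the calibrated loop. */
#ifndef CONFIG_DEFAULT_OS_DELAY
#define CONFIG_DEFAULT_OS_DELAY 1
#endif

static bool use_os_delay = CONFIG_DEFAULT_OS_DELAY;

/* Runtime override, e.g. set from a command-line option at startup. */
void delay_set_use_os_timer(bool enable)
{
	use_os_delay = enable;
}

static void os_delay(unsigned int usecs)
{
	struct timespec ts = {
		.tv_sec = usecs / 1000000,
		.tv_nsec = (long)(usecs % 1000000) * 1000L,
	};
	nanosleep(&ts, NULL);
}

static void loop_delay(unsigned int usecs)
{
	/* Stand-in for the calibrated busy loop; the real one spins a
	 * pre-calibrated counter instead of re-reading the clock. */
	struct timespec start, now;
	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while ((now.tv_sec - start.tv_sec) * 1000000L +
		 (now.tv_nsec - start.tv_nsec) / 1000L < (long)usecs);
}

/* Single dispatch point the rest of the code would call. */
void delay_udelay(unsigned int usecs)
{
	if (use_os_delay)
		os_delay(usecs);
	else
		loop_delay(usecs);
}
```

The default could then be set at build time by distributors while the runtime switch stays available to users who know their system's timers.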