Hi Aarya,
On 11.04.22 13:46, Aarya Chaumal wrote:
> In the "Optimizing Erase Function Selection" project, what is considered a minimum level of success? I think it should be that the resulting average erase time is similar to the theoretical value based on the implemented algorithm, and less than the current erase time.
Theoretical values are hard to get right. Besides the documented delays of the flash chip, we have delays in the programmer and, in the case of external programmers, round-trip delays on the bus they are attached to (e.g. USB, UART).
As the current code is implicitly optimized for small changes, I would define basic success as reducing the time for a full flash-chip write (which could be tested by writing random data) without increasing the time for a small change (e.g. writing random data to a 4KiB layout region).
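To make that concrete, a benchmark setup could look something like the sketch below. The file names, chip size, and region name are just assumptions for illustration; the layout-file format and flashrom invocations are written from memory, so check the flashrom(8) man page before relying on them.

```shell
# Generate a random full-chip image (assuming an 8MiB chip).
dd if=/dev/urandom of=full.bin bs=1M count=8 2>/dev/null

# Layout file with a single 4KiB region named "small" (format: start:end name, hex).
printf '00000000:00000fff small\n' > layout.txt

# Baseline, full-chip write (run under `time` to compare):
#   time flashrom -p <programmer> -w full.bin
# Small change, only the 4KiB region:
#   time flashrom -p <programmer> -l layout.txt -i small -w full.bin
```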
> Are there any other objectives that need to be achieved for the project?
IIRC, the description mentioned a prerequisite: we need to know in advance whether the programmer will be able to execute a specific erase command. This is very important so we can get rid of the current fallback strategy, which simply tries the next command if one fails.
The fallback strategy is somewhat controversial anyway: if the programmer rejects a command, it's reasonable to try another. However, if an erase fails due to hardware trouble (a transfer failure, or even the flash chip failing), retrying with a bigger erase block size can make things worse.
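The idea of replacing the fallback could look roughly like this sketch: query the programmer's capabilities once, up front, and only ever select erase functions it is known to support. The names here (probe_opcode, select_eraser, the opcode list) are illustrative, not flashrom's actual API.

```c
#include <stdbool.h>
#include <stddef.h>

/* An erase function is identified by its SPI opcode and block size. */
struct eraser {
	unsigned char opcode;
	unsigned int block_size;
};

/* Assumed capability query: does the programmer support this opcode?
 * Stand-in implementation: pretend only the common 4KiB (0x20) and
 * 64KiB (0xd8) sector/block erases are supported. */
static bool probe_opcode(unsigned char opcode)
{
	return opcode == 0x20 || opcode == 0xd8;
}

/* Pick the first supported eraser from a preference-ordered list,
 * instead of issuing commands and falling back after failures. */
static const struct eraser *select_eraser(const struct eraser *list, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (probe_opcode(list[i].opcode))
			return &list[i];
	return NULL; /* no usable eraser at all */
}
```

This keeps the "try another command" logic as a pure selection step, so an actual erase failure at runtime can be treated as a hard error rather than a trigger for escalating to a bigger block.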
> Also, what stretch objectives could there be for the project?
Maybe optimization for more write patterns (rewriting a full flash doesn't happen very often). However, we'll have to see if that is possible/necessary. Maybe one could take the individual erase times of a flash chip into account. For instance, if a 4KiB erase takes x ms and a 64KiB erase takes 10*x, then it might speed things up to use the bigger erase block even if it wouldn't have to be fully erased.
Nico