Choosing the best-fitting erase block size can save a significant amount of runtime. That's why I would like to check whether this can be improved.
Just to establish the right starting point, here are some thoughts for discussion, from my admittedly limited view: As I understand the code, the algorithm for a standard erase operation (e.g. before writing some random data) is this:
1) take the first entry of `flash->chip->block_erasers`
2) try to erase using these parameters
3) if check_block_eraser() fails try the next entry of `flash->chip->block_erasers`
In this case the first entry given in `block_erasers` in `flashchips.c` is the default entry used for the erase operation on the given flash chip. As the first entry normally has the smallest block size, this should work in almost every case. On the other hand, it is generally the worst choice with regard to runtime if an erase operation with a bigger block size would do the job as well.
A) Would it be reasonable to simply reverse the order and go from the biggest block size to the smallest? The biggest block erase operation would then be the default, and an erase operation with a smaller block size would only be used when the bigger one does not fit.
I might have overlooked some important aspects, so here are some other options:
B) Make `block_eraser` entry selectable via cmd line parameter
C) Add timing values for the erase operations to `flashchips.c`, then calculate the expected runtime for each eraser and choose the fastest
Let me know what you think.
SIEB & MEYER AG