On 30.09.2017 20:28, Felix Held wrote:
>> Write strategy: Invalidate any entry with a matching key, append after the last entry if there is enough space in the current erase block. If not, finalize the current erase block with an invalid entry spanning the free space of the block, and write the variable into the next erase block. If we run out of erase blocks, defragment the whole thing (needs one complete erase block size to cache things, afaics)?
> What happens when the power gets cut right between invalidating the old entry and writing the new entry? One of the design goals should be to always have a consistent state in the flash chip. So for the case that there is enough free space in the SPI NOR flash, I'd write the new entry and invalidate the old entry after that.
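Agreed. A minimal sketch of why that ordering is crash-safe (the state encoding and all names below are made up for illustration, nothing decided here): NOR flash writes can only clear bits, so both the "becomes valid" and the "becomes invalid" transitions can be plain writes, and the new entry is made valid before the old one is retired:

```c
#include <stdint.h>

/* Illustrative state byte: each transition only clears bits, which is
 * all a NOR flash write can do without an erase. */
enum entry_state {
	STATE_FREE    = 0xff,	/* erased, nothing written yet */
	STATE_VALID   = 0xfe,	/* bit 0 cleared: entry complete */
	STATE_INVALID = 0xfc,	/* bit 1 also cleared: superseded */
};

/* Simulated NOR write: 1-bits can be cleared, never set. */
static void nor_write8(uint8_t *cell, uint8_t val)
{
	*cell &= val;
}

/* Write the new entry first, invalidate the old entry second: a power
 * cut in between leaves two valid entries (recoverable), never zero. */
static void update_entry(uint8_t *old_state, uint8_t *new_state)
{
	/* ... the new entry's payload would be written here ... */
	nor_write8(new_state, STATE_VALID);
	/* power cut here: old and new entry both valid, value not lost */
	nor_write8(old_state, STATE_INVALID);
}
```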
> For the defragmentation, at least one erase block needs to be reserved so that the new entries can be written before the old ones are invalidated.
> The easiest method of wear leveling would be to use the blocks as a ring buffer; sure, this doesn't minimize the number of erase cycles, but since NOR flash is quite robust and new entries probably won't get written that often, I'd prefer this less complex approach to increase code readability and maintainability.
I agree that we don't want any complex wear leveling. n+1 blocks erased for n blocks of data should be good enough. I also had some kind of ring buffer in mind. I will describe my current sketch in more detail below.
>> Read strategy: Either cache and index the whole thing, or read until the first valid entry with matching key.
> The question I have here is what to do if there is more than one valid entry. Just using the first one introduces a bit of undetermined behavior (the block order in the flash might be different after defragmentation).
No, always choosing the first would be determined behavior. In case we keep things in order, it's also the better choice when it comes to searching for the current, valid entry (we'd have to always search the whole used space for a newer version otherwise). I'd treat it like this: if the old version wasn't invalidated, the update didn't finish. So we can ignore the new version.
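A sketch of that read rule (the entry layout and names are made up for illustration): scan in flash order and return the first valid entry whose key matches. An interrupted update leaves the old entry still valid and earlier in the scan, so the unfinished new version is ignored automatically:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ENTRY_VALID   0xfe	/* state byte values, made up */
#define ENTRY_INVALID 0xfc

/* Illustrative in-flash entry layout. */
struct entry {
	uint8_t state;
	char key[8];
	uint32_t value;
};

/* Return the *first* valid entry with a matching key, scanning in
 * flash order. */
static const struct entry *find_entry(const struct entry *e, size_t n,
				      const char *key)
{
	for (size_t i = 0; i < n; i++) {
		if (e[i].state != ENTRY_VALID)
			continue;
		if (strncmp(e[i].key, key, sizeof(e[i].key)) == 0)
			return &e[i];
	}
	return NULL;
}
```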
> So either use some counter (beware of overflows!) on either the flash erase block (small overhead and rather efficient) or the entry (the full region has to be either cached or read every time, and it needs more bytes than the counter per erase block), or accept the possibility of undetermined behavior (IMHO not a good idea).
I'm not sure how a per-erase-block counter would be used. When would it update? We could only do that on every erase. A per-variable counter might be useful, but I still hope it will work without one.
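If we ever did need a per-entry counter, the overflow worry can be handled with serial-number style comparison (as used for TCP sequence numbers): it stays correct across wraparound as long as two live counters are never more than half the counter range apart. A sketch, names made up:

```c
#include <stdbool.h>
#include <stdint.h>

/* Serial-number comparison: 'a' is newer than 'b' iff (a - b) mod 2^16
 * lies in [1, 0x7fff].  Survives counter wraparound. */
static bool counter_newer(uint16_t a, uint16_t b)
{
	uint16_t d = (uint16_t)(a - b);

	return d != 0 && d < 0x8000;
}
```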
> Also a good idea would be to invalidate everything except the newest entry when reading, in the case that there is more than one valid entry.
Might be done in a sanitation phase before each write. But if we define the first entry to be the valid one, we probably don't need it. Update: I didn't get around to this in my scheme below... maybe we should make the update in three steps: tag for update, insert new value, invalidate. This way we would only have to search for duplicates of valid entries tagged for update.
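In C, the three steps might look like this (the state encoding is made up for illustration; every transition only clears bits, so each step is a plain NOR write without an erase):

```c
#include <stdint.h>

/* Made-up state byte encoding:
 *   0xff free -> 0xfe valid -> 0xfa tagged for update -> 0xf8 invalid */
#define ST_FREE    0xff
#define ST_VALID   0xfe
#define ST_TAGGED  0xfa
#define ST_INVALID 0xf8

static void nor_write8(uint8_t *cell, uint8_t val)
{
	*cell &= val;	/* NOR writes can only clear bits */
}

/* The three update steps; a crash between any two leaves a state that
 * the next sanitation pass can recognize and finish. */
static void update_three_step(uint8_t *old_st, uint8_t *new_st)
{
	nor_write8(old_st, ST_TAGGED);	/* 1: announce the update */
	/* ... write the new entry's payload here ... */
	nor_write8(new_st, ST_VALID);	/* 2: new entry becomes valid */
	nor_write8(old_st, ST_INVALID);	/* 3: retire the old entry */
}

/* Sanitation only has to look for duplicates of tagged entries. */
static int needs_duplicate_check(uint8_t st)
{
	return st == ST_TAGGED;
}
```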
> For boolean options, you can also reserve a word of 32 or 64 bits and flip one bit for every toggle, then test whether the number of flipped bits is even or odd, so you don't have to write a new entry every time the option gets toggled. This also works for n-bit values, but I'm not sure if it's worth the more complex code.
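A rough sketch of that bit-flip trick (word size and names made up): reserve a word of all 1-bits; each toggle clears one more bit, which is a plain NOR write, and the boolean value is the parity of the number of cleared bits:

```c
#include <stdbool.h>
#include <stdint.h>

static unsigned popcount32(uint32_t x)
{
	unsigned n = 0;

	for (; x; x &= x - 1)	/* clears the lowest set bit */
		n++;
	return n;
}

/* One toggle = clear one more bit.  Returns false when all bits are
 * used up and a fresh word (i.e. a new entry) is needed. */
static bool toggle(uint32_t *word)
{
	if (*word == 0)
		return false;
	*word &= *word - 1;
	return true;
}

/* Even number of cleared bits: initial value; odd: flipped. */
static bool current_value(uint32_t word, bool initial)
{
	unsigned cleared = 32 - popcount32(word);

	return (cleared & 1) ? !initial : initial;
}
```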
How about optimizing every update (without a change in length)? If it's doable in place, just write the new value. Your bit flip could then be implemented in the next layer (keeping this one more focused).
> Having some sort of key-value store that can be written to during runtime would also be great for replacing the CMOS user option table thing.
I think both could live side-by-side, the user could choose where to store settings.
Updated scheme: With n erase blocks, the first n-1 blocks would be numbered 0..n-2. The last block would be the "spare block". The spare block has a header (or footer, with size <= size of an entry's header) that contains the number of another block (or 0xffff).
Updated write: Before each write, check if the spare block is marked with another block's number. If it is, find the first block with free space and continue the defragmentation process at this point.
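That check could be as simple as (names made up):

```c
#include <stdbool.h>
#include <stdint.h>

#define SPARE_UNUSED 0xffffu	/* footer value while no defrag runs */

/* Spare-block footer check before every write: any other value is the
 * number of the block whose move was interrupted, so defragmentation
 * has to be resumed before new entries are written. */
static bool defrag_pending(uint16_t spare_footer, uint16_t *resume_block)
{
	if (spare_footer == SPARE_UNUSED)
		return false;
	*resume_block = spare_footer;
	return true;
}
```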
  Find duplicate entries and invalidate the ones added later
  Find the first block m with invalid entries
  Write m into the spare block's footer
  Copy all valid entries from m into the spare block
    (it will fit, because there was at least one invalidated entry in m)
  For i in m .. n-3
      Erase i
      For j in m+1 .. n-3
          Copy all valid entries from j that fit into i, invalidating each!
      End For
  End For
  Add back all entries from spare block, invalidating each
  Erase the spare block
Updated read: If the spare block is marked 0xffff, read starting from block 0 until the first valid matching entry. If the spare block is marked with block m, search from block 0 to m-1, then in the spare block, then in blocks m to n-2.
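The search order could be computed like this (a sketch; n total erase blocks, data blocks 0..n-2, spare block at n-1, names made up):

```c
#include <stddef.h>
#include <stdint.h>

#define SPARE_UNUSED 0xffffu

/* Fill 'order' with block indices in search order and return their
 * count.  'mark' is the spare block's footer value. */
static size_t read_order(size_t n, uint16_t mark, size_t *order)
{
	size_t k = 0;

	if (mark == SPARE_UNUSED) {	/* no defrag in progress */
		for (size_t b = 0; b <= n - 2; b++)
			order[k++] = b;
		return k;
	}
	/* Defrag was interrupted while moving block 'mark': its valid
	 * entries live in the spare block, so visit the spare block in
	 * that position. */
	for (size_t b = 0; b < mark; b++)
		order[k++] = b;
	order[k++] = n - 1;		/* the spare block */
	for (size_t b = mark; b <= n - 2; b++)
		order[k++] = b;
	return k;
}
```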
Might work? It's too late here to judge ;)

Nico