Bugfix: File stream read inefficiency, allocation of f_string_t instead of char, and actually use state.step_small.
The file stream reader requires the buffer to be pre-allocated.
Prevent the buffer from being resized an extra time when the size actually read is smaller than the requested size.
The caller can then optimize for this by setting the read size to 1 byte larger than the actual file size.
Also switch to fread_unlocked() and handle the locks manually.
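A minimal sketch of the intended read behavior; buffer_t and buffer_read() here are hypothetical stand-ins for the project's actual types and functions:

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical buffer type standing in for the project's dynamic string. */
  typedef struct {
    char *string;
    size_t used;
    size_t size;
  } buffer_t;

  /* Read up to request bytes into a pre-allocated buffer.
   * The stream is locked once here so that fread_unlocked() can skip
   * the per-call locking that fread() performs. */
  int buffer_read(FILE * const stream, const size_t request, buffer_t * const buffer) {
    if (buffer->size - buffer->used < request) {
      char * const resized = realloc(buffer->string, buffer->used + request);
      if (!resized) return -1;

      buffer->string = resized;
      buffer->size = buffer->used + request;
    }

    flockfile(stream);
    const size_t read = fread_unlocked(buffer->string + buffer->used, 1, request, stream);
    funlockfile(stream);

    buffer->used += read;

    /* A short read means EOF (or a read error) was reached: return 0 so
     * the caller stops without triggering another resize. A caller that
     * requests the file size + 1 bytes therefore finishes in one pass
     * with no extra resize. */
    return read < request ? 0 : 1;
  }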
The strings are being allocated as f_string_t.
The f_string_t type definition is actually a "char *".
This is the size of a memory address (and could be as large as a 64-bit type on 64-bit architectures).
This is a huge mistake because this should only be using sizeof(char), which is 1.
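For illustration (the typedef matches the description above; the surrounding code is hypothetical):

  #include <stdlib.h>

  typedef char *f_string_t;

  int main(void) {
    const size_t length = 1024;

    /* The bug: this reserves length * sizeof(char *) bytes,
     * which is 8192 bytes on a typical 64-bit architecture. */
    f_string_t wasteful = malloc(length * sizeof(f_string_t));

    /* The fix: a string of length characters needs only
     * length * sizeof(char) bytes, and sizeof(char) is 1. */
    f_string_t correct = malloc(length * sizeof(char));

    free(wasteful);
    free(correct);

    return 0;
  }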
I provided state.step_large and state.step_small to the FSS functions as a quick solution for more control over memory management.
It turns out these are not being used, and for very large files this can be very wasteful.
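A sketch of how a growth function could honor state.step_small; state_t and buffer_t are simplified stand-ins for the real structures:

  #include <stdlib.h>

  /* Hypothetical subset of the state structure. */
  typedef struct {
    size_t step_small;
    size_t step_large;
  } state_t;

  typedef struct {
    char *string;
    size_t used;
    size_t size;
  } buffer_t;

  /* Grow by state.step_small rather than a fixed large default so that
   * small appends against very large files do not over-allocate. */
  int buffer_grow(const state_t state, const size_t needed, buffer_t * const buffer) {
    if (buffer->used + needed <= buffer->size) return 0;

    size_t step = state.step_small;
    if (step < needed) step = needed;

    char * const resized = realloc(buffer->string, buffer->size + step);
    if (!resized) return -1;

    buffer->string = resized;
    buffer->size += step;

    return 0;
  }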
In the long term, I believe a better fix is needed where the files are pre-processed to determine the objects and contents.
Then, the structures can be allocated with a known size.
The reason for this is that memory resizes appear to be significantly more expensive than processing an arbitrarily large string.
Processing that string twice instead of once is likely a worthwhile cost, given the time and resources lost to memory re-allocations.
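A rough outline of that two-pass idea, using a stand-in counting rule (one object per newline) in place of real FSS object detection:

  #include <stdlib.h>

  /* Pass 1: walk the buffer once, only counting objects. */
  size_t fss_count_objects(const char * const buffer, const size_t length) {
    size_t count = 0;

    for (size_t i = 0; i < length; ++i) {
      if (buffer[i] == '\n') ++count;
    }

    return count;
  }

  /* Allocate once at the exact size, then let a second pass populate. */
  int fss_read(const char * const buffer, const size_t length) {
    const size_t total = fss_count_objects(buffer, length);

    size_t * const object_starts = malloc(total * sizeof(size_t));
    if (total && !object_starts) return -1;

    /* Pass 2 would walk the buffer again, recording each object's
     * position into the pre-sized array with no re-allocations. */

    free(object_starts);

    return 0;
  }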