I'm running an experiment that produces several million results.
In the past, I tried storing these values in a database, but performance degraded quickly as the number of records grew.
So I've reverted to a simpler solution: instead of using a database, I store each record as an individual file.
Things seem to be going well so far, but what is the limit on the number of files that a single folder can hold?
I'm running Ubuntu x64 on an ext3 file system. Looking at Wikipedia, I see:
The maximum number of inodes (and hence the maximum number of files and directories) is set when the file system is created. If V is the volume size in bytes, then the default number of inodes is given by V/(2^13) (or the number of blocks, whichever is less), and the minimum by V/(2^23). The default was deemed sufficient for most applications. The max number of subdirectories in one directory is fixed to 32000.
So, to get the number of blocks, I followed the instructions from this page:
I ran the command below to find the block size:

    sudo /sbin/dumpe2fs /dev/sdb1 | grep "Block size"

The result is 4096.
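Rather than deriving the limits from the formula, the filesystem can also report the counts that mkfs actually chose. A sketch, assuming the filesystem lives on /dev/sdb1 (the dumpe2fs line needs root, so it is shown as a comment; `df -i` works on any mounted path without root):

```shell
# Inode totals and usage for the filesystem holding the current directory;
# the "Inodes" column is the filesystem-wide cap on files plus directories
df -i .

# The superblock itself records the counts chosen at mkfs time
# (device name /dev/sdb1 is an assumption; needs root):
#   sudo dumpe2fs -h /dev/sdb1 | grep -E 'Inode count|Block count|Block size'
```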
The volume size (V) is 1 447 337 552, and 2^13 = 8192, so I can plug these into the formula.
Computing 1447337552/(2^13) gives roughly 176 676, but the file manager reports that more files than that are already inside. If I instead divide by the block size of 4096 to get the number of blocks (following the "whichever is less" rule), the maximum would be 353 353.
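The two candidate limits can be double-checked with shell arithmetic, plugging in the volume size and block size from above (both divisions are integer):

```shell
V=1447337552   # volume size (V) from above
BS=4096        # block size reported by dumpe2fs

echo $((V / 8192))   # V / 2^13, the default inode count
echo $((V / BS))     # the number of blocks, the other bound in the formula
```

This prints 176676 and 353353, matching the figures above.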
I'm unable to access the folder from the GNOME file manager (I can only see the tooltip in the footer of the window when I select the folder), which indicates 1 436 916 files in the sub-folder.
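Since the file manager chokes on a directory this large, counting from a terminal may work better. A sketch (the path is a placeholder for the experiment folder):

```shell
# Count regular files directly under the directory (path is a placeholder);
# find does not sort its output, so this stays fast with millions of entries
find /path/to/subfolder -maxdepth 1 -type f | wc -l

# Alternative: ls -f also skips sorting, but its count includes '.' and '..'
ls -f /path/to/subfolder | wc -l
```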
Is it correct to assume that, in this case, the maximum number of files per folder is between 176 and 353 thousand?