How can I increase the number of inodes in an ext4 filesystem?

It seems that you have many more files than would normally be expected.

As far as I know, there is no way to change the inode table size dynamically. I'm afraid you will need to back up your data, create a new filesystem, and restore your data.

To create a new filesystem with such a large inode table, you need to use the '-N' option of mke2fs(8).

I'd recommend using the '-n' option first (which does not create the filesystem, but displays useful information, including the default number of inodes) so that you can see the estimated inode count. Then, if you need to, use '-N' to create the filesystem with a specific number of inodes.
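For example, a sketch assuming the target device is /dev/sdb1 and that ten million inodes are wanted (both values are placeholders):

    # Dry run: show what mke2fs would do, including the inode count,
    # without actually creating the filesystem.
    mke2fs -n -t ext4 /dev/sdb1

    # If the default is too small, create the filesystem with an explicit
    # number of inodes.
    mke2fs -t ext4 -N 10000000 /dev/sdb1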


With 3.2 million inodes, you can have 3.2 million files and directories, total (but multiple hardlinks to a file only use one inode).

Yes, it can be set when creating a filesystem on the partition. The options -T usage-type, -N number-of-inodes, or -i bytes-per-inode can all set the number of inodes. I generally use -i, after comparing the output of du -s and find | wc -l for a similar collection of files and allowing for some slack.
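That estimate might look roughly like this (the path, device, and resulting bytes-per-inode value are only illustrative):

    # Average bytes per file for an existing, similar collection:
    du -sb /srv/similar-data          # total size in bytes
    find /srv/similar-data | wc -l    # number of files and directories

    # If that works out to, say, ~8000 bytes per file, choose something
    # smaller to leave slack, e.g. one inode per 4 KiB:
    mke2fs -t ext4 -i 4096 /dev/sdb1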

No, it can't be changed in-place on an existing filesystem. However:

  • If you're running LVM or the filesystem is on a SAN's LUN (either directly on the LUN, or as the last partition on the LUN), or you have empty space on the disk after the partition, you can grow the partition and then use resize2fs to expand the filesystem. This adds more inodes roughly in proportion to the added space. If you want to avoid running out of inodes before you run out of space (assuming future files have about the same average size), set a high enough reserved block percentage using tune2fs -m. See the first sketch after this list.
  • If you have enough space and can take the filesystem offline, then take it offline, create a new filesystem with more inodes, and copy all the files over.
  • If just a subset of the files is using a lot of the inodes and you have enough free space, create a loop device backed by a file on the filesystem, put a filesystem with more inodes (and maybe smaller blocks as well) on it, and move the offending directories into it. That's probably a performance hit and a maintenance hassle, but it is an alternative; see the second sketch after this list.
  • And of course, if you can delete a lot of unneeded files, that should help too.
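For the first option, a minimal sketch of the LVM case (the volume group and logical volume names, the amount of extra space, and the reserve percentage are assumptions; adjust them to your layout):

    # Grow the logical volume, then grow the filesystem to fill it;
    # resize2fs adds inodes roughly in proportion to the added space.
    lvextend -L +50G /dev/vg0/data
    resize2fs /dev/vg0/data

    # Optionally raise the reserved block percentage so that "usable" space
    # runs out before the inodes do.
    tune2fs -m 10 /dev/vg0/data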
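For the loop-device option, a sketch along these lines (the backing file path, its size, the mount point, and the block-size/bytes-per-inode values are assumptions):

    # Create a backing file and put a dense-inode filesystem on it.
    truncate -s 20G /var/inode-heavy.img
    mke2fs -t ext4 -b 1024 -i 2048 /var/inode-heavy.img

    # Mount it via a loop device and move the inode-hungry directories over.
    mkdir -p /srv/inode-heavy
    mount -o loop /var/inode-heavy.img /srv/inode-heavy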

As another workaround, consider packing huge collections of files into an uncompressed(!) tar archive and then using archivemount to mount it as a filesystem. A tar archive is better for sharing than a filesystem image and performs similarly when backing up to the cloud or to other storage.
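A sketch of that workaround (the paths are only examples, and archivemount needs to be installed; it mounts the archive through FUSE):

    # Pack the collection into an uncompressed tar archive.
    tar -cf /srv/photos.tar -C /srv photos

    # After verifying the archive, remove the originals to free the inodes.
    tar -df /srv/photos.tar -C /srv photos && rm -rf /srv/photos

    # Mount the archive as a filesystem.
    mkdir -p /srv/photos
    archivemount /srv/photos.tar /srv/photos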


If the collection is meant to be read-only, squashfs may be an option, but it requires certain options to be enabled in the kernel, and xz compression is available for tar as well, with similar performance.
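If squashfs is an option, a sketch might look like this (the source directory, image path, and compressor choice are assumptions):

    # Build a compressed, read-only image of the collection.
    mksquashfs /srv/photos /srv/photos.squashfs -comp xz

    # Mount it read-only; this needs squashfs support in the kernel.
    mkdir -p /mnt/photos
    mount -t squashfs -o loop /srv/photos.squashfs /mnt/photos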