Can I run out of disk space by creating a very large number of empty files?

This output (from df -i) shows 28786688 inodes in total; once they are exhausted, the next attempt to create a file on the root filesystem (device /dev/sda2) will fail with ENOSPC ("No space left on device").

Explanation: in the original *nix filesystem design, the maximum number of inodes is fixed at filesystem creation time, with dedicated space allocated for them. You can run out of inodes before you run out of space for data, or vice versa. ext4, the most common default Linux filesystem, still has this limitation. For information about inode sizes and counts on ext4, see the mkfs.ext4 manpage.
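You can see both sides of this with df -i, and the mkfs.ext4 options below are the ones that control the inode count at creation time (check the manpage for details):

```shell
# Show total, used, and free inode counts for the root filesystem.
df -i /

# At creation time, mkfs.ext4 lets you tune the inode count:
#   -N <number-of-inodes>  set the total directly
#   -i <bytes-per-inode>   set the ratio (fewer bytes per inode = more inodes)
# Shown commented out: running it would reformat the device!
# mkfs.ext4 -N 30000000 /dev/sdXN
```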

Linux supports other filesystems without this limitation. On btrfs, space is allocated dynamically: "The inode structure is relatively small, and will not contain embedded file data or extended attribute data." (ext3/4 allocates some space inside inodes for extended attributes.) Of course, you can still run out of disk space by creating too much metadata, such as directory entries.

Thinking about it, tmpfs is another example where inodes are allocated dynamically. It's hard to know what the maximum number of inodes reported by df -i would actually mean in practice for these filesystems. I wouldn't attach any meaning to the value shown.
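For what it's worth, tmpfs does expose a limit via the nr_inodes mount option, and df -i reports it; this sketch assumes /dev/shm is a tmpfs mount, as it usually is:

```shell
# /dev/shm is normally a tmpfs mount; df -i reports its inode limit,
# even though the inodes themselves are allocated dynamically.
df -i /dev/shm

# The limit is a mount option, chosen at mount time (root required):
#   mount -t tmpfs -o size=64m,nr_inodes=4096 tmpfs /mnt/scratch
```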

"XFS also allocates inodes dynamically. So does JFS. So did/does reiserfs. So does F2FS. Traditional Unix filesystems allocate inodes statically at mkfs time, and so do modern FSes like ext4 that trace their heritage back to it, but these days that's the exception, not the rule.

"BTW, XFS does let you set a limit on the max percentage of space used by inodes, so you can run out of inodes before you get to the point where you can't append to existing files. (Default is 25% for FSes under 1TB, 5% for filesystems up to 50TB, 1% for larger than that.) Anyway, this space usage on metadata (inodes and extent maps) will be reflected in regular df -h" – Peter Cordes in a comment to this answer

Creating empty files consumes two resources:

  • inodes, one per file;
  • directory entries, also one per file, but stored together in the directory's data blocks.

The number of available inodes is usually fixed when a file system is created and can't be changed afterwards (though some file systems, such as Btrfs or XFS, allocate inodes dynamically). That's what df -i measures. When you run out of inodes, you can't create new files or directories, even if you have disk space available.

Directory entries take up space too, from the available disk space. You can see this by looking at the size of a directory: it’s always a multiple of the block size, and when a directory contains lots of files, its size grows. If you run out of disk space, you may not be able to create new files or directories in a directory which is “full” (i.e., where adding a new file would involve allocating a new block), even if you have inodes available.
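You can watch a directory's own size grow as entries are added; a quick demo (the exact sizes depend on the filesystem, but the growth does not):

```shell
d=$(mktemp -d)

# An empty directory typically occupies a single block (e.g. 4096 bytes on ext4).
before=$(stat -c '%s' "$d")

# Create many empty files; their names have to be stored somewhere.
for i in $(seq 1 5000); do : > "$d/file$i"; done

# The directory has grown to hold the new entries.
after=$(stat -c '%s' "$d")
echo "before=$before after=$after"
```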

So yes, it is possible to run out of disk space using only empty files.
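To see the failure mode directly, you can build a tiny ext4 image with very few inodes and fill it with empty files. A sketch, assuming e2fsprogs, root privileges, and loop-mount support (it skips gracefully where those are missing):

```shell
cd "$(mktemp -d)"

command -v mkfs.ext4 >/dev/null || { echo "e2fsprogs missing, skipping"; exit 0; }

# A 16 MiB image with only 32 inodes.
truncate -s 16M tiny.img
mkfs.ext4 -q -N 32 tiny.img

mkdir mnt
# Loop mounts need root and may be unavailable in containers.
mount -o loop tiny.img mnt 2>/dev/null || { echo "cannot loop-mount, skipping"; exit 0; }

# Keep creating empty files until the filesystem refuses (ENOSPC),
# even though almost all of the *data* space is still free.
i=0
while : > "mnt/f$i" 2>/dev/null; do i=$((i + 1)); done
echo "created $i files before 'No space left on device'"

df -i mnt   # inodes: 100% used
df -h mnt   # space: mostly free
umount mnt
```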

Pure logic argument:

A file name consists of a non-zero number of bytes. Even with theoretical maximum compression, in a hypothetical file system designed to allow the absolute maximum number of file names, each file name will still consume at least one bit somewhere on your physical disk. Probably more, but "1 bit per file" is the trivial minimum.

Calculate the number of bits that can possibly fit on your platters, and that gives a theoretical upper bound on the number of (empty or non-empty) files you can store on them.
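To put a number on it, take a hypothetical 1 TB disk (the size is an assumption for illustration) and the trivial 1-bit-per-file minimum:

```shell
# Hypothetical disk: 1 TB = 10^12 bytes (illustrative assumption).
bytes=1000000000000
bits=$((bytes * 8))

# At the trivial minimum of 1 bit per file name:
echo "theoretical upper bound: $bits empty files"   # 8 trillion
```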

So, the answer is yes. Eventually, you will run out of space, no matter what storage you are using, if you keep adding empty files. Obviously you will run out much sooner than the maximum calculated in this fashion, but run out you will.