What is the max number of attached volumes per Amazon EC2 instance?

The accepted answer is wrong: there is a limit. I have direct, current experience with EC2 t3.medium, m5a.large, and c5.xlarge instances running Amazon Linux, and here is what I found:

  • there seems to be a hard limit of 26 volumes
  • the device names are /dev/sd[a-z], /dev/xvd[a-z], /dev/xvd[a-z][a-z]

The Amazon documentation indirectly says that the limit is (currently) 26 devices:

EBS volumes are exposed as NVMe block devices on Nitro-based instances. The device names are /dev/nvme0n1, /dev/nvme1n1, and so on. The device names that you specify in a block device mapping are renamed using NVMe device names (/dev/nvme[0-26]n1). The block device driver can assign NVMe device names in a different order than you specified for the volumes in the block device mapping.

So, while you can generate plenty of device names with /dev/xvd?? that will actually work, they don't have to be in any order, and you can mix and match all the combinations (e.g., /dev/sdf, /dev/xvdz, /dev/xvdxy), there is still a limit of 26 devices.
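For example, this is roughly how I attached volumes under mixed device names with the AWS CLI (a sketch; the volume and instance IDs below are placeholders for your own):

    # three volumes, three deliberately different naming schemes
    aws ec2 attach-volume --volume-id vol-0aaa111 --instance-id i-0123456789abcdef0 --device /dev/sdf
    aws ec2 attach-volume --volume-id vol-0bbb222 --instance-id i-0123456789abcdef0 --device /dev/xvdz
    aws ec2 attach-volume --volume-id vol-0ccc333 --instance-id i-0123456789abcdef0 --device /dev/xvdxy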

What happens if you go beyond this limit? Two things:

  • If the instance is running, the volume you are trying to attach will remain stuck in "attaching" state.
  • If the instance is stopped, the volume attaches without problem, but when you try to start the instance, it will get stuck in "pending" state.

Because of this behavior, I doubt that the issue is about the OS (Linux, Windows, FreeBSD, whatever). If it were about the OS, the instance would enter the "running" state and then get stuck on boot, but it wouldn't get stuck in "pending".
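If you want to watch this happen, the attachment state is visible from the CLI; a rough sketch (the volume ID is a placeholder):

    # on a running instance past the limit, this keeps reporting "attaching"
    # instead of progressing to "attached"
    aws ec2 describe-volumes --volume-ids vol-0ddd444 \
        --query 'Volumes[0].Attachments[0].State'

    # to recover, detach the stuck volume again (add --force as a last resort)
    aws ec2 detach-volume --volume-id vol-0ddd444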

Also, you may want to list your /dev/ directory to see for yourself, but you do not have to worry about those Nitro device names /dev/nvme* or wonder how they are mapped from the device names that you specified in the attach-volume command; you will find both. In the example above, you will find the device names /dev/sdf, /dev/xvdz, and /dev/xvdxy as is, but you will also find the /dev/nvme* nodes. You can use the device names that you specified during the attach-volume command for things like mkfs, and I strongly recommend that you then use the UUID=... format to specify the volumes in your /etc/fstab and never try mounting by /dev/ node name.
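A sketch of that workflow on the instance itself (device name, filesystem type, and mount point are just examples):

    # create a filesystem using the device name from the attach-volume call
    sudo mkfs -t xfs /dev/sdf

    # print the filesystem UUID, e.g. UUID="1234abcd-..."
    sudo blkid /dev/sdf

    # then reference that UUID in /etc/fstab instead of the /dev node name:
    #   UUID=<uuid-from-blkid>  /data  xfs  defaults,nofail  0  2
    sudo mkdir -p /data
    sudo mount -a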


In fact, there is no limit if you stick with Linux (Windows instances are limited to 16 EBS volumes). You may have to change the naming of the devices; then you can easily get up to 24 volumes:

/dev/sdf1  /dev/sdf5  /dev/sdf9  /dev/sdg4  /dev/sdg8  /dev/sdh3
/dev/sdf2  /dev/sdf6  /dev/sdg1  /dev/sdg5  /dev/sdg9  /dev/sdh4
/dev/sdf3  /dev/sdf7  /dev/sdg2  /dev/sdg6  /dev/sdh1  /dev/sdh5
/dev/sdf4  /dev/sdf8  /dev/sdg3  /dev/sdg7  /dev/sdh2  /dev/sdh6
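A hedged sketch of how that naming scheme could be scripted with the AWS CLI (the instance ID is a placeholder and VOLUME_IDS is a hypothetical array of pre-created volume IDs; whether every attachment succeeds depends on the instance, as discussed above):

    INSTANCE_ID=i-0123456789abcdef0
    i=0
    # /dev/sdf1../dev/sdf9, /dev/sdg1../dev/sdg9, /dev/sdh1../dev/sdh6 = 24 names
    for dev in /dev/sd{f,g}{1..9} /dev/sdh{1..6}; do
        aws ec2 attach-volume --volume-id "${VOLUME_IDS[$i]}" \
            --instance-id "$INSTANCE_ID" --device "$dev"
        i=$((i + 1))
    done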

For further information, take a look at the docs: Attaching the Volume to an Instance.


AWS says that there is a limit of 40 volumes for Linux and 26 or 16 for Windows, with this caveat for each: "Attaching more than * volumes to a * instance is supported on a best effort basis only and is not guaranteed."

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html