Are EBS volumes wiped after use?
From the AWS documentation:
The physical block storage used by deleted EBS volumes is overwritten with zeroes before it is allocated to another account.
From an AWS rep on their forums:
I can confirm that when any customer volume is terminated (be it EBS or an instance storage volume) it is completely wiped before being made available for use by other customers.
If this is genuine and you really have someone else's data you need to get in touch with AWS. Extraordinary claims require extraordinary evidence.
TL;DR: I did two sets of tests and was unable to reproduce the results that @stevelandiss produced.
Update - test one
I tried this out myself. Here's what I did and my results.
TL;DR: could not reproduce.
0) I allocated an m3.medium spot instance with gp2 and io1 (provisioned IOPS) volumes, 10GB each. I used the standard Ubuntu 16.04 AMI (ami-b7a114d7). Note that I could not attach the volume as /dev/xvdb as the OP suggested; AWS forced me to use longer names like /dev/xvdba, which makes me slightly suspicious.
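For reference, roughly the same provisioning can be scripted with the AWS CLI. This is only a sketch; the volume ID, instance ID, availability zone, and device name below are placeholders, and the spot request itself is omitted:

aws ec2 create-volume --volume-type gp2 --size 10 --availability-zone us-east-1a
aws ec2 create-volume --volume-type io1 --iops 500 --size 10 --availability-zone us-east-1a
# attach each returned volume ID; the kernel may expose /dev/sdf under a different name such as /dev/xvdf
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf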
1) I installed photorec/testdisk
apt-get install testdisk
2) I used lsblk to look at the volumes available
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdba 202:13312 0 10G 0 disk
xvdbb 202:13568 0 10G 0 disk
xvdca 202:19968 0 4G 0 disk
I tried to mount the disks just to check, but of course they have no file system yet, so it failed:
mount /dev/xvdba /gp2/
mount: wrong fs type, bad option, bad superblock on /dev/xvdba,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
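A quick way to confirm that a device really has no filesystem signature before trying to mount it (just an aside; /dev/xvdba is the same device as above):

file -s /dev/xvdba
# prints "/dev/xvdba: data" when no filesystem signature is present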
3) I made file systems on each device
mkfs -t ext4 /dev/xvdba
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: e32b2ed1-a0f8-49df-895d-c56b9802a009
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
root@ip-11-0-2-184:/home/ubuntu# mkfs -t ext4 /dev/xvdbb
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: 4f1f7c75-bbce-4887-aac7-02e197a36c89
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
4) I mounted the disks
mount /dev/xvdba /gp2/
mount /dev/xvdbb /pio/
5) I ran photorec on each volume
photorec /dev/xvdba
[Screenshot: photorec results for the GP2 volume, no files found]
[Screenshot: photorec results for the io1 provisioned-IOPS volume, no files found]
As you can see, no files were found. If @stevelandiss can point out what he did differently, I can try again to reproduce. For example, he didn't mention any mounting, and he used a different device name. I'll try again without mounting in a few minutes, but I want to save this update so I don't lose it.
Update - test two
This time I did much the same, but I didn't create a file system or mount the disk. This is closer to what @stevelandiss did. It made no difference; no files were recovered.
[Screenshot: photorec results for the GP2 volume, no files found]
[Screenshot: photorec results for the io1 provisioned-IOPS volume, no files found]
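A complementary check (a sketch, not something I ran as part of this test) would be to sample the raw device and count non-zero bytes; a freshly allocated EBS volume should come back as all zeroes:

# read the first 1 GiB of the raw device and count any bytes that are not zero
dd if=/dev/xvdba bs=1M count=1024 status=none | tr -d '\0' | wc -c
# a result of 0 means the sampled region contains nothing but zeroes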
From the wipefs man page:
wipefs does not erase the filesystem itself nor any other data from the device
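In other words, a command like the one below only erases the magic signature bytes of a filesystem, RAID array, or partition table; the bulk of the data on the device is left in place (the device name is just an example):

wipefs --all /dev/xvdf
# removes signatures only; file contents remain recoverable with tools like photorec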
We need more information about the volume. How did you create it? Are you 100% sure that no one else created it but you?
AWS does not share how they designed the technology, so I am guessing as a NetApp-certified storage guy. EBS volumes are abstraction layers built on RAID groups; I doubt a volume maps to just one single disk. So every time you provision a volume, you would be getting chunks from different physical devices. That makes it very unlikely for you to get complete files.
Give us more information about how you provisioned the volume. I am guessing you are making a mistake at some point; otherwise this would be a huge security violation by AWS in such a basic feature.
Here is the test I did: I created a new volume and a new instance, attached the volume to the instance, and then ran a PhotoRec scan. I found 0 files, as expected.
PhotoRec 7.0, Data Recovery Utility, April 2015
Christophe GRENIER <[email protected]>
http://www.cgsecurity.org
Disk /dev/xvdf - 1073 MB / 1024 MiB (RO)
Partition Start End Size in sectors
P Unknown 0 0 1 130 138 8 2097152
0 files saved in /home/ec2-user/testdisk-7.0/recup_dir directory.
Recovery completed.
Do you have any other IAM users in your account? Maybe they attached that volume to their instances and used it that way.
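If CloudTrail is enabled in that region, you could check who attached the volume. A rough sketch, with the volume ID as a placeholder:

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=ResourceName,AttributeValue=vol-0123456789abcdef0 \
  --max-results 20
# look for AttachVolume events and the calling IAM identity in each event's userIdentity field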