Mount an EBS volume (not a snapshot) to an Elastic Beanstalk EC2 instance
I have found a solution. It could be improved by removing the "sleep 10", but unfortunately that is required because aws ec2 attach-volume is asynchronous and returns immediately, before the attachment has actually taken place.
container_commands:
  01mount:
    command: "aws ec2 attach-volume --volume-id vol-XXXXXX --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdh"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /data"
    test: "[ ! -d /data ]"
  04mount:
    command: "mount /dev/sdh /data"
    # Mount only if /data is not already a mountpoint
    test: "! mountpoint -q /data"
Note: ideally this would run in the commands section rather than container_commands, but the environment variables are not set in time.
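If you want to avoid the arbitrary delay altogether, the AWS CLI ships a waiter that blocks until the volume reports as in-use (the answer below uses the same waiter). A minimal sketch of the 02wait step using it, with the same placeholder volume ID as above:

container_commands:
  02wait:
    # Polls until the volume's state is 'in-use'; pass --region explicitly
    # if the instance has no default CLI region configured
    command: "aws ec2 wait volume-in-use --volume-ids vol-XXXXXX"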
To add to @Simon's answer (to avoid traps for the unwary):

- If the persistent storage being mounted will ultimately be used inside a Docker container (e.g. if you're running Jenkins and want to persist jenkins_home), you need to restart the Docker container after running the mount.
- You need the ec2:AttachVolume action permitted against both the EC2 instance (or the instance/* ARN) and the volume(s) you want to attach (or the volume/* ARN) in the EB assumed role policy; without this, the aws ec2 attach-volume command fails (see the policy sketch after this list).
- You need to pass --region to the aws ec2 ... command as well (at least, as of this writing).
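For the second point, here's a sketch of granting that permission from the same .ebextensions directory via a CloudFormation Resources block. The role name aws-elasticbeanstalk-ec2-role is the default EB instance role and is an assumption here; substitute your environment's role, and prefer scoping Resource down to your actual instance/* and volume/* ARNs rather than "*":

Resources:
  AttachVolumePolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: eb-attach-volume
      Roles:
        - aws-elasticbeanstalk-ec2-role   # assumed default; use your instance role
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - ec2:AttachVolume
              - ec2:DescribeVolumes   # needed by the describe-volumes lookup below
            Resource: "*"             # scope down to instance/* and volume/* ARNs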
Here's a config file that you can drop in .ebextensions. You will need to provide the VOLUME_ID that you want to attach; the example below looks it up by its Name tag, so replace tf-trading-prod with your own tag value. The test commands make it so that attaching and mounting only happen as needed, so you can eb deploy repeatedly without errors.
container_commands:
  00attach:
    command: |
      # Region = availability zone minus its trailing letter
      export REGION=$(/opt/aws/bin/ec2-metadata -z | awk '{print substr($2, 0, length($2)-1)}')
      export INSTANCE_ID=$(/opt/aws/bin/ec2-metadata -i | awk '{print $2}')
      # Look up the volume by its Name tag
      export VOLUME_ID=$(aws ec2 describe-volumes --region ${REGION} --output text --filters Name=tag:Name,Values=tf-trading-prod --query 'Volumes[*].VolumeId')
      aws ec2 attach-volume --region ${REGION} --device /dev/sdh --instance-id ${INSTANCE_ID} --volume-id ${VOLUME_ID}
      aws ec2 wait volume-in-use --region ${REGION} --volume-ids ${VOLUME_ID}
      sleep 1
    # Attach only if the device node does not exist yet
    test: "! file -E /dev/xvdh"
  01mkfs:
    command: "mkfs -t ext3 /dev/xvdh"
    # Format only if the device has no filesystem yet ('data' means unformatted)
    test: "file -s /dev/xvdh | awk '{print $2}' | grep -q data"
  02mkdir:
    command: "mkdir -p /data"
  03mount:
    command: "mount /dev/xvdh /data"
    # Mount only if /data is not already a mountpoint
    test: "! mountpoint /data"