Cannot connect to EC2 instance after converting Ubuntu 16 PV to Ubuntu 16 HVM
I realize that this question wasn't seen very much, but just in case, I'm hoping my results can help out someone in the future (maybe even me the next time I attempt this). I would like to thank Steve E. from Amazon support for helping me get my instance migrated <3
Anyways, there were 2 issues when migrating my Ubuntu 16.04 M3 (PV) instance to an Ubuntu 16.04 C5 (HVM) instance. The first issue was that the new C5 instances use the new NVMe device naming convention, so other tutorials about migrating PV to HVM don't work quite the same way. The other issue was that my M3 (PV) instance had been upgraded in place rather than built from a fresh image; I had actually gone from Ubuntu 12 -> Ubuntu 14 -> Ubuntu 16 over the past year or so. Because of that, the cloud-init network configuration file was never generated, so my instance could not be reached over the network.
Anyways, to migrate an Ubuntu 16.04 PV instance to an HVM instance using the new NVMe naming convention, do the following:
Pre-Requisites Summary:
Before starting, make sure to install the following on your PV instance:
$ sudo apt-get install grub-pc grub-pc-bin grub-legacy-ec2 grub-gfxpayload-lists
$ sudo apt-get install linux-aws
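Before you snapshot anything, it doesn't hurt to confirm the GRUB packages and an -aws kernel actually got installed (this quick check is my addition, not part of the original procedure):

$ dpkg -l grub-pc grub-legacy-ec2 linux-aws | grep ^ii
$ ls /boot/vmlinuz-*-aws

The second command should list at least one kernel image ending in -aws; the linux-aws kernel is presumably there so the converted volume can boot on the C5's ENA/NVMe hardware.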
- Stop the PV instance and create a snapshot of its root volume, then restore this snapshot as a new EBS volume in the same availability zone as the source (you can start the PV instance again right after the snapshot is created)
- Launch a new C5 HVM instance (destination), selecting the Ubuntu Server 16.04 LTS (HVM) image, in the same availability zone as the source instance (keep this new instance's EBS root volume at 8GB, as this root volume will only be used temporarily)
- After the instance launches, attach the volume you restored from the snapshot (that's the root volume from the PV instance) as /dev/sdf (on the Ubuntu system, the name will be nvme1n1)
- Create a new (blank) EBS volume (same size as your 'source' PV root volume) and attach it to the HVM instance as /dev/sdg (on the Ubuntu system, the name will be nvme2n1)
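If you prefer to script the volume shuffling instead of clicking through the console, the steps above map roughly onto the AWS CLI like this (a sketch I'm adding; all IDs and the availability zone are placeholders for your own values, and --size 100 assumes a 100GB source root as in the lsblk output further down):

$ aws ec2 create-snapshot --volume-id vol-PVROOT --description "PV root for HVM migration"
$ aws ec2 create-volume --snapshot-id snap-FROMABOVE --availability-zone us-east-1a
$ aws ec2 attach-volume --volume-id vol-RESTORED --instance-id i-NEWC5 --device /dev/sdf
$ aws ec2 create-volume --size 100 --availability-zone us-east-1a --volume-type gp2
$ aws ec2 attach-volume --volume-id vol-BLANK --instance-id i-NEWC5 --device /dev/sdg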
Migration:
Once logged into your instance, use sudo su to execute all commands as the root user.
Display your volumes
# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0    8G  0 disk
└─nvme0n1p1 259:1    0    8G  0 part /
nvme1n1     259:2    0  100G  0 disk
nvme2n1     259:3    0  100G  0 disk
- nvme0n1 is the HVM root you just created (just to boot this time)
- nvme1n1 is the PV root you restored (will be converted to HVM)
- nvme2n1 is the blank volume (will receive the conversion from the PV root nvme1n1)

Create a new partition on nvme2n1 (nvme2n1p1 will be created)
# parted /dev/nvme2n1 --script 'mklabel msdos mkpart primary 1M -1s print quit'
# partprobe /dev/nvme2n1
# udevadm settle
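Before copying anything, you can confirm the new partition actually showed up (my addition):

# lsblk /dev/nvme2n1

You should see nvme2n1p1 listed as a partition spanning (almost) the whole disk; if it isn't there, re-run partprobe before continuing.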
Check the 'source' volume and minimize the size of the original filesystem to speed up the process. We do not want to copy free disk space in the next step.
# e2fsck -f /dev/nvme1n1 ; resize2fs -M /dev/nvme1n1
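The dd in the next step copies only up to the shrunken filesystem's block count, which it reads from the superblock with dumpe2fs. If you want to eyeball those numbers first (purely informational, my addition):

# dumpe2fs -h /dev/nvme1n1 | grep -E 'Block count|Block size'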
Duplicate the 'source' to the 'destination' volume:
# dd if=/dev/nvme1n1 of=/dev/nvme2n1p1 bs=$(blockdev --getbsz /dev/nvme1n1) conv=sparse count=$(dumpe2fs /dev/nvme1n1 | grep "Block count:" | cut -d : -f2 | tr -d "\\ ")
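The copy prints nothing until it finishes, which can take a while on a large volume. GNU dd (the coreutils version shipped with Ubuntu 16.04 is new enough) accepts status=progress, so if you want a running byte count the same command becomes:

# dd if=/dev/nvme1n1 of=/dev/nvme2n1p1 bs=$(blockdev --getbsz /dev/nvme1n1) conv=sparse status=progress count=$(dumpe2fs /dev/nvme1n1 | grep "Block count:" | cut -d : -f2 | tr -d "\\ ")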
Resize the 'destination' volume to maximum:
# e2fsck -f /dev/nvme2n1p1 && resize2fs /dev/nvme2n1p1
Prepare the destination volume:
# mount /dev/nvme2n1p1 /mnt/ && mount -o bind /dev/ /mnt/dev && mount -o bind /sys /mnt/sys && mount -o bind /proc /mnt/proc
chroot to the new volume
# chroot /mnt/
Reinstall grub on the chrooted volume:
# grub-install --recheck /dev/nvme2n1
# update-grub
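While you are still inside the chroot, it's also worth checking that /etc/fstab identifies the root filesystem by UUID or LABEL rather than an old PV device name like /dev/xvda1, since device names change on the NVMe instance (this check is my addition):

# blkid /dev/nvme2n1p1
# grep -v '^#' /etc/fstab

dd preserves the filesystem's UUID and label, so on a standard Ubuntu cloud image (which mounts / by LABEL=cloudimg-rootfs) nothing should need changing; if fstab references a device name instead, switch it to the UUID or LABEL reported by blkid.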
Exit the chroot
# exit
Shut down the instance
# shutdown -h now
After the conversion, you now need to do the following:
Detach the 3 volumes you previously had attached to the HVM instance. Attach the last volume you created (the formerly blank one, which now holds the converted root) as /dev/sda1 on the console (it was previously seen as /dev/nvme2n1 on the system) to the HVM instance. Start the HVM instance.
The new HVM instance should now boot successfully and will be an exact copy of the old source PV instance (if you used the correct source volume). Once you have confirmed that everything is working, the source instance can be terminated.
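If the instance does not come back reachable, the system log is the quickest way to see how far boot got (whether GRUB found the root filesystem and whether networking came up). You can grab it from the EC2 console's Get System Log option, or with the CLI (hypothetical instance ID):

$ aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text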
Updating network configuration (optional)
Now, the steps above will work for the majority of people. However, my instance still could not be reached. The reason was that I had upgraded Ubuntu in place on my instance instead of starting from a fresh image. This left the old eth0 config active and no 50-cloud-init.cfg config file in place.
If you already have the file /etc/network/interfaces.d/50-cloud-init.cfg, then you can follow along and update the file instead of creating a new one. Also, assume all commands are run via sudo su.
Shut down the instance, detach the volumes, and return to the same configuration as before: attach the 8GB volume as /dev/sda1 and your final destination volume as /dev/sdf. Start the instance up and log in.
Mount /dev/sdf, which should now be nvme1n1p1, by doing the following:
# mount /dev/nvme1n1p1 /mnt/ && mount -o bind /dev/ /mnt/dev && mount -o bind /sys /mnt/sys && mount -o bind /proc /mnt/proc
chroot to the new volume (# chroot /mnt/), then either create or update the file /etc/network/interfaces.d/50-cloud-init.cfg with the following:
# This file is generated from information provided by
# the datasource. Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
auto lo
iface lo inet loopback

auto ens5
iface ens5 inet dhcp
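If you want to double-check that ens5 really is the interface name the instance will use, remember that the temporary HVM instance you are logged into is the same C5 that will later boot from this volume, so you can simply list its interface names (works from inside or outside the chroot):

# ip -o link show | awk -F': ' '{print $2}'

On a C5 with a single ENA interface this should print lo and ens5; if it shows something else, use that name in the config above instead.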
Exit the chroot (exit), then shut down the instance (shutdown -h now).
Follow the same detach/attach/start step as before: detach the volumes, attach the destination volume as /dev/sda1, and start the HVM instance!
You should be done!
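Once you can SSH into the migrated instance, a couple of quick sanity checks (my addition; exact output will differ):

$ uname -r
$ df -h /

uname -r will normally report the -aws kernel installed in the prerequisites (assuming GRUB picked it as the default), and df -h / should show the root filesystem back at roughly the full size of the volume rather than the minimized size used during the copy.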