Btrfs RAID1: How to replace a disk drive that is physically no longer there?
It turns out that this is a limitation of btrfs as of early 2017. To get the filesystem mounted rw again, one needs to patch the kernel. I have not tried it, though. I am planning to move away from btrfs because of this; one should not have to patch a kernel to be able to replace a faulty disk.
Click on the following links for details:
- Kernel patch
- Full email thread
Please leave a comment if you still suffer from this problem as of 2020. I believe that people would like to know if this has been fixed or not.
Update: as of 2020-10-20 I have moved to good old mdadm and LVM and am very happy with my RAID10 of 4x4 TB drives (8 TB of usable space). It is proven, works well, is not resource-intensive, and I have full trust in it.
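For reference, setting up such an array might look roughly like the sketch below; the device names, array name, and volume group name are placeholders, not the ones from my actual setup:

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde   # placeholder device names
pvcreate /dev/md0      # put LVM on top of the array
vgcreate vg0 /dev/md0  # placeholder volume group name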
btrfs replace needs the filesystem to be mounted rw to operate.
In a degraded Btrfs RAID1 filesystem, you have one and only one chance to mount the filesystem rw, using -o degraded:
degraded (default: off) Allow mounts with less devices than the RAID profile constraints require. A read-write mount (or remount) may fail when there are too many devices missing, for example if a stripe member is completely missing from RAID0.
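For example, the one-time degraded rw mount might look like this; the device name and mount point are placeholders:

mount -o degraded /dev/sdb /mountpoint   # /dev/sdb is any surviving member of the RAID1 (placeholder)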
After the rw mount, find the devid of the missing device:
btrfs filesystem show /mountpoint
Replace the missing device with the new one:
btrfs replace start -B <devid> /dev/new-disk /mountpoint
Check the status:
btrfs replace status /mountpoint
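If you prefer a one-shot check over the continuously updating display, btrfs replace status also takes a -1 flag (see btrfs-replace(8)):

btrfs replace status -1 /mountpoint   # print the status once instead of continuously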
btrfs replace will resume after a reboot.
Alternatively, add the new drive to the filesystem with btrfs device add /dev/sdd /mountpoint, then remove the missing drive with btrfs dev del missing /mountpoint. Remounting the filesystem may be required before btrfs dev del missing will work.
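Put together, that sequence might look like the sketch below; the device names and mount point are placeholders, and the remount is only needed if the delete is refused:

btrfs device add /dev/sdd /mountpoint   # add the replacement drive first
btrfs dev del missing /mountpoint       # then drop the record of the missing drive
# If the delete is refused, remount and retry, e.g. (placeholder device):
#   umount /mountpoint && mount -o degraded /dev/sdb /mountpoint
#   btrfs dev del missing /mountpoint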