DRBD on raw disk block device
Solution 1:
drbdadm attach data
is not the only command you need to run after creating the metadata.
One of the following procedures should bring your device up:
drbdadm create-md data
drbdadm up data
-- or --
drbdadm create-md data
drbdsetup-84 new-resource data
drbdsetup-84 new-minor data 1 0
drbdmeta 1 v08 /dev/sdb internal apply-al
drbdsetup-84 attach 1 /dev/sdb /dev/sdb internal
drbdsetup-84 connect data ipv4:10.10.10.16:7789 ipv4:10.10.10.17:7789 --protocol=C
Once you've done that, you'll have a device with a connection state of "Connected" and a disk state of "Inconsistent/Inconsistent"; this will always, and only, be the case right after you create brand-new metadata on both nodes.
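A quick check on either node (using standard drbdadm sub-commands against the resource name used above) should show roughly this:
# drbdadm cstate data
Connected
# drbdadm dstate data
Inconsistent/Inconsistent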
From there, simply pick one node to promote to Primary, which will cause DRBD to sync from Primary => Secondary:
# drbdadm primary data --force
Under normal circumstances you should never need to use --force to promote your DRBD device from here on out.
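If you want to watch the initial synchronization, /proc/drbd is the usual place to look on DRBD 8.4 (which the drbdsetup-84 commands above imply); for example:
# watch -n1 cat /proc/drbd
The ds: field should move from Inconsistent to UpToDate on both sides as the sync completes.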
However, you also said:
As this disk is virtual and the hypervisor I use allows on-the-fly disk extension, I do not want to bother with LVM operations or re-partitioning when the time comes to extend my DRBD file system
That probably isn't going to work with DRBD. DRBD stores its metadata at the end of the backing block device, and that metadata tracks, among other things, the number of blocks in the device. Dynamically extending the backing block device is therefore likely to cause problems for you.
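You can see the space reserved by internal metadata yourself by comparing the backing device size with the size DRBD exposes; a small illustration, assuming the /dev/sdb backing disk and minor number used in the commands above:
# blockdev --getsize64 /dev/sdb    # raw backing device
# blockdev --getsize64 /dev/drbd1  # DRBD device is slightly smaller; the difference is the internal metadata at the end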
Solution 2:
In the specific case of the Debian DRBD package, there is no need to run "drbdadm attach data".
Here is the minimal sequence to get DRBD up and running on Debian:
- Create your resource file /etc/drbd.d/data.res on both nodes, typically defining /dev/drbd1 (keep this volume number, 1, in mind for the clear-bitmap operation below!). A minimal example is sketched just after this list.
- Invoke drbdadm create-md data on both nodes.
- Start the service on both nodes; they should wait for each other to be ready: systemctl start drbd.service
- Confirm the Connected state with drbdadm cstate data. If it is not Connected, do not go further until any service startup or network connectivity issue is solved.
- On the primary node only, clear the bitmap to prevent a useless initial synchronization: drbdadm -- --clear-bitmap new-current-uuid data/1 (mind the last parameter: resourceName/volumeNumber).
- On the primary node only, promote the node to primary: drbdadm primary data
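For reference, a minimal data.res along those lines might look like the sketch below. The hostnames node1 and node2 are placeholders and must match each node's uname -n; the backing disk and addresses reuse the values from Solution 1 and should be adapted to your setup:
resource data {
  device    /dev/drbd1;   # DRBD device exposed to applications
  disk      /dev/sdb;     # raw backing block device
  meta-disk internal;
  net {
    protocol C;
  }
  on node1 {              # must match `uname -n` on the first node
    address 10.10.10.16:7789;
  }
  on node2 {              # must match `uname -n` on the second node
    address 10.10.10.17:7789;
  }
}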
From that point, on the primary node, the /dev/drbd1 device is available for any regular block operations such as blockdev or mkfs.
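For instance, a first filesystem could be created and mounted like this (the ext4 choice and the /mnt/data mount point are just an illustration):
# mkfs.ext4 /dev/drbd1
# mkdir -p /mnt/data
# mount /dev/drbd1 /mnt/data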
Trigger the clear-bitmap operation with care: it makes any data on the secondary node unrecoverable. That said, it is really convenient for the initial setup, as it prevents the secondary node's storage from being fully written for hours, which would otherwise force your virtualization layer to allocate blocks on storage, an annoyance for thin provisioning.