DRBD in Action

What is DRBD?

DRBD refers to block devices designed as a building block to form high-availability (HA) clusters. This is done by mirroring a whole block device over a dedicated network. DRBD can be understood as network-based RAID-1.

In the illustration above, the two orange boxes represent two servers that form an HA cluster. The boxes contain the usual components of a Linux kernel: file system, buffer cache, disk scheduler, disk drivers, TCP/IP stack and network interface card (NIC) driver. The black arrows illustrate the flow of data between these components.

The orange arrows show the flow of data, as DRBD mirrors the data of a highly available service from the active node of the HA cluster to the standby node of the HA cluster.

DRBD was merged into the Linux mainline kernel in version 2.6.33.

Installation

Environment

  • Ubuntu 12.10 x86_64
  • kernel 3.5.0
  • DRBD 8.3.13

Distributed Replicated Block Device (DRBD) mirrors block devices between multiple hosts. The replication is transparent to other applications on the host systems. Any block device (hard disks, partitions, RAID devices, logical volumes, etc.) can be mirrored.

To get started using DRBD, first install the necessary package

sudo apt-get install drbd8-utils

Configuration

Single-primary mode

In single-primary mode, any resource is, at any given time, in the primary role on only one cluster member. Since it is thus guaranteed that only one cluster node manipulates the data at any moment, this mode can be used with any conventional file system (ext3, ext4, XFS etc.).

Deploying DRBD in single-primary mode is the canonical approach for high availability (fail-over capable) clusters.

Servers

  • drbd1 10.1.1.11
  • drbd2 10.1.1.12

/etc/hosts

Make sure the hostname is set correctly on both nodes, and add the necessary IP-to-hostname mappings to /etc/hosts.

For example, make sure the following snippet is present

/etc/hosts
# snippet
10.1.1.11	drbd1
10.1.1.12	drbd2 

on both nodes.
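As a quick sanity check, the mapping can be queried with a one-line awk lookup. This sketch inlines the snippet via a here-doc purely for illustration; on a real node you would point HOSTS_FILE at /etc/hosts instead:

```shell
#!/bin/sh
# Look up a hostname in an /etc/hosts-style file.
# The here-doc mirrors the snippet above; on a real node
# you would set HOSTS_FILE=/etc/hosts instead.
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
10.1.1.11	drbd1
10.1.1.12	drbd2
EOF

lookup() {
    # Print the first field (IP) of the line whose second field matches.
    awk -v host="$1" '$2 == host { print $1 }' "$HOSTS_FILE"
}

ip1=$(lookup drbd1)
ip2=$(lookup drbd2)
echo "drbd1 -> $ip1"
echo "drbd2 -> $ip2"
rm -f "$HOSTS_FILE"
```

If either lookup prints nothing, the mapping is missing on that node.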

Add a new HDD dedicated to DRBD on each box; in this case, /dev/sdb on both.

To configure DRBD, edit /etc/drbd.conf on drbd1

/etc/drbd.conf
global { usage-count no; }
common { syncer { rate 100M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;
                degr-wfc-timeout 60;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "secret";
        }
        on drbd1 {
                device /dev/drbd0;
                disk /dev/sdb;
                address 10.1.1.11:7788;
                meta-disk internal;
        }
        on drbd2 {
                device /dev/drbd0;
                disk /dev/sdb;
                address 10.1.1.12:7788;
                meta-disk internal;
        }
}  

Use scp or rsync to copy the file to drbd2 over ssh.

Use the drbdadm utility to initialize the metadata storage. On each server, execute

sudo drbdadm create-md r0
 
# Sample output
# drbdadm create-md r0
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

On both hosts, start (or restart, if already running) the DRBD daemon

sudo service drbd restart

 
# on drbd1
root@drbd1:~# service drbd start
 * Starting DRBD resources                                                      [ d(r0) s(r0) n(r0) ]degr-wfc-timeout has to be shorter than wfc-timeout
degr-wfc-timeout implicitly set to wfc-timeout (15s)
outdated-wfc-timeout has to be shorter than degr-wfc-timeout
outdated-wfc-timeout implicitly set to degr-wfc-timeout (15s)
...                                                                      [ OK ]

# on drbd2
root@drbd2:~# service drbd start
 * Starting DRBD resources                                                      [ d(r0) s(r0) n(r0) ]degr-wfc-timeout has to be shorter than wfc-timeout
degr-wfc-timeout implicitly set to wfc-timeout (15s)
outdated-wfc-timeout has to be shorter than degr-wfc-timeout
outdated-wfc-timeout implicitly set to degr-wfc-timeout (15s)
                                                                         [ OK ]

On drbd1, or whichever host you wish to make the primary, run

sudo drbdadm -- --overwrite-data-of-peer primary all

After executing the above command, the data will start syncing from the primary to the secondary host. To watch the progress, enter the following on drbd1

watch -n1 cat /proc/drbd

To stop watching, press Ctrl+C
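If you want the progress as a bare number (for scripting or monitoring), the percentage can be pulled out of the /proc/drbd progress line with sed. The line below is a typical, abbreviated example of what an 8.3.x resync line looks like; the field values will of course differ on a real node:

```shell
#!/bin/sh
# Parse the resync percentage out of a /proc/drbd progress line.
# "$sample" stands in for a line of real /proc/drbd output;
# on a live node you would read the line with: grep "sync'ed" /proc/drbd
sample="	[===>................] sync'ed: 21.4% (80508/102332)M"
pct=$(printf '%s\n' "$sample" | sed -n "s/.*sync'ed: *\([0-9.]*\)%.*/\1/p")
echo "resync progress: ${pct}%"
```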

Finally, create a file system on /dev/drbd0 and mount it on drbd1 (creating the mount point first if it does not exist)

sudo mkfs.ext4 /dev/drbd0
sudo mkdir -p /drbd
sudo mount -t ext4 /dev/drbd0 /drbd

Testing

To test that the data is actually syncing between the hosts, copy some files on drbd1, the primary, to /drbd

sudo rsync -av --progress --stats /etc/default /drbd

Next, unmount /drbd

sudo umount /drbd

Demote the primary server (drbd1) to the secondary role

sudo drbdadm secondary r0

Now promote the secondary server (drbd2) to the primary role

sudo drbdadm primary r0

Lastly, mount the device on drbd2 (creating the /drbd mount point first if necessary)

sudo mkdir -p /drbd
sudo mount /dev/drbd0 /drbd

The contents of /drbd on drbd2 should mirror what was written on drbd1, since the data has been replicated over DRBD.
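For repeated testing, the switchover steps above can be wrapped in two small helper functions. This is only a sketch: it assumes the resource is named r0 and the mount point is /drbd, as used throughout this article, and performs no error handling:

```shell
#!/bin/sh
# Sketch of the manual switchover as shell functions.
# Assumes resource r0 and mount point /drbd (as in this article).

demote() {
    # Run on the current primary: release the device, then step down.
    sudo umount /drbd
    sudo drbdadm secondary r0
}

promote() {
    # Run on the new primary: take over the resource, then mount it.
    sudo drbdadm primary r0
    sudo mount /dev/drbd0 /drbd
}
```

A manual failover then becomes `demote` on the old primary followed by `promote` on the new one.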

