Providing Open Source High-Availability Software for Linux and other OSes since 1999.


This web page is no longer maintained. The information presented here exists only to avoid breaking historical links.
The project itself is still maintained and lives on: see the Linux-HA Reference Documentation.
The old wiki also remains available for browsing.

1 February 2010 Heartbeat 3.0.2 released; see the Release Notes

18 January 2010 Pacemaker 1.0.7 released; see the Release Notes

16 November 2009 LINBIT becomes the new Heartbeat steward; see the Announcement

Last site update:
2014-10-25 19:34:27

Only some outline notes by Tim J at present; full content will hopefully follow soon.

See also: DRBD, DRBD/QuickStart07, DRBD/NFS and the excellent documentation on the drbd.org site.

Active/Passive (hot standby)

This is usually the simplest configuration. Here, all your active services (e.g. Apache, MySQL, and so on) run on one node, while the other node sits idle. (See also A Basic Single IP Address Configuration)

In this case, you can often get away with just one DRBD resource (share) for multiple applications. The basic idea is that you have a DRBD share on which you put all your data (not normally binaries or other system files). Take Apache as an example: you might opt for a 'high availability' mountpoint /ha/web. This is replicated across both nodes with DRBD, but remember that only the primary node can mount it; you cannot have it mounted on both nodes at the same time.

A typical, simple DRBD config for this would look like:

TODO: typical active/passive DRBD config
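As a hedged sketch, a single-resource active/passive config might look like the following. It uses the same drbd-0.7-style syntax as the two-resource example later on this page; the node names, addresses and devices are placeholders, not values from this page:

```
# Sketch only -- node names, addresses and devices are examples,
# following the drbd-0.7 style used elsewhere on this page.
resource web {
  protocol C;
  startup { wfc-timeout 0; degr-wfc-timeout 120; }
  disk { on-io-error detach; }
  syncer { rate 6M; }
  on node1 {
    device /dev/drbd0;
    disk /dev/sda1;
    address 192.168.99.1:7788;
    meta-disk internal;
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/sda1;
    address 192.168.99.2:7788;
    meta-disk internal;
  }
}
```

With only one resource there is no need for a sync group; both nodes name the same device and backing disk, and differ only in their replication address.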

You would configure Apache with all the websites having document roots under this share, e.g. you might have:

|-/ha
|--- web
|----- client1
|----- client2

snippet from Apache config:

<VirtualHost 192.168.0.1>
   ServerName client1.example.com
   DocumentRoot /ha/web/client1
   ...
</VirtualHost>

<VirtualHost 192.168.0.1>
   ServerName client2.example.com
   DocumentRoot /ha/web/client2
   ...
</VirtualHost>

Setting up Heartbeat isn't difficult, but there are two steps you need to be aware of:

  • running the 'drbddisk' script, which triggers DRBD to make the current node primary for the given resource

  • actually mounting the filesystem

Here's a typical, simple, snippet from haresources showing an active/passive configuration. In this case, node1 is the default primary, and there is a "floating" IP address 192.168.0.1:

node1 drbddisk::web Filesystem::/dev/drbd0::/ha/web::ext3 192.168.0.1 httpd

Note the order in which the resources are started. drbddisk needs to go first, since it configures DRBD to be primary. Mounting the filesystem is next, and actually running Apache is last. Setting up the network IP obviously needs to happen at some point before Apache starts.

  • <!> drbddisk was datadisk in drbd-0.6.x.

    <!> web above (the parameter to datadisk or drbddisk) is the resource name you chose for the resource section in drbd.conf, not the device (unless you chose to give your resources device names...)

With several resources, the line is written like this:

castor 10.0.0.30 drbddisk::r0 drbddisk::r1  \
        Filesystem::/dev/drbd0::/crypt::xfs \
        Filesystem::/dev/drbd1::/data::xfs  \
        samba nfs-kernel-server

Active/Active with different services on both nodes

You might rightly think that the active/passive scenario is rather inefficient, because assuming your systems are generally reliable, you have a perfectly good server sitting doing nothing. Therefore, you might prefer to balance the load somewhat using an active/active configuration, where one node "normally" runs some services, and the other node "normally" runs some other (different) services. (See also A Two IP address Active/Active Configuration)

There are a couple of things you need to bear in mind before doing this:

  • If one node can comfortably cope with the load of running all the services, there isn't necessarily much point in trying to split them. However:
  • If one node can't cope with the load, then consider what's going to happen when you need to fail over all services to one node.

(these considerations are general to HA, not DRBD-specific)

However, assuming this is what you want to do, and that the services running on each node each require a shared filesystem (i.e. DRBD), then you need to take the following steps:

  1. Make sure you have separate DRBD resources (drbd0, drbd1...) set up. Since it is not currently possible with DRBD to have a shared filesystem mounted on both primary and secondary nodes (unless you simulate it by cross-mounting with NFS or similar, which is well beyond the scope of this document), we need to separate the filesystems into groups according to which applications will be grouped together on the nodes when both nodes are up. For maximum flexibility in assigning applications to nodes, you might well want one DRBD share per application, assuming of course that no two applications need to access the same share.

  2. If the DRBD devices are on the same disk spindle, use sync groups to ensure that DRBD sync happens at a reasonable speed
  3. Configure Heartbeat appropriately

Let's run through these steps. We'll assume by way of example that you want to run MySQL and Apache. You need one DRBD share for the MySQL data, and one for the Apache data. The nodes will be called 'node1' and 'node2'. We decide to configure it as follows:

share    DRBD resource   DRBD device   Physical device   mountpoint
Apache   r0              /dev/drbd0    /dev/sda1         /ha/web
MySQL    r1              /dev/drbd1    /dev/sda2         /ha/mysql

Let's assume, to complicate things, that you have a limited number of physical disks and are therefore forced to have drbd0 and drbd1 on the same spindle (i.e. the same hard disk), as shown in the table above (both resources are on /dev/sda in this case). This isn't ideal, but it is workable thanks to the 'sync-groups' option, which governs the order in which DRBD resources synchronise. Normally they would sync in parallel, which would kill performance with two resources on one disk, as the drive would constantly seek back and forth reading and writing the two resources. So we use sync groups to make one resource sync first; it doesn't really matter which goes first.

Here's a snippet of a possible drbd.conf. NOTE: only the relevant options are shown here for clarity; you need all your usual options such as disk-size, sync-max etc. in there too:

  • <!> Syntax was different with drbd-0.6.
    Adapting the well-commented example drbd.conf to your needs should be easy, though.

# Our web share
resource web {
  protocol C;
  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
  startup { wfc-timeout 0; degr-wfc-timeout     120; }
  disk { on-io-error detach; } # or panic, ...
  syncer {
     group 0;
     rate 6M;
  }
  on node1 {
    device /dev/drbd0;
    disk /dev/sda1;
    address 192.168.99.1:7788;
    meta-disk /dev/sdb1[0];
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/sda1;
    address 192.168.99.2:7788;
    meta-disk /dev/sdb1[0];
  }
}

# Our MySQL share
resource db {
  protocol C;
  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
  startup { wfc-timeout 0; degr-wfc-timeout     120; }
  disk { on-io-error detach; } # or panic, ...
  syncer {
     group 1;
     rate 6M;
  }
  on node1 {
    device /dev/drbd1;
    disk /dev/sda2;
    address 192.168.99.1:7789;
    meta-disk /dev/sdb1[1];
  }
  on node2 {
    device /dev/drbd1;
    disk /dev/sda2;
    address 192.168.99.2:7789;
    meta-disk /dev/sdb1[1];
  }
}

In the above example, drbd0 will sync first and drbd1 second.

Now for the Heartbeat config. The resources section in haresources is going to look something like this (you will probably need other resources too, such as IP addresses; this is a simplified example):

node1 drbddisk::web Filesystem::/dev/drbd0::/ha/web::ext3 httpd
node2 drbddisk::db Filesystem::/dev/drbd1::/ha/mysql::ext3 mysqld

Note how node1 will "normally" run Apache using the drbd0 resource, and node2 will "normally" run MySQL with the drbd1 resource. If failover occurs, obviously one node will run both, and become DRBD primary for both resources (and have both shares mounted).

Keeping config files in sync

An obvious question you might ask is: "how do I keep config files in sync, e.g. for Apache?" The simple answer is that you should probably use rsync, scp or some similar method to keep your configs in sync.

  • (!) If you have many config files (or "cluster files") to keep in sync across arbitrarily many hosts, have a look at csync2 by LINBIT.