Configuring DRBD + HA (Heartbeat) on Linux

Active/Passive (hot standby)

This is usually the simplest configuration. Here, all your active services (e.g. Apache, MySQL, whatever…) run on one node, while the other node sits doing nothing. (See also A Basic Single IP Address Configuration)

In this case, you can possibly get away with just one DRBD share (resource) for multiple applications. The basic idea is that you have a DRBD share on which you put all your data (not normally binaries or other system files). Take Apache, for example: you might opt for a 'high availability' mountpoint /ha/web. This will be shared across both nodes with DRBD, but remember that only the primary node can mount it; you can't have it mounted on both nodes at the same time.

A typical, simple DRBD config for this would look like:

TODO: typical active/passive DRBD config
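A minimal sketch of such a config, using the same drbd-0.7-era syntax as the active/active example later in this document (the node names, IP addresses, backing devices, and the resource name web are assumptions):

# a single resource holding all the HA data for the active node
resource web {
  protocol C;
  startup { wfc-timeout 0; degr-wfc-timeout 120; }
  disk { on-io-error detach; }
  syncer { rate 6M; }
  on node1 {
    device /dev/drbd0;
    disk /dev/sda1;
    address 192.168.99.1:7788;
    meta-disk internal;
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/sda1;
    address 192.168.99.2:7788;
    meta-disk internal;
  }
}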

You would configure Apache so that all the websites have document roots under this share, e.g. you might have:

/ha
`-- web
    |-- client1
    `-- client2

A snippet from the Apache config:

<VirtualHost 192.168.0.1>
   ServerName client1.example.com
   DocumentRoot /ha/web/client1
   …
</VirtualHost>

<VirtualHost 192.168.0.1>
   ServerName client2.example.com
   DocumentRoot /ha/web/client2
   …
</VirtualHost>

Setting up Heartbeat isn't difficult, but there are two steps you need to be aware of (the rough manual equivalents are sketched just below):

1. running the 'drbddisk' script, which triggers DRBD to make the current node primary for the given resource
2. actually mounting the filesystem
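By hand, those two steps correspond roughly to the following commands. This is a sketch assuming the resource name web, device /dev/drbd0, filesystem ext3, and the /ha/web mountpoint used elsewhere in this document:

# promote this node to primary for the resource (what drbddisk does)
drbdadm primary web

# mount the filesystem on the DRBD device (what the Filesystem resource does)
mount -t ext3 /dev/drbd0 /ha/web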

Here's a typical, simple snippet from haresources showing an active/passive configuration. In this case, node1 is the default primary, and there is a "floating" IP address 192.168.0.1:

node1 drbddisk::web Filesystem::/dev/drbd0::/ha/web::ext3 192.168.0.1 httpd

Note the order in which the resources are started. drbddisk needs to go first, since it configures DRBD to be primary. Mounting the filesystem is next, and actually running Apache is last. Setting up the network IP obviously needs to happen at some point before Apache starts.

Note: drbddisk was called datadisk in drbd-0.6.x.

Note: web above (the parameter to datadisk or drbddisk) is the resource name you chose for the resource section in drbd.conf, not the device (unless you chose to give your resources device names…).

With several resources, the haresources line has to be written like this:

castor 10.0.0.30 drbddisk::r0 drbddisk::r1 \
        Filesystem::/dev/drbd0::/crypt::xfs \
        Filesystem::/dev/drbd1::/data::xfs \
        samba nfs-kernel-server

Active/Active with different services on both nodes

You might rightly think that the active/passive scenario is rather inefficient, because assuming your systems are generally reliable, you have a perfectly good server sitting doing nothing. Therefore, you might prefer to balance the load somewhat using an active/active configuration, where one node “normally” runs some services, and the other node “normally” runs some other (different) services. (See also A Two IP address Active/Active Configuration)

There are a couple of things you need to bear in mind before doing this:

- If one node can comfortably cope with the load of running all the services, there isn't necessarily much point in trying to split them.
- If one node can't cope with the load, then consider what's going to happen when you need to fail over all services to one node.

(These considerations are general to HA, not DRBD-specific.)

However, assuming this is what you want to do, and that the services running on each node each require a shared filesystem (i.e. DRBD), then you need to take the following steps:

1. Make sure you have separate DRBD resources (drbd0, drbd1, …) set up. Since it's not currently possible with DRBD to have a shared filesystem mounted on both primary and secondary nodes (unless you simulate it using cross-mounting with NFS or similar, which is well beyond the scope of this document), we need to separate the filesystems into groups according to which applications are going to be "grouped" together on the nodes when both nodes are up. For maximum flexibility in assigning applications to nodes, you might well want one DRBD share per application, assuming of course that no two applications need to access the same share.

2. If the DRBD devices are on the same disk spindle, use sync groups to ensure that the DRBD sync happens at a reasonable speed.

3. Configure Heartbeat appropriately.

Let’s run through these steps. We’ll assume by way of example that you want to run MySQL and Apache. You need one DRBD share for the MySQL data, and one for the Apache data. The nodes will be called ‘node1’ and ‘node2’. We decide to configure it as follows:

Share  | DRBD resource | DRBD device | Physical device | Mountpoint
-------+---------------+-------------+-----------------+-----------
Apache | web           | /dev/drbd0  | /dev/sda1       | /ha/web
MySQL  | db            | /dev/drbd1  | /dev/sda2       | /ha/mysql

Note: /dev/drbdX was /dev/nbX in drbd-0.6.x and older; see also DRBD/QuickStart07.

Let's assume, to complicate things, that you have a limited number of physical disks and are therefore forced to have drbd0 and drbd1 on the same spindle (i.e. the same hard disk), as shown in the table above (both resources are on /dev/sda in this case). This isn't ideal, but it is OK thanks to sync groups (the syncer 'group' option). This governs the order in which DRBD resources synchronise: normally they would all sync in parallel, which would kill performance with two resources on one disk, since the drive would constantly seek back and forth reading/writing the two resources. So we use sync groups to make one resource sync first; it doesn't really matter which goes first. Here's a snippet of a possible drbd.conf. NOTE: only the relevant options are shown here for clarity; you need all your usual options such as disk-size, sync-max etc. in there too.

Note: the syntax was different with drbd-0.6, but adapting the well-commented example drbd.conf to your needs should be easy.

# Our web share
resource web {
  protocol C;
  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
  startup { wfc-timeout 0; degr-wfc-timeout 120; }
  disk { on-io-error detach; } # or panic, …
  syncer {
    group 0;
    rate 6M;
  }
  on node1 {
    device /dev/drbd0;
    disk /dev/sda1;
    address 192.168.99.1:7788;
    meta-disk /dev/sdb1[0];
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/sda1;
    address 192.168.99.2:7788;
    meta-disk /dev/sdb1[0];
  }
}

# Our MySQL share
resource db {
  protocol C;
  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
  startup { wfc-timeout 0; degr-wfc-timeout 120; }
  disk { on-io-error detach; } # or panic, …
  syncer {
    group 1;
    rate 6M;
  }
  on node1 {
    device /dev/drbd1;
    disk /dev/sda2;
    address 192.168.99.1:7789;
    meta-disk /dev/sdb1[1];
  }
  on node2 {
    device /dev/drbd1;
    disk /dev/sda2;
    address 192.168.99.2:7789;
    meta-disk /dev/sdb1[1];
  }
}

In the above example, drbd0 will sync first and drbd1 second.
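You can watch this from /proc/drbd, which shows per-resource connection state and resync progress; with different sync groups, only one resource resyncs at a time:

# a quick way to watch resync order and progress (assumes the watch utility)
watch -n1 cat /proc/drbd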

Now for the Heartbeat config. The resources section in haresources is going to look something like this (you're probably going to need other resources too, such as IP addresses; this is a simple example):

node1 drbddisk::web Filesystem::/dev/drbd0::/ha/web::ext3 httpd
node2 drbddisk::db Filesystem::/dev/drbd1::/ha/mysql::ext3 mysqld

Note how node1 will “normally” run Apache using the drbd0 resource, and node2 will “normally” run MySQL with the drbd1 resource. If failover occurs, obviously one node will run both, and become DRBD primary for both resources (and have both shares mounted).
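For example, with a floating IP address added per service so that clients have a stable address to reach (the 192.168.0.x addresses are illustrative assumptions):

node1 192.168.0.1 drbddisk::web Filesystem::/dev/drbd0::/ha/web::ext3 httpd
node2 192.168.0.2 drbddisk::db Filesystem::/dev/drbd1::/ha/mysql::ext3 mysqld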

Keeping config files in sync

An obvious question you might ask is "how do I keep config files (e.g. for Apache) in sync?" The simple answer is that you should probably look at rsync, scp or some other similar method.
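For example, a minimal sketch that pushes the Apache configuration from node1 to node2 over ssh (the config path is an assumption and varies by distribution):

# mirror the Apache config tree to the peer; run on node1 after editing
rsync -av -e ssh /etc/httpd/conf/ node2:/etc/httpd/conf/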

If you have many config files (or "cluster files") to keep in sync across arbitrarily many hosts, have a look at csync2 by LinBit.

