This post shows how to take the HA setup built with DRBD and export it over NFS. It is one of the simplest ways to share redundant storage.
Installing DRBD and Heartbeat was covered in the previous post, so read that first.
To use NFS, the kernel must support it.
When compiling the kernel, be sure to enable the NFS options below.
File systems --->
    [*] Network File Systems --->
        <*> NFS file system support
        [*]   Provide NFSv3 client support
        [*]     Provide client support for the NFSv3 ACL protocol extension
        [*]   Provide NFSv4 client support (EXPERIMENTAL)
        <*> NFS server support
        -*-   NFS server support for NFS version 3
        [*]     NFS server support for the NFSv3 ACL protocol extension
        [*]   NFS server support for NFS version 4 (EXPERIMENTAL)
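If you are not sure whether the running kernel was already built with these options, the packaged kernel config is usually the quickest check (the config file path is an assumption here; it varies by distribution):

dongho7:~# grep NFSD /boot/config-$(uname -r)    # CONFIG_NFSD and related options should be =y or =m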
Then install the NFS server package on both servers, using apt-get.
dongho7:~# apt-get install nfs-kernel-server
Stop the NFS daemon for now; later it will be managed by heartbeat.
dongho7:~# /etc/init.d/nfs-kernel-server stop
Stopping NFS kernel daemon: mountd nfsd.
Unexporting directories for NFS kernel daemon....
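Since heartbeat should be the only thing that ever starts this daemon, it is also worth removing it from the boot runlevels; on Debian that is done with update-rc.d (this step is my addition, not part of the original walkthrough):

dongho7:~# update-rc.d -f nfs-kernel-server remove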
Configure /etc/exports identically on both servers.
dongho7:~# vi /etc/exports
dongho7:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync) hostname2(ro,sync)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt)
# /srv/nfs4/homes  gss/krb5i(rw,sync)
#
/drbd/cluster 10.30.6.1/24(rw,no_root_squash,sync)
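Later, once nfs-kernel-server is running, exportfs makes a handy sanity check that the export line was parsed as intended (this check is my addition, not from the original steps):

dongho7:~# exportfs -ra    # re-read /etc/exports
dongho7:~# exportfs -v     # list active exports and their effective options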
Create a filesystem on the DRBD primary.
First check /proc/drbd to confirm which node really is primary.
Then run mkfs on the primary side only.
dongho7:~# cat /proc/drbd
version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by root@dongho7, 2008-09-01 19:20:28
 0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
    ns:16384 nr:0 dw:0 dr:16384 al:0 bm:8 lo:0 pe:0 ua:0 ap:0 oos:0

dongho7:~# drbdadm state cluster
Primary/Secondary
dongho7:~# mkfs -t ext3 /dev/drbd0
mke2fs 1.40-WIP (14-Nov-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
3662848 inodes, 7325399 blocks
366269 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
224 block groups
32768 blocks per group, 32768 fragments per group
16352 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
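As the mkfs output hints, ext3 will periodically force a fsck at mount time, and on an HA volume an unexpected fsck can stall a failover for minutes. If that is a concern, tune2fs can disable the interval checks (my suggestion, not part of the original steps):

dongho7:~# tune2fs -c 0 -i 0 /dev/drbd0    # disable mount-count and time-based automatic fsck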
Create the mount-point directory identically on both servers.
dongho7:~# mkdir -p /drbd/cluster
Mount the device on the primary.
dongho7:~# mount /dev/drbd0 /drbd/cluster
dongho7:~# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1               972404    185392    737616  21% /
tmpfs                   251984         0    251984   0% /lib/init/rw
udev                     10240        44     10196   1% /dev
tmpfs                   251984         0    251984   0% /dev/shm
/dev/sda2              4814968    963768   3606612  22% /usr
/dev/sda5              2893628    237836   2508800   9% /var
none                    251984         0    251984   0% /dev/shm
/dev/drbd0            28841816    176200  27200540   1% /drbd/cluster
One important piece of work remains.
The NFS lock state kept in /var/lib/nfs must be moved onto the DRBD volume, so that it travels with the storage on failover.
Do this on both servers.
On the secondary the cp step is unnecessary; creating the symlink is enough.
When finished, be sure to unmount the volume on the primary.
dongho7:~# cp -a /var/lib/nfs /var/lib/nfs.bak
dongho7:~# mv /var/lib/nfs /drbd/cluster/var_lib_nfs
dongho7:~# ln -s /drbd/cluster/var_lib_nfs /var/lib/nfs
dongho7:~# ls -l /var/lib/nfs
lrwxrwxrwx 1 root root 25 Sep  9 11:57 /var/lib/nfs -> /drbd/cluster/var_lib_nfs
dongho7:~# umount /drbd/cluster

dongho8:/# mv /var/lib/nfs /var/lib/nfs.bak
dongho8:/# ln -s /drbd/cluster/var_lib_nfs /var/lib/nfs
dongho8:/# ls -l /var/lib/nfs
lrwxrwxrwx 1 root root 25 Sep  9 12:00 /var/lib/nfs -> /drbd/cluster/var_lib_nfs
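A quick sanity check, which I am adding here, is to confirm that both nodes now resolve /var/lib/nfs to the same location on the DRBD volume:

dongho7:~# readlink /var/lib/nfs
/drbd/cluster/var_lib_nfs
dongho8:/# readlink /var/lib/nfs
/drbd/cluster/var_lib_nfs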
The next step is equally important: the NFS statd daemon identifies itself by hostname, which can cause problems at failover time. So configure statd to announce itself under a different, shared name. Do this on both servers as well.
dongho7:~# vi /etc/default/nfs-common
dongho7:~# cat /etc/default/nfs-common
# If you do not set values for the NEED_ options, they will be attempted
# autodetected; this should be sufficient for most people. Valid alternatives
# for the NEED_ options are "yes" and "no".

# Options for rpc.statd.
#   Should rpc.statd listen on a specific port? This is especially useful
#   when you have a port-based firewall. To use a fixed port, set this
#   this variable to a statd argument like: "--port 4000 --outgoing-port 4001".
#   For more information, see rpc.statd(8) or http://wiki.debian.org/?SecuringNFS
STATDOPTS="-n drbd-storage"

# Some kernels need a separate lockd daemon; most don't. Set this if you
# want to force an explicit choice for some reason.
NEED_LOCKD=

# Do you want to start the idmapd daemon? It is only needed for NFSv4.
NEED_IDMAPD=

# Do you want to start the gssd daemon? It is required for Kerberos mounts.
NEED_GSSD=
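Once nfs-common has been started (by heartbeat, later on), the running statd process should carry this option; a grep like the following can verify it (my addition):

dongho7:~# ps ax | grep [r]pc.statd    # the command line should include "-n drbd-storage"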
Now configure the heartbeat side.
First add a small resource script that kills any nfsd processes that have not shut down completely.
dongho7:~# echo 'killall -9 nfsd ; exit 0' > /etc/heartbeat/resource.d/killnfsd
dongho7:~# chmod 755 /etc/heartbeat/resource.d/killnfsd
dongho7:~# ls -l /etc/heartbeat/resource.d/killnfsd
-rwxr-xr-x 1 root root 25 Sep  9 13:31 /etc/heartbeat/resource.d/killnfsd
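The script can be run by hand to confirm it always exits 0; heartbeat would otherwise treat a non-zero exit as a resource failure:

dongho7:~# /etc/heartbeat/resource.d/killnfsd ; echo $?    # should print 0 even when no nfsd is running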
Then change haresources as follows.
The noatime and nodiratime mount options are added to reduce load on the DRBD device.
Configure this on both servers.
dongho7:/etc/ha.d# vi haresources
dongho7:/etc/ha.d# cat haresources
dongho7 drbddisk::cluster \
        Filesystem::/dev/drbd0::/drbd/cluster::ext3::noatime,nodiratime \
        killnfsd \
        nfs-common \
        nfs-kernel-server \
        Delay::2::0 \
        IPaddr::10.30.6.10/24/eth0
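For reference, here is the same resource line with my annotations (do not paste the comments into the real file); heartbeat acquires resources left to right and releases them in reverse order at failover:

dongho7 \                           # preferred node for this resource group
  drbddisk::cluster \               # promote the DRBD resource 'cluster' to primary
  Filesystem::/dev/drbd0::/drbd/cluster::ext3::noatime,nodiratime \
                                    # mount the replicated device
  killnfsd \                        # clear out any leftover nfsd processes
  nfs-common \                      # start statd, using the shared /var/lib/nfs
  nfs-kernel-server \               # start the NFS server itself
  Delay::2::0 \                     # pause 2 seconds on start (0 on stop)
  IPaddr::10.30.6.10/24/eth0        # bring up the virtual IP that clients mount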
Now start the heartbeat daemon.
A little while after starting it, run df and you will see that heartbeat has mounted the filesystem.
dongho7:~# /etc/init.d/heartbeat start
Starting High-Availability services:
Done.

dongho7:~# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1               972404    185372    737636  21% /
tmpfs                   251984         0    251984   0% /lib/init/rw
udev                     10240        44     10196   1% /dev
tmpfs                   251984         0    251984   0% /dev/shm
/dev/sda2              4814968    963768   3606612  22% /usr
/dev/sda5              2893628    237912   2508724   9% /var
none                    251984         0    251984   0% /dev/shm
/dev/drbd0            28841816    176232  27200508   1% /drbd/cluster
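rpcinfo can confirm that the heartbeat-managed startup registered all the NFS-related RPC services (this check is mine; ports and versions will vary):

dongho7:~# rpcinfo -p localhost | egrep 'nfs|mountd|status'    # nfs, mountd and status (statd) should all appear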
Now let's test that failover works properly.
Restart the heartbeat daemon on the primary and you can watch everything move over to the secondary.
dongho7:~# ip addr show
1: lo: <LOOPBACK,UP,10000> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop
    link/ether 06:50:29:8f:01:8f brd ff:ff:ff:ff:ff:ff
4: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:13:8f:fc:bd:43 brd ff:ff:ff:ff:ff:ff
    inet 10.30.6.13/24 brd 10.30.6.255 scope global eth0
    inet 10.30.6.10/24 brd 10.30.6.255 scope global secondary eth0:0
5: eth1: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0e:2e:f7:03:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.13/24 brd 192.168.0.255 scope global eth1

dongho7:~# /etc/init.d/heartbeat restart
Stopping High-Availability services:
Done.

Waiting to allow resource takeover to complete:
Done.

Starting High-Availability services:
Done.
dongho8:/etc/ha.d# ip addr show
1: lo: <LOOPBACK,UP,10000> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop
    link/ether b6:b6:62:1a:10:a4 brd ff:ff:ff:ff:ff:ff
4: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:19:66:16:58:11 brd ff:ff:ff:ff:ff:ff
    inet 10.30.6.14/24 brd 10.30.6.255 scope global eth0
    inet 10.30.6.10/24 brd 10.30.6.255 scope global secondary eth0:0
5: eth1: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0e:2e:f7:03:d5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.14/24 brd 192.168.0.255 scope global eth1

dongho8:/etc/ha.d# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1               972404    206008    717000  23% /
tmpfs                   251984         0    251984   0% /lib/init/rw
udev                     10240        44     10196   1% /dev
tmpfs                   251984         0    251984   0% /dev/shm
/dev/sda2              4814968   3080320   1490060  68% /usr
/dev/sda5              2893628   1344680   1401956  49% /var
none                    251984         0    251984   0% /dev/shm
/dev/drbd0            28841816    176232  27200508   1% /drbd/cluster
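It is also worth confirming that the DRBD roles flipped along with the mount; on dongho8 the resource should now report itself as Primary:

dongho8:/etc/ha.d# drbdadm state cluster
Primary/Secondary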
Finally, try mounting the export over NFS from a third server.
Use the virtual IP for the mount.
As shown below, it mounts without problems.
dongho3:~# ip addr show
1: lo: <LOOPBACK,UP,10000> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: peth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:e0:4c:a9:b7:42 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::2e0:4cff:fea9:b742/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
    link/ether 00:13:8f:ac:00:80 brd ff:ff:ff:ff:ff:ff
4: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
5: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
    link/ether 00:e0:4c:a9:b7:42 brd ff:ff:ff:ff:ff:ff
    inet 10.30.6.11/24 brd 10.30.6.255 scope global eth0
    inet6 fe80::2e0:4cff:fea9:b742/64 scope link
       valid_lft forever preferred_lft forever

dongho3:~# mount -t nfs 10.30.6.10:/drbd/cluster /mnt/nfs
dongho3:~# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1               972404    395704    527304  43% /
tmpfs                   473908         0    473908   0% /lib/init/rw
udev                     10240        48     10192   1% /dev
tmpfs                   473908         0    473908   0% /dev/shm
/dev/sda2              4814968   1981892   2588488  44% /usr
/dev/sda5              2893628    779704   1966932  29% /var
/dev/sda6             66311448   4667516  58275508   8% /home
10.30.6.10:/drbd/cluster
                      28841856    176256  27200512   1% /mnt/nfs
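To make the client mount permanent, an /etc/fstab entry along these lines should work; the hard,intr options are my suggestion, so that the client blocks and retries across a failover instead of returning I/O errors:

10.30.6.10:/drbd/cluster  /mnt/nfs  nfs  hard,intr  0  0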
[Video: screen recording of the failover test]