Setting Up a Cluster on a Red Hat Enterprise Server (이재원)
Reference URLs: http://elibrary.fultus.com/technical/index.jsp
http://www.redhat.com/docs/manuals/csgfs/admin-guide/
1) Install the OS (be sure to include X Window).
2) Download the required packages (for 32-bit CentOS: http://mirror.secuidc.com/centos/4.4/csgfs/).
– Package list (install with: yum localinstall "/path-to-RPM-directory/*")
– Import the RPM GPG key: rpm --import http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-4
GFS-6.1.6-1.i386.rpm
GFS-kernel-2.6.9-58.3.centos4.i686.rpm
GFS-kernheaders-2.6.9-58.3.centos4.i686.rpm
ccs-1.0.7-0.i686.rpm
ccs-devel-1.0.7-0.i686.rpm
cman-1.0.11-0.i686.rpm
cman-devel-1.0.11-0.i686.rpm
cman-kernel-2.6.9-45.5.centos4.i686.rpm
cman-kernheaders-2.6.9-45.5.centos4.i686.rpm
dlm-1.0.1-1.i686.rpm
dlm-devel-1.0.1-1.i686.rpm
dlm-kernel-2.6.9-42.13.centos4.i686.rpm
dlm-kernheaders-2.6.9-42.13.centos4.i686.rpm
fence-1.32.25-1.i686.rpm
gnbd-1.0.8-0.i686.rpm
gnbd-kernel-2.6.9-9.44.centos4.i686.rpm
gnbd-kernheaders-2.6.9-9.44.centos4.i686.rpm
gulm-1.0.8-0.i686.rpm
gulm-devel-1.0.8-0.i686.rpm
iddev-2.0.0-3.i686.rpm
iddev-devel-2.0.0-3.i686.rpm
ipvsadm-1.24-6.i386.rpm
kernel-2.6.9-42.0.2.EL.i686.rpm
kernel-devel-2.6.9-42.0.2.EL.i686.rpm
kernel-doc-2.6.9-42.0.2.EL.noarch.rpm
lvm2-cluster-2.02.06-7.0.RHEL4.i386.rpm
magma-1.0.6-0.i686.rpm
magma-devel-1.0.6-0.i686.rpm
magma-plugins-1.0.9-0.i386.rpm
perl-Net-Telnet-3.03-3.noarch.rpm
rgmanager-1.9.54-1.i386.rpm
system-config-cluster-1.0.25-1.0.noarch.rpm
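Below is a minimal sketch of step 2, assuming the RPMs above have been downloaded into a local directory (the path /root/csgfs is only an illustration, not from the original guide):
-------------------------------------------------------------------------
# Import the CentOS 4 GPG key so the downloaded packages can be verified
rpm --import http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-4

# Install all downloaded RPMs at once; yum localinstall resolves the
# dependency order between them automatically
yum localinstall /root/csgfs/*.rpm
-------------------------------------------------------------------------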
==================== Manual installation order ====================================
Installing: magma ####################### [ 1/28]
Installing: ccs ####################### [ 2/28]
Installing: gulm ####################### [ 3/28]
Installing: cman-kernel ####################### [ 4/28]
Installing: cman ####################### [ 5/28]
Installing: dlm-kernel ####################### [ 6/28]
Installing: dlm ####################### [ 7/28]
Installing: perl-Net-Telnet ####################### [ 8/28]
Installing: gnbd-kernel ####################### [ 9/28]
Installing: rgmanager ####################### [10/28]
Installing: dlm-kernheaders ####################### [11/28]
Installing: gnbd ####################### [12/28]
Installing: GFS ####################### [13/28]
Installing: fence ####################### [14/28]
Installing: magma-devel ####################### [15/28]
Installing: iddev-devel ####################### [16/28]
Installing: GFS-kernheaders ####################### [17/28]
Installing: cman-kernheaders ####################### [18/28]
Installing: gnbd-kernheaders ####################### [19/28]
Installing: cman-devel ####################### [20/28]
Installing: lvm2-cluster ####################### [21/28]
Installing: dlm-devel ####################### [22/28]
Installing: ccs-devel ####################### [23/28]
Installing: GFS-kernel ####################### [24/28]
Installing: magma-plugins ####################### [25/28]
Installing: iddev ####################### [26/28]
==============================================================================
%% Try installing the GFS-kernel package first; from the error message it produces you can work out which kernel version is required.
3. Edit the hosts files
– host1 (cs1)
127.0.0.1 localhost.localdomain localhost
10.10.10.190 cs1.epersnet.com cs1
10.10.10.191 cs2.epersnet.com cs2
– host2 (cs2)
127.0.0.1 localhost.localdomain localhost
10.10.10.191 cs2.epersnet.com cs2
10.10.10.190 cs1.epersnet.com cs1
%% Caution: do not put the hostname or server name on the loopback address line.
– Write each host's hostname in FQDN form, e.g. cs1.epersnet.com and cs2.epersnet.com.
– Run a ping test between the hosts (see the sketch below).
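A quick check, assuming the hostnames and addresses above (run the mirror-image commands on cs2):
-------------------------------------------------------------------------
# On cs1: the hostname should be the FQDN; if it is not, set
# HOSTNAME=cs1.epersnet.com in /etc/sysconfig/network and apply it
hostname

# Ping the peer node by the name defined in /etc/hosts
ping -c 3 cs2.epersnet.com
-------------------------------------------------------------------------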
5. Create the following scripts (they will be run later).
– cluster.sh
————————————————————————————-
#!/bin/sh
. /etc/init.d/functions
case "$1" in
start)
    service ccsd start
    service cman start
    service fenced start
    service gfs start
    # Use -o upgrade when upgrading to a newer GFS version
    # mount -t gfs /dev/etherd/e1.0 /gfs/1.0/common
    service rgmanager start
    ;;
stop)
    service rgmanager stop
    # umount -l -a -t gfs 2> /dev/null
    # if [ $? != "0" ]; then
    #     echo "Umount gfs filesystem failed";
    #     exit;
    # fi
    sleep 1
    service gfs stop
    service fenced stop
    service cman stop
    service ccsd stop
    ;;
*)
    echo $"Usage: $0 {start|stop}"
    exit 1
esac
exit 0
————————————————————————
– tunning.sh
————————————————————————
chkconfig ccsd off
chkconfig cman off
chkconfig lock_gulmd off
chkconfig rgmanager off
chkconfig fenced off
chkconfig smartd off
chkconfig iptables off
chkconfig acpid off
chkconfig microcode_ctl off
chkconfig xinetd off
————————————————————————-
Run the tunning.sh script first (see the sketch below).
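A minimal usage sketch: make both scripts executable and run tunning.sh on each node now (cluster.sh is run later, in step 7):
-------------------------------------------------------------------------
# Run on both cs1 and cs2
chmod +x tunning.sh cluster.sh

# Turn off the services listed above so the cluster stack is started
# only through cluster.sh
./tunning.sh
-------------------------------------------------------------------------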
6. In X Window, run system-config-cluster to bring up the configuration tool (either node is fine).
– When the Cluster Configuration window opens, configure it by referring to the settings below.
For reference, the file below shows values that were configured beforehand (once you create the
configuration, the file /etc/cluster/cluster.conf is generated); compare that file with the values
below as you work through the settings.
%% Caution: never edit the /etc/cluster/cluster.conf file directly with a text editor!!
<?xml version="1.0"?>
<cluster config_version="18" name="cluster_test">
    <fence_daemon post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
        <clusternode name="cs1.epersnet.com" votes="1">
            <fence>
                <method name="1">
                    <device name="test_fence" nodename="cs1.epersnet.com"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="cs2.epersnet.com" votes="1">
            <fence>
                <method name="1">
                    <device name="test_fence" nodename="cs2.epersnet.com"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <fencedevices>
        <fencedevice agent="fence_manual" name="test_fence"/>
    </fencedevices>
    <rm>
        <failoverdomains>
            <failoverdomain name="test_failover" ordered="1" restricted="1">
                <failoverdomainnode name="cs1.epersnet.com" priority="1"/>
                <failoverdomainnode name="cs2.epersnet.com" priority="1"/>
            </failoverdomain>
        </failoverdomains>
        <resources>
            <ip address="10.10.10.193" monitor_link="1"/>
            <fs device="/dev/etherd/e4.0p1" force_fsck="0" force_unmount="1" fsid="52609" fstype="ext3" mountpoint="/Test4.0" name="e4.0p1" options="" self_fence="0"/>
            <script file="/etc/init.d/smb" name="samba"/>
        </resources>
        <service autostart="1" domain="test_failover" name="cifs" recovery="relocate">
            <ip ref="10.10.10.193"/>
            <fs ref="e4.0p1"/>
            <script ref="samba"/>
        </service>
    </rm>
    <cman expected_votes="1" two_node="1"/>
</cluster>
%% Note: for the fence device, the Manual Fencing type was selected from the menu.
%% Note: once the configuration is complete on one node, copy the /etc/cluster/cluster.conf file
to the other node (a sketch follows below).
For subsequent changes, click the Send to Cluster button at the top right of the configuration
window to synchronize the configuration with the other node.
%% From experience: failover only works if Relocate is selected as the Recovery setting for the
service in the configuration window.
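A hedged sketch of that initial copy, assuming the file was created on cs1 and the nodes can reach each other over ssh:
-------------------------------------------------------------------------
# On cs1: push the freshly created configuration to the second node
# (/etc/cluster should already exist there from the ccs package)
scp /etc/cluster/cluster.conf cs2.epersnet.com:/etc/cluster/cluster.conf
-------------------------------------------------------------------------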
7. Once the configuration is complete, run the cluster.sh script on each node.
– All daemons must start up successfully (see the sketch below).
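A sketch of starting the stack and confirming the daemons, assuming cluster.sh from step 5 is in the current directory (cman_tool and clustat come from the packages installed earlier):
-------------------------------------------------------------------------
# Run on each node
./cluster.sh start

# Confirm each daemon is running
service ccsd status
service cman status
service fenced status
service rgmanager status

# Check cluster membership, quorum, and service state
cman_tool status
clustat
-------------------------------------------------------------------------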
8. If you open the configuration window again, a Cluster Management button now appears at the top.
From there you can check cluster status and control services.
You can also monitor cluster status from the console with commands such as clustat or clustat -i 10.
*** To test failover, unmount the file system configured above or reboot the node the service is
running on; either will trigger failover. Stopping the service daemon or disconnecting the network
will not trigger it.
9. The GFS file system can be created only after the cluster configuration above is finished.
gfs_mkfs -p lock_dlm -t alpha:gfs1 -j 8 /dev/vg01/lvol0 <-- create a GFS file system (use lock_gulm instead of lock_dlm if the cluster uses GULM locking)
# gfs_mkfs -p lock_dlm -t cluster:enfs -j 2 /dev/sda1
In the command above, alpha:gfs1 has the form "ClusterName:FilesystemName" (the file system name can be anything that identifies it).
The -j option is the journal count; it determines how many nodes can share the file system. A value of 8 means up to eight cluster nodes can mount it (*important*).
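As a hedged example for this two-node setup (the cluster name cluster_test comes from the cluster.conf above; the device /dev/etherd/e1.0 is taken from the commented mount line in cluster.sh and is only an illustration):
-------------------------------------------------------------------------
# Create a GFS file system using DLM locking, belonging to cluster "cluster_test",
# named gfs1, with 2 journals so both nodes can mount it
gfs_mkfs -p lock_dlm -t cluster_test:gfs1 -j 2 /dev/etherd/e1.0
-------------------------------------------------------------------------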
10. Mounting the GFS file system on each node:
mount -t gfs BlockDevice MountPoint
Mount it using the form above (see the sketch below).
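For example, with the file system created in the sketch above (device and mount point are illustrative; the mount point matches the commented line in cluster.sh):
-------------------------------------------------------------------------
# Run on each node that should share the file system
mkdir -p /gfs/1.0/common
mount -t gfs /dev/etherd/e1.0 /gfs/1.0/common
-------------------------------------------------------------------------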
Why can’t I mount a GFS filesystem after changing the name of its cluster in Red Hat Enterprise Linux 4?
by Sam Folk-Williams
The cluster configuration tool system-config-cluster can be used to change the name of a cluster. However, doing this will prevent GFS filesystems from mounting, as the name of the cluster is used when the GFS filesystem is created.
The following error occurs if an attempt to mount a GFS filesystem is made after the name of the cluster is changed:
May 16 15:43:17 rh4cluster2 kernel: GFS: Trying to join cluster "lock_dlm", "rh4cluster:gfsNEW01"
May 16 15:43:17 rh4cluster2 kernel: lock_dlm: cman cluster name "newname" does not match file system cluster name "rh4cluster"
May 16 15:43:17 rh4cluster2 kernel: lock_dlm: init_cluster error -1
May 16 15:43:17 rh4cluster2 kernel: GFS: can't mount proto = lock_dlm, table = rh4cluster:gfsNEW01, hostdata =
To correct this problem, the superblock of the GFS filesystem must be altered so that it contains the correct cluster name. The following command should be issued from one node in the cluster:
gfs_tool sb /path/to/device table new_cluster_name:filesystem_name
Where "/path/to/device" is the location of the filesystem (for example, /dev/VolGroup00/LogVol00), and "new_cluster_name" is whatever the cluster name was changed to. The last argument, "filesystem_name", refers to the name the filesystem was given at the time it was created. The name can remain the same, or it can be changed at the same time the cluster name is changed.
After running the above command, the GFS filesystem should mount as expected.
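A hedged example using the names from the log above and the example device path from the article (the file system must be unmounted on every node before the superblock is changed):
-------------------------------------------------------------------------
# Run on one node only, with the GFS file system unmounted everywhere
gfs_tool sb /dev/VolGroup00/LogVol00 table newname:gfsNEW01
-------------------------------------------------------------------------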
Note: when creating the GFS file system, the name before the colon must be the cluster name (never the hostname). Be careful with this.
————————————————————————————————-
The /etc/cluster/cluster.conf file when using GFS only (no clustered services):
<?xml version="1.0"?>
<cluster config_version="26" name="cluster_test">
    <fence_daemon post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
        <clusternode name="cls1.epersnet.com" votes="1">
            <fence>
                <method name="1"/>
            </fence>
        </clusternode>
        <clusternode name="cls2.epersnet.com" votes="1">
            <fence>
                <method name="1"/>
            </fence>
        </clusternode>
    </clusternodes>
    <fencedevices/>
    <rm>
        <failoverdomains/>
        <resources/>
    </rm>
    <cman expected_votes="1" two_node="1"/>
</cluster>
———————————————————————————
The -j (journal count) option determines how many nodes can mount the file system.
Adding journals to an existing file system (possible only if free space is available):
check first with gfs_tool df MountPoint, then run
gfs_jadd -j Number MountPoint
Number
Specifies the number of new journals to be added.
MountPoint
Specifies the directory where the GFS file system is mounted
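A short sketch, assuming the file system is mounted at /gfs/1.0/common as in the step 10 example:
-------------------------------------------------------------------------
# Check current journal usage and free space first
gfs_tool df /gfs/1.0/common

# Add one more journal so one additional node can mount the file system
gfs_jadd -j 1 /gfs/1.0/common
-------------------------------------------------------------------------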
———————————————————————————