Openfiler (shared disk) + Heartbeat Version 2
OpenFiler + Heartbeat Version 2 + Cluster (3 Nodes)
OpenFiler Download : http://sourceforge.net/project/showfiles.php?group_id=90725
Heartbeat Version 2 Download :
= http://mirror.secuidc.com/centos/4.5/extras/i386/RPMS/heartbeat-2.1.2-3.el4.centos.i386.rpm
- yum -y install lm_sensors
= http://mirror.secuidc.com/centos/4.5/extras/i386/RPMS/heartbeat-pils-2.1.2-3.el4.centos.i386.rpm
= http://mirror.secuidc.com/centos/4.5/extras/i386/RPMS/heartbeat-stonith-2.1.2-3.el4.centos.i386.rpm
- yum -y install OpenIPMI
- yum -y install OpenIPMI-devel
= http://mirror.secuidc.com/centos/4.5/extras/i386/RPMS/heartbeat-gui-2.1.2-3.el4.centos.i386.rpm
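Run on every node. A possible install sequence, assuming the mirror above is reachable (installing all four heartbeat packages in one rpm transaction lets rpm sort out the ordering between them):
# yum -y install lm_sensors OpenIPMI OpenIPMI-devel
# wget http://mirror.secuidc.com/centos/4.5/extras/i386/RPMS/heartbeat-pils-2.1.2-3.el4.centos.i386.rpm
# wget http://mirror.secuidc.com/centos/4.5/extras/i386/RPMS/heartbeat-stonith-2.1.2-3.el4.centos.i386.rpm
# wget http://mirror.secuidc.com/centos/4.5/extras/i386/RPMS/heartbeat-2.1.2-3.el4.centos.i386.rpm
# wget http://mirror.secuidc.com/centos/4.5/extras/i386/RPMS/heartbeat-gui-2.1.2-3.el4.centos.i386.rpm
# rpm -Uvh heartbeat-*.rpm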
- IP address configuration (last octet of 210.220.224.0/24)
——————————
Openfiler : 153-4
Node1 : 151
Node2 : 152
Node3 : 155
- Volume configuration (Openfiler)
——————————
lv01(to Node1)
lv02(to Node2)
lv03(to Node3)
- Heartbeat configuration
——————————
Node1 mcast 225.0.0.1
Node2 mcast 225.0.0.1
Node3 mcast 225.0.0.1
1. Configure iSCSI-type volumes on Openfiler.
https://210.220.224.153:446
- username : openfiler
- password : password
1.1 Services - Enable/Disable
iSCSI target = Enabled
1.2 General - Local Networks
Local networks configuration
- Node1 210.220.224.151 255.255.255.255 Share
- Node2 210.220.224.152 255.255.255.255 Share
- Node3 210.220.224.155 255.255.255.255 Share
1.3 Volumes - Physical Storage Mgmt
Physical storage Management
- /dev/sda SCSI 8.00GB msdos 3
- /dev/sdb SCSI 8.00GB gpt 1
1.4 Volumes - Volume Group Mgmt
Volume Group Management
- vg1 7.97GB 0bytes 7.97GB
1.5 Volumes - Create New Volume
Select volume group
– vg1
Create a volume in "vg1"
- lv01 2048MB
- lv02 2048MB
- lv03 2048MB
1.6 Volumes - List of Existing Volumes
Volumes in volume group "vg1" (8160MB)
lv01 - Edit
Node1 210.220.224.151 255.255.255.255 allow
Node2 210.220.224.152 255.255.255.255 allow
Node3 210.220.224.155 255.255.255.255 allow
lv02 - Edit
Node1 210.220.224.151 255.255.255.255 allow
Node2 210.220.224.152 255.255.255.255 allow
Node3 210.220.224.155 255.255.255.255 allow
lv03 - Edit
Node1 210.220.224.151 255.255.255.255 allow
Node2 210.220.224.152 255.255.255.255 allow
Node3 210.220.224.155 255.255.255.255 allow
2. Configure the iSCSI initiator on each node.
- If the messages printed when the iSCSI-related services run on a node come out garbled, the services do not work properly.
Accordingly, it is best to set the node's locale to 'en_US.utf8' (/etc/sysconfig/i18n).
>Node1 / Node2 / Node3
# yum -y install iscsi-initiator-utils
# vi /etc/iscsi.conf
DiscoveryAddress=210.220.224.153 //openfiler IP address
# vi /etc/scsi_id.conf
#options = -b
options = -g
>Openfiler - each node
Openfiler# service iscsi-target restart
Stopping iSCSI target service: [ OK ]
Starting iSCSI target service: [ OK ]
Node1# service iscsi restart
Node2# service iscsi restart
Node3# service iscsi restart
Searching for iscsi-based multipath maps
Found 0 maps
Stopping iscsid: [ OK ]
Removing iscsi driver: [ OK ]
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
mknod: `/dev/iscsictl': File exists
Starting iscsid: [ OK ]
Node1# iscsi-ls
Node2# iscsi-ls
Node3# iscsi-ls
*******************************************************************************
SFNet iSCSI Driver Version ...4:0.1.11-4(15-Jan-2007)
*******************************************************************************
TARGET NAME : iqn.2006-01.com.openfiler:vg1.lv03
TARGET ALIAS :
HOST ID : 4
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 210.220.224.153:3260,1
SESSION STATUS : ESTABLISHED AT Wed Aug 29 08:35:42 KST 2007
SESSION ID : ISID 00023d000001 TSIH 400
*******************************************************************************
TARGET NAME : iqn.2006-01.com.openfiler:vg1.lv01
TARGET ALIAS :
HOST ID : 5
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 210.220.224.153:3260,1
SESSION STATUS : ESTABLISHED AT Wed Aug 29 08:35:42 KST 2007
SESSION ID : ISID 00023d000001 TSIH 500
*******************************************************************************
TARGET NAME : iqn.2006-01.com.openfiler:vg1.lv02
TARGET ALIAS :
HOST ID : 6
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 210.220.224.153:3260,1
SESSION STATUS : ESTABLISHED AT Wed Aug 29 08:35:41 KST 2007
SESSION ID : ISID 00023d000001 TSIH 600
*******************************************************************************
3. Create filesystems and mount points on each node.
- Once the shared volumes are visible, each node recognizes them as its own disks.
- To keep the shared volumes from getting mixed up, create virtual devices and map them to the volumes.
Node1# fdisk -l
Node2# fdisk -l
Node3# fdisk -l
Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 65 522081 83 Linux
/dev/sda2 66 460 3172837+ 83 Linux
/dev/sda3 461 855 3172837+ 83 Linux
/dev/sda4 856 1044 1518142+ 5 Extended
/dev/sda5 856 986 1052226 82 Linux swap
Disk /dev/sdb: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Disk /dev/sdd doesn't contain a valid partition table
3.1 Map the shared resources to virtual devices
Node1# scsi_id -g -s /block/sdb
14f70656e66696c0000000000010000005dfd07000e000000
Node1# scsi_id -g -s /block/sdc
14f70656e66696c0000000000020000005dfd07000e000000
Node1# scsi_id -g -s /block/sdd
14f70656e66696c0000000000030000005dfd07000e000000
Node1# vi /etc/udev/rules.d/20-mapping.rules
KERNEL="sd*",BUS="scsi",PROGRAM="/sbin/scsi_id",RESULT="14f70656e66696c0000000000010000005dfd07000e000000",NAME="NodeA%n"
KERNEL="sd*",BUS="scsi",PROGRAM="/sbin/scsi_id",RESULT="14f70656e66696c0000000000020000005dfd07000e000000",NAME="NodeB%n"
KERNEL="sd*",BUS="scsi",PROGRAM="/sbin/scsi_id",RESULT="14f70656e66696c0000000000030000005dfd07000e000000",NAME="NodeC%n"
Node1# /sbin/start_udev
//Copy '20-mapping.rules' to 'Node2' and 'Node3' (one way to do this is sketched below).
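One way to push the rule file out and confirm the mapping, assuming root SSH access between the nodes; afterwards every node should show identical /dev/NodeA*, /dev/NodeB*, /dev/NodeC* entries:
Node1# scp /etc/udev/rules.d/20-mapping.rules Node2:/etc/udev/rules.d/
Node1# scp /etc/udev/rules.d/20-mapping.rules Node3:/etc/udev/rules.d/
Node2# /sbin/start_udev
Node3# /sbin/start_udev
# ls -l /dev/NodeA* /dev/NodeB* /dev/NodeC*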
3.2 Create partitions and filesystems
Node1# fdisk /dev/NodeA
# mkfs -j /dev/NodeA1
# tune2fs -c -1 -i 0 /dev/NodeA1
Node2# fdisk /dev/NodeB
# mkfs -j /dev/NodeB1
# tune2fs -c -1 -i 0 /dev/NodeB1
Node3# fdisk /dev/NodeC
# mkfs -j /dev/NodeC1
# tune2fs -c -1 -i 0 /dev/NodeC1
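fdisk is interactive; a typical dialog for creating a single partition spanning the whole disk (shown for Node1, identical on the other nodes) looks like this:
Node1# fdisk /dev/NodeA
n        //new partition
p        //primary
1        //partition number 1
<Enter>  //default first cylinder
<Enter>  //default last cylinder (use the whole disk)
w        //write the table and exit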
3.3 Create the mount points
Node1# mkdir /Node1 && mount /dev/NodeA1 /Node1
Node2# mkdir /Node2 && mount /dev/NodeB1 /Node2
Node3# mkdir /Node3 && mount /dev/NodeC1 /Node3
4. Configure Heartbeat Version 2 on each node.
>Node1 / Node2 / Node3
# cp /usr/share/doc/heartbeat-2.1.2/authkeys /etc/ha.d/
# cp /usr/share/doc/heartbeat-2.1.2/ha.cf /etc/ha.d/
# vi /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
udpport 694
# the multicast group must lie within 224.0.0.0/4
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node Node1
node Node2
node Node3
apiauth stonithd uid=root
respawn root /usr/lib/heartbeat/stonithd
apiauth cibmon uid=hacluster
respawn hacluster /usr/lib/heartbeat/cibmon -d
crm on
# vi /etc/ha.d/authkeys
auth 1
1 crc
# chmod 600 /etc/ha.d/authkeys
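'crc' gives integrity checking only, with no real authentication; on a network that is not fully trusted, a keyed hash is safer. A sha1 variant (the passphrase below is a placeholder; the file must be identical on all nodes):
# vi /etc/ha.d/authkeys
auth 1
1 sha1 SomeSharedSecret
# chmod 600 /etc/ha.d/authkeys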
5. Configure each resource through the HA GUI.
>Node1
# hb_gui &
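Everything the GUI creates is stored in the CIB (/var/lib/heartbeat/crm/cib.xml). For orientation, one of the three groups built below might look roughly like this in CIB XML; the resource IDs here are made up, and 'httpd_Node1' stands for the per-node init script discussed in 5.1:
<group id="group_Node1">
  <primitive id="fs_Node1" class="ocf" provider="heartbeat" type="Filesystem">
    <instance_attributes id="fs_Node1_ia">
      <attributes>
        <nvpair id="fs_Node1_device" name="device" value="/dev/NodeA1"/>
        <nvpair id="fs_Node1_directory" name="directory" value="/Node1"/>
        <nvpair id="fs_Node1_fstype" name="fstype" value="ext3"/>
      </attributes>
    </instance_attributes>
  </primitive>
  <primitive id="httpd_Node1" class="lsb" type="httpd_Node1"/>
</group>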
5.1 Apache configuration
//Install Apache on every node.
//To cope with failover, however, each node must also hold init scripts that can start the other nodes' Apache instances,
//so every node ends up with a total of three init scripts (a wrapper sketch follows at the end of this section).
>Node1 / Node2 / Node3
# wget http://archive.apache.org/dist/httpd/httpd-2.0.59.tar.gz
# tar xvzf httpd-2.0.59.tar.gz
# cd httpd-2.0.59
# vi conf //save the configure options as a small script; on Node2/Node3 use --prefix=/Node2/apache and /Node3/apache
./configure --prefix=/Node1/apache \
--enable-modules=so \
--enable-so \
--enable-rule=SHARED_CODE \
--enable-file-cache \
--enable-cache \
--enable-disk-cache \
--enable-headers \
--enable-mods-shared=most
# sh conf
# make && make install
# vi httpd.conf //add the following
...
<Location /server-status>
SetHandler server-status
Order allow,deny
Allow from all
</Location>
...
# cp httpd.init /etc/init.d/httpd
//'httpd.init' is the init script extracted from the Apache RPM shipped with Red Hat.
//With 'crm on', the CRM drives resources through their init scripts, so each script must return LSB-compliant exit codes.
//Check every command in both states; the lines marked //modified required changes to the stock script.
# service httpd start
stopped : echo $? == '0'
started : echo $? == '0' //modified
# service httpd status
stopped : echo $? == '3'
started : echo $? == '0'
# service httpd stop
stopped : echo $? == '0' //modified
started : echo $? == '0'
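A minimal sketch of one such per-node wrapper, assuming the --prefix=/Node1/apache build above (the file name httpd_Node1 is illustrative; the Node2/Node3 variants only change the prefix). The exit codes match the LSB checks listed above:
#!/bin/sh
# /etc/init.d/httpd_Node1 - controls the Apache installed under /Node1/apache
APACHECTL=/Node1/apache/bin/apachectl
PIDFILE=/Node1/apache/logs/httpd.pid

running() {
    [ -f "$PIDFILE" ] && kill -0 `cat "$PIDFILE"` 2>/dev/null
}

case "$1" in
  start)
    running && exit 0            # already running -> 0
    exec $APACHECTL start ;;
  stop)
    running || exit 0            # already stopped -> 0
    exec $APACHECTL stop ;;
  status)
    if running; then
        echo "httpd (Node1) is running"; exit 0
    else
        echo "httpd (Node1) is stopped"; exit 3    # LSB: 3 = not running
    fi ;;
  *)
    echo "Usage: $0 {start|stop|status}"; exit 1 ;;
esac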
5.2 Assigning a failover priority to each group resource
Nodes
- Node1
- Node2
- Node3
Groups
- group_Node1 resource_stickiness(0)
- group_Node2 resource_stickiness(0)
- group_Node3 resource_stickiness(0)
Places
- place_GN1_Node1 score(300) #uname eq Node1
- place_GN1_Node2 score(200) #uname eq Node2
- place_GN1_Node3 score(100) #uname eq Node3
- place_GN2_Node1 score(200) #uname eq Node1
- place_GN2_Node2 score(300) #uname eq Node2
- place_GN2_Node3 score(100) #uname eq Node3
- place_GN3_Node1 score(100) #uname eq Node1
- place_GN3_Node2 score(200) #uname eq Node2
- place_GN3_Node3 score(300) #uname eq Node3
- When defining a 'Place', the 'score' value determines the priority of the node a resource group moves to when its current node is shut down or put on standby (failover).
- The higher the 'score', the higher that node's priority, so at initial setup use the scores to pin each resource group to the node it should normally run on (see the CIB sketch below).
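For reference, 'place_GN1_Node1' above corresponds to roughly the following CIB constraint (IDs illustrative); the '#uname eq Node1' notation maps directly onto the rule's expression:
<rsc_location id="place_GN1_Node1" rsc="group_Node1">
  <rule id="place_GN1_Node1_rule" score="300">
    <expression id="place_GN1_Node1_expr" attribute="#uname" operation="eq" value="Node1"/>
  </rule>
</rsc_location>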