[GFS] IPMI_Fencing Config
What is IPMI?
Intelligent Platform Management Interface
Advantages
Low cost.
Available on most recent server systems.
Disadvantages
There is no way to retain power to the IPMI BMC if power to the host is lost. If power to the host is lost, fencing will fail and cluster recovery will be delayed. It is therefore recommended to either have a second fencing device (such as an external power switch) or a server with multiple power supplies when using IPMI over LAN for fencing; a sketch of a two-method fence configuration follows this list.
Some IPMI implementations share an ethernet interface with the host operating system. If this interface is the same one used for the cluster communication, a loss of connectivity to this port will cause a need for fencing but also cause fencing to fail, delaying cluster recovery. It is recommended that the ethernet interface the BMC uses be reserved exclusively for BMC use; that is, the host operating system should not use the BMC’s NIC.
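As an example of the backup-device recommendation above, the cluster.conf fragment below is a minimal sketch of a two-method setup: fenced tries the IPMI method first and falls back to a network power switch only if IPMI fencing fails. The device names (virt05-ipmi, apc-switch), the fence_apc agent, and the port number are illustrative placeholders; see the "Fencing Configuration" section below for how the IPMI fence device itself is defined, and note that apc-switch would also need a matching <fencedevice agent="fence_apc" .../> entry in the <fencedevices> block.
<clusternode name="et-virt05" nodeid="1" votes="1">
  <fence>
    <method name="1">
      <device name="virt05-ipmi"/>
    </method>
    <method name="2">
      <device name="apc-switch" port="1"/>
    </method>
  </fence>
</clusternode>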
Prerequisites
You will need the OpenIPMI packages and the ipmitool utility in order to use this mini-HOWTO. On Red Hat Enterprise Linux 5 and CentOS 5, the packages you need are:
OpenIPMI
OpenIPMI-tools
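On Red Hat Enterprise Linux 5 or CentOS 5 these can typically be installed straight from the standard repositories, for example:
[root@et-virt05 ~]# yum install OpenIPMI OpenIPMI-tools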
You will also need some free IP addresses to assign to the IPMI devices.
This mini-HOWTO is for machines which have not had IPMI-over-LAN set up before.
Basic Configuration
See if you can start ipmi:
[root@et-virt05 ~]# service ipmi start
Starting ipmi drivers: [ OK ]
If it fails (FAILED), go have a beer or something, because the rest of this HOWTO will not help you. If it succeeds, you should see something like this (output may differ by hardware) in the dmesg output:
ipmi message handler version 39.1
IPMI System Interface driver.
ipmi_si: Trying SMBIOS-specified kcs state machine at i/o address 0xca8, slave address 0x20, irq 0
ipmi: Found new BMC (man_id: 0x0002a2, prod_id: 0x0000, dev_id: 0x20)
IPMI kcs interface initialized
ipmi device interface
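If the driver loads, you will probably also want the ipmi service to start automatically at boot so that the IPMI device (/dev/ipmi0) is available after a reboot. On RHEL 5/CentOS 5 this is usually just a matter of enabling the init script shipped with OpenIPMI, for example:
[root@et-virt05 ~]# chkconfig ipmi on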
If that works, proceed to configure IPMI for your hardware: for Dell, Intel, and IBM machines (typical), see the "LAN/LAN+ Configuration" section below; for HP machines with iLO MP, see "IPMI with HP iLO MP". Also, pay attention to the "Known Bugs" section, in particular the note on IPMI on Dell/IBM machines.
LAN/LAN+ Configuration
We need to configure the LAN device for network access. First, take a look at the configuration:
[root@et-virt05 ~]# ipmitool lan print 1
Set in Progress : Set Complete
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5
: User : MD2 MD5
: Operator : MD2 MD5
: Admin : MD2 MD5
: OEM : MD2 MD5
IP Address Source : Static Address
IP Address : 0.0.0.0 <-- need to set this
Subnet Mask : 0.0.0.0 <-- also need to set this
MAC Address : 00:13:72:50:05:f0
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 0.0.0.0 <-- might need to set this
Default Gateway MAC : 00:00:00:00:00:00
Backup Gateway IP : 0.0.0.0
Backup Gateway MAC : 00:00:00:00:00:00
802.1q VLAN ID : Disabled
802.1q VLAN Priority : 0
Cipher Suite Priv Max : Not Available
Now we need to set the IP address, netmask, gateway, etc. of the LAN interface. Things to remember:
The IP address must not be used anywhere else on the network.
If you are using DHCP for the host, you usually cannot use DHCP for IPMI-over-LAN (and vice versa), because some machines share a MAC address between the IPMI-over-LAN device and the device used by the operating system. I used static addresses on the IPMI devices.
[root@et-virt05 ~]# ipmitool lan set 1 ipsrc static
[root@et-virt05 ~]# ipmitool lan set 1 ipaddr 10.10.10.52
Setting LAN IP Address to 10.10.10.52
[root@et-virt05 ~]# ipmitool lan set 1 netmask 255.255.255.0
Setting LAN Subnet Mask to 255.255.255.0
[root@et-virt05 ~]# ipmitool lan set 1 arp respond on
[root@et-virt05 ~]# ipmitool lan set 1 arp generate on
[root@et-virt05 ~]# ipmitool lan set 1 arp interval 5
Finally, we have to enable the interface:
[root@et-virt05 ~]# ipmitool lan set 1 user
[root@et-virt05 ~]# ipmitool lan set 1 access on
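If you have several nodes to set up, the same sequence can be scripted. The following is only a sketch; CHANNEL, ADDR, and MASK are placeholders you would substitute per node:
#!/bin/bash
# Sketch: configure IPMI-over-LAN with a static address on one channel.
# CHANNEL, ADDR, and MASK are placeholders; adjust them for each node.
CHANNEL=1
ADDR=10.10.10.52
MASK=255.255.255.0
ipmitool lan set $CHANNEL ipsrc static
ipmitool lan set $CHANNEL ipaddr $ADDR
ipmitool lan set $CHANNEL netmask $MASK
ipmitool lan set $CHANNEL arp respond on
ipmitool lan set $CHANNEL arp generate on
ipmitool lan set $CHANNEL arp interval 5
ipmitool lan set $CHANNEL user
ipmitool lan set $CHANNEL access on
# Print the final settings so they can be double-checked.
ipmitool lan print $CHANNEL
Afterwards, ipmitool lan print 1 should show the address and netmask you just set instead of 0.0.0.0.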
User Configuration
First, we need to list the users and get the user ID of the administrative user for IPMI:
[root@et-virt05 ~]# ipmitool user list
ID Name Callin Link Auth IPMI Msg Channel Priv Limit
2 root true true true ADMINISTRATOR
If you don't already know the password for the 'root' user, you can set it from the command line as well (this should NOT be the same as the root user's password for the system). Note the 2 in the command line; it matches the administrator/root account listed above:
[root@et-virt05 ~]# ipmitool user set password 2
Password for user 2: mypassword
Password for user 2: mypassword
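If you want to double-check that user 2 really has administrator privilege and LAN access on channel 1, ipmitool can display the per-channel access settings (the channel number comes first, then the user ID from the listing above):
[root@et-virt05 ~]# ipmitool channel getaccess 1 2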
Preliminary Testing
The easiest thing to check is the power status. Note: you CANNOT issue IPMI-over-LAN commands to a machine from that same machine; that is, you must perform the following test from a different machine than the one you have just configured.
[root@et-virt06 yum]# ipmitool -H 10.10.10.52 -I lan -U root -P mypassword chassis power status
Chassis Power is on
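Before moving on to cluster.conf, it is also worth confirming that the fence agent itself can reach the BMC. fence_ipmilan, which ships with the cluster fence packages, takes similar parameters; a quick check from et-virt06 might look like this (the supported -o operations vary slightly between versions):
[root@et-virt06 yum]# fence_ipmilan -a 10.10.10.52 -l root -p mypassword -o status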
Fencing Configuration
Now that we have a working IPMI configuration, we need to add it to cluster.conf. Here is what the above configuration would look like, given that et-virt05 has its IPMI LAN interface set to 10.10.10.52 and et-virt06 has its IPMI LAN interface set to 10.10.10.62:
…
<clusternodes>
  <clusternode name="et-virt05" nodeid="1" votes="1">
    <fence>
      <method name="1">
        <device name="virt05-ipmi"/>
      </method>
    </fence>
  </clusternode>
  <clusternode name="et-virt06" nodeid="3" votes="1">
    <fence>
      <method name="1">
        <device name="virt06-ipmi"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
<fencedevices>
  <fencedevice agent="fence_ipmilan" ipaddr="10.10.10.52" login="root" name="virt05-ipmi" passwd="mypassword"/>
  <fencedevice agent="fence_ipmilan" ipaddr="10.10.10.62" login="root" name="virt06-ipmi" passwd="mypassword"/>
</fencedevices>
…
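Once the new cluster.conf is in place on all nodes (on a running RHEL 5 cluster this usually means bumping config_version and pushing the file with ccs_tool update), the configuration can be exercised by deliberately fencing a node with fence_node. Expect the target to power off, so only do this on a node you can afford to lose:
[root@et-virt05 ~]# ccs_tool update /etc/cluster/cluster.conf
[root@et-virt05 ~]# fence_node et-virt06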
IPMI with HP iLO MP
The iLO MP (Integrated Lights-Out Management Processor), which is present on some HP Integrity servers, has the ability to handle IPMI-over-LAN requests. Users who wish to configure fencing for machines with an iLO MP will therefore want to use the fence_ipmilan agent. Note that the fence_ilo agent will not work with the iLO MP interface.
The first step is to enable IPMI from within the iLO MP. You will need to log in to the iLO MP console in order to set this option. Once you are connected to the console, enter the 'Command Menu' (CM). You can then enable IPMI with the following command:
MP:CM> SA -lanipmi e
New Set Access Configuration (* modified values):
Remote/Modem : OS SESSION
Telnet : Enabled
Web SSL : Enabled
SSH : Enabled
* IPMI over LAN : Enabled
Warning: enabling IPMI over LAN will allow an anonymous user login
(an IPMI user with null name and password). To secure access please
set the password of the null user using an IPMI tool.
Confirm? (Y/[N]): Y
-> Set Access Configuration has been updated.
-> Command successful.
Check that you can connect to your iLO MP device via IPMI using ipmitool:
ipmitool -I lan -P "" -H <hostname> user list
Password:
ID Name Callin Link Auth IPMI Msg Channel Priv Limit
1 true false true ADMINISTRATOR
Note that the above command passes a null password (-P "") since initially the password is set to null. To set the password, use ipmitool with the 'user set password' command.
ipmitool -I lan -P “” -H <hostname> user set password 1 <password>
This command will set the password for userid "1", which is the only IPMI user. Note that userid "1" does not have an associated username; this is a limitation of the iLO MP device, and it is not possible to set a username for userid "1". This is very important: the IPMI BMC on an iLO MP device will not recognize usernames, so you should never pass a username to either ipmitool or fence_ipmilan.
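When testing the agent by hand against an iLO MP, the same rule applies: leave the login out and pass only the address and password, for example:
fence_ipmilan -a <hostname> -p <password> -o status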
Fencing Configuration
Configuring your cluster to use fence_ipmilan with an iLO MP device is the same as described above, with one small exception. As mentioned, you will not want to define a username, since the iLO MP device has no notion of usernames. Therefore, when setting up your fencedevice in the cluster.conf file, do not pass a "login=" parameter to the fence agent.
…
<clusternodes>
  <clusternode name="mynode-01" nodeid="1" votes="1">
    <fence>
      <method name="1">
        <device name="mynode-01-ipmi"/>
      </method>
    </fence>
  </clusternode>
  <clusternode name="mynode-02" nodeid="2" votes="1">
    <fence>
      <method name="1">
        <device name="mynode-02-ipmi"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
<fencedevices>
  <fencedevice agent="fence_ipmilan" ipaddr="10.0.0.1" name="mynode-01-ipmi" passwd="mypassword"/>
  <fencedevice agent="fence_ipmilan" ipaddr="10.0.0.2" name="mynode-02-ipmi" passwd="mypassword"/>
</fencedevices>
…
Known Bugs
The IPMI Specification, Version 2.0, Section 28.3 states:
[3:0] – chassis control
0h = power down. Force system into soft off (S4/S5) state. This is for
‘emergency’ management power down actions. The command
does not initiate a clean shut-down of the operating system prior to
powering down the system.
The source code for ipmitool correctly passes the 'power down' (0h) chassis control value to the BMC. However, most BMCs seem to ignore the fact that this operation is explicitly not a clean shutdown and generate an ACPI power-off event instead. This causes the operating system to initiate a clean shutdown, which is precisely what we do not want. Because of this, it is important to disable the operating system's handling of ACPI power events. For most users, simply stopping acpid will be sufficient:
[root@et-virt05 ~]# chkconfig --level 2345 acpid off
[root@ayanami lhh]# /sbin/service acpid stop
Stopping acpi daemon: [ OK ]
If power off is not immediate during testing of IPMI, it may be necessary to disable ACPI entirely during boot.
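On a RHEL 5/CentOS 5 system booted with GRUB, disabling ACPI at boot typically means appending acpi=off to the kernel line in /boot/grub/grub.conf. The entry below is only a sketch; the kernel version and root device are placeholders for whatever your own boot entry contains:
title CentOS (2.6.18-8.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 acpi=off
        initrd /initrd-2.6.18-8.el5.img
Keep in mind that acpi=off can also affect other hardware features (such as interrupt routing), so verify that the node still boots cleanly afterwards.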