Memory
Typically, modern Linux systems (kernel 2.6+) use swapping to avoid the OOM killer (Out of Memory) and to protect the system from kernel freezes. But Hadoop is Java based, and each Java service (HDFS, HBase, ZooKeeper etc.) is typically configured with its own maximum heap size (-Xmx). That configuration has to match the memory actually available in the system. A common formula for MapReduce1:
TOTAL_MEMORY = (Mappers + Reducers) * CHILD_TASK_HEAP + TT_HEAP + DN_HEAP + RS_HEAP + OTHER_SERVICES_HEAP + 3GB (for OS and caches)
With MapReduce2, YARN takes care of the resources, but only for services that run as YARN applications. [1], [2]
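As a hypothetical worked example of the formula (all heap sizes here are assumptions for illustration, not recommendations): a worker node with 10 mappers and 6 reducers at 2 GB child task heap each, plus 1 GB TaskTracker, 1 GB DataNode, 4 GB RegionServer and 1 GB for other services, needs:

```shell
# (10 + 6) task slots * 2 GB child heap
# + 1 GB TT + 1 GB DN + 4 GB RS + 1 GB other + 3 GB OS/caches
echo "$(( (10 + 6) * 2 + 1 + 1 + 4 + 1 + 3 )) GB"   # -> 42 GB
```

If the host has less physical RAM than that, either the heaps or the slot counts have to shrink.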
Disabling swappiness on the fly is done with:
echo 0 > /proc/sys/vm/swappiness
and persistently across reboots via sysctl.conf (applied at the next boot, or immediately with sysctl -p):
echo "vm.swappiness = 0" >> /etc/sysctl.conf
Additionally, newer kernels (and Red Hat's backport in RHEL 6) ship THP (Transparent Huge Pages). THP defragmentation can degrade the performance of I/O-heavy applications on Linux systems by up to 30%. It's highly recommended to disable THP:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
To apply this automatically at boot time, I used /etc/rc.local:
if test -f /sys/kernel/mm/redhat_transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/redhat_transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
fi
Another nice tuning trick is vm.overcommit_memory. This switch enables overcommitting of virtual memory. Sparse virtual memory allocations are initially backed by shared zero pages, as happens when Java allocates the heap for a JVM. In most circumstances these pages contain no data, and the physical memory behind them can be reused (overcommitted) for other pages. With this switch, the OS assumes that enough memory is always available to back the virtual pages.
This feature can be configured at runtime:
sysctl -w vm.overcommit_memory=1
sysctl -w vm.overcommit_ratio=50
and permanently via /etc/sysctl.conf. Note that vm.overcommit_ratio only takes effect with vm.overcommit_memory=2 (strict accounting); with mode 1 the kernel always overcommits.
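The gap between allocated (virtual) and actually used (resident) memory can be inspected for any process via /proc/<pid>/status; a minimal read-only sketch:

```shell
# VmSize is the allocated virtual memory, VmRSS the part actually backed
# by physical RAM; here shown for the grep process reading its own status.
# For a freshly started JVM the gap between the two is typically large.
grep -E 'VmSize|VmRSS' /proc/self/status
```

A large VmSize/VmRSS gap across the cluster is a hint that overcommitting is safe.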
Network / Sockets
On highly loaded, large Linux based clusters the default socket and network configuration can slow down some operations. This section covers some of the possibilities I figured out over the years. Please use them with care, since they affect network communication.
First of all, enable the whole available port range for sockets:
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
Additionally, faster recycling of sockets avoids large TIME_WAIT queues, and reusing sockets for new connections can speed up network communication. Use this with caution: the effect depends highly on the network stack and on the jobs running in your cluster. Performance can improve, but it can also drop dramatically, since connections in the WAIT state are now recycled quickly. This is typically used in clusters with a high ingest rate, such as HBase, Storm or Kafka. Note that net.ipv4.tcp_tw_recycle is known to break connections from clients behind NAT and has been removed from recent kernels.
sysctl -w net.ipv4.tcp_tw_recycle = 1
sysctl -w net.ipv4.tcp_tw_reuse = 1
For the same reason, the network buffer backlogs can overflow. When that happens, new connections are dropped, which leads to performance issues. Raising the buffers to 16 MB per socket is mostly enough, together with increasing the number of outstanding SYN requests and backlog sockets:
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_max_syn_backlog=4096
sysctl -w net.ipv4.tcp_syncookies=1
sysctl -w net.core.somaxconn=1024
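Whether the values are active can be verified without root by reading them back from /proc/sys, for example:

```shell
# The files under /proc/sys mirror the sysctl keys (dots become slashes),
# so the current settings can be read back directly:
cat /proc/sys/net/core/somaxconn
cat /proc/sys/net/ipv4/tcp_max_syn_backlog
```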
=> Remember, these are not generic tuning tricks. On general-purpose clusters, playing around with the network stack is not safe at all.
Disk / Filesystem / File descriptors
By default, Linux tracks the access time of every file, which means a lot of extra disk writes and seeks. But HDFS writes once and reads many times, and the NameNode already tracks access times in its metadata. Hadoop doesn't need atime tracking on the OS level, so it's safe to disable it per data disk via the mount options:
/dev/sdc /data01 ext3 defaults,noatime 0 0
When formatting the data disks, reduce the reserved block percentage to zero, since HDFS data partitions don't need root-reserved space:
mkfs.ext3 -m 0 /dev/sdc
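An already-mounted data disk can be switched to noatime without a reboot; a sketch, assuming the /data01 mount point from the fstab line above (requires root):

```shell
# Remount the data disk in place with the new option:
mount -o remount,noatime /data01
# Verify that noatime is now active for the mount:
grep ' /data01 ' /proc/mounts
```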
An optimal server has one HDFS mount point per disk, and one or two dedicated disks for logs and the operating system.
File handles and processes
Hadoop services open many files and spawn many processes, so raise the limits for the service users in /etc/security/limits.conf:
echo hbase - nofile 32768 >> /etc/security/limits.conf
echo hdfs - nproc 32768 >> /etc/security/limits.conf
echo mapred - nproc 32768 >> /etc/security/limits.conf
echo hbase - nproc 32768 >> /etc/security/limits.conf
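Whether the new limits are active can be checked from a fresh login shell of the service user:

```shell
# Show the effective limits for the current shell session:
ulimit -n   # max open file descriptors
ulimit -u   # max user processes
```

Note that limits.conf is read by PAM at login, so already-running services keep their old limits until restarted.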
DNS / Name resolution
Communication in Hadoop's ecosystem depends highly on correct DNS resolution. Typically, name resolution is configured via /etc/hosts. It is important that the canonical name (the first hostname after the IP address) is the FQDN of the server, as in this example:
1.1.1.1 one.one.org one namenode
1.1.1.2 two.one.org two datanode
If DNS is used, the system's hostname must match the FQDN in forward as well as reverse resolution. To reduce the latency of DNS lookups, use the name service caching daemon (nscd), but don't cache passwd, group or netbios information.
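A quick sanity check that forward and reverse resolution agree; a sketch that uses whatever FQDN the node itself reports:

```shell
# Forward-resolve the node's own FQDN, then reverse-resolve the address.
# The canonical name returned by the reverse lookup should be the FQDN again.
fqdn=$(hostname -f 2>/dev/null || hostname)
ip=$(getent hosts "$fqdn" 2>/dev/null | awk '{print $1}' | head -n1)
echo "forward: $fqdn -> $ip"
[ -n "$ip" ] && getent hosts "$ip" || true
```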
There are also a lot of specific tuning tricks within the Hadoop Ecosystem, which will be discussed in one of the following articles.