CentOS / RHEL CacheFS: Speed Up Network File System (NFS) File Access
by NIXCRAFT on JUNE 4, 2012 · LAST UPDATED JUNE 4, 2012
in CENTOS, REDHAT AND FRIENDS
We are running an NFSv4 server under RHEL 6.x. How do I configure CacheFS for NFS under Red Hat Enterprise Linux or CentOS to speed up file access and reduce the load on our NFS server?
Linux comes with CacheFS, developed by David Howells. The Linux CacheFS is currently designed to operate on the Andrew File System (AFS) and the Network File System (NFS). You need to install a package called cachefilesd, the CacheFiles userspace management daemon. The cachefilesd daemon manages the cache files and directory that network filesystems such as AFS and NFS use for persistent caching to the local disk.
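Before installing the daemon, you can confirm that the running kernel was built with FS-Cache and CacheFiles support (stock RHEL/CentOS 6 kernels include it). One quick check, assuming a kernel config file is present under /boot, is:
# grep -E 'FSCACHE|CACHEFILES' /boot/config-$(uname -r)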
Step #1: Install cachefilesd
Use the yum command to install cachefilesd, the CacheFiles userspace management daemon:
# yum -y install cachefilesd
Sample outputs:
Loaded plugins: product-id, protectbase, rhnplugin, subscription-manager
Updating certificate-based repositories.
0 packages excluded due to repository protections
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package cachefilesd.x86_64 0:0.10.2-1.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
====================================================================================================
 Package             Arch          Version             Repository                            Size
====================================================================================================
Installing:
 cachefilesd         x86_64        0.10.2-1.el6        rhel-x86_64-server-6                  35 k

Transaction Summary
====================================================================================================
Install       1 Package(s)
Total download size: 35 k
Installed size: 0
Downloading Packages:
cachefilesd-0.10.2-1.el6.x86_64.rpm | 35 kB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : cachefilesd-0.10.2-1.el6.x86_64 1/1
Installed products updated.
Installed:
cachefilesd.x86_64 0:0.10.2-1.el6
Complete!
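You can quickly confirm that the package and its configuration file are in place by querying the RPM database (output will vary with the package version):
# rpm -q cachefilesd
# rpm -qc cachefilesd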
Step #2: Configure cachefilesd
You need to edit /etc/cachefilesd.conf, enter:
# vi /etc/cachefilesd.conf
Sample configuration file:
dir /ssd/fscache
tag mycache
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%
# Assuming you're using SELinux with the default security policy included in
# this package
secctx system_u:system_r:cachefiles_kernel_t:s0
Where,
dir /ssd/fscache - Specifies the root directory of the cache; the default is /var/cache/fscache. In this example the NFS client data sits on a RAID-1 array while the cache lives on a single SSD mounted at /ssd/. Caching speeds up reads, but with very large numbers of small files it can have the opposite effect, particularly on slow hard drives, so it is worth placing the cache on a separate disk (an SSD if you can afford one).
tag mycache - Specifies a tag that FS-Cache uses to distinguish multiple caches. This is only required if more than one cache is going to be used. The default is "CacheFiles".
secctx system_u:system_r:cachefiles_kernel_t:s0 - Specifies the LSM security context under which the kernel performs operations to access the cache. The default is to use cachefilesd's security context. If you change the cache directory, you also need to set the new LSM security context for it (see the sketch after this list).
brun 10%, bcull 7%, bstop 3%, frun 10%, fcull 7%, fstop 3% - The cache may need culling occasionally to make space. This involves discarding objects from the cache that have been used less recently than anything else. Culling is based on the access time of data objects; empty directories are culled if not in use. See the cachefilesd.conf man page for more information.
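If you move the cache off the default /var/cache/fscache (for example to /ssd/fscache as above), SELinux must be told about the new location. A minimal sketch, assuming the semanage tool (from the policycoreutils-python package) is installed and that cachefiles_var_t (the type you will see in the ls -Z output later in this article) is the correct label on your policy:
## label the new cache directory like the default one and apply the context ##
# semanage fcontext -a -t cachefiles_var_t "/ssd/fscache(/.*)?"
# restorecon -Rv /ssd/fscache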
Step #3: How do I start / stop / restart cachefilesd?
Simply type the following commands:
## start it ##
/sbin/service cachefilesd start
# or #
/etc/init.d/cachefilesd start
## stop it ##
/sbin/service cachefilesd stop
## restart it ##
/sbin/service cachefilesd restart
# or #
/etc/init.d/cachefilesd restart
## get status ##
/sbin/service cachefilesd status
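To make sure cachefilesd starts again after a reboot, enable it at boot time with chkconfig (standard on RHEL/CentOS 6):
## enable at boot and confirm the runlevels ##
# chkconfig cachefilesd on
# chkconfig --list cachefilesd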
Step #4: How do I mount nfs client with CacheFS support?
You need to pass the fsc option to the mount command as follows:
mount -t nfs -o fsc,other_options nas01:/export/dir1/ /destination/mnt/point
In this example, mount /var/www/html from nas042 on the local /var/www/html directory using the fsc option (and a few other options) as follows:
# mount -t nfs4 -o rsize=32768,wsize=32768,intr,hard,proto=tcp,sync,fsc nas042:/var/www/html /var/www/html
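To make the cached mount survive a reboot, you can add a matching entry to /etc/fstab; the hostname and paths below simply mirror the example above, so adjust them for your own environment:
nas042:/var/www/html   /var/www/html   nfs4   rsize=32768,wsize=32768,intr,hard,proto=tcp,sync,fsc   0 0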
Step #5: How do I verify that nfs client is working with CacheFS?
Change to the /ssd/fscache (or the default /var/cache/fscache) directory and type the following commands:
# cd /ssd/fscache
# ls -Z
Sample outputs:
drwx——. root root system_u:object_r:cachefiles_var_t:s0 cache
drwx——. root root system_u:object_r:cachefiles_var_t:s0 graveyard
When the cache is set up correctly, you will see the two directories shown above. You can type the following commands to list cached files and check the cache size:
# find
# du -sh
Sample outputs:
142M
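As clients read data through the fsc-enabled mount, the cache directory should slowly fill up. One simple way to watch it grow (the path follows the /ssd/fscache example above) is:
# watch -n 5 du -sh /ssd/fscache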
How do I see FS-Cache statistics?
Simply type the following command:
# cat /proc/fs/fscache/stats
Sample outputs:
FS-Cache statistics
Cookies: idx=30 dat=7895 spc=0
Objects: alc=7164 nal=0 avl=7164 ded=4261
ChkAux : non=0 ok=3727 upd=0 obs=3
Pages : mrk=59000 unc=37195
Acquire: n=7925 nul=0 noc=0 ok=7925 nbf=0 oom=0
Lookups: n=7164 neg=3429 pos=3735 crt=3429 tmo=0
Updates: n=0 nul=0 run=0
Relinqs: n=5022 nul=0 wcr=0 rtr=22
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=7307 ok=3324 wt=1540 nod=3243 nbf=740 int=0 oom=0
Retrvls: ops=6567 owt=1672 abt=0
Stores : n=32683 ok=32683 agn=0 nbf=0 oom=0
Stores : ops=5842 run=38525 pgs=32683 rxd=32683 olm=0
VmScan : nos=181 gon=0 bsy=0 can=0
Ops : pend=1695 run=12409 enq=38525 can=0 rej=0
Ops : dfr=21 rel=12409 gc=21
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
The caching state of individual mount points can also be examined through other /proc files:
# cat /proc/fs/nfsfs/servers
Sample outputs:
NV SERVER PORT USE HOSTNAME
v4 0a0a1d44 801 1 nas042
# cat /proc/fs/nfsfs/volumes
Sample outputs:
NV SERVER PORT DEV FSID FSC
v4 0a0a1d44 801 0:20 147f55ffe88547d2 yes
Note the "FSC" column: it says "yes" when the system has been asked to cache a particular NFS mount, and "no" when it has not.
How do I test CacheFS?
Copy a file from the NFS server to the local hard drive:
time cp /path/to/nfs/mnt/point/bigfile.gz /tmp
#### this should speed up as bigfile.gz is in cache now #####
time cp /path/to/nfs/mnt/point/bigfile.gz /dev/null
In this example, copy data.tar.bz2 (78MB) from the NFS mount to /tmp (this populates the cache):
$ time cp data.tar.bz2 /tmp
Sample outputs:
real 0m7.023s
user 0m0.000s
sys 0m0.185s
Copy data.tar.bz2 to /dev/null from NFS again (i.e. get it from cache):
$ time cp data.tar.bz2 /dev/null
Sample outputs:
real 0m0.027s
user 0m0.000s
sys 0m0.026s
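Keep in mind that the second copy may be served from the client's page cache in RAM rather than from FS-Cache on disk. A rough way to exercise the on-disk cache itself is to flush the page cache between runs (do this on a test box only, as it briefly hurts performance of everything else):
## flush dirty pages and drop the page cache, then re-read from FS-Cache ##
# sync
# echo 3 > /proc/sys/vm/drop_caches
$ time cp data.tar.bz2 /dev/null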