How to Install Lustre

Here is an example of how to create the lustre file system from scratch.

Note: ALL the systems MUST already be running the Lustre kernel, with the appropriate modules loaded. Once this is done, a "lustre" file system type is available on all machines, with kernel extensions to support it.
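If you are unsure whether the Lustre modules are present on a node, a quick check is (a sketch; exact module names can vary by Lustre version):

```shell
# List any loaded Lustre / LNET kernel modules
lsmod | grep -E 'lustre|lnet'

# Load the Lustre module manually if nothing shows up
modprobe lustre
```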

The machine "miner" will be the MGS and MDT system, and machines n01 - n04 plus miner will each be an OSS (Object Storage Server) with one OST (Object Storage Target) each.

1) Create the MGS and MDS disk (a file system named 'spfs', with the MGS and MDT on /dev/hde1).

[root@miner nwhite]# PATH=${PATH}:/usr/sbin:/sbin

[root@miner nwhite]# /usr/sbin/mkfs.lustre --fsname=spfs --reformat --mdt --mgs /dev/hde1

   Permanent disk data:

Target:     spfs-MDTffff

Index:      unassigned

Lustre FS:  spfs

Mount type: ldiskfs

Flags:      0x75

              (MDT MGS needs_index first_time update )

Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr

Parameters:

device size = 47692MB

formatting backing filesystem ldiskfs on /dev/hde1

        target name  spfs-MDTffff

        4k blocks     0

        options        -J size=400 -i 4096 -I 512 -q -O dir_index -F

mkfs_cmd = mkfs.ext2 -j -b 4096 -L spfs-MDTffff  -J size=400 -i 4096 -I 512 -q -O dir_index -F /dev/hde1

Writing CONFIGS/mountdata
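Before mounting, you can confirm what was actually written to the target with tunefs.lustre in read-only print mode:

```shell
# Print the Lustre configuration stored on the freshly formatted target
tunefs.lustre --print /dev/hde1
```

This should echo back the same fsname, flags, and mount options shown above.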

2) Mount the MGS and MDS disk (this starts Lustre, even though it has no OSSs or OSTs yet).

[root@miner nwhite]# mount -t lustre /dev/hde1 /lustre-mgs-mds

[root@miner nwhite]# df

Filesystem           1K-blocks      Used Available Use% Mounted on

/dev/hda3            239051908  19472848 209864556   9% /

/dev/hda1               194442    139553     44850  76% /boot

none                   1037592         0   1037592   0% /dev/shm

grid2:/gridwork      1032671136 709655744 312688704  70% /gridwork

diskfarm:/disk2/sge  241263968 102308096 126700288  45% /sge

halo:/vol/data2/is   545259520 480489408  64770112  89% /homedir/is

rnd2:/rnddata        5284467136 4767905376 419924992  92% /mnt/rnddata

leda2:/work/FC3-install

                      42465184  23463232  16844832  59% /ledawork/FC3-install

/dev/hde1             42729112    463188  39824048   2% /lustre-mgs-mds

[root@miner nwhite]# more /etc/fstab

# This file is edited by fstab-sync - see 'man fstab-sync' for details

/dev/hda3                /                       ext3    defaults        1 1

LABEL=/boot             /boot                   ext3    defaults        1 2

none                    /dev/pts                devpts  gid=5,mode=620  0 0

none                    /dev/shm                tmpfs   defaults        0 0

none                    /proc                   proc    defaults        0 0

none                    /sys                    sysfs   defaults        0 0

/dev/hda2               swap                    swap    defaults        0 0

grid2:/gridwork /gridwork nfs

grid2:/griddata /griddata nfs

diskfarm:/disk2/sge /sge nfs

#/dev/hde1        /minerwork               ext3     defaults   1 2

#/minerwork/swapfile       swap            swap     defaults   0 0

/dev/hde1               /lustre-mgs-mds         lustre  defaults,_netdev        0 0

/dev/hde2               /lustre-ost             lustre  defaults,_netdev        0 0

#  apparently, should run client and ost on same machine

#192.168.0.100@tcp0:/spfs       /lustrework     lustre  defaults        0 0

/dev/hdc                /media/cdrom            auto    pamconsole,exec,noauto,managed 0 0

/dev/fd0                /media/floppy           auto    pamconsole,exec,noauto,managed 0 0

3) Check to see if things are actually loaded.

[root@miner nwhite]# cat /proc/fs/lustre/devices

  0 UP mgs MGS MGS 5

  1 UP mgc MGC192.168.0.100@tcp 4b5270c2-1256-7526-b63f-f4da3b82ea22 5

  2 UP mdt MDS MDS_uuid 3

  3 UP lov spfs-mdtlov spfs-mdtlov_UUID 4

  4 UP mds spfs-MDT0000 spfs-MDT0000_UUID 3
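The same device list can be obtained through the lctl utility, which is handier for scripting than reading /proc directly:

```shell
# Equivalent of "cat /proc/fs/lustre/devices"
lctl dl
```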

4) Create an OST on machine miner, on /dev/hde2.

[root@miner nwhite]# /usr/sbin/mkfs.lustre --fsname=spfs --reformat --ost --mgsnode=192.168.0.100@tcp0  /dev/hde2

   Permanent disk data:

Target:     spfs-OSTffff

Index:      unassigned

Lustre FS:  spfs

Mount type: ldiskfs

Flags:      0x72

              (OST needs_index first_time update )

Persistent mount opts: errors=remount-ro,extents,mballoc

Parameters: mgsnode=192.168.0.100@tcp

device size = 238488MB

formatting backing filesystem ldiskfs on /dev/hde2

        target name  spfs-OSTffff

        4k blocks     0

        options        -J size=400 -i 16384 -I 256 -q -O dir_index -F

mkfs_cmd = mkfs.ext2 -j -b 4096 -L spfs-OSTffff  -J size=400 -i 16384 -I 256 -q -O dir_index -F /dev/hde2

Writing CONFIGS/mountdata

[root@miner nwhite]# more /etc/fstab

# This file is edited by fstab-sync - see 'man fstab-sync' for details

/dev/hda3                /                       ext3    defaults        1 1

LABEL=/boot             /boot                   ext3    defaults        1 2

none                    /dev/pts                devpts  gid=5,mode=620  0 0

none                    /dev/shm                tmpfs   defaults        0 0

none                    /proc                   proc    defaults        0 0

none                    /sys                    sysfs   defaults        0 0

/dev/hda2               swap                    swap    defaults        0 0

grid2:/gridwork /gridwork nfs

grid2:/griddata /griddata nfs

diskfarm:/disk2/sge /sge nfs

#/dev/hde1        /minerwork               ext3     defaults   1 2

#/minerwork/swapfile       swap            swap     defaults   0 0

/dev/hde1               /lustre-mgs-mds         lustre  defaults,_netdev        0 0

/dev/hde2               /lustre-ost             lustre  defaults,_netdev        0 0

#  apparently, should run client and ost on same machine

#192.168.0.100@tcp0:/spfs       /lustrework     lustre  defaults        0 0

/dev/hdc                /media/cdrom            auto    pamconsole,exec,noauto,managed 0 0

/dev/fd0                /media/floppy           auto    pamconsole,exec,noauto,managed 0 0

MOUNT THE OST   (this should be in /etc/fstab eventually)

[root@miner nwhite]# mount -t lustre /dev/hde2 /lustre-ost
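Once the OST mounts, it registers itself with the MGS. You can confirm this from miner (a quick sanity check, not required by the procedure):

```shell
# The new OST should now appear in the device list on the MGS/MDS node
cat /proc/fs/lustre/devices

# Show the network identifiers (NIDs) this node is using for Lustre traffic
lctl list_nids
```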

NOW GO TO OTHER SYSTEMS and CREATE OSTs

[root@miner nwhite]# ssh n03

Last login: Sat Dec 22 12:32:52 2007 from miner.stern.nyu.edu

[root@n03 ~]# more /etc/fstab

# This file is edited by fstab-sync - see 'man fstab-sync' for details

LABEL=/1                /                       ext3    defaults        1 1

LABEL=/boot             /boot                   ext3    defaults        1 2

none                    /dev/pts                devpts  gid=5,mode=620  0 0

none                    /dev/shm                tmpfs   defaults        0 0

/dev/hdc                /lustre-ost             lustre  defaults,_netdev        0 0

# 192.168.0.100@tcp:/lustre     /lustrework     lustre  defaults        0 0

none                    /proc                   proc    defaults        0 0

none                    /sys                    sysfs   defaults        0 0

LABEL=/var              /var                    ext3    defaults        1 2

LABEL=SWAP-hda5         swap                    swap    defaults        0 0

/dev/fd0                /media/floppy           auto    pamconsole,exec,noauto,managed 0 0

[root@n03 ~]# /usr/sbin/mkfs.lustre --fsname=spfs --reformat --ost --mgsnode=192.168.0.100@tcp0  /dev/hdc

   Permanent disk data:

Target:     spfs-OSTffff

Index:      unassigned

Lustre FS:  spfs

Mount type: ldiskfs

Flags:      0x72

              (OST needs_index first_time update )

Persistent mount opts: errors=remount-ro,extents,mballoc

Parameters: mgsnode=192.168.0.100@tcp

device size = 305245MB

formatting backing filesystem ldiskfs on /dev/hdc

        target name  spfs-OSTffff

        4k blocks     0

        options        -J size=400 -i 16384 -I 256 -q -O dir_index -F

mkfs_cmd = mkfs.ext2 -j -b 4096 -L spfs-OSTffff  -J size=400 -i 16384 -I 256 -q -O dir_index -F /dev/hdc

Writing CONFIGS/mountdata

[root@n03 ~]# more /etc/fstab

# This file is edited by fstab-sync - see 'man fstab-sync' for details

LABEL=/1                /                       ext3    defaults        1 1

LABEL=/boot             /boot                   ext3    defaults        1 2

none                    /dev/pts                devpts  gid=5,mode=620  0 0

none                    /dev/shm                tmpfs   defaults        0 0

/dev/hdc                /lustre-ost             lustre  defaults,_netdev        0 0

# 192.168.0.100@tcp:/lustre     /lustrework     lustre  defaults        0 0

none                    /proc                   proc    defaults        0 0

none                    /sys                    sysfs   defaults        0 0

LABEL=/var              /var                    ext3    defaults        1 2

LABEL=SWAP-hda5         swap                    swap    defaults        0 0

/dev/fd0                /media/floppy           auto    pamconsole,exec,noauto,managed 0 0

CREATE MOUNT POINT

[root@n03 ~]# mkdir -p /lustre-ost

MOUNT OST   (at this point its space is available to the cluster file system)

[root@n03 ~]# mount -t lustre /dev/hdc /lustre-ost

REPEAT for n04…
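The per-node OST steps above (format, create mount point, mount) can be scripted. A sketch, assuming every node uses /dev/hdc as its OST device as n03 does; adjust the device name per machine:

```shell
#!/bin/sh
# Hypothetical helper: format and mount one OST on each remaining node.
# ASSUMES each node's OST device is /dev/hdc -- verify before running,
# since --reformat destroys whatever is on that disk.
for node in n01 n02 n04; do
    ssh root@${node} "/usr/sbin/mkfs.lustre --fsname=spfs --reformat --ost \
        --mgsnode=192.168.0.100@tcp0 /dev/hdc && \
        mkdir -p /lustre-ost && \
        mount -t lustre /dev/hdc /lustre-ost"
done
```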

The cluster file system should now be ready. Try connecting a client to it.

SSH to EUCLID (a client machine).

Create a mount point for the file system at /lustrework:

mkdir -p /lustrework

Add the following line to /etc/fstab

192.168.0.100@tcp0:/spfs     /lustrework     lustre  defaults        0 0

Then  mount the file system

mount /lustrework

See if it is there:

[root@euclid ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda2              95G  4.4G   86G   5% /

/dev/sda1              99M   70M   25M  74% /boot

none                  8.0G     0  8.0G   0% /dev/shm

/dev/sda5              16G   79M   15G   1% /tmp

/dev/sda6             7.9G  2.0G  5.5G  27% /var

diskfarm:/usr/install

                      224G   44G  169G  21% /usr/install

diskfarm:/disk2/sge   231G   98G  121G  45% /sge

192.168.0.100@tcp:/spfs

                      817G  1.4G  774G   1% /lustrework

[root@euclid ~]#
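The df output above shows the aggregate size; lfs df on the client breaks that capacity down across the individual OSTs:

```shell
# Per-OST free space as seen from a Lustre client
lfs df -h /lustrework
```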

ALL Done on euclid

Repeat on darwin..

Seo Jin-woo

Executive Director (technical director) at Clunix, a supercomputing company; Certified Information Systems Auditor; operator of the Syszone blog.
