[HPC] Building an MM5 Weather Model Simulation Environment

MM5 + MPICH + Intel Compiler v10 Install


Author: 서진우 (alang@syszone.co.kr)
Date: February 26, 2009

This article looks at building an MM5 cluster environment on the latest (as of 2009)
hardware, operating system, and compilers.

CPU : Intel(R) Xeon(R) CPU E5420  @ 2.50GHz
OS : RHEL4(update5) x86_64
This guide assumes that MPICH built with the Intel compiler (v10) is already installed.
mpich+intel path : /engrid/enhpc/mpich/intel
mpich+pgi path : /engrid/enhpc/mpich/pgi
MM5 version 3.7 is used throughout.
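Before building, it is worth confirming that the MPI compiler wrappers resolve to the intended
compilers. A quick check (MPICH wrappers support the -show option, which prints the underlying
compiler command line without running it):

# /engrid/enhpc/mpich/intel/bin/mpif77 -show
# /engrid/enhpc/mpich/intel/bin/mpicc -show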

– Installing MM5-intel

# mkdir /home/MM5-intel
# cd /home/MM5-intel
# wget ftp://ftp.ucar.edu/mesouser/MM5V3/MM5.TAR.gz
# wget ftp://ftp.ucar.edu/mesouser/MM5V3/MPP.TAR.gz
# tar xzvf MM5.TAR.gz
# cd MM5
# tar xzvf ../MPP.TAR.gz
# vi configure.user
——————————————————————————
.
.
#-----------------------------------------------------------------------------
#   7g2. Linux PCs.  Need INTEL and MPICH.
#-----------------------------------------------------------------------------
RUNTIME_SYSTEM = "linux"
MPP_TARGET=$(RUNTIME_SYSTEM)
### edit the following definition for your system
LINUX_MPIHOME = /engrid/enhpc/mpich/intel
MFC = $(LINUX_MPIHOME)/bin/mpif77
MCC = $(LINUX_MPIHOME)/bin/mpicc
MLD = $(LINUX_MPIHOME)/bin/mpif77
FCFLAGS = -O3 -mtune=core2 -march=core2 -xT -aT -convert big_endian -DDEC_ALPHA
LDOPTIONS = -O3 -mtune=core2 -march=core2 -xT -aT -convert big_endian -DDEC_ALPHA
LOCAL_LIBRARIES = -L$(LINUX_MPIHOME)/lib -lfmpich -lmpich
MAKE = make -i -r
AWK = awk
SED = sed
CAT = cat
CUT = cut
EXPAND = /usr/bin/expand
M4 = m4
CPP = /lib/cpp -C -P -traditional
CPPFLAGS = -traditional -DMPI -Dlinux -DDEC_ALPHA
CFLAGS = -DMPI -I/engrid/enhpc/mpich/intel/include -DDEC_ALPHA
ARCH_OBJS =  milliclock.o
IWORDSIZE = 4
RWORDSIZE = 4
LWORDSIZE = 4
.
.
#-----------------------------------------------------------------------------
# 4. General commands
#-----------------------------------------------------------------------------
.
CC = icc
.
——————————————————————————
# vi MPP/RSL/Makefile.RSL
——————————————————————————
.
.
INCLUDES = -I$(MPPTOP) -I$(MPPTOP)/$(MPP_LAYER)  \
           -I$(DEVTOP)/pick -I$(MPPTOP)/debug -I$(RSLLOC) \
           -I/engrid/enhpc/mpich/intel/include
.
——————————————————————————
# vi MPP/RSL/RSL/makefile.linux
——————————————————————————
.
CC = $(LINUX_MPIHOME)/bin/mpicc
FC = $(LINUX_MPIHOME)/bin/mpif77
.
CFLAGS = -I$(IDIR) -DMPI -DRSL_SYNCIO -Dlinux -DSWAPBYTES -O3 -mtune=core2 -march=core2 -xT -aT -I/engrid/enhpc/mpich/intel/include -DDEC_ALPHA
FFLAGS = -O3 -mtune=core2 -march=core2 -xT -aT -convert big_endian -DDEC_ALPHA
.
——————————————————————————-
# vi MPP/RSL/RSL/rsl_mpi_compat.c
—————————————————-
Search the file for mpi_init; one occurrence is spelled mpi_init___.
Change it to mpi_init_ (three trailing underscores down to one).
—————————————————-
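The same edit can be scripted; a one-liner sketch (check afterward that only the intended symbol
changed):

# sed -i 's/mpi_init___/mpi_init_/g' MPP/RSL/RSL/rsl_mpi_compat.c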
# vi MPP/RSL/RSL/makefile
——————————————————————————
.
linux :
        $(MAKE) -f makefile.linux LINUX_MPIHOME=/engrid/enhpc/mpich/intel $(MAKE_OPTS) all
.
——————————————————————————
# cd MPP/RSL/RSL
# make linux
# cd /home/MM5-intel/MM5
# make mpp
# ls /home/MM5-intel/MM5/Run/mm5.mpp <- confirm the executable was created
# make mm5.deck
——————————————————————————
Making mm5.deck for MPP
Using template file ./Templates/mm5.deck.mpp
Including file ./Templates/oparam
Including file ./Templates/lparam
Including file ./Templates/nparam
Including file ./Templates/pparam
Including file ./Templates/fparam
——————————————————————————
# ./mm5.deck
# ls /home/MM5-intel/MM5/Run/mmlif <- confirm the file was created
# cd Run
# wget ftp://ftp.ucar.edu/mesouser/MM5V3/TESTDATA/input2mm5.tar.gz
# tar xzvf input2mm5.tar.gz

;;; Everything compiles cleanly and the run starts, but partway through the computation it
stalls indefinitely with no error message. The same problem occurs when the environment is
built with Intel 9.1. To fix it, apply the following patch.
MM5/MPP/mhz.c -> line 164
Change
void use_p(int **p) {}
to
volatile int use_p_counter = 0;
void use_p(int **p) { use_p_counter += (p != NULL); }
The volatile counter gives use_p() an observable side effect, so the optimizer can no longer
eliminate the timing loop in mhz.c (this is Patch 3 in the patch list at the end of this article).
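To see the underlying issue in isolation, here is a minimal standalone C sketch (not MM5 code):
at -O2/-O3 a loop whose body is a call to an empty function can be deleted entirely, which
breaks any timing measurement that depends on the loop really executing; the volatile counter
restores an observable side effect.

/* sketch: why mhz.c needs the volatile-counter patch */
#include <stdio.h>

volatile int use_p_counter = 0;

/* With an empty body, the compiler may prove the loop below has no
 * effect and remove it; the counter update forces it to stay. */
void use_p(int **p) { use_p_counter += (p != NULL); }

int main(void) {
    int x = 0, *px = &x;
    for (long i = 0; i < 100000000L; i++)
        use_p(&px);                /* survives optimization */
    printf("counter = %d\n", use_p_counter);
    return 0;
}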

– Installing MM5-pgi
# mkdir /home/MM5-pgi
# cd /home/MM5-pgi
# wget ftp://ftp.ucar.edu/mesouser/MM5V3/MM5.TAR.gz
# wget ftp://ftp.ucar.edu/mesouser/MM5V3/MPP.TAR.gz
# tar xzvf MM5.TAR.gz
# cd MM5
# tar xzvf ../MPP.TAR.gz
# vi configure.user
——————————————————————————
.
.
RUNTIME_SYSTEM = "linux"
MPP_TARGET=$(RUNTIME_SYSTEM)
# edit the following definition for your system
LINUX_MPIHOME = /engrid/enhpc/mpich/pgi
MFC = $(LINUX_MPIHOME)/bin/mpif90
MCC = $(LINUX_MPIHOME)/bin/mpicc
MLD = $(LINUX_MPIHOME)/bin/mpif90
FCFLAGS = -O2 -Mcray=pointer -tp p7-64 -pc 64 -Mnoframe -byteswapio -DDEC_ALPHA
LDOPTIONS = -O2 -Mcray=pointer -tp p7-64 -pc 64 -Mnoframe -byteswapio -DDEC_ALPHA
LOCAL_LIBRARIES = -L$(LINUX_MPIHOME)/lib -lfmpich -lmpich
MAKE = make -i -r
AWK = awk
SED = sed
CAT = cat
CUT = cut
EXPAND = expand
M4 = m4
CPP = /lib/cpp -C -P -traditional
CPPFLAGS = -DMPI -Dlinux -DSYSTEM_CALL_OK -DDEC_ALPHA
CFLAGS = -DMPI -I$(LINUX_MPIHOME)/include -DDEC_ALPHA
ARCH_OBJS =  milliclock.o
IWORDSIZE = 4
RWORDSIZE = 4
LWORDSIZE = 4
.
.
#-----------------------------------------------------------------------------
# 4. General commands
#-----------------------------------------------------------------------------
.
CC = pgcc
.
——————————————————————————
# vi MPP/RSL/Makefile.RSL
——————————————————————————
.
.
INCLUDES = -I$(MPPTOP) -I$(MPPTOP)/$(MPP_LAYER)  \
           -I$(DEVTOP)/pick -I$(MPPTOP)/debug -I$(RSLLOC) \
           -I/engrid/enhpc/mpich/pgi/include
.
——————————————————————————
# vi MPP/RSL/RSL/makefile.linux
——————————————————————————
.
CC = $(LINUX_MPIHOME)/bin/mpicc
FC = $(LINUX_MPIHOME)/bin/mpif77
# (if this line reads FC = $(LINUX_MPIHOME)/bin/mpif77 -byteswapio, remove the -byteswapio)
.
CFLAGS = -I$(IDIR) -DMPI -DRSL_SYNCIO -Dlinux -DSWAPBYTES -O2 -I/engrid/enhpc/mpich/pgi/include -DDEC_ALPHA
FFLAGS = -O -DDEC_ALPHA -byteswapio  ## caution: keep this at -O (do not use -O2)
.
——————————————————————————-
# vi MPP/RSL/RSL/rsl_mpi_compat.c
—————————————————-
Search the file for mpi_init; one occurrence is spelled mpi_init___.
Change it to mpi_init_ (three trailing underscores down to one).
—————————————————-
# vi MPP/RSL/RSL/makefile
——————————————————————————
.
linux :
        $(MAKE) -f makefile.linux LINUX_MPIHOME=/engrid/enhpc/mpich/pgi $(MAKE_OPTS) all
.
——————————————————————————
# cd MPP/RSL/RSL
# make linux
# cd /home/MM5-pgi/MM5
# make mpp
# ls /home/MM5-pgi/MM5/Run/mm5.mpp <- confirm the executable was created
# make mm5.deck
——————————————————————————
Making mm5.deck for MPP
Using template file ./Templates/mm5.deck.mpp
Including file ./Templates/oparam
Including file ./Templates/lparam
Including file ./Templates/nparam
Including file ./Templates/pparam
Including file ./Templates/fparam
——————————————————————————
# ./mm5.deck
# ls /home/MM5-pgi/MM5/Run/mmlif <- confirm the file was created
# cd Run
# wget ftp://ftp.ucar.edu/mesouser/MM5V3/TESTDATA/input2mm5.tar.gz
# tar xzvf input2mm5.tar.gz

– Running MM5
# cd Run
# mpirun -np <num_procs> ./mm5.mpp
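For a multi-node run, pass a machinefile listing the participating hosts. A minimal sketch
(the gc001-gc004 hostnames match the benchmark cluster shown later in this article; rank
placement depends on how many times each host appears in the file):

# cat ./hosts
gc001
gc002
gc003
gc004
# mpirun -np 4 -machinefile ./hosts ./mm5.mpp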
– Running a new model
Running the largedominrun.tar.gz model
# cd Run
# tar xzvf largedominrun.tar.gz
Adjust the parame.incl settings referenced in configure.user to match the new model (domain
dimensions, number of nests, and so on); see the example below.
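The domain-size parameters live in the parame.incl section of configure.user; the entries look
like the following (the names are the standard MM5 parameters, but the values shown here are
placeholders only; take the real numbers from the new model's documentation):

MAXNES = 2       # maximum number of nest levels
MIX = 136        # maximum i (south-north) dimension over all nests
MJX = 181        # maximum j (west-east) dimension over all nests
MKX = 23         # number of half sigma levels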
# cd ..
# make uninstall
# make mpp
Regenerate the mmlif file using the mm5.deck supplied with the new model, then run mm5.mpp.
Running the soc_benchmark_config.tar.gz model

# cd Run
# tar xzvf input2mm5.tar.gz
# tar xzvf soc_benchmark_config.tar.gz
configure.user -> copy it to the parent directory and edit it there
mmlif   -> leave it in the Run directory as-is
# cp configure.user ..
# vi ../configure.user
———————————————————————-
.
RUNTIME_SYSTEM = "linux"
MPP_TARGET=$(RUNTIME_SYSTEM)
# edit the following definition for your system
LINUX_MPIHOME = /engrid/enhpc/mpich/pgi
MFC = $(LINUX_MPIHOME)/bin/mpif90
MCC = $(LINUX_MPIHOME)/bin/mpicc
MLD = $(LINUX_MPIHOME)/bin/mpif90
FCFLAGS = -O2 -Mcray=pointer -tp p7-64 -pc 64 -Mnoframe -byteswapio -DDEC_ALPHA
LDOPTIONS = -O2 -Mcray=pointer -tp p7-64 -pc 64 -Mnoframe -byteswapio -DDEC_ALPHA
LOCAL_LIBRARIES = -L$(LINUX_MPIHOME)/lib -lfmpich -lmpich
MAKE = make -i -r
AWK = awk
SED = sed
CAT = cat
CUT = cut
EXPAND = expand
M4 = m4
CPP = /lib/cpp -C -P -traditional
CPPFLAGS = -DMPI -Dlinux -DSYSTEM_CALL_OK -DDEC_ALPHA
CFLAGS = -DMPI -I$(LINUX_MPIHOME)/include -DDEC_ALPHA
ARCH_OBJS =  milliclock.o
IWORDSIZE = 4
RWORDSIZE = 4
LWORDSIZE = 4
.
——————————————————————————
# make uninstall
# make mpp
# cd Run
# mpirun -np <num_procs> ./mm5.mpp

– MM5 performance measurements (input2mm5.tar.gz)
** MPI + PGI
[root@gc001 Run]# time ./mm5.mpp
gc001 — rsl_nproc_all 1, rsl_myproc 0
real    1m47.779s
user    1m47.283s
sys     0m0.477s
[root@gc001 Run]# time mpirun -np 2 -machinefile ./hosts ./mm5.mpp
gc001 — rsl_nproc_all 2, rsl_myproc 0
gc001 — rsl_nproc_all 2, rsl_myproc 1
real    1m3.725s
user    0m59.803s
sys     0m1.250s

[root@gc001 Run]# time mpirun -np 2 ./mm5.mpp
gc001 — rsl_nproc_all 2, rsl_myproc 0
gc002 — rsl_nproc_all 2, rsl_myproc 1
real    1m7.542s
user    0m58.706s
sys     0m1.515s

[root@gc001 Run]# time mpirun -np 4 -machinefile ./hosts ./mm5.mpp
gc001 — rsl_nproc_all 4, rsl_myproc 0
gc001 — rsl_nproc_all 4, rsl_myproc 2
gc001 — rsl_nproc_all 4, rsl_myproc 3
gc001 — rsl_nproc_all 4, rsl_myproc 1
real    0m47.076s
user    0m42.902s
sys     0m1.857s

[root@gc001 Run]#  time mpirun -np 4 ./mm5.mpp
gc001 — rsl_nproc_all 4, rsl_myproc 0
gc004 — rsl_nproc_all 4, rsl_myproc 3
gc003 — rsl_nproc_all 4, rsl_myproc 2
gc002 — rsl_nproc_all 4, rsl_myproc 1
real    0m49.771s
user    0m39.540s
sys     0m1.685s
[root@gc001 Run]# time mpirun -np 8 -machinefile ./hosts ./mm5.mpp
gc001 — rsl_nproc_all 8, rsl_myproc 0
gc001 — rsl_nproc_all 8, rsl_myproc 2
gc001 — rsl_nproc_all 8, rsl_myproc 3
gc001 — rsl_nproc_all 8, rsl_myproc 5
gc001 — rsl_nproc_all 8, rsl_myproc 1
gc001 — rsl_nproc_all 8, rsl_myproc 4
gc001 — rsl_nproc_all 8, rsl_myproc 7
gc001 — rsl_nproc_all 8, rsl_myproc 6
real    0m44.661s
user    0m37.833s
sys     0m2.368s
[root@gc001 Run]# time mpirun -np 8 -machinefile ./hosts ./mm5.mpp
gc001 — rsl_nproc_all 8, rsl_myproc 0
gc001 — rsl_nproc_all 8, rsl_myproc 3
gc001 — rsl_nproc_all 8, rsl_myproc 1
gc001 — rsl_nproc_all 8, rsl_myproc 2
gc002 — rsl_nproc_all 8, rsl_myproc 5
gc002 — rsl_nproc_all 8, rsl_myproc 4
gc002 — rsl_nproc_all 8, rsl_myproc 7
gc002 — rsl_nproc_all 8, rsl_myproc 6
real    0m40.837s
user    0m27.131s
sys     0m2.307s
[root@gc001 Run]# time mpirun -np 8 -machinefile ./hosts ./mm5.mpp
gc001 — rsl_nproc_all 8, rsl_myproc 0
gc001 — rsl_nproc_all 8, rsl_myproc 1
gc002 — rsl_nproc_all 8, rsl_myproc 2
gc002 — rsl_nproc_all 8, rsl_myproc 3
gc003 — rsl_nproc_all 8, rsl_myproc 5
gc003 — rsl_nproc_all 8, rsl_myproc 4
gc004 — rsl_nproc_all 8, rsl_myproc 7
gc004 — rsl_nproc_all 8, rsl_myproc 6
real    0m40.832s
user    0m25.597s
sys     0m1.874s
[root@gc001 Run]# time mpirun -np 16 -machinefile ./hosts ./mm5.mpp
gc001 — rsl_nproc_all 16, rsl_myproc 1
gc001 — rsl_nproc_all 16, rsl_myproc 0
gc001 — rsl_nproc_all 16, rsl_myproc 2
gc003 — rsl_nproc_all 16, rsl_myproc 10
gc002 — rsl_nproc_all 16, rsl_myproc 4
gc002 — rsl_nproc_all 16, rsl_myproc 5
gc004 — rsl_nproc_all 16, rsl_myproc 12
gc003 — rsl_nproc_all 16, rsl_myproc 11
gc003 — rsl_nproc_all 16, rsl_myproc 8
gc001 — rsl_nproc_all 16, rsl_myproc 3
gc002 — rsl_nproc_all 16, rsl_myproc 7
gc002 — rsl_nproc_all 16, rsl_myproc 6
gc004 — rsl_nproc_all 16, rsl_myproc 13
gc003 — rsl_nproc_all 16, rsl_myproc 9
gc004 — rsl_nproc_all 16, rsl_myproc 15
gc004 — rsl_nproc_all 16, rsl_myproc 14
real    0m52.140s
user    0m21.429s
sys     0m2.433s

** MPI + INTEL
[root@gc001 Run]# time ./mm5.mpp
gc001 — rsl_nproc_all 1, rsl_myproc 0
real    1m29.570s
user    1m29.201s
sys     0m0.368s
[root@gc001 Run]# time /engrid/enhpc/mpich/intel/bin/mpirun -np 2 -machinefile ./hosts  ./mm5.mpp
gc001 — rsl_nproc_all 2, rsl_myproc 0
gc001 — rsl_nproc_all 2, rsl_myproc 1
real    0m55.134s
user    0m50.626s
sys     0m1.172s
[root@gc001 Run]# time /engrid/enhpc/mpich/intel/bin/mpirun -np 2 -machinefile ./hosts  ./mm5.mpp
gc001 — rsl_nproc_all 2, rsl_myproc 0
gc002 — rsl_nproc_all 2, rsl_myproc 1
real    0m58.157s
user    0m48.534s
sys     0m1.418s
[root@gc001 Run]# time /engrid/enhpc/mpich/intel/bin/mpirun -np 4 -machinefile ./hosts  ./mm5.mpp
gc001 — rsl_nproc_all 4, rsl_myproc 0
gc001 — rsl_nproc_all 4, rsl_myproc 1
gc001 — rsl_nproc_all 4, rsl_myproc 2
gc001 — rsl_nproc_all 4, rsl_myproc 3
real    0m43.468s
user    0m39.281s
sys     0m1.638s

[root@gc001 Run]# time /engrid/enhpc/mpich/intel/bin/mpirun -np 4 -machinefile ./hosts  ./mm5.mpp
gc001 — rsl_nproc_all 4, rsl_myproc 0
gc003 — rsl_nproc_all 4, rsl_myproc 2
gc002 — rsl_nproc_all 4, rsl_myproc 1
gc004 — rsl_nproc_all 4, rsl_myproc 3
real    0m44.959s
user    0m35.006s
sys     0m1.589s
[root@gc001 Run]# time /engrid/enhpc/mpich/intel/bin/mpirun -np 8 -machinefile ./hosts  ./mm5.mpp
gc001 — rsl_nproc_all 8, rsl_myproc 2
gc001 — rsl_nproc_all 8, rsl_myproc 0
gc001 — rsl_nproc_all 8, rsl_myproc 6
gc001 — rsl_nproc_all 8, rsl_myproc 1
gc001 — rsl_nproc_all 8, rsl_myproc 4
gc001 — rsl_nproc_all 8, rsl_myproc 3
gc001 — rsl_nproc_all 8, rsl_myproc 7
gc001 — rsl_nproc_all 8, rsl_myproc 5
real    0m42.442s
user    0m35.998s
sys     0m1.853s
[root@gc001 Run]# time /engrid/enhpc/mpich/intel/bin/mpirun -np 8 -machinefile ./hosts  ./mm5.mpp
gc001 — rsl_nproc_all 8, rsl_myproc 1
gc001 — rsl_nproc_all 8, rsl_myproc 0
gc001 — rsl_nproc_all 8, rsl_myproc 3
gc002 — rsl_nproc_all 8, rsl_myproc 4
gc001 — rsl_nproc_all 8, rsl_myproc 2
gc002 — rsl_nproc_all 8, rsl_myproc 5
gc002 — rsl_nproc_all 8, rsl_myproc 6
gc002 — rsl_nproc_all 8, rsl_myproc 7
real    0m37.260s
user    0m24.784s
sys     0m2.062s
[root@gc001 Run]# time /engrid/enhpc/mpich/intel/bin/mpirun -np 8 -machinefile ./hosts  ./mm5.mpp
gc001 — rsl_nproc_all 8, rsl_myproc 0
gc001 — rsl_nproc_all 8, rsl_myproc 1
gc003 — rsl_nproc_all 8, rsl_myproc 4
gc002 — rsl_nproc_all 8, rsl_myproc 3
gc002 — rsl_nproc_all 8, rsl_myproc 2
gc004 — rsl_nproc_all 8, rsl_myproc 6
gc003 — rsl_nproc_all 8, rsl_myproc 5
gc004 — rsl_nproc_all 8, rsl_myproc 7
real    0m37.785s
user    0m23.041s
sys     0m1.808s

[root@gc001 Run]# time /engrid/enhpc/mpich/intel/bin/mpirun -np 16 -machinefile ./hosts  ./mm5.mpp
gc001 — rsl_nproc_all 16, rsl_myproc 0
gc001 — rsl_nproc_all 16, rsl_myproc 2
gc001 — rsl_nproc_all 16, rsl_myproc 1
gc001 — rsl_nproc_all 16, rsl_myproc 3
gc003 — rsl_nproc_all 16, rsl_myproc 8
gc003 — rsl_nproc_all 16, rsl_myproc 10
gc002 — rsl_nproc_all 16, rsl_myproc 4
gc002 — rsl_nproc_all 16, rsl_myproc 5
gc004 — rsl_nproc_all 16, rsl_myproc 12
gc003 — rsl_nproc_all 16, rsl_myproc 9
gc002 — rsl_nproc_all 16, rsl_myproc 7
gc003 — rsl_nproc_all 16, rsl_myproc 11
gc002 — rsl_nproc_all 16, rsl_myproc 6
gc004 — rsl_nproc_all 16, rsl_myproc 14
gc004 — rsl_nproc_all 16, rsl_myproc 13
gc004 — rsl_nproc_all 16, rsl_myproc 15
real    0m47.604s
user    0m20.410s
sys     0m2.222s
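
Summarizing the real (wall-clock) times above:

procs  layout    PGI       Intel
1      1 node    1m47.8s   1m29.6s
2      1 node    1m03.7s   0m55.1s
2      2 nodes   1m07.5s   0m58.2s
4      1 node    0m47.1s   0m43.5s
4      4 nodes   0m49.8s   0m45.0s
8      1 node    0m44.7s   0m42.4s
8      2 nodes   0m40.8s   0m37.3s
8      4 nodes   0m40.8s   0m37.8s
16     4 nodes   0m52.1s   0m47.6s

The Intel build is consistently faster, and for this small test case scaling stops paying off
beyond 8 processes: the 16-process runs are slower than the 8-process ones, presumably because
communication overhead dominates on such a small domain.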

– MM5 Patch
Patches to MM5 (MPI version)
In the past we have seen some problems with MM5, addressed by the first two patches, though
these problems have not appeared recently. If you run into segmentation faults, or problems
when building with the PADIT option, apply Patch 1 and Patch 2.
Patch 3 is for a timing loop that causes a problem when the C code is compiled at high
optimization levels. This is worked around in the configure.user file by lowering the
optimization level for the C source files. (MM5 does not use C for performance-critical code,
so this should not cause any limitations.)
Serial MM5 users can use the configure.user.serial file.
MM5 version 3.6.3 requires Patch 4; the problem has been fixed in later versions of MM5.
Patch 1
In MPP/RSL/RSL/rsl_mm_io.c there is a problem that may cause
a segmentation fault. When ioffset or joffset has a
positive value, some of the array references in RSL_MM_BDY_IN and RSL_MM_DIST_BDY can exceed the bounds of the array. There
is code to catch negative offsets walking off the bottom of an array
dimension, but nothing for positive offsets at the other end.
To fix this, make these changes to rsl_mm_io.c:
353c353
< if ( i+ioffset >= 0 )
> if ( i+ioffset >= 0 && i+ioffset  < ix_l )
368c368
< if ( j+joffset >= 0 )
> if ( j+joffset >= 0 && j+joffset < jx_l )
529c529
< if ( i+ioffset >= 0 )
> if ( i+ioffset >= 0 && i+ioffset < ix_l )
545c545
< if ( i+ioffset >= 0 )
> if ( i+ioffset >= 0 && i+ioffset < ix_l )
561c561
< if ( j+joffset >= 0 )
> if ( j+joffset >= 0 && j+joffset < jx_l )
577c577
< if ( j+joffset >= 0 )
> if ( j+joffset >= 0 && j+joffset < jx_l )
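The pattern of the fix, in isolation (a standalone sketch, not MM5 code): an index shifted by an
offset must be checked against both ends of the dimension, not just the lower bound.

/* sketch: guarding a shifted index at both ends of the array */
#include <stdio.h>

#define IX_L 8

int main(void) {
    int a[IX_L] = {0};
    int ioffset = 3;                 /* a positive offset, as in the MM5 bug */
    for (int i = 0; i < IX_L; i++) {
        /* unpatched check:  if (i + ioffset >= 0)  -- reads past a[IX_L-1]
         * patched check: */
        if (i + ioffset >= 0 && i + ioffset < IX_L)
            a[i + ioffset] = i;      /* stays in bounds for any offset sign */
    }
    for (int i = 0; i < IX_L; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}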
Patch 2
When using the PADIT code in MPP/RSL/RSL/rsl_malloc.c, there is a
problem in MPP/RSL/RSL/process_refs.c where calls
to RSL_MALLOC are paired up with calls to the C standard
library "free" instead of RSL_FREE. This is usually harmless, but it
causes obvious problems when PADIT is defined.
To fix this problem, make the following change in process_refs.c:
7c7                    
< static int destroy_packrec( p ) packrec_t * p ;
< {            
< free( p ) ;  
< return(0) ;  
< }            
> static int destroy_packrec( p ) packrec_t * p ;  
> {
> RSL_FREE( p ) ;  
> return(0) ;    
> }    
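Why the pairing matters, in a standalone sketch (the padded_malloc/padded_free wrapper names
are hypothetical; the real macros live in rsl_malloc.c): if an allocator pads the block and
returns an interior pointer, handing that pointer to the C library's free() corrupts the heap;
only the matching wrapper knows how to undo the padding.

/* sketch: a padding allocator must be paired with its own free */
#include <stdlib.h>

#define PAD 16   /* bytes of guard padding before the user block */

void *padded_malloc(size_t n) {
    char *raw = malloc(n + PAD);
    return raw ? raw + PAD : NULL;   /* caller sees an interior pointer */
}

void padded_free(void *p) {
    if (p) free((char *)p - PAD);    /* undo the offset before freeing */
}

int main(void) {
    void *p = padded_malloc(64);
    /* free(p) here would pass an interior pointer to the heap: undefined
     * behavior, exactly the RSL_MALLOC/free mismatch Patch 2 fixes. */
    padded_free(p);
    return 0;
}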
Patch 3
Lastly, the code in MPP/mhz.c that estimates the CPU clock rate using the McVoy algorithm
from lmbench requires the following patch:
164c164,165
< void use_p(int **p) {}
> volatile int use_p_counter = 0;
> void use_p(int **p) { use_p_counter += (p != NULL); }
Note: at high optimization levels the compiler (correctly) optimizes away the loop that calls
the empty use_p(), so without this patch the timing code is eliminated. This can also be worked
around in the configure.user file by lowering the optimization level for the C source files;
MM5 does not use C for performance-critical code, so this should not cause any limitations.
Patch 4
If you are using the MM5 3.6.3 source code, you will need to fix an errno problem in
./MPP/RSL/RSL/rsl_malloc.c and ./MPP/RSL/RSL/rsl_nx_compat.c before you build. Declaring errno
by hand is not portable: on modern C libraries errno is a macro (often a thread-local lvalue),
so the declaration must come from <errno.h>. Make this change in both files:
< extern int errno;
> #include <errno.h>
All these changes have been submitted to the MM5-MPP maintainers, so
hopefully these fixes will be integrated into a future MM5 release.

