How to Install WIEN2k

WIEN2k is a commercial (paid-license) software package for calculating the electronic structure of solids using density functional theory (DFT). It is based on the full-potential (linearized) augmented plane-wave ((L)APW) + local orbitals (lo) method, one of the most accurate schemes for band structure calculations. Either the local (spin) density approximation (LDA) or the generalized gradient approximation (GGA) can be used for the density functional. WIEN2k is an all-electron scheme that includes relativistic effects.

The WIEN2k 17.1 package supports serial, OpenMP-parallel, and MPI-parallel execution. It can be installed without root privileges, so users can install it in their own directories. This article covers only a build with the Intel toolchain (compiler, MKL, MPI) and FFTW3.

Setting up the Intel build environment

Check the installed compiler:
which ifort
/opt/intel/compilers_and_libraries_2017.4.196/linux/bin/intel64/ifort

If the environment is not yet set up, source it:
. /opt/intel/compilers_and_libraries_2017.4.196/linux/bin/compilervars.sh intel64

Check the Intel MKL environment:
echo $MKLROOT
/opt/intel/compilers_and_libraries_2017.4.196/linux/mkl
If the environment is not yet set up, source it:
. /opt/intel/compilers_and_libraries_2017.4.196/linux/mkl/bin/mklvars.sh intel64
Check the Intel MPI environment:
which mpiifort
/opt/intel/compilers_and_libraries_2017.4.196/linux/mpi/intel64/bin/mpiifort

If the environment is not yet set up, source it:
. /opt/intel/impi/2017.3.196/bin64/mpivars.sh intel64
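
To make these settings persistent across logins, the three source commands can be appended to ~/.bashrc (a sketch; the version-specific paths below are the ones used in this article and will differ on other systems):

# Intel compiler, MKL, and MPI environment (example paths)
. /opt/intel/compilers_and_libraries_2017.4.196/linux/bin/compilervars.sh intel64
. /opt/intel/compilers_and_libraries_2017.4.196/linux/mkl/bin/mklvars.sh intel64
. /opt/intel/impi/2017.3.196/bin64/mpivars.sh intel64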

Installing FFTW3 with MPI support

WIEN2k supports MPI-parallel FFTs via FFTW. If you are not sure whether your Intel MKL installation provides this interface, build FFTW3 from source with MPI support as follows.

wget https://fossies.org/linux/misc/fftw-3.3.8.tar.gz

tar xzvf fftw-3.3.8.tar.gz

cd fftw-3.3.8

export CXX=mpiicpc
export CC=mpiicc
export MPICC=mpiicc
export F77=mpiifort
export FC=mpiifort
export F90=mpiifort
export CFLAGS="-O3 -ip -ftz -xCORE-AVX512 -fPIC -shared-intel"
export FFLAGS="-O3 -ip -ftz -xCORE-AVX512 -fPIC -shared-intel"
export FCFLAGS="-O3 -ip -ftz -xCORE-AVX512 -fPIC -shared-intel"
export CXXFLAGS="-O3 -ip -ftz -xCORE-AVX512 -fPIC -shared-intel"
export LDFLAGS=-L/APP/enhpc/compiler/intel/v18/compilers_and_libraries_2018.5.274/linux/mpi/intel64/lib
export CPPFLAGS=-I/APP/enhpc/compiler/intel/v18/compilers_and_libraries_2018.5.274/linux/mpi/intel64/include
./configure --prefix=/APP/enhpc/fftw-3.3.8-intel --enable-shared=yes --enable-threads --enable-mpi --enable-openmp

make && make install

If the build succeeds, files such as libfftw3_mpi.a and libfftw3.a are created under /APP/enhpc/fftw-3.3.8-intel/lib.
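
To confirm the installation and make the shared libraries visible at run time (assuming the install prefix used above):

ls /APP/enhpc/fftw-3.3.8-intel/lib
export LD_LIBRARY_PATH=/APP/enhpc/fftw-3.3.8-intel/lib:$LD_LIBRARY_PATH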

Unpacking WIEN2k17.1.tar.gz

mkdir WIEN2k17.1; cd WIEN2k17.1
tar xvf WIEN2k17.1.tar.gz

gunzip *.gz

chmod +x expand_lapw
./expand_lapw
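
expand_lapw unpacks the individual SRC_*.tar.gz archives into SRC_* subdirectories. A quick check that the expansion succeeded:

ls -d SRC_*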

export MKL_TARGET_ARCH=intel64
export INTEL_TARGET_ARCH=intel64
./siteconfig_lapw

*********************************************************
* W I E N *
* site configuration *
*********************************************************

Wien Version: WIEN2k_17.1 (Release 30/6/2017)
System: linuxifc

S Specify a System
C Specify Compiler
O Compiling Options (Compiler/Linker, Libraries)
P Configure Parallel Execution
D Dimension Parameters
R Compile/Recompile
U Update a package
L Perl Path (if not in /usr/bin/perl)
T Temp Path

Q Quit

Selection:
Set each option according to your situation (the menu is case-sensitive). To specify the system, press S.
**************************************************************************
* Specify a system *
**************************************************************************

Current system is: unknown

LI Linux (Intel ifort compiler (12.0 or later)+mkl+intelmpi))
LS Linux+SLURM-batchsystem (Intel ifort (12.0 or later)+mkl+intelmpi)
LG Linux (gfortran compiler + OpenBlas)

M show more, not updated older options (not recommended)
Q Quit

Select LI (Linux with the Intel ifort compiler + MKL + Intel MPI). You are then prompted for the compilers:

Recommended setting for f90 compiler: ifort
Current selection: ifort

Your compiler:
Recommended setting for C compiler: cc
Current selection: icc

Your compiler:
Accept the suggested compilers or type them in (icc selects the Intel C compiler); the choices are saved to the compiler configuration file. Next, press O to set BLAS, LAPACK, and the other compile options; siteconfig detects your MKL environment and suggests the settings below.

***********************************************************************
* Specify compiler and linker options *
***********************************************************************

Since intel changes the name of the mkl-libraries from version to version,
you may find the linking options for the most recent ifort version at
http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/

Recommended options for system linuxifc are:
Compiler options: -O1 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include
Linker Flags: $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
Preprocessor flags: '-DParallel'
R_LIB (LAPACK+BLAS): -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -openmp -lpthread

Current settings:
O Compiler options: -O1 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include
L Linker Flags: $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
P Preprocessor flags '-DParallel'
R R_LIBS (LAPACK+BLAS): -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -openmp -lpthread
X LIBX options:
LIBXC-LIBS:

PO Parallel options

S Save and Quit
Q Quit and abandon changes

To change an item select option.

Selection:

Adjust the default settings to match your hardware and environment. For Skylake processors:

O Compiler options: -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -O2 -xCORE-AVX512 -fp-model source -assume buffered_io
F FFTW options: -DFFTW3 -I/$(HOME)/local/include
L Linker Flags: $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH)
P Preprocessor flags '-DParallel'
R R_LIB (LAPACK+BLAS): -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread
FL FFTW_LIBS: -lfftw3_mpi -lfftw3 -L/$(HOME)/local/lib
S Save and Quit
Q Quit and abandon changes
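
Note: -xCORE-AVX512 produces binaries that only run on AVX-512-capable CPUs (e.g. Skylake-SP). A quick sanity check on the compute node before adopting the flag:

grep -m1 -o 'avx512[a-z]*' /proc/cpuinfo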

***********************************************************************
* Configure parallel execution *
***********************************************************************

These options are stored in parallel_options of your WIENROOT.
You can change them later manually or with siteconfig.

If you have only ONE multi-core node (ONE shared memory system) it is normally
better to start jobs in the background rather than using remote commands.
If you select a shared memory system WIEN will by default not use remote
shell commands (USE_REMOTE and MPI_REMOTE = 0 in parallel_options) and
set the default granularity to 1.

You still can override this default granularity in your .machines file.

You may also set a specific TASKSET command to bind your executables to a
specific core on multicore machines.

If you have A CLUSTER OF shared memory parallel computers answer next question with N
Shared Memory Architecture? (y/N):
Do you know/need a command to bind your jobs to specific nodes?
(like taskset -c). Enter N / your_specific_command:
N
On most mpi2-versions, it is better to start an mpijob on the original machine
and not via ssh on a remote system. If you are using mpi2 set MPI_REMOTE to 0
Set MPI_REMOTE to 0 / 1:
0
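
The actual distribution of work at run time is governed by the .machines file mentioned above, created per calculation in the case directory. A minimal sketch for k-point parallelism on a single 4-core node (hostnames and counts are placeholders to adapt):

# one lapw1/lapw2 k-point job per line: speed:hostname
1:localhost
1:localhost
1:localhost
1:localhost
granularity:1
extrafine:1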

***********************************************************************
* Configure parallel execution *
***********************************************************************

Parallel execution makes use of remote shells.
On most computers these are named "rsh or ssh".

Please specify the name of the remote shell command:

On linuxifc systems the remote shell is normally ssh,
which will be used as default.

Remote shell (default is ssh) = ssh
Changing lapw1para
Changing lapwsopara
Changing lapw2para
Changing lapwdmpara
Changing opticpara
Changing irreppara
Changing qtlpara
Changing hfpara
Changing dstartpara
Changing vec2old
Changing testpara
Changing x_nmr

Done.

Press RETURN to continue

**************************************************************************
Do you have MPI, ScaLAPACK, ELPA, or FFTW installed and intend to run
finegrained parallel?

This is useful only for BIG cases (50 atoms and more / unit cell)
and your HARDWARE has at least 16 cores (or is a cluster with Infiniband)
You need to KNOW details about your installed MPI, ELPA, and FFTW.

(y/N)
y
Recommended setting for parallel f90 compiler (default): mpiifort
Current selection: mpiifort

Your compiler:
Your parallel compiler will be: mpiifort

Do you want to use a present ScaLAPACK installation? (Y,n):
y
To abort the ScaLAPACK setup enter 'x' at any point!
You seem to have an MKL installation. (MKLROOT: /opt/intel/compilers_and_libraries_2017.4.196/linux/mkl)
Do you want to use the MKL version of ScaLAPACK? (Y,n):
y
Your SCALAPACK_LIBS are: -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64

These options derive from your chosen settings:

SCALAPACKROOT: /opt/intel/compilers_and_libraries_2017.4.196/linux/mkl/lib/
SCALAPACK_LIBNAME: mkl_scalapack_lp64
BLACSROOT: /opt/intel/compilers_and_libraries_2017.4.196/linux/mkl/lib/
BLACS_LIBNAME: mkl_blacs_intelmpi_lp64
MKL_TARGET_ARCH: intel64
Is this correct? (Y,n):
y
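
Optionally, verify that the MKL ScaLAPACK and BLACS libraries listed above actually exist under the reported MKLROOT:

ls $MKLROOT/lib/intel64/libmkl_scalapack_lp64.* $MKLROOT/lib/intel64/libmkl_blacs_intelmpi_lp64.*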
Do you want to use a present FFTW installation? (Y,n):
y
To abort the FFTW setup enter 'x' at any point!
Do you want to automatically search for FFTW installations? (Y,n):
n
Your present FFTW choice is:
Please specify whether you want to use FFTW3 (default) or FFTW2 (FFTW3 / FFTW2):

Present FFTW root directory is:
Please specify the path of your FFTW installation (like /opt/fftw3/) or accept present choice (enter): /opt/fftw/3.3.6-p12/intel/2017.4.196

The present target architecture of your FFTW library is: lib64
Please specify the target architecture of your FFTW library (e.g. lib64) or accept present choice (enter):

The present name of your FFTW library: fftw3
Please specify the name of your FFTW library or accept present choice (enter):

Your FFTW_OPT are: -DFFTW3 -I/opt/fftw/3.3.6-p12/intel/2017.4.196/include
Your FFTW_LIBS are: -L/opt/fftw/3.3.6-p12/intel/2017.4.196/lib64 -lfftw3
Your FFTW_PLIBS are: -lfftw3_mpi

These options derive from your chosen Settings:

FFTWROOT: /opt/fftw/3.3.6-p12/intel/2017.4.196/
FFTW_VERSION: FFTW3
FFTW_LIB: lib64
FFTW_LIBNAME: fftw3
Is this correct? (Y,n):
y
Do you want to use ELPA? (y,N):
ELPA is not needed for this setup, so enter N here and proceed to the parallel-execution configuration.
***********************************************************************
* Configure parallel execution *
***********************************************************************

Since intel changes the name of the mkl-libraries frequently you may find
the linking options for the most recent ifort version at
http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/

You need to specify

MPI and parallel libraries done in previous step,
options for parallel compilation in FPOPT, and
how to run mpi jobs in MPIRUN

(during execution _NP_ will be substituted by the "number of processors",
_EXEC_ by the "executable",
and _HOSTS_ by the name of the machines file).

For calculations on SLURM batch systems you have to additionally specify

number of cores per node in CORES_PER_NODE,
a command to bind tasks to cpus in PINNING_COMMAND (optional), and
an ordered list of physical cores in PINNING_LIST (optional).

Recommended options for system linuxifc are:

FPOPT(par.comp.options) : -O1 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include

MPIRUN command : mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_

Please specify your parallel compiler options or accept the recommendations (Enter - default)!:
mpirun

***********************************************************************
* Summary of parallel settings *
***********************************************************************

Current settings:

Parallel compiler : mpiifort
SCALAPACK_LIBS : -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
FFTW_OPT : -DFFTW3 -I/opt/fftw/3.3.6-p12/intel/2017.4.196/include
FFTW_LIBS : -L/opt/fftw/3.3.6-p12/intel/2017.4.196/lib64 -lfftw3
FFTW_PLIBS : -lfftw3_mpi
ELPA_OPT :
ELPA_LIBS :
FPOPT(par.comp.options): -O1 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include
MPIRUN command : mpirun

S Accept, Save, and Quit
R Restart Configuration
Q Quit and abandon changes

Please accept and save these settings, restart the configuration, or abandon
your changes.
If you want to change anything later on you can redo this whole configuration
process or you can change single items in “Compiling Options”.
Selection:

After entering S, the settings are saved to the WIEN2k_MPI file. siteconfig then asks once more about binding jobs to specific nodes, about MPI_REMOTE, and about the remote shell; simply press Enter at each prompt to accept the defaults.

Do you know/need a command to bind your jobs to specific nodes ?
(like taskset -c). Enter N / your_specific_command:

On most mpi-2 versions, it is better to start an mpijob on the original machine
and not via ssh on a remote system. If you are using mpi2 set MPI_REMOTE to 0
Set MPI_REMOTE to 0 / 1:

Remote shell (default is ssh) =

This is useful only for BIG cases (50 atoms and more / unit cell)
and your HARDWARE has at least 16 cores (or is a cluster with Infiniband)
You need to KNOW details about your installed MPI and FFTW.

Finding the required fftw2/3 mpi-files in /usr and /opt ...

Please specify the ROOT-path of your FFTW installation (like /opt/fftw3):

Your FFTW_LIBS are: -lfftw3_mpi -lfftw3 -L/home/nic/hmli/local/lib
Your FFTW_OPT are : -DFFTW3 -I/home/nic/hmli/local/include

S

A Compile all programs (suggested)
S Select program

Q Quit

Selection: A

./userconfig_lapw

userconfig_lapw registers the WIEN2k environment (WIENROOT, PATH, and aliases) in the user's shell startup file.

w2web

w2web starts the WIEN2k web interface; on first launch it asks for a user name, password, and port, after which WIEN2k can be driven from a browser.

Troubleshooting: check the compile.msg file in each SRC_* directory. If a problem is reported, edit the Makefile in that directory as the messages indicate, then rerun make (or make para for the parallel executables) there.
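
A quick way to scan all compile logs for errors at once (run from the top-level WIEN2k directory):

grep -il error SRC_*/compile.msg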


Seo Jin-woo

Managing Director (CTO) at Clunix, a supercomputing company / Certified Information Systems Auditor / operator of the Syszone blog
