A quick guide to installing and configuring Condor.

————————————————

Even when I sit down to edit this... I'm not sure where to start. ㅠㅠ

It's been a while, so my memory of the details is fuzzy..

————————————————

Install Condor on every node as shown below.

Note that client3 is the central manager, so on client3 run ./condor_install one more time to do the central manager setup.

[root@client3]# tar zxf condor-6.6.10-linux-x86-glibc23-dynamic.tar.gz

[root@client3 condor]# cd condor-6.6.10

[root@client3 condor-6.6.10]# ./condor_install

.

.

Press enter to begin Condor installation

***************************************************************************

        STEP 1: What type of Condor installation do you want?

***************************************************************************

Would you like to do a full installation of Condor? [yes]

Press enter to continue.

***************************************************************************

        STEP 2: How many machines are you setting up for Condor?

***************************************************************************

Are you planning to setup Condor on multiple machines? [yes]

Will all the machines share files via a file server? [yes]

You should run condor_install on your file server, so that root has

permission to create files needed by Condor.

What are the hostnames of the machines you wish to setup?

(Just type the hostnames, not the fully qualified names.

Put one machine per line.  When you are done, just hit enter.)

client3

client5

Setting up Condor for the following machines:

client3 client5

Press enter to continue.

***************************************************************************

        STEP 3: Install the Condor "release directory", which holds

        various binaries, libraries, scripts and files used by Condor.

***************************************************************************

which: no condor_config_val in (/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/pbs/bin:/usr/local/pbs/sbin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/X11R6/bin:/opt/c3-4/:/usr/local/lam/bin:/root/bin)

I can't find a complete Condor release directory.

Have you installed a release directory already? [no]

Where would you like to install the Condor release directory?

[/usr/local/condor]

That directory doesn't exist, should I create it now? [yes]

.

(Condor is installed here)

.

Using /usr/local/condor as the Condor release directory.

Press enter to continue.

***************************************************************************

        STEP 4: How and where should Condor send email if things go wrong?

***************************************************************************

If something goes wrong with Condor, who should get email about it?

[root@client3.hoho.com]

What is the full path to a mail program that understands "-s" means

you want to specify a subject? [/bin/mail]

Using /bin/mail to send email to root@client3.hoho.com

Press enter to continue.

***************************************************************************

        STEP 5: Filesystem and UID domains.

***************************************************************************

To correctly run all jobs in your pool, including ones that aren't relinked

for Condor, you must tell Condor if you have a shared filesystem, and if

so, what machines share it.

Please read the "Configuring Condor" section of the Administrator's manual

(in particular, the section "Shared Filesystem Config File Entries")

for a complete explaination of these (and other, related) settings.

Do all of the machines in your pool from your domain ("hoho.com")

share a common filesystem? [no] yes

Configuring all machines to use "hoho.com" for their filesystem domain.

Do all of the users across all the machines in your domain have a unique

UID (in other words, do they all share a common passwd file)? [no] yes

Configuring all machines to use "hoho.com" for their uid domain.

In some cases, even if you have unique UIDs, you might not have all users

listed in the password file on each machine.

Is this the case at your site? [no] yes

Press enter to continue.

***************************************************************************

        STEP 6: Java Universe support in Condor.

***************************************************************************

Enable Java Universe support? [yes] no

***************************************************************************

        STEP 7: Where should public programs be installed?

***************************************************************************

The Condor binaries and scripts are already installed in:

        /usr/local/condor/bin

If you want, I can create some soft links from a directory that is already

in the default PATH to point to these binaries, so that Condor users do not

have to change their PATH.  Alternatively, I can leave them where they are

and Condor users will have to add /usr/local/condor/bin

to their PATH or explicitly use a full pathname to access the Condor tools.

Shall I create links in some other directory? [yes]

Where should I install these files?

[/usr/local/bin]

Press enter to continue.

***************************************************************************

        STEP 8: What machine will be your central manager?

***************************************************************************

What is the full hostname of the central manager?

[client3.hoho.com] client3.hoho.com

Your central manager will be on the local machine.

Press enter to continue.

***************************************************************************

        STEP 9: Where will the "local directory" go?

***************************************************************************

Each machine in your pool will need a unique directory

You have a "condor" user on this machine.  Is the home directory for

this account (/home/condor) shared among all machines in your pool?

[yes]

Do you want to put all the Condor directories for each machine in

subdirectories of /home/condor/hosts? [yes]

Using /home/condor/hosts/[hostname] as the local directory for each host.

Creating all necessary Condor directories ... done.

Condor needs a few lock files to syncronize access to it's log files.

You're using a shared file system for your local Condor directories.

Because of problems we've had with file locking over network file

systems, we recomend that you specify a directory on a local

partition to put these lock files.

Do you want to specify a local partition for file locking? [yes]

Where should I put the lock files? [/var/lock/condor]

When condor_install completes, you will have to run condor_init

on each machine in your pool before you start Condor there.  condor_init

will create the local lock directory with the right permissions.

Press enter to continue.

***************************************************************************

        STEP 10: Where will the local (machine-specific) config files go?

***************************************************************************

Condor allows you to have a machine-specific config file that overrides

settings in the global config file.

You must specify a machine-specific config file.

Do you want all the machine-specific config files for each host in one

directory? [yes]

What directory should I use? [/usr/local/condor/etc]

Naming each config file [hostname].local

Creating config files in "/usr/local/condor/etc" ... done.

Configuring global condor config file ... done.

Created /usr/local/condor/etc/condor_config.

Press enter to continue.

Setting up client3.hoho.com as your central manager

What name would you like to use for this pool?  This should be a

short description (20 characters or so) that describes your site.

For example, the name for the UW-Madison Computer Science Condor

Pool is: "UW-Madison CS".  This value is stored in your central

manager's local config file as "COLLECTOR_NAME", if you decide to

change it later.  (This shouldn't include any " marks).

hoho   # <-- the common pool name for the nodes Condor will tie together

Setting up central manager config file /usr/local/condor/etc/client3.local ... done.

Press enter to continue.

***************************************************************************

        STEP 11: How do you want Condor to find its config file?

***************************************************************************

Condor searches a few locations to find it main config file. The first place

is the envionment variable CONDOR_CONFIG. The second place it searches is

/etc/condor/condor_config, and the third place is ~condor/condor_config.

Should I put in a soft link from /home/condor/condor_config to

/usr/local/condor/etc/condor_config [yes]

Installing links for public binaries into /usr/local/bin ... done.

Created /usr/local/condor/etc/roster,

the list of all machines in your pool.

Press enter to continue.

***************************************************************************

Condor has been fully installed on this machine.

***************************************************************************

/usr/local/condor/sbin contains various administrative tools.

If you are going to administer Condor, you should probably place that

directory in your PATH.

Be sure to run condor_init on each machine in your pool to create

the lock directory before you start Condor there.

To start Condor on any machine, just execute:

/usr/local/condor/sbin/condor_master

Since this is your central manager, you should start Condor here first.

Press enter to continue.

You should probably setup your machines to start Condor automatically at

boot time.  If your machine uses System-V style init scripts, look in

/usr/local/condor/etc/examples/condor.boot

for a script that you can use to start and stop Condor.

Please read the “Condor is installed… now what?” section of the INSTALL

file for things you should do before and after starting the Condor daemons.

In particular, you might want to set up host/ip access security.  See the

Adminstrator's Manual for details.

End of installation ————

[root@client3 condor-6.6.10]# ln -s /home/condor/hosts/client3 /usr/local/condor/local.client3

[root@client3 condor-6.6.10]# /usr/local/condor/sbin/condor_init

/home/condor/condor_config already exists.

/home/condor/log already exists.

/home/condor/spool already exists.

/home/condor/execute already exists.

Creating /usr/local/condor/local.client3/condor_config.local

Condor has been initialized, but not started.

The files and directories above are created. Note that the local config file is only created; it is empty.

Because this leaves an empty local config file, you must symlink it to the real config file produced by condor_install (the exact commands are shown under "Additional configuration" below).

Also, some files and directories end up owned by root, so change their ownership to the condor user.

On the client3 node, run ./condor_install one more time for the central manager setup.

[root@client3 condor-6.6.10]# ./condor_install

Press enter to begin Condor installation

.

***************************************************************************

        STEP 1: What type of Condor installation do you want?

***************************************************************************

Would you like to do a full installation of Condor? [yes] no

Would you like to setup this host as a submit-only machine? [yes] no

Would you like to setup this host as a Condor Central Manager?

(Only choose this option if you have already done a full installation on

a file server and want to setup the local machine

[no] yes

Press enter to continue.

***************************************************************************

        STEP 2: How many machines are you setting up for Condor?

***************************************************************************

You can only have 1 machine set up as a central manager.

Press enter to continue.

***************************************************************************

        STEP 3: Install the Condor "release directory", which holds

        various binaries, libraries, scripts and files used by Condor.

***************************************************************************

It looks like you've installed a Condor release directory in:

/usr/local/condor

Do you want to use this release directory? [yes]

Using /usr/local/condor as the Condor release directory.

Press enter to continue.

What name would you like to use for this pool?  This should be a

short description (20 characters or so) that describes your site.

For example, the name for the UW-Madison Computer Science Condor

Pool is: "UW-Madison CS".  This value is stored in your central

manager's local config file as "COLLECTOR_NAME", if you decide to

change it later.  (This shouldn't include any " marks).

hoho

Setting up central manager config file /usr/local/condor/local.client3/condor_config.local ... done.

End of installation

Additional configuration

[root@client3 condor]# chmod 755 /home/condor

[root@client3 condor]# cd /home/condor

[root@client3 condor]# chown condor.condor condor_config

[root@client3 condor]# chown condor.condor hosts

[root@client3 condor]# cd hosts

[root@client3 hosts]# chown condor.condor *

[root@client3 hosts]# rm client3/condor_config.local

[root@client3 hosts]# ln -s /usr/local/condor/etc/client3.local client3/condor_config.local

[root@client3 hosts]# chown condor.condor client3/condor_config.local

[root@client3 hosts]# chmod o-x /usr/local/condor/sbin/condor*

[root@client3 etc]# cat /usr/local/condor/etc/roster

client3

client5

[root@client3 hosts]# vi /usr/local/condor/etc/condor_config

##  What machine is your central manager?

CONDOR_HOST             = client3.hoho.com

[root@client3 client3]# vi /home/condor/hosts/client3/condor_config.local

MEMORY = 256

[root@client3 client3]# /usr/local/condor/etc/examples/condor.boot start

Starting up Condor

[root@client3 client3]# ps aux |grep condor

condor    1713  0.0  0.7  5128 1928 ?        Ss   15:22   0:00 /usr/local/condor/sbin/condor_master

condor    1714  0.0  0.8  5380 2104 ?        Ss   15:22   0:00 condor_collector -f

condor    1715  0.0  0.7  5184 2004 ?        Ss   15:22   0:00 condor_negotiator -f

condor    1716  0.0  0.9  6024 2356 ?        Rs   15:22   0:02 condor_startd -f

condor    1717  0.0  0.9  6080 2476 ?        Ss   15:22   0:00 condor_schedd -f

root      1726  0.0  0.2  4780  704 pts/2    S+   15:22   0:00 grep condor
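The installer also suggested starting Condor automatically at boot. On a Red Hat-style System-V init setup, a minimal sketch might look like this (an assumption, not from the original walkthrough; the example script may need chkconfig header comments added before chkconfig will accept it):

]# cp /usr/local/condor/etc/examples/condor.boot /etc/init.d/condor

]# chkconfig --add condor

]# chkconfig condor on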

————–

client5.hoho.com

[root@client5 condor]# ./condor_configure --install-dir=/usr/local/condor --type=submit,execute --central-manager=client3.hoho.com --owner=condor --install

[root@client5 condor-6.6.10]# export CONDOR_CONFIG=/usr/local/condor/etc/condor_config

[root@client5 condor]# vi /usr/local/condor/local.client5/condor_config.local

# Be sure to add this line to each client's local config file

RESERVED_SWAP = 0

COLLECTOR_NAME = hoho

MEMORY = 256

FILESYSTEM_DOMAIN = hoho.com

SUSPEND = FALSE

LOCK = /tmp/condor-lock.$(HOSTNAME)0.229313677017675

CONDOR_ADMIN = root@client5.hoho.com

START = TRUE

MAIL = /bin/mail

RELEASE_DIR = /usr/local/condor

DAEMON_LIST = MASTER,SCHEDD,STARTD

COLLECTOR = $(SBIN)/condor_collector

PREEMPT = FALSE

UID_DOMAIN = client3.hoho.com

# FILESYSTEM_DOMAIN appears twice in this file; this later definition is the one that takes effect

FILESYSTEM_DOMAIN = client3.hoho.com

USE_NFS = True

NEGOTIATOR = $(SBIN)/condor_negotiator

VACATE = FALSE

CONDOR_HOST = client3.hoho.com

CONDOR_IDS = 703.703

LOCAL_DIR = /usr/local/condor/local.$(HOSTNAME)

CONTINUE = TRUE

KILL = FALSE

/usr/local/condor/etc/condor_config  <-- do not edit this file (leave the global config alone).

[root@client5 condor-6.6.10]# /usr/local/condor/etc/examples/condor.boot start

Starting up Condor

[root@client5 condor-6.6.10]# ps aux |grep condor

condor    2575  0.0  0.7  5104 1796 ?        Ss   15:18   0:00 /usr/local/condor/sbin/condor_master

condor    2576  0.0  0.9  6024 2356 ?        Rs   15:18   0:03 condor_startd -f

condor    2577  0.0  0.9  6052 2344 ?        Ss   15:18   0:00 condor_schedd -f

root      2585  0.0  0.2  4780  704 pts/1    S+   15:19   0:00 grep condor

————— client3.hoho.com ————

Verify:

[root@client3 client3]# condor_status

Name          OpSys       Arch   State      Activity   LoadAv Mem   ActvtyTime

client3.hoho. LINUX       INTEL  Owner      Idle       0.000   256  0+00:05:14 <-- owner

client5.hoho. LINUX       INTEL  Unclaimed  Idle       0.000   256  0+00:07:23 <-- unclaimed

                     Machines Owner Claimed Unclaimed Matched Preempting

         INTEL/LINUX        2     1       0         1       0          0

               Total        2     1       0         1       0          0

————- Troubleshooting errors

A test program:

[clustest@client5 clustest]$ cat print.c

#include <stdio.h>

int main(void)

{

    printf("the C program is working\n");

    return 0;

}

[clustest@client5 clustest]$ gcc -o print print.c

[clustest@client5 clustest]$ ./print

the C program is working

[clustest@client5 clustest]$ cat print.sh

Executable = print

Universe = vanilla

output = print.out

Log = print.log

Requirements = Memory >=32 && Opsys == "LINUX" && Arch == "INTEL" <-- when Condor could not match things like the architecture or memory size, the second error below occurred.

Queue

[clustest@client5 clustest]$ condor_submit print.sh

Submitting job(s).

Logging submit event(s).

1 job(s) submitted to cluster 3.

The program's output will be written to the print.out file.

First error....

When a job tries to run, a message like the one below appears (you can see it in the log file):

Swap space estimate reached! No more jobs can be run!

7/15 16:31:07     Solution: get more swap space, or set RESERVED_SWAP = 0

]# condor_status -schedd [hostname] -long | grep VirtualMemory

Run the command above; if the value listed is 0, then this is what is confusing Condor. There are two ways to fix the problem:

1. Configure your machine with some real swap space.

2. Disable this check within Condor by setting the amount of reserved swap space for the submit machine to 0.

    Set RESERVED_SWAP to 0 in the configuration file:

RESERVED_SWAP = 0

On the submit machines, change this value as shown above in /usr/local/condor/local.client3/condor_config.local (the per-client local config file, not the global config file),

and then send a condor_restart to the submit machine.
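As a minimal sketch (assuming the command is run on the submit machine itself, so no target host needs to be named):

]# vi /usr/local/condor/local.client3/condor_config.local <-- add RESERVED_SWAP = 0

]# condor_restart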

Second error....

7/15 16:58:07       Rejected 9.0 clustest@client3.hoho.com <192.168.5.3:32999>: no match found

For example, analyzing job 9 gives the result below.

[clustest@client3 clustest]$ condor_q -analyze 9

— Submitter: client3.hoho.com : <192.168.5.3:32999> : client3.hoho.com

ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD              



009.000:  Run analysis summary.  Of 2 machines,

      1 are rejected by your job's requirements

      1 reject your job because of their own requirements

      0 match, but are serving users with a better priority in the pool

      0 match, but reject the job for unknown reasons

      0 match, but will not currently preempt their existing job

      0 are available to run your job

        No successful match recorded.

        Last failed match: Fri Jul 15 16:58:07 2005

        Reason for last match failure: no match found

Requirements = Memory >=32 && Opsys == "LINUX" && Arch == "INTEL" <-- writing in requirements like these sometimes gets the job to run.

But the real fix is:

add the START = TRUE directive to the condor_config.local files and restart.

This directive controls whether the machine is left Unclaimed. With START = TRUE, the machine shows as Unclaimed, as below, and accepts jobs.

                     Machines Owner Claimed Unclaimed Matched Preempting

         INTEL/LINUX        1     0       0         1       0          0

With START = FALSE, however, the machine is marked Owner and does not accept jobs.

                     Machines Owner Claimed Unclaimed Matched Preempting

         INTEL/LINUX        1     1       0         0       0          0
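To check in advance which machines can satisfy a job's requirements, one option (a sketch, not part of the original walkthrough) is to query the collector with the same constraint expression:

]$ condor_status -constraint 'Memory >= 32 && OpSys == "LINUX" && Arch == "INTEL"'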

###############################

# Running Condor over NFS

###############################

Condor uses the FileSystemDomain and UidDomain machine ClassAd attributes to give jobs access to the correct shared files and users. Machines that advertise the same values for these attributes can access the same shared file systems.

All machines mount the same shared directories at the same location; such machines belong to the same file system domain.

client3.hoho.com exports the /cluster/clustest directory over NFS, and client5.hoho.com mounts it. The clustest user on both machines has the same UID and GID.
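The NFS setup itself is not shown here; as a rough sketch (the export options are assumptions, adjust them for your site):

]# cat /etc/exports <-- on client3, the NFS server

/cluster/clustest client5.hoho.com(rw,sync,no_root_squash)

]# exportfs -ra

]# mount client3.hoho.com:/cluster/clustest /cluster/clustest <-- on client5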

Whether START is set to True or False determines whether a machine accepts jobs; both machines are set to True here so that both accept incoming jobs.

client3.hoho.com : central manager

client5.hoho.com : a pool machine belonging to client3's pool

[root@client3 root]# ps aux | grep condor

condor    3554  0.0  0.9  5564 2408 ?        Ss   15:51   0:00 condor_collector -f

condor    4019  0.0  0.8  5212 2064 ?        Ss   16:45   0:00 /usr/local/condor/sbin/condor_master

condor    4020  0.0  0.8  5184 2080 ?        Ss   16:45   0:00 condor_negotiator -f

condor    4021  1.5  1.0  6152 2624 ?        Ss   16:45   0:09 condor_startd -f

condor    4022  0.0  1.0  6104 2580 ?        Ss   16:45   0:00 condor_schedd -f

root      4058  0.0  0.2  4776  692 pts/18   R+   16:57   0:00 grep condor

On client3.hoho.com, the five daemons above are running.

[root@client5 root]# ps aux |grep condor

condor    5036  0.0  0.7  5172 2004 ?        Ss   16:30   0:00 /usr/local/condor/sbin/condor_master

condor    5037  0.0  0.9  6060 2472 ?        Ss   16:30   0:00 condor_schedd -f

condor    5039  0.3  1.0  6184 2752 ?        Ss   16:30   0:05 condor_startd -f

On client5.hoho.com, the three daemons above are running.

]# condor_status <-- verify that client5 and client3 are joined in the same pool.

Name          OpSys       Arch   State      Activity   LoadAv Mem   ActvtyTime

client3.hoho. LINUX       INTEL  Unclaimed  Idle       0.000   256  0+00:03:15

client5.hoho. LINUX       INTEL  Unclaimed  Idle       0.000   256  0+00:00:08

                     Machines Owner Claimed Unclaimed Matched Preempting

         INTEL/LINUX        2     0       0         2       0          0

               Total        2     0       0         2       0          0

]# condor_status -l <-- use this command to inspect the machine ClassAd attributes and confirm that FileSystemDomain and UidDomain are identical.

MyType = "Machine"

TargetType = "Job"

Name = "client3.hoho.com"

Machine = "client3.hoho.com"

.

.

UidDomain = "client3.hoho.com"

FileSystemDomain = "client3.hoho.com"

MyType = "Machine"

TargetType = "Job"

Name = "client5.hoho.com"

Machine = "client5.hoho.com"

.

.

UidDomain = "client3.hoho.com"

FileSystemDomain = "client3.hoho.com"

]# su - clustest <-- log in as the test user, then:

]$ cat print.c

#include <stdio.h>

int main(void)

{

    printf("the C program is working\n");

    return 0;

}

]$ gcc -o print print.c

]$ cat print.sh

Executable = print

Universe = vanilla

#output = print.out

Log = print.log

Requirements = Memory >=32 && Opsys == "LINUX" && Arch == "INTEL"

Rank = Memory >=64

Error = err.$(Process)

Output = result.$(Process)

Queue 100

]$ condor_submit print.sh

]$ condor_status <-- you can see that both machines are now in the Claimed state.

Name          OpSys       Arch   State      Activity   LoadAv Mem   ActvtyTime

client3.hoho. LINUX       INTEL  Claimed    Suspended  0.010   256  0+00:00:04

client5.hoho. LINUX       INTEL  Claimed    Busy       0.000   256  0+00:00:05

                     Machines Owner Claimed Unclaimed Matched Preempting

         INTEL/LINUX        2     0       2         0       0          0

               Total        2     0       2         0       0          0

Set START = TRUE or START = FALSE in each machine's condor_config.local file and restart, so that the machine can or cannot accept jobs, and verify where the work actually runs.
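A quick way to flip the setting without a full restart (a sketch; condor_reconfig makes the running daemons re-read their config files):

]# vi /usr/local/condor/local.client5/condor_config.local <-- set START = FALSE (or TRUE)

]# condor_reconfig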

########################################

###############################

# Running Condor without NFS

###############################

client3 is the central machine, so configure it in condor_config; on client5, set different UidDomain and FileSystemDomain values in condor_config.local as shown below. This way, even though the machines are in the same pool, each machine accepts only jobs submitted from itself.

For the test, set START = FALSE in client3's condor_config.local so that it cannot accept jobs, and START = TRUE on client5 so that it can. (With START = FALSE, the machine's State in the condor_status machine ClassAd shows as Owner.)

Then submit a job from client3.

]# client3 settings

UID_DOMAIN = client3.hoho.com

FILESYSTEM_DOMAIN = client3.hoho.com

]# client5 settings

UID_DOMAIN = client5.hoho.com

FILESYSTEM_DOMAIN = client5.hoho.com

]# condor_status -l <-- use this command to inspect the machine ClassAd attributes and check each machine's FileSystemDomain and UidDomain.

MyType = "Machine"

TargetType = "Job"

Name = "client3.hoho.com"

Machine = "client3.hoho.com"

.

.

UidDomain = "client3.hoho.com" <-- client3's UidDomain and FileSystemDomain are client3.hoho.com,

FileSystemDomain = "client3.hoho.com"

MyType = "Machine"

TargetType = "Job"

Name = "client5.hoho.com"

Machine = "client5.hoho.com"

.

.

UidDomain = "client5.hoho.com" <-- but client5's UidDomain and FileSystemDomain are client5.hoho.com.

FileSystemDomain = "client5.hoho.com"

]# su - clustest

client3]$ condor_submit print.sh

With this setup, the job fails to run, and the log file shows the error below.

This is because client3 is set to START = FALSE, which blocks it from accepting jobs, and because the machines' UidDomain and FileSystemDomain differ, files cannot be moved or accessed between them.

Out of servers - 0 jobs matched, <number of jobs> jobs idle, 1 jobs rejected

]$ condor_rm -all <-- remove all the jobs,

All jobs marked for removal

then modify the submit file in the three places below. These settings allow Condor to transfer the files.

]$ cat print.sh

Executable = print

Universe = vanilla

#output = print.out

Log = print.log

Requirements = (Memory >=32) && (OpSys == "LINUX") && (Arch == "INTEL")

Rank = Memory >=64

Image_Size = 1 Meg

transfer_input_files = print.sh <-- add this line

should_transfer_files = YES <-- add this line

when_to_transfer_output = ON_EXIT_OR_EVICT <-- add this line

Error = err.$(Process)

Output = result.$(Process)

Queue 10

]$ condor_submit print.sh

]$ condor_status <-- checking now shows the jobs running on client5.hoho.com.

Name          OpSys       Arch   State      Activity   LoadAv Mem   ActvtyTime

client3.hoho. LINUX       INTEL  Owner      Idle       0.040   256  0+00:05:09

client5.hoho. LINUX       INTEL  Claimed    Busy       0.000   256  0+00:00:03

                     Machines Owner Claimed Unclaimed Matched Preempting

         INTEL/LINUX        2     1       1         0       0          0

               Total        2     1       1         0       0          0
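Once the queue drains, each result.$(Process) file should contain the program's output; for example (assuming job 0 has completed):

]$ cat result.0

the C program is working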
