OMA-browser, the manual script failure
0
Entering edit mode
6.1 years ago
natasha.sernova ★ 3.8k

Dear all,
My name is Natalia Sernova, and I am trying to run a script from the OMA manual on our cluster:

#!/bin/bash
# set the number of nodes and processes per node
#PBS -l select=1:ncpus=1:mem=1000mb
# set max wallclock time
#PBS -l walltime=100:00:00
#PBS -J 1-100
/mnt/lustre/tools/OMA.0.99/OMA/OMA.0.99z

# this final path could be incorrect, but the program has not reached it so far.
# I installed the program in my own directory, and we also installed it for our cluster -
# I don't quite understand yet where the program executable is situated.
# I could rely on $PATH, but it seems safer to spell out the full path like this, I think.

My run produces an error that I have failed to correct:

[sernova@head02 OMA]$ qsub OMA_run.sh
The -J option can only be used in conjunction with -P

Could you please help me?

Many thanks!
Sincerely yours,
Natasha

OMA Orthologs

0
Entering edit mode

I don't see a -J or a -P option in my PBS guide - can you explain what that directive means, please?

0
Entering edit mode

Dear Ram,

Unfortunately, I also don't know. I found it here: http://omabrowser.org/standalone/

I would like to run the program, and it requires this option. Sorry, I cannot answer your question. I hope the authors of this program will answer it.

0
Entering edit mode

Ah, you're looking for job arrays. The PBS I worked with at NYU used the -t option to specify the range of the array variable, and the variable itself was available as $PBS_ARRAYID.

Does your institute have a manual? That will be a definitive source. You may wish to check if your usual operating procedure matches with NYU's manual here: https://wikis.nyu.edu/display/NYUHPC/High+Performance+Computing+at+NYU and use it (with a bit of caution) if it does.

The page on job arrays in the NYU wiki is here: https://wikis.nyu.edu/display/NYUHPC/Using+an+Array+Job+to+run+a+set+of+experiments

HTH
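In case it helps, here is a minimal sketch of the same submission script in Torque-style syntax. This is an assumption about your scheduler: PBS Pro uses `-J` and `$PBS_ARRAY_INDEX`, while Torque-derived schedulers use `-t` and `$PBS_ARRAYID`, and the resource-line syntax differs too - check your site manual before using it.

```shell
#!/bin/bash
# Torque-style job array: "-t 1-100" here plays the role of
# "#PBS -J 1-100" in PBS Pro (assumption - verify against your site manual).
#PBS -l nodes=1:ppn=1
#PBS -l mem=1000mb
#PBS -l walltime=100:00:00
#PBS -t 1-100

# Torque exposes the array index as $PBS_ARRAYID
# (PBS Pro calls it $PBS_ARRAY_INDEX); default to 1 for a dry run.
IDX=${PBS_ARRAYID:-1}
echo "running OMA array task ${IDX} of 100"
/mnt/lustre/tools/OMA.0.99/OMA/OMA.0.99z
```

Submitted with `qsub`, this should start 100 tasks, each seeing its own index in `$PBS_ARRAYID`.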

0
Entering edit mode

Dear Ram,

Honestly, I am afraid to change something globally - I might kill our cluster. These were your own words: "with a bit of caution". I prefer to wait a little and see what the program authors tell me; they know it much better.

Thank you very much!

Natalia

0
Entering edit mode

You won't damage anything. The caution was about protecting your data should the script mess something up. Your HPC admins' job is to ensure that no amount of bad scripting can damage the cluster configuration or any system-critical file, and you can trust them to do their job well.

0
Entering edit mode

Dear Ram,

If this line:

#PBS -J 1-100

means that I need to run 100 jobs in parallel with 1 GB of RAM for each one, then we simply have slightly different syntax for this, and there is no problem. One GB looks too little, though.

What is the main point here: parallelization or node memory volume? Most probably, parallelization is more important. I probably cannot have more than 100 parallel jobs, but I can increase their memory volume. I am looking for the optimal trade-off.

Sincerely yours,

Natasha
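For what it's worth, the two settings are independent knobs in the submission script: the array directive (`-J` or `-t`) controls how many tasks run in parallel, while the `mem=` resource request controls the memory each individual task gets, so you can raise one without touching the other. A sketch keeping 100 tasks but with more memory per task (Torque-style `-t` syntax is an assumption, and the 4000mb figure is only an illustration, not an OMA recommendation):

```shell
#!/bin/bash
# Same array width as before, more memory per task.
#PBS -l nodes=1:ppn=1
#PBS -l mem=4000mb
#PBS -l walltime=100:00:00
#PBS -t 1-100
/mnt/lustre/tools/OMA.0.99/OMA/OMA.0.99z
```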