Question: OMA-browser, the manual script failure
3.9 years ago, natasha.sernova3.2k wrote:

Dear all,
My name is Natalia Sernova, and I am trying to run a script from the OMA manual on our cluster:

#!/bin/bash
# set the number of nodes and processes per node
#PBS -l select=1:ncpus=1:mem=1000mb
# set max wallclock time
#PBS -l walltime=100:00:00
#PBS -J 1-100
/mnt/lustre/tools/OMA.0.99/OMA/OMA.0.99z

# this final path could be incorrect, but the program has not reached it so far.
# I installed the program in my own directory and we also installed it for our cluster -
# I don't quite understand so far where the program executable is situated.
# I could set the PATH instead, but spelling out the full path is safer, I think.

My run produces an error, and I have failed to correct it.

[sernova@head02 OMA]$ qsub OMA_run.sh
The -J option can only be used in conjunction with -P

Could you, please, help me?

Many thanks!
Sincerely yours,
Natasha

oma orthologs • 1.5k views
modified 3.9 years ago by RamRS20k • written 3.9 years ago by natasha.sernova3.2k

I don't see a -J or a -P option in my PBS guide, can you explain what that directive means, please?

written 3.9 years ago by RamRS20k

Dear Ram,

Unfortunately, I also don't know. I found it here:

http://omabrowser.org/standalone/

I would like to run the program, and it requires this option.

Sorry, I cannot answer your question. I hope the authors of this program will answer it.

written 3.9 years ago by natasha.sernova3.2k

Ah, you're looking for job arrays. The PBS I worked with at NYU used the -t option to specify the range of the array variable and the variable itself was available as $PBS_ARRAYID.

Does your institute have a manual? That will be a definitive source. You may wish to check if your usual operating procedure matches with NYU's manual here: https://wikis.nyu.edu/display/NYUHPC/High+Performance+Computing+at+NYU and use it (with a bit of caution) if it does.

The page on job arrays in the NYU wiki is here: https://wikis.nyu.edu/display/NYUHPC/Using+an+Array+Job+to+run+a+set+of+experiments

HTH

modified 3.9 years ago • written 3.9 years ago by RamRS20k
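To make the -t form concrete, here is a sketch of the question's script rewritten with Torque-style array syntax, as described in the NYU wiki. The resource amounts and the OMA path are copied from the question; whether this flavour matches the local cluster's PBS is an assumption, so check the institute's own manual first.

```shell
#!/bin/bash
# Torque-style array job: -t replaces PBS Pro's -J
# (the select= line from the question is PBS Pro syntax; this is its
#  approximate Torque equivalent - an assumption, not a verified mapping)
#PBS -l nodes=1:ppn=1,mem=1000mb
#PBS -l walltime=100:00:00
#PBS -t 1-100
# Under the scheduler, Torque exposes this task's index as $PBS_ARRAYID;
# default it to 1 so the script can also be tried outside the queue.
TASK_ID=${PBS_ARRAYID:-1}
echo "starting OMA worker for array task ${TASK_ID}"
# /mnt/lustre/tools/OMA.0.99/OMA/OMA.0.99z   # path from the question, unverified
```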

Dear Ram,

Honestly, I am afraid to change something globally - I may kill our cluster. These were your words: "with a bit of caution". I prefer to wait a little bit and see what the program authors will tell me. They know much better.

Thank you very much!

Natalia

written 3.9 years ago by natasha.sernova3.2k

You won't damage anything. The caution was about protecting your data should the script mess up stuff. Your HPC admins' job is to ensure that no amount of bad scripting can damage any cluster configuration or system-critical file, and you can trust them to do their job fine.

modified 3.9 years ago • written 3.9 years ago by RamRS20k

Dear Ram,

If this line:

#PBS -J 1-100

means that I need to run 100 jobs in parallel with 1 GB of RAM for each one, then we merely have slightly different syntax for this, and there is no problem. One GB looks too little, though.

What is the main point here: parallelization or node memory volume? Most probably, parallelization is more important. Probably I cannot have more than 100 parallel jobs, but I can increase their memory volume. I am looking for the optimal trade-off.

Sincerely yours,

Natasha

modified 3.9 years ago • written 3.9 years ago by natasha.sernova3.2k
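For what it's worth, the two requests are independent knobs: the array range fixes how many tasks the scheduler may run, and the per-chunk resource line fixes what each task gets. A hypothetical Torque-style header asking for the same 100 tasks but 4 GB each (the values are illustrative only, not a recommendation):

```shell
# Sketch, assuming Torque syntax: array size and per-task memory are separate
#PBS -l nodes=1:ppn=1,mem=4gb
#PBS -l walltime=100:00:00
#PBS -t 1-100
```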
Powered by Biostar version 2.3.0