Question: Best way to run OMA using SLURM
dimar.gonzalez.soto wrote, 14 months ago:

Hi everyone, I'm executing OMA on a cluster using the bash script mentioned in the OMA standalone documentation. For parallelization I'm using SLURM with this script:

sbatch --array=1-100 -N1 <<EOF    # submit a job array of 100 tasks, each on one node
#!/bin/sh
export NR_PROCESSES=100           # tell OMA how many parallel workers exist in total
/usr/local/bin/OMA/bin/OMA
EOF

SLURM is running only one OMA process per node; I want to execute all 100 processes at the same time. How can I achieve that?


Is this all there is to the script? What is your actual OMA command line with options/inputs/outputs?

— genomax

I have not passed any input/output to the OMA executable.

— dimar.gonzalez.soto

How will you actually use OMA for analysis of your data if you don't tell it what to analyze?

You seem to have copied the SLURM parallelization section from the OMA standalone help page but have not actually constructed the OMA command line for running the analysis.

— genomax

There is a default parameter file that is used in case there are no arguments.

— adrian.altenhoff
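For context, a minimal run following the OMA standalone conventions might look like the sketch below; the project directory name my_oma_run is hypothetical, while parameters.drw and DB/ are the names the OMA standalone distribution uses.

# Hypothetical project layout:
#   my_oma_run/
#     parameters.drw   (parameter file copied from the OMA distribution and edited)
#     DB/              (input genomes, one FASTA file per species)
cd my_oma_run
/usr/local/bin/OMA/bin/OMA    # with no arguments, the default parameters.drw in the current directory is used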

I see. @adrian: Thanks for the clarification.

— genomax

OK, I will construct the command properly.

— dimar.gonzalez.soto
Answer: adrian.altenhoff wrote, 14 months ago:

Hi

Your command should indeed start 100 jobs of OMA standalone in parallel. However, the cluster configuration specifies how many resources a job is allocated by default if not otherwise requested. I assume that your configuration grants each job a lot of resources, and hence SLURM will only start one job per node. This is not a problem as long as you have enough nodes in your cluster. However, you will block others from using the cluster if you don't specify tight resource limits (see man sbatch, in particular the --mem option).

Best wishes, Adrian

— adrian.altenhoff
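Building on the script from the question, the per-task limits can be stated explicitly so that SLURM can pack several array tasks onto the same node; the values below (one task, one CPU, 4 GB of memory per task) are illustrative assumptions that should be sized to your genomes and cluster.

sbatch --array=1-100 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem=4G <<EOF
#!/bin/sh
export NR_PROCESSES=100    # must match the array size
/usr/local/bin/OMA/bin/OMA
EOF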

If it starts only one process per node, is that due to the SLURM configuration?

— dimar.gonzalez.soto