Question: toil-cwl-runner on slurm: toil.batchSystems.singleMachine maxMemory limit?
alanh80 wrote, 12 months ago:

I've written a CWL that I know works on the Arvados CWL runner on our SLURM cluster with Docker.

I'm trying to run it using the toil-cwl-runner on the same SLURM cluster using singularity 2.5.x.

I've created a virtualenv and ran pip install toil[cwl] and it appears to have installed.

I'm trying to kick off a workflow using the following command lines on our SLURM cluster's login node (a two-core VM with 8 GB of RAM):

export TOIL_SLURM_ARGS='--partition allnodes' ;
toil-cwl-runner --jobStore ~/tmp/toil-test --singularity --batchSystem Slurm dnaseq.cwl dnaseq-test.yml

Almost instantly, this warns me:

WARNING:toil.batchSystems.singleMachine:Limiting maxMemory to physically available memory (8201486336).

How do I change the maxMemory as this is way below what I need? The CWL requests more memory in many of the jobs, so this is coming from Toil, not CWL.
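For reference, the 8201486336 bytes in the warning corresponds to the total physical RAM of the 8 GB login node. One way to see where such a figure comes from on Linux (illustrative only, not necessarily how Toil queries it) is:

```python
import os

# Total physical memory in bytes, as reported by the kernel:
# page size multiplied by the number of physical pages.
# On an 8 GB VM this yields a figure in the same ballpark as
# the 8201486336 in the warning above.
total_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
print(total_bytes)
```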

Michael R. Crusoe (Common Workflow Language project) wrote, 12 months ago:

Hello alanh,

Toil should be querying the Slurm system for the maximum amount of memory available on a single node. If that is wrong, then that is a Toil bug worth reporting.

In the meantime, you may find the following toil-cwl-runner option useful:

  --maxMemory INT       The maximum amount of memory to request from the batch
                        system at any one time. Standard suffixes like K, Ki,
                        M, Mi, G or Gi are supported. Default is 8.0 Ei
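A sketch of how such memory suffixes typically map to bytes (this is an illustrative parser, not Toil's actual implementation; it assumes K/M/G are decimal powers of 1000 and Ki/Mi/Gi are binary powers of 1024, so verify the exact interpretation against Toil's documentation):

```python
# Illustrative human-readable-size parser (assumption: decimal for
# K/M/G/E, binary for Ki/Mi/Gi/Ei). Not Toil's real code.
SUFFIXES = {
    "K": 1000, "M": 1000**2, "G": 1000**3, "E": 1000**6,
    "Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ei": 1024**6,
}

def human_to_bytes(value: str) -> int:
    """Convert a string like '64G' or '8Gi' to a byte count."""
    value = value.strip()
    # Check two-character suffixes (Ki, Mi, ...) before one-character ones.
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if value.endswith(suffix):
            return int(float(value[: -len(suffix)]) * SUFFIXES[suffix])
    return int(value)  # bare number: already in bytes

print(human_to_bytes("64G"))   # 64000000000
print(human_to_bytes("8Gi"))   # 8589934592
```

So a workaround invocation could add something like --maxMemory 64G (64G being just an example value) to the original command line.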