How to configure Slurm for a local Galaxy bioinformatics analysis server
13 months ago
cbass • 0

Hi, I am wondering how exactly to set CPU and RAM usage with Slurm, since this is my first time using Slurm (or any workload manager/job scheduler, for that matter). Can I just create new partitions and assign them to destination ids that use slurm as the runner? And what should the param id be? I had the following settings in mind for Slurm, but am wondering whether they would actually work. In case it is important: I am using this for my private Galaxy server (https://galaxyproject.org/) in order to run tools more efficiently. Any feedback/help is appreciated! Thanks in advance!

Here are the parts of my files with the settings I had in mind:

galaxyservers.yml:

# Slurm
slurm_roles: ['controller', 'exec']
slurm_nodes:
- name: localhost 
  CPUs: 14 # Host has 16 cores total
  RealMemory: 110000 # in MB; 'free --mega' reports 135049 MB total, leaving headroom for the OS
  ThreadsPerCore: 1
slurm_partitions:
  - name: main
    Nodes: localhost
    Default: YES
slurm_config:
  SlurmdParameters: config_overrides   # Use the values configured above even if they differ from the detected hardware
  SelectType: select/cons_res
  SelectTypeParameters: CR_CPU_Memory  # Allocate individual cores/memory instead of entire node
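
For reference, my assumption is that the galaxyproject.slurm role renders the variables above into a slurm.conf roughly like this (a sketch based on standard slurm.conf syntax, not verified role output):

```
# Sketch of the expected slurm.conf lines (assumption; the actual
# role output may order or group these differently)
SlurmdParameters=config_overrides
SelectType=select/cons_res
SelectTypeParameters=CR_CPU_Memory
NodeName=localhost CPUs=14 RealMemory=110000 ThreadsPerCore=1
PartitionName=main Nodes=localhost Default=YES
```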

job_conf.xml.j2:

<job_conf>
    <plugins workers="14"> <!-- matches the number of cores configured above -->
        <plugin id="local_plugin" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
    </plugins>
    <destinations default="slurm">
        <destination id="local_destination" runner="local_plugin"/>
        <destination id="partition_1" runner="slurm">
            <param id="p01">--nodes=1 --ntasks=1 --cpus-per-task=1 --mem=4000</param>
            <param id="tmp_dir">True</param>
        </destination>
        <destination id="partition_2" runner="slurm">
            <param id="p02">--nodes=1 --ntasks=1 --cpus-per-task=2 --mem=6000</param>
            <param id="tmp_dir">True</param>
        </destination>
        <destination id="partition_3" runner="slurm">
            <param id="p03">--nodes=1 --ntasks=1 --cpus-per-task=4 --mem=16000</param>
            <param id="tmp_dir">True</param>
        </destination>
    </destinations>
    <tools>
        <tool id="tool_example_1" destination="partition_1"/>
        <tool id="tool_example_2" destination="partition_2"/>
        <tool id="tool_example_3" destination="partition_3"/>
    </tools>
</job_conf>
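
Regarding my question about the param id: from the Galaxy job_conf documentation I have read, the Slurm runner (which goes through DRMAA) seems to take its sbatch-style options via a param with id "nativeSpecification", so I assume a destination would look like this (not yet tested on my end):

```
<destination id="partition_1" runner="slurm">
    <!-- "nativeSpecification" is, as far as I understand, the param id
         the DRMAA-based Slurm runner uses for sbatch-style options -->
    <param id="nativeSpecification">--nodes=1 --ntasks=1 --cpus-per-task=1 --mem=4000</param>
    <param id="tmp_dir">True</param>
</destination>
```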
galaxyproject galaxy slurm

This is about resource management, not about bioinformatics.


True, but it was recommended that I post the question on this forum: it is related to the Galaxy project, a bioinformatics platform, and people here who use Galaxy for bioinformatics analysis may know how to solve this problem.


Just curious: what is the point of managing a "private" instance of Galaxy?


Access to non-public data on your institution's or group's own storage and compute.


Well, managing Galaxy is part of the bioinformatician's job.


Try asking on the Galaxy Help forum (https://help.galaxyproject.org/) or on Gitter (https://gitter.im/galaxyproject/Lobby).
