How to limit the resources of a snakemake rule?

Dear all, I have a problem with memory management for a rule of a Snakemake pipeline. I have a C++-based tool that corrects ChIP-seq and ATAC-seq signal for CNVs in samples, and I run it from the shell with the following rule:

    # CNV correction
    rule M1_signal_correction_for_CNVs:
        input:
            dedup_BAM_shifted_sorted = ancient(...),
            dedup_BAM_shifted_sorted_index = ancient(...),
            tool_config_file = "path/to/the/configuration_file_of_the_tool.txt",
        output:
            regions = os.path.join(.../regions.bed),
            peaks = os.path.join(.../peaks.narrowPeak),
            CNV_profile = os.path.join(.../CNV_profile.txt),
            bedGraph = os.path.join(.../bedGraph.bdg)
        params:
            tool_path = config["tool_path"],
            basename = os.path.join(basename),
            sample = "{SAMPLES}"
        threads:
            config["tool_threads"]
        resources:
            mem_mb = 50
        shell:
            " {params.tool_path} {input.dedup_BAM_shifted_sorted} - {input.tool_config_file} {params.basename} --threads {threads} "

Everything is fine, except that despite the mem_mb = 50 parameter the RAM consumption on the server climbs to 190 GB, at which point the server kills the process.

Note also that I pass --resources mem_mb=50000 when I run the Snakemake pipeline.

I also tried ulimit -v, but it does not work either.

Does anyone have an idea of what I could do to limit the memory usage for this process?

Thank you in advance for your help

snakemake bash shell
FGV ▴ 170

Snakemake uses resources only as a way to avoid submitting too many jobs at once; it does not check whether a job actually uses more or less than you have specified. In other words, that value should reflect the amount of memory the rule needs, not cap how much memory it may use.

In your case you are saying that the rule needs 50 MB; since your system has more than that available, Snakemake launches the job.

If you run it on an HPC system, the queuing system will keep track of how much memory the job is using and kill it if it exceeds the request, but that is done by the scheduler, not by Snakemake.
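
To make the distinction concrete, here is a rough sketch (the 190000 value is simply the peak you observed, not something prescribed by the tool, and the rest of the rule is elided): the declared mem_mb only controls how many such jobs Snakemake starts at once under the budget given with --resources, and it enforces nothing while the job runs.

    rule M1_signal_correction_for_CNVs:
        # input/output/params/threads/shell exactly as in the question
        resources:
            # what one job actually needs at peak (~190 GB here);
            # pure bookkeeping for the scheduler, not a runtime limit
            mem_mb = 190000

    # With `snakemake --resources mem_mb=200000 ...`, at most one such job is
    # scheduled at a time (2 x 190000 > 200000), but a running job can still
    # grow beyond 190 GB without Snakemake intervening.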


Thank you for your answer!

Indeed, I suspected that was the problem, which is why I also tried ulimit to limit the shell, but it does not work.

Yes, indeed it is the server that is killing the process, not Snakemake.

I need to find a shell/bash command that can limit the memory of this process directly from bash.
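
Maybe the reason ulimit had no effect is that I set it in a different shell from the one that actually launches the tool; Snakemake runs each shell directive in its own bash process, so the limit could be set inside the rule itself. A rough sketch (the 50 GB cap is just an example; ulimit -v limits the virtual address space in KiB, so allocations beyond it fail inside the tool instead of the server's OOM killer stepping in):

    rule M1_signal_correction_for_CNVs:
        # input/output/params/threads as before
        shell:
            # 50 GB = 50 * 1024 * 1024 KiB = 52428800; the limit is inherited
            # by the tool because it runs in the same shell as the ulimit call
            "ulimit -v 52428800 && "
            "{params.tool_path} {input.dedup_BAM_shifted_sorted} - "
            "{input.tool_config_file} {params.basename} --threads {threads}"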


Well, I am not sure it will be easy (or even possible) to limit the memory usage of a process without changing the source code... There might be some way to cap RAM usage, but the process will then use swap (which is much slower than RAM).

In the end, if a program uses a lot of RAM it is probably because it needs it! Either that, or it is not implemented properly... :P
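
If you really want a hard cap anyway, one option might be to run the command in its own cgroup via systemd-run (a sketch, assuming a systemd-based Linux host with cgroup v2; the 50G value and the file names are just placeholders standing in for your real paths):

    # run the tool inside a transient scope with a memory ceiling;
    # MemoryMax caps the cgroup's memory and MemorySwapMax=0 forbids falling
    # back to swap, so the kernel kills only this process if it grows too big,
    # instead of the whole server suffering
    systemd-run --user --scope -p MemoryMax=50G -p MemorySwapMax=0 \
        /path/to/tool input.bam - configuration_file_of_the_tool.txt basename --threads 4

Note that --user scopes may need memory-controller delegation to be enabled on your system, so check with your admins before relying on it.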


Yeah, unfortunately you just confirmed what I was thinking. Thank you anyway!! :)

