Question: STAR alignment error- Segmentation fault (core dumped)
written 14 months ago by EagleEye (Sweden):

Hello all,

I am using the STAR aligner on fastq files ranging from 2.5 to 13 GB (aligning against the human genome). STAR completes the jobs for the 2.5 to 3.5 GB files, but it fails with 'Segmentation fault (core dumped)' on the 13 GB files. I tried the following:

Initial run with my parameters:

STAR --runThreadN 15 --outSAMmapqUnique 255 --outSAMattributes All \
     --outSAMattrIHstart 0 --outSAMtype BAM SortedByCoordinate \
     --outSAMunmapped None --outFileNamePrefix $outpath/mysample1/mysample1 \
     --runMode alignReads --genomeDir $genome_dir --readFilesIn $inpath/mysample1.fq

Second run with '--limitBAMsortRAM 37775409479':

STAR --runThreadN 15 --outSAMmapqUnique 255 --limitBAMsortRAM 37775409479 \
     --outSAMattributes All --outSAMattrIHstart 0 \
     --outSAMtype BAM SortedByCoordinate --outSAMunmapped None \
     --outFileNamePrefix $outpath/mysample1/mysample1 --runMode alignReads \
     --genomeDir $genome_dir --readFilesIn $inpath/mysample1.fq

Third run with '--alignWindowsPerReadNmax 200000':

STAR --runThreadN 15 --outSAMmapqUnique 255 --limitBAMsortRAM 37775409479 \
     --alignWindowsPerReadNmax 200000 --outSAMattributes All \
     --outSAMattrIHstart 0 --outSAMtype BAM SortedByCoordinate \
     --outSAMunmapped None --outFileNamePrefix $outpath/mysample1/mysample1 \
     --runMode alignReads --genomeDir $genome_dir --readFilesIn $inpath/mysample1.fq

Every run fails with the same single-line error:

Segmentation fault      (core dumped)

Note: I am running all jobs with 15 cores (each core has 6.8 GB of RAM). My STAR version is 2.5.3a.

Is there any solution to this issue?

written by EagleEye

STAR would have been a good tag here :-)

written by WouterDeCoster

Done.

written by EagleEye

Not a regular STAR user, but I thought you could load the index into memory once and re-use it across runs with the --genomeLoad option. Also, each core does not really have 6.8 GB of RAM to itself; the node's RAM is shared by all cores (unless you are using a job scheduler that enforces per-core limits).

written by genomax
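The shared-memory workflow genomax describes might look like the sketch below. Paths and option combinations are illustrative only; note that in some STAR versions --genomeLoad shared memory is not compatible with --outSAMtype BAM SortedByCoordinate, so the sketch writes unsorted BAM.

```shell
# Pre-load the human index into shared memory once (hypothetical paths):
STAR --genomeDir "$genome_dir" --genomeLoad LoadAndExit

# Each alignment job then attaches to the shared copy instead of
# reading the ~30 GB index from disk again:
STAR --genomeDir "$genome_dir" --genomeLoad LoadAndKeep \
     --readFilesIn "$inpath/mysample1.fq" \
     --outSAMtype BAM Unsorted \
     --outFileNamePrefix "$outpath/mysample1/mysample1"

# Unload the index after the last sample is done:
STAR --genomeDir "$genome_dir" --genomeLoad Remove
```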

Yes, I am using a cluster with a workload manager. Each node has 20 cores and 128 GB of RAM, and I can request up to 19 cores per node with as much memory as those cores allow. An equal share of RAM per core would give each core only about 6 to 7 GB.

written by EagleEye
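As a quick sanity check on the per-core share, using the node numbers from this thread:

```shell
# Even split of a 128 GB node across the cores requested:
awk 'BEGIN { printf "20 cores: %.1f GB/core\n", 128/20 }'   # 6.4
awk 'BEGIN { printf "19 cores: %.1f GB/core\n", 128/19 }'   # 6.7
# Either figure is far below the ~30 GB STAR needs for the human
# genome index, so the index has to be shared, not split per core.
```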

AFAIK STAR requires 30+ GB of RAM for human genome alignments. If your cluster is really set up to cap each core at ~6 GB of RAM (an odd setup), then I am not sure how this is going to work. Can you reserve a full node (and use the --genomeLoad option to share the loaded index)?

written by genomax

Is this segmentation fault the only hint you have?

Not even a small log file, or the usual STAR messages like 'Started STAR run' or 'Started mapping'?

written by Bastien Hervé

All the log files look fine up to 'started sorting BAM' (the final step); after that I just get the single error message 'Segmentation fault'.

written by EagleEye

If it failed while sorting the BAM, you can try --outSAMtype BAM Unsorted.

Then use samtools to sort the BAM yourself.

written by Bastien Hervé
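A sketch of that two-step approach, reusing the file names from the question (the samtools invocation and the 1 GB per-thread sort memory are illustrative assumptions):

```shell
# 1. Let STAR write an unsorted BAM, skipping its in-memory sort:
STAR --runThreadN 15 --runMode alignReads \
     --genomeDir "$genome_dir" \
     --readFilesIn "$inpath/mysample1.fq" \
     --outSAMtype BAM Unsorted \
     --outFileNamePrefix "$outpath/mysample1/mysample1"

# 2. Sort with samtools, capping memory at ~1 GB per sort thread:
samtools sort -@ 15 -m 1G \
     -o "$outpath/mysample1/mysample1.sorted.bam" \
     "$outpath/mysample1/mysample1Aligned.out.bam"
```

With this split, a sort-stage crash in STAR is avoided entirely, and samtools spills to temporary files on disk instead of failing when RAM runs short.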

Can we have one of the log files, please?

written by Bastien Hervé

Did you take a look at this thread?

written by Bastien Hervé

Thanks a lot for all your suggestions. Reserving 19 cores instead of 15 solved the issue.

written by EagleEye

You should post that as an answer and mark it as accepted so the question shows up as resolved.

written by manuel.belmadani