STAR alignment error: Segmentation fault (core dumped)
5.8 years ago
EagleEye 7.5k

Hello all,

I am using the STAR aligner to align FASTQ files ranging from 2.5 to 13 GB against the human genome. STAR completed the jobs for the 2.5-3.5 GB files successfully, but the 13 GB files fail with 'Segmentation fault (core dumped)'. I tried the following solutions.

Initial run with my parameters:

STAR --runThreadN 15 \
     --outSAMmapqUnique 255 \
     --outSAMattributes All \
     --outSAMattrIHstart 0 \
     --outSAMtype BAM SortedByCoordinate \
     --outSAMunmapped None \
     --outFileNamePrefix $outpath/mysample1/mysample1 \
     --runMode alignReads \
     --genomeDir $genome_dir \
     --readFilesIn $inpath/mysample1.fq

Second run with '--limitBAMsortRAM 37775409479':

STAR --runThreadN 15 \
     --outSAMmapqUnique 255 \
     --limitBAMsortRAM 37775409479 \
     --outSAMattributes All \
     --outSAMattrIHstart 0 \
     --outSAMtype BAM SortedByCoordinate \
     --outSAMunmapped None \
     --outFileNamePrefix $outpath/mysample1/mysample1 \
     --runMode alignReads \
     --genomeDir $genome_dir \
     --readFilesIn $inpath/mysample1.fq

Third run with '--alignWindowsPerReadNmax 200000':

STAR --runThreadN 15 \
     --outSAMmapqUnique 255 \
     --limitBAMsortRAM 37775409479 \
     --alignWindowsPerReadNmax 200000 \
     --outSAMattributes All \
     --outSAMattrIHstart 0 \
     --outSAMtype BAM SortedByCoordinate \
     --outSAMunmapped None \
     --outFileNamePrefix $outpath/mysample1/mysample1 \
     --runMode alignReads \
     --genomeDir $genome_dir \
     --readFilesIn $inpath/mysample1.fq

Every run fails with the same single-line error:

Segmentation fault      (core dumped)

Note: I am running all jobs with 15 cores (each core has 6.8 GB of RAM). My STAR version is 2.5.3a.

Is there any solution to this issue?

Tags: alignment • software error • RNA-Seq • STAR

STAR would have been a good tag here :-)

Done.

I am not a regular STAR user, but I thought you could load the index into shared memory once and then re-use it across jobs via the --genomeLoad option (a sketch of that workflow is below). Also, each core does not really have 6.8 GB of RAM to itself; the node's memory is shared by all cores (unless your job scheduler enforces a per-core limit).
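
A minimal sketch of that shared-memory workflow, re-using the paths from the question (the --limitBAMsortRAM value is just a placeholder; with a shared genome, sorted-BAM output needs an explicit limit):

# Load the genome index into shared memory once; it stays resident until removed.
STAR --genomeDir $genome_dir --genomeLoad LoadAndExit

# Each alignment job then re-uses the shared copy instead of loading its own.
STAR --genomeDir $genome_dir --genomeLoad LoadAndKeep \
     --readFilesIn $inpath/mysample1.fq \
     --outSAMtype BAM SortedByCoordinate \
     --limitBAMsortRAM 20000000000 \
     --outFileNamePrefix $outpath/mysample1/mysample1

# Release the shared-memory copy once all samples are done.
STAR --genomeDir $genome_dir --genomeLoad Remove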

Yes, I am using a cluster with a workload manager. The cluster has 20 cores per node with 128 GB of RAM, and I can request the maximum memory for the cores I reserve, up to 19 cores per node. An equal share of RAM per core works out to roughly 6.4 GB each (128 GB / 20 cores).

AFAIK STAR requires 30+ GB of RAM for human genome alignments. If your cluster really enforces a hard ~6 GB RAM limit per core (an odd setup), then I am not sure how this is going to work. Can you reserve a full node (and use the --genomeLoad option, then compute that way)? A sketch of such a reservation follows.
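
A minimal sketch of a full-node reservation, assuming SLURM (the scheduler here is an assumption; translate to whatever your cluster runs):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --exclusive          # reserve the entire node so no other job shares its RAM
#SBATCH --cpus-per-task=20
#SBATCH --mem=0              # in SLURM, --mem=0 requests all memory on the node

STAR --runThreadN 20 --genomeDir $genome_dir --readFilesIn $inpath/mysample1.fq \
     --outSAMtype BAM SortedByCoordinate --outFileNamePrefix $outpath/mysample1/mysample1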

Is this 'Segmentation fault' the only hint you have?

Not even a small log file, or the usual STAR messages like 'Started STAR run' or 'Started mapping'?

All log files look fine up to 'started sorting BAM' (the final step); after that I just get the single error message 'Segmentation fault'. The logs I am checking are the standard STAR outputs, as below.
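
For reference, these are the standard per-sample log files STAR writes next to the output prefix (paths assume the prefix used in the question):

# Run-time parameters and high-level progress messages:
tail $outpath/mysample1/mysample1Log.out
# Per-minute mapping statistics while the run is alive:
tail $outpath/mysample1/mysample1Log.progress.out
# Final summary statistics (only written if the run completes):
tail $outpath/mysample1/mysample1Log.final.out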

If it fails at the BAM-sorting step, you can try --outSAMtype BAM Unsorted.

Then use samtools to sort your BAM, along the lines of the sketch below.
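
A minimal sketch, re-using the paths from the question (the samtools thread count and per-thread memory are placeholders):

# Let STAR write an unsorted BAM, skipping its internal sort entirely.
STAR --runThreadN 15 --genomeDir $genome_dir --readFilesIn $inpath/mysample1.fq \
     --outSAMtype BAM Unsorted --outFileNamePrefix $outpath/mysample1/mysample1

# Sort with samtools instead; -@ adds threads, -m caps memory per thread.
samtools sort -@ 8 -m 2G \
    -o $outpath/mysample1/mysample1.sorted.bam \
    $outpath/mysample1/mysample1Aligned.out.bam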

Can we see one of the log files, please?

Did you take a look at this thread?

5.8 years ago
EagleEye 7.5k

Thanks a lot for all your suggestions. Reserving 19 cores instead of 15 solved the issue, presumably because memory is allocated per reserved core (15 × 6.8 GB ≈ 102 GB versus 19 × 6.8 GB ≈ 129 GB). The final invocation is sketched below.
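
A sketch of the final run (the exact command was not posted; this assumes the original parameters with the thread count raised to match the 19 reserved cores):

STAR --runThreadN 19 \
     --outSAMmapqUnique 255 \
     --outSAMattributes All \
     --outSAMattrIHstart 0 \
     --outSAMtype BAM SortedByCoordinate \
     --outSAMunmapped None \
     --outFileNamePrefix $outpath/mysample1/mysample1 \
     --runMode alignReads \
     --genomeDir $genome_dir \
     --readFilesIn $inpath/mysample1.fq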

You should post that in an answer and mark it as accepted so the question shows up as resolved.
