Question: Error running OMA "OneMachineParallelInfo"
dtejadamartinez10 wrote (8 months ago):

Hi, I am trying to run OMA on 2 different clusters. On one cluster it ran up to 32%; on the other it is cut off at the beginning and the conversions do not finish.

In both cases I get the following error:

1523613044.955060 - 1 - [pid 22699]: Computing Orcinus_orca vs Bos_taurus (Part 95 of 503) Mem: 1.236GB
/home/nis/daniela/OMA/bin/oma: line 236: 22663 Terminated              $OMA_PATH/bin/omadarwin ${darwin_flag}  <<EOF
/home/dtejada/OMAstandalone/OMA.2.2.0/bin/oma: line 236:  9919 Segmentation fault      (core dumped) $OMA_PATH/bin/omadarwin ${darwin_flag}  <<EOF

Can you help me to solve this problem?


Tags: oma, omastandalone
modified 7 months ago by adrian.altenhoff440 • written 8 months ago by dtejadamartinez10

edited to add

There is a point where the conversions seem ready, but the file conversion.running is still present. When I then run OMA again, I get the following error and the whole run dies:

 line 18: ulimit: stack size: cannot modify limit: Operation not permitted
only_run_dbconv := true
Starting database conversion and checks...
Process 15942 on login2: job nr 1 of 1
Reading GO file...

Best regards,

modified 8 months ago by genomax59k • written 8 months ago by dtejadamartinez10

Tagging: adrian.altenhoff

written 8 months ago by genomax59k

edited to add

The cluster uses Sun Grid Engine. I'm still trying to run the data, including the Toy Example, but this error always comes up:

Reading GO file... Segmentation fault (core dumped)


written 7 months ago by dtejadamartinez10
adrian.altenhoff440 wrote (7 months ago):

From the output you mention, I think the problem is due to a restriction on your system that prevents you from modifying the stack size. Some functions in OMA require a significant amount of recursion, so we try to increase the stack size as much as possible (Linux: ulimit -s unlimited; Mac: ulimit -s hard).

Your output indicates that this operation failed. Could you try to run these commands in a shell yourself? On some (HPC) setups, raising the limit is only permitted in an interactive user session and cannot be done from within a batch job. In that case, you would need to execute the ulimit command prior to submitting the job, or add it to the job submission script.
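The advice above can be sketched as an SGE submission script; a minimal sketch, assuming a typical Grid Engine setup (the OMA install path and job name are placeholders, not from the original thread):

```shell
#!/bin/bash
# Hypothetical SGE submission script (submit with: qsub run_oma.sge).
#$ -cwd
#$ -N oma_run

# Try to lift the stack-size limit before OMA starts; on many HPC
# setups this is only allowed at this point, not from inside OMA.
# Fall back to the hard limit if "unlimited" is not permitted.
ulimit -s unlimited 2>/dev/null || ulimit -s hard

# Report the limit that actually took effect, for debugging.
echo "stack size limit: $(ulimit -s)"

# /path/to/OMA/bin/oma   # launch OMA once the limit is raised (path is an assumption)
```

If `echo "stack size limit: ..."` still prints a small value in the job's output file, the scheduler is capping the limit and you would need to ask your cluster administrators to raise it.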

written 7 months ago by adrian.altenhoff440


Powered by Biostar version 2.3.0