Details of OMA Running Process - OMA standalone
Asked 8.5 years ago by moranr ▴ 290

Hi,

For the standalone version, can you answer the following please:

  1. How does OMA deal with jobs that do not finish in one submission, i.e. on clusters where jobs can only run for a limited amount of time? To 'continue' the job, do I simply resubmit it, and OMA detects where it left off from the Cache?

Thank you,
R

Tags: OMA • Orthologs
Answer (8.5 years ago):

In the Cache/AllAll/ directory, all files that are gzipped represent job chunks that have completed successfully and will be used. Files that are not gzipped represent job chunks that were being processed when your job died; you can delete these, and they will be restarted the next time you run OMA.
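
If it is of use, here is a minimal Python sketch of that cleanup, assuming it is run from the OMA working directory (the one containing Cache/AllAll/); the script itself is illustrative and not part of OMA:

    #!/usr/bin/env python3
    """Delete non-gzipped (incomplete) all-against-all chunks so that
    OMA recomputes them on the next run.

    Assumption: run from the OMA working directory containing Cache/AllAll/.
    """
    import os

    CACHE_DIR = "Cache/AllAll"  # assumed location, per the answer above

    for root, _dirs, files in os.walk(CACHE_DIR):
        for name in files:
            if not name.endswith(".gz"):  # completed chunks are gzipped
                path = os.path.join(root, name)
                print("removing incomplete chunk:", path)
                os.remove(path)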

Reply:

Great, thank you.

Reply:

After a job with a fair number of restarts, I had the same problem as above (which was fixed by your suggestion regarding the non-gzipped files). But now I have a new problem, which I'm guessing also results from the stopping and starting.

During the database conversions and checks, all of my attempts end with something like this:

    ReadDb: expected 13501933 Pat entries, found 13501854
    Error, (in ReadDb) existing index does not match file size

Any suggestions on how to help OMA move past this?

Reply:

Sorry, I missed this additional question (next time, it's better to post it as a separate question with the tag "oma", as I get a notification about those).

This is due to a corrupt database index file. The most common cause is multiple processes trying to write to the same file at the same time. In your Cache/DB/ directory, delete all *.map and *.tree files, and restart only a single OMA process, at least until you have reached the all-against-all phase.
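
As before, a minimal Python sketch of that cleanup, assuming Cache/DB/ relative to the OMA working directory; the script is illustrative only, not part of OMA:

    #!/usr/bin/env python3
    """Delete the (possibly corrupt) database index files so OMA rebuilds them.

    Assumption: run from the OMA working directory containing Cache/DB/.
    Remember to restart only ONE OMA process afterwards, as advised above.
    """
    import glob
    import os

    DB_DIR = "Cache/DB"  # assumed location, per the advice above

    for pattern in ("*.map", "*.tree"):
        for path in glob.glob(os.path.join(DB_DIR, pattern)):
            print("removing index file:", path)
            os.remove(path)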

