How can I find KO IDs for ORF sequences from kofam_scan?
9 days ago
Nikesh • 0

Hi,

I set up the environment on HCC, and my FASTA file contains 98 sequences. I've tried running it several times, changing the time duration in the SLURM script, without any success. This is my SLURM script:

#!/bin/bash
#SBATCH --job-name=kofamscan
#SBATCH --output=kofamscan.out
#SBATCH --error=kofamscan.err
#SBATCH --time=5:59:00
#SBATCH --mem=32G
#SBATCH --cpus-per-task=8


source ~/miniconda3/etc/profile.d/conda.sh
conda activate kofamscan_env

./exec_annotation \
  -o kofam_output.txt \
  -f detail-tsv \
  -p profiles/ \
  -k ko_list \
  --cpu 8 \
  test.faa

It keeps giving the following error. I tried changing the CPU allocation too.

slurmstepd: error: *** JOB 10654468 ON c2023 CANCELLED AT 2025-06-10T21:46:45 DUE TO TIME LIMIT ***

What should I do? What could be the issue?

kofam_scan
9 days ago
GenoMax 152k

Please don't post the same content in multiple places.

I tried changing the CPU allocation too.

As I wrote in the other comment, this is an issue of time, not CPU.

CANCELLED AT 2025-06-10T21:46:45 DUE TO TIME LIMIT ***

You are asking for one minute less than 6 hours in this request, and the job is getting killed once that limit is reached. Increase the time request to --time=1-0 (one day) and see if that is enough. Adjust as needed/permitted by your local allocation.
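For reference, a minimal sketch of the revised SLURM header with the one-day request suggested above (all other values are carried over unchanged from the original script):

```shell
#!/bin/bash
#SBATCH --job-name=kofamscan
#SBATCH --output=kofamscan.out
#SBATCH --error=kofamscan.err
#SBATCH --time=1-0          # one day instead of 5:59:00
#SBATCH --mem=32G
#SBATCH --cpus-per-task=8
```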

I've tried running it several times, changing the time duration

If you are only allowed 5 h and 59 min max then this job can't be completed in that time limit.

BTW: The title of this post does not have a direct connection to the question you asked.


The same error is occurring. I’m wondering if parallel processing is not being utilized during the execution.


Looking at the command line you used and the manual for kofam_scan, multi-threading should be in use. How long are your query sequences?

In any case this is coming back to the time limit on the job. Have you checked to see if output is being written before the job gets killed? If you are only allowed a max of 5h59min per job then you may need to split your input file and run multiple jobs to get the analysis to complete.
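The split-and-run suggestion above can be sketched as a round-robin split of the input FASTA into a few chunks, followed by one job submission per chunk. This is a minimal sketch, not from the thread; the chunk file names and the `run_kofamscan.sh` wrapper script are assumptions:

```shell
# Round-robin the sequences in test.faa across 4 chunk files
# (chunk_0.faa .. chunk_3.faa — names are assumptions).
awk -v n=4 '/^>/ { f = "chunk_" (i++ % n) ".faa" } { print > f }' test.faa

# Then submit one SLURM job per chunk, e.g. (wrapper script is hypothetical):
# for c in chunk_*.faa; do sbatch run_kofamscan.sh "$c"; done
```

Each chunk then fits within the per-job time limit, and the per-chunk TSV outputs can simply be concatenated afterwards.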


This is one of the sequences:

>k127_52915239_12 # 4220 # 5476 # -1 # ID=1511550_12;partial=00;start_type=ATG;rbs_motif=TAAA;rbs_spacer=11bp;gc_cont=0.499
MSKGNMLLGHARGKVGSLVFSRSNGKQVVRANAEVVKNPQTEKQMIQRIIMATVAQAYSR
FQPICDHSWEGLQSGQKTMSAFISANLKLMRENIAAAVADNQSFDDIKAFTPVGSNEYAS
NAYIIAKGKLPEIVTSFSGSTRAKMDGIAENTYAGVLAAYGLQRGDQLTFVTTQGASGAN
MIFHFARVILDPMNADGSEADLSSSFIADGAINKPNTRNEGSFNALEFAAGSISWNFSAQ
AVTGAAVIVSRQKADGTYARSNATLQVNDPGIIYERSLQECLDLVASGSIDTLSTMYLNN
SGTGRVAGEVYEEPAVELEVSNLKVNGEAKAAPFNVENNQDPTITLTAANAGNDGRFKIG
MSTTSSSAGYTAGKAVVEGANEFSYELKQGEQAYFAILDTRNDNKVEEYLGVYVKSAF

Only this was written in the error output:

slurmstepd: error: *** JOB 10654468 ON c2023 CANCELLED AT 2025-06-10T21:46:45 DUE TO TIME LIMIT ***

9 days ago
Mensur Dlakic ★ 29k

Already answered in your other post.

I suggest you inquire about the SLURM time limit and set it to the maximum value allowed. This would be 6 days:

#SBATCH --time=6-00:00:00

Suggestions:

  • Regarding GenoMax's suggestion: how do you know the same error is occurring when you didn't allow several days for this to run?
  • Why don't you try the same script on your shortest sequence and make sure that everything works?
  • Why not ask for more than 8 CPUs?
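The "shortest sequence" suggestion above can be sketched with a small awk script that pulls the shortest record out of the input FASTA for a quick end-to-end test. This is a minimal sketch, not from the thread; the file names `test.faa` and `shortest.faa` are assumptions:

```shell
# Extract the shortest sequence from test.faa into shortest.faa
# (file names are assumptions).
awk 'function flush() { if (hdr != "" && (!found || len < min)) { found = 1; min = len; bh = hdr; bs = seq } }
     /^>/ { flush(); hdr = $0; seq = ""; len = 0; next }
     { seq = seq $0 "\n"; len += length($0) }
     END { flush(); printf "%s\n%s", bh, bs }' test.faa > shortest.faa
```

Running the same SLURM script on `shortest.faa` confirms the pipeline works before committing a multi-day allocation to the full input.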

Thank you, I changed it to #SBATCH --time=1-00:00:00 and it's running. I will keep you updated. Thanks again.
