Hi,
I set up the environment on HCC, and my FASTA file contains 98 sequences. I've tried running it several times, changing the time duration in the SLURM script, without any success. This is my SLURM script:
#!/bin/bash
#SBATCH --job-name=kofamscan
#SBATCH --output=kofamscan.out
#SBATCH --error=kofamscan.err
#SBATCH --time=5:59:00
#SBATCH --mem=32G
#SBATCH --cpus-per-task=8
source ~/miniconda3/etc/profile.d/conda.sh
conda activate kofamscan_env
./exec_annotation \
  -o kofam_output.txt \
  -f detail-tsv \
  -p profiles/ \
  -k ko_list \
  --cpu 8 \
  test.faa
It keeps giving the following error; I have tried changing the CPU allocation too.
“slurmstepd: error: *** JOB 10654468 ON c2023 CANCELLED AT 2025-06-10T21:46:45 DUE TO TIME LIMIT ***”
What should I do? What could be the issue?
The same error keeps occurring. I'm wondering whether parallel processing is actually being used during execution.
Looking at the command line you used and the manual for kofam_scan, multi-threading should be in use. How long are your query sequences?
In any case, this comes back to the time limit on the job. Have you checked whether any output is being written before the job gets killed? If you are only allowed a maximum of 5 h 59 min per job, you may need to split your input file and run multiple jobs to get the analysis to complete.
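For what it's worth, here is a minimal sketch of that split-and-array approach. It is not a tested recipe for your setup: the chunk size of 10 sequences, the chunk_XX.faa and kofam_output_N.txt names, the array range 1-10, and the job name are all placeholders; it simply reuses the options and paths from your own script, and assumes test.faa is a well-formed FASTA whose first line is a header.

# Split test.faa into chunks of 10 sequences each (chunk_01.faa ... chunk_10.faa).
# Run this once, e.g. on a login node or in an interactive session.
awk '/^>/{n++; f=sprintf("chunk_%02d.faa", int((n-1)/10)+1)} {print > f}' test.faa

# Array job script (e.g. kofam_array.sh, submitted with: sbatch kofam_array.sh).
# Each array task annotates one chunk within its own 5h59 time limit.
#!/bin/bash
#SBATCH --job-name=kofamscan_array
#SBATCH --array=1-10
#SBATCH --output=kofamscan_%a.out
#SBATCH --error=kofamscan_%a.err
#SBATCH --time=5:59:00
#SBATCH --mem=32G
#SBATCH --cpus-per-task=8

source ~/miniconda3/etc/profile.d/conda.sh
conda activate kofamscan_env

# Pick the chunk that belongs to this array task (chunk_01.faa for task 1, etc.)
CHUNK=$(printf "chunk_%02d.faa" "${SLURM_ARRAY_TASK_ID}")

./exec_annotation \
  -o "kofam_output_${SLURM_ARRAY_TASK_ID}.txt" \
  -f detail-tsv \
  -p profiles/ \
  -k ko_list \
  --cpu 8 \
  "$CHUNK"

One caveat: kofam_scan keeps intermediate files in ./tmp by default, so if all the array tasks share a working directory it is worth giving each task its own temporary directory (the kofam_scan README documents a --tmp-dir option for this). Once all tasks finish, you can concatenate the per-chunk outputs into a single results file.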
This is one of the sequences:
Only this is written in the error output:
“slurmstepd: error: *** JOB 10654468 ON c2023 CANCELLED AT 2025-06-10T21:46:45 DUE TO TIME LIMIT ***”