Question: How to remove duplicate sequences in a fasta file using Python?
horsedog wrote (15 months ago):

I have two fasta files, file 1 and file 2. They share a lot of overlapping sequences, but not all of them. I want to merge the two files into one file, file 3, and remove the duplicated part, keeping only the unique records. Is there a Python code example for this? By duplicate I mean the exact same query name and sequence, like these two:

>YP_204112.2
MEHYISLLVKSIFIENMALSFFLGMCTFLAVSKKVKTSFGLGVAVVVVLTIAVPVNNLVYTYLLKENALV
AGVDLTFLSFITFIGVIAALVQILEMILDRFFPPLYNALGIFLPLITVNCAIFGGVSFMVQRDYNFAESV
VYGFGSGIGWMLAIVALAGIREKMKYSDVPPGLRGLGITFITVGLMALGFMSFSGVQL

>YP_204112.2
MEHYISLLVKSIFIENMALSFFLGMCTFLAVSKKVKTSFGLGVAVVVVLTIAVPVNNLVYTYLLKENALV
AGVDLTFLSFITFIGVIAALVQILEMILDRFFPPLYNALGIFLPLITVNCAIFGGVSFMVQRDYNFAESV
VYGFGSGIGWMLAIVALAGIREKMKYSDVPPGLRGLGITFITVGLMALGFMSFSGVQL

Many thanks!

modified 5 months ago by michau

Is there a reason you seem to want a Python solution every time? CD-HIT is meant for this sort of application.

written 15 months ago by genomax

Well, thank you very much! I'll take a look at CD-HIT! No particular reason; I'm just practising Python recently, so I'd like to see how people solve problems with it.

written 15 months ago by horsedog
Alex Reynolds (Seattle, WA, USA) wrote (15 months ago):

Shorter option via awk:

$ cat one.fa two.fa | awk -vRS=">" '!a[$0]++ { print ">"$0; }' - > answer.fa

If you want to replicate this with Python, you could look at using a dictionary to store unique keys.

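The dictionary/set idea mentioned above can be sketched in plain Python. This is an editor's sketch, not code from the thread: the function name and filenames are placeholders, and records are compared as exact header-plus-sequence strings, matching the question.

```python
def dedupe_fasta(paths, out_path):
    """Merge fasta files, keeping only the first copy of each exact record."""
    seen = set()
    with open(out_path, 'w') as out:
        for path in paths:
            with open(path) as fh:
                text = fh.read()
            # every record starts with '>'; the chunk before the first '>' is empty
            for chunk in text.split('>')[1:]:
                record = '>' + chunk.strip() + '\n'
                if record not in seen:
                    seen.add(record)
                    out.write(record)
```

Called as `dedupe_fasta(['one.fa', 'two.fa'], 'answer.fa')`, it keeps the first occurrence of each duplicated record, much like the awk one-liner, but without the leading empty record.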

Hi, it said "invalid -v".

written 15 months ago by horsedog

Are you using GNU awk? If you're using OS X, install GNU awk via Homebrew: brew install gawk

written 15 months ago by Alex Reynolds

Hi again. You're right, it works now, but it's odd: answer.fa contains even more lines than file 1 and file 2 combined, so it seems to have added lines rather than removed them. (I used wc -l to count lines.)

written 15 months ago by horsedog

Maybe grab the first few lines of both files and try it out on those test files. I'm not sure why you get that result; this is a pretty common use of awk.

written 15 months ago by Alex Reynolds

I like the simplicity! This method almost worked for me, but it was printing empty lines and one line containing only '>'. I'm not an expert in awk, so I filtered the results with grep:

cat one.fa two.fa | awk -v RS=">" '!a[$0]++ { print ">"$0; }' - | grep -Ev '^\s*$|^>\s*$' > answer.fa
written 10 months ago by souzademedeiros
tiago211287 (USA) wrote (15 months ago):

This task can be accomplished quickly with FASTA/Q Collapser.

Vijay Lakhujani (India) wrote (15 months ago):

One-liner using seqkit:

zcat fasta.fa.gz | seqkit rmdup -s -i -m -o clean.fa.gz -d duplicated.fa.gz -D duplicated.detail.txt
pauley-p (UNAM) wrote (12 months ago):

This is a Python-based alternative for your issue that uses BioPy:

https://github.com/MJChimal/BiologPy/blob/master/drop_unique_records.py

Hope it still helps! :)

shoujun.gu (Rockville, MD) wrote (15 months ago):

This Python code will combine all the input fasta files (any number of files) and output one file with duplicated sequences removed. Note: if your input fasta files are too big to fit in memory, this code will fail.

input_files = ['one.fa', 'two.fa']  # list of input file names
output_file = 'output_file_name'

holder = []
for name in input_files:
    with open(name, 'r') as fh:  # use a separate name; don't shadow the loop variable
        # split on '>'; the first chunk (before the first '>') is empty, so drop it
        rec = fh.read().split('>')[1:]
        rec = ['>' + i.strip() + '\n' for i in rec]
    holder.extend(rec)

# set() drops exact duplicates (header plus sequence); record order is not preserved
with open(output_file, 'w') as out:
    out.write(''.join(set(holder)))
michau (Gdansk, Poland) wrote (5 months ago):

Learn to use the Biopython library. It's extremely handy, and you can use any supported format for input or output.

from Bio import SeqIO

# open in 'w' mode so re-running the script doesn't append duplicates
with open('output.fasta', 'w') as outFile:
    record_ids = set()  # a set gives O(1) membership checks
    for record in SeqIO.parse('input.fasta', 'fasta'):
        if record.id not in record_ids:
            record_ids.add(record.id)
            SeqIO.write(record, outFile, 'fasta')

It should be noted that this only checks for duplicates based on their IDs, which is not always the most robust approach. It would probably be better to check for duplicated sequences, which is what most of the other solutions do.

written 5 months ago by jrj.healey
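Following up on that point, a stdlib-only sketch (no Biopython) that deduplicates on the sequence itself, so two records with different headers but identical sequences collapse into one. This is an editor's sketch; the function name and filenames are placeholders.

```python
def dedupe_by_sequence(in_path, out_path):
    """Keep only the first record carrying each distinct sequence."""
    seen = set()
    with open(in_path) as fh, open(out_path, 'w') as out:
        # every record starts with '>'; the chunk before the first '>' is empty
        for chunk in fh.read().split('>')[1:]:
            header, _, body = chunk.partition('\n')
            seq = ''.join(body.split())  # join wrapped lines, drop whitespace
            if seq not in seen:
                seen.add(seq)
                out.write('>' + header + '\n' + body.rstrip() + '\n')
```

Called as `dedupe_by_sequence('input.fasta', 'output.fasta')`; to key on both the ID and the sequence, as in the question, use the pair `(header, seq)` as the set element instead.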
Powered by Biostar version 2.3.0