How To Remove The Same Sequences In The Fasta Files?
22
45
13.5 years ago
Zhangleisdau ▴ 340

Some FASTA files (e.g., ESTs) have sequences with different IDs that nonetheless have the same sequence. I want to remove duplicate sequences based on the nucleotide sequence rather than the ID. How can I do this? Thank you all!

fasta sequence duplicates • 73k views
0

What is your understanding of "the same"? Are you including approximate string matching?

0

In fact, you might not want to remove the duplicated sequences so much as collapse them into a single sequence. I guess that is what you meant, but your question does not state it clearly.

49
13.5 years ago
brentp 24k

you can do this with fastx-toolkit

usage is like:

fastx_collapser < some.fasta > some.unique.fasta
1

always love toolkits

1

bummer, apparently only works for nucleotide sequences. Where's the love for protein fasta files?

0

In my case (nucleotides), it removed the duplicates fine, but it renamed the sequences...

0

Try Dedupe; it won't change the headers.

21
13.5 years ago

Linearize the sequences: prefix each FASTA header with '#' and append '@' to it, delete all the newlines, then replace '#' with a newline and '@' with a tab.

Sort this tab-delimited output on the second column (the sequence), case-insensitively, keeping only unique rows.

Restore the FASTA header and sequence:

sed -e '/^>/s/$/@/' -e 's/^>/#/' file.fasta  |\
tr -d '\n' | tr "#" "\n" | tr "@" "\t" |\
sort -u -t $'\t' -f -k 2,2  |\
sed -e 's/^/>/' -e 's/\t/\n/'
0

I think this is the only answer that actually works if your fasta file contains a multiple sequence alignment.

Both fastx_collapser and gt sequniq fail because they consider '-' to be an invalid character in a FASTA sequence. I didn't ask those tools to validate the sequences, and that's not their job, but they did it anyway, making them useless for their actual purpose. Have these authors never heard of the Robustness Principle?

0

I know this is old stuff, but I would like to thank you for this solution. Its logic is perfect for a pipeline I'm building.

0

How will you store the duplicates?

15
13.5 years ago
Neilfws 49k

There are quite a lot of utilities that will do this; usually they are found as part of larger software packages (often aimed at motif discovery). For example, RSA-tools contains the utility purge-sequence, and MEME contains purge. So those are some terms for your web search.

A roll-your-own solution is quite easy if you store the sequences as hash keys (which have to be unique). For example, using BioPerl's Bio::SeqIO you could try something like:

use strict;
use Bio::SeqIO;
my %unique;

my $file   = "myseqs.fa";
my $seqio  = Bio::SeqIO->new(-file => $file, -format => "fasta");
my $outseq = Bio::SeqIO->new(-file => ">$file.uniq", -format => "fasta");

while(my $seqs = $seqio->next_seq) {
  my $id  = $seqs->display_id;
  my $seq = $seqs->seq;
  unless(exists($unique{$seq})) {
    $outseq->write_seq($seqs);
    $unique{$seq} +=1;
  }
}

This will write out a new FASTA file, myseqs.fa.uniq, with only unique sequences (but no record of the other IDs with that sequence).

10
13.5 years ago
Darked89 4.6k

Depending on your application, you may also want to filter out entries with the same sequence that are, say, just 1 bp shorter.

uclust --sort seqs.fasta --output seqs_sorted.fasta
uclust --input seqs_sorted.fasta --uc results.uc --id 1.00
uclust --uc2fasta results.uc --input seqs.fasta --output results.fasta --types S

This should give you results.fasta purged of sequences that are identical to another entry and of equal or shorter length.

0

good to know! uclust looks to be a useful tool.

9
13.5 years ago

Hash each sequence with SHA-1 or MD5, then check for collisions. If you are familiar with Python, here is a useful recipe to start.
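
The linked recipe isn't reproduced here, but a minimal sketch of the idea, assuming hashlib and a small FASTA parser written just for this illustration (fasta_records is not from the recipe), could look like:

import hashlib
import sys

def fasta_records(path):
    # yield (header line, concatenated sequence) pairs from a FASTA file
    header, chunks = None, []
    with open(path) as fh:
        for line in fh:
            if line.startswith('>'):
                if header is not None:
                    yield header, ''.join(chunks)
                header, chunks = line.rstrip('\n'), []
            else:
                chunks.append(line.strip())
        if header is not None:
            yield header, ''.join(chunks)

if __name__ == '__main__':
    infile, outfile = sys.argv[1], sys.argv[2]
    seen = set()  # 16-byte MD5 digests of the sequences already written
    with open(outfile, 'w') as out:
        for header, seq in fasta_records(infile):
            digest = hashlib.md5(seq.upper().encode()).digest()
            if digest not in seen:  # no collision, so this sequence is new
                seen.add(digest)
                out.write('%s\n%s\n' % (header, seq))

Keeping only the digests means memory stays modest even for large files, which is also relevant to the RAM question below.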

0

Sorry, I was thinking of using the same approach to refine a big sequence file containing ~14,000 influenza virus sequences. Won't that consume a lot of RAM? Is there another way?

7
13.5 years ago
Will 4.5k

Here's a python script that should be able to do it:

import sys
from itertools import groupby

if __name__ == '__main__':

    # input and output paths (the original left inname/oname undefined; take them from the command line)
    inname, oname = sys.argv[1], sys.argv[2]

    ishead = lambda x: x.startswith('>')
    all_seqs = set()
    with open(inname) as handle:
        with open(oname, 'w') as outhandle:
            head = None
            for h, lines in groupby(handle, ishead):
                if h:
                    head = next(lines)
                else:
                    seq = ''.join(lines)
                    if seq not in all_seqs:
                        all_seqs.add(seq)
                        outhandle.write('%s\n%s\n' % (head, seq))
0

Does this script only collapse sequences that are completely overlapping or will it collapse sequences where there is only partial overlap but the overlapping region is redundant?

0

For this one it only checks for complete identity ... the if seq not in all_seqs only does a membership test on the whole sequence. However, with a little work you could probably modify this to look at sequence regions.

0

Wow, very clever. I don't fully understand a lot of the tool packages in Python, but this is very intuitive. I modified the script slightly to strip the extra newlines in the outhandle.write command; the resulting blank lines are aesthetically appealing but cause some programs to choke.

7
12.2 years ago

Works with big sets and with amino-acid/protein sequence files (unlike fastx-toolkit); very fast and simple to use: genometools:

gt sequniq -o out.fasta in.fasta
6
13.4 years ago

There are already some nice answers here.

I would recommend trying the CD-HIT versions designed for ESTs/nucleotides (CD-HIT-EST and CD-HIT-EST-2D) for this purpose.

6
13.4 years ago
Dave Lunt ★ 2.0k

uclust is about the fastest and one of the most flexible. If your sequences are not exactly the same length, or you also want to cluster some sequences that are almost, but not exactly, the same, then look carefully at the flexibility and options of the tool you choose.

6
13.4 years ago
Rm 8.3k

for smaller sets try this:

perl -ne 'BEGIN{$/=">";$"=";"}($d,$_)=/(.*?)\n(.+?)>?$/s;push @{$h{lc()}},$d if$_;END{for(keys%h){print">@{$h{$_}}$_"}}' multi.seq.fasta
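
For anyone who finds the one-liner opaque, here is a rough Python equivalent of what it does (not a literal translation): group records case-insensitively by sequence and join the IDs of duplicates with ';'.

import sys
from itertools import groupby
from collections import OrderedDict

if __name__ == '__main__':
    groups = OrderedDict()   # lower-cased sequence -> IDs sharing that sequence
    spelling = {}            # lower-cased sequence -> sequence as first seen
    with open(sys.argv[1]) as handle:
        name = None
        for is_header, lines in groupby(handle, lambda l: l.startswith('>')):
            if is_header:
                name = next(lines).rstrip('\n')[1:]
            else:
                seq = ''.join(l.strip() for l in lines)
                key = seq.lower()
                groups.setdefault(key, []).append(name)
                spelling.setdefault(key, seq)
    for key, ids in groups.items():
        print('>%s\n%s' % (';'.join(ids), spelling[key]))
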
1

Wonderful solution!!! Thank you for sharing it.

0

and so readable too. ;) now which has "... all the visual appeal of oatmeal with fingernail clippings mixed in." ?? http://en.wikiquote.org/wiki/Larry_Wall

5
13.5 years ago
Hanif Khalak ★ 1.3k

According to SeqAnswers: WU-blast, EBI-Exonerate, and bioperl all have stand-alone programs to make an "nrdb" = non-redundant database of sequences.

Note: WU-blast is now being distributed commercially as AB-Blast - get a free personal license. Older archived versions can be found here.

0

Strictly the nrdb program generates a non-identical database. The source for nrdb can be found in http://blast.advbiocomp.com/pub/nrdb/, no license required. This was a newer version than that bundled in WU-BLAST, no idea if this is still the case for AB-BLAST. Originally nrdb was used to produce NCBI's 'nr' and 'nt' databases. Of course 'nt' is no longer non-identical and 'nr' is produced using a slightly different process today, but nrdb is still a popular way to generate non-identical databases for use with BLAST.

4
8.2 years ago
BioApps ▴ 790

On Windows you can use my Kitten Sequence Dereplicator (which, by the way, was updated recently).

How to dereplicate sequences in Fasta file? Dereplication of a FASTA file via clustering


The program is based on CD-HIT, which is pretty accurate and fast.

3
9.1 years ago

I wrote a program for this problem, called Dedupe. It's very fast, and also (optionally) handles contained sequences. Usage:

dedupe.sh in=file.fasta out=nodupes.fasta

If you don't want fully-contained substrings removed, then add the flag ac=f (short for absorbcontainments=false).

0

Really helpful, thank you Brian Bushnell; it solved my need to get unique sequences in my set.

2
13.4 years ago
User 0726 ▴ 20

I have to say that removing duplicate sequences is not at all trivial, especially when you consider real data with sequencing/alignment errors. Also problematic are sequences that are identical but do not fully overlap. I can say with certainty that CD-HIT-EST will filter some duplicates, but far from all.

I am working on a pipeline that uses BLAST, filtering out those contigs that have significant similarity to more than just themselves.

0

Why doesn't CD-HIT-EST filter all duplicates? Shouldn't it be possible to set this in the parameters?

2
9.1 years ago
pyjiang2 ▴ 40

I tried fastx_collapser first, but it gives an error for multiple-alignment FASTA sequences.

I found this useful website, which gives unique FASTA sequences and also concatenates the header names of identical sequences: http://www.ncbi.nlm.nih.gov/CBBresearch/Spouge/html_ncbi/html/fasta/uniqueseq.cgi

0

Great find! Only tool that I could find that works with amino acids.

2
5.6 years ago
ferroao ▴ 20

in R:

library(seqinr)

# when you don't use attributes to name sequences
fasnoa <- seqinr::read.fasta("fasta.fasta", set.attributes = F)
# get names
seqnames <- names(fasnoa)
# flag duplicated sequences (logical vector; safe even when there are no duplicates)
dups <- duplicated(fasnoa)
# eliminate duplicates
namesnodups <- seqnames[!dups]
fasnoanodup <- fasnoa[!dups]
# write file
seqinr::write.fasta(fasnoanodup, namesnodups, "seqinrnoafasta.fas", nbchar = 10000)


# when you use attributes as the names of sequences
fas <- seqinr::read.fasta("fasta.fasta")
# read the Annot attribute when the sequences have names and you will use them
namesfas <- lapply(fas, function(x) attr(x, "Annot"))
# delete attributes
for (i in 1:length(fas)) {
  attributes(fas[[i]]) <- NULL
}
# flag duplicated sequences
dups <- duplicated(fas)
# create object without duplicates
fasnodup <- fas[!dups]
# create object with names
namesnodups <- namesfas[!dups]
# modify names
namesnodups <- gsub("\\s+|,", "_", sub(">", "", namesnodups))
# write file
seqinr::write.fasta(fasnodup, namesnodups, "seqinrfasta.fas", nbchar = 10000)
1
10.0 years ago
Shaun Jackman ▴ 420

seqmagick convert --deduplicate-sequences will remove duplicate sequences. See here:

http://fhcrc.github.io/seqmagick/convert_mogrify.html#examples

0

Hello Shaun,

I tried seqmagick as you suggested, but it did not give what I expected.

>seq1
ATCGATCGATATATATATAT
>seq2 part of seq1
CGATCGATATATATATA
>seq3 part of seq2
ATCGATATATAT
>seq4 reverse complementary of seq 2
TATATATATATCGATCG
>seq5 new seq
ATCGATCGACGATCGAGCGCG
>seq6 another new
ATCGATCGCGCGCGCGCGCGCGC
>seq7 psubstring of seq6
CGCGCGCGCGCGCGCG

Here is the command I used:

$ seqmagick convert --deduplicate-sequences test.fasta test_seqmagick.fasta
$ cat test_seqmagick.fasta
>seq1
ATCGATCGATATATATATAT
>seq2 part of seq1
CGATCGATATATATATA
>seq3 part of seq2
ATCGATATATAT
>seq4 reverse complementary of seq 2
TATATATATATCGATCG
>seq5 new seq
ATCGATCGACGATCGAGCGCG
>seq6 another new
ATCGATCGCGCGCGCGCGCGCGC
>seq7 psubstring of seq6
CGCGCGCGCGCGCGCG

Did I miss anything?

0

yifangt, one of us is certainly confused. None of your input sequences are duplicated. Feed seqmagick a file with some duplicated sequences and it will work.

0

Looks like he also wanted to remove contained sequences and reverse complements. That can be done with Dedupe (see the dedupe post on this page).

0

Thanks Brian!

Yes, what I meant is to dedupe sequences against both the original and reverse-complemented sequences, including containments on both strands. Not sure how necessary this is for an application like assembly, but that's another question.

0

It's not usually necessary unless you want to combine multiple assemblies. Sometimes it is also useful in RNA-seq transcriptome assembly.

0
7.2 years ago
Eslam Samir ▴ 110

Here is my free program on Github: Sequence database curator

It is a very fast program and it can deal with:

  1. Nucleotide sequences
  2. Protein sequences

It works under the following operating systems:

  1. Windows
  2. Mac
  3. Linux

It also works for:

  1. Fasta format
  2. Fastq format

Best Regards

0
5.5 years ago
SilentGene ▴ 110

Try this Python script, which can help you remove duplicate sequences from one or several FASTA files. It's easy to use the --id or --seq option to indicate whether you want the filter to work on the ID or on the sequence itself.
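
The linked script itself isn't shown here; as a stand-in, a minimal sketch with a similar interface (the argument handling is only illustrative) might look like:

import argparse

def fasta_records(path):
    # yield (header line, concatenated sequence) pairs from a FASTA file
    header, chunks = None, []
    with open(path) as fh:
        for line in fh:
            if line.startswith('>'):
                if header is not None:
                    yield header, ''.join(chunks)
                header, chunks = line.rstrip('\n'), []
            else:
                chunks.append(line.strip())
        if header is not None:
            yield header, ''.join(chunks)

if __name__ == '__main__':
    p = argparse.ArgumentParser(description='Remove duplicate FASTA records.')
    p.add_argument('fasta', nargs='+', help='one or more input FASTA files')
    p.add_argument('-o', '--out', required=True, help='output FASTA file')
    mode = p.add_mutually_exclusive_group()
    mode.add_argument('--id', action='store_true', help='deduplicate on the record ID')
    mode.add_argument('--seq', action='store_true', help='deduplicate on the sequence (default)')
    args = p.parse_args()

    seen = set()
    with open(args.out, 'w') as out:
        for path in args.fasta:
            for header, seq in fasta_records(path):
                # key is either the first word of the header or the upper-cased sequence
                key = header.split()[0] if args.id else seq.upper()
                if key not in seen:
                    seen.add(key)
                    out.write('%s\n%s\n' % (header, seq))

Invoked, for example, as python dedupe_fasta.py in1.fasta in2.fasta -o unique.fasta --seq (the script name here is hypothetical).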

0
5.4 years ago
michau ▴ 60

In Jalview (the best alignment viewer) there is a "Remove Redundancy" option, and you can select a threshold: Ctrl + D, or Edit > Remove Redundancy.

0

Is there a limit to the number of sequences this tool can handle? It might be an option for smaller files, but I rather doubt that a FASTA of several GB can be handled that way.
