I want to extract some sequences using ID from a multifasta file.
Using Perl is not possible because it gave an error when indexing the database, maybe because of its size? Is there any way to do this differently? Thanks for your help!
How about samtools faidx? It seems to be less well-known, but I think it should work for you.
If you have these reads in test.fa:
>one
AAAAA
>two
CCCCC
>three
GGGGG
Then just run:
samtools faidx test.fa
This will create an index file that will allow you to do sublinear queries for each name lookup (faster than the awk/grep solutions above and easier than rolling your own script).
For example:
samtools faidx test.fa two
Will output:
>two
CCCCC
It works for multiple regions (samtools faidx test.fa one two three), which lets you play tricks like:
xargs samtools faidx test.fa < names.txt
pip install pyfaidx
then
faidx --regex "^((?!>1;).)*$" input.fa > output.fa
or
faidx --invert-match --regex "^>1;.*$" input.fa > output.fa
The first example uses a negative lookahead, which may be more difficult to reason about, while the second example depends on the -v | --invert-match flag in the faidx script.
Here's a regex sandbox to play in: https://regex101.com/r/tJRWRY/1/
You can use BLAST to extract sequences from a large file, see below. That being said, I am not sure how well this works if the file is large solely because it contains a very large number of sequences; I think the original use case of BLAST is optimized for retrieving locations from long sequences.
# look at the file
$ head EC4115.fa
>NC_011353.1 Escherichia coli O157:H7 str. EC4115 chromosome, complete genome.
AGCTTTTCATTCTGACTGCAACGGGCAATATGTCTCTGTGTGGATTAAAAAAAGAGTCTC
TGACAGCAGCTTCTGAACTGGTTACCTGCCGTGAGTAAATTAAAATTTTATTGACTTAGG
TCACTAAATACTTTAACCAATATAGGCATAGCGCACAGACAGATAAAAATTACAGAGTAC
# generate the blast database
$ makeblastdb -dbtype nucl -out EC -in EC4115.fa -parse_seqids
# retrieve an entry by id
$ blastdbcmd -db EC -entry 'NC_011353.1' | head
>lcl|NC_011353.1 Escherichia coli O157:H7 str. EC4115 chromosome, complete genome
AGCTTTTCATTCTGACTGCAACGGGCAATATGTCTCTGTGTGGATTAAAAAAAGAGTCTCTGACAGCAGCTTCTGAACTG
GTTACCTGCCGTGAGTAAATTAAAATTTTATTGACTTAGGTCACTAAATACTTTAACCAATATAGGCATAGCGCACAGAC
....
# query the blast database by id and coordinates
$ blastdbcmd -db EC -range 100-105 -entry 'NC_011353.1'
>lcl|NC_011353.1:100-105 Escherichia coli O157:H7 str. EC4115 chromosome, complete genome
TTAAAA
Thanks for your response! I will try that. My database is paired-end reads containing millions of small sequences of 76bp. I want to use some IDs to extract the corresponding sequences, but with the Bio::DB::Fasta module in Perl the indexing step is too long and buggy! I'm not sure that generating a BLAST database will take less time! Thanks
I believe this Perl script will do what you need (just whipped it up; excuse any typos):
#!/usr/bin/perl
use strict;
use warnings;
use IO::File;

my $inseq = 0;
my $inFh = IO::File->new("myfile.fa") || die "can't open file\n";
while ( my $line = $inFh->getline ) {
    chomp($line);
    if ($inseq) {
        if ($line =~ /^>/) {
            $inseq = 0;
        } else {
            print $line . "\n";
            next;
        }
    }
    if ($line =~ /^>/ && $line =~ /$ARGV[0]/) {
        $inseq = 1;
        print $line . "\n";
    }
}
$inFh->close;
Run it like:
$ perl myscript.pl SequenceName1234 > output
Here is another way to use perl to do this task. It doesn't require BioPerl.
1. The database
>HMPREF1307_00002 protoporphyrinogen IX dehydrogenase [menaquinone] [Klebsiella pneumoniae subsp. pneumoniae WGLW3]
MKTLILFSTRDGQTREIASFLASELKELGIDADTLNLNRTDVVEWHHYDRVVIGASIRYG
HFHPAVDRFVKKHLAALQALPGAFFSVNLVARKPEKRTPQTNSYTRKFLLNSPWQPQSCA
VFAGALRYPRYSWYDRFMIRLIMKMTGGETDTRKEVVYTDWQQVSRFAREIAQMARK
>HMPREF1307_00003 trk system potassium uptake protein trkH [Klebsiella pneumoniae subsp. pneumoniae WGLW3]
MHFRAITRIVGLLVILFSGTMIVPGLVALIYRDGAGRAFTQTFFVALAIGSMLWWPNRKQ
KGELKSREGFLIVVLFWTVLGSVGALPFIFSEQPNLTVTDAFFESFSGLTTTGATTLVGL
DSLPHAILFYRQMLQWFGGMGIIVLAVAILPILGVGGMQLYRAEMPGPLKDNKMRPRIAE
TAKTLWLIYVLLTIACALALWFAGMPAFDAIGHSFATIAIGGFSTHDASVGYFNSPMINS
IIAIFLLISGCNYGLHFSLLSGRSLKVYWRDPEFRMFIGVQLTLVIVCTLVLWLHNVYGS
VLTTLNQAFFQVVSMATTAGFTTDSIARWPLFLPVLLLCSAFIGGCAGSTGGGLKVIRIL
LLFKQGNRELKRLVHPNAVYSIKLGNRALPERILEAVWGFFSAYALVFIISMLAIIATGV
DDFSAFASVVATLNNLGPGLGVVADNFATMNPVAKWILIANMLFGRLEVFTLLVLFTPTF
WRE
................................
................................
2. Sequences for retrieval
HMPREF1307_00002
HMPREF1307_00003
3. The perl script to do it
#!/usr/bin/perl -w
#This script takes a list of fasta IDs and retrieves the corresponding fasta seqs from the database
#Usage: perl retrieveFasta2.pl listOfId database.fasta > result
use strict;

my $line = '';
my $fasta = '';
my %hash = ();
my @fileretrieve = ();
my $header = '';
my $dna = '';

open(FILE, $ARGV[0]) or die "can't open file";
while (<FILE>) {
    chomp($_);
    push(@fileretrieve, $_);
}
close FILE or die "can't close file";

open(FILE, $ARGV[1]) or die "can't open file";
while (<FILE>) {
    chomp($line = $_);
    if ($line =~ /^>.*$/) {
        if (exists($hash{$line})) {
            $fasta = "dupl";
        } else {
            $fasta = $line;
            #print "$fasta\n";
        }
    } else {
        if ($fasta ne "dupl") {
            $hash{$fasta} .= $line;
        }
    }
}
close FILE or die "can't close file";

while (($header, $dna) = each(%hash)) {
    foreach my $file (@fileretrieve) {
        # \Q...\E guards against regex metacharacters in IDs; (\s|$) also matches an ID at the end of a header
        if ($header =~ /\Q$file\E(\s|$)/) {
            print "$header\n$dna\n";
        }
    }
}
Note: see the usage comment in the script for how to run it. I managed to use this script to retrieve over 300 sequences from a database of about 750Mb containing about 2 million protein sequences.
There is also a UCSC tool for extracting multiple entries of a fasta file: http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/faSomeRecords
Here is a bash script to extract multiple sequences from a fasta file.
#!/usr/bin/bash
#extract multiple sequences from a large fasta file
while read p; do
    echo ">$p" >> contig_out.txt
    grep -A 10000 -w "$p" fasta_file.fa | sed -n -e '1,/>/{/>/!p}' >> contig_out.txt
done < contig_list.txt
Where contig_list.txt is a list of the sequence IDs of interest (one sequence ID per row) and contig_out.txt contains the sequence IDs followed by their sequences in fasta format.
Note: increase the grep -A parameter if sequences exceed 10000 lines.
A different approach using Heng Li's awk (named below as hawk), also described here, would be like so:
# extract the sequence matching a read id
$ hawk -c fastx '$name ~ /NC_011353.1/ { print $seq }' < EC4115.fa
This example will read through the entire file upon each invocation.
If you are going to retrieve sequences multiple times, I recommend building an SQL db. I recommend SQLite as it's a very light yet powerful program: you don't need to run a server, the db is in one file, and it has bindings to Python (and probably to other languages).
We have successfully used SQLite with sets of several million sequences (fasta files of several GBs). The query time is well below 1 sec if you properly index your sequence table.
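A minimal sketch of that approach using Python's built-in sqlite3 module; the table layout, file names, and the first-word-of-header ID convention are all assumptions, not the poster's actual schema:

```python
import sqlite3

def build_db(fasta_path, db_path):
    """Load a FASTA file into an SQLite table keyed (and indexed) by sequence ID."""
    conn = sqlite3.connect(db_path)
    # PRIMARY KEY gives us the index that makes lookups fast
    conn.execute("CREATE TABLE IF NOT EXISTS seqs "
                 "(id TEXT PRIMARY KEY, header TEXT, seq TEXT)")

    def store(header, chunks):
        seq_id = header.split()[0][1:]  # first word of the header, '>' stripped
        conn.execute("INSERT OR REPLACE INTO seqs VALUES (?, ?, ?)",
                     (seq_id, header, "".join(chunks)))

    header, chunks = None, []
    with open(fasta_path) as fh:
        for line in fh:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    store(header, chunks)
                header, chunks = line, []
            else:
                chunks.append(line)
    if header is not None:
        store(header, chunks)
    conn.commit()
    return conn

def fetch(conn, seq_id):
    """Return (header, sequence) for one ID, or None if absent."""
    return conn.execute("SELECT header, seq FROM seqs WHERE id = ?",
                        (seq_id,)).fetchone()
```

Once the table is built, each fetch() hits the primary-key index instead of rescanning the file, which is where the sub-second query times come from.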
One of the difficulties is the lack of standards in fasta header format. I'm assuming you're interested in the first element of the header as an ID. Here is my Python proposal (change the IDclean line if you want another ID from your header). I used the dictionary type to index the fasta first: it makes this process and the subsequent ID extraction very fast ;-)
#!/usr/bin/python
### extracts sequences whose ids are in the IDs file (arg 1)
### from a fasta file (arg 2)
import string
import sys

ListOfIds = sys.argv[1]
fastafile = sys.argv[2]

try:
    ids = open(ListOfIds, 'r')
except IOError, e:
    print "File error: ", ListOfIds
    sys.exit(1)

lignes = ids.readlines()
req = []
for ligne in lignes:
    req.append(ligne.strip())

#### reading the fasta file to cut
handle = open(fastafile)
bank = {}
seqIDmap = {}
seq_id = handle.next()
while (seq_id[0] != ">"):
    seq_id = handle.next()
while True:
    try:
        seq = handle.next()
        line = handle.next()
        while (line[0] != ">"):
            seq = seq + line
            line = handle.next()
        bank[seq_id] = seq
        IDclean = string.split(seq_id, " ")[0][1:].strip()
        seqIDmap[IDclean] = seq_id
        seq_id = line  # for the next record
    except StopIteration:
        break
# last record
bank[seq_id] = seq
seqIDmap[string.split(seq_id, " ")[0][1:].strip()] = seq_id
handle.close()
######## end reading the potentially big fasta file

faName = fastafile.split("/")[-1]
listName = ListOfIds.split("/")[-1]
subsetName = listName + "-" + faName
subset = open(subsetName, "w")
nbNF = 0
for i in req:
    try:
        subset.write(seqIDmap[i].strip() + "\n")
        subset.write(bank[seqIDmap[i]].strip() + "\n")
    except KeyError:
        print i, "not found in fasta"
        nbNF += 1
subset.close()
print
print nbNF, "IDs (listed above) from", listName, "have not been found in", faName
print
print "the Subset fasta file", subsetName, "is now created"
Here's a simple Python solution for one or multiple IDs; it can also be adapted to read a file containing a list of IDs, one per line, by adding an extra for loop to populate the ID list. It returns entries from ~1GB files in about a second on my 5th-gen i5.
import sys

#Setup list of target IDs
ids = ["AccessionNumber1", "AccessionNumber2", "etc..."]
#Set conditional
go = False
#Read target file
target = open("C:/file_path.txt")
for line in target:
    if go == True:
        if ">" in line:
            go = False
            continue
        else:
            sys.stdout.write(line)
            continue
    else:
        for id in ids:
            if id in line:
                go = True
                sys.stdout.write(line)
            else:
                continue
target.close()
I ran into this problem and developed a small Python lib to answer it; like the other answers it uses an index, but it is in pure Python. Because it first sorts the fasta file it is extremely fast, even faster than samtools faidx, but the indexing takes longer.
As a rule of thumb, indexes become worthwhile if you need to extract sequences more than once. Here is the link; if you have any questions feel free to ask.
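For readers curious about the general idea, a byte-offset index of the kind faidx-style tools build can be sketched in pure Python; this is an illustration under my own function names, not the poster's library:

```python
def index_fasta(path):
    """Map each sequence ID (first word of the header) to the byte
    offset of its '>' line, in a single pass over the file."""
    index = {}
    with open(path, "rb") as fh:
        offset = fh.tell()
        for line in iter(fh.readline, b""):
            if line.startswith(b">"):
                seq_id = line[1:].split()[0].decode()
                index[seq_id] = offset
            offset = fh.tell()
    return index

def get_record(path, index, seq_id):
    """Seek straight to a record and read until the next header or EOF."""
    with open(path, "rb") as fh:
        fh.seek(index[seq_id])
        record = [fh.readline()]          # the header line itself
        for line in iter(fh.readline, b""):
            if line.startswith(b">"):     # start of the next record
                break
            record.append(line)
    return b"".join(record).decode()
```

Building the index costs one full scan, but every later lookup is a single seek, which is why indexes pay off when you extract sequences more than once.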
Here is a Perl script to extract sequences by their IDs (assuming that you have one sequence identifier per line in the file "ids.txt") from a multi-fasta file:
perl -ne 'if(/^>(\S+)/){$c=$i{$1}}$c?print:chomp;$i{$_}=1 if @ARGV' ids.txt multifasta.file
Another way of doing the same:
$ seqtk subseq multifasta.file ids.txt
I will use some code in Perl; I'm assuming that your fasta file has the ID as the only text in the fasta comment.
#!/usr/bin/perl
# extractSeq.pl -> extract fasta sequences from a list of IDs
use strict;
use warnings;

$ARGV[2] or die "use: perl extractSeq.pl LIST FASTA OUTPUT\n";
my ($list, $fasta, $out) = @ARGV;
my %list;

open L, "$list" or die;
while (<L>) {
    chomp;
    $list{$_} = 1;
}
close L;

open O, ">$out" or die;
open F, "$fasta" or die;
$/ = "\n>";   # read the fasta file record by record
while (<F>) {
    s/>//g;
    my @seq = split(/\n/, $_);
    my $id = shift @seq;
    next unless (defined $list{$id});
    print O join("\n", ">$id", @seq), "\n";
}
close O;
close F;
You can also try using awk:
awk '/\bscaffold_name\b/ {print ">"$0}' RS=">" genome.fna
Your question cannot be answered without more details.
If BioStar still suggested similar questions, you would have seen this one which answers your (identical) question: http://www.biostars.org/post/show/1195/extracting-sequence-from-a-3gb-fasta-file/.
Thanks a lot everybody for your precious help! I tried your solutions both on my desktop computer and a cluster. Here are the times taken by some solutions on the cluster. Chris Miller's solution: the script works well but takes 1m19.999s to retrieve one ID, so it's too long :( matted's solution: 0m42.859s to retrieve one ID. JC's solution: 1m50.973s to retrieve one ID. Istvan Albert's solution: generating the blast database is time consuming! Finally I will use grep, which takes 0m28.827s on my computer and 0m1.271s on the cluster.
Of course it's a solution for only "BIG database".
Thanks everybody for taking time to help others.
but how does grepping a Fasta file for an id return the sequence for that id?
but that grep only gets the fasta header, not the full sequence. Am I missing something?
You could do grep -A 1 ID file.fasta to get the following line.
Yes, I know that, but in a strict sense the fasta file can contain multiple lines of sequence.
You're right, I was just thinking of the short read Illumina output case. I don't know how to get grep to do the general case.
ha, I didn't know about the -A flag, but it is something I've always needed ;-)
I'm sorry, I forgot the -A 1 in the command line! And as matted said, it's for Illumina reads with single-line sequences! Thanks everybody for your answers!
I'm looking to pull out the full FASTA entry from a multiFASTA file using grep given the ID that appears in the header. This entails grabbing two more lines from a file that looks like the following:
>Full header information with matching ID
GATATC
ATATGC
>Full header for next entry with ID
TTTAAGGG
GGGAACC
The sequence data is spread across multiple lines (obviously with line breaks). The following command / RegEx is close to what I need.
pcregrep -M 'searchID.*(\n|.)*>' query.faa
# -M multi lines
#(\n new lines and any chars) - until it finds another ">"
The problems are:
1) I don't want / need the full header line of the next entry, which the regex grabs because of the terminating ">".
2) The regular expression doesn't work for the last entry, i.e. I need to "OR" EOF with the terminating ">" character. Is there any way to match up to (">" OR EOF) without including the following header line that starts with ">"?
Right now it returns the following:
>Header1
GATCG
TATTTA
CATTAT
>Header2
Or it returns nothing for the last entry, since there isn't another entry beginning with ">".
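For what it's worth, one way around both problems is to drop the regex and track record boundaries while scanning; a minimal Python sketch (the function name and substring matching against the header are my assumptions):

```python
import sys

def extract_record(path, search_id):
    """Print the full FASTA record whose header contains search_id,
    stopping at the next '>' header or at end of file."""
    printing = False
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                # re-evaluate at every header: start printing on a match,
                # stop at the next non-matching header
                printing = search_id in line
            if printing:
                sys.stdout.write(line)
```

Because the loop simply runs off the end of the file, the last entry needs no special EOF handling, and the next header is never emitted.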
Thank you for this script
grep -A 1 2015.M.420nP seqs_non_chimera.fna >output1
But I also have a question: I am getting an output with a double "-" ("--") separating each fasta record. I would like to get rid of it or, failing that, at least to know that it will not affect my QIIME downstream analysis.
Thank you a lot for your help
how can I extract the first 10 fasta sequences from a file?
Which type of sequence and file do you have? In which program do you want to do this? There are many ways to do this.
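If it is a plain multi-line FASTA file, a minimal Python sketch could look like this (the function name is illustrative):

```python
def first_n_records(path, n=10):
    """Yield the first n (header, sequence) pairs from a FASTA file,
    joining multi-line sequences into one string."""
    count = 0
    header, chunks = None, []
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                    count += 1
                    if count == n:
                        return      # stop reading once we have n records
                header, chunks = line.rstrip("\n"), []
            else:
                chunks.append(line.strip())
    if header is not None and count < n:
        yield header, "".join(chunks)   # the last record in the file
```

Because it is a generator that returns early, it never reads past the tenth record, so it stays fast on large files.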