While I of course never make stupid mistakes...ahem...I have many "friends" who do, but I'm sure there are some other very common pitfalls that are unique to bioinformatics programming. What are your favorites?
I truncated many FASTA files this way when trying to see which headers they contained:
grep > some.fasta
The unquoted > is interpreted by the shell as output redirection, so the file is truncated before grep even runs. What I meant was grep '>' some.fasta.
I also see a lot of off-by-one errors due to switching between formats:
BED is 0-based
GFF/GTF are 1-based
and switching between languages:
Python and nearly every other modern language use 0-based indexing
R is 1-based (as is Lua)
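A minimal sketch of the conversion in Python (hypothetical helper names; the interval semantics are the standard ones, BED being 0-based half-open and GFF/GTF 1-based closed):

def bed_to_gff_coords(start, end):
    # BED [start, end) -> GFF [start + 1, end]
    return start + 1, end

def gff_to_bed_coords(start, end):
    # GFF [start, end] -> BED [start - 1, end)
    return start - 1, end

# a feature covering the first 100 bases of a chromosome:
assert bed_to_gff_coords(0, 100) == (1, 100)
assert gff_to_bed_coords(1, 100) == (0, 100)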
Gene annotation stored in an Excel file, and then finding out that some HUGO gene names have been mangled by Excel, which interprets them as dates: SEPT9 becomes Sept-9. Conclusion: do not use the .xls format to store your data.
Hearing people make this eternal mistake: "Hey, these two sequences are 50% homologs." Homology is all-or-nothing, sequences either are or aren't homologous; the 50% figure is sequence identity.
I feel like a lot of "stupid mistakes" revolve around betrayed trust and false assumptions:
Trusting that a downloaded file is actually fully downloaded
Trusting that an aligner will accept a list of query files instead of just taking the first and ignoring the rest (quiz: which ones am I talking about?)
Assuming that the quality scores in a FASTQ file are from a great Sanger-encoded (Phred+33) run instead of a very poor Illumina 1.3 (Phred+64) run
Assuming chr1 is followed by chr2, not chr10, when names are sorted lexicographically
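The last one is easy to demonstrate; the natural-sort key below is a generic Python sketch, not from any particular library:

chroms = ["chr1", "chr2", "chr10", "chr11", "chrX"]
print(sorted(chroms))  # ['chr1', 'chr10', 'chr11', 'chr2', 'chrX'] -- lexicographic!

import re
def natural_key(name):
    # split into text and digit chunks so the numbers compare numerically
    return [int(tok) if tok.isdigit() else tok for tok in re.split(r"(\d+)", name)]

print(sorted(chroms, key=natural_key))  # ['chr1', 'chr2', 'chr10', 'chr11', 'chrX']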
If you'll forgive an attempt to be somewhat provocative, my two favorite mistakes are:
1. Letting academics build software
Academics need to publish papers, and one easy way to do that is to implement an algorithm, demonstrate that it works (more or less), and type it up in a manuscript. BT,DT. But robust and useful software requires a bit more than that, as evidenced by the sad state of affairs in typical bioinformatics software (I think I've managed to crash every de novo assembler I've tried, for instance. Not to mention countless hours spent trying - often in vain - to get software to compile and run). Unfortunately, you don't get a lot of academic credit for improved installation procedures, testing, software manuals, or, especially, debugging of complicated errors. Much better and more productive to move on to the next publishable implementation.
2. Letting academics build infrastructure
Same argument as above, really. Academics are eager to apply for funding to build research infrastructure, but of course they aren't all that interested in doing old and boring stuff. So although today's needs might be satisfied by a $300 FTP server, they will usually start conjecturing about tomorrow's needs instead and embark on ambitious, blue-sky stuff that might result in papers, but not in actually useful tools. And even if you get a useful database or web application up and running (and published), there is little incentive to update or improve it, and it is usually left to bitrot while the authors go off in search of the next publication.
Well, I have a couple:
1) Running a batch BLAST job and forgetting the "-o something.out" option, then switching off the monitor and coming back the next day to a terminal full of gibberish.
2) Running "tar -zxvf" without checking the tar file first. I have decompressed thousands of files into my current directory, assuming they came in their own folder (tar -tzf file.tgz would have listed the contents without extracting them).
I'll offer this one, which is a bit on the general side: deletion of data that appears to serve no relevance from the computational side, but which has importance to the biology/biologist. Often this arises from a lack of clear communication between the two individuals/teams as to what everything means, exactly, and why it is relevant to the process being developed.
I often encounter problems related to the fact that computer scientists index their arrays starting at 0, while biologists index their sequences starting at 1. A simple concept that drives the noobs mad and even trips up more experienced scientists every once in a while.
Masking out sequence in a FASTA file (e.g. s/TAAT/NNNN/ig) where the sequence is formatted, i.e. split onto multiple lines.
This will miss a TAAT that is split over the end of one line and the start of the next!
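A minimal sketch of the safe approach using Biopython's SeqIO (assuming it is installed; the filenames are made up). The parser joins the wrapped lines for you, and SeqIO.write rewraps the output:

import re
from Bio import SeqIO
from Bio.Seq import Seq

records = []
for rec in SeqIO.parse("in.fasta", "fasta"):
    # rec.seq holds the whole sequence with line breaks removed,
    # so motifs that spanned a line break get masked too
    rec.seq = Seq(re.sub("TAAT", "NNNN", str(rec.seq), flags=re.IGNORECASE))
    records.append(rec)
SeqIO.write(records, "out.fasta", "fasta")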
The classic mistake (also mentioned above by Casey) is not being aware that the genome assembly affects coordinates: the same feature has different coordinates in different assembly versions.
Doing pathway statistics or gene set enrichment statistics and then presenting the list of gene sets as a valuable result, instead of using the statistics merely as a means to decide which pathways need to be evaluated.
(This is bad for many reasons: because the statistical contribution of a key regulatory gene in a pathway is equal to that of 1 out of 7 iso-enzymes that catalyze a non-relevant side reaction, because the significance of a pathway changes when you add a few non-relevant genes, and because we have many overlapping pathways.)
Another typical mistake is to solve problems that nobody has.
Re-inventing the wheel. So often have I had to debug (or just replace) a bad implementation of a FASTA parser when BioPython/BioPerl have perfectly good ones; I don't understand why no one bothers to use them. Ten minutes on Google can save you two days of work, and other people a week of work (you save two days of programming, they save a week of understanding your program to find the bug).
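For the record, the Biopython version really is this short (SeqIO.parse is its standard FASTA parser; the filename is made up):

from Bio import SeqIO

for record in SeqIO.parse("reads.fasta", "fasta"):
    print(record.id, len(record.seq))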
I gave my Amazon EC2 password to someone in my group who wanted to run something quickly (estimated cost: $2). I received the bill two months later: $156. This person had forgotten to shut down the instance. That was eight months ago and I'm still waiting for my reimbursement... Conclusion: don't trust colleagues!
I made one a few months ago. I launched a heavy process on a pay-per-use cluster, and it ran for a week. I thought, 6 cents/hr can't be too much money. I received a bill for $832 USD. I'm not using that cluster again unless I estimate the total cost of the process first.
edit: the price is per core
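The back-of-the-envelope arithmetic is worth spelling out (assuming the whole bill came from this one job): $0.06/hr x 24 hr x 7 days is about $10 per core per week, so an $832 bill implies the job ran on roughly 80 cores. Multiply the hourly rate by the core count before you hit enter.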
One mistake: not looking to see that the 0x4 bit in the bitflag column of a SAM (or BAM) file indicates the entry is unmapped.
POS may be set to something non-null (and RNAME to an actual string!), but these are not meaningful if the 0x4 flag says the read is unmapped.
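A minimal check in plain Python (the filename is made up; 0x4 is the SAM spec's "segment unmapped" bit):

for line in open("aln.sam"):
    if line.startswith("@"):
        continue  # header line
    fields = line.rstrip("\n").split("\t")
    flag = int(fields[1])  # FLAG is column 2
    if flag & 0x4:
        continue  # unmapped: RNAME (col 3) and POS (col 4) are not meaningful
    # safe to use fields[2] (RNAME) and fields[3] (POS) here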
Tacking on another command-line argument without looking through the rest of them:
novoalign -a ATCTCGTATGCCGTCTTCTGCTTG -d genome.ndx -F ILMFQ -f query.fq -a -m -l 17 -h 60 -t 65 -o sam -o FullNW
The first adapter argument (-a ATCTCGTATGCCGTCTTCTGCTTG) is negated by the empty second -a.
I've just made one, which cost me a good headache trying to figure out the biology underlying my strange results! I assumed the POS field of a SAM file to be the leftmost position of my mapped read on the '+' strand, and the rightmost position on the '-' strand. In fact, POS is always the leftmost mapping position, whatever the strand.
Note to self: "Read the manual..."
Running the bwa/GATK pipeline with a corrupt/incompletely generated bwa index of hg19. Everything still aligned, but one of the 2 mates would have its strand set incorrectly. Other than the insert size distribution, everything seemed normal, until the TableRecalibration step downshifted all quality scores significantly and then UnifiedGenotyper called 0 SNPs. First time I've seen a problem at step 1 of a pipeline not become obvious until step 5+.
Some really great comments here; nice to know that such things happen to all genii ;). I have to say my most painful moments relate to my assumption that data obtained elsewhere is correct in every way. I also remember, early in my career, using PDB files and realising that sometimes chains are represented more than once, and thus, when manually checking calculations involving atomic coordinates, being utterly perplexed and wanting to break my computer. Oh, the joys of bioinformatics.
Assuming that the gene IDs in "knownGenes.gtf" from UCSC are actually gene IDs. Instead, they just put the transcript ID in the gene ID field.
This just caused me a bit of pain when doing read counting at the gene level. Basically, any constitutive exon in a gene with multiple splice forms was ignored, because all the reads in that exon were treated as ambiguous (each transcript looked like a separate, overlapping "gene").
I wouldn't say it's stupid, but I think a very common mistake is not correcting for batch effects in high-throughput data.
Batch effects can (best case) hide the real effect you're looking for, or (worst case) make it look like your variable of interest is contributing to your findings when it's actually an artifact.
Leek, Irizarry, et al. have a sobering review on this here.
A double-mistake combo: 1 - using tar to compress a single file (plain gzip would have done) and 2 - inverting the command arguments:
tar cvfz file file.tgz
instead of:
tar cvfz file.tgz file
Bye bye file! The f option takes the archive name from the next argument, so the first form creates an archive called "file", clobbering the original.
It happened to me so many times that I was considering getting a brain scan.