A lot of papers I've read have used the qvalue package from Bioconductor to do multiple testing correction. There is a function in the basic R stats package called p.adjust which seems to do the same thing. What is the advantage of the qvalue package, and why is it so frequently used instead of the built in R version?
p.adjust() and the qvalue package aren't doing exactly the same thing. They do quite similar things with similar ends in mind, but the algorithms differ, so they produce different results (Gordon Smyth, who wrote p.adjust(), has a short summary of the history here, and Tim Triche gives a nice explanation of the differences in the same thread). Interestingly, the packages I use almost always call p.adjust() rather than the qvalue package, though when a paper uses them the authors may say "q-value" when they really mean "BH adjusted p-value".
BTW, q-values are a bit more involved to calculate, and you need a reasonably large number of tests to do it reliably. In effect, you use the p-value distribution at hand to estimate the proportion of true null hypotheses (often written pi0) and then multiply the BH adjusted p-value by that estimate. Consequently, p.adjust() ends up being more generally applicable, but in cases where q-values are appropriate they tend to give more power (since pi0 <= 1, q-values are never larger than the corresponding BH adjusted p-values).
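To make the relationship concrete, here's a small sketch (in Python rather than R, purely for illustration) of the BH adjustment as p.adjust(p, "BH") computes it, plus a deliberately crude Storey-style pi0 estimate; the actual qvalue package uses a smoothed, more careful estimator, so treat this as a conceptual sketch, not a reimplementation:

```python
import numpy as np

def bh_adjust(p):
    """Benjamini-Hochberg adjusted p-values (same idea as R's p.adjust(p, "BH"))."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Scale each sorted p-value by m / rank ...
    ranked = p[order] * m / np.arange(1, m + 1)
    # ... then enforce monotonicity with a running minimum from the top down
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.minimum(adj, 1.0)
    out = np.empty(m)
    out[order] = adj
    return out

def pi0_estimate(p, lam=0.5):
    """Crude estimate of the null proportion pi0: the fraction of p-values
    above lambda, rescaled. (A stand-in for qvalue's smoother estimator.)"""
    p = np.asarray(p, dtype=float)
    return min(1.0, float(np.mean(p > lam)) / (1.0 - lam))

# Hypothetical p-values, just to exercise the functions
p = np.array([0.001, 0.008, 0.039, 0.041, 0.27, 0.44, 0.59, 0.68, 0.74, 0.91])
bh = bh_adjust(p)
q = pi0_estimate(p) * bh  # q-values ~ pi0 * BH adjusted p-values
```

With this toy data pi0 comes out below 1, so `q` is uniformly smaller than `bh`, which is exactly where the extra power comes from.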
The qvalue package implements FDR methods tailored for genomics studies. The p.adjust function implements "standard" FDR methods. The qvalue package also has a plotting function and a GUI.