Let's say you perform a large number of statistical tests based on some high-throughput screen. Since you are worried that some of the resulting p-values could have arisen by chance, you perform FDR correction and only proceed with the genes whose adjusted p-values fall below 5%, i.e. you expect fewer than 5% of those discoveries to be false positives. Statistical tests applied to the experimental follow-up results then show that X of those genes indeed have p-values below your desired threshold. The question then becomes: should one stop here and consider the original screen results for these particular genes to be successfully validated, or should one also perform FDR correction on the validation p-values?
You should correct your nominal p-values only once.
p                                   # vector of nominal p-values
fdr <- p.adjust(p, method = "fdr")  # Benjamini-Hochberg adjustment
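If you want to see what that adjustment actually does, here is a minimal sketch of the Benjamini-Hochberg procedure that `p.adjust(p, method = "fdr")` implements, written in Python. The p-values are made up purely for illustration:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg FDR adjustment, mirroring R's p.adjust(method='fdr')."""
    n = len(pvals)
    # Indices of p-values sorted in ascending order
    order = sorted(range(n), key=lambda i: pvals[i])
    adj = [0.0] * n
    cummin = float("inf")
    # Walk from the largest p-value down, taking a running minimum
    # of p * n / rank so that adjusted values stay monotone
    for rank_idx in range(n - 1, -1, -1):
        i = order[rank_idx]
        val = pvals[i] * n / (rank_idx + 1)
        cummin = min(cummin, val)
        adj[i] = min(cummin, 1.0)
    return adj

# Hypothetical nominal p-values from a screen
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
fdr = bh_adjust(p)
print(fdr)  # only the first two survive an FDR threshold of 0.05
```

Note that the adjustment depends on the whole vector: each adjusted value is scaled by the total number of tests, which is exactly why a second round of correction on the already-adjusted values would be both redundant and overly conservative.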
FYI, it's always a good idea to check the distribution of your nominal p-values before multiple-testing correction:
Here's a nice post on interpreting p-value histograms: http://varianceexplained.org/statistics/interpreting-pvalue-histogram/
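As a quick sketch of what that check looks like: a healthy p-value distribution is roughly flat (the null tests are uniform on [0, 1]) with a spike near zero from the true effects. The simulated p-values below are hypothetical, standing in for `hist(p)` in R:

```python
import random

random.seed(0)
# Simulated nominal p-values: 900 null tests (uniform on [0, 1])
# plus 100 true effects concentrated near zero (hypothetical mixture)
pvals = [random.random() for _ in range(900)] + \
        [random.random() * 0.01 for _ in range(100)]

# Crude text histogram over 10 bins
bins = [0] * 10
for p in pvals:
    bins[min(int(p * 10), 9)] += 1
for i, count in enumerate(bins):
    print(f"{i/10:.1f}-{(i+1)/10:.1f} | {'#' * (count // 10)} ({count})")
```

If instead the histogram is U-shaped, skewed toward 1, or lumpy in the middle, the test assumptions are probably violated and FDR correction of those p-values won't rescue them.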