The answer to the question "which software did better?" depends very much on what you are hoping to achieve. Is this a benchmarking study, in which you know the motifs you are looking for, or is this a motif discovery project?
If the former, then the standard method is some kind of sensitivity (TP/(TP+FN)) versus precision (TP/(TP+FP)) analysis. (Note that TP/(TP+FP) is precision, also called positive predictive value; specificity proper would be TN/(TN+FP).) Even here, though, "best" depends on the downstream application.
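The two measures above can be computed directly from the confusion-matrix counts of a benchmarking run. A minimal sketch (the counts below are made-up illustrative numbers, not from any real tool comparison):

```python
def sensitivity(tp: int, fn: int) -> float:
    """TP / (TP + FN): fraction of the known motifs that were recovered."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def precision(tp: int, fp: int) -> float:
    """TP / (TP + FP): fraction of reported motifs that are real."""
    return tp / (tp + fp) if (tp + fp) else 0.0

# Hypothetical counts for one predictor on a benchmark set.
tp, fp, fn = 8, 4, 2
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 8/10 = 0.80
print(f"precision   = {precision(tp, fp):.2f}")    # 8/12 = 0.67
```

A conservative method will tend to score high on precision and lower on sensitivity; a permissive one the reverse, which is why "best" depends on what you do with the predictions next.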
The "best" performance really depends on what you want to do next. If you want high confidence in your results, you need a method with statistical significance. SLiMFinder, for example, has robust, well-benchmarked statistics that account for evolutionary biases etc., but it is quite conservative as a result.

If you are going to test a lot of motifs and are not too worried about the false discovery rate as long as the real motif is in there somewhere, you probably just want to look at the top results from several methods. (Relax the cut-offs if you do this, to make sure that they all return some motifs.) Each method has its own biases and will perform better or worse on different kinds of data. Unless you know what the biases in your own data are and can explicitly pick a model that represents that knowledge - lucky you, if so! - it is not always easy to judge. (Just make sure that any methods you use account for evolutionary relationships between your input proteins, and/or that you have screened those out prior to analysis; otherwise those relationships will dominate the results of some methods.)
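One simple way to combine top results from several methods is to keep the motifs that more than one predictor reports. A rough sketch - the method names and motif regexes below are invented for illustration, and real tools emit different regex syntaxes, so in practice you would normalise them first:

```python
from collections import Counter

# Hypothetical top-ranked motif regexes from three predictors run with
# relaxed cut-offs (all names and motifs are made up).
results = {
    "method_A": ["L.C.E", "P.{2}P", "S.S.S"],
    "method_B": ["L.C.E", "D.W", "P.{2}P"],
    "method_C": ["L.C.E", "K.K.K"],
}

# Count how many methods report each motif; motifs found independently
# by two or more methods are better candidates for follow-up.
support = Counter(m for motifs in results.values() for m in motifs)
consensus = [m for m, n in support.items() if n >= 2]
print(consensus)  # ['L.C.E', 'P.{2}P']
```

Naive string matching like this misses motifs that overlap without being identical, but it is a cheap first pass before any statistical re-evaluation.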
An alternative is to put the results from one method through the statistical models of another. You could feed regular expressions from another predictor to SLiMFinder (the "slimcheck" function) or its related program, SLiMSearch, to see what statistical support they have. You can also change the statistical model to be based on enrichment versus a background dataset rather than the composition of your search dataset, although this inherently has problems of bias introduced by protein families that I do not think anyone has solved. (It can provide nice corroborative evidence, though.)
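The enrichment-versus-background idea boils down to a standard over-representation test. A minimal sketch using a hypergeometric tail probability (the counts are hypothetical, and this deliberately ignores the protein-family bias mentioned above, which is exactly the unsolved part):

```python
import math

def hypergeom_sf(k: int, N: int, K: int, n: int) -> float:
    """P(X >= k): probability of seeing k or more motif-containing
    proteins in a sample of n, drawn from a pool of N proteins of
    which K contain the motif."""
    total = math.comb(N, n)
    return sum(
        math.comb(K, i) * math.comb(N - K, n - i)
        for i in range(k, min(K, n) + 1)
    ) / total

# Hypothetical counts: the motif matches 12 of 50 query proteins,
# but only 80 of 5000 proteins in the background proteome.
p = hypergeom_sf(k=12, N=5000, K=80, n=50)
print(f"enrichment p-value = {p:.2e}")
```

If the query set contains several members of one protein family, all carrying the motif by descent, this test will overstate the enrichment - hence the need to screen out close homologues first.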