I've been trying to track (for some time) how ANTs (Advanced Normalization Tools) and ITK (the Insight Toolkit, http://itk.org/, https://github.com/InsightSoftwareConsortium/ITK) are cited and/or used in publications. It turns out to be fairly tricky: citations may point to different "source" academic papers, to different websites (SourceForge, PICSL, GitHub, NITRC, NeuroDebian), or just to the name of the software. Another issue is that other software is built on ITK and ANTs, so one might need to mine for these dependencies as well, or even for software that clones from our GitHub repos. I started (some time ago)
Asking people to add full provenance is great if you are an omnipotent force like the NIH (which, by the way, still has a lot of trouble getting people to put their grant numbers into papers in a consistent way). For the rest of us mortals, all we can really hope for is an identifier, if we make it super easy for people to use and apply pressure at just the right time (during publication).
That is my two cents, for what it's worth, but I am biased in that I have already had 174 people (confirmed) add identifiers to their papers.
ANTS is a really crappy name for a project, for obvious reasons: search for "ants" and see what you get back. ImageJ, by contrast, does not appear randomly in papers, which is a good thing. Python, remarkably enough, follows a bimodal distribution across journal titles. ;-)
ANTS is RRID:nlx_75959. Try your lovely new identifier in Google Scholar: I get back two papers that used ANTS (methods-section attribution).
We have created a pipeline with some fun tools that help to answer this question, and we have processed the OA literature (so far). ANTS does come back in a few papers (this is from URL mentions in the methods sections only). In the next 1-2 months we will have a tool that gives you the option of curating mentions by name; we ran a set of learning algorithms on this against the info we get back from publisher APIs for ModelDB and were able to go from a precision of 40% to 98%, so once this is public... I would love for you to test it:
Available now (see the column called "mentioned in literature"): https://www.neuinfo.org/mynif/search.php?t=indexable&nif=nlx_144509-1&q=%22ANTS+-+Advanced+Normalization+ToolS%22&filter=
Adding all alternate URLs, synonyms, abbreviations, and other info to the catalog representation will allow our crawlers to find mentions of your tool more effectively. We do some things that are smart, but algorithmic approaches are never going to be as good as someone interested in the answer.
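To make the catalog idea concrete, here is a minimal sketch of what synonym- and URL-based mention matching could look like. The catalog entry below is a hypothetical example (the field names and the specific URLs are my own illustration, not the actual NIF schema); the real pipeline described above is considerably smarter than plain substring matching.

```python
# Hypothetical catalog entry: the more alternate names and URLs it
# carries, the more mentions a simple crawler can recover.
CATALOG_ENTRY = {
    "name": "ANTS - Advanced Normalization ToolS",
    "synonyms": ["ANTs", "Advanced Normalization Tools"],
    "urls": ["stnava.github.io/ANTs", "picsl.upenn.edu/ANTS"],
}

def find_mentions(text, entry):
    """Return the catalog strings (name, synonyms, URLs) found in text."""
    candidates = [entry["name"]] + entry["synonyms"] + entry["urls"]
    lowered = text.lower()
    return [c for c in candidates if c.lower() in lowered]

methods = "Registration used ANTs (picsl.upenn.edu/ANTS)."
print(find_mentions(methods, CATALOG_ENTRY))
```

Case-insensitive substring matching is the crudest possible baseline; it is exactly the kind of approach that a curator, or the learning step mentioned above, would need to correct.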
For RRIDs, I personally would like to use a simple versioning strategy, not unlike a GenBank identifier, where you append the version after the numeric ID. That way you can query across all versions with the root number, but still reference the specific version for reproducibility purposes.
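A GenBank-style scheme like this could be sketched as follows. The ".version" suffix format here is purely hypothetical (RRIDs are not actually versioned this way today); the point is just that one identifier string can serve both the query-across-versions and the cite-a-specific-version use cases.

```python
import re

# Hypothetical GenBank-style RRID: a root identifier plus an optional
# ".version" suffix, e.g. "RRID:nlx_75959.2" for version 2 of the tool.
RRID_RE = re.compile(r"^RRID:(?P<root>[A-Za-z]+_\d+)(?:\.(?P<version>\d+))?$")

def parse_rrid(rrid):
    """Split an RRID into (root, version); version is None if unversioned."""
    m = RRID_RE.match(rrid)
    if not m:
        raise ValueError(f"not a recognized RRID: {rrid!r}")
    v = m.group("version")
    return m.group("root"), int(v) if v is not None else None

# Query across all versions with the root; cite a specific version
# for reproducibility by keeping the suffix.
root, version = parse_rrid("RRID:nlx_75959.2")
```

The version suffix is optional by design, so an unversioned query like "RRID:nlx_75959" still resolves to the root identifier.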
RRIDs could still be resolved via a URI or a DOI, and could be the subject of a set of triples that aid discovery and attribution. See the NIH Software Discovery Index report for the requirements analysis; we are looking for comments there: