A brief Google search suggests that code quality mostly means a combination of readability, robustness, extensibility and maintainability, and has less to do with efficiency. If you define code quality this way, it is frequently subjective, and I would be cautious about commenting on it. Even if the code is obviously in bad shape (e.g. full of global variables), I would not judge the quality of a manuscript by the quality of its code. For a scientific program, the underlying algorithm is far more important; code quality is an engineering concern and only a "good-to-have". We can write bad-looking but efficient programs, and I have indeed seen popular programs with bad code quality (by my standards). We may complain that they are not easy to read or use, but we cannot find good alternatives. In addition, many tools are published not for others to use, either by intention or in effect. In such cases, code quality is not important either.
To be honest, no. I'm not a computer scientist, so "code quality" to me is all about usability. I will comment on:
availability of the source code (and not just on a university website)
ease of installation
accuracy of results
If the substance of what the code produces is scientifically rigorous and has utility, then its style, readability, etc. are not that important to me. Furthermore, as long as the source code is available, post-publication use, critique and extension will allow others to judge whether the code is good or not.
Reproducibility is one of the most important and fundamental components of science; the 'quality' of experimental tools is not. It is much more important to encourage that the code necessary to reproduce an analysis be submitted than it is to require that the code meet a certain standard of quality.
As a reviewer, do you consider the quality of the materials used in an experiment, assuming that different materials have been demonstrated to produce equivalent results? Doing so would impose an unnecessary barrier on science. (Does it matter whether a spectrometer cuvette is made of plastic or glass? Whether a microscope was made by Leica or brand X? That Galileo used a primitive telescope?)
For code published as software intended for use by others, it is appropriate to comment on its functionality, but not on its 'smell'. If it is published as open source, it is available for others to improve upon. Important advances can be made in a fraction of the time it would take to produce high-quality code, and many researchers do not have the time that would be required to clean up 'good enough' code.
Interesting question. I usually take quality to mean efficiency and success in doing what the software was designed to do.
I never write about details of code quality as a reviewer, just as I don't question whether an animal study using 20 cages (5 control + 5 treatment 1; 5 control + 5 treatment 2) was run concurrently or successively. This applies to paper type A: a biological study using software as a tool. The exception would be a general statement that the methods used and experiments conducted are well suited to the questions of X that the researchers proposed to address, etc.
A known tool (e.g. BLAST, BOWTIE, GenePatterns) generally need not be explained in terms of efficiency and success - unless it was applied for the wrong purpose. A new tool may be difficult to assess, say if the code is not submitted or made available to the reviewers.
It depends on the software. If the software is a use-me-as-I-am application, code quality does not matter that much. If the point of the software is that you can extend it or reuse parts of it elsewhere, code quality becomes a lot more important.
Imagine environments like Cytoscape or Galaxy being horrible to extend with your own solutions and problems because the code is a messy blob of characters. What would you choose: clearly defined, well-designed interfaces or a messy blob? (A small sketch of the kind of interface I mean follows at the end of this response.)
In the review of such a paper, code quality should be considered.
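To make the contrast concrete, here is a minimal, hypothetical sketch (in Python, not taken from Cytoscape, Galaxy or any real tool) of a clearly defined extension interface: the host application only needs to know the AnalysisPlugin contract, and anyone can add a new analysis without wading through the rest of the code base.

# Hypothetical sketch of a small, clearly defined plugin interface that a
# tool could expose so others can extend it without touching its internals.
from abc import ABC, abstractmethod

class AnalysisPlugin(ABC):
    """Contract that any third-party extension has to fulfil."""

    @abstractmethod
    def name(self) -> str:
        """Human-readable name shown in the host tool's plugin list."""

    @abstractmethod
    def run(self, records: list[dict]) -> list[dict]:
        """Transform the input records and return the result."""

class GCContentPlugin(AnalysisPlugin):
    """Example extension: annotate each record with its GC content."""

    def name(self) -> str:
        return "GC content"

    def run(self, records: list[dict]) -> list[dict]:
        for record in records:
            seq = record["sequence"].upper()
            gc = sum(base in "GC" for base in seq)
            record["gc_content"] = gc / len(seq) if seq else 0.0
        return records

if __name__ == "__main__":
    data = [{"id": "seq1", "sequence": "ATGCGC"}]
    print(GCContentPlugin().run(data))  # gc_content = 0.666...

The point is not this particular design, but that a host application which publishes a small, stable contract like this is far easier to extend than one whose internals you have to reverse-engineer.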