Quality, Not Quantity
Any institution that funds research may reasonably expect to see some return for its money. This necessitates some means of measuring research output. How
else may the institution be sure that its money is being well spent? The simplest indicator, and one that appeals to many administrators, is the number of papers
that result from the research. It is an objective, quantitative indicator; but it is one that undermines quality. Individuals find themselves under immense pressure to produce a certain number of publications each year, and it is no
wonder that quality suffers. Publishing the same thing several times is an easy way of meeting the target. Other tactics include the publication of a string of
interim reports, the publication of material that warrants no more than an internal report, publishing papers that report on what one proposes to do in the
future, and publishing papers with a long and unjustified string of authors. An ingenious alternative is to look at the number of times that an individual's
work is cited by others. Ideally, this gives an objective measure of the worth of the work, as perceived by the researcher's peers. Unfortunately, this system, too, is open to abuse.
Assessment of quality in research is not a simple matter of numbers. It entails a high degree of subjective judgment, both by research managers and by other researchers. Funding bodies must be prepared to appoint research managers
whose judgment they trust, and then be prepared to accept that judgment concerning the quality of research being conducted under those managers' supervision. They must be seeking value for money, which entails both quality and quantity, rather than quantity alone.
Conference Papers
It is a common practice for employers not to fund an individual's attendance at a conference unless he or she is presenting a paper or a poster. It is a practice that makes the research administrator's life much simpler, but one that again encourages the production of superfluous publications. This is another area where research managers must be prepared to make their own judgments on the value that will accrue from an individual's attendance, whether or not a paper or poster is to be presented.
Selection of Conference Papers
Another important way of preventing the publication of substandard conference papers lies with the conference's technical committee. All too often, papers are
selected on the basis of an abstract submitted some eighteen months or more before the conference. At that time, the research will almost certainly not have been completed; indeed, it may not even have begun. The prospective author,
therefore, makes a guess as to the likely outcome of the research and writes an abstract that strikes a delicate balance between the specific and the noncommittal. The technical committee reviews the abstracts and, on this flimsy evidence, decides which papers to accept. By this time, the author has twelve months or less in which to complete a paper—regardless of how the research is going. And the technical committee and the editors, when they finally receive the paper, have little option but to publish it much as it stands.
Doing Better
Not all conferences operate this way, but many do. It means that much of the literature of conservation has been subjected to the very minimum of refereeing, if any. Quality assurance is all but nonexistent.
If preprints are to be issued at the time of the conference, there may be insufficient time for full refereeing. Nonetheless, a significant step forward could be
made if technical committees were to insist on seeing the full text of a paper before deciding whether to accept it for presentation and publication. It is true
that it takes longer to read a paper than it takes to read an abstract, and that technical committees are composed of people who do not have time on their
hands. However, it does not take long to decide whether a paper consists largely of previously published material, or whether it is of local interest only. A lot of substandard papers could be weeded out very quickly. Another problem is that authors may not be prepared to take the time to write a full paper if there is a
risk that it may not be accepted. Too bad—if poor papers were weeded out, there would be correspondingly more space for good papers, so the author who
has something worthwhile to say need not fear rejection.