5. OTHER APPROACHES TO GUIDELINE DEVELOPMENT

Unfortunately, old habits die hard and many clinicians or organisations persist in supporting nonstandardised ad hoc statements or reviews, usually from expert bodies. This undesirable methodology, known as GOBSAT (Good Old Boys Sat Around a Table), should no longer be considered valid or encouraged (Miller and Petrie, 2000). Such discussions are based on received wisdom, clinical judgement, and experience rather than current scientific evidence; lack an explicit decision-making process; and may be biased by undeclared conflicts of interest (Miller and Petrie, 2000). A good example of this was the guidelines for improving the use of antimicrobial agents in hospitals, a statement by the Infectious Diseases Society of America (IDSA) published in 1988 (Marr et al., 1988). Although commendable in its aim of promoting good-quality prescribing, and although it remains a well-respected document, the development process was not evidence based: the statement was drafted by three authors with subsequent review by 43 multidisciplinary members of the IDSA. Other deficiencies of the document were its rather unhelpful format, the lack of a summary of the evidence, and the absence of any linkage between evidence and the ultimate recommendations. This is a typical example of flawed development methodology from before appreciation of EBM methodology became widespread.

“Consensus”-based statements (Murphy et al., 1998) are popular and involve a broad-based panel which listens to the scientific data presented by experts, weighs the information, and then composes a consensus statement that addresses a set of questions previously posed to the panel. Once again, within a relatively small group the interactions are such that some members will have a disproportionate impact on the overall decisions. Other sources of bias include the type of questions set, the composition of the panel, and the selection of the experts and literature. Examples of such a process are the consensus panel recommendations for managing serious candidaemia (Edwards et al., 1997) and the consensus strategy for the prevention and control of antimicrobial-resistant microorganisms in hospitals (Goldman et al., 1996). The latter document is worthy of recognition as a good consensus statement: it aims to synthesise a strategy from expert opinion, experience, and key existing evidence; it is explicit in its intention and development process; and it recognises its inherent deficiencies. It is widely acknowledged as pivotal in building a subsequent evidence-based approach to preventing antimicrobial resistance in hospitals, an influence recognised in the later guidelines on this subject by the Society for Healthcare Epidemiology of America (SHEA) and the IDSA Joint Committee on the Prevention of Antimicrobial Resistance (Shlaes et al., 1997). This approach, although valuable, does not follow the AGREE or SIGN developmental methodology. When the AGREE Instrument for appraising guideline quality is applied to this report (www.agreecollaboration, June 2001), a number of deficiencies are identified in the areas of stakeholder involvement, rigour of development, and clarity of presentation. The guideline takes the form more of a specialist, but not systematic, review, with only one table presented linking four recommendations to evidence. Greater conformity with AGREE in such North American approaches to guidelines would be welcome.

A more structured consensus approach to gathering expert opinion is the classic Delphi panel. This technique has merit in that the process is structured, has a number of stages that lead ultimately to a convergence of opinion, and is coordinated by a central person who summarises and feeds information back from a panel whose responses are anonymous (Evans, 1999).