
Producing guidelines, protocols, and toolkits 


Troy A. Moore, Alexander L. Miller, and Elizabeth Kuipers


PRINTED FROM OXFORD MEDICINE ONLINE. © Oxford University Press, 2020. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Medicine Online for personal use (for details see Privacy Policy and Legal Notice).


Over the past several decades, multiple factors have resulted in the development of guidelines, treatment protocols, and implementation toolkits in all branches of medicine, including psychiatry. First, a number of studies showed that some accepted practices could not be shown to be effective when tested in randomized controlled trials (RCTs) (Coffman et al., 1987; Gaebel, 1995; Gaebel et al., 2002; Kinon, 1998). The lack of practice effectiveness in randomized studies is often attributed to the interventions not being well defined (Michie et al., 2009).

Second, community surveys showed that many evidence-based practices were either not used, or not used properly (i.e. not used with fidelity to the key details of implementation of the practice) (Buchanan et al., 2002; Cradock et al., 2001; Howard et al., 2009; Young et al., 1998). Fidelity refers to the degree to which a particular programme follows a programme model, which is a well-defined set of interventions and procedures that helps individuals achieve some desired goal (Bond et al., 2000).

Third, studies of incorporation of evidence-based practices into everyday practice showed long delays (Codyre et al., 2008; Rosen et al., 2007; Ruggeri et al., 2008). Meanwhile, evaluation of the impact of continuing medical education programmes demonstrated their ineffectiveness in changing physician practices (Davis et al., 1995). Thus, it became clear that to affect actual medical practices, it would be necessary to put together detailed sets of recommendations, and the means to operationalize them.


In this section, we provide definitions of guidelines, protocols, and toolkits: although each is distinct, they share some common elements. Field and Lohr (1990) provided a standard definition of clinical practice guidelines (CPGs): ‘Clinical practice guidelines are systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances.’ Guidelines cover broad topics or conditions that affect many individuals. The literature often provides multiple ways of treating a clinical situation, and the guideline is used to assist practitioners in navigating these different options.

A protocol in science and medicine is a written methodology that makes explicit the design and implementation of experiments or procedures. Protocols are particularly important when standardization and replication are desired. Protocols in psychiatry are typically developed for an acute clinical situation where there is a high level of certainty in the evidence showing there is a right and a wrong way to address the situation. Protocols have a high level of specificity, and the intent of following a protocol is to improve outcomes or avoid poor outcomes.

Toolkits are collections of tools and/or strategies. An example of a toolkit in medicine is one for the implementation of clinical practice guidelines or evidence-based practices. Toolkits can include worksheets/checklists, template letters, pocket cards, quick reference guides, assessment instruments, and related process and quality measures (National Guideline Clearinghouse, 2009). Toolkits can be thought of as an accumulation of multiple protocols addressing a subject with many elements. Figure 39.1 depicts the complex overlap of elements between guidelines, protocols, and toolkits (GPTs). As we proceed through this chapter, we will compare the goals, risks, benefits, development process, scope, contributors, feasibility, and evidence for the utilization of GPTs. We will primarily focus on pharmacological interventions, but psychological and psychosocial treatments will also be considered.

Fig. 39.1 Guideline, protocol, and toolkit interaction.


Goals, risks, and benefits of GPTs

What are the goals of GPTs as commonly used in psychiatry? Ultimately, a well-constructed guideline can be used as a tool for making care more consistent and efficient and for closing the gap between what clinicians do and what scientific evidence supports. Unfortunately, the literature suggests that practitioners do not necessarily follow guideline recommendations when the effect of guidelines on the process of clinical care is examined (Grimshaw and Russell, 1993). Toolkits try to provide a means of support for a practitioner or health care system trying to implement a guideline; they provide strategies, examples, and monitoring techniques to ensure proper implementation. Guidelines such as the Texas Medication Algorithm Project (TMAP) schizophrenia algorithm (Moore et al., 1997) and the National Institute for Health and Clinical Excellence (NICE) schizophrenia guidelines (National Collaborating Centre for Mental Health, 2009) have manuals with many elements of toolkits that can assist in implementing the guideline. Protocols provide a standardization of procedures to ensure that similar results are achieved when a different individual performs the same task at a different time and/or place. For instance, administration of an intramuscular antipsychotic can produce very different results in absorption and distribution of the drug if injection technique and injection placement protocols are not followed by the health care provider administering the injection.

The potential risks and benefits of utilizing guidelines and protocols can vary depending on the user’s point of view (clinician, patient, administrator/payer). One example of the effect of guideline and protocol utilization on clinical practice is the reinforcement and justification of clinical decisions made by clinicians using guideline or protocol recommendations; at the same time, other clinicians may feel that guidelines and protocols restrict their clinical practices. Evidence-based guideline recommendations can be used as a tool by health care administrators/payers to determine whether clinician prescribing is consistent with the evidence base, such as dosing of antipsychotics. By following evidence-based practice recommendations within guidelines or protocols, administrators and health care systems may restrict certain medications and/or practices that have not been shown to be beneficial. However, they may also make themselves responsible for paying for very costly medications or practices if certain guideline recommendations are followed.

Development of GPTs

Many elements must be considered at the outset of GPT development. First, one must establish what constitutes an evidence-based practice: the integration of best research evidence with clinical expertise and patient values (Sackett et al., 2000). Second, data from scientific studies, real-world observations, and clinical experience must be evaluated on their scientific merit to determine their inclusion, and their subsequent weighting when recommendations are made from these data. Should studies with only case–control level data be included? Or are data from one positive RCT enough for inclusion in a guideline? Should data from clinical experience or expert opinion be included? And if expert opinion is included, how should it be weighted against RCT data? What value do pragmatic-design medication trials hold versus typical RCTs? Will evidence rating systems or quantification be utilized during the evaluation of the literature (such as the Agency for Healthcare Research and Quality (AHRQ) evidence ratings)? AHRQ evidence grading reflects the strength of evidence supporting a treatment and the magnitude of net benefit (benefits minus harms); for example, a grade of ‘A’ indicates good evidence that the treatment improves outcomes and that benefits substantially outweigh the risks (U.S. Preventive Services Task Force, 2010).

It is helpful if the criteria and processes involved in these kinds of decisions are made clear, as those responsible for producing the GPTs must carefully consider these questions. Some decisions are complex, requiring a mixture of scientific merit (whether a paper is included) and expert view (whether the evidence justifies a recommendation).

The pluses and minuses of using the various types of evidence in GPT formation must be evaluated by those producing GPTs. The ultimate issue surrounding evidence inclusion is whether a guideline addresses only those clinical decisions with sufficient RCT evidence or whether it also tries to make recommendations for common clinical decisions where an adequate RCT database is lacking. For instance, the TMAP schizophrenia algorithm tries to utilize only RCTs where appropriate, but includes the best available data under some of the stages of the algorithm. The later stage of the algorithm examining antipsychotic polypharmacy utilizes case series data and expert opinion because there is a lack of RCTs using two antipsychotics (excluding clozapine augmentation trials) (Moore et al., 2007). The PORT schizophrenia guidelines use strict inclusion criteria, requiring at least three positive RCTs on a medication management topic before the topic is addressed by the guideline (Kreyenbuhl et al., 2010). NICE guidelines require evidence from at least two well-conducted RCTs (National Collaborating Centre for Mental Health, 2009). Consequently, the PORT and NICE guidelines do not address antipsychotic polypharmacy.

Although RCTs are considered the ‘gold standard’ in formulating GPTs, it is critical to determine if the population that participated in the RCTs is representative of the population that typically receives the intervention, ‘the target population’. Some factors that may make study populations differ from ‘target populations’ include how they were recruited (advertisements vs. clinical settings) or participants joining the study for only altruistic or monetary reasons.

The producers of a guideline that incorporates only RCT data must be prepared not to address some important issues that arise in clinical practice. RCT data provide results in which one can be confident, but RCTs may offer limited external validity: clinical trials often have very strict inclusion/exclusion criteria that limit the generalizability of their results. In practice, the limitations of the available data will need to be acknowledged, and the resulting recommendations should be duly cautious, pointing out the pitfalls.

What about internal versus external validity? Can results of guideline utilization in Texas, in the National Health Service in the United Kingdom, or elsewhere be generalized to other populations? Strict inclusion/exclusion study criteria may potentially limit the external validity of the studies used in a GPT. On the other hand, very loose inclusion/exclusion study criteria will limit the internal validity of the studies used to develop the GPT.

An example of a common clinical practice that is difficult to evaluate thoroughly with RCTs is the use of antipsychotic polypharmacy. An RCT examining every possible antipsychotic polypharmacy regimen would be difficult to conduct: it would have to be incredibly large to incorporate all the possible regimens and still have the power to show differences between them. So which is better? Should a guideline ignore a treatment with little to no supporting evidence (such as non-clozapine antipsychotic polypharmacy) altogether, despite a significant segment of the population being treated with antipsychotic polypharmacy because monotherapy has failed in the past? Or should it provide practitioners with tools (e.g. rating scales) to use in evaluating whether an intervention that is not supported by RCT evidence is helpful for an individual patient? It is hard to be definitive. If the guideline aims to provide its users only firm, concrete evidence from RCTs, then one would not want to include case–control or expert opinion data.
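To make the combinatorial problem concrete (a hedged illustration; the figure of 12 available antipsychotics is an assumption, not from the chapter), even restricting attention to two-drug combinations, the number of trial arms grows quadratically with the number of drugs:

```python
from math import comb

def n_pairwise_regimens(n_antipsychotics: int) -> int:
    """Number of distinct two-drug antipsychotic combinations
    (order ignored, no drug paired with itself)."""
    return comb(n_antipsychotics, 2)

# With, say, 12 available antipsychotics (an illustrative figure),
# a trial covering every two-drug regimen would need 66 arms,
# before even considering dose levels or treatment sequencing.
print(n_pairwise_regimens(12))  # 66
```

Allowing different dose levels or three-drug regimens multiplies this count further, which is why a single adequately powered RCT covering all regimens is impractical.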

What about industry versus non-industry RCTs? It has become clearer that some industry-sponsored studies are not published if the results are not positive. However, rectifying this publication bias can be problematic, as there have been instances of delay and refusal of requests for raw data from unpublished trials during the NICE process.

Do industry-sponsored studies hold less weight than non-industry studies? Are the producers of GPTs ensuring that the RCT study design does not provide an unfair advantage to one of the treatments being studied? Examples of potentially unfair advantage in study design are seen in early studies comparing haloperidol with second-generation antipsychotics, which used haloperidol doses higher than those seen in clinical practice today (Davis et al., 2003).


Scope of GPTs

The scope of any guideline, toolkit, or protocol needs to be clearly delineated. For instance, the TMAP schizophrenia guideline examines antipsychotic medication management in the acute and maintenance treatment phases of adult schizophrenia. Other areas of treatment, such as psychosocial interventions, are not addressed by the TMAP schizophrenia guideline.


Contributors to GPTs

Who will be involved in the evaluation process? This may seem like a benign issue, but failure to include necessary topic experts in the development process of GPTs may raise questions about the final results. Is the evaluation group large or diverse enough? A small, homogeneous GPT development group might be easy to manage, but could lead to important information or issues being overlooked or minimized, or could interject bias into the process. The inclusion of a variety of individuals with differing expertise as contributors to GPTs is an important aspect of the development process. Contributors can greatly influence the direction of the document and in many cases provide first-hand knowledge of the data being examined. Expert opinion in guidelines is commonly provided by researchers, clinicians, academics, administrators, and sometimes consumers, and often covers topics where literature evidence is lacking. Protocol recommendations are usually developed by practitioners highly experienced in dealing with the clinical situation. Any expert opinion utilized in protocol development focuses on issues of implementation, if it is utilized at all.

The TMAP schizophrenia algorithm panel included clinicians, researchers, administrators, and consumers. Clinicians and researchers had expertise in first-episode schizophrenia, chronic schizophrenia, treatment-resistant schizophrenia, or psychopharmacology. An example of the importance of having a diverse group of contributors arose during the most recent TMAP schizophrenia algorithm update, when there was debate within the panel over extrapolating results from studies of chronic schizophrenia to first-episode psychosis. Ultimately, the inclusion of a well-rounded group of experts in the field of schizophrenia resulted in a well-thought-out, thorough examination that addressed both sides of the debate. Had the expert panel been homogeneous, some of these important issues about the use of first-generation antipsychotics in first-episode schizophrenia might have been overlooked.


Feasibility of GPTs

The feasibility of a GPT is an important issue that must be considered. Guidelines do consider the feasibility of implementing their recommendations to some extent, but guidelines are not mandatory and usually do not have funding for implementation. Protocols also depend heavily on feasibility: a national protocol, such as one for the acute management of stroke, has to be feasible in terms of personnel, money, time, and equipment for its recommendations to be successfully carried out. Toolkits are also intended to be highly feasible; if toolkit recommendations are not feasible, they will not be implemented, or will be implemented unsuccessfully. Producers of GPTs can take steps to include administrators, practitioners, and consumers in the development process, so that they can reveal feasibility issues (such as practicality or cost).

The NICE guidelines

In the United Kingdom a government-funded body, NICE, is asked to commission reviews of evidence for the treatment of a range of physical and mental health disorders. In 2002 it produced the first mental health review, on schizophrenia, which was also the first to be updated, in 2009 (National Collaborating Centre for Mental Health, 2009). It is important to point out that in the NICE process, the methodology for examining evidence is much the same for medication management and for psychological interventions. This means that the quality of the included trials has to meet a threshold of methodological robustness for both sorts of interventions, and it probably accounts for NICE’s insistence on randomized trials for psychotherapy, which then share many of the requirements of trials carried out on medication. It is also an important feature because many meta-analyses are criticized for not checking the quality of the trials they include.

The methodological criteria for including studies in the meta-analysis for the recent schizophrenia guideline update (National Collaborating Centre for Mental Health, 2009), for instance, covered aspects such as a clear description of the trial as randomized, more than 10 patients in each treatment cell, at least 80% of participants having a diagnosis of schizophrenia, and adequate follow-up data (not more than 50% drop-out in any study at follow-up, or not more than 50% of data unavailable). All included studies also had to achieve a positive rating on a checklist that examined whether studies used validated outcome scales, adequate allocation concealment, and intention-to-treat analyses. Pharmacological studies had to use recommended dosages, only trials of licensed medication were included, and studies that used rapid titration were excluded. For early intervention studies, the a priori criteria were a diagnosis of psychosis rather than schizophrenia, and participants aged 16 and above, instead of 18 and above as for all the adult studies. The full checklist is available from the NICE website.
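The quantitative thresholds above can be sketched as a simple screening function (a hypothetical illustration only; the field names and structure are our assumptions, and the real NICE checklist also covers items such as allocation concealment and validated outcome scales that are not modelled here):

```python
from dataclasses import dataclass

@dataclass
class Trial:
    randomized: bool
    min_cell_size: int        # smallest treatment-cell n
    pct_schizophrenia: float  # % of participants with a schizophrenia diagnosis
    pct_dropout: float        # % lost to follow-up

def meets_inclusion_criteria(t: Trial) -> bool:
    """Screen a trial against the update's stated thresholds:
    randomized design, more than 10 patients per treatment cell,
    at least 80% with a schizophrenia diagnosis, and no more than
    50% drop-out at follow-up."""
    return (t.randomized
            and t.min_cell_size > 10
            and t.pct_schizophrenia >= 80.0
            and t.pct_dropout <= 50.0)

print(meets_inclusion_criteria(Trial(True, 25, 92.0, 30.0)))  # True
print(meets_inclusion_criteria(Trial(True, 8, 92.0, 30.0)))   # False: cells too small
```

Making the thresholds explicit in this way is what allows the inclusion process to be transparent and reproducible across reviewers.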

The NICE process, as itemized above, required that only RCTs with adequate quality controls were included. These studies were found after a literature and grey literature search of appropriate terms and any queries about the studies were dealt with by discussion with the chair of that topic group. All the meta-analyses were attached to specific clinical, functional, and social outcomes agreed on earlier by the members of the guideline group to be appropriate for that mental health problem. Each meta-analysis was then presented to the group, and an in-depth examination of the results undertaken by the specialist guideline group members selected to be expert in that topic, including service users and carers.

In the most recent update of NICE guidelines for schizophrenia for instance, two main topic groups were convened from members of the main guideline group: medication management and psychological interventions. Individuals chose which topic group they preferred to be in, and each topic group regularly updated the other group about their progress in assessing the evidence. The two service users from the group chose to focus on the medication analyses, and the carer on the psychological interventions. Both groups were, however, in close touch with each other about the results they were finding, and comments could be made and acted upon from any member of the main guideline group. In this particular instance, a third topic group was convened. Members were co-opted into this, to cover access to appropriate treatment by BME (black and minority ethnic) groups. This third topic group was also invited to make comments on any of the results of the meta-analyses as they were produced.

It is also important to stress that the questions the evidence was assembled to answer had been set, a priori, by a scoping exercise carried out by NICE in consultation with relevant stakeholders. It was not therefore possible to answer other questions from the data. This insistence on a transparent, a priori set of questions and assembling of data to answer them, was part of the way that NICE has been able to argue that it is doing methodologically sound analyses, which are not driven by particular pressure groups.

Once evidence had been assembled to answer the agreed questions, the guideline group was asked to decide on the key recommendations that they would prioritize. These related to the recommendations from the earlier guideline, which continued to stand unless a substantial amount of new evidence was available to overturn them. This meant that some recommendations remained in place; they were not changed because there was not enough compelling new evidence to say anything different. If, however, there was a new area with good-quality studies that met quality criteria (in this instance, arts therapies), this did form the basis of one new recommendation. The process was thus a mixture of adding new information to previous analyses and making a new recommendation only when substantial, robust new evidence was available (at least two good-quality RCTs).

The methodology for NICE guidelines, including how it makes decisions about the questions to be asked and the quality of the evidence that will be used, along with all information about the schizophrenia guideline update, is available and can be downloaded from the NICE website.

The schizophrenia Patient Outcomes Research Team (PORT) guidelines

The schizophrenia PORT guidelines were updated in 2009 (Kreyenbuhl et al., 2010). The PORT guidelines utilize rigorous standards much like the NICE guidelines. The PORT review process entailed using two evidence review groups (ERGs), one for pharmacological interventions and one for psychosocial interventions. The ERGs selected 41 treatment areas for review. Extensive literature reviews were conducted to identify literature not addressed in the 2003 PORT update (both pre- and post-2002 literature) for each identified treatment area. The ERGs considered whether there was enough evidence (at least three well-designed, positive RCTs by independent investigators) to merit a treatment recommendation. If there was insufficient evidence to warrant a treatment recommendation, a summary statement was written describing the treatment, its indications, a summary of the evidence, and the important knowledge gaps that precluded a recommendation.
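The ERG decision rule reduces to a simple evidence threshold (a sketch; the function name and output labels are illustrative, and judging whether an RCT is "well-designed and positive" of course involves detailed methodological review not modelled here):

```python
def erg_decision(n_positive_independent_rcts: int) -> str:
    """Apply the PORT evidence threshold: at least three well-designed,
    positive RCTs by independent investigators are required before a
    treatment recommendation is issued; otherwise the ERG writes a
    summary statement describing the treatment and the knowledge gaps."""
    if n_positive_independent_rcts >= 3:
        return "treatment recommendation"
    return "summary statement"

print(erg_decision(4))  # treatment recommendation
print(erg_decision(2))  # summary statement
```

By contrast, the NICE threshold described earlier is two well-conducted RCTs, which is one reason the two guideline sets can diverge on which topics they address at all.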


A further part of the NICE process is to try to encourage implementation of the recommendations contained in the evidence-based guidelines. Currently there is a range of tools available to support local implementation in the United Kingdom. These include web-based implementation resources, and tools aimed at commissioners to help guide the process of commissioning evidence-based services. Information on this process is available on the NICE website.

Toolkits, such as the Substance Abuse and Mental Health Services Administration (SAMHSA) Assertive Community Treatment (ACT) and Supported Employment (SE) toolkits, address system-level implementation of evidence-based practices. These toolkits are very detailed resources that cover utilizing evidence-based practices, building an evidence-based programme, training front-line staff, and evaluating a programme after implementation; they can be found on the SAMHSA website. The SAMHSA ACT and SE toolkits examine the evidence regarding implementing evidence-based services to help people stay out of the hospital, develop skills for living in the community, and find and keep competitive employment within their communities. Toolkits provide a myriad of suggestions and recommendations for successful implementation of evidence-based guidelines. It is hard to dissect which elements and recommendations of the complex interventions provided by the toolkits are critical for achieving the desired outcomes of the practice.

Field research on the utilization of the SAMHSA toolkits was conducted in the National Implementing Evidence-Based Practices Project (McHugo et al., 2007). In this project, 53 community mental health centres across eight states in the United States implemented one of five evidence-based practices (EBPs) (SE, ACT, integrated dual disorders treatment, family psychoeducation, or illness management and recovery). Sites were provided with toolkits and human resources to help guide the implementation process. At the end of 2 years, 29 of 53 sites (55%) had high fidelity implementation of the chosen EBP. This study showed most sites implemented EBPs with moderate to high fidelity to the model set forth in the toolkits. Those sites implementing illness management and recovery and integrated dual disorders treatment fared less well overall. Longitudinal fidelity results indicate the use of the toolkit implementation model and similar resources should allow providers to achieve successful implementation of EBPs within 12 months. Subsequent publications revealed the greatest barriers to EBP implementation were leadership, resistance (supervisors, practitioners, and other agencies), utilization of non-EBP services, financing, staffing, and complexity of the intervention (such as with integrated dual disorders treatment) (Bond et al., 2008; Mancini et al., 2009; Rapp et al., 2010).

The Texas Implementation of Medication Algorithms (TIMA) project provides, within its medication management manuals, recommendations and implementation strategies learned from a large-scale implementation project. The TIMA manuals and the supporting forms are available online.

Literature for producing guidelines, protocols, and toolkits

Presently, there is a very small literature base examining how to produce GPTs (Chiles et al., 1999; Frances et al., 1998; Kahn et al., 1997; Woolf, 1992; Woolf et al., 1999). Frances and colleagues (1998) pointed out how surprisingly little standardization there was in the development of practice guidelines. Over a decade later, the same argument can still be made. An examination of schizophrenia guidelines shows that each one has different development criteria, with the NICE and PORT guidelines having the most similar development standards (Kreyenbuhl et al., 2010; National Collaborating Centre for Mental Health, 2009). Most of the GPT literature focuses on implementation and utilization outcomes. Those individuals or organizations interested in producing their own psychiatric guidelines must mostly rely on examining the strengths and weaknesses, methods, and design of existing GPTs to generate the structure for their own. Figure 39.2 provides a comparison of the elements and qualities of GPTs. A review of the articles by Frances et al. (1998) and Woolf et al. (1999) can provide guideline producers with a refresher on the limitations of guidelines and assist in thoughtful development of the methods used to produce the guideline.

Fig. 39.2 Comparison of guideline, protocol, and toolkit elements and qualities.


A very well-developed list of evidence-based resources has been compiled by SAMHSA. The list includes state, national, and international level evidence-based resources for substance abuse prevention, substance abuse treatment, mental health treatment, and the prevention of mental health disorders. This resource list, available on the SAMHSA website, is a great place to begin an exploration into guideline and protocol development. Within the list is the National Guideline Clearinghouse (NGC), the largest database of clinical practice guidelines. It is very comprehensive, but some guidelines may not appear within the NGC if the organization or individuals producing the guideline did not submit them to the database. Additionally, if a guideline has not been updated within 5 years it is removed from the active guidelines list and may only be found in the archives.


Bond, G.R., Evans, L., Salyers, M.P., Williams, J., and Kim, H.K. (2000). Measurement of fidelity in psychiatric rehabilitation. Mental Health Services Research, 2, 75–87.Find this resource:

Bond, G.R., McHugo, G.J., Becker, D.R., Rapp, C.A., and Whitley, R. (2008). Fidelity of supported employment: lessons learned from the National Evidence-Based Practice Project. Psychiatric Rehabilitation Journal, 31, 300–5.Find this resource:

Buchanan, R.W., Kreyenbuhl, J., Zito, J.M., and Lehman, A. (2002). The schizophrenia PORT pharmacological treatment recommendations: conformance and implications for symptoms and functional outcome. Schizophrenia Bulletin, 28, 63–73.Find this resource:

Chiles, J.A., Miller, A.L., Crismon, M.L., Rush, A.J., Krasnoff, A.S., and Shon, S.S. (1999). The Texas Medication Algorithm Project: Development and implementation of the schizophrenia algorithm. Psychiatric Services, 50, 69–74.Find this resource:

Codyre, D., Wilson, A., Begg, J., and Barton, D. (2008). Dissemination and implementation of the Royal Australian and New Zealand College of Psychiatrists’ clinical practice guidelines. Australasian Psychiatry, 16, 336–9.Find this resource:

Coffman, J.A., Nasrallah, H.A., Lyskowski, J., McCalley-Whitters, M., and Dunner, F.J. (1987). Clinical effectiveness of oral and parenteral rapid neuroleptization. Journal of Clinical Psychiatry, 48, 20–4.Find this resource:

Cradock, J., Young, A.S., and Sullivan, G. (2001). The accuracy of medical record documentation in schizophrenia. Journal of Behavioral Health Services and Research, 28, 456–65.Find this resource:

Davis, D.A., Thomson, M.A., Oxman, A.D., and Haynes, R.B. (1995). Changing physician performance. A systematic review of the effect of continuing medical education strategies. Journal of the American Medical Association, 274, 700–5.Find this resource:

Davis, J.M., Chen, N., and Glick, I.D. (2003). A meta-analysis of the efficacy of second-generation antipsychotics. Archives of General Psychiatry, 60, 553–64.Find this resource:

Field, M.J. and Lohr, K.N. (eds.) (1990). Clinical Practice Guidelines: Directions for a New Program. Washington, DC: National Academy Press.Find this resource:

Frances, A., Kahn, D.A., Carpenter, D., Frances, C., and Docherty, J. (1998). A new method of developing expert consensus practice guidelines. American Journal of Managed Care, 4, 1023–9.

Gaebel, W. (1995). Is intermittent, early intervention medication an alternative for neuroleptic maintenance treatment? International Clinical Psychopharmacology, 9(Suppl 5), 11–16.

Gaebel, W., Janner, M., Frommann, N., Pietzcker, A., Kopcke, W., Linden, M., et al. (2002). First vs multiple episode schizophrenia: two-year outcome of intermittent and maintenance medication strategies. Schizophrenia Research, 53, 145–59.

Grimshaw, J.M. and Russell, I.T. (1993). Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet, 342, 1317–22.

Howard, P.B., El-Mallakh, P., Miller, A.L., Rayens, M.K., Bond, G.R., Henderson, K., et al. (2009). Prescriber fidelity to a medication management evidence-based practice in the treatment of schizophrenia. Psychiatric Services, 60, 929–35.

Kahn, D.A., Docherty, J.P., Carpenter, D., and Frances, A. (1997). Consensus methods in practice guideline development: a review and description of a new method. Psychopharmacology Bulletin, 33, 631–9.

Kinon, B.J. (1998). The routine use of atypical antipsychotic agents: maintenance treatment. Journal of Clinical Psychiatry, 59(Suppl 19), 18–22.

Kreyenbuhl, J., Buchanan, R.W., Dickerson, F.B., and Dixon, L.B. (2010). The Schizophrenia Patient Outcomes Research Team (PORT): updated treatment recommendations 2009. Schizophrenia Bulletin, 36, 94–103.

Mancini, A.D., Moser, L.L., Whitley, R., McHugo, G.J., Bond, G.R., Finnerty, M.T., et al. (2009). Assertive community treatment: facilitators and barriers to implementation in routine mental health settings. Psychiatric Services, 60, 189–95.

McHugo, G.J., Drake, R.E., Whitley, R., Bond, G.R., Campbell, K., Rapp, C.A., et al. (2007). Fidelity outcomes in the National Implementing Evidence-Based Practices Project. Psychiatric Services, 58, 1279–84.

Michie, S., Fixsen, D., Grimshaw, J.M., and Eccles, M.P. (2009). Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implementation Science, 4, 40.

Moore, T.A., Buchanan, R.W., Buckley, P.F., Chiles, J.A., Conley, R.R., Crismon, L.M., et al. (2007). The Texas Medication Algorithm Project antipsychotic algorithm for schizophrenia: 2006 update. Journal of Clinical Psychiatry, 68, 1751–62.

National Collaborating Centre for Mental Health (2009). National Institute for Health and Clinical Excellence Clinical Guideline 82. Schizophrenia: Core interventions in the treatment and management of schizophrenia in adults in primary and secondary care [serial on the Internet]. Available at:

National Guideline Clearinghouse (2009). Glossary. Available at:

Rapp, C.A., Etzel-Wise, D., Marty, D., Coffman, M., Carlson, L., Asher, D., et al. (2010). Barriers to evidence-based practice implementation: results of a qualitative study. Community Mental Health Journal, 46, 112–18.

Rosen, A., Mueser, K.T., and Teesson, M. (2007). Assertive community treatment: issues from scientific and clinical literature with implications for practice. Journal of Rehabilitation Research & Development, 44, 813–26.

Ruggeri, M., Lora, A., and Semisa, D., on behalf of the SIEP-DIRECT’S Group (2008). The SIEP-DIRECT’S Project on the discrepancy between routine practice and evidence. An outline of main findings and practical implications for the future of community-based mental health services. Epidemiologia e Psichiatria Sociale, 17, 358–68.

Sackett, D., Straus, S., Richardson, W., Rosenberg, W., and Haynes, B. (2000). Evidence Based Medicine. London: Churchill Livingstone.

U.S. Preventive Services Task Force (USPSTF) (2010). Grade definitions. Guide to Clinical Preventive Services, Third Edition: Periodic Updates, 2000–2003 [serial on the Internet]. Available at:

Woolf, S.H. (1992). Practice guidelines, a new reality in medicine, II: methods of developing guidelines. Archives of Internal Medicine, 152, 946–52.

Woolf, S.H., Grol, R., Hutchinson, A., Eccles, M., and Grimshaw, J. (1999). Potential benefits, limitations, and harms of clinical guidelines. British Medical Journal, 318, 527–30.

Young, A.S., Sullivan, G., Burman, M.A., and Brook, R. (1998). Measuring the quality of outpatient treatment for schizophrenia. Archives of General Psychiatry, 55, 611–17.