
Introduction

Author: Pat Croskerry

DOI: 10.1093/med/9780190088743.003.0001

A variety of errors occur in the course of normal medical practice, but many will be detected and corrected before they do any harm. Nevertheless, medical error is now estimated to be one of the leading causes of death.1 Errors take several forms, ranging from simple problems such as miscalculating a dose of medication to more complex failures such as misdiagnosing a myocardial infarct or a cerebrovascular accident, or performing wrong-side surgery. Individual errors occur in a variety of areas, but by far the greatest number of errors we make in medicine lie in the ways our thoughts and feelings affect our decision making. Yet historically, surprisingly little emphasis in medical education has been put on how to think, and especially on how to think rationally. The tacit assumption is that by the time people arrive in medical school, they are already rational, competent thinkers, invulnerable to a variety of predictable and widespread biases in thinking and other cognitive failures that may lead to error. Unfortunately, this is not the case. The primary purpose of this book is to focus attention on clinical thinking failures. It should also be remembered that affect or emotion is reciprocally related to cognition—one generally does not occur without the other. Errors in emotion and cognition are collectively referred to here as cognitive errors, although affective error is occasionally used for emphasis in some cases.

One of the most important drivers of this book is to focus on these cognitive failures in clinical reasoning and the ambient conditions that enable them. It is now claimed that diagnostic failure is the main threat to patient safety,2 which is to say that clinical decision making is the main threat to patient safety. In fact, we can refine this further and state that clinical prediction is the main threat. Diagnosis is about prediction—how accurately can a clinician predict the identity of one (or more) of approximately 12,000 diseases that may underlie the patient’s symptoms and signs? As Pinker notes, “the acid test of empirical rationality is prediction,”3 so we need to ask what we can do to improve the predictive power of our decision making to increase the likelihood of a correct diagnosis and reduce the morbidity and mortality of diagnostic failure. As the philosopher and sociologist Habermas observed, the modernity of the Enlightenment, with rationality as its main driving force, remains “an unfinished project.”4 Completion will be more attainable when a fuller understanding of rationality, in all its forms, is realized. Medicine in particular will benefit from a better understanding of what is needed for rational decision making.
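To make the notion of predictive power concrete, the sketch below (not from the source; the function and all numbers are hypothetical, chosen only for illustration) treats a single diagnostic judgment as Bayesian updating: a pretest probability is converted to odds, multiplied by the likelihood ratio of a clinical finding, and converted back to a posttest probability.

```python
# Minimal sketch, assuming a single finding with a known likelihood ratio;
# the probabilities and LR below are invented for illustration only.

def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form: posttest odds = pretest odds x LR."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# A hypothetical 10% pretest estimate combined with a finding whose LR+ is 8
print(round(posttest_probability(0.10, 8.0), 2))  # -> 0.47
```

However rough, the exercise illustrates the sense in which every diagnosis is a prediction whose accuracy depends on both the prior estimate and the quality of the evidence brought to bear on it.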

A first step lies in understanding the process of diagnosis, the most important of a physician’s tasks. Although numerous publications have addressed diagnostic failures within specific disciplines, it is only fairly recently that attention has focused on the process itself.5,6 This is surprising given that getting the diagnosis right is so critical for the safety of patients. An important distinction needs to be made at the outset—between thinking and deciding. Thinking is generally considered a deliberate act—that is, to think about a problem is to deliberately engage in a conscious process aimed at solving it. However, in everyday life, the strategies we use to solve many problems typically involve shortcuts, approximations, rules of thumb, and even guesses. These are reflexive, autonomous processes that mostly do not reach consciousness and are usually appreciated as potential time-savers, which, much of the time, they are. It is often unnecessary to laboriously work one’s way through a clinical problem when the answer seems readily apparent. However, these processes are imperfect: although they may work most of the time, they are occasionally wrong and may manifest themselves as cognitive biases. Paradoxically, although they are often viewed as time-saving, they may actually increase workload.7 They are not deliberate acts, so we cannot refer to them as thinking per se, even though they may lead to a decision that we deliberately act upon. This is referred to as Type 1 or intuitive processing, whereas decisions that arise from deliberate, thoughtful activity are known as Type 2 or analytical processing. The concept underlying this approach is referred to as dual process theory. The two processing systems, Type 1 and Type 2, differ from each other in a number of ways8 that have been well delineated (Table I.1). In the modern decision-making literature, dual process theory appears to have emerged with the work of Schneider and Shiffrin in 1977,9 although it had been well recognized nearly two centuries earlier, in 1794, by Thomas Paine (Figure I.1):

Any person, who has made observations on the state and progress of the human mind, by observing his own, can not but have observed, that there are two distinct classes of what are called Thoughts; those that we produce in ourselves by reflection and the act of thinking, and those that bolt into the mind of their own accord. I have made it a rule to treat those voluntary visitors with civility, taking care to examine, as well as I was able if they were worth entertaining, and it is from them that I have acquired almost all the knowledge that I have.10

Table I.1 Characteristics of Type 1 and Type 2 approaches to decision making

Characteristic | Type 1 | Type 2
Cognitive style | Heuristic, intuitive | Systematic, analytical
Cognitive awareness | Low | High
Conscious control | Low | High
Verbal | No | Yes
Automaticity | High | Low
Cost | Low | High
Rate | Fast | Slow
Reliability | Low | High
Errors | Normative distribution | Few but large
Effort | Low | High
Predictive power | Low | High
Emotional valence | High | Low
Detail on judgment process | Low | High
Scientific rigor | Low | High

Source: From Croskerry.8

Figure I.1 Thomas Paine (1737–1809), early observer of dual-process decision making.


Dual process theory is now the dominant framework for examining decision making in medicine (Figure I.2).8,11 Cognitive biases, arguably the most important issue in clinical decision making, underlie many of the cognitive failures in the cases described in this book. Typically, they “bolt into the mind” in a reflexive, autonomous fashion; are often uncritically accepted; and unfortunately are not always subjected to the scrutiny, caution, and civility that Paine exercised. Often, but not exclusively, they are associated with Type 1 processing. Pohl12 has described five main characteristics of biased decisions (Table I.2).

Figure I.2 Dual process model for medical decision making. The two principal modes of decision making, automatic and controlled, originally described more than 40 years ago,9 are now commonly referred to as intuitive and analytical, respectively. Intuitive decision making is seen to be driven by four kinds of Type 1 processes and analytical reasoning by a single Type 2 process. Type 2 can override a Type 1 process (executive override), and Type 1 can override a Type 2 process (irrational override). The process is in a dynamic state and can toggle (T) back and forth between the two systems. There is an overall tendency to default to Type 1 processing (cognitive miser function).


Source: From Croskerry.11
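As a purely illustrative toy (not part of the model’s source description, and with arbitrary, hypothetical function names and probabilities), the sketch below mimics the flow in Figure I.2: a default to the fast Type 1 output (the cognitive miser function), occasional engagement of Type 2 monitoring, and the two override paths.

```python
# Toy sketch of the dual-process flow in Figure I.2; the functions and
# probabilities are hypothetical placeholders, not clinical logic.
import random

def type1(case: dict) -> str:
    """Fast, intuitive output: the diagnosis that 'bolts into the mind'."""
    return case["first_impression"]

def type2(case: dict) -> str:
    """Slow, analytical output: the diagnosis after deliberate work-up."""
    return case["worked_up_diagnosis"]

def dual_process(case: dict, p_monitor: float = 0.3, p_irrational: float = 0.05) -> str:
    answer = type1(case)                    # cognitive-miser default to Type 1
    if random.random() < p_monitor:         # Type 2 surveillance engages (toggle)
        analytic = type2(case)
        if random.random() > p_irrational:  # executive override of Type 1...
            answer = analytic
        # ...otherwise Type 1 wins back (irrational override)
    return answer

case = {"first_impression": "musculoskeletal chest pain",
        "worked_up_diagnosis": "acute coronary syndrome"}
print(dual_process(case))
```

The only point of the toy is structural: unless the monitoring step fires and the executive override succeeds, the intuitive answer stands.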

Table I.2 Characteristics of biased decisions

Reliably deviate from reality

Occur systematically

Occur involuntarily

Are difficult or impossible to avoid

Appear rather distinct from the normal course of information processing

Source: From Pohl.12

Stanovich13 has described four major categories of Type 1 processing (Figure I.3):

  1. Processes that are hardwired: The product of evolutionary forces acting on our distant ancestors in the environment of evolutionary adaptedness, when we spent most of our time as hunter–gatherers. They have been selected in the Darwinian sense and are, therefore, in our present-day DNA (genetically transmitted). The metaheuristics (anchoring and adjustment, representativeness, and availability) are examples of such inherited heuristics that may be associated with various biases.

  2. Processes regulated by our emotions: The basic emotions (happiness, sadness, fear, surprise, anger, and disgust) are also evolved, hardwired adaptations (e.g., fear of snakes is universal in all cultures). They may be significantly modified by learning.

  3. Processes established by overlearning: Continued repetition of information and of psychomotor acts leads to habituation so that eventually they may be performed without conscious deliberation (e.g., reciting multiplication tables or driving a car). Thus, knowledge and skills become firmly embedded in our cognitive and behavioral repertoires through overlearning. This allows these processes to be executed quickly, effortlessly, and reliably when needed, without conscious effort.

  4. Processes developed through implicit learning: We generally learn things in two ways—either through deliberate explicit learning, such as occurs in school and in formal training, or through implicit learning, which occurs without intent or conscious awareness. Implicit learning plays an important role in our skills, perceptions, attitudes, and overall behavior. It allows us to detect and appreciate incidental covariance and complex relationships between things without necessarily articulating that understanding. Thus, some biases may be acquired unconsciously. Medical students and residents might subtly acquire particular biases simply by spending time in environments in which others have these biases, even though the biases are never deliberately articulated or overtly expressed to them (i.e., in the hidden curriculum). Examples include the acquisition of biases toward age, socioeconomic status, gender, race, patients with psychiatric comorbidity, and obesity.

Figure I.3 The four subsets of Type 1 processing.


Source: From Stanovich.13

Generally, we talk about thoughts, beliefs, and feelings as being “intuitive,” but Stanovich’s work allows us to go deeper than that. We can develop and acquire “intuitive knowledge” in a variety of ways from the multiple sources he describes, as well as from interactions between the various sources. This may have implications for dealing with biases. Those that are hardwired might be expected to generate the most difficulty in mitigation. Recently, there has been a colloquial tendency to describe such biases as originating from our “reptilian brain”—a term coined by the Yale neuroscientist MacLean in the 1960s.14 In his memoir of spiritual and philosophical awakening, the novelist Lawrence Durrell viewed it as our biggest challenge: “The greatest delicacy of judgement, the greatest refinement of intention was to replace the brutish automatism with which most of us exist, stuck like prehistoric animals in the sludge of our non-awareness.”15 Dealing with our “brutish automatism” and unsticking ourselves from the “sludge of non-awareness” may require extraordinary effort, whereas intuitions acquired and not inherited might be more amenable to mitigation;16 this is discussed further in the closing chapter.

Overall, many of the heuristics that characterize Type 1 decision making serve us well. If the decision maker is well-calibrated—that is, understands the continuous need for vigilance in monitoring the output from Type 1—then the quality of decision making may be acceptable. If monitoring is suboptimal, then calibration of decision making deteriorates and Type 1 becomes unreliable.

The diagnostic failure rate across the board in medicine is 10–15%.17 This sounds better if we frame it as “the success rate of diagnosis is 85–90%,” yet few of us would cross a bridge or make a car trip with those odds. Although not all diagnoses carry life-threatening consequences, many of them do, and this level of failure is simply unacceptable. Some have asked how medical error occurs, given that it is not rocket science. Ironically, if it were rocket science, it would be much easier.18 Rocket science follows the laws of physics, which are mostly immutable and predictable. Diagnosis is less so. It is estimated that at least six clusters of factors have the potential to influence the diagnostic process.19 The individual factors identified within these clusters number in the 40s, all with the potential for significant interactions with one another (Figure I.4), and there are probably more. Perhaps it is surprising that the process fails only 10–15% of the time. Its inherent complexity makes it difficult to study and difficult to teach. Reductionism typifies the traditional scientific approach to complexity: as many independent variables as possible are stripped away to isolate the key one(s). For some research groups, this has led to studies in which the process of interest, in this case diagnostic reasoning, is examined by having subjects read text vignettes on computer screens. The ecological validity of this approach has been seriously challenged.20,21,22 As Gruppen and Frohna stated,

Too often, studies of clinical reasoning seem to take place in a vacuum. A case or scenario is presented to subjects, usually in written form, stripped of any “irrelevant” noise. The traditional methodology of providing clinical cases that are decontextualized and “clean” may not be a particularly valid means of assessing the full range of processes and behaviors present in clinical reasoning in natural settings.20

That is, this methodology, in ignoring the principles of situated cognition, is a significant threat to the external and ecological validity of these findings, and their relevance to understanding real-life clinical practice is thus seriously questioned. In the cases presented in this book, an effort has been made to preserve the original clinical context as much as possible.

One conclusion from the finding of diagnostic failure across the board in medicine17 is that the training in clinical decision making currently provided in most medical schools falls short of what is needed.19 In the domain of patient safety, comparisons are often made with the airline industry. If that industry’s training of pilots resulted in a performance deficit of 10–15%, we would be very quick to say that the training program was deficient, so we should not be reluctant to draw the same conclusion in medicine. One could argue that the diagnostic process has an irreducible uncertainty that will always produce failure of this order of magnitude and that we should simply live with it; alternatively, we might argue that the level of expertise achieved with conventional training in medicine is insufficient. If the latter, then we need to augment the training process so that we can tackle the 10–15% of failures. What we do not know is what is responsible for the overall failure rate in diagnosis, other than that it is a combination of physician and system factors. Various estimates suggest that the physician accounts for most of the failure—probably approximately 75%. Physician failure varies between disciplines. In the visual specialties (dermatology, radiology, and anatomic pathology), it is approximately 1% or 2%, whereas in the more general specialties [family medicine, emergency medicine (EM), and internal medicine], it is approximately 15%.23 Thus, failure appears to be determined by the nature of the task. To put it in the context of signal detection theory, we suspect that the lower failure rate in the visual specialties is due to there being less noise around the signal, whereas in the general specialties there is more. Increased noise arises from the complexity described in Figure I.4 and the correspondingly greater challenge to clinical reasoning. So, it appears we need to find ways to improve clinical reasoning. Within a particular general discipline, one possibility is that 85% of physicians are good at diagnosis and the remaining 15% are not; another possibility is that all physicians fail approximately 15% of the time. Further work is needed to identify where the failure occurs, but the working assumption here is that within a particular discipline, all physicians fail approximately 15% of the time, and our goal is to identify errors when they occur and to find ways to deter them from happening in the first place.
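The signal detection point can be made concrete with a standard equal-variance Gaussian sketch (an illustration under textbook assumptions, not an analysis from the source): with an unbiased criterion placed midway between the noise and signal-plus-noise distributions, the error rate depends only on d′, the separation between them, so a noisier task yields a smaller d′ and more errors.

```python
# Equal-variance Gaussian signal detection sketch; the d-prime values are
# illustrative choices, not estimates derived from the specialties named above.
from scipy.stats import norm

def error_rate(d_prime: float) -> float:
    """Miss rate (= false-alarm rate) with an unbiased criterion at d'/2."""
    return norm.cdf(-d_prime / 2.0)

print(round(error_rate(4.0), 3))  # well-separated signal: ~0.023 (~2% errors)
print(round(error_rate(2.0), 3))  # noisier signal:        ~0.159 (~16% errors)
```

With d′ values chosen this way, the resulting error rates happen to fall in the same range as the 1–2% and roughly 15% figures cited above, which is the intuition behind attributing the difference between the visual and general specialties to noise around the signal.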

Figure I.4 Six clusters of factors that influence the diagnostic process.


Source: From Croskerry.19

The well-known Dreyfus model24 has been used to describe how people acquire competency and expertise. Beginning from a novice stage, they progress through various levels to become experts. The end point is now referred to as “routine” expertise (Figure I.5A), and we might suppose that it is suboptimal in cases in which diagnosis fails. Recent work suggests that the process of acquiring expertise can be augmented to include other features, such as rationality, critical thinking, flexibility, creativity, and innovation, to attain what is now termed “adaptive” expertise (Figure I.5B).19 Adaptive expertise appears to characterize those who have more highly developed metacognitive skills than routine experts.25 This difference between routine expertise and adaptive expertise can be viewed as a mindware gap.26 Mindware is a term coined by Perkins27 to describe the unique operating system software that each individual brain runs on, a product of genetic and acquired influences. A number of specific interventions to close this gap have been proposed19 and are reviewed in more detail in the concluding chapter of this book.

Figure I.5 Augmented effects of multiple cognitive processes to close the mindware gap associated with routine expertise (A) and achieve adaptive expertise (B).


Sources: Originally adapted from Dreyfus and Dreyfus24 and subsequently Croskerry.19

This book uses a selection of real clinical cases to illustrate flaws in thinking, the cognitive errors. They are real, de-identified examples, mostly from the author’s personal experience in emergency medicine, collected over a decade through the 1990s and 2000s. With occasional updates, the collection has been used as a clinical teaching manual continuously to the present, and several cases were added more recently. Although some aspects of emergency practice may have changed since the inception of the manual, little has changed in the cognitive properties of diagnostic decision making. The clinical setting of these cases turns out to be fortuitous—EM covers all disciplines and therefore yields a wide variety of cases. Furthermore, when cases are seen in the emergency department (ED), they are at their most undifferentiated and are more likely to illustrate the complexity of the process. This contrasts with, for example, an orthopedic clinic, in which most cases will have become well differentiated by the time they are referred. Several of the cases described in this book have previously been published in journal articles and book chapters. Four medicolegal cases from the emergency care setting, originally included in the Applied Cognitive Training in Acute-Care Medicine (ACTAM) manual, have been omitted from the current work but may be accessed in the ED Legal Letter,28 in which they were first published. Performing cognitive autopsies on cases from EM is particularly useful because this environment has been described as a “natural laboratory for medical error.”29

There has been some debate in the literature about the use of the word “error.” There is a historical tendency to view it as a negative term, one that suggests someone is at fault and should be blamed. Psychologists, the people who mostly study error, do not view it negatively but, rather, as a piece of human behavior that is worthy of study in its own right. The current practice in medicine is not to use “medical error” but to talk about “patient safety.” The word “error” is a more direct term, however, and is used in this book in the psychological sense. Errors are pieces of our behavior that need to be studied so that we might learn from them. They are not a tool for attribution or blaming.

Similarly, there is a tendency to view “bias” as a negative term; in some cases (e.g., “racial bias”), it carries an obvious and undesirable negative connotation. Many will associate “biased judgment” with negative character traits, and not wishing to be associated with this undesirable attribution has probably driven some people away from accepting it as an important feature of all human behavior—one that deserves our attention and study. This might be especially true of the medical establishment, whose members may view themselves as holding to higher standards of propriety and being less judgmental of others. Again, however, cognitive scientists do not view bias as a negative attribute but, rather, as an aspect of cognitive behavior that needs to be studied to improve our understanding of it. Throughout the cognitive revolution of the 1970s and onwards, cognitive scientists happily identified themselves with the “heuristics and biases” literature. In earlier work, we attempted to circumvent the negative associations of bias by referring to cognitive bias as “cognitive dispositions to respond” and affective bias as “affective dispositions to respond,”30,31 but “bias” appears to have prevailed in the literature.

Such narrative accounts as are presented in this book are something of a tradition in medicine and are recognized as a powerful tool for learning.32,33,34 This particular collection vividly illustrates some classic errors that physicians will encounter in the course of their careers. They are not atypical or isolated; they happen in every medical setting, every day, everywhere in the world. In some cases, there will be an unambiguous demonstration of a particular bias or flaw in thinking, or some combination of these, but at other times the error will be less apparent and we will need to infer that erroneous decision making has occurred, both from the circumstances and from what we can construe from the medical and nursing record and the outcome. Inference is necessary because we never really know what is going through a clinician’s mind. In fact, neither do clinicians themselves; many are unaware of what they are thinking and how they arrive at their decisions. Thus, any process that attempts to examine the ways in which clinicians think will face this problem. However, although thinking and feeling are typically covert processes, this should not discourage us from trying to take a closer look at them and understand some of the pitfalls. In addition to the cognitive errors noted previously, other conditions that occur in the hospital setting (error-producing conditions,35 other systemic errors, transitions of care, resource limitations, etc.) are also described.

Another feature of these cases is that the majority have significant outcomes (Appendix A reads like a “what’s what” of emergency medicine, containing some of the most interesting diagnoses that can be encountered). This is no accident—if a physician misdiagnosed an ankle sprain as an ankle contusion, the outcome would not be particularly different, and the case itself would not be very interesting from a learning standpoint; misdiagnosis usually comes to light only when the outcome is serious. Consequently, many of the cases presented in this book are characterized by their graphic nature and are mostly unrepresentative of the spectrum of routine cases usually seen in EM. Using such “exaggerated,” distinctive examples has been shown to facilitate the processing of information and enhance learning.36,37 In addition to illustrating a variety of cognitive failings, the cases also contain important teaching points about a wide variety of significant illnesses that students and some clinicians might not otherwise encounter.

The book addresses another major problem in this area: the language that is used to describe error and, in particular, cognitive error. It provides a glossary of definitions and descriptors for specific terms used in the commentaries that follow each case. This is important because the language will not be familiar to many. An effort has been made to keep the terminology as basic as possible, but at times the original terms used by the cognitive psychologists who pioneered this work are retained. It is important to familiarize ourselves with some of these terms because they have already begun to enter the medical literature and become part of the medical lexicon.

Finally, this process is one of hindsight, and hindsight itself may be subject to a significant bias. It is always easier to be wise after the event. There are a variety of interesting psychological experiments that illustrate how unreliable our basic perceptual processes are in the moment. One perceptual phenomenon in particular, “inattentional blindness,” is demonstrated in several of the cases. However, the problem is further compounded when we recall things. We often become even more vulnerable to inaccuracies and selective reminiscence, which may result in “faking good” or “feeling bad.” Faking good is an effort to make ourselves look better than we were at the time. This may help our egos along a little, but the failure to fully appreciate faults in our performance usually means we are destined to repeat them. On the other hand, when we feel bad about what we appear to have done, we may not be being entirely fair to ourselves. It is difficult to re-create the ambient conditions under which the original event took place, and subtle cues or other factors that significantly influenced behavior at the time may not be apparent in retrospect. Thus, there is a danger of overpunishing ourselves for something we may not have been entirely responsible for, and perhaps of subjecting ourselves to too much self-recrimination. Nevertheless, hindsight allows us an opportunity to learn from our mistakes; the process itself need not necessarily be biased.

Through the approach and analysis offered here, there is a good chance the reader will develop an understanding of the frequency and normalcy of cognitive failings and how context and ambient conditions can influence clinical decision making. It is hoped that an appreciation will be gained for the circumstances under which biases and other cognitive failures occur. Such insights provide significant opportunities to evolve realistic strategies for avoiding them or coping with them when they do occur. They are a fact of our lives. General strategies to mitigate cognitive error and produce overall improvement in cognitive habits are reviewed in the closing chapter of this book.

References

1. Makary MA, Daniel M. Medical error: The third leading cause of death in the United States. BMJ. 2016; 353: i2139.

2. Tehrani ASS, Lee HW, Mathews SC, Shore A, Makary MA, Pronovost PJ, Newman-Toker DE. 25-Year summary of US malpractice claims for diagnostic errors 1986–2010: An analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013; 22(8): 672–680.

3. Pinker S. Enlightenment Now: The Case for Reason, Science, Humanism and Progress. New York, NY: Viking, 2018.

4. Habermas J. The Philosophical Discourse of Modernity: Twelve Lectures (F. Lawrence, Trans.). Cambridge, MA: Massachusetts Institute of Technology, 1987.

5. National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. Washington, DC: National Academies Press, 2015.

6. Croskerry P, Cosby K, Graber M, Singh H. Diagnosis: Interpreting the Shadows. Boca Raton, FL: CRC Press, 2017.

7. Moss SA, Wilson SG, Davis JM. Which cognitive biases can exacerbate our workload? Australas J Org Psychol. 2016; 9: 1–12. doi:10.1017/orp.2016.1

8. Croskerry P. Critical thinking and reasoning in emergency medicine. In: Croskerry P, Cosby KS, Schenkel S, Wears R (Eds.), Patient Safety in Emergency Medicine. Philadelphia, PA: Lippincott Williams & Wilkins, 2008; 213–218.

9. Schneider W, Shiffrin RM. Controlled and automatic human information processing: I. Detection, search, and attention. Psychol Rev. 1977; 84(1): 1–66.

10. Paine T. The Age of Reason. San Bernardino, CA: Minerva, 2018.

11. Croskerry P. A universal model for diagnostic reasoning. Acad Med. 2009; 84(8): 1022–1028.

12. Pohl RF. Cognitive illusions. In: Pohl RF (Ed.), Cognitive Illusions: Intriguing Phenomena in Thinking, Judgement and Memory. Oxford, UK: Routledge, 2016; 3–22.

13. Stanovich KE. Rationality and the Reflective Mind. New York, NY: Oxford University Press, 2011; 19–22.

14. MacLean P. The Triune Brain in Evolution: Role in Paleocerebral Function. New York, NY: Plenum, 1990.

15. Durrell L. A Smile in the Mind’s Eye: An Adventure into Zen Philosophy. London, UK: Open Road Media, 2012.

16. Croskerry P. Cognitive bias mitigation: Becoming better diagnosticians. In: Croskerry P, Cosby K, Graber M, Singh H (Eds.), Diagnosis: Interpreting the Shadows. Boca Raton, FL: CRC Press, 2017.

17. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005; 165(13): 1493–1499.

18. Croskerry P. Not rocket science. CMAJ. 2013; 185(2): E130. doi:10.1503/cmaj.120541

19. Croskerry P. Adaptive expertise in medical decision making. Medical Teacher. 2018; 40(8): 803–808. doi:10.1080/0142159X.2018.1484898

20. Gruppen LD, Frohna AZ. Clinical reasoning. In: Norman GR, van der Vleuten CP, Newble DI (Eds.), International Handbook of Research in Medical Education. Boston, MA: Kluwer, 2002; 205–230.

21. Croskerry P, Petrie DA, Reilly JB, Tait G. Deciding about fast and slow decisions. Acad Med. 2014; 89(2): 197–200.

22. Royce CS, Hayes MM, Schwartzstein RM. Teaching critical thinking: A case for instruction in cognitive biases to reduce diagnostic errors and improve patient safety. Acad Med. 2019; 94(2): 187–194. doi:10.1097/ACM.0000000000002518

23. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med. 2008; 121(5 Suppl): S2–S23. doi:10.1016/j.amjmed.2008.01.001

24. Dreyfus SE, Dreyfus HL. A five-stage model of the mental activities involved in directed skill acquisition. Supported by the U.S. Air Force, Office of Scientific Research (AFSC) under contract F49620-79-C-0063 with the University of California, Berkeley, 1980; 1–18 (unpublished study).

25. Carbonell KB, Stalmeijer RE, Könings KD, Segers M, van Merriënboer JJG. How experts deal with novel situations: A review of adaptive expertise. Educ Res Rev. 2014; 12: 14–29.

26. Stanovich KE. Rational and irrational thought: The thinking that IQ tests miss. Scientific American Mind. 2009; 20(6): 34–39.

27. Stanovich KE. What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven, CT: Yale University Press, 2009.

28. Croskerry P. Achilles heels of the ED: Delayed or missed diagnoses. ED Legal Lett. 2003; 14: 109–120.

29. Croskerry P, Sinclair D. Emergency medicine—A practice prone to error? CJEM. 2001; 3(4): 271–276.

30. Croskerry P. Achieving quality in clinical decision making: Cognitive strategies and detection of bias. Acad Emerg Med. 2002; 9(11): 1184–1204.

31. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003; 78(8): 775–780.

32. Murtagh J. Cautionary Tales: Authentic Case Histories from Medical Practice. New York, NY: McGraw-Hill, 2011.

33. Pilcher CA. Medical malpractice insights (MMI): Learning from lawsuits. https://madmimi.com/p/fa0e2d?fe=1&pact=76716-148560539-8457174274-2b6f035f60fb4d603a886574b0a5af25e2c8ab1d. Accessed December 29, 2018.

34. Howard J. Cognitive Errors and Diagnostic Mistakes: A Case-Based Guide to Critical Thinking in Medicine. Cham, Switzerland: Springer, 2019.

35. Croskerry P, Wears RL. Safety errors in emergency medicine. In: Markovchick VJ, Pons PT (Eds.), Emergency Medicine Secrets (3rd ed.). Philadelphia, PA: Hanley & Belfus, 2003; 29–37.

36. Dror IE, Stevenage SV, Ashworth A. Helping the cognitive system learn: Exaggerating distinctiveness and uniqueness. Appl Cognit Psychol. 2008; 22(4): 573–584.

37. Dror I. A novel approach to minimize error in the medical domain: Cognitive neuroscientific insights into training. Medical Teacher. 2011; 33(1): 34–38.