Innovation in Clinical Practice 


S. Nassir Ghaemi


PRINTED FROM OXFORD MEDICINE ONLINE. © Oxford University Press, 2016. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Medicine Online for personal use (for details see Privacy Policy and Legal Notice).

date: 20 May 2019

Almost everyone can and should do research . . . because almost everyone has a unique observational opportunity at some time in his life which he has an obligation to record. . . . If one considers the fundamental operations or methods of research, one immediately realizes that most people do research at some time or another, except that they do not call their activity by that name. There are seven operations. . . . In simple language they are counting, sorting, measuring, comparing, nature-study, guess testing, and reappraisal. . . . Guess testing is of course what most people think of when the word research is mentioned; except that it is bad manners to call a guess a guess. It should be called an hypothesis. Let me make one plea. Guessing becomes merely a game unless it is also a plan for action. It is a waste of time elaborating untestable hypotheses.

—John Cade

Every new class of agents in psychopharmacology has begun with clinical innovation and novel observation. This was the case with reserpine, which was observed to be associated with depression and thereby led to the development of the MAOIs. The phenothiazines were used in anesthesia, observed to have major tranquilizing effects, and later examined clinically in psychosis, with benefit. The TCAs, chemical derivatives of the phenothiazines, were developed for schizophrenia, yet were observed to improve depression. Lithium was discovered by John Cade, who used it in a small group of selected manic patients (see following). And carbamazepine was extended to bipolar illness based on the innovative observation of benefit for mood in epilepsy. In every case, a fundamental new departure in psychopharmacology began with clinical innovation. It never began in a research protocol with well-thought-out methodologies, hypotheses, outcome measures, and guidelines for ethical conduct.

This history is not unique to psychopharmacology. Antibiotics famously began in the serendipitous work of Alexander Fleming. Antihypertensives also were discovered based on unexpected observations. Clinical innovation is the fount of research discoveries for all of medicine, and psychiatry is no different.

Clinical innovation occurs, by definition, outside of formal research protocols. There is a risk that treatment guidelines of any kind, however well-intentioned, will impede clinical innovation unnecessarily. On the other hand, there are limits to acceptable innovation, and one can imagine instances of innovation that would be unethical.

The issue at hand is the contrast between “research,” bound by strict rules and conducted by “scientists,” as opposed to “treatment,” conducted by clinicians. In this chapter, it will be held that the two activities have much in common, and that good clinicians should do “research”; i.e., innovate based on their experience. Furthermore, good research needs to be connected to experience from clinical practice.


Part of the problem is that the academic bioethics community has sought to cleanly and completely separate clinical practice from research. In the Belmont Report of the National Commission for the Protection of Human Subjects, for instance, an attempt was made to separate “practice,” where “interventions are designed solely to enhance the wellbeing of an individual patient or client and that have a reasonable expectation of success” (Belmont Report, p. 3), from “research,” defined as “an activity designed to test an hypothesis, permit conclusions to be drawn, and thereby to develop or contribute to generalizable knowledge” (Belmont Report, p. 3). In fact, the clinician/researcher engaging in clinical innovation is not acting with solely one set of interests in mind, but two. On one hand, the clinician/researcher wants to help the individual patient; on the other, the clinician/researcher wants to gain some experience or knowledge from his observation.

Some in the bioethics community set up this scenario as a necessary conflict. They seem to think that a choice must be made: either the clinician must choose to seek only to make the patient better, without learning anything in the process; or the clinician must seek to learn something, without any intention at all to improve the patient’s lot. As with so much in life, there are in fact multiple interests here, and there is no need to insist that those interests do not overlap at all. First and foremost in any clinical encounter is the clinician’s responsibility to the individual welfare of the patient. Any innovative treatment, observation, or hypothesis cannot be allowed to lead to complete lack of regard for the patient’s welfare. Unfortunately, the Belmont Report and much of the mainstream bioethics literature presumes complete and unavoidable conflict of these interests:

When a clinician departs in a significant way from standard or accepted practice, the innovation does not, in and of itself, constitute research. The fact that a procedure is “experimental,” in the sense of new, untested, or different, does not automatically place it in the category of research. . . . [but] the general rule is that if there is any element of research in an activity, that activity should undergo review for the protection of human subjects. (Belmont Report, p. 4)

This approach leads, it could be argued, to both uncontrolled clinical innovation and over-regulated formal research. The ultimate rationale for clinical innovation is evidence from the history of psychopharmacology that such innovation is essential to the research process. Furthermore, since such innovation, by definition, occurs outside of formal research protocols, if it is granted legitimacy, then clinicians will need to think about how to provide an ethical framework to support it. Two case scenarios may help clarify the subject.

Case Scenarios

Dr. X primarily treats mood disorders and is interested in new drugs because many of his patients have failed treatment with “standard” medications. Many new medications become available to practitioners after being approved by the FDA for disorders other than depression (e.g., epilepsy). Dr. X begins to give these medications to some of his patients with mood disorders, and soon most of his patients are taking various combinations of them. Dr. X never publishes his experience, which is unfortunate because early studies of those medications in mood disorders are sparse, and the studies that do appear are of necessity quite preliminary (uncontrolled, non-randomized, small case series). When asked, Dr. X strongly asserts his beliefs regarding the benefits of certain medications he uses and the lack of utility of others.

Here is another scenario: Dr. Y also likes to use new drugs, similarly for mood disorders, although the medications are FDA-indicated only for other conditions. After using these agents in 5–10 patients, she usually publishes her experience. Sometimes, she then becomes involved in obtaining funding for more rigorous studies of the medications that appear potentially useful based on her publications. In some cases, her early experience is confirmed by randomized studies (and occasionally new FDA indications); in other cases, it is not.

Neither Dr. X nor Dr. Y prepares a protocol, obtains Institutional Review Board (IRB) approval, or uses a research-based informed consent for the clinical use of any of these medications. Are these doctors practicing ethically?

The classic bioethics approach would hold that neither doctor is practicing ethically, because neither obtains IRB approval for non-standard practice. One could claim, in contrast, that Dr. Y is practicing ethically, because she at least is testing her hypotheses, while Dr. X veers too far from scientifically proven treatments and practices solely on the basis of his personal beliefs.

The Story of Lithium

The history of psychopharmacology, such as the story of lithium, may provide a model for understanding how to promote effective innovation in clinical practice. John Cade was an Australian psychiatrist who decided to test a hypothesis he had developed while interned in a Japanese prisoner-of-war camp during the Second World War: that mania and depression represented abnormalities of nitrogen metabolism (excess or deficiency states). After collecting urine samples from patients with mania, depression, and schizophrenia, and from normal controls, he injected them into guinea pigs, all of which died. He concluded that the nitrogenous product, urea, was probably acting as a poison, but a similar dose of pure laboratory urea (equivalent to the urea concentrations in urine) did not have the same effect. Thus Cade began to search for modifying factors in the urine of these patients (such as uric acid) that might be enhancing the toxicity of urea. As part of his effort to study uric acid, he noted that lithium urate, which he chose simply because it was the most soluble form of uric acid, actually reduced the toxicity of urea. Although he initially assumed that this protective effect was due to the uric acid, he included a different lithium salt (lithium chloride) just to be certain. To his great surprise, he found that the chloride salt of lithium provided the same protection from urea toxicity, and, even more to his surprise, the guinea pigs were calm when he handled them, rather than showing the usual agitation triggered by handling. Cade had just witnessed the psychotropic effect of lithium.

Cade worked in an era before the existence of mechanisms for formal review of research protocols. Yet his experimental subjects were protected by his own Hippocratic ethic, which led him to try lithium on himself before giving it to a patient: “How to proceed? Primum non nocere. . . . There is always the number one experimental animal, oneself.” Thus Cade was able to detail the signs of lithium toxicity, along with its benefits, in his first report in 1949. In fact, his first patient, W. B., whose case Cade initially regarded as the most successful, developed many side effects, and Cade repeatedly took him off lithium only to feel forced to resume it due to W. B.’s severe chronic mania. A year later, during an admission for severe mania, Cade resumed lithium at apparently therapeutic levels, but W. B., who probably had an infection, suffered a number of seizures, lapsed into coma, and died. Cade was deeply concerned and abandoned further use of lithium due to its toxicity. It was left to other researchers to follow up on his discovery and determine how lithium might safely be used to treat mania. In fact, the first randomized clinical trials in psychiatry were conducted in the early 1950s as follow-up assessments of Cade’s original report of six cases.

Would we have lithium if Cade were working today? It is unlikely. If Cade worked in a hospital today, he probably would not have been able to obtain enough animal data to justify using lithium in humans, and he probably would not have been allowed to experiment on himself. Most IRBs would not have allowed Cade to give lithium to acutely manic patients, given the state of his knowledge of the drug.

As noted in the introductory quote to this chapter, drawn from a presidential address given to the Australian and New Zealand College of Psychiatry near the end of his life, Cade identified “guess testing,” comparable to the concept of clinical innovation, as essential to psychopharmacological research. And the most essential aspect of innovation, according to Cade, is that there should be hypothesis testing. Innovative use of medications should not be random; it should be driven by legitimate hypotheses, and therefore provable or disprovable. Indeed, Pasteur’s famous maxim about chance favoring the prepared mind can be interpreted as involving the combination of serendipity and hypothesis testing. If one has hypotheses, and one is actively seeking to test them, then one is more likely to come across “chance” findings that others may either not observe or not experience.

It is also important to emphasize, nonetheless, that chance findings can be noticed by an alert physician even when no prior hypothesis exists. For instance, no hypothesis of antidepressant effect preceded the observation that the use of reserpine led to depression, or that imipramine used in schizophrenia improved depression. These two innovative observations, not initially driven by hypotheses, set in motion the work that ultimately led to a Nobel Prize for unraveling the biochemical mechanisms of these agents. All of this work started with clinical observation without hypothesis. Much clinical innovation begins with the use of a drug for a previously untried indication, which leads to novel observations, which in turn yield a new hypothesis that can then be tested.

One major path of clinical innovation is the path described by Cade, where one possesses an hypothesis, and one is experimentally interested in testing the hypothesis, and new findings occur as a result. Another path of clinical innovation occurs when alert clinicians watch for unexpected effects when using a drug for legitimate purposes. Without having an initial hypothesis, these clinicians find something they didn’t expect. This truly serendipitous finding leads to hypotheses that can then lead to further clinical innovation and eventually more organized research.

So innovation begins in clinical practice without proof, and then gets proven in research protocols, returning to wider clinical practice after it is proven. Conceived in terms used by evidence-based medicine (EBM), innovation in psychopharmacology more commonly proceeds bottom-up, rather than top-down. Innovation proceeds usually from low-level case reports, through middle-level naturalistic and non-randomized studies, to high-level randomized studies. Less frequently does it go in the other direction, from initial randomized studies to clinical experience.

Let’s illustrate these issues with our two scenarios. Dr. X uses all kinds of medications without attempting to organize, quantify, or publish his experience. Furthermore, he holds strong beliefs about the benefits and risks of medications based almost solely on his clinical experience. Dr. Y uses the same medications, but reports her experience to the scientific community, and is involved in research protocols based on her pilot uncontrolled experience.

What did Cade do? Cade used lithium with a research hypothesis in mind. He further tried it on animals before humans, given the absence of any prior human use. He even tried it on himself before using it with other patients. He reported his experience immediately to the scientific community. And when he experienced a bad outcome, he reported it also and curtailed his use of the medication. Other researchers were able to conduct rigorous research protocols based on Cade’s early reports. Cade and Dr. Y have much in common, which we think helps us identify the ideal characteristics of ethical innovation.

The Ethics of Clinical Research

A commonly cited means by which clinical innovation may differ from formal research has to do with the clinician’s intention. At one level, it often seems to be assumed that if the intention of the clinician is primarily to treat the individual patient, then it is not research; but if the intention is primarily to advance knowledge, then it is research. This dichotomy is set up to a great extent by the Helsinki Declaration, which seemed to state that research by definition is not primarily concerned with the individual patient’s welfare.

Psychopharmacology researchers know that, in many cases, a research study both advances knowledge and directly benefits the patient in equal degrees. This is especially the case with open-label studies. How one can differentiate such research from clinical innovation becomes somewhat unclear. What if the actions in two cases are the same, such as giving unproven drug X openly, but in one setting as part of a research protocol and in the other as part of daily clinical activity? Is the primary motivation of the doctor the sole difference that makes the activity research in one case, therefore requiring informed consent and IRB review, while in the other it is clinical innovation, which proceeds without oversight?

One could argue that clinical innovation does not require this kind of oversight, though it is, as Cade stated, part of research. The either/or dichotomy derived from the Helsinki Declaration often does not help us in understanding the research process in psychopharmacology.

The only sensible reading of the historical record in psychopharmacology is that there is a factual link between clinical innovation and progress in formal clinical research. Clinical innovation is a legitimate activity, because it often serves as a source of ideas and observations that later lead to classical research conducted in the formal manner of protocols, IRB reviews, and rigorous designs. Sometimes clinical practice can yield important knowledge beyond what can be gleaned from randomized clinical trial protocols. This chapter has reviewed some historical examples, such as John Cade’s discovery of lithium. Thus, clinical innovation is a legitimate and important activity.

Does this approach conflict with federal standards, such as the Belmont Report, which has been identified by the NIH Office of Human Subjects Research as the philosophical foundation for its ethical regulations? As mentioned before, the Report leaves itself open to a strict interpretation when it asserts that “any element of research” requires formal review. Yet even the NIH notes that the Report is

not a set of rules that can be applied rigidly to make determinations of whether a proposed research activity is ethically “right” or “wrong.” Rather, these regulations provide a framework in which investigators and others can ensure that serious efforts have been made to protect the rights and welfare of research subjects. (NIH, 1993, p. 4)

In conclusion, it is suggested here that the best clinical research is conducted by active clinicians, and that the best clinical work is conducted by active researchers. The strict wall separating pure research from pure clinical practice is at best a fiction, and at worst a dumbing down of both activities. Clinical innovation is the kind of activity that bridges this gap. Clinical innovation should be legitimized, accepted, and even encouraged within the framework of ethical guidelines, such as those suggested here, so as to avoid the alternative extremes of indiscriminate practice on one hand and over-regulation of all research on the other.

Selected References

The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979. Accessed August 6, 2018.

Ghaemi, S. N., & Goodwin, F. K. (2007). The ethics of clinical innovation in psychopharmacology. Philosophy, Ethics, and Humanities in Medicine, 2, 26.

NIH (1993). Guidelines for the Conduct of Research Involving Human Subjects at the National Institutes of Health. Washington, DC: U.S. Department of Health and Human Services, Public Health Service.