
Quarantine: Spatial Strategies
Author(s): Andrew Cliff and Matthew Smallman-Raynor

DOI: 10.1093/med/9780199596614.003.0003
PRINTED FROM OXFORD MEDICINE ONLINE (www.oxfordmedicine.com). © Oxford University Press, 2020. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Medicine Online for personal use (for details see Privacy Policy and Legal Notice).

date: 24 February 2020

  1. 3.1 Introduction

  2. 3.2 History of Quarantine

    • International Sanitary and Health Regulations

    • Quarantine: The United States, 1878–2010

    • Quarantine Islands

  3. 3.3 Isolation

  4. 3.4 Quarantine and Isolation Today

    • The Role of Movement

    • Communicable Disease Consequences of Change

    • Estimating the Impact of Quarantine and Isolation

    • Summary

  5. 3.5 Conclusion

3.1 Introduction

…the last and greatest art is to limit and isolate oneself.

Johann Wolfgang von Goethe, 20 April 1825

Quarantine and isolation are the oldest methods used to try to prevent the geographical spread of communicable diseases between humans (Figure 3.1). The principle is simple and obvious – prevent spatial interaction between an infective or a fomite and a susceptible and the spread of infection is inhibited. Indeed, before the germ theory of diseases was available and antibiotics and vaccines developed, quarantine and isolation were the only methods by which the geographical spread of infectious diseases could be checked. And so our substantive discussion of methods of control in the next four chapters begins here with this most ancient of approaches, still used in certain circumstances today. We begin by defining what we mean by quarantine and isolation, and then consider each in turn.

Figure 3.1 Historic attitudes to communicable diseases. Many infections, of which leprosy is the prime example, often caused sufferers to be stigmatised socially and to be physically isolated. In this watercolour by Richard Tennant, a leper warns of his approach by ringing a bell. Supported by his pointing pole, he has cleared a village street of adults leaving only an unknowing infant by the roadside to witness his passage.


Source: Wellcome Library, London.

Quarantine

The fifteenth edition of the American Public Health Association’s bible, Control of Communicable Diseases of Man (Benenson, 1990, pp. 502–6), specifies very strict definitions for the terms quarantine and isolation. Quarantine is used to denote restrictions upon the activities of well persons or animals (susceptibles, S in Figure 1.19) who have been exposed to a case of a communicable disease during its period of communicability. This is to prevent disease transmission (I→S in Figure 1.19) going beyond S if S should fall ill as a result of the contact with I. Failure to intercede would potentially allow the chains of infection to be maintained. Geographically, defensive isolation is employed as described in Section 1.4 and Figure 1.20. Quarantine may be absolute, so that the S population has its freedom of movement curtailed for a period of time up to the longest usual incubation period of the disease in question. Modified quarantine is a selective, partial limitation upon the movements of those S who have come into contact with an I. It is designed to meet particular situations such as the exclusion of children from school, or the exemption of immunes (e.g. by vaccination) or recovereds, R, from the provisions applied to S. For modified quarantine to be successful, two elements are essential: personal surveillance, so that a person who becomes infected is removed from circulation, and segregation – the separation of some part of S from the herd for control or observation, to protect uninfected from infected portions of a population. Examples of segregation include the removal of susceptible children to the homes of immunes, or the establishment of a sanitary boundary (cordon sanitaire) between susceptibles and infectives (cf. Sections 1.2–1.4).

Isolation

In contrast to quarantine, isolation refers to action taken with the infected rather than the susceptible population to prevent the transmission I→S. Isolation represents separation, for the period of communicability, of infectives from others in such a way as to prevent or limit the direct or indirect transmission of the infectious agent from I to S. This represents offensive containment in the terms of Figure 1.20. The US Centers for Disease Control and Prevention specifies seven grades of isolation. The most general approach is strict isolation, which is designed to prevent the transmission of highly contagious or virulent infections that may spread by both air and contact. The patient is isolated in a private room with, ideally, negative pressure to surrounding areas, and barrier nursed. The other categories are less restrictive versions of strict isolation: contact isolation and respiratory isolation for less highly transmissible infections, in which patients with the same pathogen may share a room; tuberculosis isolation, with a specially ventilated room and closed door but generally less severe barrier nursing; and enteric, secretion and body fluid precautions to prevent contamination of the clothing and medical staff who then enter general circulation.

To some extent, these definitions look like splitting hairs. The end result will be the same – preventing the mixing of susceptible and infected persons in order to break the chains of infection from one person to another. In quarantine, the means used to prevent mixing focus on the susceptibles; in isolation, the focus is upon the infectives. In this book, we have followed the distinction drawn by Benenson. But we accept that much of the literature uses the term quarantine generically to refer to the separation of components of the S and I populations. This is in line with both the spirit of the definition in the Oxford English Dictionary and common usage:

Quarantine is a period of isolation (originally 40 days) imposed on an infected person or animal that might otherwise spread a contagious disease, especially on one who has just arrived from overseas or has been exposed to infection.
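The compartmental logic behind these definitions can be illustrated with a toy model. The following is a minimal sketch, not drawn from this chapter, using a simple discrete-time SIR model with purely illustrative parameter values: quarantine acts on the susceptibles and is represented here as a reduction in the effective transmission rate, while isolation acts on the infectives and is represented as an increase in their removal rate.

```python
# Minimal discrete-time SIR sketch (illustrative parameters only) comparing
# no intervention, quarantine (acting on S) and isolation (acting on I).

def sir_epidemic(beta, gamma, s0=0.99, i0=0.01, steps=200):
    """Iterate S, I, R fractions; beta = transmission rate, gamma = removal rate.
    Returns (peak prevalence, final attack rate)."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i      # the I->S transmission term
        new_rec = gamma * i         # removal of infectives
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak, r

beta, gamma = 0.5, 0.2              # hypothetical values, R0 = 2.5

baseline = sir_epidemic(beta, gamma)
# Quarantine restricts the mixing of exposed susceptibles, which lowers
# the effective transmission rate beta.
quarantined = sir_epidemic(beta * 0.5, gamma)
# Isolation removes infectives from circulation, which shortens their
# effective infectious period, i.e. raises the removal rate gamma.
isolated = sir_epidemic(beta, gamma * 2.0)

for label, (peak, attack) in [("baseline", baseline),
                              ("quarantine", quarantined),
                              ("isolation", isolated)]:
    print(f"{label:>10}: peak prevalence {peak:.3f}, final attack rate {attack:.3f}")
```

Either intervention reduces both the epidemic peak and the final attack rate relative to the baseline, which is the sense in which both strategies "break the chains of infection" despite targeting different compartments.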

3.2 History of Quarantine

The large-scale practice of quarantine, as we know it today, began during the fourteenth century in an effort to protect coastal cities in Italy from plague epidemics (see Sections 1.2–1.4). The earliest attempts were made by Ragusa and Venice, where ships arriving from infected ports were required to sit at anchor for 40 days before landing. This practice, called quarantine, was derived from the Italian quaranta giorni (40 days). As described in Section 2.5, the modern international development of ideas of quarantine is founded in the International Sanitary Conferences which took place from 1851, and whose primary purpose was to develop protocols to protect the world’s peoples from the cholera pandemics which had swept the globe in regular waves from the early 1830s. Between the late nineteenth century and the inter-war period, the International Sanitary Conventions evolved to include an increasing number of so-called ‘quarantine diseases’ (cholera, plague, yellow fever, smallpox and louse-borne typhus fever; see Figure 2.22). From 1907, the Office International d’Hygiène Publique (Section 2.5) was charged with overseeing the international codification of procedures for quarantine and its associated surveillance and, in the aftermath of the First World War, the Paris Office continued its work in a formal collaboration with the League of Nations Health Organisation.

International Sanitary and Health Regulations

When the newly-created World Health Organization assumed responsibility for the international quarantine regulations in 1948, quarantine practice and procedure varied considerably from one country to another and the general situation was confused (World Health Organization, 1958). The International Sanitary Conventions then in force had been drawn up at different times, each with a specific objective in view. None completely replaced its predecessors because different countries adhered to different conventions or groups of conventions. Furthermore, since the adoption of the conventions, conditions had changed; hence they did not take account of the new methods available for the control of several of the diseases they covered, nor were they framed to deal adequately with the greatly increased volume and speed of international traffic (see Section 3.4).

It fell to the First World Health Assembly (1948) to replace this multiplicity of conventions by a single code based on modern epidemiological principles, and to provide an international instrument which could be adapted to changing conditions without the delays imposed by the formalities at each modification of signature and ratification. Provision for such an instrument existed in the Constitution of the World Health Organization, which, in Article 21, states that the World Health Assembly shall have the authority to adopt regulations concerning sanitary and quarantine requirements and, in Article 22, that regulations so adopted shall come into force for all Member States after due notice has been given of their adoption by the Health Assembly, except for such Members as may notify the Director-General of rejection or reservations within the period stated in the notice.

The International Sanitary Regulations (1951)

In the event, it was not until the Fourth World Health Assembly that final agreement was reached, although preliminary studies had already been undertaken (1946–48) on the possibility of drawing up a single set of regulations to replace the sanitary conventions. The regulations were signed off as WHO Regulations No. 2 on 25 May 1951. These regulations covered and still cover all forms of international transport – ships, aircraft, trains and road vehicles. They deal with the sanitary conditions to be maintained and measures to be taken against diseases at seaports and airports open to international traffic, including measures on arrival and departure, sanitary documents and sanitary charges. The Regulations represent “the maximum measures applicable to international traffic which a State may require for the protection of its territory against the quarantinable diseases…”. The same principle – a minimum of interference with traffic and of inconvenience to passengers – is expressed in the stipulation that sanitary measures and health formalities “shall be initiated forthwith, completed without delay and applied without discrimination”.

In the 1951 Regulations, there were special provisions relating to each of the (then) quarantine diseases (cholera, plague, yellow fever, smallpox, louse-borne typhus fever and louse-borne relapsing fever). These indicate the conditions under which vaccination may be required as a condition of entry into a country (Figure 3.2); conditions entailing the de-insecting of passengers, their isolation or surveillance; conditions entailing the de-ratting of vessels; and the measures to be taken in the case of “suspect” or “infected” ships and aircraft.

Figure 3.2 Vaccination certificates. The International Sanitary Regulations (1951) laid out certain conditions under which the possession of a valid certificate of vaccination against yellow fever (upper), smallpox (centre) and cholera (lower) was a requirement for entry of an international passenger into a particular territory. Failure to produce a valid certificate could result in the placing of the passenger under a period of isolation or surveillance that reflected the incubation period of the disease.


The Regulations, as they were first adopted, followed the example of the former international sanitary conventions and included provisions relating to the Mecca Pilgrimage. More than once in the nineteenth century the Pilgrimage had resulted in the catastrophic international spread of diseases, and it was to meet such dangers that an international sanitary convention for the Pilgrimage had been drawn up at the 1892 International Sanitary Conference in Venice, to give effect to conclusions reached at previous conferences in 1866 and 1874. Even in 1951, when the International Sanitary Regulations were adopted, it was considered that the Mecca Pilgrimage still needed special international sanitary controls.

The International Health Regulations (1969 and 2005)

The International Sanitary Regulations were revised and adopted by the WHO under the new title of the International Health Regulations in 1969. The number of diseases covered by the regulations was reduced from six to four (cholera, plague, yellow fever and smallpox; see Figure 2.22). Smallpox was subsequently excluded by a regulatory amendment (1981) following its global eradication. Faced with the global health challenges posed by new and resurging infectious diseases in the late twentieth and early twenty-first centuries, WHO issued a fully revised set of International Health Regulations in 2005. As noted in Section 2.7, rather than focusing on a small and prescribed set of diseases, the 2005 regulations ushered in a new global public health surveillance regime that requires member states to notify WHO of all events which may constitute a public health emergency of international concern (Article 6.1) – whether naturally occurring, intentionally created or unintentionally caused. The regulations came into force on 15 June 2007 and are a legally binding international instrument to “prevent, protect against, control and provide a public health response to the international spread of disease in ways that are commensurate with and restricted to public health risks and which avoid unnecessary interference with international trade and traffic” (World Health Organization, 2008b, [link]).

The manner in which the quarantine procedures and associated reporting machinery described in Sections 2.5–2.7 have worked out in practice over time is most easily understood by following a single country example – here, as in Chapter 2, the United States – and it is to this that we now turn.

Quarantine: The United States, 1878–2010

Early American Quarantine

For much of the early history of the North American colonies, the populations were too sparse to hold many infectious diseases in endemic form. Diseases such as smallpox and yellow fever would occasionally be introduced into southern ports by ships sailing from Latin America and the Caribbean, but the epidemics would rarely be sustained. The epidemiological isolation of the Colonies in these early years was bolstered by long sea journeys in small sailing ships, shipboard epidemics usually having run their course well before the Colonies were reached. But this situation changed in the nineteenth century as the epidemiological isolation of the United States was eroded by expanding international trade, immigration and the ever-increasing size and speed of ocean-going ships (Section 3.4). By the latter decades of the nineteenth century, major ports on the eastern seaboard were within epidemiological reach of many African, European and Latin American ports with which the United States had links. These, in turn, were potential sources of cholera, plague, yellow fever and many other infectious diseases.

The systematic development of quarantine procedures in the United States begins in the last quarter of the nineteenth century with the US Marine Hospital Service (USMHS), the forerunner of the US Public Health Service, which was established in July 1798 to provide healthcare and hospitals for ailing sailors (Williams, 1951; Furman, 1973; Bordley and Harvey, 1976; Greene, 1977; Bienia, et al., 1983). In 1871, as a remedial response to the decimation of the health system wrought by the Civil War, the post of Supervising Surgeon of the Marine Hospital Service was created. The first person to occupy this position was John M. Woodworth (Figure 3.3) who was to play a pivotal role in the subsequent development of international disease surveillance by the United States.

Figure 3.3 Dr John Maynard Woodworth (1837–1879). Supervising Surgeon and Supervising Surgeon General, US Marine Hospital Service, 1871–79.


Source: Brady-Handy Photograph Collection, Library of Congress.

The 1878 and 1893 Quarantine Acts

Woodworth contended that the most effective way to halt the spread of epidemic disease in the United States was to prevent it from entering the country. His first move was to revive the Quarantine Law of 1799 by ordering USMHS personnel to familiarise themselves with local quarantine regulations (Figure 3.4). Probably more significant, however, was his contribution to the report on the Cholera Epidemic of 1873 in the United States commissioned by Congress in 1874 (US Department of the Treasury, 1875). His 25-page preface, The Introduction of Epidemic Cholera Through the Agency of the Mercantile Marine: Suggestions of Measures of Prevention, argued that infectious diseases were permitted to break out in the United States because insufficient information was to be had of disease activity in foreign locations. To address the problem, Woodworth urged the President of the United States to instruct consular officials to inform the State Department of infectious diseases prevailing in their jurisdictions:

A circular letter from his Excellency the President, through the Department of State, instructing consular officers to place themselves in communication with the health authorities of their respective localities; to advise promptly, by cable if necessary, of the outbreak of cholera (or other epidemic disease) at the ports or in any section in communication therewith; to inspect all vessels clearing for United States ports with reference to the original and intermediate as well as the final port of departure of emigrants thereon; and to report, always by cable, the sailing and destination of any such vessel carrying infected or suspected passengers or goods – this would be the first step

(US Department of the Treasury, 1875, [link]).

Figure 3.4 US Public Health Records, 1798–1912. Critical dates in disease surveillance in the United States.


Source: Cliff, et al. (1998, Figure 2.6, [link]).

The resulting information, Woodworth proposed, should be collated and circulated to port health officers and other concerned parties. He concluded that:

International sanitary action is too remote, and the steps toward it have been too vacillating in the past to admit of much hope from it in the near future. But the acquisition and diffusion of general sanitary knowledge is a matter in which each nation for itself may engage…Let the General Government do its share in collection and publishing the information – a work which it alone can do…

(Woodworth, in US Department of the Treasury, 1875, [link]).

The 1878 Quarantine Act

Woodworth’s call for an international disease surveillance system materialised on 29 April 1878 with the passage of the National Quarantine Act: An Act To Prevent The Introduction of Contagious Or Infectious Diseases Into The United States (Figure 3.4). Not only did the Act grant the USMHS powers of detention over vessels originating from areas infected with epidemic disease. It also directed consular officers in foreign ports to forward weekly reports of sanitary conditions prevailing in their jurisdictions to the USMHS. This information was to be collated by the Supervising Surgeon General, and circulated in the form of a weekly abstract to USMHS officers and other interested parties; the first number was issued on 13 July 1878 under the title Bulletins of the Public Health.

The 1893 Quarantine Act

The National Quarantine Act of 1893 was to extend further the international powers of the USMHS. This Act provided that all ships headed from foreign ports to the United States must be issued with a bill of health signed by the US consul prior to departure. To assist in the process, officers of the USMHS could be detailed to foreign ports to serve in the office of the consul. Because of outbreaks of cholera in Europe, a number of medical officers were immediately assigned to consulates there (Furman, 1973).

The US Consular System and International Disease Surveillance

The idea of an international disease surveillance system which operated through consular officials was by no means new. Indeed, similar systems had been implemented in the city states of the Mediterranean as early as the fourteenth century (see Chapter 1). What was new, however, was the global scale of the operation mandated under the 1878 Quarantine Act. By the latter decades of the nineteenth century the US consular system had assumed a global pattern and included much of the Caribbean Basin and Latin America, northern, central and southern Europe, the St Lawrence Seaway and parts of Africa and Asia (Figure 3.5). Many consulates were located in the major urban centres of the most powerful trading nations, in cities such as London, Paris and Rome. But other consulates were situated in small settlements, rarely heard of then as now (Mattox, 1989).

Figure 3.5 Location of US consulates and commercial agencies in 1888. The United States operated 277 consulates and 39 commercial agencies worldwide (circles), with major concentrations in Central America and the Caribbean Basin, along the St Lawrence waterway and the Canadian Great Lakes, Northern and Central Europe and in the Mediterranean Basin. Small clusters are also to be seen in the Southern cone of South America and along the coast of China. Elsewhere, in Africa and Asia, consulates and commercial agencies were restricted to the ports of a few major cities. Offices submitting mortality reports in 1888, towards the beginning of systematic surveillance, are indicated by black circles.


Source: Cliff, et al. (1998, Figure 2.10, [link]), drawn from information in US Department of State (1889).

The 1878 Quarantine Act required all consuls in the jurisdictions plotted in Figure 3.5 to submit weekly reports of the sanitary conditions prevailing there. The requirement fell within the remit of a well-developed commercial intelligence system, as evidenced by consular reports in the form of Reports on the Commercial Relations of the United States (House of Representatives, 1856–1902) and Reports from the Consuls (Department of State, 1880–1901). The sanitary information was usually drawn from local disease surveillance reports, although other sources (including local gravediggers) were not unknown. Most of the sanitary reports reached the USMHS via the State Department in the form of consular dispatches; examples are reproduced in Figure 3.6. Sanitary dispatches rarely exceeded a few lines when favourable health conditions prevailed, but severe epidemics and poor sanitary conditions usually warranted much more information. Under these circumstances, dispatches frequently stretched to several handwritten pages and provided detailed qualitative and quantitative information pertinent to the health of the consular city. On other occasions, dispatches simply served to refute popular rumours, to report the medical research of local luminaries, or as a call for action on the part of the USMHS. Newspaper clippings, journal articles, commission reports and sundry other enclosures added further substance to the dispatches.

Figure 3.6 Examples of consular sanitary dispatches. Left, a telegram dated 28 December 1894 from Eugene Baker, Consul to Buenos Aires, Argentina, informing the State Department of the presence of cholera. Centre, a letter to confirm the telegram. Right, reproduction of the letter in the Weekly Abstract.


Source: Cliff, et al. (1998, Plate 2.2, [link]).

Rapid transmission of weekly consular dispatches depended critically on the telegraph (Figure 3.7, located in the colour plate section). The first submarine cable was established in 1850 between England and France, but it was 1866 (after several abortive schemes) before the first permanently successful trans-Atlantic cable was laid. Once complex cable-laying technology was established, new submarine links were laid apace. By 1890, the international cable network ran from the United States to Central and South America, and from Western Europe south to Africa and east to India, China, South Asia and Australia. By the First World War, most US consular offices could send telegrams to Washington D.C., though not without frequent delays and breaks in transmission. Telephone communication was to come later. Boston and New York were connected in 1884, but it was 1915 before the first transcontinental telephone line opened between New York and San Francisco.

Figure 3.7 Global reach of telegraphic communications at the turn of the twentieth century. Map of submarine telegraph cable routes of the Eastern Telegraph Co., 1901. Formed in 1872 as an amalgamation of several existing cable companies, the Eastern Telegraph Co. was to become the largest cable company in the world, operating 160,000 nautical miles of cable at its maximum.


The Weekly Abstract

The 1878 Quarantine Act required publication of the consular sanitary reports as a weekly abstract and this was begun as Bulletins of the Public Health on 13th July of that year (Figure 3.8, left). Issue No. 1 consisted of just 23 lines of text and detailed the sanitary conditions prevailing in Cuban ports, the occurrence of yellow fever in Florida and cholera on British troop ships in the Mediterranean. However, the Bulletins soon expanded to include reports of disease activity in major cities around the world; by December 1878, the Bulletins contained summaries of morbidity and mortality from infectious diseases in places as far-flung as Brazil, China, Singapore and Ireland.

Figure 3.8 Front covers of the Weekly Abstract under its various aliases. (Left) Volume I appeared in 1878 as the Bulletins of the Public Health. (Centre) After a gap of nine years, Volume II emerged in 1887 as the Weekly Abstract of Sanitary Reports. (Right) The Weekly Abstract was renamed Public Health Reports in 1896.


Source: Cliff, et al. (1998, Plate 2.3, [link]).

Publication of the Bulletins was suspended on 24 May 1879, after just 46 issues, when powers under the 1878 Quarantine Act were temporarily transferred from the USMHS to the newly created National Board of Health (see Figure 3.4). The National Board of Health continued to publish the consular reports in the weekly National Board of Health Bulletin. The quarantine powers of the National Board of Health were to lapse in 1883 and charge of the 1878 Quarantine Law again returned to the USMHS (Williams, 1951). But, it was to be a further four years before the Surgeon General of the USMHS, John B. Hamilton, was to regain the initiative; publication of the consular sanitary reports recommenced in January 1887 as the Weekly Abstract of Sanitary Reports (Figure 3.8, centre). With new legislation under the 1893 National Quarantine Act, the Weekly Abstracts were further extended and Volume XI, published in January 1896, appeared under the new title Public Health Reports (Figure 3.8, right). The Public Health Reports published tabular information on morbidity and mortality gathered through the international network of consuls and USMHS officers until 1912, although entries for cholera, yellow fever, plague and smallpox (the ‘quarantine’ diseases) continued in later years.

Although the Weekly Abstract and its successor publication, the Public Health Reports, were vehicles for the dissemination of international sanitary information, they also assumed the role of the domestic disease surveillance report of the United States. The early editions restricted domestic information to brief statements, largely for port cities and quarantine stations. But, in June 1888, the Weekly Abstract began to tabulate disease reports for major US cities. This initiative continues today in the form of the US Centers for Disease Control and Prevention’s Morbidity and Mortality Weekly Report.

Ellis Island

Federal legislation was now in place in the shape of the 1878 Quarantine Act, which was reinterpreted in the early 1890s to give the federal government greater authority to impose quarantine requirements following a series of cholera outbreaks on passenger ships arriving from Europe. Against this background, the practical face of US quarantine emerged in the form of one of the world’s great quarantine stations, Ellis Island (Figure 3.9). Located off New York City, Ellis Island operated for 62 years from 1892 to 1954. Over this period, more than 12 million immigrants – as many as 5,000 a day, with a record of nearly 13,000 – underwent immigration processing at Ellis Island. This total represents more than 66 percent of all immigrants who came to America, and it is estimated that today more than 100 million Americans can trace their roots to an ancestor who came through Ellis Island. As described in Coan (1997, p. xiii), the Federal Immigration and Naturalization Service (INS), which operated the station, enforced a number of Acts to exclude mentally disabled persons, paupers and those who might become public charges. The INS also excluded those suffering from “a loathsome or contagious disease”, or convicted of various crimes. Over the life of the station, 82,199 potential immigrants were rejected as mental or physical defectives. Screening for disease was carried out by Ellis Island doctors in a set of 15 medical buildings until 1932, when this task was transferred to American consulates in the originating countries. Thereafter, the role of Ellis Island declined until its final closure in 1954.

Figure 3.9 Ellis Island quarantine station. (Upper) Ellis Island Quarantine Station, New York City. The original Georgia-pine main building at Ellis Island, opened 1 January 1892. It was destroyed by fire in 1897 and then rebuilt. (Centre left) US Quarantine inspectors in Public Health Service uniforms c. 1912. (Centre right) Quarantine detention at the immigration station, Ellis Island c. 1930. Those suspected of having a communicable disease were segregated at once and, after confirmation of the diagnosis, admitted to the communicable disease hospital for care and treatment. (Lower) Number of immigrants passing through the Ellis Island station annually and the total number of arrivals in the USA, 1892–1954. Sources: (Upper) US National Park Service, Statue of Liberty National Monument website, reproduced in Cliff, et al. (2000, Plate 5.6, p. 200). (Centre left) Centers for Disease Control and Prevention website (http://www.cdc.gov/quarantine/HistoryQuarantine.html). (Centre right) National Library of Medicine, US Department of Health and Human Services, images of the history of the Public Health Service, p. 21 (http://www.nlm.nih.gov/exhibition/phs_history/images.dir/21.gif). (Lower) Cliff, et al. (2000, Figure 5.10, p. 199).

Twentieth-century Quarantine Organization

In 1893, Congress passed legislation that further clarified the federal role in quarantine activities. As local authorities came to realise the benefits of federal involvement, local quarantine stations were gradually turned over to the US government. Additional federal facilities were built and staff numbers were increased to provide better coverage. The quarantine system, administered within the Treasury Department, was fully nationalised by 1921, when the last local quarantine station was transferred to the US government. Quarantine and its parent organisation, the Public Health Service (PHS), became part of the Federal Security Agency in 1939. The 1944 Public Health Service Act clearly established the federal government’s quarantine authority for the first time, giving the PHS responsibility for preventing the introduction, transmission, and spread of communicable diseases from foreign countries into the United States (Figure 3.10). Another transfer occurred in 1953 when quarantine and the PHS joined the Department of Health, Education, and Welfare (HEW). Quarantine was then transferred in 1967 to the agency now known as the Centers for Disease Control and Prevention (CDC). CDC remained part of HEW until 1980, when the department was reorganised into the Department of Health and Human Services.

Figure 3.10 US quarantine inspection of ships, mid-twentieth century. (Left) The internationally-recognised yellow jack quarantine flag (the yellow squares of the flag are here shown in grey) flown by ships if they suspected they had quarantinable infection on board. (Right) US Public Health Service cutter used to transport quarantine inspectors to board ships flying the yellow quarantine flag. The flag was flown until quarantine and customs personnel inspected and cleared the ship to dock at the port.

Source: Centers for Disease Control and Prevention website (http://www.cdc.gov/quarantine/HistoryQuarantine.html).

When CDC assumed responsibility for quarantine, it was a large organisation with 55 quarantine stations and more than 500 staff members. Quarantine stations were located at every port, international airport and major border crossing. After evaluating the quarantine programme and its role in preventing disease transmission, CDC trimmed the programme in the 1970s and changed its focus from routine inspection to programme management and intervention. The new focus included an enhanced surveillance system to monitor the onset of epidemics abroad and a modernised inspection process to meet the changing needs of international traffic (cf. Sections 2.6–2.8 and 3.4). The cutbacks are unsurprising given the long twentieth-century decline in the communicable diseases which made quarantine and isolation necessary; see Smallman-Raynor and Cliff (2012) for examples of this decline in the UK.

By 1995, all US ports of entry were covered by only seven quarantine stations. But the emergence of new infections, and the re-emergence of old ones, in the last quarter of the twentieth century posed fresh disease threats which led to an expansion of the US quarantine station network. A station was added in Atlanta, Georgia, in 1996, just before the city hosted that year’s Summer Olympic Games. Following the severe acute respiratory syndrome (SARS) epidemic of 2003, CDC reorganised the quarantine station system, expanding it to 18 stations with more than 90 field employees. The reorganisation led to the creation within CDC of a Division of Global Migration and Quarantine as part of CDC’s National Center for Emerging and Zoonotic Infectious Diseases in Atlanta (Figure 3.11). The locations of the 20 currently operating quarantine stations are mapped in Figure 3.12; between them they cover all the sea and air ports of entry and land-border crossings where international travellers arrive in the US.

Figure 3.11 Centers for Disease Control and Prevention: Division of Global Migration and Quarantine. The organisational structure gives oversight over quarantinable diseases and travel and refugee entry into the US (http://www.cdc.gov/ncezid/pdf/ncezid-org-chart-july-2010.pdf; http://www.cdc.gov/ncezid/dgmq/index.html).

Figure 3.12 US quarantine stations. Location map of the 20 currently operating US quarantine stations with their geographical jurisdictions shaded.

Source: CDC website (http://www.cdc.gov/quarantine/QuarantineStationContactListFull.html; http://www.cdc.gov/quarantine/HistoryQuarantine.html).

Under its delegated authority, the Division of Global Migration and Quarantine is empowered to detain, medically examine, or conditionally release individuals and wildlife suspected of carrying a communicable disease. The current list of quarantinable diseases is contained in an Executive Order of the President and includes both old and newly-emerging infections: cholera; diphtheria; infectious tuberculosis; plague; smallpox; yellow fever; viral haemorrhagic fevers such as Marburg, Ebola, and Crimean–Congo; and SARS. Influenza was added to the list in 2005 because of its pandemic potential. As noted, and reflecting the impact of mass vaccination upon the spectrum of vaccine-controllable diseases, many other illnesses of public health significance, such as measles, mumps, rubella, and chickenpox, are not contained in the list of quarantinable illnesses, although they continue to pose a health risk to the public.

Quarantine Islands

While Ellis Island is the most famous quarantine station of all, island settings have been favoured across the globe and throughout history for quarantine stations against a spectrum of diseases including cholera, leprosy, smallpox, yellow fever and measles (Figure 3.13).

Figure 3.13 Island quarantine. (Upper) St John’s Island, Malaysia, near Singapore. Watercolour by J. Taylor, 1879. In 1874, St John’s Island became a quarantine station for cholera-stricken Chinese immigrants, and it became the world’s biggest quarantine station in the 1930s, screening both Asian immigrants and Malay pilgrims from Mecca returning to Singapore. From 1901, victims of beri-beri, smallpox and leprosy were also brought here. Like Ellis Island, its quarantine function was abandoned in the 1950s when mass immigration into Singapore ceased. Gilbert Brooke (Figure 2.25) had oversight of St John’s Island when he was Chief Health Officer for Singapore. (Lower) Culebra Island quarantine station, Panama, 1909. From the early years of the twentieth century, Culebra became the quarantine station to keep communicable diseases, especially malaria and yellow fever, from getting into the local population of the Panama isthmus – especially likely given its location on the Panama Canal and the fact that Culebra served the US military in various capacities, 1903–75.

Source: Wellcome Library, London.

Measles Invasions of Fiji (1879–1920)

One of the clearest examples of the yoking of changes in transport technology to the introduction of communicable diseases and the associated island quarantine response is provided by the history of the use of indentured labour on the Fiji sugar plantations, 1879–1920. The history of the first importation of measles into Fiji in January 1875 and the devastating impact on the native population over the ensuing six months is one of the classic cases of a ‘virgin soil’ outbreak and has been widely studied (McArthur, 1967; Cliff and Haggett, 1985). In the anxious years that followed, the islands provided what was essentially a test case in the use of quarantine to prevent further invasions of the measles virus.

Between 1879 and 1920, Indian immigrant ships made 87 voyages to Fiji carrying nearly 61,000 indentured emigrants. The main routes followed are mapped in Figure 3.14A. This illustrates an important distinction between voyages by sailing ships (used between 1879 and 1904) and steamships (used between 1884 and 1916). To take advantage of prevailing winds, sailing ships followed the route south of Australia and took about 70 days for the voyage. Steamships used the more direct Torres Strait north of Australia and halved the sailing ship times; they were also able to carry a larger number of immigrants. The health and welfare of the immigrants on board was the responsibility of the Surgeon-Superintendent who accompanied each ship and whose report was incorporated into the Annual Reports on Indian Immigration published regularly as Official Papers of Fiji’s Legislative Council. These papers show how the transition from sail to steam dramatically altered the ways in which infectious diseases were transmitted between India and Fiji.

Figure 3.14 Measles transfer from India to Fiji. (A) Routes from India to Fiji via sailing ships and steamships. (B) Vessels carrying indentured immigrants between India and Fiji, 1879–1916, categorised by length of voyage in days and in measles virus generations (14-day periods), type of vessel and measles status.

Source: Cliff, Smallman-Raynor, Haggett, et al. (2009, Figure 6.7, p. 312).

Since measles was an endemic disease in India, it is not surprising that cases were recorded on departure, although there were checks in the camps both at Calcutta and Madras (the two exit ports) before embarkation: the evidence in the Fijian annual reports shows a 1:3 probability of measles being detected on board on departure from India, and this proportion of infected voyages remained constant over the period. These are shown in Figure 3.14B in which each voyage is plotted in terms of the time taken and the passenger size of each vessel. For the smaller and slower sailing ships, around one-third of the vessels carrying labourers left India with infectives on board but the measles virus did not survive the journey. By the end of the voyage those infected had either recovered or died and the long chain of measles generations needed to maintain infection (up to six on slower voyages) was broken. But for the faster and larger steamships, Figure 3.14B shows the situation was different. Ships on one in three voyages still carried infectives on departure and, in 11 instances, the virus continued to thrive on arrival in Fiji. The larger susceptible population and shorter travel times (as few as two generations on the fastest voyages) ensured the virus persisted to pose a potential threat at the receiving end.
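The generation arithmetic underlying Figure 3.14B can be sketched in a few lines. The 14-day generation period is taken from the text; the voyage durations below are illustrative round numbers, not figures from the shipping records.

```python
# Number of 14-day measles generations a voyage spans, in the manner of
# Figure 3.14B. Voyage durations here are illustrative examples only.
import math

SERIAL_INTERVAL_DAYS = 14  # approximate measles generation time (from the text)

def generations(voyage_days: int) -> int:
    """Length of the chain of infection generations needed for measles
    to remain active over a voyage of the given duration."""
    return math.ceil(voyage_days / SERIAL_INTERVAL_DAYS)

print(generations(70))  # sailing ship, ~70 days -> 5 generations
print(generations(28))  # fast steamship -> 2 generations
```

The longer the chain, the more likely it is to break at sea through recovery or death of all infectives, which is why sailing ships rarely delivered the virus while fast steamships sometimes did.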

Intensive quarantine had been instituted on the smaller islands off Suva following the experience of the disastrous 1875 measles epidemic in Fiji, which resulted in the loss of some 40,000 lives. As a result, during the period of indentured labour, quarantining of Indian passengers on immigrant boats was routine up to 1916. The first quarantine station was established on Yanuca Lailai island, between Ovalau and Moturiki islands, and was used by the first immigrants from the Leonidas (Figure 3.15). With the shift of the Fijian capital to Suva, the quarantine station was moved to the island of Nukulau on the reef about 10 km east of Suva harbour. Immigrants were usually detained for a 14-day period before being delivered to the plantation areas.

Figure 3.15 Quarantine in Fiji, 1879–1916. Locations of the quarantine stations in Fiji during the period of indentured labour. The inset shows the Leonidas from a watercolour probably by Frederick Garling, c. 1870. The Leonidas was a labour transport ship (schooner) that played an important role in the history of Fiji. Captained by McLachlan, the ship departed from Calcutta on 3 March 1879 and arrived at Levuka, Fiji, on 14 May. The indentured labourers who disembarked were the first of over 61,000 to arrive from the Indian subcontinent over the next 37 years, forming the nucleus of the Fiji Indian community that now comprises 40 percent of Fiji’s population. A total of 498 passengers had embarked on the ship in Calcutta. While only three days out to sea there was an outbreak of cholera and smallpox on board. Despite efforts by the Surgeon Superintendent to isolate the infected passengers, 17 died before the ship arrived in Levuka, after a journey of 72 days. Since there was no quarantine facility in Levuka, it was decided to anchor the ship some distance from Levuka on the leeward side. To handle the crisis, Fiji’s first (and temporary) quarantine station was established at Yanuca Lailai. Armed guards were placed in the narrow passage between Levuka and Yanuca Lailai, to prevent contact with the new arrivals. Fifteen more of the new arrivals died on the island from dysentery, diarrhoea and typhoid leaving only 463 survivors to be released into the general population on 9 August 1879.

Source: Leonidas, State Library of New South Wales, image a353007.

Gillion (1962, [link]) notes:

All the ships, except the first, went to the port of Suva, where the Indians were transferred to barges and towed by steam launch or tug to the islet of Nukulau, which served as reception centre and quarantine station. There they were inspected by the Agent-General of Immigration and medically examined…

The ability to contain the spread of disease by quarantine diminished rapidly during the steamship era. Nevertheless, quarantine was maintained by the first two chief medical officers for the next 30 years, and was then progressively abandoned in the face of changing transport technology. The threat of speeding the introduction of measles into Fiji by using the faster and larger steamships was considered by the medical officers on the ships but, by the early twentieth century, they did not rate the risk as critical:

So far as can be judged as yet the introduction of immigrants by steamers has not had a prejudicial effect on their health, though it increases the chance of introducing diseases of a severe type into the colony and renders more likely the necessity of imposing quarantine

(Fiji Legislative Council, 1903, [link]).

The changing views of the value of quarantine were reflected by Fiji’s chief medical officer, A. Montague, in 1921. While noting the catastrophic epidemic of 1875, he observed that:

As a result, careful quarantine was unfortunately maintained against the disease [measles] and it was kept out until 1903…since then no special measures have been taken and several localised epidemics have occurred…but the death rate has been very low

(Montague, 1922, [link]).

He concluded from the impact of the 1903 epidemic, after 28 essentially virus-free years, that it would be unwise to attempt to exclude measles any longer, since this would produce an adult, non-immune, population.

3.3 Isolation

As described in Section 3.1, isolation as a means of communicable disease control is concerned with the separation, for the period of communicability, of infectives from others so as to prevent or limit the direct or indirect transmission of the infectious agent to susceptibles. In this section, we begin by discussing the general theory of isolation for this purpose, before examining case studies to illustrate the practice.

Isolation: Theory

Isolation in the strict medical sense has been the front-line response to the spread of infection since Old Testament times, when there was little or no knowledge of the aetiology of different communicable diseases and no medication to control infection. It was believed, first with leprosy and then with plague, that it might be possible to avoid certain diseases by ensuring that no contact occurred between diseased and healthy persons. The practice of marking out huts or villages in which severe infectious diseases such as plague or smallpox were present, as an indication that they were to be avoided, appears to have arisen independently among several different peoples in Africa, Asia and Europe (Figure 3.16). The isolation areas ranged in geographical size from camps down to individual houses. All shared the same general idea of isolating patients externally from susceptibles living outside the isolation unit and internally, within the unit, from each other. It was difficult to achieve the efficient isolation of cases where diseases were endemic, but relatively easy when they were present on ships that approached disease-free ports. Thus the isolation of ships and their contents developed earlier and more successfully (cf. Venice in Section 1.2) than did effective isolation of infected patients on land.

Figure 3.16 Disease control by isolation. (Upper) Isolation and quarantine area during a plague outbreak, Karachi, Pakistan, 1897. (Middle) Cerebrospinal meningitis camp outside a village near Zaris, Northern Nigeria, c. 1960. The graves of the dead comprise the drumlin-like ground in front of the camp. (Lower) Infected house in isolation/quarantine, India, 1906. Note the separate isolation units in the buildings in the upper and centre photographs.

Source: Wellcome Library, London.

The scientific underpinning of the concept of isolation had to await the enunciation of the germ theory of infectious diseases by Pasteur and Koch in the later nineteenth century but, long before this, a belief had developed that such diseases were spread by contagion. The best known early European exponent of this view, for smallpox and measles, was Girolamo Fracastoro of Verona (1478–1553). In a classic book, Fracastoro attributed these diseases to specific seeds, or seminaria, which were spread by direct contact from person to person by intermediate objects (fomites), or perhaps at a distance through the air (Fracastoro, 1546).

Historically, many infectious diseases were contained by isolation including, in addition to leprosy and plague, smallpox, tuberculosis, typhus, and typhoid (enteric fever) as well as general fevers like diphtheria and scarlet fever. In England and Wales, the Local Government Board (1882, 1912) discussed the utility of isolation in the control of infectious diseases and concluded (1912, [link][link]):

Every populous district should be provided with hospital accommodation for the reception of cases of infectious disease, at least for such as are without proper lodging and accommodation or which occur under circumstances involving special danger to the public health. The proportion of cases which it may be desirable to isolate in hospital will vary to some extent with local circumstances….

The diseases most commonly received into isolation hospitals are scarlet fever, diphtheria and enteric fever. It is undesirable that isolation hospitals should be reserved solely for scarlet fever, to the exclusion of diphtheria and enteric fever which are more formidable diseases. When not in use for the acute infectious diseases isolation hospitals may be used for the treatment of cases of pulmonary phthisis [tuberculosis].

As for the design of isolation hospitals, internal and external separation of patients was, if affordable, the order of the day (Figure 3.17).

Figure 3.17 Southport’s New Hall Isolation Hospital, 1927. Plan and elevation drawings for the scarlet fever and diphtheria isolation wings of the new hospital. The male ward is to the left of the central entrance and the female ward to the right. Note the principle of internal separation of patients achieved by the use of individual rooms and cubicles. There was an additional wing for the treatment of tuberculosis sufferers.

Source: Wellcome Library, London.

Today, isolation is rarely recommended, but it is still practised where the susceptibility of an immunosuppressed individual places them at high risk of infection – for example, in paediatrics – and with some extremely infectious diseases for which there is no cure – for example, haemorrhagic fevers like Ebola (Figure 3.18).

Figure 3.18 Spatial isolation at the patient level. (Upper) Isolation nursing to ensure infection control in paediatrics. (Lower) Warning notice outside Gulu hospital, Uganda, during the August 2000 outbreak of Ebola haemorrhagic fever.

Sources: (Upper) Wellcome Library, London; (lower) World Health Organization (2000b, [link]).

Isolation: Practice

Leprosy

The history of leprosy is well described by Carmichael (1993, 1997). Its origins are unknown, but lepers have been cast out into isolation from biblical times. In the Book of Leviticus, a disease called zara’ath was identified by the religious authorities. Those who suffered from it were cast ‘outside the camp’ and considered unclean. They were not exiled altogether from the community as were criminals but rather made to live apart as if the living dead. They were regarded as morally as well as physically tainted although not individually responsible for their disease. The opprobrium attached to leprosy affected attitudes in Western Europe for the next 2,000 years. During the high point of the Middle Ages (AD 1100–1300), lepers were identified by priests and ritually separated from the general community. Last rites might be said, sometimes as the lepers stood symbolically in an open grave. Once identified, the leper’s ability to leave his or her city or village was severely limited. For example, Italian cities posted guards at the gates to identify lepers and to deny them entrance except under carefully controlled circumstances. Fears of contagion by lepers were much exaggerated as the disease is not particularly infectious. Local laws insisted that lepers had to be identifiable at a distance, leading to the creation of legendary symbols of the leper: a clapper or bell to warn those who might pass by too closely (Figure 3.1). Another symbol was the long pole used to retrieve their alms cup or to point to items being purchased.

Lepers were also stigmatised outside Europe. In East Asia and in the Indian subcontinent, some legal rights were denied them. For example, marriage to a leper or raising offspring with a leper was prohibited. Both western and eastern art depicted lepers as repulsive and sore-covered. An exception to stigmatisation was Islamic society, in which lepers were not exiled. Leprosaria, or isolation hospitals, to house lepers (usually with limited medical facilities) were constructed at church or communal expense. Outside the leprosaria, lepers had to depend upon begging or alms. Figure 3.19 shows some of the isolation facilities provided.

Figure 3.19 Isolation of lepers. (Upper left) A medieval leper’s retreat in cast iron, fifteenth century. (Upper right) The medieval leprosarium at Bury St Edmunds, Suffolk, England. (Lower) Leprosy patients awaiting treatment cards, Bumba, Congo, 1955; photograph probably by Stanley Browne.

Source: Wellcome Library, London.

Plague

Isolation hospitals, or lazarettos, for plague victims were widespread across Europe during the plague centuries. The examples from Venice, Genoa and Rome (Figures 1.12 and 1.13) illustrate the point, while John Howard’s 1791 book (Figure 3.20) contains plans, prospects and commentary on the principal large hospitals. The operation of plague houses at the local scale has attracted a specialised literature (e.g. Henderson, 1994, on Florence).

Figure 3.20 The Lazarettos of Europe. Title page of prison reformer John Howard’s book on the condition and use of the principal plague lazarettos of Europe.

Plague in Eyam, 1665–66

In the British Isles, the most famous geographical example of the use of isolation against plague is provided by Eyam, Derbyshire, in 1665–66. The story of the way in which the village decided to isolate itself from contact with the surrounding world in an attempt to prevent the spread of bubonic plague from the parish to the north of England, which was largely plague-free at the time, has been told and retold (Wood, 1865). It continues to fascinate and to attract the attention of demographers (Race, 1995), medical scientists (Massad, et al., 2004) and mathematical modellers (Raggett, 1982a, b; Brauer, et al., 2008, [link][link]) alike.

Sadly, Eyam’s self-imposed isolation intensified its own plague experience. At the time, Eyam parish consisted of three townships (Foolow, Woodlands and the larger village of Eyam). The parish population in 1664 has been estimated at c. 1,200–1,300 (Clifford and Clifford, 1993, p. v), and that of Eyam village at c. 350 persons. Eyam suffered by far the most serious outbreak of plague anywhere in the provinces during this plague visitation to the British Isles. By the time the plague expired in Eyam in November 1666, the village population had been reduced from c. 350 to c. 83, a decline of around 75 percent (Figure 3.21).

Figure 3.21 Population of Eyam, Derbyshire, England, 1631–1700. Annual baptisms (A), marriages (B) and burials (C), and associated five-year moving averages. The impact of the plague 1665–66 in reducing marriages (and therefore baptisms) is evident. Mortality rocketed.

Source: based on Race (1995, Figures 1 and 2, [link]).

Race (1995, [link]) has assessed the severity of the outbreak in Eyam by calculating crisis mortality ratios (CMRs). The CMR can be defined in a variety of ways; see, for example, Wrigley and Schofield (1981, pp. 646–49). But all definitions involve establishing the ‘normal’ or expected level of mortality in ordinary years, so that the ratio of actual to normal mortality is unity when nothing exceptional is happening to mortality; values above one indicate higher than expected mortality, and values below one the converse. Race took ‘normal’ mortality as the average annual number of burials in the parish over the previous decade. The CMR in Eyam for the 12 highest epidemic months, 1665–66, was 10.2 compared with, for example, 5.9 for 47 London parishes in the Great Plague of the same period, and a value of 3.0 often taken by demographers to denote an exceptional crisis (Slack, 1985, p. 346, note 25).
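The calculation can be sketched as follows. The burial counts below are invented for illustration only and are not taken from the Eyam register; only the baseline definition (the average annual burials over the preceding decade) follows Race (1995).

```python
def crisis_mortality_ratio(crisis_burials, baseline_burials):
    """Ratio of burials observed in a crisis period to the 'normal'
    (expected) level -- here, as in Race (1995), the average annual
    number of burials over the preceding decade."""
    normal = sum(baseline_burials) / len(baseline_burials)
    return crisis_burials / normal

# Invented figures: a parish averaging 25 burials a year that records
# 255 burials in its worst 12 epidemic months has a CMR of 10.2 -- the
# value Race obtained for Eyam, against 5.9 for 47 London parishes and
# the threshold of 3.0 often taken to mark an exceptional crisis.
cmr = crisis_mortality_ratio(255, [25] * 10)
```

A value of unity means mortality ran at exactly its expected level; the Eyam figure of 10.2 thus represents burials at more than ten times the parish norm.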

It is generally accepted that the epidemic began in its bubonic form when the village tailor’s assistant, George Vic[c]ars, who was lodging with the tailor, opened a box of damp cloth that had arrived from London at the beginning of September 1665 and spread the cloth in front of the cottage fire to dry. The box probably also contained plague-carrying fleas, which attacked Viccars as he unpacked the cloth. He rapidly sickened and died on 6 September 1665. Seeing that the epidemic was still limited to the parish, the rector, William Mompesson, and a previous incumbent, Thomas Stanley, encouraged the villagers to agree to the unprecedented move of establishing a cordon sanitaire around the village to prevent catastrophic spread of the disease from Eyam to other localities. In doing so they made the apparent sacrifice of resigning themselves to death to save others. While the actual motives and reasoning of the inhabitants remain unclear, it now seems, with the benefit of modern medical science, that their actions contributed to an epidemic of almost unrivalled severity for the village of Eyam (Figure 3.22). Massad, et al. (2004) have argued that, in the first 275 days of the outbreak, transmission was predominantly from infected fleas to susceptible humans. But from then on, mortality increased so sharply as to suggest a change in the transmission pattern caused by spatial confinement, which facilitated the spread of the infection by direct transmission among humans rather than via the intermediate vector of rat fleas. This is also consistent with a switch from bubonic to pneumonic plague, a deadlier form of the disease.
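The qualitative effect of such a regime change can be illustrated with a toy compartment model. Everything below — the rates, the one-day Euler step, the switch day, and the treatment of all removals as deaths — is an invented assumption for illustration; it is not the model fitted by Massad, et al. (2004).

```python
def run_epidemic(pop=350, days=420, switch_day=275,
                 flea_rate=0.0005, beta=0.35, gamma=0.1):
    """Toy S-I-R sketch with two transmission regimes.

    Before `switch_day`, susceptibles are infected at a fixed per-day
    rate (a vector-driven trickle, standing in for flea bites).  After
    it, infection is mass-action human-to-human transmission, as
    suggested for the spatially confined village.  All parameter
    values are invented for illustration."""
    S, I, R = pop - 1.0, 1.0, 0.0
    cumulative_removed = []
    for day in range(days):
        if day < switch_day:
            new_inf = min(flea_rate * S, S)        # vector-driven trickle
        else:
            new_inf = min(beta * S * I / pop, S)   # direct transmission
        new_rem = gamma * I
        S -= new_inf
        I += new_inf - new_rem
        R += new_rem
        cumulative_removed.append(R)  # crudely treating removals as deaths
    return cumulative_removed

curve = run_epidemic()
```

Even with a crude sketch like this, removals in the month after the switch dwarf those in the month before it, reproducing the sharp upturn in mortality that motivated the inference of a changed transmission pattern.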

Figure 3.22 Geographical spread of plague in Eyam, 1665–66. The Eyam plague outbreak affected 76 out of 84 households in the village. On the map, the approximate settled area of the village is shown in grey with the principal roads marked. The plate shows the first entries in the Eyam plague register. The first death was of George Viccars on 6 September 1665 and yielded an early cluster of deaths in September in spatially contiguous households (square symbols). For those households whose geographical location is known, white and black dots are used to code the year of subsequent deaths. In 1665, the main focus of the epidemic was centred around the initial core of infection. When the epidemic resurged in the late spring of 1666, spread had reached the perimeter of the village and outlying cottages.

Sources: drawn from parish data in Clifford and Clifford (1993, [link][link]) and publications of the Eyam Village Society, Eyam Museum.

Smallpox

One of the most momentous events in public health in the twentieth century was the global eradication of smallpox, declared in 1979 (Fenner, et al., 1988). The last case in Britain occurred in 1978. All investigations showed that the practice of vaccination and revaccination, properly conducted, was a brilliant success, and that Jenner’s prediction in the early nineteenth century that smallpox could be eradicated by vaccination was correct (Section 4.2). However, before global eradication was achieved, practices additional to mass vaccination, which in an elementary form had antedated the concepts of variolation and vaccination, were invoked – namely, the isolation and containment of smallpox patients and their contacts. So it was that, when yet another smallpox epidemic badly affected London in 1881, the Metropolitan Asylums Board provided specially appointed hospital ships, moored at Long Reach in the Thames Estuary, for the treatment and care of smallpox patients (Figure 3.23). The smallpox hospital ships, in turn, were an alleged source for the dissemination of the disease on both sides of the Thames in the epidemic of 1901–2, after which the use of the hulks was abandoned. On land, in the epidemic of 1881, the borough of St Pancras erected a temporary smallpox isolation hospital housed in a tented camp at Finchley which supplemented the permanent smallpox isolation hospital (Figure 3.24).

Figure 3.23 Smallpox hospital ships of the Metropolitan Asylums Board, London, 1881. Three vessels moored at Long Reach near Dartford, Thames Estuary, were used as hospitals for London’s smallpox patients in the period 1881–1902. The wood engraving from the Illustrated London News (volume 79, 1881, p. 72) shows, from the left, Atlas (male patients) and Endymion (administration, stores and staff quarters). The hulk, Castalia (female patients), was moored further along the shore.

Source: Wellcome Library, London.

Figure 3.24 Smallpox hospitals, St Pancras. (Upper) The St Pancras Smallpox Hospital on the King’s Cross site, c. 1800. During the large smallpox epidemic of 1881, the hospital overflowed with victims and was supplemented (lower) by a temporary tented hospital at Finchley. The original hospital was founded on Windmill Street, Tottenham Court Road, in 1746, and was later moved to the parish of St Pancras on the site of the present King’s Cross Station. The institution was rebuilt in c. 1793–94 when it received patients from the Cold Bath Fields Hospital in Clerkenwell, a foundation originating in Islington in 1740. Subsequent moves took the hospital to Highgate Hill in c. 1846, and Clare Hall, South Mimms c. 1895–99. It was acquired by the Middlesex Districts Joint Small Pox Hospital Board c. 1900–10. In May 1911, the Local Government Board made an order permitting the admission of patients with pulmonary tuberculosis. It was not unusual for the tuberculosis and smallpox isolation functions to be combined as indigenous smallpox waned and tuberculosis waxed in Great Britain. In 1949, its isolation role was supplemented when non-tuberculosis patients were admitted for treatment. The hospital was closed in 1975.

Source: (Upper) Wellcome Library, London (oil painting, artist unknown); (lower) montage from the weekly newspaper, The Graphic, 1881, from a series of watercolour images of aspects of the camp hospital at Finchley by F. Collins.

As described in Fenner, et al. (1988, pp. 274–6), the idea of isolation to control the spread of smallpox received a considerable stimulus with the popularisation of variolation, since it was soon recognised that one of the risks of this practice was the spread of smallpox to non-inoculated contacts. So it was that, by the end of the eighteenth century, some writers had already conceived the idea of controlling smallpox by a combination of variolation on a wide scale and the isolation of smallpox patients. By the middle of the nineteenth century, this approach had taken hold. An article by Sir James Simpson, famous for his introduction of chloroform for anaesthesia, aroused considerable interest and discussion (Simpson, 1868). In it he developed a proposal for eradicating smallpox and other infectious diseases, such as scarlet fever, measles and whooping-cough, by the isolation of cases. He recognised that his proposals could be most readily achieved with smallpox, because vaccination provided a means of protection for nurses and others who had to remain in contact with patients. His proposed Regulations were as follows (Fenner, et al., 1988, p. 275):

1st. The earliest possible notification of the disease after it has broken out upon any individual or individuals.

2nd. The seclusion, at home or in hospital, of those affected, during the whole progress of the disease, as well as during the convalescence from it, or until all power of infecting others is past.

3rd. The surrounding of the sick with nurses and attendants who are themselves non-conductors or incapable of being affected, inasmuch as they are known to be protected against the disease by having already passed through cow-pox or small-pox.

4th. The due purification, during and after the disease, by water, chlorine, carbolic acid, sulphurous acid, etc., of the rooms, beds, clothes, etc., used by the sick and their attendants, and the disinfection of their own persons.

The most vigorous advocacy of isolation as a method of controlling smallpox was developed in Leicester, England, largely as a result of the local anti-vaccination movement (Fraser, 1980). The system (the so-called Leicester Method) developed during the 1870s and achieved notoriety in the 1890s. It depended critically on high-grade surveillance to recognise, report and isolate cases in the town’s Fever and Smallpox Hospital (Figure 3.25). All immediate contacts were quarantined and compensated for loss of time from work. Vaccination was not mentioned – a reflection of the strong local disapproval of compulsory vaccination. Subsequently, the vaccination or revaccination of contacts was added to the routine procedure (Millard, 1914). In Fenner’s view, the Leicester Method plus vaccination anticipated the surveillance and containment strategy of the World Health Organization’s Intensified Smallpox Eradication Programme.

Figure 3.25 Leicester smallpox isolation hospital, 1901. The two-storey isolation block and a ward at Leicester Isolation Hospital at Gilroes. The new hospital was opened in 1900–1 and separated smallpox and tuberculosis isolation from fever isolation (scarlet fever, enteric fever, diphtheria and so on). Historically, all had been treated on a single site in a small combined fever and smallpox hospital built in 1871 on Freaks Ground in northwest Leicester. The buildings there were of corrugated iron and covered 2 acres. Despite enlargement in 1893, it proved inadequate to meet the demands placed upon it by the application of the Leicester Method to control infectious diseases, and this led to the building of the new hospital (McKinley, 1958, pp. 447–56). The old hospital then treated fevers only.

Source: English Heritage Archives.

The Establishment of Smallpox Isolation Hospitals in Great Britain

The success of the Leicester Method led to its widespread adoption elsewhere in Great Britain during the first half of the twentieth century as the notion took hold that a special infectious diseases or smallpox hospital or ward should be an integral part of the control of smallpox. See Dixon (1962), who devotes a chapter of his book to the history of smallpox hospitals in Great Britain. Prior to 1900, hospitals were sometimes established in response to epidemics, often of smallpox, as in Quebec in 1639, and on frequent occasions in towns in Great Britain. But, in general, smallpox patients were not admitted to hospitals. In England, one of the earliest smallpox isolation hospitals was the London Small-Pox and Inoculation Hospital, founded in 1746, initially for the treatment of poor persons with smallpox but soon afterwards mainly as a hospital for subjects undergoing variolation. An Inoculation Institute was established in Brno (Moravia) at about the same time. Subsequently, small private ‘inoculation hospitals’ were set up in most places in which variolation was practised extensively, to prevent the spread of smallpox to susceptible contacts.

As noted, the use of infectious disease hospitals as part of the machinery for controlling smallpox required an efficient system of notification, which was easier for smallpox than for most other diseases. Notification formed the core of the Leicester Method, and Fenner, et al. (1988, p. 276) regard it as a most important factor in limiting the spread of smallpox after importations into Europe and North America during the twentieth century. However, national notification of cases of infectious diseases was not introduced into Great Britain, for example, until 1899, and even an imperfect system of notification required a public health service far more effective than anything that existed during the nineteenth century in Great Britain or, for that matter, elsewhere. And so, even in the industrial countries of Europe, smallpox elimination was not achieved until well into the twentieth century, when signs like that shown in Figure 3.26 passed into history, along with the isolation role of the hospitals to which they pointed.

Figure 3.26 Smallpox isolation. A smallpox hospital sign in Yorkshire, England, associated with a 1953 outbreak.

Source: Fenner, et al. (1988, Plate 23.2, p. 1078).

Tuberculosis

Mortality from tuberculosis has been historically associated with population growth in cities. It was during the eighteenth century that the world’s big epidemics of tuberculosis began. They were especially intense in those countries (England, the United States, Italy and France) that experienced the greatest urbanisation and industrialisation (Johnston, 1993, p. 1059). Tuberculosis was so rampant that autopsies showed that close to 100 percent of some urban populations, such as those of London and Paris, had at some point in their lives developed the disease, although they had died from some other cause. By the early nineteenth century, rates of mortality from tuberculosis in most major American cities ranged from 400 to 500 per 100,000 population.

Early treatment of tuberculosis involved a wide range of quackeries. But, by the mid-nineteenth century and in the absence of antibiotics, therapies with fresh air and sunshine became increasingly popular in specialist isolation hospitals, so that, in the 1880s, luxury sanatoria for the wealthy began to proliferate on both sides of the Atlantic and in Japan. One of the leading countries providing these treatments was Switzerland, with its ‘healthy’ mountain air, and four settlements specialised in this work – Davos, Leysin, Arosa and Montana. The village of Leysin had already become internationally known in 1798 when Thomas Malthus included six pages about it in his classic book, An Essay on the Principle of Population. Malthus quoted work by Muret the Elder (1764), who reported that the average life expectancy of inhabitants of Leysin was 61 years, as compared to 41 in Vaud (Switzerland) and 30.5 in London. The long life expectancy in Leysin was believed to be the result of both its sunny high-altitude climate and the low incidence of infectious diseases.

Figures 3.27 and 3.28 illustrate sanatoria in Leysin. The first winter patient arrived in Leysin in 1873 and, in 1878, the first pension for foreigners was opened. In 1890, the Climatic Society of Leysin was founded, and its promotion of the climate of the village led to several early clinics being built. The first and grandest was the Grand Hotel for 120 patients, opened in 1892, emphasising the importance of international movements of monied patients. The development of Leysin as a centre specialising in the treatment of non-pulmonary tuberculosis awaited the arrival in the village of Dr August Rollier (1874–1954), ‘The Sun Doctor’, in 1903. Rollier’s sun-treatment therapy involved a controlled regime of exposure of different parts of the body to sun for varying lengths of time. Tubercular patients flocked from all over the world to Leysin to be treated. Rollier constructed 37 clinics with 1,150 beds. The design of the sanatoria broadly followed Figure 3.27. The clinics had wide doors and balconies so that bedridden patients could be wheeled into the sun. Rollier’s theories are outlined in de Kruif’s book, Men Against Death, while Mann (1932) in The Magic Mountain and Ellis (1958) in The Rack describe life inside one of the sanatoria.

Figure 3.27 Sanatorium design in Leysin (1,263 m above sea level). (Upper left) Sanatoria were designed to make the best use of sunshine exposure for patients. The general design looked to exploit favourable geographical positions on south and southwest facing slopes of valleys, altitude, clear air and maximal insolation to treat tubercular patients. (Lower left) Wide balconies allowed patients to be wheeled out in their beds into the sun. (Upper right) The Sanatorium Davos-Platz shows the realisation of this design for a specific sanatorium in Davos. (Lower right) This view of Leysin in the 1930s shows that this basic design was repeated across the village producing a very characteristic townscape. Leysin’s sanatoria varied greatly in size from small pavilions (the foreground chalets) to major structures (on the hillside). The viaduct carries a rack and pinion railway opened from Aigle in the valley below in 1900 to facilitate access by patients.

Sources: (upper left) Commission Centrale Suisse pour la Lutte Antituberculeuse (Schweizerische Zentralkommission zur Bekämpfung der Tuberkulose), 1917, p. 323; (remainder) Photo Nicca, Leysin.

Figure 3.28 Sanatoria in Leysin. The map shows the original core of the village (grey) in 1890. Subsequent periods of development linked to Rollier’s work providing tuberculosis treatments are shown by white and black boxes. The largest sanatoria on the highest south facing slopes north of the village were built first (white boxes, 1890–1915), with later sanatoria (black boxes) generally backfilling downslope to the original village core. The graphs show the number and demographic composition of sanatoria in Leysin, 1891–2001. The rapid growth after Rollier’s arrival in 1903 is evident. The largest sanatoria were built in the period 1895–1910. At its height for tuberculosis care (1930), Leysin had 5,698 inhabitants. Of these, 3,000 were tuberculosis patients. Antibiotic treatment regimes for tuberculosis were developed after the Second World War, leading to closure of many of the sanatoria in the early 1950s (preceded by the Grand Hotel in 1942). By 1969, there were only eight clinics and convalescent homes left with fewer than 500 beds.

Source: redrawn from a sketch map deposited in the Leysin Public Archives with the authorization of the Rollier family and data in Andrew (2002, pp. 252–[link]) also found in Cliff, et al. (2004, Figures 4.7 and 4.8, [link]).

By about 1900, state-sponsored sanatoria also began to be created in many parts of the world, and their use continued for the next half-century. Figure 3.29 uses proportional circles to map the 1912 geographical distribution of isolation hospitals for tuberculosis by county in England and Wales. This function was often added to pre-existing isolation hospitals for smallpox. By mid-century, practically every borough had an isolation hospital for fevers, while hospitals for smallpox and tuberculosis were fewer and served larger catchment areas than the general fever hospitals. All this changed with the advent of antibiotics, antivirals and a National Health Service to deliver mass vaccination programmes (see Chapter 4 and Smallman-Raynor and Cliff, 2012). Generalised population immunity meant that cases of, and mortality from, these infections ceased to be a significant public health problem. The isolation hospitals became redundant and were either closed or converted.

Figure 3.29 Tuberculosis isolation hospitals in England and Wales, 1912. The county level geographical distribution of TB hospitals is shown by proportional circles.

The holistic treatment of tuberculosis made its appearance in Great Britain as in Switzerland. The geographically isolated Papworth Village Settlement (Figure 3.30) near Cambridge was established by Dr Pendrill Varrier-Jones in 1918 as just such an experiment; see Varrier-Jones (1935), Trail (1961) and Bryder (1984). Initially based around Papworth Hall (P in Figure 3.30), the Settlement expanded in the Hall’s grounds to include some 270 houses and flats. There were some 800 residents (settlers and their families), along with a further 175 colonists in hostels, and with around 500 people employed on-site in printing and bookbinding, woodworking, leather and metalworking industries by the early 1960s (Trail, 1961).

Figure 3.30 Papworth Village Settlement, near Cambridge. Aerial view of the Settlement. The Settlement was a tuberculosis treatment centre, established by Dr Pendrill Varrier-Jones in 1918 as an experiment in the holistic treatment of the disease. It consisted of a hospital and sanatorium (O and S) for the treatment of patients centred around the original Papworth Hall (P), along with a ‘settlement’ where former patients and their families (‘colonists’) could reside and work in a rural environment (A–N). Papworth became the model for other institutions involved in the treatment of tuberculosis in the 1920s and 1930s, including Preston Hall (Kent) and, on a smaller scale, Barrowmore Hall (East Lancashire), Wrenbury Hall (Cheshire) and Sherwood Forest Settlement (Nottinghamshire). The hospital was inherited by the newly formed National Health Service in 1948, and subsequently developed as a pioneering site for cardiology and cardiac surgery.

Source: Department of Geography, University of Cambridge.

Typhus

Louse-borne typhus used to be a seemingly inevitable companion of war and other forms of social disruption. Louse-borne (epidemic) typhus fever appears to have first manifested as a pestilence of European wars in the latter part of the fifteenth century, spreading widely in the Spanish Army during the War of Granada (1482–92). From then on, observes Prinzing (1916), the disease became the “Nemesis of belligerent armies” (p. 330), appearing in “almost every war that was waged between the beginning of the sixteenth century and the middle of the nineteenth century” and acquiring the appellation war-plague (p. 328). The notoriety continued into the twentieth century, with major epidemics of typhus fever spreading across Eastern Europe as a consequence of the First World War and its aftermath (Zinsser, 1935; Smallman-Raynor and Cliff, 2004, pp. 657–64), so that controlling the spread of typhus was an early focus of the health-related work of the League of Nations (Figure 3.31). Until vaccination became available, isolation was the first line of defence. In the Second World War, although there were some important outbreaks, typhus never got out of hand. Outside war zones, typhus struck particularly at major port cities – hence its other popular name of shipboard fever.

Figure 3.31 Typhus isolation in the First World War. (Upper left) The 1919 Russian lithograph by O. Grin illustrates the typhus louse shaking hands with Death. (Upper right) This lithograph by V.S. and Russian S.F.S.R. dates from 1921 and shows men washing themselves in a public or factory bathroom to prevent typhus while their clothes are cleaned in an industrial cleaner. (Lower) Typhus victims being kept in isolation during the First World War in Estonia. In Eastern Europe, Serbia was badly hit by major typhus epidemics in 1914 as were Poland and Russia in 1918.

Sources: (Upper) Wellcome Library London, (lower) © Bettman/Corbis.

Typhoid

Typhoid and the carrier state

At the turn of the twentieth century, little was known or understood of the carrier state for typhoid, whereby an individual can be infected with typhoid bacilli, be asymptomatic themselves, and yet pass on the infection to others. The type example is “Typhoid Mary” (Mary Mallon, 1869–1938). Mary emigrated from Ireland to the United States in 1884 (Leavitt, 1996). Her trail of devastation began in 1900 when she commenced work as a cook in the New York City area. In 1900, she had been employed in a house in Mamaroneck, New York, for less than two weeks when the residents developed typhoid. She moved to Manhattan in 1901, and members of the family for whom she cooked developed fevers and diarrhoea, while the laundress died. She then went to work for a lawyer until seven of the eight household members developed typhoid. In 1906, she took a position in Oyster Bay, Long Island. Within two weeks, ten of eleven family members were hospitalised with typhoid. She changed employment again, and similar occurrences happened in three more households.

Mary was first identified as the source of these outbreaks in 1907 by a typhoid researcher, Dr George Soper (Soper, 1907, 1919, 1939). She was subsequently arrested and held in isolation after the New York City health inspector determined her to be a carrier. Under sections 1169 and 1170 of the Greater New York Charter, Mary was held in isolation for three years at a clinic located on North Brother Island (Figure 3.32). Mary came back into circulation when the then New York State Commissioner of Health decided that disease carriers would no longer be held in isolation. Her release in 1910 was conditional on her agreement to cease work as a cook and to take reasonable steps to prevent transmitting typhoid to others. After her release, Mary returned to the mainland and took a job as a laundress. The low wages paid to laundresses compared with cooks led Mary to disappear again, changing her name to Mary Brown to disguise her identity. She returned to her former occupation as a cook and, in 1915, was believed to have infected 25 people, one of whom died, while working as a cook at New York’s Sloane Hospital for Women. Public-health authorities again found and arrested her. She was returned to North Brother Island isolation hospital for the remainder of her life. When she died, a post-mortem found evidence of live typhoid bacteria in her gallbladder, although throughout her life she was in denial about her carrier state and regarded herself as persecuted by society.

Figure 3.32 Mary Mallon (“Typhoid Mary”) and the typhoid carrier state. When Mary Mallon was identified as a typhoid carrier, a lurid press developed around her. An article from the New York American, 20 June, 1909, portrays Mary breaking skulls, not eggs, into a frying pan. The inset images show a card in Mary’s medical records detailing test results for typhoid, and the isolation cottage on North Brother Island where she was committed, 1907–10.

Sources: Mary Mallon: Mary Evans Picture Library; medical card: New York County Clerk Archives; Brother Island cottage: World Health Organization Archives.

Typhoid and the milk supply

As with Mary Mallon, the danger of an unknown typhoid carrier in the community is that they are likely to cause serial outbreaks of the disease prior to their identification. Such a situation occurred in Folkestone Urban District (1901 population 30,379), southern England, between 1896 and 1909 (Johnstone, 1910); see Figure 3.33.

Figure 3.33 Weekly series of enteric fever cases in Folkestone Urban District, 1896–1909. Cases are classified according to whether the patients had consumed milk from one of four milk farms that employed the typhoid carrier, N, or not in the month prior to onset of illness.

Source: redrawn from Johnstone (1910, unnumbered chart, between [link][link]).

In Folkestone, an investigation of enteric fever in the period 1896–1900 identified a certain milker who had worked on three different farms associated with the dissemination of the disease in 1896, 1897 and 1899. At the time of this finding, pre-Mary Mallon, the existence of symptomless carriers was unknown. In succeeding years, 1901–9, inquiries revealed that this same milker was connected with milk farms again associated with the dissemination of enteric fever. Bacteriological studies in 1909 established that this milker was a typhoid carrier. The milker, N, was a man of about 60 years who had, to his knowledge, never suffered from enteric fever; he had begun to work regularly as a milker in April 1893 at a farm in Elham Rural District, close to Folkestone. Thereafter, he was employed as a cowman and milker on farms near Folkestone, working at four different farms in the years to 1909. Of the 323 indigenous (non-imported) cases of enteric fever identified in Folkestone in the period 1896–1909, 207 (64 percent) are known to have received milk from a farm at which N was then acting as a milker. Johnstone (1910) concluded that enteric fever in Folkestone in this period had been spread mainly by milk, and that the milk was infected by a single typhoid carrier.

3.4 Quarantine and Isolation Today

In this section, we look at the impact of population movements, associated with technological changes in transport, upon the feasibility of quarantine and isolation as control strategies today. We then attempt to assess quantitatively their effectiveness using examples for twentieth-century influenza pandemics.

The Role of Movement

Quarantine and isolation are generally only effective control strategies if the surveillance and reporting systems which give early warning of the approach of infection are adequate; and it has always been simpler to control the arrival of infection by sea than overland. This was writ large in Chapter 1 in the early attempts by the states and principalities of Italy to control the spread of plague – compare, for example, Prato (Section 1.3) with the Venetian approach (Section 1.2), and Tatham’s comments (Section 1.1) on disease surveillance in England and Wales.

In the twentieth century, the impact of the breakdown of surveillance and cross-border controls upon the spread of disease was nowhere more graphically illustrated than among the refugee and indigenous populations of Russia and Eastern Europe in the years after the end of the First World War in 1918. Aggravated by the population turbulence caused by the Russian Revolution, millions of displaced persons ranged across western Russia from the Black Sea to the Baltic, carrying with them all manner of infectious diseases – typhus, plague, relapsing fever, and cholera among them. Cholera was an especial concern. Figure 3.34A (Figure 3.34 located in the colour plate section) shows the reported number of cholera cases, other than on the railways, in western Russia in the first six months of 1922. Such was the role of travel in disseminating cholera from the Black Sea and the Ukraine that cholera cases on the railway were recorded separately (Figure 3.34B) while, for this water-borne disease, sanitary stations were established at regular intervals along the principal rivers (Figure 3.34C).

Figure 3.34. Cholera in Russia, January–June 1922. (A) Number of recorded cholera cases in the governments of European Russia, excluding the railways, January–June 1922. (B) Number of cases on the railways, May 1922. Many railway stations contained isolation units. (C) Location of sanitary stations (red squares and triangles) on the principal rivers.

Sources: (A) League of Nations Health Section (1922a, unnumbered map, between [link][link]). (B) League of Nations Health Section (1922b, [link]; 1922c, unnumbered map, between [link][link]). (C) Office International d'Hygiène Publique (1909, Plate VII between pp. 274–[link]).

As we have noted, implementing quarantine and isolation has always been easiest for ships and when traffic volumes are low. In the twenty-first century, a radically different situation exists. Aircraft now provide the principal means of movement and international traffic volumes are vast. Figures from the World Tourism Organization show that international arrivals worldwide in 2009 for business, leisure and other purposes, amounted to 880 million. Travel for leisure, recreation and holidays accounted for just over half of this flux (51 percent). Some 15 percent of international travellers reported travelling for business and professional purposes and another 27 percent for specific purposes such as visiting friends and relatives, religious reasons and pilgrimages, and for health treatment. Slightly over half of travellers arrived at their destination by air transport (53 percent) in 2009, while the remainder travelled by surface (47 percent) – whether by road (39 percent), rail (3 percent) or sea (5 percent). Over time, the share for air transport arrivals has gradually increased so that international arrivals are expected to reach 1.6 billion by 2020.

Whether by land (Figure 3.35), sea (Figure 3.36) or air (Figure 3.37), the temporal story of transport for the last c. 250 years has been one of exponentially increasing carrying capacity and exponentially diminishing journey times.

Figure 3.35 Historical time changes in land transport at two geographical scales. (A) London to Scotland, 1750–1950. (B) Transcontinental eastbound across the United States from New York to California, 1850–1930. The solid lines show the exponential decline in travel times.

Source: based partly on Davies (1964, Figure 91, pp. 508–[link]).

Figure 3.36 Time changes in intercontinental travel by sea transport. (A) Transatlantic travel times between Europe and North America, 1820–1940. (B) Travel times between England and Australia, 1788–2000. Names of vessels are in italic. The solid lines show the exponential decline in travel times.

Sources: Davies (1964, Figure 91, pp. 508–[link]); Cliff and Haggett (2004, Figure 1A, [link]).

Figure 3.37 Historic changes in travel times by air transport. (A) Transcontinental in the United States eastbound New York–California, 1925–60. (B) Intercontinental between Europe and Australia, 1925–2000. Plane types are named. The solid lines show the exponential decline in travel times.

Sources: adapted from Davies (1964, Figure 91, pp. 508–[link]) and Cliff and Haggett (2004, Figure 1B, [link]).

If the shift from sail to steamships accelerated global interaction in the second half of the nineteenth century, aircraft did the same again in the second half of the twentieth century. Spurred on by the technological advances that accompanied the Second World War, notably the development of high-precision navigational aids and the gas turbine (jet) engine, passenger aircraft increasingly replaced ships as the international carrying medium. Figure 3.37 charts the decline in passenger flight times at two geographical scales. Graph (A) shows the change in transcontinental flight times across the United States (approximately 3,000 miles). A crossing which took two full days in the late 1920s had been reduced to half a day by 1960. Graph (B) shows even more striking changes in the 12,000 mile England–Australia run since 1925. In both cases the exponential decline in travel times is shown by a solid line, comparable in shape to the distance-decay curves for land and sea transport.

If we map the world in time–space using a technique like multidimensional scaling (MDS) (Cliff, et al., 2000, pp. 219–[link]), rather than using a conventional geographical metric, a consequence of the collapse in travel times is that the world’s countries have been rapidly moving closer together. Figure 3.38 shows this effect for part of the Pacific Basin. Here MDS has been used to construct a time accessibility map of 25 islands and island groups. Figure 3.38A is a conventional map of the locations of these islands. Figure 3.38B is the MDS representation of (A). Islands and island groups with similar levels of accessibility, as measured by travel times, are mapped together, irrespective of their geographical locations. The vectors show the way in which the relatively inaccessible Papua New Guinea (PNG), the Trust Territories of the Pacific Islands (TTP) and Latin America are moved away from the centre of the time space, whereas the more accessible Pacific seaboard of the United States migrates in towards the centre of the time space.

Figure 3.38 A collapsing world: travel-time maps. (A) Conventional map of the Pacific Basin with 25 islands, island groups and continental cities marked. (B) Time accessibility map of Pacific islands and Pacific Rim countries by scheduled airline carriers in the last quarter of the twentieth century, constructed by multidimensional scaling. Centres with similar levels of accessibility are mapped together irrespective of their geographical locations. 1 American Samoa; 2 Cook Islands; 3 Fiji; 4 French Polynesia; 5 Kiribati and Tuvalu; 6 Guam; 7 Hawaii; 8 Nauru; 9 New Caledonia; 10 Vanuatu; 11 Niue; 12 Norfolk Island; 13 Papua New Guinea; 14 Pitcairn; 15 Solomon Islands; 16 Tokelau; 17 Tonga; 18 US Trust Territories of the Pacific Islands; 19 Wallis and Futuna; 20 Western Samoa; 21 Tokyo; 22 Sydney; 23 San Francisco; 24 Singapore; 25 Santiago.

Source: (B) is based upon unpublished work by P. Forer, Department of Geography, University of Canterbury, Christchurch, New Zealand.
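The MDS construction behind a time accessibility map of this kind can be sketched in a few lines. The fragment below applies classical (Torgerson) scaling – one standard MDS variant, not necessarily the procedure used for Figure 3.38B – to a small, hypothetical matrix of travel times, recovering two-dimensional coordinates whose inter-point distances approximate the times.

```python
import numpy as np

# Minimal sketch of classical (Torgerson) MDS: embed places in 2-D so that
# inter-point distances approximate a travel-time matrix. The four places
# and their pairwise travel times (hours) are hypothetical.
T = np.array([[0.,  2.,  9., 10.],
              [2.,  0.,  8.,  9.],
              [9.,  8.,  0.,  3.],
              [10., 9.,  3.,  0.]])          # symmetric travel times

n = T.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
B = -0.5 * J @ (T ** 2) @ J                  # double-centred squared times
eigvals, eigvecs = np.linalg.eigh(B)         # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1][:2]        # two largest components
coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# Places close in travel time end up close together on the map,
# regardless of their geographical positions.
print(np.round(coords, 2))
```

The first two points (two hours apart in time) plot near one another, far from the second pair, reproducing in miniature the clustering by accessibility seen in the time-space map.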

Communicable Disease Consequences of Change

In the two decades after the end of the First World War, the long-term consequences of the collapse of geographical space for communicable disease diffusion and its control were difficult to foresee. Occasionally, a prescient view would be taken. Thus Massey (1933, p. v), in the preface to his book on Epidemiology in Relation to Air Travel, remarks:

Speedier transport is equivalent to a reduction of distance. This was shown when steamships superseded sailing vessels. It is demonstrated more forcibly today by the events of civil aviation. Among the momentous advantages, fraternal and commercial, born of this development, there is the disadvantage that countries affected by certain major infectious diseases are brought nearer to countries which ordinarily enjoy freedom therefrom.

Table 3.1, taken from Massey, goes to the heart of the matter by summarising the (then) relationship between ship and air travel times to the UK and the maximum incubation periods of four infectious diseases. It shows how the switch from steamship to air travel potentially opened up the UK to four of the quarantine diseases endemic in other parts of the world. Passenger aircraft reduced travel times to somewhat less than a third of the journey time by sea – and, in all instances, to less than the incubation periods of the diseases. It thus became possible for an infected individual unwittingly to carry sickness into a disease-free area before becoming symptomatic, greatly reducing the effectiveness of any early-warning surveillance systems and their associated quarantine and isolation control strategies.

Table 3.1 Disease diffusion consequences of reduced travel times. Travel times by ship and air in relation to the incubation period of selected communicable diseases in days, 1933

Disease        Incubation period (maximum)  Infected countries trafficking with UK   Journey time to UK by sea  Journey time to UK by air
Plague         6                            India                                    20                         6
                                            Iraq                                     18                         5
                                            Egypt                                    10                         3
                                            East Africa                              20                         5
                                            West Africa                              10                         3
                                            South America                            17                         5
Cholera        5                            India, Iraq                              As above                   As above
Yellow fever   6                            W Africa, S America                      As above                   As above
Smallpox       14                           India, Iraq, Egypt, W Africa, S America  As above                   As above

All figures are in days.

Source: Massey (1933, [link]).
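The logic of Table 3.1 can be expressed as a short check: given Massey's journey times and maximum incubation periods, flag the routes on which an infected traveller could reach the UK before becoming symptomatic. The figures below are taken from the table's plague rows; the script itself is only an illustration.

```python
# Flag routes from Table 3.1 on which an infected traveller could arrive
# in the UK before symptoms appear, i.e. where the journey time is no
# longer than the disease's maximum incubation period. Figures (in days)
# are Massey's (1933) values for plague traffic with the UK.

MAX_INCUBATION = {"plague": 6, "cholera": 5, "yellow fever": 6, "smallpox": 14}

# (origin, days by sea, days by air)
PLAGUE_ROUTES = [
    ("India", 20, 6), ("Iraq", 18, 5), ("Egypt", 10, 3),
    ("East Africa", 20, 5), ("West Africa", 10, 3), ("South America", 17, 5),
]

def undetectable(journey_days, disease):
    """True if the journey can end before the maximum incubation period."""
    return journey_days <= MAX_INCUBATION[disease]

for origin, sea, air in PLAGUE_ROUTES:
    print(f"{origin:14s} sea: {'risk' if undetectable(sea, 'plague') else 'safe'}"
          f"  air: {'risk' if undetectable(air, 'plague') else 'safe'}")
```

Every sea route exceeds the six-day incubation period of plague, so maritime quarantine had a chance of catching cases on arrival; every air route falls within it, which is exactly Massey's point.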

Table 3.2 Putative spread of yellow fever. Travel time in days from regions with endemic yellow fever in 1933 to countries with no yellow fever experience

Destination country   Infected countries trafficking therewith   Travel time by sea   Travel time by air
UK                    West Africa¹                               10                   3
                      Central America                            17                   5
                      South America                              17                   6
                      Caribbean islands²                         14                   5
France                West Africa²                               8                    2
                      South America²                             16                   4
Belgium               Belgian Congo¹                             18                   6
USA                   Panama²                                    8 (to New York)      3 (to New York)
                      Colombia²                                  8                    3
                      Caribbean islands²                         7                    2
South Africa          West Africa¹                               10                   3
British East Africa   West Africa¹                               21                   3
India                 West Africa¹                               26                   5

¹ Airlines likely to operate in the near future (1933)

² Airlines in operation (1933)

All figures are in days

Source: Massey (1933, [link])

Table 3.2 illustrates this problem in more detail for one of Massey’s diseases, yellow fever. Massey was fearful that collapsing travel times within the incubation period of the disease would lead to a geographical diaspora of undetected yellow fever from endemic regions to new parts of the world. As noted in Section 2.5, the Office International d’Hygiène Publique took up the issue of quarantine regulations for air traffic in 1928; an International Sanitary Convention for Aerial Navigation was drawn up in 1932 and came into force in 1935. Part III (Chapter II) of the 1935 Convention was specifically aimed at the control of yellow fever, and this prompted the design of airports that incorporated elements of spatial isolation to lessen the risk of yellow fever virus transmission to passengers on stopovers in tropical locations (Figure 3.39).

Figure 3.39 Inter-war plan of the anti-amaril aerodrome at Juba, Sudan. The putative identification of yellow fever virus (amaril) activity in southern Sudan in the mid-1930s necessitated remedial action if the aerodrome at Juba was to remain operative as a stopover on the Imperial Airways route between London and Cape Town. As implemented under Part III (Chapter II) of the International Sanitary Convention for Aerial Navigation (1935), ‘Measures Applicable in Case of Yellow Fever’, steps taken included the isolation of the airport from the indigenous population by relocating the village of Juba from site A (1.1 km to the southeast of the aerodrome) to site A’ (2.9 km due south of the aerodrome), beyond the flight range of potentially infective mosquitoes. Other measures included the construction of mosquito-proof residences for air crew, isolation rooms for patients and a hotel for passengers, all located within approximately 1 km of the aerodrome.

Source: Cliff, Smallman-Raynor, Haggett, et al. (2009, Plate 6.1, p. 351), originally from Pridie (1936, opposite p. 1296).

The problem of spatially intercepting cases of communicable diseases and their contacts before they can spread infection to new areas has been exacerbated by the exponential increase in personal mobility in the last 50 years. Figure 3.40 illustrates the point for the 82 islands and island groups for which UN data are available over the second half of the twentieth century (1957–1992). The diagram shows, on an annual basis, the number of visitors per head of resident population. Representative islands from different geographical environments have been plotted with heavy lines. The pecked line is the sample median; its gradient suggests that, over the sample, the visitor:resident population ratio grew tenfold in 35 years. For 77 of the 82 islands, a simple linear regression of the visitor:population ratio against time produces positive slope coefficients; the log scale used for the vertical axis implies that the growth in the ratio has been exponential over the period.

Figure 3.40 Population flux on 82 islands and island groups, 1957–1992. Line traces show number of visitors per head of resident population. Representative islands from different geographical environments are shown with heavy lines. The pecked line is the sample median.

Source: Cliff, et al. (2000, Figure 5.12, p. 205).
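The regression step described above can be sketched directly: fitting a straight line to the logged visitor:resident ratio recovers the exponential growth rate, and a tenfold rise in 35 years corresponds to a slope of ln(10)/35 per year. The series below is synthetic, built to match that median gradient for illustration only.

```python
import math

# Fit log(ratio) = a + b*year by least squares; a positive slope b means
# exponential growth in the ratio itself. A tenfold rise over 35 years
# corresponds to b = ln(10)/35 per year. Synthetic, noiseless series.
years = list(range(1957, 1993))
b_true = math.log(10) / 35                       # tenfold over 35 years
ratios = [0.05 * math.exp(b_true * (y - 1957)) for y in years]

n = len(years)
xs = [y - 1957 for y in years]
ys = [math.log(r) for r in ratios]
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))         # fitted slope per year

print(f"fitted slope {b:.4f} per year -> x{math.exp(b * 35):.1f} over 35 years")
```

On real island series the fit would be noisy, but a positive slope on the log scale is exactly the evidence of exponential growth cited for 77 of the 82 islands.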

Estimating the Impact of Quarantine and Isolation

Quarantine of the sort practised in Italy was a blunt instrument in that it attempted to limit the travel of people between communities (and often of goods as well) irrespective of their disease status. For humans, most quarantine efforts until the early years of the twentieth century focused upon controlling infection arising from maritime trade; Kilwein (1995a, b), Mafart and Perret (1998), Sattenspiel and Herring (2003) and Gensini, et al. (2004) provide reviews of the literature. The general experience with quarantine and isolation over this period was that it was more successful in reducing impact than in keeping areas disease-free – as in Italy with plague. But how successful was quarantine as an approach in the twentieth century as transport technology changed and the international flux of people multiplied exponentially year on year? And, recognising the possible adverse consequences that may follow from the implementation of large-scale quarantine action (Barbera, et al., 2001), what are its prospects for the future? We consider these questions in this and the next subsection.

Canada, 1918–19

Sattenspiel and Herring (2003) used data from the Hudson’s Bay Company records on the 1918–19 influenza pandemic among Aboriginal fur trappers in three northern communities (Norway House, Oxford House and God’s Lake) in the Keewatin District of Central Manitoba (Figure 3.41) to examine two topical questions relating to quarantine:

  (i) What is the impact of varying the time during an epidemic at which intercommunity quarantine is implemented?

  (ii) What is the effect of varying the duration of quarantine?

Figure 3.41 Keewatin District of central Manitoba, Canada. Location map showing positions of the trading communities of Norway House, Oxford House and God’s Lake.

Source: Cliff, Smallman-Raynor, Haggett, et al. (2009, Figure 11.12, p. 642).

An SIR compartment model was used with a 30-day quarantine period which was applied only at Norway House. See Section 1.4 for the specification of an SIR model. The compartment model permits the mixing parameter, β, to vary spatially, thus allowing inhomogeneous mixing of susceptibles and infectives. In their version of the model, Sattenspiel and Herring (2003) replaced β with two parameters: σ, the rate of travel out of communities and ρ, the rate of return into communities. For a theoretical discussion of quarantine in infectious disease models, see Hethcote, et al. (2002).
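As a rough illustration of how such a model behaves, the sketch below implements a much-simplified two-community SIR system – not the Sattenspiel–Herring formulation itself, which distinguishes outbound (σ) and return (ρ) travel rates – in which a single symmetric travel rate couples the communities and quarantine is modelled by cutting travel to zero during a fixed window. All parameter values are hypothetical.

```python
import numpy as np

# Simplified two-community SIR sketch: residents mix at home, and a small
# travel rate lets infectives in one community exert infective pressure on
# the other. Quarantine = travel set to zero during (start, length).
def run(quarantine=None, beta=0.5, gamma=0.2, travel=0.05,
        n=(500.0, 500.0), days=200, dt=0.1):
    N = np.array(n, dtype=float)
    S = N.copy()
    I = np.array([1.0, 0.0])                 # epidemic seeded in community 1
    R = np.zeros(2)
    peak, peak_t = 0.0, 0.0
    for step in range(int(days / dt)):
        t = step * dt
        m = travel
        if quarantine and quarantine[0] <= t < quarantine[0] + quarantine[1]:
            m = 0.0                          # quarantine cuts all travel
        lam = beta * (I + m * I[::-1]) / N   # local + imported infective pressure
        new_inf = lam * S * dt
        rec = gamma * I * dt
        S -= new_inf
        I += new_inf - rec
        R += rec
        if I[1] > peak:                      # track peak in community 2
            peak, peak_t = float(I[1]), t
    return peak, peak_t

peak_free, t_free = run()
peak_q, t_q = run(quarantine=(0.0, 30.0))    # 30-day quarantine from the start
print(f"community-2 peak day: {t_free:.0f} without quarantine, {t_q:.0f} with")
```

Comparing the two runs shows the quarantine window postponing the epidemic peak in the second community – the delaying effect that, in the full model, public health authorities can exploit to buy time.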

Figure 3.42A shows the impact upon case levels at Oxford House of varying the time on the epidemic curve at which quarantine measures were introduced at Norway House (question (i) earlier), with epidemic starts at Norway House and God’s Lake. Case load at the epidemic peak was minimised when quarantine was introduced well before the epidemic peaked, but not right at the beginning of an epidemic. The maximum effect was felt when quarantine was started about half way to the peak. Introduction of quarantine at this point on the epidemic curve also had the maximum delaying effect upon the epidemic peak. This is shown for God’s Lake in Figure 3.42B. Such delay can buy public health authorities time to devise other control strategies.

Figure 3.42 Quarantine in Canada, 1918–19. Estimated impact of inter-community quarantine upon the Spanish influenza pandemic in the Keewatin District of central Manitoba, Canada. (A) Oxford House (OH): Estimated cases at epidemic peak as a function of the time on the epidemic curve at which quarantine was started at Norway House (NH). Maximum case reduction occurred for quarantine start times about a quarter of the way through the epidemic. There was no effect after the epidemic peak. Curves for epidemics starting in NH and God’s Lake (GL) are shown. The hypothesised epidemic curve is shaded. (B) God’s Lake: Estimated delay in timing of the epidemic peak (in days) for different quarantine start times at NH and epidemics starting at NH and GL. Consistent with (A), maximum delay is delivered by starting quarantine about a quarter of the way through the epidemic. Curves are shown for high and low rates of inter-community travel. (C) God’s Lake: Estimated size of epidemic (in cases) as a function of the quarantine period at NH and epidemic starts at NH, OH and GL. No appreciable effect is felt with quarantines > 30 days. (D) Oxford House: Days to epidemic peak as function of quarantine completeness at NH on a scale from 0–100 percent (no–complete quarantine), and epidemic starts at NH, OH and GL.

Source: based upon graphs in Sattenspiel and Herring (2003, Figures 4–7, [link][link]).

Figure 3.42C shows the impact of quarantine periods of different lengths at Norway House upon the total number of cases estimated to occur at God’s Lake. For quarantines of up to 30 days, the case total dropped sharply; no further benefit was gained from quarantines of greater duration. This cut-off will, of course, be affected by the serial interval of the disease (about 4–8 days for influenza), so that we might expect the optimal quarantine duration to be positively correlated with the serial interval of the disease.

The effect of quarantine is also shaped by how completely it is enforced. Figure 3.42D explores this for Oxford House. The curves show the time in days to the epidemic peak at Oxford House (vertical axis) against quarantine completeness at Norway House (horizontal axis). The traces show that, once mobility rose above about 10 percent (i.e. the quarantine was less than 90 percent effective), quarantine did not delay the onset of the epidemic peak at Oxford House. Below this threshold, the epidemic peak was delayed by several days. Sattenspiel and Herring also found a similar 10 percent threshold for the ultimate size of the epidemic.

United States, 1918–19 and 1957

Two studies, by Markel, et al. (2007) and by Haber, et al. (2007), have investigated the impact of various non-prophylactic techniques such as school closures as an approach to epidemic mitigation. Markel, et al. used data from 43 cities in the continental United States for the 24-week period from 8 September 1918 to 22 February 1919, to determine whether city-to-city variations in mortality were associated with the timing, duration, and combination of various non-pharmaceutical interventions (school closures; cancellation of public gatherings; and isolation and quarantine); allowance was made for confounding variables like city size and population density. In a similar vein, but using simulation to evaluate different scenarios, Haber, et al. (2007) estimated the impact upon the ultimate size of an influenza epidemic of reducing contact rates among specified classes of citizens in a hypothetical small urban community in the United States. The community was assumed to have a distribution of household sizes and ages that followed the 2000 US Census. The interventions they investigated were school closures, confinement of ill persons and their household contacts to their homes, and reduction in contact rates among residents of long-term care facilities. Interventions were implemented at the start of the outbreak. Data from the 1957–58 Asian influenza pandemic were used to test the model. A mixing matrix was devised with the following age categories and mixing groups: < 1–4, 5–18, 19–64, ≥ 65 at home, ≥ 65 in long-term care; households, day-care centres, schools, workplaces, long-term care facilities and the community.

Markel, et al. took the weekly excess death rate per 100,000 population (EDR) as a measure of the success of different interventions. Over the 24-week study period, there were 115,340 excess pneumonia and influenza deaths (EDR = 500) in the 43 cities analysed. Every city adopted at least one of the three non-pharmaceutical interventions: school closure; cancellation of public gatherings; and isolation/quarantine. The combination of school closure and public gathering bans was the most common, implemented in 34 cities (79 percent) with a median duration of 4 weeks (range, 1–10 weeks). The longer the period of non-pharmaceutical intervention, the lower was the EDR. This is illustrated in Figure 3.43A by comparing St Louis (143 days of non-pharmaceutical intervention) and New York City (73 days). Cities which implemented non-pharmaceutical interventions earlier also had greater delays in reaching peak mortality (Spearman r = –0.74, p < 0.001) and lower peak mortality rates (Spearman r = 0.31, p = 0.02); see Figure 3.43B. There was a statistically significant inverse correlation between duration of non-pharmaceutical interventions and total mortality (Spearman r = –0.39, p = 0.005) and, as noted, cities experienced lower total mortality when intervention started early (Spearman r = 0.37, p = 0.008); see Figure 3.43C.
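The associations just quoted are Spearman rank correlations: each variable is converted to ranks and the Pearson correlation of the ranks is taken, so only the ordering of cities matters, not the raw magnitudes. A minimal sketch, using hypothetical response-time and peak-mortality values rather than Markel et al.'s data:

```python
# Spearman rank correlation: rank both series (ties share their mean rank),
# then compute the Pearson correlation of the ranks.

def ranks(xs):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1                # mean of the tied rank positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical values: earlier public-health response (more negative days)
# tends to go with a lower mortality peak, giving a positive correlation
# under the PHRT sign convention described in the text.
response_days = [-11, -7, -2, 0, 3, 8]
peak_edr      = [ 30, 45, 60, 55, 80, 95]
print(round(spearman(response_days, peak_edr), 2))
```

With these hypothetical values the statistic is about 0.94: cities that acted earlier rank with lower mortality peaks, the same direction of association Markel et al. report.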

Figure 3.43 United States: estimated impact of non-prophylactic interventions upon rates of illness and mortality in the influenza pandemics of 1918 and 1957. (A) Weekly excess death rate (EDR) in New York City and St Louis, September 1918–February 1919 in relation to the duration of non-pharmaceutical interventions. The lower excess mortality in St Louis may be attributed to the longer duration of intervention. (B) and (C), regression lines showing (B) the relationship between public health response time (PHRT) and the timing and magnitude of the first influenza peak in 43 cities and (C) Weekly EDR in relation to timing and duration of non-prophylactic interventions. The vertical line indicates the day on which the pandemic accelerated in each city. An intervention introduced on this day was given a PHRT of zero; interventions introduced on days before acceleration have negative PHRTs and, on days after, positive PHRTs. (D) Impact of school closures for varying levels of sickness and closure periods. (E) Impact of home confinement of sick individuals and their contacts for varying levels of quarantine compliance. In (D) and (E), effectiveness is defined as: effectiveness = (baseline rate – rate with intervention)/baseline rate, where the baseline rate is that for illness during the 1957–58 pandemic in the United States.


Source: Cliff, Smallman-Raynor, Haggett, et al. (2009, Figure 11.14, p. 646).

Haber, et al. used a different measure of the success of non-prophylactic interventions in their study, namely effectiveness, defined as:

effectiveness = (baseline influenza rate – rate with intervention) / baseline rate.

Figures 3.43D and E show the estimated impact upon outbreak size of (D) school closures and (E) confinement of sick people to home. For schools, closure at around 10 percent sick and for 14 days was the most effective compromise in the trade-off between reducing infection and increasing societal disruption. Closing early takes children who are incubating the disease out of circulation, while a 14-day closure exceeds the serial interval of influenza. As Figure 3.43D shows, delay (as measured by the percentage of illness required to trigger closure) allows incubating and infective individuals to produce secondary downstream cases, greatly reducing effectiveness. Figure 3.43E shows that the same principles apply within the family: confining sick individuals and their contacts to home, with high isolation compliance, greatly reduces the chances of community-wide contacts between infectives and susceptibles, making it a highly effective intervention. Haber, et al. also found that, for long-term care facilities (LTCF), reducing contacts of healthy residents with sick co-residents has a significant impact upon illness levels. This is an important finding since LTCF residents respond poorly to vaccination and often escape vaccination entirely in the US.
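Haber, et al.'s effectiveness measure is simple enough to express directly. The rates in the example below are hypothetical, chosen only to show the calculation.

```python
# Effectiveness as defined by Haber, et al.: the proportional reduction
# in the illness rate relative to the no-intervention baseline.

def effectiveness(baseline_rate, intervention_rate):
    """(baseline rate - rate with intervention) / baseline rate."""
    return (baseline_rate - intervention_rate) / baseline_rate

# Hypothetical example: a baseline illness rate of 30 percent falls to
# 15 percent under home confinement with high compliance.
print(f"effectiveness = {effectiveness(0.30, 0.15):.2f}")  # → 0.50
```

An effectiveness of 0.5 corresponds to the roughly 50 percent reduction in illness and death that Haber, et al. estimate for combined interventions.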

The findings of the Markel and Haber studies are consistent with those of Sattenspiel – the application of non-pharmaceutical interventions which reduce mixing between infectives and susceptibles early in an epidemic can both reduce the ultimate size of an epidemic and delay the peak of infection. This suggests, as do the recent studies by Davey and Glass (2008) and Meltzer (2008), that in planning for future severe influenza pandemics, non-pharmaceutical interventions should be considered for inclusion as companion measures to effective vaccines and medications for prophylaxis and treatment (cf. Barbera, et al., 2001). Haber, et al. estimate that, by combining these interventions, rates of illness and death in a community might be reduced by as much as 50 percent. Such non-prophylactic interventions are included in the current US Department of Health and Human Services Influenza Pandemic Plan (US Department of Health and Human Services, 2005, 2007).

Similar conclusions hold for international travel. On a global scale, Cooper, et al. (2006) used simulation models to track how the 1968–69 Hong Kong influenza pandemic would hypothetically have spread if it had been injected into the world's passenger airline network as it existed in 2000. They examined two scenarios: (i) with intervention to suspend 99.9 percent of air travel from affected cities and (ii) for comparison, no intervention, at two dates – August 1968 and February 1969 – two and eight months respectively after the first cases on 1 June 1968; see Figure 3.44 (located in the colour plate section). In the simulation, intervention was made after 100 cases had occurred in a city (or 1,000 cases for Hong Kong, the city of origin). As the airline links dropped out, the spread of the epidemic was delayed by up to three weeks compared with the no-intervention scenario – sufficient time to consider local interventions, including vaccination, provided a suitable vaccine had been predistributed to doctors (as was the case in the UK during the influenza A/H1N1/09 pandemic of 2009). The highly connected nature of the air travel network prevents minor delays between pairs of cities from combining into substantial delays over the whole network.
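A back-of-envelope calculation suggests why even a 99.9 percent travel cut buys only weeks rather than months. This toy model is ours, not Cooper, et al.'s, and the growth rate used is an assumed value: if local cases grow as exp(rt) and the daily chance of exporting infection is proportional to travel volume times local prevalence, then cutting travel by a factor f delays the expected first importation by only ln(f)/r days.

```python
# Toy calculation: under exponential local growth I(t) = I0 * exp(r t),
# the cumulative export hazard reaches ~1 at t* ≈ ln(r / (p * I0)) / r,
# so scaling travel p down by a factor f shifts t* by ln(f) / r days.

import math

def arrival_delay(travel_reduction, growth_rate):
    """Extra days until first importation when travel is cut by the
    given factor, under exponential local epidemic growth."""
    return math.log(travel_reduction) / growth_rate

# Hypothetical: 99.9 percent of flights suspended (factor 1000) against
# an epidemic growing at r = 0.25 per day.
print(f"Delay: {arrival_delay(1000, 0.25):.0f} days")  # → Delay: 28 days
```

A delay of roughly four weeks is of the same order as the "up to three weeks" found in Cooper, et al.'s far more detailed network simulation: exponential growth at the source swamps even drastic reductions in the export rate.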

Figure 3.44. Simulated spread of the 1968–69 Hong Kong global influenza pandemic via the world's airline network. Simulations are used to track the course of the pandemic (left) with intervention to suspend 99.9 percent of air travel from affected cities and, for comparison (right) with no intervention, at two dates – August 1968 and February 1969 – two and eight months respectively after the first cases on 1 June 1968. Blue lines represent surviving flights. Circle areas are proportional to city population size, and shading indicates the probability of each city experiencing a major epidemic (> 1 case per 10,000 people per day).


Source: Cooper, et al. (2006, Figure 4, p. 850).

These ideas were tested further by Epstein, et al. (2007), who used a stochastic epidemic model to study the global transmission of pandemic influenza, including the effects of travel restrictions and vaccination. They found that international air travel restrictions alone could only slightly delay the first passage of infection to the United States and slightly reduce the numbers of infected persons in metropolitan areas worldwide. When other local containment measures were applied at the source of infection in conjunction with travel restrictions, delays could be much longer and the case load reduced.

Plague in India, 1994

From 26 August 1994, outbreaks of bubonic and pneumonic plague began to be reported in south-central, southwestern and northern India (Dutt, et al., 2006). The outbreak probably resulted in some 5,150 pneumonic or bubonic plague cases and 53 deaths in eight Indian states, with the majority from the south-central and southwestern regions. Of the 5,150 cases, the majority (2,793) were reported from Maharashtra state (including Bombay), with much of the balance from Gujarat state (1,391 cases) and Delhi (749 cases); the remaining 169 cases were from Andhra Pradesh, Haryana, Madhya Pradesh, Rajasthan, Uttar Pradesh and West Bengal (Centers for Disease Control and Prevention, 1994a, b). By 19 October, the outbreak was under control.

As Madan (1995) and Fritz, et al. (1996) observe, the initial reports of the 1994 outbreak caused considerable international concern over the possible importation of pneumonic plague from India by air travel, especially among countries which were uncertain of the effectiveness of their own public healthcare systems. The response from the World Health Organization was benign, so that a number of countries adopted their own ad hoc procedures. These ranged from increased surveillance and checking of passengers at airports (e.g. France, Germany and the United States), through to border closure (offensive containment in terms of Figure 1.20B) in others (six of the Gulf States). Many countries, particularly in Asia, banned flights to and from India; Saudi Arabian authorities, for example, refused a scheduled Air India flight from Bombay permission to land in Jeddah (Madan, 1995). Air India aircraft were fumigated on arrival at airports in Rome and Milan, and their passengers were subjected to special health checks. In Moscow, the authorities ordered six-day quarantines for passengers from India and banned travel to India. The response reflected recognition of the risk of transmission in the modern global community: India was placed de facto in pseudo-isolation/quarantine for a period, with a ring of nations around its borders implementing restrictions on travel and trade to try to prevent cross-border transfer of infection (Figure 3.45). Together, these countries effectively comprised a cordon sanitaire around India. The economic impact on India was enormous and it took the country months to recover. In addition, some 300,000 Indians fled the infected regions, creating an internal migration problem.

Figure 3.45 Containing plague in India, 1994. Countries are identified if they adopted defensive measures to prevent plague from India crossing their borders during the Surat outbreak, August–October 1994. Measures used ranged from banning trade and travel (isolation and quarantine, grey shading) at one extreme through to enhanced surveillance and vaccination at the other (diagonal shading).


Severe Acute Respiratory Syndrome (SARS), 2003

SARS was the first new emerging infectious disease of the twenty-first century with the potential to become a global epidemic. It was caused by a previously unknown coronavirus subtype (SARS-CoV), which crossed the species barrier with subsequent human-to-human transmission. As a result, between November 2002 and July 2003, 8,096 SARS cases and 774 deaths were reported from 29 countries and areas. More than 95 percent of cases occurred in 12 countries of the Western Pacific Region; mainland China was the worst affected with 5,327 cases. A fifth of the world-wide SARS cases occurred among healthcare workers. The average global case fatality ratio was estimated at around 15 percent, rising to more than 55 percent for people above 60 years of age (Ahmad and Andraghetti, 2007).

During the outbreak, and dealing with a novel disease agent about which little was initially known, contact tracing, quarantine and isolation were used globally as the principal tools to limit disease spread. Such traditional intervention methods had not been used on this scale for several decades. In many countries, legislative changes were required to facilitate the approach (Rothstein, et al., 2003). WHO was strongly interventionist in leading the global response. Over the longer run, as the characteristics of the causative coronavirus were established, it became clear that it had low transmissibility between humans (basic reproductive number R0 about 3 compared with c. 7 or more for influenza A and c. 15–18 for measles prior to widespread immunisation) and that peak infectiousness followed the onset of clinical symptoms. These characteristics conspired to make the simple public health measures used initially, such as isolating patients and quarantining their contacts, very effective in the control of the epidemic (Anderson, et al., 2004).
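The link between a low R0 and the success of simple measures can be made concrete with a standard threshold result (our illustration, not a calculation from Anderson, et al.): to drive the effective reproduction number below 1, a fraction 1 − 1/R0 of transmission must be blocked. The R0 values below echo those quoted above.

```python
# Why low transmissibility made SARS controllable: isolation and
# quarantine must block a fraction c of transmission such that
# R0 * (1 - c) < 1, i.e. c > 1 - 1/R0.

def control_threshold(r0):
    """Minimum fraction of transmission to block for control."""
    return 1.0 - 1.0 / r0

for name, r0 in [("SARS", 3.0), ("influenza A", 7.0), ("measles", 15.0)]:
    print(f"{name} (R0 = {r0:g}): block > {control_threshold(r0):.0%} of transmission")
```

Blocking two-thirds of transmission (SARS) is achievable by isolating symptomatic patients and quarantining contacts, especially when peak infectiousness follows symptom onset; blocking over 90 percent (measles) is not.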

Figures 3.46 and 3.47 show the geographical diffusion process. The index case was a 72-year-old man who was taken ill while returning on 15 March 2003 from a trip to Hong Kong to his home in Beijing on Air China flight CA112. On the flight he transmitted SARS virus to a number of fellow travellers seated near him (Figure 3.46A; Olsen, et al., 2003, and Whaley, 2006). The aircraft was a Boeing 737-300, which can typically carry up to 126 passengers; on this flight there were 112 passengers and eight crew members. The index case had stayed at the Metropole Hotel in Hong Kong. Subsequent tracking confirmed that 22 fellow passengers and two crew members were infected by this index case, four of whom eventually died. WHO studies of other flights with SARS cases on board found within-plane virus spread on only four of 35 flights, so CA112 appears to have been an extreme transmission event.

Figure 3.46 Pattern of SARS spread, 2003, by aircraft. (A) Contacts within an aircraft cabin. SARS infections on Flight CA112 from Hong Kong to Beijing, 15 March 2003. (B) SARS epidemic curve, November 2002–July 2003 showing fuelling of the curve by flight CA112 and its sequelae. (C) Subsequent movement of infected passengers.


Source: Whaley (2006, Figure 15.1).

Figure 3.47 Global spread of severe acute respiratory syndrome (SARS), November 2002–May 2003. Sequence of appearance of probable SARS cases in 29 countries and major administrative regions, November 2002–May 2003. Timings are based on the date of onset of the first recorded case in a given geographical area.


Source: based on information in World Health Organization (2005f).

Rapid onward spread of the virus occurred because many of the man’s fellow travellers, now infected, flew on to Taipei, Singapore, Bangkok and Inner Mongolia. As Figure 3.46C shows, this onwards geographical spread continued so that, by May 2003, cases of SARS were occurring worldwide, driven by international air travel. The temporal sequence of cases and deaths at the global scale by July 2003 is mapped in Figure 3.47.

In the absence of a vaccine or effective therapies, the options for intervention within a country were limited to public health measures. There are essentially six intervention categories, namely: (i) restrictions on entry to the country and screening at the point of arrival for fever; (ii) isolation of suspect cases; (iii) the encouragement of rapid reporting to a healthcare setting following the onset of defined clinical symptoms, with subsequent isolation; (iv) rigorous infection control measures in healthcare settings; (v) restrictions on movements within a country (restricting travel, limiting congregations such as attendance at school); and (vi) contact tracing and isolation of contacts (Ahmad and Andraghetti, 2007; Anderson, et al., 2004). Figure 3.48 uses symbols and shading to indicate which interventions were used in each of the countries chiefly affected.

Figure 3.48 SARS control measures, 2002–3. Symbols and shading are used to indicate the main public health control measures adopted in each of the 10 countries principally affected. There is a clear difference between countries with local SARS transmission, where mandatory quarantine and vigorous contact tracing were undertaken to try to break the chains of infection, and the more permissive approach in countries with solely imported cases.


Source: drawn from data in Ahmad and Andraghetti (2007, Tables 1–7).

Affected countries fell into two distinct categories: Category 1, countries with local SARS transmission (Singapore, Hong Kong, mainland China, Taiwan, Vietnam and Canada), and Category 2, countries with imported SARS cases only (United States, Thailand, Malaysia and Australia). The Category 1 countries experienced 98 percent of the world's cases (Ahmad and Andraghetti, 2007). Once the presence of SARS was realised, all countries set up SARS task forces and committees at central and regional levels to coordinate surveillance, response, and communication activities, generally supervised by the national Ministry or Department of Health. All countries made legislative amendments to their infectious disease acts making SARS a notifiable disease, and all implemented intensified surveillance and reporting. The Category 1 countries instituted active tracing of close contacts and, with the exceptions of Vietnam and Canada, mandatory home quarantining of contacts of actual and suspected cases; no Category 2 country went that far. Voluntary home quarantining occurred in Canada and in the Category 2 countries, while Vietnam implemented institutional quarantine at affected sites. Eventually, the various public health measures used were sufficient to cause the epidemic to die out in the middle of 2003.

Summary

The weight of evidence from the studies described in this section is that, since the turn of the twentieth century, non-prophylactic interventions have had diminishing impact in preventing the geographical spread of communicable diseases, although they are still used in certain circumstances. The decline in their general utility as control interventions has been precipitated by changes in the speed and volume of international travel brought about by developments in the internal combustion and jet engines. Now no part of the populated globe is more than 24–48 hours from any other, well within the incubation period of most communicable diseases. This allows inter-area disease spread to occur asymptomatically, and thus undetected, perpetuating chains of infection. The main value of quarantine and isolation today is that careful use (having regard to when and for how long to implement the measures) will delay rather than prevent inter-community propagation of infection. They may also reduce the ultimate caseload. These interventions may thus buy precious time to deploy other control methods.

3.5 Conclusion

The public health responses of quarantine, isolation, closure of public facilities and the cessation of community events to contain the geographical spread of communicable diseases were widely used until the end of the first quarter of the twentieth century. Indeed, without the availability of appropriate vaccines and antibiotics, they were the only realistic measures which could be deployed for disease containment. But, as we have seen, both quarantine and isolation became progressively less sustainable as stand-alone control strategies as the century unfolded. The main value of quarantine and isolation when used in the modern era has been to slow the geographical spread of infection. Fortunately, medical advances in vaccines and antibiotics have enabled other defensive strategies to be developed, and it is these which we consider in the next chapter.