1 Introduction: terms and scope
For the purposes of this entry, the term ‘medical ethics’ will be understood in three senses: (1) as the moral constraints on and expectations for the professional behaviour of practitioners (specifically medical doctors); (2) as the moral expectations of all persons involved in direct healthcare services (including family caregivers); and (3) as moral consideration of the general social, philosophical, and theological concerns arising through healthcare interventions for individuals and society, including genetic, pharmaceutical, and digital efforts to improve the human body.
In the first sense, ‘medical ethics’ refers to specific approbated and reprobated behaviours expressing and reinforcing the standards of the practitioner guild (broadly understood) which allow expression of the virtuosity of the practitioner. In this sense, medical ethics tends to be defined by a code of standards or specific set of moral practices. This use of the term can be narrowly applied to physicians and surgeons, or more broadly to any healthcare professional delivering direct patient care.
The second use of the term includes the first, but expands to include all persons involved in the direct provision of services, including the patient’s family. The emphasis in this use is on what broadly happens in care settings. If used this way, the term ‘clinical ethics’ may be reserved, by some, for the morality of physicians and surgeons. This second meaning of ‘medical ethics’ covers anything occurring in direct healthcare provision, including the moral discourse and actions of patients, family members, chaplains, social workers, healthcare institutions, etc.
In the third sense, medical ethics is a moral discourse occurring with and beyond practitioners and the clinic, extending into the broader society. This use includes both the first and the second, but encompasses questions of allocation, access, pharmaceutical development, use of alternative therapeutics, etc. Importantly, this widest use includes human research ethics, public health ethics, and technological efforts to ‘improve’ the human race through direct application to individuals (genetic engineering, transhuman digitizing, etc.). Issues of animal welfare and/or rights, disability issues, and environmental ethics are sometimes drawn in, but these are best considered under the even broader moniker of ‘bioethics’ (Brock and Swinton 2012).
Though reductionistic, it is heuristically useful to see medical ethics in the West, as influenced by Christian theology (the focus of this encyclopaedia entry), as developing through stages. As an embedded set of practices, the medical ethics positions being advocated today must be located as part of an historical movement, not as teleologically-upward steps, but simply as responses to sociological, technological, and theological changes.
3 Medical ethics in late antiquity and medieval eras: Galenic medicine and virtue
3.1 Early Christian period: medical ethics as expression of faithfulness
Very early in the life of the Christian church, before Constantine, physicians were respected, but with the caveat that God did the actual healing. Luke, likely the writer of the Gospel of Luke and the Acts of the Apostles, was honoured for his faithfulness to Christ and for his vocation as a physician, with the former defining the goodness of the latter.
In the late third century, the twins Cosmas and Damian (probably ethnic Arabs) became famous for their integrity and their medical effectiveness. The hagiography of Cosmas and Damian claims that they successfully transplanted a leg donated across racial lines, suggesting that ethnicity was not central for Christians at the time. During the Diocletian persecution of the early fourth century, Cosmas and Damian were tortured and beheaded for their proselytizing, including testifying to patients. The contemporary situation is by no means as draconian as capital punishment, but modern medical ethics includes a prohibition against evangelizing in the healthcare setting on the grounds of power asymmetry in the practitioner-patient relationship. Most importantly for medical ethics, given the example of Cosmas and Damian, it was asserted early in Christian history that one could practice what was then considered scientific medicine in a manner consistent with the faith.
With the establishment of Christendom during the imperial rule of Constantine, the medical models and professional ethics of the Graeco-Roman world were openly adopted by Christians and legitimated in and by the church. Formal and informal guilds continued, requiring specialized mentored education (usually based on Galen). Patient-practitioner interactions, as well as those amongst practitioners, were also guided by the civil morality of newly Christian society. Consequently, a sort of mutual (if begrudging) respect tended to function amongst practitioners of Late Antiquity, be they religiously orthodox, Nestorian, Arian or, later, Muslim. Initially, this even extended to pagans; Oribasius (c. 325–c. 400 CE), as an example, was Julian the Apostate’s physician, yet was allowed to practice after a period of exile for his association with Julian.
There were limits to this respect. Fourth-century contemporaries Jerome and Gregory of Nazianzus, while accepting the legitimacy of much of Galen, seemed to differ on how a Christian could use Graeco-Roman medical ethics and guild models. Jerome, on the one hand, commended the standards of Hippocrates, comparing the duties of the cleric to those of the physician and noting that both vocations require virtuous character, expressed as engagement in the care of the suffering (perhaps echoing Sirach). Maintaining privacy, treating patients with respect, and answering in times of need are moral duties of both the medical vocation and leaders in the Christian church (Jerome, Letter LII, to Nepotian, paragraph 15). Unlike Jerome, Gregory of Nazianzus, whose father had practiced medicine, commended Christians who emphasized what he deemed a higher moral standard, one that did not depend on secretive guild rules nor require vows associated with pagan deities, such as the traditional Oath of Hippocrates (Oration VII, paragraph 10; see also Veatch and Mason 1987). Nonetheless, Gregory endorsed many of the common medical ethical standards, even those allowing judicious paternalistic lying. The physician might sometimes have to act against the expressed wishes of the patient by choosing to ‘prescribe medicines and diet, and guard against things injurious, [so] that the desires of the sick may not be a hindrance to his art’ (Gregory of Nazianzus Oration II, paragraph 18).
Magnus Aurelius Cassiodorus (c. 484–c. 577/585 CE) served Theodoric, an Arian Ostrogoth, who was functionally the leader of the Western empire. Cassiodorus was orthodox and (unsurprisingly, given his position) urged all sides in the doctrinal conflict toward religious toleration. His work De Anima considers the nature of the soul as animator of the physical body, anticipating debates over hylomorphism, and how this animation indicates a valuing necessitating respect for the physical body. Cassiodorus also facilitated the transmission of Greek and Roman medical and ethical works, including those setting moral standards as practitioner duties, into northern Europe and eventually to Celtic monasteries. In his Institutiones, he provided basic public health guidance, telling monasticized practitioners to be logical empirics (Cassiodorus [n.d.]). His Formula Comitis Archiatrorum provided a set of expectations for the Christian physician, including ongoing education. In it, he formulated a rudimentary Christianized oath which maintained duty to the guild, a duty subsumed under Christian moral order (Veatch and Mason 1987; Nemec 1976). Cassiodorus was not alone in his concerns for the poor and pilgrims, as the works of Isidore of Seville (late sixth/early seventh century CE) also indicate (Isidore of Seville 1964).
Cassiodorus also marked the effective end of significant medical-ethical developments for several centuries in the Christian West, developments which all but fully shifted to the Christian East and eventually to the growing Islamic regions. One significant change did occur, however: the requirement to rely on relevant medical expertise in legal cases was established within the Holy Roman Empire under Charles the Great (Charlemagne; Brittain 1966).
3.2 Islamic expansion: medical ethics as expression of shared cultural values
In the first several generations of Islam, the Medicine of the Prophet was composed – with the final version being assembled in the fourteenth century by Ibn Qayyim al-Jawziyya. These texts served as a collection of various preventatives, treatments, and rehabilitative suggestions garnered from the accepted oral tradition. Shaping the overall medical morality was the assertion from earlier hadith that Allah has not sent down any disease except that he has also sent down the cure. Otherwise, most practical physician ethics was adopted from prior Graeco-Roman, Persian, and Christian standards. For instance, as for Christians during the Justinian Plague of the mid-sixth century, it was generally assumed that Muslim practitioners should avoid plague cities but that the practitioner, if found within one, was duty-bound to remain and render care (Pierre 2021).
With the coming of the Islamic Golden Age, the knowledge expectations for physicians expanded by drawing from the Christian East and Persia, generally using Greek and Roman understandings of science. Especially influential were Nestorian Christian practitioners, forced out of Byzantium as religious toleration lessened. Notable figures in research and medical ethics were Jibril ibn-Bukhtishu (likely Nestorian; died c. 829 CE), Yuhanna ibn-Masawayh (likely Assyrian East Syriac Christian; c. 775–857 CE), and Hunayn ibn-Ishaq (Arab Nestorian Christian; 809–873 CE; Zunic, Karcic and Masic 2014). The latter is noteworthy for asserting a boundary between political and practitioner ethics in refusing to produce a poison for an assassination even when threatened with imprisonment (Cooper 2019). By the Abbasid pinnacle, both guild morality and government medico-legal standards included control over irregular practitioners, the respectful treatment of women, and openness to treating persons regardless of class, ethnicity, or religion (May 1983).
Ishāq bin Ali al-Rohawi (or, Ruhawi; an eastern Christian, strongly influenced by Islamic philosophical debates, possibly later converting to Islam; c. late ninth–early tenth century CE) produced the first genuinely significant work of medical ethics in the medieval period, Adab al-Tabib (usually translated as either Practical Ethics of the Physician or Practical Medical Deontology, though the emphasis is on virtuosity, not deontological reasoning; Aksoy 2004). Al-Rohawi argued for peer review, required reasonable guild loyalty, insisted on physician-patient confidentiality, and declared a duty to maintain technical expertise such that failure to do so, if it led to technical failure, should be followed by compensatory lawsuits or even severe physical punishment.
Living at the same time, Abū Bakr al-Rāzī (Rhazes; a Muslim; c. 864–c. 925 or 935 CE) rose to prominence, engaging with practitioners of Christian Byzantium to the west and the Chinese empire to the east. Using a ‘divine command’ argument, he favoured medical paternalism. ‘[T]he physician’, Rhazes noted, ‘even though he has his doubts, must always make the patient believe that he will recover, for the state of the body is linked to the state of the mind’ (Al-Razi Kitab al-Hawi fi al-tibb 156.167a.6–12; Tibi 2006: 206–207). The duty of nonmaleficence extended to any possible patient, even enemies (Al-Razi, quoted in Zarrintan, Shahnaee and Aslanabadi 2018: 1435–1438).
Ali ibn Abbas Majoosi Ahwazi (likely from a Zoroastrian family that had converted to Islam, also known as Haly Abbas; died very late tenth century CE) wrote The Perfect Art of Medicine. His arguments are quite similar to those of Al-Razi in prioritizing divine command, but also put a strong emphasis on the virtuosity of the practitioner, including an expectation of moral prudence in protecting human life and forbidding the use of abortifacients.
Perhaps the single most influential physician of the early Middle Ages was Ibn Sina (Avicenna; 980–1037 CE), whose medical text The Canon of Medicine was the most significant in both early Islam and in medieval Christian Europe. It required respect for patients, honest and clear communication of medical and moral information, and the maintenance of technical skills (Azmand and Heydari 2018). Though describing disease aetiology using an Aristotelian model of causation, he held to a neo-Platonic-like dualism on the soul, one to be echoed by Descartes centuries later. Though Ibn Sina sought to avoid it, this kind of dualism can lead to a Docetic-like reduction of the material body and dismissal of physical suffering. Not all contemporaries appreciated his work, as Al-Ghazali’s Incoherence of the Philosophers indicates. The Incoherence itself was to be rejected by Ibn Rushd (Averroës; Iberian; died 1198 CE), an Aristotelean. Likely while serving as court physician for Berber kings, he wrote The Incoherence of the Incoherence, as well as The Book of the Principles of Medicine, noting that the morality of medical practitioners rested on their own character (a position affirmed by contemporary Christian Scholastics; Delgado 2012; Chandelier 2018).
Another conduit of Graeco-Roman philosophy for later European medical ethics was the personal physician to Saladin, Moses ben Maimon (Maimonides; Sephardic Jew; 1138–1204 CE). Maimonides drew on Jewish, Nestorian, orthodox Christian, and Islamic sources, as well as Hippocratic, Aristotelian, and Galenic traditions. He also seems to have been influenced by the tradition of Ibn Rushd (Averroës). Maimonides’ most famous work was The Guide for the Perplexed, in which he developed the theme, quite popular in contemporary Christian bioethics, of the goal of seeking ‘flourishing’. This can only occur for the individual within communities of integrity, for humans are ‘by nature social’ (Maimonides 1904: part 3, ch. 27). Some of his medical ethical thoughts were encapsulated in the so-called Oath of Maimonides (likely composed by Markus Herz in the eighteenth century; Rosner 1967).
3.3 Medieval Christian Europe: medical ethics as expression of virtue
In Christian Europe during this time, care was centred in the monasteries, with the practitioner generally being a cleric assigned to care of the needy or a layperson offering folk remedies. The significant development of pilgrimages at the beginning of the second millennium led monasteries to expand hospices for pilgrims as well as for the local sick and dying. With the expansion of pilgrimage and use of relics came debates over body-soul hylomorphism and the nature of physicality, which remains the central contemporary metaethical concern of medical ethics.
An example of monastic healing endeavours was that of Hildegard von Bingen (1098–1179). A stunningly gifted mystic and polymath, with notable skills in management, politics, visual art, and music, Hildegard blended practical empiricism with the academic medical tradition in her healthcare. She authored two volumes that include sections on medicine (they were perhaps originally one work, with additional material added in the century after her writing), Causae et Curae (Causes and Cures) and Physica (a work on natural order). While well-read, she seems to have trained as a monastic infirmarian. Her medical practice, according to her writings, was shaped by Graeco-Roman humoral theories, but used practical herbals and common techniques, along with prayer and conversation with the patient, to promote movement toward viriditas (spiritual fecundity or flourishing). Not surprisingly, given her institutional location, she tried to balance other contemporary medical writings by including in her works a great deal on female health. The efficacy of her practice is unknown, given the hagiographic nature of the reports (Sweet 1999; Bushnell 2021).
The rise of universities and the development of urban centres, even if delayed by the Black Death, allowed medical professionalism independent of monastic communities to strengthen, with the various guilds (physician, surgeon, apothecary) formalizing and codifying practitioner obligations to peers, patients, and, to a lesser extent, society. The church accepted professionalization following the Second Lateran Council (https://www.papalencyclicals.net/councils/ecum10.htm; Amundsen 1978). Medical ethics, however, remained very much a subset of Christian ethics. During the period just before the Black Death, some more formal work on medical ethics codes did occur, such as that by Arnaldo di Villanova (1240–1313; De Cautelis Medicorum [On Rules of Caution for Physicians], attributed), which asserted a duty to patient and society (McVaugh 1997; Ricciardi, Ricciardi and Ricciardi 2017).
For medieval Christian medical ethics at a theoretical level, the most important result of increased access to Greek and Islamic texts was in developing the understanding of the relationship of the physical to the Divine, and an increased emphasis on teleological virtuosity. For medieval Christian physicians, a ‘thing’ was not merely reducible material (atomism) nor abstract energy (neo-Platonic idealism tinged with docetism), but an inextricably yoked dualism or heterogeneous unity (Thobaben 2009).
The rediscovered Aristotelian works tempered neo-Platonic idealism, with the result that hylomorphism in some form was accepted as a necessary rejection of dualism (Fitzpatrick 2016; Ogden 2022; McGinnis 2015). For Aquinas and most Scholastics, the individual was a ‘created good’ as soul within or inclusive of the body, with the whole self-directed toward a telos (‘end’, ‘goal’, or ‘purpose’). The physical body was understood as ‘form’ in matter that expressed the self’s substance. The relationship was complex, for the human body-soul lacked the simplicity of the Divine being, or a ‘synchronized’ teleological wholeness. Even so, it was generally believed that the physical ‘bodily thing’ was somehow the same thing through radical change, moving towards the telos, until transformed into spiritual body in death and resurrection. The extent of bodily ‘unicity’ (that is, the various aspects of the ‘thing’ as constituting one continuing expression of substance through contingent changes) was debated. Ethically, if the physical body was understood as part of the oneness of the human thing, then caring for that body, including with medicine, was not only valid but morally compelled.
A contrary response asserted the lesser importance of the physical body, even to the point of it being (ironically) morally ‘immaterial’. Severe self-deprivation was a sign of holiness, with the most extreme forms being the flagellant penitent confraternities and female anorexia mirabilis. A flagellant is one who willingly submits to or self-inflicts a beating with sticks or whips; anorexia mirabilis means ‘holy’ or ‘miraculous’ fasting. Both of these spiritual disciplines were intended by the participant as a means of gaining and demonstrating control over the physical body. The severe illness and death that at least occasionally resulted were considered a sign of deep spiritual commitment. Against these harsh spiritual disciplines, eucharistic transubstantiation and the division of relics implied the maintenance of substance and identity in physicality. The ‘spiritual’ significance of physicality remains a contemporary concern, as bioethicists tend to strongly disregard the ongoing ‘ownership’ of the physical body by the ‘self’ in order to allow autopsies and organ donation. The valuation of the physical body and its identification with or as the identifiable continuing self is also central in debates about the legal status of severe traumatic brain injury survivors and, albeit in a different sense, in justifications for and against gender reassignment/confirmation surgeries (Ogden 2022).
The coming of the Black Death in the mid-fourteenth century both solidified expectations for practitioner virtue and increased demands for physicians to discover the proximate causes of disease. As in earlier plagues, in both Islamic and Christian regions, some practitioners demonstrated the virtues of courage and prudence by donning protective clothing (considered the most advanced technology) while ministering to plague victims. Others did not, as implied in Geoffrey Chaucer’s introduction of the physician in The Canterbury Tales, which contrasts the virtues of past masters of the medical pantheon with the incompetence and charlatanism of practitioners in the author’s day:
Wel knew he the olde Esculapius,
And Deyscorides, and eek Rufus,
Olde Ypocras, Haly, and Galyen,
Serapion, Razis, and Avycen,
Averrois, Damascien, and Constantyn,
Bernard, and Gatesden, and Gilbertyn.
He knew well the old Aesculapius,
And Dioscorides, and also Rufus,
Old Hippocrates, Haly [Abbas], and Galen,
Serapion [likely either Serapion the Elder or Serapion the Younger], Rhazes [Abū Bakr al-Rāzī], and Avicenna [Ibn Sina],
Averroes [Ibn Rushd], John the Damascan, and Constantine [likely referring to Constantine the African],
Bernard, and Gaddesden, and Gilbertus.
(Chaucer, The Canterbury Tales, lines 429–434, with identifications added in brackets)
Generally, the ultimate cause of health problems, including the plague, was understood to be a punishment for sin (individual or corporate or species-wide), but practical treatments were morally legitimate. Writing during the Black Death, Henry of Grosmont, Duke of Lancaster, produced The Book of Holy Medicines, which presented Christ as the spiritual physician and the Virgin as an attending nurse for the wounds of sin and sorrow. The work indicates that medical language was appropriated for spiritual discourse, much as a spiritual aetiology was assumed for the ultimate cause of health disorders. The practitioner could readily appeal to devout prayer and use galenic recipes. Commending both to the patient was an ethical expectation (Henry of Grosmont 2014; Yoshikawa 2009).
4 Medical ethics in Western modernity: reduction and utility
4.1 Early modernity: humanism and mechanization of the body
During the middle Renaissance and into the late period with the Reformation, the impact of humanism and rise of inductive science solidified medicine as a profession distinct from church vocation, increasingly requiring university education rather than instruction in the cloister or through apprenticeship. Even some surgeons (‘surgeons of the long robe’) and apothecaries had university training, though empirics and irregulars continued to be instructed through mentoring. Medical practice remained Galenic and medical ethics focused on bedside manners, avoidance of surgery, and certain limitations in research (see Benedetti, 1450–1512; de Zerbi, 1445–1505; Tozzo et al. 2022).
The Church still discouraged dissection undertaken for instructional purposes; the profession did so less and less. For many practitioners, human dissection was a denial of the sacredness of the body, a belief traceable back at least to Islamic medicine and maintained through medieval Christian medical practice. A transition occurred with the public success of the scientist-physician Andreas Vesalius (Andries van Wezel) in the sixteenth century. A translator of Islamic medical texts, Vesalius strongly pushed against the galenic tradition on the basis of his own cadaveric studies, often done with the bodies of recently executed criminals. Reported to authorities for having supposedly dissected a man whose heart was still beating, Vesalius chose, or was required by religious or legal authorities, to go on pilgrimage to the Holy Land. Not coincidentally, that journey involved the adoration of relics as the physical carriers of spiritual reality, a declaration of the divinely-created sacredness of the human body, dead or alive (O’Malley 1954; Castiglioni 1943; Lasky 1990).
While some early Protestants opposed necropsy, and especially vivisection, dissection’s eventual public acceptance in Protestant Europe is portrayed in the corporate portrait of Rembrandt van Rijn’s The Anatomy Lesson of Dr Nicolaes Tulp (1632). Of course, the rise of inductive science made the toleration of studies on dead bodies almost inevitable, which in turn would lead to the justification of experimentation with living subjects.
Ambroise Paré, also in the sixteenth century, was in the barber-surgeon guild and served as a battlefield surgeon. He reportedly sought a volunteer from among condemned prisoners who would agree to be a research subject for verifying the ineffectiveness of bezoar stone (indigestible material from a ruminant’s gut) as a poison antidote. The incentive was release from the sentence should the prisoner survive. Paré administered the poison, followed by the bezoar stone, and the inefficacy was demonstrated in the man’s agonizing death (Eng and Kay 2012; Fabián 2019). By the mid-twentieth century, following revelations about the activities of Nazi physicians in concentration camps, such experiments would be deemed absolutely wrong by the international community. The intrinsic power imbalance of the practitioner-patient relationship necessitates that the physician emphasize care-giver duties above those of researcher, and all the more so when that authority is magnified by the diminished autonomy of those in the military, in prisons, and in desperate poverty. Nevertheless, counter-arguments are made using utilitarianism, and contemporary concern has been raised about experimentation on and organ procurement from prisoners, as seems to be the case in the People’s Republic of China (Rogers, Singh and Lavee 2017).
Scientific professionalization developed, and broader access to formally trained physicians expanded, at least in the growing urban areas and amongst the emerging middle class. Professional guilds were expected to develop codes that coincided with the values and interests of that rising class while recognizing the political power of the centralizing nation-states. Though still ‘Christian’, medical ethics sounded increasingly ‘professional’. The tone is evidenced in the sixteenth-century Treatise On the Duties of the Doctor and the Patient by Leonardo Botallo:
If the doctor does not know everything of the disease, he must give only a hypothetical prognosis […] It is necessary that the physician combines generosity and solidarity, avoids any attachment to his own interest. Otherwise, not only his work, but also his own name would be devalued and corrupted. (Tozzo et al. 2022: 50–56)
Towards the end of that century and into the seventeenth century, significant advances occurred in anatomy and physiology by using empirical observation to solve the vitalist question (Wolfe 2017). William Harvey (1578–1657) reached his conclusions about circulation by combining anatomical studies on human cadavers, including those of miscarried foetuses, with experimentation on live non-human animals. His research challenged both traditional versions of hylomorphism and the then very recent mechanistic views of the physical body, such as that of René Descartes (Gorham 1994; Anscombe 2011). The latter’s suggestion that body and soul were tied together in the pineal gland did not solve theological or moral debates about the significance of the physical body (Moussa and Shannon 1992). Harvey worked at the (by then) Protestant St. Bartholomew’s charity hospital and, consequently, was required to take a vow imposing moral duties on the practitioner that included serving the poor and maintaining up-to-date professional skills. For him, this moral duty meant epistemologically prioritizing empirically verifiable truth over medical tradition. In this claim he echoed Theophrastus von Hohenheim (Paracelsus) of a generation earlier, who had burned the works of Galen and Ibn Sina while asserting the superiority of non-university-trained practitioners, yet Paracelsus had also made exorbitant claims for astrological guidance in medicine.
The Puritan Thomas Sydenham (1624–1698) thought the body so sacred that he would avoid any physical probing, but this did not impede his development of the modern approach to disease classification and rejection of aspects of medical tradition. Some of his peers considered him a ‘violator’ of guild loyalty and almost a medical ‘heretic’ (Anstey 2011; Meynell 2006). Sydenham is one of the many to whom credit is given for the Hippocratic-like aphorism primum non nocere (‘first, do no harm’), but how ‘harm’ was defined remained open. Besides developing many reasonably effective treatments, he also favoured the use of ‘Sydenham Laudanum’ (opium mixed with sherry) for pain management. While laudanum was quite useful in pain control, a significant moral problem arose with treatment-associated addiction and the rise of ‘drug-seeking’ behaviours, which continue today with the wide availability of opiates, the apparent promotion of such by some pharmaceutical corporations, and an increasingly broad definition of ‘pain’. Differentiating (or not) between nociception, pain, and suffering remains a significant and often ignored problem for bioethics (Mischkowski et al. 2018; Brand and Yancey 1993; Morris 1991). The moral problem of pain management is not only the possibility of addiction, but also that autonomy requires both the freedom to act and the capacity to reasonably do so, the latter being impeded by addiction. Contemporary efforts to address the morality of pain and addiction often centre on arguments made popular by Sydenham’s friend, fellow physician, and perhaps co-author of the Preface of Observationes Medicae (1676), John Locke. Locke’s significant impact on medical ethics is through his writings on rights.
Herman Boerhaave (1668–1738), also Protestant, wrote a thesis on hylomorphism entitled De Distinctione Mentis a Corpore (‘On the Difference of the Mind from the Body’) in which he asserted that the physical body alone, mechanistically understood, was the responsibility of the physician (Cook 2000; Winslow 1935). He came close to being a practical materialist, bracketing aside any spiritual questions and, to some extent, moral ones. This is similar to the position taken at the end of the twentieth century by the evolutionary biologist and philosopher of science, Stephen Jay Gould, who asserted that religion and science are ‘non-overlapping magisteria’. While some respect for differences in approaches is generally helpful, it does not resolve the bioethical concerns about body and mind (Gould 1997). Medical ethics still has not satisfactorily addressed when a matter is a concern for medical practitioners and when it is properly a concern of those in priestly roles, or when it is both or neither.
4.2 Mid-modernity: public health and regulating professional differentiation
The medical professions were shaped by technological change, developments in scientific knowledge, economic pressures, and rising expectations of the growing middle class. Not surprisingly, the various groups used their ethical and educational standards as a means of controlling market share, specifically by limiting credentialling, though also to offer the patient-consumer reasonable and predictable expectations in the marketplace (Loudon 1985). The British government used taxation and patent law as means to compel moral consistency, especially against what was known as ‘quackery’, on the grounds that a laissez-faire market cannot function in medicine (Stebbings 2013). Licensure for selling medicine was required by 1783 in Britain; it would be decades before such occurred in the US.
‘The Worshipful Society of Apothecaries’ (technically, chemists, druggists, apothecaries, and pharmacists might, at times, have been differentiated, but now are one category) separated from grocers and spicers in London in 1617, and eventually won a lawsuit against the College of Physicians that laid the groundwork for the modern boundary between pharmacy and general medical practice (www.apothecaries.org; Aronson 2023). Similar professional differentiation occurred in 1745 as the Company of Surgeons (now ‘The Royal College of Surgeons’) formed out of The Barbers’ Company. These distinctions, however, have not always been rigid. Surgeons and physicians would eventually merge. The line between general practitioners and pharmacists has been challenged in the twenty-first century, with the latter increasingly making treatment decisions and providing vaccinations.
In part in reaction against the restrictions of the guilds, in part due to obvious incompetence with some treatments (by both regulars and irregulars), and in part as an expression of the broader social movement toward ‘levelling’, some practitioners produced works for self-care. Perhaps most noteworthy was George Cheyne in the eighteenth century, a popular physician strongly influenced by the pietist mystic Jakob Böhme. Rejecting mechanistic reductionism and professional hubris, he made his works on medical self-help widely available, strongly prioritizing the moral obligations to humanity over those to the guild (Cheyne 1705). In turn, he strongly influenced the pragmatic, anecdotally based empiricism of Rev. John Wesley, whose Primitive Physick (1747) was one of the most widely-used medical works of the second half of the eighteenth century in both Britain and the new United States, going through at least twenty-three editions (Rogal 1978). Wesley emphasized the preventative health benefits of self-discipline, promoting a notion of broad well-being, something akin to what in contemporary bioethics is called ‘flourishing’.
4.3 Mid-modernity, utilitarianism, and public health
In a real sense, as well-liked as Wesley’s Primitive Physick was, popular empiricism was beginning to yield to scientific medicine based on controlled experimentation and reductionist inductive logic. For instance, by the end of the eighteenth century, many medical educators were strongly advocating for cadaveric study. While at least as ancient as Herophilus and Erasistratus (fourth century BCE) in Ptolemaic Egypt, the moral suitability of such a pedagogical technique (later monikered ‘body snatching’) was widely debated. Still, the cultural ‘victory’ of the dissectionists appeared inevitable with the skeletal articulation of the body of Charles Byrne, who had seemingly suffered from gigantism and whose remains were on display at the Hunterian Museum at the Royal College of Surgeons from 1799 until 2023 (Solly 2023). More recently, popular versions of similar pedagogical displays began in 1995 with Gunther von Hagens’ public plastination exhibits.
Another example of the rising authority of research-based medicine was the human experimentation on variolation and vaccination. Variolation had become an acceptable, albeit controversial, preventative for smallpox in the West, in part due to Lady Mary Wortley Montagu adopting, in the early eighteenth century, ‘ingrafting’ practices she had seen while in the Ottoman Empire. The process had been used in sub-Saharan Africa, India, and China for centuries. To determine efficacy, though, controlled experiments were needed. The moral boundaries of subject selection were not so much ignored or overlooked as simply deemed not concerning. For instance, in 1721, men and women prisoners at Newgate subject to capital punishment were offered freedom, conditioned on participation in smallpox variolation studies followed by departure for the colonies (Weinreich 2020). Experiments were also conducted on those constrained in other ‘total institutions’, such as slaves (Goffman 1961; on slaves as experimental subjects in 1773, see Schiebinger 2017). The utilitarian justification was strikingly similar to Paré’s two centuries earlier, and to one that would be used two centuries later to legitimate infecting institutionalized children with hepatitis at the Willowbrook State School for Children with Mental Retardation.
Christian leaders of the eighteenth century took both sides in the moral debate over live inoculations and the associated research. Two figures who shaped vaccination use in the English-speaking world, especially in North America, were Cotton Mather and Jonathan Edwards. Mather reportedly learned of the method from an enslaved man named Onesimus in 1721. He justified the moral choice for the procedure on the basis of the intrinsic value of persons, deeming opposition evil.
[July] 16. […] I have instructed our Physicians in the new Method used by the Africans and Asiaticks, to prevent and abate the Dangers of the Small-Pox, and infallibly to save the Lives of those that have it wisely managed upon them. The Destroyer, being enraged at the Proposal of any Thing, that may rescue the Lives of our poor People from him, has taken a strange Possession of the People on this Occasion. (Mather 1911: 631–632)
Mather, an occasional supporter of witch trials, promoted vaccines on the basis of what he deemed the logical interpretation of empirical reports and moral duty ‘subservient unto the Interests of Piety […]’ (Mather 1911: 626). In response, and spurred on by the medical establishment as well as pamphlets from the likes of James and Benjamin Franklin, someone attempted to bomb Mather’s house, with the warning he should ‘get infected’ with explosives. Later in life Mather regretted the witch trials he had not stopped, and Franklin regretted his opposition to variolation that had led to many unneeded deaths. In 1758, Jonathan Edwards, famous for his theological grounding of the North American Great Awakening (a movement coincident with the Wesleyan Revivals in Britain), died in his effort to prove the efficacy and safety of variolation.
Over a half century later, Edward Jenner developed a safer means of inoculation, having famously observed (and possibly having learned from two decades of prior work by a farmer named Benjamin Jesty) the demographic pattern of smallpox resistance amongst cowpox-exposed milkmaids (Hammarsten, Tattersall and Hammarsten 1979). Jenner systematized the process, but on the basis of a series of morally dubious actions. In 1796, Jenner vaccinated eight-year-old James Phipps, the son of an employee, using material from a cowpox-infected milkmaid, and then purposely tried to infect the boy with smallpox. In a utilitarian sense, what he did might be deemed allowable, if not necessary. The success led to the widespread acceptance and institutionalization of vaccination programs. In a deontological sense, however, the research protocol clearly violated the autonomy of an underaged and economically disadvantaged individual. In virtue language, the action raises doubts about the fulfilment of the vocational obligation to do no harm.
As physicians continued their attempts to expand their scientific knowledge, reach the ever-growing middle class, and establish themselves as elites, medical ethics became a mechanism both for governing practitioner-patient practice and for polishing the somewhat tarnished status of the guild. The year 1803 can be used as the approximate beginning of the formalization of the academic and clinical field of clinical bioethics (medical ethics), as that is when the term ‘medical ethics’ seems to have been first used, apparently coined by the English physician Thomas Percival (1803).
More than a century later, in 1927, the more expansive term ‘bioethics’ was coined by the German theologian and pastor Fritz Jahr. Van Rensselaer Potter, an American biochemist, popularized the term in English during the early 1970s. For both, the term had an expansive meaning, yoking human moral concerns with environmental ethics and the treatment of non-human entities. Jahr’s argument is strongly theological, asserting that an imperative exists for all humans as co-creators with the living things of nature (see Jahr 2013; Kalokairinou 2016; Muzur and Sass 2012; Reich 1994; Sass 2007). This broader understanding, while impacting medical ethics in the late twentieth century, had little impact on professional ethics before that period.
That said, Percival’s contemporaries at the turn of the nineteenth century were likewise collating standards. For example, in 1799, four years before Percival’s work appeared, a new program in medical education was established in the trans-Appalachian town of Lexington, Kentucky. Students within that program formed a secret society, Kappa Lambda, that required professional behaviours paralleling those Percival would soon publish (Ambrose 2005). This society expanded to other medical schools in the US, with members participating in the codifying of the Principles of Medical Ethics when the American Medical Association was constituted in 1847.
Inevitably, some were disappointed with the emerging medical model, as the claims for science were more than occasionally inflated, leading to doubts about the moral standards of the profession. Not infrequently, especially among those labelled Romantics, this was tied to apprehension about industrialization and suspicion of the generalized belief in ‘progress’. The first part of Goethe’s Faust, with a very different moral framing than Marlowe’s, was published in 1808. Frankenstein; or, The Modern Prometheus was published anonymously by Mary Shelley in 1818. Both address the hopes and hubris of medical researchers, as well as the mind-body relationship.
One of the most notorious examples of immoral guild protection and elitism in the nineteenth century was the shaming of Ignaz Semmelweis who, while perhaps lacking sufficient tact, clearly demonstrated with simple controlled studies the clinical value of washing hands in chlorinated lime solution after autopsies, wiping down facilities, and vigorously cleaning instruments. Semmelweis’ assertion, first made in the late 1840s, was considered a challenge to the increasingly rigorous education of medical students (Semmelweis 1859; Dunn 2005). The so-called Paris Method (especially as developed by Pierre-Charles-Alexandre Louis [1787–1882]), albeit with variations at different schools, used case tabulations, careful clinical observations, and autopsies as highly effective teaching tools. Ironically, the increased knowledge tended to lead to ‘therapeutic nihilism’, the doubt that medical interventions would make any significant difference (Dunea 2023). Most of the patients in such hospitals were working class or poor. Research took priority such that the patient became ‘the disease in such-and-such a room’. Diagnostic and research time was deemed so important that students and clinical staff hurried from autopsies to bedsides. Clinical disagreements, such as with Semmelweis in Vienna, were considered rebellions to be crushed. In retrospect, the incident raises serious questions about a fully self-regulating profession. The full force of the profession was demonstrated when Semmelweis was involuntarily institutionalized and subsequently died violently.
Less than a decade later, John Snow made a similar assertion using epidemiological techniques (Tulchinsky 2018). As with Semmelweis, Snow had discarded the humoral theory in favour of statistical studies of risk. He determined that the Broad Street pump in London was introducing cholera through contaminated water and convinced authorities to remove the pump handle, dramatically lowering infection and death. Snow had been a surgeon who sought further education and entered the physicians’ guild. He was a pioneer anaesthesiologist, ultimately serving as a physician to Queen Victoria. This position of socio-political power, and especially the fact that his argument about infection did not challenge the medical establishment, meant he was raised to hero status. It is not that he was undeserving, but that Semmelweis was deserving as well. The professional moral standards failed the latter.
What Snow and his institutionally powerful contemporary, Edwin Chadwick, accomplished was the legitimation of using population numbers in medicine. Not just the individual, but the society, could ‘get sick’ (e.g. cholera, smallpox, and typhoid fever). ‘Treatment’ for the social group was, preferably, prevention, but could escalate to compelled quarantine and policed isolation. Such depended on the identification and counting of ‘events’. Forms of such practice had existed previously. Identification and the associated use of isolation and quarantine for health or ritual reasons appeared long before the Common Era (e.g. Lev 13; leper’s bell), but in the nineteenth century counting and surveillance became the standard.
This meant that medical professionals were now agents of the state, providing reportable information on individuals. The most famous case was that of Mary Mallon in New York, USA, at the turn of the twentieth century. Labelled ‘Typhoid Mary’, she spent more than two decades in forced isolation after repeatedly defying restrictions as an asymptomatic carrier (Marineli et al. 2013). The reasoning was strongly utilitarian (Chadwick was an admirer of Bentham), with claims that the rights of the individual sometimes had to be subordinated to the health of the community. Ethical questions arose: how does one determine how much risk must be minimized? Who will do so? What are the limits of confidentiality? Are like treated as like? (Mallon seems to have been severely restricted at least in part due to her social status, as well as her noncompliance.) These are not only past concerns, having been central to moral debates about AIDS and, more recently, COVID-19.
4.4 Mid-modernity, germ theory, and reduction of the human
Both the Crimean War in Europe and the Civil War in the US were marked by horrendous field hospital conditions, with huge numbers of deaths associated with nosocomial infections. Spontaneous generation and miasma theory could not explain the numbers. Germ theory could. At the same time, nursing came into its own, to no small extent in association with improved patient hygiene. These transformed medicine and, subsequently, medical ethics.
At almost the same time as the acceptance of germ theory, Louis Pasteur was conducting research to develop an effective rabies vaccination. In 1885, foreshadowing what is now called a moral ‘Right to Try’, Pasteur injected rabies vaccine into a 9-year-old child bitten by a rabid dog. ‘Right to Try’ refers to the use of experimental treatments when other options are deemed futile. Pasteur’s success helped secure the ‘scientific-ness’ of medicine, after he faced resistance from practitioners (he was not a physician), antivivisectionists, and human rights advocates (the latter for his having previously sought to use prisoners in Brazil as research subjects; Da Mota Gomes 2021). Both Pasteur and his German counterpart, Robert Koch, used utility arguments for research conducted in colonial settings (Eckart 2002). Koch identified or significantly expanded the understanding of tuberculosis, cholera, and anthrax using animal models and controlled experimentation, with final tests on humans.
Counter-movements arose again in the late nineteenth century. Reductionism and utilitarianism seemed to sweep aside, with a sort of Social Darwinian calculation, the medically and economically vulnerable. A lag in applied ethics meant, paradoxically, that while the populace hoped for the success of medical experts, suspicions about their seeming hubris grew alongside suspicion of what would later be called ‘the technological imperative’ (Fuchs 1968; Jonas 1985). This cultural dissonance found expression in the popular press and in literature, for instance in R. L. Stevenson’s 1886 The Strange Case of Dr Jekyll and Mr. Hyde.
Another protest, especially in the US, was the Progressive ‘health movement’ in the last quarter of the nineteenth century. Evangelicalism (which, then, included Methodists, Baptists, and many Presbyterians) was on the edge of fracturing into what are now called Fundamentalism, Holiness, Pentecostalism, and Social Gospel Liberalism. Just prior to that, however, most in the broader movement religiously affirmed ‘pure living’. Drawing not only on their religion, but also on Victorian understandings of the nuclear family, eighteenth-century medical self-help literature, and the early nineteenth-century Romantic scepticism of scientific reduction, spokespersons asserted that true scientific medicine should be less paternalistic, less reductive, and more ‘natural’ (the term indicating both natural biological pattern and moral normativity). Sylvester Graham (of Graham Crackers fame) and J. H. Kellogg (corn flakes) promoted strict nutritional guidelines, abstinence from alcohol, and particular hygienic behaviours (including ‘fresh air’). It might be noted that Dorothea Dix was an early proponent of this movement, with both her and Kellogg advocating its application to the humane treatment of the mentally ill. Kellogg, however, also concluded that sterilization and eugenic controls were necessary for the public health control of brain disorders.
5 Twentieth-century scientism and bureaucracy in the West
5.1 Paternalistic authoritarianism in research and psychiatry
At the beginning of the twentieth century, reductionism and utilitarian ethical reasoning were not unique to medicine. Cultural elites asserted the superiority of modern ‘Western’ industrial and scientific progress and, consequently, the inferiority of all other traditions, including past Western traditions. Professionally, this was evident in the elevation of physicians who displaced clergy as the most prestigious vocation. This cultural authority was further demonstrated as – with the appropriation of laissez-faire market economics (especially the concept of ‘interests’) and Social Darwinist language – both eugenics and the psychotherapeutic ‘reduction’ of the human person to component parts found broad acceptance among the upper middle and upper classes.
The triumph of reductionist science within medicine, for good and ill, was all but completed by changes in education. Both nonsensical theories of the irregulars and partially-reasonable holistic claims made by those advocating ‘biologic living’ were pushed aside (Wilson 2014). As early as the 1860s, in German-speaking regions, the moral demand to improve medical education with established scientific standards had been forcefully made by Theodor Billroth (1829–1894; Billroth 1876; Flexner 1910). The Flexner Report in the USA, some fifty years later, would raise many of the same issues and change medicine throughout the world. Those institutions wanting ‘accreditation’ would need to satisfy standards based on a combination of English and German models of university medical education. The emphasis was on objective knowledge, biological reductionism, and practical observation, all with a blend of scientific authority and personal integrity for moral ordering.
In actual practice, this meant the dominant moral model was paternalistic benevolence, with clinical decision-making placed in the hands of the physician. Culturally shared values, such as truth-telling, were relativized within the practitioner-patient relationship by what the expert doctor thought was best for patient and family. The submissive patient became the ideal patient. This continues in some non-Western settings, but paternalism (and its parallel, maternalism) has fallen into disfavour in the West (Duffy 2011).
Paternalism, while most often fairly benign, would eventually take on the malignant form of eugenics, with leading voices such as Billroth, F. Galton (1822–1911), and the latter’s mentee, K. Pearson. In the United States eugenics found strong support among cultural elites, including physicians, as it legitimated or was reinforced by the nation’s relatively recent history of slavery, its then-current bigotry towards certain immigrant groups, the academic prestige of Social Darwinism, and a near-unassailable belief in progress. Developing in the United States and Britain as ‘true science’, ‘positive eugenics’ (encouraging selective human breeding) eventually led to mandated ‘negative eugenics’ (forced sterilization) in portions of Europe and in some US states.
Religious leaders were divided, though the tendency was for Catholics and evangelicals (in the American sense of that term) to oppose eugenics. Some Christian leaders pushed back publicly, such as G. K. Chesterton in Britain (Chesterton 1922). The culminating event in the US was the Carrie Buck case before the US Supreme Court, with Justice Oliver Wendell Holmes, Jr. upholding forced sterilization by famously stating that, ‘[t]hree generations of imbeciles are enough’ (Buck v. Bell, 274 US 200 [1927]). Holmes had earlier asserted the need to legally manage not just property but also human biology (Holmes 2010: 303–307). Only Justice P. Butler, a conservative Roman Catholic, dissented. This did not end the debate, with more US states implementing sterilization laws, and the counter-movement growing ever louder (such as with the Crane Wilbur and Wallace Thurman film, Tomorrow’s Children). Later, in Europe, the eugenic movement was woven together with the remnants of nineteenth-century European nationalism to become a centrepiece of Nazi biomedical ethics, with its extreme version of negative eugenics, at which time eugenics fell out of favour in the US and Britain. Today it remains generally disfavoured, except in reference to disabling genetic conditions by some abortion rights supporters and more strongly amongst some transhumanism advocates.
Nazi eugenics and human research were brought to light through the Nuremberg trials. These programs had begun with experimentation in sterilization and extermination techniques on persons with disabling conditions, such as at Hadamar (Aktion T4). They continued and expanded at the concentration camps. For instance, prisoners were used as ‘human guinea pigs’ (a popular term in mid-twentieth-century bioethics) for the Dachau malaria studies of Claus Schilling, a scholar from the Robert Koch Institute (Müller-Hill 1988; 2008). Schilling, Karl Brandt, and others involved in Nazi research were executed, while some, like Aribert Heim and Josef Mengele (who also held a PhD in physical anthropology), escaped. Partially in response to Nuremberg, as well as to testimony about crimes committed by the Japanese at Unit 731, the Declaration of Helsinki was produced by the World Medical Association in 1964, with parallel affirmations produced by various national associations. The standards were meant to guide researchers and to provide society a means by which medical researchers and practitioners might be judged.
Increasingly, the populations of Western countries came to doubt experts who asked society to ‘trust us’, for the Nazis were not alone. Both democratic republics and Marxist states had conducted research that reduced humans to mere objects or had cynically used medicine (especially psychiatry) as a mechanism of the state. Enlisted soldiers, developmentally-disabled children, and racial-ethnic minorities were disproportionately selected as research subjects. In spite of his own dubious LSD research, the prominent anaesthesiologist and pain specialist Henry K. Beecher, along with Maurice Pappworth and Hugh Clegg of the British Medical Journal, pushed for more rigorous informed consent (Beecher 1966; Pappworth 1967).
The Tuskegee research came to light in the 1970s. Initiated in 1932, the study was designed to allow observation of the disease process of syphilis and to determine transmission mechanisms so as to allow control. Even supposing this was initially done with pure intentions and with a reasonable research protocol (seemingly not the case), the entire project should have been altered upon the mass production of penicillin, beginning in the mid-forties. The first article on Tuskegee came from Jean Heller in July 1972, based on the whistleblowing of US Public Health Service employee Peter Buxtun, whose moral concerns had been internally dismissed as irrelevant (US Centers for Disease Control and Prevention 2022). Not coincidentally, Buxtun, as a child, had fled the Nazi persecution of Jews in Czechoslovakia.
Some other research projects in the West that inappropriately used human subjects were: the Saskatchewan Tuberculosis Vaccine Trial on Indigenous children (1933); the Guatemala Syphilis Research, in which mentally-ill institutionalized subjects were deliberately infected with the disease (1946–1948); the Willowbrook Hepatitis Study, in which institutionalized developmentally-disabled children were deliberately infected with the virus (parents gave permission on the grounds of a risk analysis asserting that the children would likely contract the infection anyway; 1956–1970); the Kobe (Japan) Medical School Infant Sugar Study (1958); the Milgram Authority Study (1963); and Project MKUltra on mind control and psychedelics conducted by the CIA and US Army (1967–1973). Similar utility claims, coupled with the economic assertion that only ‘intellectual property’ was a legitimate property claim in science, including for commercialization, led to the decades-long disregard of Henrietta Lacks as the contributor of the immortalized HeLa cell line. Extensive debates over informed consent in a physician-patient relationship and financial compensation for body parts (for research, transplant, etc.) followed.
A notorious example of Marxist state medical abuse occurred in the Soviet Union. Psychiatry was used, beginning under Stalin and accelerating under Brezhnev, to isolate and punish opponents of the regime. The logic was that any resistance to the scientific ‘fact’ of Marxism-Leninism was a symptom of insanity, even if not an indication of moral corruption. Psychiatry, among medical practices, was particularly susceptible to such abuse. According to an argument made prominent by A. Snezhnevsky of the Serbsky Institute, only the trained practitioner could recognize the subtle delusions, etc., of ‘sluggish schizophrenia’, a disorder often diagnosed amongst opponents of the State. The functional religiosity of Marxism-Leninism merged statist virtue claims with utility assertions about the uselessness of the patients. Assuming the practitioners were not cynically Machiavellian or moral cowards, they were blinded by confirmation bias or were themselves delusional as they asserted their own virtuosity.
The psychiatric system, even if less overtly political, was also being critiqued in the West, especially the US. Social location had often determined whether the mental outliers were defined as self-destructively eccentric, violently mad, demonically possessed, or wildly sinful. That, in turn, determined the approach of caregivers. While the reform work of Dorothea Dix and others in the nineteenth century had removed the mentally ill from prisons and minimized the squalor of mental health hospitals, medical treatments were still inadequate. The best care available tended to be in rural retreats with bucolic pathways and hydrotherapy pools. Large state-funded institutions provided shelter for the impoverished mentally ill and those with severe cases, but offered little ongoing care. Scientific medicine, however, seemingly offered a way out. By the 1920s, the wealthy could obtain care based on the new science of psychoanalysis. The less well-off and severe cases could receive insulin coma therapy, Metrazol, electroconvulsive therapy, and lobotomy to control symptoms (all generally followed by long-term institutionalization).
By the mid-sixties and early seventies, scepticism about these methods and disfavour for medical paternalism, as well as doubts about the epistemological assumptions of interventionist psychiatry, led to calls for deinstitutionalization. Previously-accepted treatment protocols were increasingly deemed immoral by figures such as Erving Goffman (Asylums), R. D. Laing, Thomas Szasz, Michel Foucault, and D. L. Rosenhan (On Being Sane in Insane Places), who all reached some common conclusions, albeit using very different reasoning (Rosenhan 1973). In association with these scholarly critiques, fictional portrayals such as One Flew Over the Cuckoo’s Nest by Ken Kesey resulted in high numbers of discharges by the late 1970s. This, in turn, led to other significant moral concerns, including the widespread homelessness of the mentally ill. Current proposed biomedical solutions for otherwise untreatable mental illness include drugs for behaviour modification and the dulling or pharmaceutical ‘erasure’ of memories to control emotional pain.
5.2 Utility, distribution, and quality of life
By the 1970s, and with no little dissonance, the populace in the West had greater and greater expectations for high-technology ‘medical miracles’, even while growing ever more suspicious of the ‘cruelty’ of researchers and the callousness of medical providers. There was no doubt that the scientific method had led to improved sanitation, antibiotics, expanded vaccinations, and effective emergency care, all of which – along with the ‘Green Revolution’ (the use of chemical fertilizers, pesticides, and herbicides to allow high-density grain production) – had led to the rapid extension of life expectancy (Tulchinsky 2018). This widespread cultural validation of the scientific endeavour seemingly secured an elevated status for physicians, even while suspicions grew. These same providers seemed oblivious to, or unwilling to acknowledge, widening concerns when ongoing biomedical successes did not eliminate the old moral problems associated with theodicy and death, but rather gave them new clinical expressions.
Despite this, market demand pushed rapid expansion, with services being funded through employer-related health insurance in the US and nationally-funded healthcare in Britain and most of the rest of Europe. Publicly-funded health insurance in Europe had been initiated in 1883 in Germany under Bismarck as an expression of ‘practical Christianity’ and the natural duty of the State (Bismark 1884). A similar model, the British National Health Service, developed soon after the Second World War on the basis of a positive rights (entitlement) argument. In the US in the 1960s, partial governmental systems were established for the elderly (Medicare; funded and coordinated by the federal government), persons with disabilities (also Medicare), and the indigent (Medicaid; funded and coordinated jointly by the federal and respective state governments). For most in the US, however, payment was managed privately by individuals or employers through the ‘Blues’ (Blue Cross and Blue Shield) and other not-for-profit and, later, for-profit insurers. Actual delivery of services remained under the control of hospitals and practitioners.
With both state socialist models and private insurer models, the same moral questions about access and utilization inefficiencies arose. Additionally, these models meant patients and practitioners would meet as ‘moral acquaintances’, or even passing strangers (Wildes 2000; Thobaben 2001). The days of a family practitioner caring for one through half a lifetime were over by the 1960s. The transition was marked by American television programs that portrayed both the gentle paternalism of an ‘older’ physician (Marcus Welby, MD) and the high-pressure, university-based technological expertise of daring ‘young’ practitioners using the most recent and most expensive technologies (Dr. Kildare; Ben Casey).
Through the second half of the twentieth century, science kept its promise to make genuinely astounding discoveries. The availability of antibiotics in the 1940s, followed by the polio vaccine in the 1950s, led to speculation about the possible end of infectious disease. With the transplantation of solid organs, specifically the heart, in the 1960s, doubts about research ethics were laid aside by some scientist-physicians who increasingly saw themselves as noble warriors, with ‘all being fair in love and war’. If there was to be a war on cancer or advancement in surgical techniques, then the patient had to be a passive battlefield. The physician as benevolent father figure all but completely gave way to the doctor as scientist, or even as warrior. Soon enough, however, patients wanted not only cures but also a say. The various sociocultural spheres of economics, government, religion, and family began to conflict in the hospital room and the courthouse.
An explosion of events occurred from the late 1960s into the early seventies. Solid organs were being successfully transplanted, notably the heart in 1967. Correspondingly, changes were made in how death was defined, in part to make organs more readily available and in part to curtail what was thought to be near-endless suffering in intensive care units (ICUs). For instance, the Harvard Definition of Brain Death in 1968 shifted the definition from cardiorespiratory failure towards ‘no discernible central nervous system activity’ (1968: 337; interestingly, that committee included H. Beecher). The success of antibiotics in ICUs brought confusion over the distinction between ‘extending dying’ and ‘extending life’.
Debates over the boundary between life and death, and over the patient’s sense of lost self-authority, were clinically expressed in fierce arguments in mid-twentieth-century ICUs. Sometimes the language of negative rights (rights of liberty or non-interference, especially under the guise of ‘self-dignity’ and the right to refuse treatment) was conflated with that of positive rights (entitlement to various medical services), or mixed up with utility claims about ‘efficiency’ and costliness (for the family or for the broader community), as some tried to justify active euthanasia and others argued that the concept of medical futility was being improperly used to imply that a person’s life was futile.
Clinical successes also led to shortages as the concept of ‘need’ expanded and demand by patients and their advocates grew. Indeed, the first significant public bioethical debates centred on the distribution of limited resources. The first ‘professional’ bioethicists began to argue over microallocation, specifically of solid organs, and about the cost-effectiveness of extended, and supposedly futile, treatment in ICUs. The larger moral question was the macroallocation of society’s resources for healthcare and who would be deemed an untreatable outlier for the sake of population-wide utility maximization.
‘Untreatable’ sometimes meant that available therapeutic treatments were ‘futile’ and sometimes that they were too costly. The problem of outliers for both micro- and macroallocation was later statistically addressed using the ‘Quality Adjusted Life Year’ (QALY), an effort to quantify the qualitative experience of living on the basis of one year of life in ‘perfect’ health (MacKillop and Sheard 1982). Inevitably, this resulted in the marginalization of persons with significant disabling conditions and the old (healthy or not). Though strongly challenged in the US during the Bush I administration, both informal and formal use of this model continued in the US (Stetler 2023; MacKillop and Sheard 1982). In Britain, those categorically defined as having lower life-quality potential due to age and/or disabling conditions were and are sometimes cut off from possibly effective treatments (excepting palliative care). Decisions about infants with severe but survivable disabilities meant that on occasion children were taken from their families using a subsidiarity argument and the assertion that the family was not seeking the ‘child’s best interests’, either due to a lack of sufficient knowledge or because their emotional attachment had led to their moral (and, therefore, legal) incompetence.
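A minimal sketch of the arithmetic behind the metric may clarify how such marginalization follows; the quality weight, years, and cost below are hypothetical illustrations, not figures drawn from MacKillop and Sheard (1982):

```latex
\text{QALYs gained} = \text{additional life-years} \times \text{quality weight}, \qquad 0 \le \text{weight} \le 1
% Hypothetical worked example: a treatment expected to add 10 years at a
% quality weight of 0.6 yields 10 x 0.6 = 6 QALYs; at a cost of 60,000
% (in any currency), that is 60,000 / 6 = 10,000 per QALY gained.
% Because the weight assigned to life with a significant disabling
% condition is, by construction, less than 1, the same treatment 'buys'
% fewer QALYs for such patients - the statistical root of the
% marginalization described above.
```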
Not surprisingly, during this period both socialist and market systems moved from clinician virtuosity to a mixture of deontological rights language and utilitarianism. Justice would not be ‘due’ on the basis of covenantal bond, but by supposedly fair procedures in the distribution of resources (e.g. first come/first served mixed with a risk assessment of likely outcomes), and/or by payment in the free market (lost opportunity decisions made by whoever paid). High residential mobility, the deterioration of mediating institutions (Berger and Neuhaus 1977), insurance being funded by employers (in the US) or nation-states (e.g. UK NHS), and governmental oversight agencies made such a moral shift all but inevitable.
By the end of the 1980s, the adoption of utilitarian consequentialist moral reasoning was formalized in expectations for ‘outcome improvement’, a concept based on corporate managerial theories of continuous quality improvement (CQI). Paradoxically, ‘health’ is an ill-defined concept in healthcare, and whatever it may be is extremely difficult to measure; so in most ‘managed care’ uses of CQI in healthcare something more akin to ‘management by objectives’ is actually used, though disguised as Deming’s continuous improvement (Thobaben 1997).
Minimally, increased outcome measurement meant payors had to come up with ways to differentiate demand and need. ‘Demands beyond needs’ (e.g. private rooms, hotel amenities, cosmetic surgery) could be met by markets, but ‘needs’ increasingly were to be funded, directly or indirectly, by the nation-state (through taxation or mandate). This inserted the state into healthcare decision-making. Physicians had already added to their portfolios public health surveillance and reporting responsibilities. They now took up the management obligation of controlling costs for the greater good, however that might be defined. Moral uncertainty about whose interests were being protected arose.
6 Medical ethics in late and postmodernity: hyper-individualism and hyper-centralization
6.1 Late modernity: lost commonality, professional moralists, and the boundary of life
Into this cultural confusion came a new academic discipline, ‘medical ethics’. The term, by then, was far more expansive than merely defining how physicians might properly interact with patients and each other. While initially having strong Christian foundations, bioethics quickly secularized. North American and European cultures were abandoning common mediating institutions, including the church, and so lost generally-shared moral mechanisms and notions of the ‘good life’. Falling back on the only values that seemed to still be broadly operative, the new field emphasized utility and autonomy.
Medical ethics as an academic discipline developed after 1954, when then-Episcopal priest (later self-designated humanist atheist) Joseph Fletcher published Morals and Medicine, offering an early version of what became known as ‘situation ethics’ (Fletcher 1954). Narrowly understood, this is moral decision-making using a utility calculus with overall ‘love’ (not clearly defined) serving as the value to be maximized. Fletcher initially asserted that this was a way in which to fulfil the Christian command to ‘love the neighbour’, but eventually made his argument on the basis of personalism. Within medicine and academia, some heartily endorsed the project as ‘progressive’ and fitting the times. Others rejected his work as contrary to the Hippocratic Oath, egocentric, and inconsistent. His most important critic was Paul Ramsey, a Christian theological ethicist.
In the popular press, medical ethics was often defined in terms of ‘lifeboat ethics’ and deciding who would take priority in the use of scarce resources. Coupled with this, the increasingly bureaucratic management resulted in the near mutual anonymity of patient and physician (Foucault 1963; Thobaben 2009). Finer and finer casuistic distinctions were endlessly debated. Clinically, it became apparent that a more coherent moral approach was required, one that considered the patients and physician, but also the social and cultural impact of the health sciences.
The Hastings Center, the first such institution dedicated to bioethics, was founded in 1969 by psychoanalyst Willard Gaylin and philosopher Daniel Callahan, a former Catholic (later an advocate for the legalization of abortion and restricted resource use for the elderly). Just as importantly, Joseph Fletcher’s longtime intellectual opponent, Paul Ramsey, published his most significant work, The Patient as Person: Explorations in Medical Ethics (Ramsey 1970). Ramsey made deontological arguments within a ‘just coercion’ framework, similar to Christian Realism, but here applied to healthcare and emphasizing the ‘patient as person’ warranting respect (specifically as a rights-holder). For Ramsey, the autonomous individual was not an anonymous individual lost in the crowd, but a person entitled to choice within a covenantal relationship between patient and practitioner (a theme picked up later by William May 1983).
A few years later, in 1972, Catholics opened the Pope John XXIII Medical-Moral Research and Education Center (now the National Catholic Bioethics Center), with similar institutions appearing at other Catholic institutions, often in association with university-related medical centres. Amongst themselves, Catholics had been addressing modern medical ethics within moral theology for decades, but Catholic external influence was limited until the early seventies. A prominent voice was Richard McCormick, who tended toward liberalism (in the American sense) but supported Catholic positions on abortion and euthanasia. A broadly Christian programme was established as the Park Ridge Center. Evangelical and other Protestant organizations also arose, perhaps most significantly the Center for Bioethics & Human Dignity. Though the characterization is simplistic, one can see Fletcher using utilitarianism, Ramsey using deontology, and the Catholics using natural law virtue ethics. While each was to some extent helpful, none provided a mechanism for common discourse, even though all set out to do so.
The ineffectiveness of bioethics was obvious in the abortion debates. Abortion had been effectively legal in the UK (other than Northern Ireland) since 1967 on the basis of privacy arguments (efforts to decriminalize it completely continue). Through the late sixties, pregnancy termination in the US was allowed only in some states, usually needing to be justified on the basis of the ‘health of the mother’; the typical justifying diagnosis referred to mental health. Eventually advocates shifted toward the assertion that personal liberty over reproduction was the path to empowerment for women, a previously politically and economically marginalized class.
However, autonomy (or liberty) and privacy are never absolute. ‘Autonomy’ requires both a ‘right to’ and ‘capacity for’ effective choice. Privacy depends on the non-intrusion by uninvited others. Adult women generally have the capacity for choice (though some prolife advocates claim this is not always so, given pressure from various family members, etc.) and healthcare typically is ‘between the patient and the physician’. Therefore, the debate came to centre on the limitation on a right to choice created by the interests of others.
This competing interests question took shape around the definition of ‘person’, usually with fights over terminology, specifically whether the entity was a preborn or a foetus or a product of conception. The choice of term could determine the outcome of the argument. Abortion was effectively legalized in the US in 1973 when the US Supreme Court ruled in Roe v. Wade (1973) using a combination of privacy and autonomy arguments. Though written evasively, the Court determined that the ‘entity’ was a non-person, an essential move since personhood under the US ‘social contract’ (literally, the Constitution) appears to assert a right to life prior to liberty/autonomy.
Some bioethicists argued that non-dichotomous language was needed. While rarely making the connection to the ‘three-fifths of a person’ period under US slavery, some suggested that the entity was a distinct human but not a full person and, therefore, that abortion could be regrettably allowed. (The three-fifths standard arose when slave states, at the founding of the US, wanted their non-voting, enslaved population to count toward the totals determining the allotment of representatives in the legislature; later it was popularly asserted that reduced representation for a population also indicated a correspondingly diminished human worth.) For some in bioethics, the nominal categorization of ‘person/non-person’ gave way to the ordinal categorization of ‘person/a-teleologic-being/non-person’. The potentially-aborted were generally assigned to that middle category, along with those with severely diminished capacities due to disabling conditions (so-called permanent ‘incompetents’).
Social movements for the legalization of abortion and then for active euthanasia won political victories in a number of jurisdictions, often on the basis of liberty/autonomy and the inability to find consensus on the concept of ‘person’. Key court cases after Roe v. Wade included Webster v. Reproductive Health Services 492 US 490 (1989), upholding the denial of state funding under certain circumstances, and Planned Parenthood of Southeastern Pennsylvania v. Casey 505 US 833 (1992), upholding parental consent for minors. Most importantly, Dobbs v. Jackson Women’s Health Organization (2022) effectively overturned Roe v. Wade and Planned Parenthood v. Casey by returning legal authority to the states. Since that time, some states have expanded abortion access (e.g. Ohio) and others have increased restrictions (e.g. Texas). At a federal level, the status of the entity has not been determined and does not seem resolvable in the federal legislative branch.
Similarly, there is significant variation among jurisdictions on euthanasia. Nation-states allowing some form of active euthanasia (either practitioner-assisted or self-administered or both) include: Australia, Belgium, Canada, Colombia, Luxembourg, the Netherlands, New Zealand, Portugal, Spain, and Switzerland, as well as various jurisdictions within the United States (although it is not a right in the US, neither can it be prevented if allowed by a state, according to two Supreme Court rulings: Washington v. Glucksberg, 521 US 702 [1997]; Gonzales v. Oregon, 546 US 243 [2006]). Interestingly, several of these countries are secularizing Catholic nations. A similar de-sacralization of human care, including healthcare, is occurring with abortion liberalization in (once strongly Catholic) Mexico (2023) and Ireland (2018).
Ironically, these debates echoed, yet again, those over hylomorphism from a thousand years earlier. An unexpected consequence of the abortion and euthanasia debates in the US has been the increased ecumenical cooperation of evangelicals (who had roots in eighteenth- and nineteenth-century Revivalism and Progressivism) and traditional Catholics (who, in the US and Britain, had previously tended to be associated with the urban labour left wing). They share not only roughly-similar epistemological assumptions about scriptural authority, but also the claim that the whole person, as physical body and soul, is valued by God.
Catholics and evangelical Protestants have tended to oppose what they considered to be a loss of state protection for the vulnerable, both those with disabling conditions and the preborn/foetal. Oldline Protestants and the secularized tended to support both abortion rights and active euthanasia. Disability-rights advocates have also protested active euthanasia on the grounds that inordinate pressure or resource deprivation would be used to harm ‘outliers’, many of whom, unlike the preborn/foetus, can assert their human status.
Many more nation-states now allow passive euthanasia (withdrawal of treatment when further procedures are deemed futile), including all US states. In the US this position was reached through a series of court cases: the Karen Ann Quinlan case in New Jersey (1976) allowed removal of a ventilator; the Nancy Cruzan case, coming from Missouri and decided in the US Supreme Court (1990), allowed withdrawal of assisted nutrition and hydration with ‘clear and convincing evidence’ of the patient’s wishes; the Terri Schiavo case in Florida was rejected several times by the US Supreme Court, thus allowing family member decision authority. Usually, when paediatric euthanasia is allowed in the US, it is passive/withdrawal, though active is permitted in some European nation-states. Controversially, some nation-states have allowed assisted suicide for psychiatric reasons and/or underaged patients.
Differentiating characteristics of end-of-life choices can clarify the decision process for those in clinical settings, even if not eliminating all the moral grey areas. ‘Active euthanasia’ is either practitioner-delivered or physician-assisted. The term ‘passive euthanasia’ is best used in reference to the withdrawal of treatment that directly results in death, usually on the basis of treatment ‘futility’ and demonstrably poor survival prognosis. The moral argument for passive euthanasia tends to be one of ‘lessening pain’ using a consequentialist calculation, or a double-effect claim that death is not intended even if very likely. Active euthanasia tends to be asserted on the basis of autonomy. The ‘autonomy’ of the patient, both as a right to decide and as the competence to decide, can be differentiated as voluntary (wanted), involuntary (opposed), and non-voluntary (unknown or unknowable). Advance directives are used to establish patient intent and/or allow the substituted judgment of a surrogate decision-maker. If a decision is moved to the courts, it is usually made using ‘best interests’ criteria.
Rancorous debates about abortion and euthanasia, as well as genetic modification, cloning, etc. continued through the seventies and into the eighties. Perhaps this was inevitable, given the rapid expansion of medical technology, the continuing decay of pre-industrial family structures, and a general suspicion of authority figures (especially among young adults during and after the Vietnam conflict, and disappointments following the early civil rights movement).
In response, governments formed various committees in the hope of finding consensus on medical ethics. They made important contributions, but their impact was diminished by the tendency of political leaders to select a membership that conformed to and then confirmed the political positions of the party in office. In the US, the most important product was the Belmont Report (1979), which provided not so much answers as an ethical method. The Report asserted the need to balance three values: respect for persons, justice, and beneficence, with respect for persons centred on autonomy in decision-making. While focused on research, it rapidly came to be applied more broadly.
In Britain, the response was the Warnock Committee of Inquiry, which reported in 1984, ultimately leading to the Human Fertilisation and Embryology Act 1990 requiring licensure for clinics performing in vitro fertilization procedures and banning research on embryos beyond fourteen days post-conception (the immediate pre-embryonic blastocyst stage). Later, Mary Warnock’s prestige in bioethics lent gravitas to her work on animal experimentation and her personal support for active euthanasia as a means to relieve the very sick from ‘being a burden’, a moral claim reflecting strongly consequentialist thinking.
The dominant version of what was soon called principlism was that presented by T. Beauchamp and J. Childress (Beauchamp and Childress 1979). Nicknamed the ‘Georgetown Mantra’ because it was vigorously promoted by leaders from the Center for Bioethics of the Kennedy Institute, it consisted of (1) autonomy (a deontological category of self-authority), (2) justice (understood in both deontological and consequentialist senses), (3) nonmaleficence, and (4) beneficence (the latter two on a continuum of actions defined by utility and virtuous motive; Pellegrino and Thomasma 1987). Principlism is an expression of its age and place, of late-modern bureaucratic and reductionistic medicine, but it can be a helpful starting point for ethical discussion, even though it rarely – if ever – leads to moral consensus.
Only autonomy (as the right to and competent use of liberty) has survived as a culturally-shared value for bioethics. Justice was simply too dependent on other political and religious considerations to be helpful (though a version of ‘fairness in procedure’ can facilitate discussion). Nonmaleficence works, but that is simply the traditional ‘first, do no harm’. Beneficence looks a lot like paternalism.
6.2 Late modernity: professionalization of medical ethics
The twentieth century concluded with extensive efforts to professionalize and license clinical ethicists. Even though it is erroneous to assume that medical ethics grew out of crises in late modern healthcare centres, it is true that the effort to create a body of literature and a distinct academic and professional category of ‘moral expert’ for healthcare did.
Across the globe, academic bioethics programs proliferated, promising to provide legitimate credentialled experts. Advocates argued that such credentials might provide quality and satisfy accreditors (as well as facilitate insurance reimbursement for services). Those opposed claimed that medical ethics should remain an ancillary discipline for persons with expertise in medicine, nursing, philosophy, and theological ethics. Further, the latter asserted, there is no shared corpus, so very little can be coherently taught as ‘standard’ except principlism, which does not solve much of anything.
However, recognition of this lack of foundational commonality does not eliminate the need for cooperation in decision-making within clinical settings. Arguments about abstract social questions of resource macroallocation, research limits, and defining the person become very personal at the bedside. A number of institutions established ‘ethics committees’ to provide guidance. These bodies sometimes advised on clinical decisions but more often developed organizational policies and provided instruction. When various accreditation bodies began to require ‘ethics mechanisms’, the demand for ‘trained’ bioethicists grew. The solution in a bureaucratic state is to add more bureaucracy.
Late modernity (or early postmodernity, if one prefers) has been marked by direct access to information, egalitarianism in moral authority, horizontal or levelled social relationships, and often market-driven decision-making (not just in the economic sense, but rather as choice between alternatives that have opportunity costs; Taylor 2007). H. Tristram Engelhardt Jr, a libertarian and (eventually) Eastern Orthodox bioethicist, as well as founding editor of Christian Bioethics journal, described this contemporary situation in which there is no common bioethics:
Moral strangers are persons with whom one has no common way to resolve moral and bioethical disputes either through sound rational argument and/or by an appeal to a commonly recognized authority […] The dominant secular culture insists on containing and re-interpreting the meaning of all action within the horizon of the finite and the immanent. (Engelhardt 2017: 262–263)
Indeed, with the collapse of Christendom (even while Christianity continues to grow globally) there is little shared moral understanding in the late modern West beyond a very thin notion of social contract. Medical practice depends on finding human life of value, but society is increasingly anomic, made up of syncretistic hyper-subjective individuals.
Stanley Hauerwas was correct in saying:
Kings and princes once surrounded themselves with priests for legitimation. Politicians today surround themselves with social scientists to give those they rule the impression that they really know what is going on and can plan accordingly. Physicians, in an increasingly secular society, surround themselves with medical ethicists. (Hauerwas 1996: 64)
Neither traditional medical paternalism nor late-twentieth-century principlism, let alone the mid-twentieth-century neo-utilitarianism of ‘situation ethics’, provided ready answers when improved healthcare led to greater moral uncertainty over financial costs for families, the emotional costliness of extended dying processes, experiences of significant on-going pain, or the existential sorrow and theodicy conundrums experienced as one sees the ‘self’ of a loved one seemingly slipping away even as the physical body goes on. Medieval hylomorphism debates had never really gone away.
6.3 Twenty-first century and end of modernity: claims of scientific impotence and public health authoritarianism
Extraordinary technical advances evidenced the effectiveness of the scientific method and organizational structure (private v. public funding and profiteering). Without a doubt, recent basic research, pharmaceutical development, public health interventions, and treatment improvements have been extraordinary. Even so, the scientific community has not been of one accord. Accusations of impotence, hubris, and even authoritarianism arose in the face of two epidemics: AIDS and COVID-19.
The epidemics demonstrated – all nineteenth-century Progressivism and Social Darwinist optimism and early-twenty-first-century transhuman wishfulness aside – that while scientific medicine might improve the human condition, it cannot usher in the eschaton. AIDS struck in the 1980s, serving as a massive, albeit temporary, check on the expansion of sexual openness initiated after the development of antibiotics and the birth control pill. Some proclaimed that the disease evidenced impending divine judgement, others called it a sad consequence of poor decision-making, while still others asserted that it was an indicator of the marginalization of a sexual minority (this assertion was made in the West and Far East; in sub-Saharan Africa HIV centred in the heterosexual population).
Medical ethical questions arose over the isolation or even quarantine of high-risk subsets (as in Cuba) and over obligations to reveal HIV status. For the latter, the concern was not only for potential partners, but also for practitioners. The ‘duty to treat’ was affirmed, distantly echoing an expectation for practitioners during the Black Death (Pear 1987; Daniels 1991). However, a duty has come to be assumed (informal or legal) for the patient to reveal HIV status discreetly to potential partners and, under confidentiality, to practitioners.
A question also arose about the costs of treatment. High-risk sexual behaviour can create significant financial burdens for healthcare and society, and an argument was made that such behaviour warranted high-risk insurance, as has been argued for smoking cigarettes, using intoxicants, riding motorcycles, climbing mountains, etc. Generally speaking, however, sexual choice in the West has been deemed a protected autonomous decision.
Eventually, AIDS was controlled through epidemiological understanding and the consequent expanded use of universal precautions in healthcare, testing of the blood supply, increased education on the use of condoms and the dangers of casual sexual relations, and, very significantly, the development of antivirals for HIV suppression. The availability of AZT and similar pharmaceuticals (at least in the West) diminished the intensity of those debates.
The same arguments about confidentiality and autonomy returned with COVID-19. The previous moral power of autonomy, both in bioethics and as a central claim within traditional liberty-based societies, gave way to ‘shutdown’ when, just before the turn of 2020, COVID began to pass rapidly through ‘virgin-soil’ populations. Given the presumed disease transmission mechanisms, there was far less allowance for freedom of choice in personal behaviours. A mixture of strong medical paternalism and public health utilitarianism led to the abrogation, to a greater or lesser extent, of individual rights in the vaccine-naïve populations. Fear was aroused in the mass media and seemingly not discouraged to any significant degree by governmental officials. In some countries, members of the media and politicians supportive of strong intervention used moral pollution language against those who hesitated to comply. They also ostracized the non-compliant, eventually silenced contrary medical opinions (both those of ‘quacks’ and those of people with substantial epidemiological and infectious disease knowledge), and, in some nations, even ‘imprisoned’ the resistant. Some Western societies used strong policing power (e.g. Canada), while others did so notably less (e.g. Sweden). In some cases, corporations were asked to participate in limitations on liberty.
Generally, the assertion of Western governments (Sweden aside; Ludvigsson 2023) was that only the highly trained could assess risk, commonly defined as ‘Risk = Probability of Frequency x Hazard’ (or, in the case of this contagious disease, ‘Risk = Transmissibility x Virulence’). Similar arguments, with less mathematical sophistication, had been used at least as early as the Justinian Plague (sixth century) and the Black Death (fourteenth century), and more recently with Mary Mallon (‘Typhoid Mary’) in the early twentieth century. With COVID-19, the initial uncertainty legitimated precaution, but the long-term continuation of controls became less readily justifiable, especially as a lack of official transparency became evident.
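A minimal sketch of that calculation, using entirely hypothetical numbers (reading ‘transmissibility’ as a basic reproduction number and ‘virulence’ as an infection-fatality proportion is an illustrative assumption here, not drawn from any cited guidance):

```latex
\text{Risk} = \text{Probability of Frequency} \times \text{Hazard}, \qquad \text{Risk} = \text{Transmissibility} \times \text{Virulence}
% Hypothetical worked example: proxying transmissibility by a basic
% reproduction number of 4 and virulence by an infection-fatality
% proportion of 0.005 gives a unit-less risk index of 4 x 0.005 = 0.02.
% The governmental claim noted above was that estimating either factor,
% and hence the product, requires specialist training.
```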
Masking with proper protocols may well have been effective in limiting spread to some degree, but given the infectiousness of COVID and the general ineptitude in, or lack of understanding about, proper use, population-wide masking was largely ineffective (Boulos et al. 2023; Jefferson et al. 2023). The various vaccines had notable benefit, yet seemingly did not prove to be as effective in preventing spread as initially claimed (Franco-Paredes 2022; Tan et al. 2023).
Medical ethics is no longer limited to the bedside; bioethics does not occur in cultural isolation. Unsurprisingly, the response to COVID-19 was fitted into various political narratives. Most notably, doubts about the governmental responses dovetailed with broader populist questions about the bureaucratic state. Suspicions about the commercial motives of pharmaceutical companies, claims of a lack of transparency on ‘gain of function’ research, uncertainty about the competence of nation-state public health experts, and a general distaste for what seemed the self-aggrandizing self-presentation of public health authorities all fit the populist narrative. The internet made communication much easier, and any effort by multinational social media corporations to limit it simply reinforced the claims, as strong doubts were raised about the competence and trustworthiness of what was labelled ‘the medical establishment’ (Grady and Fauci 2016). In response, the protesters were labelled demagogues whose desire for power would harm the public’s health. This bioethical conflict remains a factor in current debates about US governmental and EU bureaucratic authority.
Lower expectations would perhaps have made the use of restrictions seem far more of a ‘victory’. Amongst Christians, uncertainty about the on-going restrictions was exacerbated by two factors: what appeared to some as selective enforcement, and the use of foetal material in vaccines. These became political issues, especially in the US and Canada, with the claim that governmental force requires proper moral justification. Arguably, a better moral approach, one that might have addressed the concerns of both populace and experts, would have been the transparent application of an analytical tool from the Christian moral tradition: ‘Just Coercion Theory’. After all, public health restrictions are, for better or worse, coercive (a common listing of the criteria for coercing with policing power includes: Justifiable Cause, Right Intention or Motive, Legitimate Authority, Proportionality of Response, Reasonable Chance of Success, Reasonably Last Resort, and Discrimination in Protecting Bystanders).
Policing in the name of public health has been employed against the use of vaping devices, the consumption of soft drinks, and gun ownership, as well as to halt the eviction of squatters from rental properties, and both for and against marijuana legalization. Arguably, state authority should be used to protect the population, though in a society with fewer shared values the use of a greatly-expanded definition of ‘health’ is morally problematic as a claim to legitimate authority. Unfortunately, during the initial COVID-19 outbreak, medical ethicists provided ‘expert’ talking points for the media, but seemingly contributed little to the broader social discourse about closed schools, businesses, churches, and borders.
At least one COVID vaccine used live foetal cells, as do some of the vaccines for rabies, varicella (including one for shingles), rubella, and hepatitis A (Philadelphia 2021). During the COVID outbreak, some evangelicals and a few Catholics refused vaccines for conscience’s sake. Other Protestant responses have varied widely. The Roman Catholic Church allows such vaccines if no alternative is available (Secretariat of Pro-Life Activities 2021).
6.4 Twenty-first-century postmodernity: contagious anomy and redefining embodiment
By the second decade of the twenty-first century, expanded definitions of ‘health’ included not only meeting biological potential, but also exceeding the natural standard. Some redefine ‘health’ such that it requires changes to the essential individual human or even the species, and they want medicine to facilitate those changes.
Altering genetics through controlling conception is as ancient as arranged marriages and as recent as twentieth-century eugenics. In 1996, however, the speed and manageability of such alterations changed dramatically. In Scotland, Dolly the sheep was born from a cloned and implanted embryo produced by somatic cell nuclear transfer, the first mammal so bred. Within a few years, the technologies were expanded and applied to humans as assisted reproductive technologies (ART). A number of these methods are generally accepted in the US and Europe.
Christian communities still debate whether ART is morally permissible. Some use a non-interventionist argument, based either on scriptural references about ‘trust’ or on natural law, that persons should leave the outcome up to God. This, however, would seem to disregard the fact that all medicine has an interventionist component. Some do allow ART, but only if there are no intentionally-lost fertilized eggs (using a genetic definition of person and a deontological right to life argument). Others deem ART legitimate as long as the rate of lost fertilized eggs does not exceed typical natural loss (another version of a natural law argument).
What was and is morally unacceptable for most religious persons is the instrumental use of embryos or foetuses, be they unimplanted ART zygotes or aborted foetuses (see above for exceptions). Western Christians seem generally willing to tolerate the use of adult-derived stem cells (multipotent), conditioned by the expectation that no effort would be made to dedifferentiate the cells back to pluripotency. Laws on the clinical use of stem cells and on research vary across Europe and within the US (though federally unfunded, several states provide incentives). The counterargument to the general Christian position has been that totipotent and pluripotent stem cells could provide significant benefits as research materials and as a source of treatment products. In some parts of the world, such as Singapore and China, efforts proceed with little impediment.
ART does not involve genetic engineering per se. Genetic engineering of human beings became a genuine possibility, even to the point of permanently altering the species (as some ‘transhuman’ advocates have long sought), with Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR). Genetics might be altered somatically for beauty or prowess or intelligence, but more significantly the germline might be changed to ‘positively’ improve one’s future offspring, as determined by those with political or corporate authority. The increased genetic engineering precision allows targeting specific diseases for personalized genetic care, yet also opens up the possibility of creating new categories of beings (species) through ‘gain of function’. It would be naïve to think that dictatorial nation-states and aggressive corporations are not currently working to genetically ‘improve’ the human body. This will become even more morally complicated if genuine transhumans can be created, whereby genetically altered bodies are yoked to artificial intelligence (AI).
Changing the body to match some ideal is ancient. Tattooing has existed for millennia. Cosmetic plastic surgery has been readily available for half a century. These tend to be relatively minimal, more like permanent make-up. Medical ethics problems arise, however, if the procedures sought are more extreme. Such procedures occur when the patient (or his/her surrogate) seeks a ‘cure’ in the form of radical alteration of appearance through pharmaceuticals and/or surgeries, such as elective alteration of limbs, facial alterations inconsistent with the species (e.g. horns, tattooed eye whites), etc. If considered harmful, facilitating such procedures would violate the first duty of the physician. Yet, if the patient seeks such procedures, not to assist might be deemed paternalism.
Currently, the culture-wide medical ethics debate concerns the use of medical procedures to change appearance such that it purportedly matches ‘true’ identity. The patient (or surrogate) believes that medical technology can provide a solution or cure to a social and existential crisis. The decision to proceed assumes that one’s biological identity is ‘wrong’ or ‘inadequate’, and therefore that the physical body as configured is an unessential modality accidentally imposed on the true self. Unlike Christian hylomorphism (that the self on this earth is, in some sense, the physical body and soul/mind), or modern physicalist reductionism (the self is only the physical body), the argument uses a version of non-material idealism reducing the physical body to an instrument.
When this treatment choice is made by an adult, society has tended to expect acceptance or at least ‘toleration’ (meaning ‘disagree but not impede’). However, the moral argument increasingly being made in the West is that the individual’s self-knowledge ‘trumps’ all. To coin a seemingly self-contradictory term, when taken to the extreme, the moral epistemological claim is for ‘authoritative hyper-subjectivized natural law’ which prevents moral verification or even doubt by anyone else. The assertion of epistemic certainty (‘my truth’) requires active support.
Unlike traditional liberty, this newest expression of autonomy means that neutrality and toleration are immoral. This can result in medical professionals being required, in some institutions or jurisdictions, not only to tolerate or respect patient choices with which they might not agree, but to facilitate them affirmingly. Debates over practitioner conscience rights versus patient autonomy quickly follow. Large hospital systems in the US (excepting those affiliated with Catholicism, the Seventh Day Adventist Church, and some other Protestant groups), the National Health Service in Britain, and the various national professional bodies have tended to align with the affirmation position, at least for adult patients.
In the case of potential procedures with those who are under-age (children or adolescents), moral authority might be shifted to the parents under legal subsidiarity, yet the epistemological problems remain. The legitimacy of parental decisions is increasingly being challenged by legislatures, school systems, and bioethicists on both sides of the issue. Not only is the status of the physical body increasingly uncertain at the end of modernity, but disagreement over sources of moral authority (e.g. church, parents, etc.) and over the possibility of functional middle axioms (e.g. Constitutional standards, English common law) seems to leave medical ethics with no basis for ethical discourse, let alone moral agreement.
7 Conclusion: the immediate future and lack of consensus
In the late-modern West, four strong cultural forces shape healthcare: (1) increasing technological capacity, (2) hyper-individualism in the marketplace, (3) the expansive use of health and illness language to describe personal and social needs and wants (e.g. gun violence, global warming as medical ethics concerns), and (4) centralized social control by the nation-state. The demand for access to medical expertise grows, but with that also comes the inevitable failure of ‘treatments’ supposedly addressing existential inadequacies.
Perhaps wishfully, medical ethicists are called upon (or claim the authority) to provide answers, yet seemingly do not fulfil expectations. The currently foremost moral approaches in medical ethics (including public health and research ethics) are modified versions of those dominant for the past four decades:
- utilitarianism (e.g. a version of consequentialism used by most public health officials, most medical market managers, and philosophical ethicists such as P. Singer)
- negative and/or positive rights deontologically asserted (e.g. those holding versions of principlism, both ‘right to life’ and ‘right to choice’ advocates, ‘death with dignity’ advocates, disability rights advocates, some philosophical ethicists, and a number of conservative Protestant ethicists)
- case-method casuistry (e.g. A. Jonsen and S. Toulmin, as well as practitioners emphasizing their own moral diagnostic capabilities)
- virtue arguments (e.g. ‘vocation of physician’ theorists such as E. Pellegrino)
Virtue arguments are the most helpful for Christian practitioners, but perhaps the least functional in the secular arena given the lack of a shared telos and the epistemic unverifiability of natural law and revealed law claims.
Since the turn of the twenty-first century, medical ethics has also been influenced by various non-disciplinary theoretical approaches.
- Feminist thought significantly impacted discourse on abortion, beginning in the 1970s. Feminist bioethics also influenced nursing ethics (Gilligan 1982; Harrison 1983; Tong 1997; Dickenson 2007; Baylis and McLeod 2014; IJFAB 2008)
- Disability rights organizations put forward moral arguments based either on ethnic and feminist liberation models or on civil rights deontological arguments. The former was most evident with ‘deaf liberation’ and the latter with the advocacy that led to the Americans with Disabilities Act in 1990
- Critical theory, an approach developed out of the work of various theorists of the Frankfurt School, has been influential in academic bioethics, especially Black critical theory and queer critical theory. Simply put, critical theory is a neo-Marxist analytical approach, often mixed with various versions of psychotherapeutic theory, which allows ‘class’ to be defined in non-economic ways
- Various post-colonial arguments have made inroads, but usually only regionally (Zion, Briskman and Bagheri 2021; Rentmeester 2012)
- At the end of the twentieth century and into the twenty-first century, business ethics has been influential, especially through the organizational ethics requirements of various accrediting bodies (e.g. the Joint Commission in the US)
A possible alternative response for Christian medical ethics to cultural deconstruction is to draw on early church and Scholastic notions of ‘flourishing’. The possibility has been proposed by Catholic and Protestant individuals and medical associations (Brand and Yancey 1973; Taylor and Dell’Oro 2006; Messer 2013; Kilner 2015; Thobaben and Young 2019). Flourishing is something more than just lively bodily integrity. It implies that life has a purpose or telos. Consequently, seeking physical well-being, while acknowledging the finitude and frailty of both patient and provider, legitimates biomedical research and the provision of healthcare services. It also provides boundaries for what is not acceptable. In the US and other social contract states, the secular idea of ‘pursuit of happiness’ might partially substitute for ‘flourishing’. In general discourse, ‘holism’ is a rough synonym.
Christian versions, however, should rely on neither extreme individualism nor collectivism, but on the assumption that moral agents (practitioners, patients, governmental authorities, family members, etc.) are teleologic individuals within community. A proximate goal of earthly holism or flourishing provides a telos for considering ethics beyond the clinical setting without extending ‘the medical’ into every societal sphere or drawing governmental paternalism into the clinic.
However, even if the idea of individual ‘flourishing’ or partial equivalents may help Christian bioethical discourse and engagement with civil society in a way that neither traditional ‘natural law’ nor versions of principlism have, social and political disagreements will remain (Thobaben 2016).