17 May

Modern medicine, medical historians generally (although I think mistakenly) agree, begins in France shortly after the French Revolution of 1789. The Revolution had completely disrupted medical education –– for a short time no new doctors graduated (the university faculties of medicine were abolished in 1792), and at the same time anyone was permitted to practise medicine –– but when order was restored in 1794 medical education, medical diagnosis, and medical research were placed upon a new footing. Education now required the presence of trainee doctors in hospital wards where they would follow the diagnosis and treatment of actual patients –– this was not a new idea, for it can be traced back to Renaissance Florence, and was common in late seventeenth-century Holland, but it was now applied systematically for the first time. A medical education was now acquired on hospital wards, and not just in a university setting, in the lecture theatre and the dissecting room. At the same time the nature of diagnosis changed. Until the late eighteenth century in France, the standard medical encounter, the one that shaped the thinking of the profession, was one in which a fee-paying patient consulted a doctor of her own choice. In this exchange the patient always had the upper hand, for the doctor could be dismissed at any time. As we have seen, the doctor usually did not examine the patient; commonly his curiosity was satisfied by feeling the pulse or inspecting the urine, both of which stood in as substitutes for the patient’s actual body.

The information the doctor relied on was primarily provided by the patient’s own words. But in post-revolutionary France the standard medical encounter came to be seen (despite the fact that the majority of doctors were still seeing private patients in their homes) as one between a doctor who was paid for by the state and a patient who had no control over her own treatment, an encounter which took place in a hospital setting. Here, the power relations were reversed: a patient who misbehaved or did not follow the doctor’s instructions would be sent home. There was central control over the distribution of patients: if a doctor was known to be working on a particular disease, suitable patients would be directed to see him. In these encounters, doctors paid less and less attention to what the patient said was the matter, and more and more attention to their own direct observation of the patient’s body. Underlying this was a more profound change. In the past doctors had sought to alleviate symptoms as described by the patient: what mattered was that the patient should feel better, if not immediately (most patients must have felt worse after repeated purging) then in the medium term; in the end the measure of success was a subjective one. But now, in the first decades of the nineteenth century, doctors were seeing patients in considerable numbers, and when these patients died (as a high proportion of them did) their bodies were routinely autopsied. One doctor working at La Charité hospital said that in the twelve years he had worked there he believed not a single patient who had died had escaped autopsy. What the doctor now sought to do was predict, on the basis of his inspection of the patient, what would show up at autopsy. The doctor’s task was to read the symptoms he could perceive in the living as indicators of a hidden condition that would only be exposed to view in the dead. 
His project was to move in the mind’s eye from the surface of the body to the interior, and soon very simple new tools –– such as the stethoscope –– were to be devised to make this task easier. This task was complicated by the fact that the interior of the body was now being observed in a different way. In the past, autopsies had primarily been concerned to locate the cause of death in a particular organ, and each organ was seen as having a particular function within the body. But now each organ was seen as being constructed out of a number of different types of tissue (eventually twenty-one different types were identified), and the same types of tissue were found throughout the body, so that at autopsy it was seen that the same type of lesion, affecting the same type of tissue, could be found in very different organs.

The founding text of this new anatomy was François-Xavier Bichat’s Traité des membranes, of 1799, followed by his Anatomie générale of 1801. Bichat had arrived in Paris in 1794, at the age of 22, but he had been quick to impress, probably because he had begun learning medicine at an early age from his father, a professor of medicine at Montpellier. In Bichat’s work the body was no longer described as a city made up of different buildings; instead it was seen as a city made up of different buildings constructed from a narrow range of materials, so that rotting wood might equally be found in a town hall, a country cottage, or a castle; just so an inflamed membrane might be found in lung, intestine, or eye-socket, or rather an inflamed membrane of a particular sort, for Bichat distinguished three distinct types of membrane. This set of interlocking transformations –– a new analysis of the body’s components; a new observation of the patient’s body; a new relationship between doctor and patient; a new medical education –– was the subject of a famous book by Michel Foucault, The Birth of the Clinic, first published in French in 1963 and in English translation in 1973. The word ‘clinic’ might be better paraphrased as ‘teaching hospital’, though for Claude Bernard the clinic was not an institution but an enterprise, ‘the study of a disease as complete as possible’. Four important developments took place over a generation or two within this new medical world. First, doctors became slowly aware of the limits of their powers. When the young Oliver Wendell Holmes went to Paris for his medical training in the 1820s, there were still doctors around who would arrive on the wards in the morning and order that every patient, no matter what was wrong with them, should have their blood let.
The fashionable theories in the 1820s were those of François Broussais: he thought diseases originated in inflammation, and the treatment for inflammation was bloodletting, though he favoured leeches rather than venesection. Slowly doctors were forced to recognize that their treatments seemed to do little good. In a hospital setting, where the patients were under constant supervision, they could scarcely blame the patients for failing to follow their instructions, so they came face to face with the limits of their own powers. Wendell Holmes went back to America from France convinced that most therapies were useless –– a view that scandalized American doctors, but fairly represented the most advanced French thinking. His hero was Louis, whose work we have already discussed.

The clinic gave birth to what is sometimes called ‘therapeutic nihilism’. And therapeutic nihilism was entirely justified. For the first time in more than two thousand years doctors were finally beginning to acknowledge that they did little good, and some harm. Second, and more slowly, as statistics were collected and treatments compared, it became apparent that hospitals were actually very bad places in which to be ill. In Britain, Sir James Simpson, the discoverer of chloroform anaesthesia, established that 40 per cent of amputations performed in hospitals resulted in death; on the other hand only 10 per cent of amputations performed outside hospitals were fatal. Simpson memorably concluded: ‘A man laid on an operating table in one of our surgical hospitals is exposed to more chances of death than was the English soldier on the field of Waterloo.’ The hospital itself seemed to be a cause of disease, and Simpson coined the word ‘hospitalism’ to define the problem. What was the cause of the problem? One answer was that hospitals were rather like slums: dirty (patients were often put into beds that had been occupied by other patients, with sheets covered in blood and pus), smelly (with the air pervaded by the smell of rotting flesh and gangrene), and crowded (with as many beds as possible squeezed into large wards). Often they were built near graveyards and industries. In terms of traditional thinking, both the analysis and the solution were straightforward: the problem was miasma, or bad air; the solution was ventilation and improved cleanliness.

In the Crimean War it was found that wounded soldiers operated on in tents had a better chance of recovering than those operated on in hospitals: the very buildings themselves appeared to be at fault. Miasma must be countered with hygiene. Florence Nightingale was one of the great exponents of this line of argument –– and improved hygiene did help. But cleanliness alone did not seem enough to prevent the spread of infection, and by the 1860s many were arguing that the big city hospital would have to be abolished, that patients would have to be treated at home or in little cottage hospitals. Within two generations the new medicine was in a profound crisis: its therapies didn’t work, and its key institution, the hospital, was a death trap, a ‘charnel house’, as John Tyndall put it. This new awareness of just how dangerous medical intervention could be is usefully marked by the coining in 1860 of the phrase primum non nocere, ‘first do no harm’. The first person to use it was Thomas Inman, who claimed (mistakenly) to be quoting Thomas Sydenham. But the phrase was quickly picked up and attributed not to Sydenham but to Hippocrates –– despite the fact that Hippocrates wrote in Greek, not Latin. In reality it is an invention of 1860 and its rapid attribution to Hippocrates represents the invention of a tradition. Newly aware of the extent to which doctors were capable of doing harm, the medical profession reassured themselves with the thought that Hippocrates had shared their concern. At the same time as the hospital system destroyed itself from within, two new developments were taking place, the one an extension of the other, just as the identification of hospitalism was a natural extension of therapeutic pessimism. First, a small group of doctors, mainly in Paris, came to feel that medical knowledge would never be complete if it relied entirely on the inspection of patients and the dissection of cadavers.
What was needed was the application to medicine of the experimental method, an enterprise that would make possible new developments in physiology.

Since there were limits to the experiments that could be performed on humans the new science was to be based on animal experiments. Here again Bichat was a key figure. One of Bichat’s concerns was to establish what actually happened when people died. Death, he realized, was not a uniform process. Sometimes the heart stopped first, and the other organs of the body then failed; at other times, for example if someone drowned, it was the lungs that first ceased to function, followed by unconsciousness and the stopping of the heart; or again an injury to the brain might be fatal, though the lungs and heart were in good order. Bichat, the founder of the new account of human anatomy in terms of different types of tissue, devised experimental methods for studying the progress of death through the body. For example, one could model the failure of the lungs by passing venous blood rather than arterial blood into the heart to be pumped around the body. Bichat tried connecting the heart of one dog to the veins of another, but it was hard to get the pressure right and the blood flowing in the right direction.

He had more success tying off the flow from the lungs and injecting venous blood in its place –– brain death followed, as if from asphyxiation. These experiments were reported in Recherches physiologiques sur la vie et la mort (1800). Experimental physiology meant that for the first time doctors needed a specialized space in which to conduct research. ‘Every experimental science requires a laboratory’, wrote Claude Bernard in his Introduction to the Study of Experimental Medicine (1865). Medicine followed the path pioneered by physics and chemistry in becoming a laboratory science. The first medical laboratories were straightforwardly places for conducting vivisections, and the tools of the physiologists’ trade were those of the chemist and the surgeon. At first microscopes were not to be found: Bichat, for example, had no use for them. They began slowly to appear only in the 1830s: ‘the new era of microscopic pathological anatomy’, wrote Claude Bernard in 1865, ‘was originated in Germany by Johannes Müller’, with a book published in 1830. Only later, from the 1870s, did bacteriology create a new type of laboratory space, full of microscopes, test tubes, petri dishes, and other types of specialized glassware. With physiology came another new science, pharmacology (the word dates to 1805). The revolution in chemistry inaugurated by Lavoisier provided the techniques to produce pure samples of the active agents in the plant materials that had been relied on for drug therapy. From Hippocrates on, pharmacists had produced complicated recipes with numerous ingredients. Now, physiologists working with chemists set out to isolate and test one active ingredient at a time. In Germany, morphine was isolated from opium in 1817; in the same year in France emetine was isolated from ipecacuanha root; strychnine was isolated from upas in 1818; quinine from cinchona in 1820; caffeine from coffee in 1821.
In 1821 François Magendie published a Formulaire pour la préparation et l’emploi de plusieurs nouveaux médicamens. It was 84 pages long. In 1834 the 8th edition appeared; it was 438 pages long. Some of these drugs were certainly useful; but few, perhaps none of them, cured diseases. Thus by the 1860s medicine appeared to have been transformed: a new relationship between the doctor and the patient’s body; a new preoccupation with linking diagnosis and autopsy; new sciences of physiology and pharmacology; and three new locations, first the purpose-built hospital, and then the physiological and the pharmacological laboratories. I wouldn’t want to question that there is something profoundly modern about all this, and it certainly was out of this world that modern medicine emerged, but it is essential to stress that the medical revolution represented by the birth of the clinic and of the physiological laboratory was not a success but a failure. The mortality amongst patients did not decrease; instead it increased. New therapies were tried, but they failed. The old complicated pharmacopoeia was abandoned in favour of new, chemically pure drugs which could from the 1850s be injected by hypodermic syringe straight into the bloodstream. Morphine was extracted from opium, quinine was extracted from cinchona, but people went on dying, more or less as before. You could only think that this was the foundation of modern medicine if you thought that modern medicine was about certain institutions (hospitals, laboratories), or certain ways of inspecting patients (stethoscopes, thermometers), or certain ways of interpreting the human body as a prospective cadaver for autopsy (lesions rather than diseases, tissues rather than organs).
But if you think that the key feature of modern medicine is effective therapy and the capacity to postpone death, then these institutions, these instruments, this way of thinking about disease are beside the point, because none of them led to effective therapy.

The alternative view is that modern medicine began long after the birth of the clinic, and that it is inseparable from the germ theory of disease and the controlled clinical trial. Why do historians prefer to focus on the birth of the clinic rather than the germ theory of disease or the clinical trial? Part of the answer is that many of them don’t actually believe that science progresses. For a relativist, the story of the birth of medical science in the first half of the nineteenth century is a profoundly reassuring one, because the unintended and adverse consequences of so-called progress are far

more striking than the intended consequences; doctors, trying hard to save lives, went around killing people. But the story of the birth of the clinic is also attractive to historians because it ties the history of medicine firmly to other sorts of history: the purpose-built hospital can be compared with the prisons and schools being built at the same time (the subject of Foucault’s Discipline and Punish); the new technically skilled medical professional can be compared with his fellow professionals in law, in the universities, in engineering; experimental physiology is merely one of the new experimental sciences. The problem with the alternative emphasis on the germ theory of disease is not just that it creates a radical discontinuity between the new medicine and the old; it is also that it is much harder to situate this revolution in time and space. In the story of the birth of the clinic everything can be brought back to the hospital, and the hospital can be given a history. To ask why doctors didn’t do better makes little sense. They did what they could in a world that was not of their making. But the story of germ theory is very different. It makes perfect sense to ask why doctors for centuries imagined that their therapies worked when they didn’t; why there was a delay of more than two hundred years between the first experiments designed to disprove spontaneous generation and the final triumph of the alternative, the theory that living creatures always come from other living creatures; why there was a delay of two hundred years between the discovery of germs and the triumph of the germ theory of disease; why there was a delay of thirty years between the germ theory of putrefaction and the development of antisepsis; why there was a delay of sixty years between antisepsis and drug therapy.
Any history of medicine which focuses on what works immediately brings to the fore these uncomfortable questions about delay, resistance, hostility, and (if we use the word metaphorically) malpractice.
