BAD MEDICINE-BETTER MEDICINE


We all have bodies, and all our bodies function in much the same way. Each of us originates in a fertilized egg; we all breathe and maintain a heartbeat; we all eat, digest, and excrete. If we cannot perform these basic functions for ourselves, then our life depends on medical machinery doing them for us. In these respects we are all alike, and like, too, not only all the generations of human beings before us, but all mammals, birds, and reptiles. Bodies, you could say, have no history because they have been much the same since the first human beings came into existence. But our bodies do have a history. I am tall, over six feet. The vast majority of people over six feet tall have been born in the last century, perhaps in the last thirty years. In the eighteenth century Frederick William I of Prussia searched across Europe to assemble a regiment of men over six foot tall: the enterprise took its point from the rarity of such giants. Anybody inspecting my body for a post mortem would find that on my upper arm there is the scar of a vaccination against smallpox, which must have occurred after 1796, when Jenner invented vaccination, and before 1980, when smallpox was officially declared eradicated. They would also find evidence of my surviving an appendix operation and a compound fracture of the tibia: this, as we shall see, implies medical care received after 1865. Before that date an appendectomy was almost certain to be fatal, while the only hope for someone with a compound fracture (where the bone sticks through the skin) was amputation.

1. James Ensor, The Bad Doctors, 1895. Etching. Three doctors, working with crude instruments (a carpenter’s saw, a corkscrew), have been performing abdominal surgery on a helpless patient – they have even removed his backbone.

The amalgams used to repair my teeth, and my varifocal lenses, without which I would be half blind, set a terminus post quem in the late twentieth century. My life expectancy is quite different from that of someone born a hundred or a thousand years ago. Put two dead bodies, one from the eleventh century and one from any industrialized society in the twenty-first, on to a mortuary slab, and you would not need to be an expert to tell them apart.

To have a body is to experience, at least on occasion, pain: every infant suffers from wind and teething. Every child encounters disease. And part of the process of growing up is discovering that death awaits us all. All societies seek to alleviate pain, ward off disease, and postpone death; to fail to do these things would be inhuman. In Western society, we turn above all to the medical profession for help, and the doctors who treat us belong to a profession that dates back to Hippocrates, the ancient Greek who, some 2,500 years ago, founded a tradition of medical education that continues uninterrupted to the present day.

Yet the striking thing about the Hippocratic tradition of medicine is that, for all but the last hundred years, the therapies it relied on must have done (in so far as they acted on the body, not the mind) more harm than good. For some two thousand years, from the first century bc until the mid-nineteenth century, the main therapy used by doctors was bloodletting (usually opening a vein in the arm with a special knife called a lancet, a process called phlebotomy or venesection; but also sometimes cupping and leeching), which weakened and even killed patients.

Moreover medicine became more, not less, dangerous over time: nineteenth-century hospitals killed mothers in childbirth because doctors (trained to consider themselves scientists) unwittingly spread infections from mother to mother on their hands. Mothers and infants had been much safer in previous centuries when their care had been entrusted to informally trained midwives. For 2,400 years patients have believed that doctors were doing them good; for 2,300 years they were wrong.

I think it is fair to say that historians of medicine have had difficulty facing up to this fact. Historians of medicine are a diverse group, with widely differing views, but in general they no longer write about progress, and so they no longer seek to distinguish good medicine from bad. Indeed they try to avoid what they think of as anachronistic evaluations: ‘only the most dyed-in-the-wool Whig history still polarizes the past in terms of confrontations between saints and sinners, heroes and villains’, wrote Roy Porter (1946–2002, the greatest medical historian of his generation) in 1989.

This book, on the other hand, is directly concerned with progress in medicine: what made it possible, and why it was so long postponed. To talk about progress is to talk about discoveries and innovation, and about obstacles and resistance: it is inevitably to talk about heroes and villains, if not about saints and sinners. This book, therefore, is written against the grain of contemporary historical writing.

There is a particular reason for writing about progress in medicine now. In recent years the medical profession has discovered what it calls ‘evidence-based medicine’ –– that is, medicine that can be shown to work. This is the first history of medicine properly to acknowledge that most medicine, even into the present day, has not been evidence-based, and indeed that it did not work. If the story I tell in this book is very often one of failure, not success, that is because we have begun to redefine success, which means we are now in a position to rethink the history of medicine. Recognizing how late and limited medical progress has been makes the progress that has taken place even more remarkable. So this book is also about the process whereby we have at long last learnt to preserve life and health.

Here I have tried to concentrate on the big picture: the first successful operation on appendicitis took place, as best we can tell, in 1737; in Britain the first successful caesarean section, in which both mother and baby survived, had been performed by the end of the eighteenth century; but until 1865, when Joseph Lister, working in a Glasgow hospital, first demonstrated the principles of antiseptic surgery on a young boy with a compound fracture of the tibia, such operations were bound to be almost always fatal. With Lister there begins a new era in medicine, made possible by the triumph of germ theory, and the third part of this book examines the incredible revolution in medicine that began in 1865.

When I use phrases like ‘until 1865’ or ‘a new era’ I am using a sort of shorthand. There was considerable resistance to Lister’s innovations, and they were slow to win acceptance. Despite the fact that antiseptic surgery helped consolidate a germ theory of disease, it was to be thirty years before a cure was found for any major infectious disease. The new era is separated from the old by a lengthy period of transition, from antiseptic surgery to penicillin, from 1865 to 1941, not by a single event, Lister’s first antiseptic operation.

Moreover Lister’s innovations made possible new types of bad medicine. For the first time it was possible to operate on the abdomen, and some surgeons proceeded to happily chop out bits and pieces (an appendix here, a colon there) not because they were infected, but because they might one day become infected –– the historian Ann Dally has called this ‘fantasy surgery’. These operations never became the norm, but tonsillectomies did, and we now know they did more harm than good. Worse still, the decision as to whose tonsils should be removed was not remotely rational. Of 1,000 11-year-old children in New York in 1934, 61 per cent had had tonsillectomies. The remaining 39 per cent were subjected to examination by a group of physicians, who selected 45 per cent of these for tonsillectomy and rejected the rest. The rejected children were re-examined by another group of physicians, who recommended tonsillectomy for 46 per cent of those remaining after the first examination. When the rejected children were examined a third time, a similar percentage was selected for tonsillectomy so that after three examinations only sixty-five children remained who had not been recommended for tonsillectomy.
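To make the arithmetic of this cascade explicit, here is a worked reconstruction from the percentages just quoted (a sketch only: the rounding to whole children, and the roughly 44 per cent implied at the third examination, are my inference, not figures reported from the study):

```latex
% Reconstruction of the tonsillectomy selection cascade from the
% percentages in the text (requires amsmath; rounding is inferred).
\begin{align*}
1000 - 610 &= 390 && \text{children not yet operated on (61\% had had tonsillectomies)}\\
390 - 176  &= 214 && \text{remain after the first panel selects 45\%}\\
214 - 98   &= 116 && \text{remain after the second panel selects 46\%}\\
116 - 51   &= 65  && \text{remain after the third panel selects a similar share ($\approx$44\%)}
\end{align*}
```

Each panel, that is, recommended surgery for nearly half of whichever children were put in front of it, however many times those children had already been pronounced healthy.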

These subjects were not further examined because the supply of examining physicians ran out. Clearly the decision as to who should have a tonsillectomy was entirely arbitrary. This was bad medicine alive and well in the 1930s.

I do not want to suggest that everything changed in 1865. But 1865 marks the moment when real progress first began in medical therapy, and, however imperfectly and haltingly, progress has continued since then. 1865 marks a turning point, not a transformation; by 1950 medicine had acquired a genuine capacity to extend life.

This claim, that modern medicine works, is not, I think, really contentious. It once would have been. Between 1976, when Ivan Illich published Limits to Medicine and Thomas McKeown published The Modern Rise of Population, and 1995, when J. P. Bunker published an essay entitled ‘Medicine Matters After All’, there was a serious body of intellectual opinion which held that medicine had made no real difference to life expectancy, that the achievements of modern medicine were just as illusory as the achievements of ancient medicine. Now the balance of the argument has shifted: it is easy to exaggerate the extent to which medicine matters, but it would be strange to claim that it achieves nothing of any significance, and 1865 usefully marks the moment at which doctors began to be able to save lives.

Lister became a qualified doctor in 1854; the moment of his entry into the profession was marked, we may imagine, by his taking the Hippocratic Oath. The oath was written by Hippocrates when, in c.425 bc, he began to provide a medical education to people who were not members of his immediate family. Or at least this is what we are told by Galen, a Greek doctor who practised in Rome six hundred years later, and whose writings were, for 1,400 years, regarded, in both Islamic and Christian countries, as the ultimate authorities on all medical questions.

A few years ago I watched with pride as my daughter took the Hippocratic Oath in Glasgow. There is something dizzying about the idea of a ritual that has survived for 2,500 years, while paganism has given way to monotheism, the mathematics of Pythagoras to the mathematics of Einstein, the technology of Archimedes to that of Wernher von Braun, the Greek city state to the modern nation state.

The true story of the Hippocratic Oath is a bit more complicated. It almost certainly was not written by Hippocrates. Scribonius Largus (c. ad 1–50) describes the oath being administered in his day; we have an Egyptian papyrus copy from c. ad 275. This evidence is so fragmentary that it suggests that the oath was not routinely employed in the education of doctors in the classical world, and it was certainly not regularly administered in the Middle Ages. We first find it being administered in a medical school in Wittenberg in Germany in 1508, and it first becomes part of a graduation ceremony in Montpellier in France in 1804. During the nineteenth century some European and American medical schools administered the oath, but many did not: I don’t know if Lister took the oath or not. As late as 1928 only 19 per cent of American medical schools administered the oath; and it is only after the Second World War that the oath (in its various modernized forms) began to be administered almost universally. Nevertheless the oath effectively symbolizes the unbroken intellectual tradition descending from Hippocrates into the nineteenth century and, thanks to the conservatism of the medical profession, beyond. Even where continuity is an illusion (as it is in the case of the oath), not a reality, doctors have wanted to foster a sense of continuity.

Or at least they have until very recently: the new move to problem-based learning, where medical students no longer attend lectures, means that in the future medical knowledge will cease to be presented as a body of information which has accumulated over time. Soon medical graduates will be taking the Hippocratic Oath without knowing who Hippocrates was.

In ancient Greece and Rome, throughout the world of Islam from the ninth century until the twentieth century (there were still ‘Ionian’ doctors practising ancient Greek medicine in Iraq in the 1970s, and I imagine there are still some today), in Western Europe from 1100 until the mid-nineteenth century, to be a doctor was not just to take one’s place in a tradition descending from Hippocrates, it was to employ the therapies recommended by Hippocrates (although later generations were to place much more emphasis on bloodletting than Hippocrates himself had done).

The standard editions of Hippocrates and Galen date to the moment when that tradition was coming to an end: 1839–61 in the case of Hippocrates, with an important English translation, 1849; 1821–33 in the case of Galen, with an important French translation, 1854–6. In the 1850s, when Lister went to university, Hippocrates and Galen were still part of every doctor’s education. 1861, when the standard edition of Hippocrates was completed, is, as we shall see, an important date, the date of Pasteur’s first major publication in germ theory and so (at least according to conventional accounts) the key moment in the founding of modern medicine. In 1846 the American J. R. Coxe could write of Hippocrates and Galen: ‘the names of both these great men are familiar to our ears, as though they were the daily companions of our medical researches’. That daily companionship was to come to an end within a few years, but it had been so long-enduring, so constant, so intimate that nobody foresaw its end, and nobody celebrated its death. Hippocratic medicine had no funeral, no memorial, no obituary. Instead there was an almost wilful determination to pretend that modern medicine was a natural development from Hippocratic medicine, that Hippocrates could still be the doctor’s daily companion.

At least until the 1860s there was a continuous tradition of Hippocratic medicine, and for century after century patients turned to their doctors to be cured. For two and a quarter millennia doctors insisted that medicine was a science that saved lives. But there were critics from the very beginning. An ancient work called The Science of Medicine, which dates to c.375 bc, is the first defence of Hippocratic medicine against its critics. The philosopher Heraclitus, for example, said that doctors tormented the sick, and were just as bad as the diseases they claimed to cure. It was Heraclitus, not the author of The Science of Medicine, who had the better argument, for Hippocratic medicine was incapable of fulfilling its promises. This should be obvious, but modern commentators are unable to admit this simple fact. They persist in treating The Science of Medicine as if it were a defence of science against quackery and superstition, rather than what in reality it is, a defence of quackery against justified scepticism. They seem to feel that the reputation of modern medicine is somehow at stake in this defence of ancient medicine, and that our idea of science is somehow the same as that of the ancient Greeks.

It is worth stressing that Hippocratic doctors were familiar with what we might think of as genuinely scientific and technological ways of thinking. A number of texts survive which the ancients attributed to Hippocrates; many were certainly written not by him but by his pupils, but amongst those with the best claim to have been written by Hippocrates himself is a work called Fractures, evidently written for the education of doctors in the fifth century bc. Its author explains how to make metal rods with which to force displaced broken bones back into place:

One should use these, while extension is going on, to make leverage ... just as if one would lever up violently a stone or log. This is a great help, if the irons are suitable and the leverage used properly; for of all the apparatus contrived by men these three are the most powerful in action –– the wheel and axle, the lever and the wedge. Without some one, indeed, or all of these, men accomplish no work requiring great force. This lever method, then, is not to be despised, for the bones will be reduced thus or not at all. If, perchance, the upper bone over-riding the other affords no suitable hold for the lever, but being pointed, slips past, one should cut a notch in the bone to form a secure lodgment for the lever.

This was a perfectly effective technology, well-grounded in theory; but Hippocratic doctors persisted in defending bloodletting and cauterization as if they were just as reliable as the application of a lever to a stone or a log. I have deliberately introduced the term ‘technology’ because I want to stress that medicine, at least since Hippocrates, has always been a technology, a set of techniques used to act on the material world, in this case the physical condition of the patient’s body.

With technologies it is perfectly legitimate, and not at all anachronistic, to talk about progress. Thus a steam engine is a technology for turning heat into propulsion. Progress in the design of steam engines means either that greater propulsive force is obtained, or the same force is obtained more efficiently. The definition of progress is internal to the technology itself. In the case of medicine, progress means that pain is alleviated, periods of sickness are shortened, and/or death is postponed. Hippocrates would have recognized this to be progress, so would Lister, so would Richard Doll, the man who discovered that smoking causes lung cancer. To ask if there is progress in medicine is not to ask an illegitimate question, as it might be, for example, to ask if there is progress in philosophy or poetry.

Hippocrates thought that he could alleviate pain, shorten sickness, and postpone death. We now know that (in so far as his techniques acted on the body not the mind) he was wrong. Studies in the nineteenth century, when Hippocratic therapies were finally coming under attack, showed that when the standard Hippocratic therapies were employed against broncho-pulmonary infections, mortality was increased by about two-thirds. Hippocratic medicine was bad medicine in that it killed when it claimed to cure.


2. This woodcut, reproduced from Guido Guidi, Opera Varia (Lyons, 1599), first appears in 1544. It accompanies a text by the fourth-century Byzantine medical writer Oribasius. Hippocrates’ Fractures is included in the same volume.

Of course Hippocrates did not know this, and he had no idea corresponding to our concept of an infectious disease. For Hippocrates no two illnesses were exactly alike; because illness was a disorder of a particular body each person’s illnesses were to some degree idiosyncratic. Before you can start measuring the success of a therapy you need to start lumping particular occurrences of illness together. There are various ways in which you can come to do this. One is by recognizing that illnesses can be spread from one person to another –– that the illness I have today is the very same one that you had yesterday. Most of the first claims that the effectiveness of therapies could be measured were directed at curing contagious diseases, and depended on the prior development of a concept of contagion.

But there are other ways of getting to the same result. Thomas Sydenham (1624–89), an English doctor and friend of John Locke, thought that Hippocrates had had a vital insight when he had seen that at certain times of year, in certain places, lots of people got very similar diseases. Sydenham did not believe in contagion, but he did believe that one could produce what he called ‘an accurate history of diseases’. For too long people had thought of disease as ‘but a confused and disordered effort of Nature, thrown down from her proper state, and defending herself in vain’, but diseases had their own patterns and their own orderliness. Sydenham, like theorists of contagion, had come to think of diseases as if they fell into certain distinct species, just as (to use his comparison) plants did.

Later generations of English doctors revered him as ‘the English Hippocrates’ because he had refounded medicine as a study, not of patients and their disorders, but of diseases and their regularities. From this point it becomes easy, in principle, to compare therapies, and decide if one is better than another at alleviating pain, shortening illness, and postponing death –– Sydenham claimed to have brought about a great improvement in the treatment of smallpox (even though he did not recognize it as a contagious disease). Other doctors had bled smallpox victims, covered them with hot blankets, and given them warming drinks, despite the fact that they were suffering from a fever. Sydenham thought this could lead to boiling of the blood, brain-fever, and death. He cooled his patients, gave them cool liquids, and, naturally, bled them, though only moderately. His patients were certainly more comfortable, and may well have got better faster.

For other conditions, however, his therapies were entirely orthodox. He believed, for example, in treating a cough, of whatever sort (he was well aware there were different sorts of cough), with bleeding and (often repeated) purges (laxatives to induce diarrhoea).

In Sydenham’s day, people were beginning the first systematic study of life expectancies, based on the London ‘bills of mortality’, which recorded the cause of death for everyone who died in London. As we shall see, the new intellectual tools were being assembled which would eventually make it possible to evaluate therapies and measure progress in medicine. The more this was done, the more it became apparent that traditional remedies were defective. Foucault gives the example of an early nineteenth-century doctor who abandoned all the traditional therapies. He was aware of 2,000 species of disease, and treated each and every one of them with quinine.

Now quinine, a drug that was new in the seventeenth century, really does work against malaria. Its great advantage in use against the other 1,999 conditions is that (unlike traditional Hippocratic remedies) it does little harm.

Although Hippocrates had no way of knowing it, his technology was defective. Hippocratic medicine was not a science, but a fantasy of science; and in this it is much more like astrology than it is like Ptolemaic astronomy (the classical account which placed the earth at the centre of the cosmos, with the sun and planets rotating around it), for classical astronomy worked rather well as a method of predicting the movements of bodies in the heavens. But where modern astronomy founded itself in the rejection of astrology, where the astrologers were thrown out of the universities by the astronomers, modern medicine incorporated the Hippocratic tradition and the Hippocratic profession.

Where the history of astronomy was long written as if it had nothing to do with astrology, so that modern historians have had to rediscover the fact that astronomy and astrology were once one and the same thing, history of medicine has been written as if it has everything to do with Hippocrates, so that the historian now has to discover the fact that Hippocratic medicine was not itself a science, but a fantasy of science. The whole of medicine before 1865 was caught up in a fantasy world.

One reason for this appearance of continuity, this peculiar insistence that the history of medicine begins with Hippocrates, not with Pasteur or with Lister, is that in medicine the astrologers turned into astronomers, the Hippocratic doctors turned into scientific doctors. But there is another reason, and that is that the new doctors kept on doing the equivalent of casting horoscopes. Until the invention of penicillin in 1941 there was very little doctors could do about most infections; even the new science left them virtually powerless in the face of disease. They had no alternative but to keep up the age-old pretence that medicine had something useful to offer, when for the most part what it offered was a ritual, a rite, a performance, a show. Doctors did not cure patients; rather they helped them contain their anxieties, which is an important undertaking in itself. But the age-old pretence that they could do more than this still affects the way in which we write about the history of medicine, and prevents us from thinking straight about progress in medicine.

The medical revolution of the second half of the nineteenth century meant that soon textbooks were no longer restatements of the teachings of Hippocrates and Galen; but the notion that medicine was a long-standing profession, that it had an ancient tradition, was preserved in the face of change. Just as the medical profession survived surprisingly unchanged, so too our language continues to reflect the beliefs and practices of an earlier age. When I say my blood boils; when I admit I’m hysterical; when I assume that red-headed people are hot-blooded or complain that someone is cold-blooded or ill-humoured; when I say someone is phlegmatic; when I listen to the song ‘My Melancholy Baby’, I’m thinking in terms which once made sense as part of a coherent and subtle system of belief. Our language is littered with the flotsam and jetsam of a vast historical catastrophe, the collapse of ancient medicine, which has left us with half-understood turns of phrase that we continue to use because metaphorical habits have an extraordinary capacity for endurance. It has also left us with a vocabulary which seems so completely modern that we scarcely even realize that we have inherited it from the ancient Greeks: apoplexy, arthritis, asthma, cancer, coma, cholera, emphysema, haemorrhoid, hepatitis, herpes, jaundice, leprosy, nephritis, ophthalmia, paraplegia, pleurisy, pneumonia, spasm, tetanus, typhus amongst the diseases; artery, muscle, nerve and vein amongst the parts of the body. The history of ancient medicine is still, though only just, a part of our own history.

The whole enterprise of the history of medicine has been vitiated by its inability to take seriously the extent to which medicine was, until 1865, an impossible, a misconceived project. Before contemporary history of medicine (roughly speaking, history of medicine since 1973), medical history was presented as a grand narrative of progress, and indeed there is some logic to such a narrative as long as one thinks of medicine as a body of knowledge or a science, not as a technology for treating illness. The first historian (he resisted even the word ‘historian’, preferring at one point ‘archaeologist’, at another ‘genealogist’) to break with the grand narrative of progress was Michel Foucault, whose The Birth of the Clinic: An Archaeology of Medical Perception appeared in 1963 in French and 1973 in English. But Foucault thought that modern medicine began in 1816, with the pathological anatomy of François Broussais, that modern medicine could be identified with a particular way of looking at patients’ bodies, not, as I will argue, with the germ theory of disease.

So his book was not about progress in medicine at all, at least not in the sense of medicine understood as a technology. Broussais was no better at curing diseases than Hippocrates had been, even if he preferred letting blood by applying leeches to the body (often to the anus) rather than by using a lancet to slice into a vein, as Galen would have done. Actually, as we shall see, the story Foucault tells in The Birth of the Clinic is best understood, not as the story of the birth of modern medicine, but as the story of the final crisis of ancient medicine.

A central claim of this book is that one of the most interesting things about medicine is that it works, and that we therefore need to study progress in medicine. We can only think about medical progress if we start with the long tradition of medical failure. We need to begin with bad medicine if we are to understand better medicine. We need, quite consciously and deliberately, to engage in what Porter called a polarization of the past. We need to think about the obstacles to progress, about the villains as well as the heroes.

When my daughter was 8, some twenty years ago, I bought her a large pop-up book called The Body. It contained illustrations of bones, muscles, and nerves, and of organs such as the heart and the uterus. Back then, before computer simulation, there was something mesmerizing about the crude three-dimensionality of folded paper. We both found it fascinating. Thinking I was being a good parent, I took the opportunity the various images of sexual organs presented of explaining, in the simplest terms, sexual reproduction.

My daughter was puzzled. The next day she came back from school and said she had discussed the matter with her friends. My theories were quite mistaken. No physical contact between the mummy and the daddy was needed to make a baby. She had consulted the ultimate authority, her peer group, and that was the end of the matter. At the time I thought my attempt to teach my daughter elementary biology had been a hopeless failure, but since she has now grown up to be a doctor, perhaps I achieved more than I realized.

At any rate, I learnt a great deal from that experience, and this book has its origin in that conversation. For I had been educated on John Locke and John Stuart Mill. I took it for granted that in an open argument good ideas would always defeat bad ideas; this was what made progress possible. I assumed that I only had to explain modern science to her in order for her to believe in it. I had no understanding of why someone might reject an unfamiliar and unwelcome idea. However, the real world is not the world of Locke and Mill.

There was something fundamentally wrong with my idea of how knowledge is transmitted from one person to another. Bad ideas often triumph over good: we will see a striking example of this when we look at the history of scurvy. Peer-group pressure often halts progress in its tracks. Despite the brilliant work of philosophers and historians of science (including historians of medicine), no one has really worked out how to write a history that takes account of this. We know how to write histories of discovery and progress, but not how to write histories of stasis, of delay, of digression. We know how to write about the delight of discovery, but not about attachment to the old and resistance to the new. We know how to write about drug patents and about the growth of new industries, but not about the ways in which economic interests can obstruct change. We know how to write about successful treatments and lives saved, but not about worthless therapies and lives lost. We know how to write old-fashioned histories of progress, although for the most part we choose not to do so. Because we only know how to tell one half of the story, the story we could tell is so obviously unsatisfactory that (if we are professional historians) we usually choose not to tell it.

Many years ago, in 1931, a famous historian, Herbert Butterfield, wrote an attack on narratives of progress called The Whig Interpretation of History. Butterfield’s immediate target was a view of English history that saw it as being about the progress of liberty –– a view invented by the Whig party in the eighteenth century.

As a result ‘Whig history’ has become the label for any anachronistic history of progress, and the self-confessed ‘dyed-in-the-wool Whig historian’ (to quote once again Roy Porter in one of the epigraphs to this book) has become an extinct species. Butterfield seems to have recognized that historians were bound to slip into such narratives, and happily slipped into them himself in many of his short books on big subjects, such as his book entitled The Origins of Modern Science. The alternative, he thought, was a sort of technical history that presented events as being the result of enormously complex processes, and described outcomes as being uncertain and unpredictable. Butterfield thought there were in effect two types of history: a bird’s eye view, which surveyed the past from the point of view of the present, and was necessarily biased and anachronistic; and a worm’s eye view, in which small things loomed large, and it was impossible to get one’s bearings. Since Butterfield there has been a general agreement amongst historians that the best history is written from a worm’s eye view –– despite the fact that some problems only come into focus if one stands back and looks at the big picture.

Go into any good bookshop and you will discover that there is more than one type of medical history. Much history of medicine is written by doctors for doctors. It deals with the past from a doctor’s point of view, not from a historian’s. There are many books that survey the key discoveries in medical history. Several of these books contain a chapter on the invention of the stethoscope by René Laennec in 1816. Doctors still use stethoscopes, indeed one of the first things a medical student does is buy a stethoscope, and so the invention of the first stethoscope looks like an important step towards modern medicine. One of the first uses of the stethoscope was to improve the diagnosis of women suffering from phthisis.

Where a doctor could not put his ear to a woman’s chest as he could to a man’s, he could put his stethoscope there and hear the characteristic sounds associated with phthisis. Phthisis no longer exists as a disease: we now call it tuberculosis because we think of it as an infectious disease caused by a specific micro-organism. The same sounds in a stethoscope that would once have led to a diagnosis of phthisis now lead to tests to confirm tuberculosis. But there is an important difference between our diagnosis of tuberculosis and Laennec’s diagnosis of phthisis: we can cure tuberculosis (most of the time), while his patients died of phthisis –– he died of it himself.

Until 1865 (when Lister introduced antiseptic surgery) virtually all medical progress was of this sort. It enabled doctors to get better and better at prognosis, at predicting who would die, but it made no difference at all to therapeutics. It was a progress in science but not in technology. We tend to assume that where there is progress in knowledge there is progress in therapy: for over the last hundred years the two have gone hand in hand. But before 1865 progress in knowledge rarely led to improvements in therapy. So we need a history of medicine that recognizes that progress can long be irrelevant (as in the case of the stethoscope). Nineteenth-century doctors could hear chest wheezes and heart murmurs through their stethoscopes; but there was no treatment for tuberculosis before 1942, and no effective heart surgery before 1948. Diagnosis was pointless without an effective therapy.

Only once there was a treatment for tuberculosis did the stethoscope become a powerful tool. And this is one example of a much wider pattern. Much knowledge that was effectively useless at first became useful once new therapies began to be devised. The knowledge about human physiology and the diagnostic techniques that had been accumulated by doctors over time took on a new significance once they could be used to enable effective therapies; in that sense modern doctors have been able to draw on reserves of knowledge accumulated over centuries, just as modern astronomers could draw on the knowledge accumulated by astrologers.

The idea that progress in knowledge and progress in therapy are quite distinct may seem an obvious point, but it took me a long while to grasp it. When I started working on this book, my intention was to write a history of different ways of conceiving of the human body –– in terms of the four humours (ancient and medieval medicine); as a mechanical system in which the heart functions as a bellows (the medicine of the scientific revolution); as a system of chemical interactions (nineteenth-century medicine); as a system for the replication of genes (twentieth-century medicine), and so forth. Each represented itself as an advance on its predecessors. But then I recognized that there was a fundamental difference between ideas about the body and medical therapies. Between the sixteenth and the nineteenth centuries, ideas about the body changed fundamentally, but therapies changed very little. Bloodletting was the main medical therapy in 1500, 1800, and 1850.

The discovery of the circulation of the blood (1628), of oxygen (1775), of the role of haemoglobin (1862) made no difference; the discoveries were adapted to the therapy rather than vice versa. Textbook histories of medicine make it hard to understand this because they emphasize change not continuity. And they just assume or assert that bloodletting was phased out early in the nineteenth century when in fact it continued long afterwards. Thus they try to elide a basic fact: if you look at therapy, not theory, then ancient medicine survived more or less intact into the middle of the nineteenth century and beyond.

Strangely, traditional medical practices –– bloodletting, purging, inducing vomiting –– had continued even while people’s understanding of how the body worked underwent radical alteration. The new theories were set to work to justify the old practices. Venous and arterial blood, for example, were still thought about as if they were fundamentally different even after Harvey had shown that the one changed constantly into the other, and even after it became clear that the difference between them was that one contained oxygen and the other did not. And this imaginary difference had to be preserved in order to justify the claim that letting venous blood could cure disease, while the letting of arterial blood was always to be avoided.

3. Abraham Bosse, Bloodletting, c.1635. This etching shows a doctor in seventeenth-century France tying the ligature around an aristocratic patient’s arm before letting blood.

It is because of this fundamental continuity in therapies and in theories of disease (bad air was thought to be the cause of epidemic disease in the mid-nineteenth century just as in the days of Hippocrates), even though theories of the body had undergone radical change, that I use the terms ‘Hippocratic medicine’ and ‘traditional medicine’ to cover not just the period when humoral theory was in the ascendant, but the whole period through to the rise of the germ theory of disease.

Having recognized that therapies stood still even while knowledge advanced, I had to face a deeply disturbing fact. Much of the new knowledge was founded on vivisection. This did not greatly worry me, I have to confess, for as long as I thought that all medical knowledge was useful knowledge. But how could you justify the suffering of Harvey’s experimental animals when you realized that Harvey was no better at treating the sick than any other seventeenth-century doctor? As I worked on this book, I became more and more puzzled at the way in which standard medical histories ignored vivisection, which turned out to be absolutely central to the history of medicine. Vivisection, and even dissection, I realized were difficult and emotionally disturbing subjects, and one needed to face the fact that modern medicine had been born out of a series of activities that were both shocking and distressing.

As long as I thought of medical history in terms of a continuing progress in knowledge, I could assume that dissection and vivisection were worth it; but once I realized that there was virtually no progress in therapy before 1865, I was bound to ask myself how one could justify mangling the dead and torturing the living.

And then I slowly became aware of a third problem. Histories of progress are written on the assumption that there is a logic of discovery. Once you discover α (say, germs), it is easy to discover β (say, antibiotics); without a theory of germs you will never discover antibiotics. A good example is Newton’s theory of gravity. As long as the sun, the moon, the planets, and the stars were believed to circle around the earth it seemed obvious that there were different laws of movement on earth and in the heavens –– here, natural movement was in a straight line, there it was in a circle; when, with the Copernican theory, the earth became a planet moving through the heavens, it became possible to ask if the same laws governed movement on earth and in the heavens. Copernicus is thus a precondition for Newton, and the discovery of gravity requires that one first surmount a number of major epistemological barriers, beginning with rejecting the evidence of one’s own senses, which tell one that the earth stands still. Once the epistemological barriers to a discovery have been overcome, the discovery itself ought to follow rapidly and fairly easily. The classic stories of discovery thus include priority disputes, such as whether Servetus discovered the circulation of the blood before Harvey.

Or they include independent but almost simultaneous discoveries: Priestley and Scheele, for example, both discovered oxygen; Newton and Leibniz both discovered calculus; Cagniard-Latour and Schwann both discovered that yeast is animate. The logic of scientific discovery seems so strong that it either bears individuals along, or it makes individuals irrelevant. Pasteur said that his work was shaped by an inflexible logic, and one might assume that the same logic also shaped the work of his contemporaries. Pasteur published on putrefaction in 1863; Lister developed antiseptic surgery two years later, and stressed how closely his own discovery followed on Pasteur’s work. Once Pasteur had discovered a vaccine for anthrax in 1881, the hunt for other vaccines was on. Once penicillin had been discovered in 1941, the hunt for other antibiotics was on.

But the more I looked for the logic of discovery, the more often it seemed to slip through my fingers. Harvey announced that the heart pumped blood through the arteries in 1628; yet the use of the tourniquet in amputations, which one would have thought was an absolutely elementary application of Harvey’s theory, was first pioneered by Jean Louis Petit (1674–1750), roughly a century later. Leeuwenhoek saw what we would now loosely call germs, or more accurately bacteria, through his microscope in 1677; yet in 1820 microscopes had no place in medical research, and in 1881 the conflict between germ theorists and their opponents was only just entering its final phase. Penicillin was first discovered not in 1941 but in 1872. And so on.

What we need in cases such as these is a history, not of progress, but of delay; not of events, but of non-events; not of an inflexible logic but of a sloppy logic, not of overdetermination, but of underdetermination. And these cases, it turns out, are in medicine (at least until very recently) the norm, not the exceptions.

To give a recent example, the discovery that bacteria (and not stress) cause stomach ulcers met with considerable resistance and was only generally accepted –– and rewarded with the 2005 Nobel prize for medicine –– after a prolonged delay: it is too soon to say whether this is now an exceptional case or not. Delay may have been, may still be, normal, but the reasons for it vary greatly. Let me briefly take one example.

Whenever our bodies are involved, our feelings and emotions, our hopes and fears, our delights and disgusts, are engaged. Medicine has often involved doing things to other people that you normally should not do –– touching them, hurting them, cutting them open. Think for a moment what surgery was like before the invention of anaesthesia in 1842. Imagine amputating the limb of a patient who is screaming and struggling. Imagine training yourself to be indifferent to the patient’s suffering, to be deaf to their screams. Imagine developing the strength to pin down the patient’s thrashing body. Imagine learning how to be, as Ambroise Paré, the great sixteenth-century surgeon who pioneered the tying off of blood vessels when performing amputations, put it, ‘resolute and merciless’. Imagine taking pride, above all, in the speed with which you wield the knife, in never having to pause for thought or breath: speed was essential, for the shock of an operation could itself be a major factor in bringing about the patient’s death.

Now think about this: in 1795 a doctor discovered that inhaling nitrous oxide killed pain, and the fact was published and discussed. Nitrous oxide was used as a fairground amusement; there was no mystery about its properties. Yet no surgeon experimented with this, the first anaesthetic, nor with carbon dioxide, which Henry Hill Hickman was using as a general anaesthetic on animals from 1824. The use of anaesthetics was pioneered not by surgeons but by humble dentists, not in London, or Paris, or Berlin, the centres of medical research, but first in Rochester, NY, and then in Boston. One of the first practitioners of painless dentistry, Horace Wells, was driven to suicide by the hostility of the medical profession. When anaesthesia was first employed in Europe, in London in 1846, it was called a ‘Yankee dodge’. In other words, practising anaesthesia felt like cheating. Most of the characteristics the surgeon had developed –– the indifference, the strength, the pride, the sheer speed –– were suddenly irrelevant.

Why did it take fifty years to invent anaesthesia? Any answer has to recognize the emotional investment surgeons had made in becoming a certain sort of person with a certain set of skills and the difficulty of abandoning that self-image. Interestingly, the first European to adopt the Yankee dodge was the surgeon who had least to fear from the accusation of cheating: Robert Liston, the man who best embodied the traditional skills of the surgeon, the man who worked faster than anyone else.

The history of medicine has to be something more than just a history of knowledge; it also has to be a history of emotion. And this is difficult because our own emotions are involved. The truth is that historians do not like thinking about what surgery was like before anaesthesia.

They too deafen themselves to the patients’ cries. The result is that we never actually hear what we need to hear: because we have not listened out for the screams, we never hear the eerie silence that fell over operating tables in the 1850s.

If we turn to other discoveries we find that they too have the puzzling feature of unnecessary delay we have just seen in the case of anaesthesia. So if we do start looking at progress we find we actually need to tell a story of delay as well as a story of discovery, and in order to make sense of these delays we need to turn away from the inflexible logic of discovery and look at other factors: the role of the emotions, the limits of imagination, the conservatism of institutions, to name just three. If you want to think about what progress really means, then you need to imagine what it was like to have become so accustomed to the screams of patients that they seemed perfectly natural and normal; so accustomed to them that you could read with interest about nitrous oxide, could go to a fairground and try it out, and never even imagine that it might have a practical application. To think about progress, you must first understand what stands in the way of progress –– in this case, the surgeon’s pride in his work, his professional training, his expertise, his sense of who he is.

Anaesthetics made the work of surgery easier. They were no threat to surgeons’ incomes. At first sight surgeons had everything to gain and nothing to lose from the discovery of pain relief. And indeed, from 1846, anaesthesia established itself with great speed. Yet it is clear from the inexplicable delay, from the extraordinary hostility expressed towards its inventors, from the use of the phrase ‘Yankee dodge’, that there was something at stake, some obstacle to be overcome.

That obstacle was the surgeons’ own image of themselves.

Since this book argues that real medicine begins with germ theory, at its heart there is a most puzzling historical non-event: the long delay that took place between the discovery of germs and the triumph of germ theory. It’s fairly easy to find names for things that happen –– the Scientific Revolution, the Great War. It’s much harder to name a non-event, but non-events can be every bit as important as events.

Historians regularly insist that to understand the past one must approach it as if one did not know what was going to happen next. But, despite this, they are very reluctant to take seriously the idea that things might have happened differently. The standard view is that when important things don’t happen it is because they couldn’t possibly have happened. Thus the great biologist François Jacob, in The Logic of Living Things, argues that eighteenth-century biologists could not solve the intellectual problems presented by sexual reproduction: which is why most of them accepted preformationism, the claim that every future human being was already present in Eve’s ovaries. But Jacob recognizes that another problem that exercised eighteenth-century scientists is rather different: most of them believed in the spontaneous generation of micro-organisms, but there was no logical reason for them to think that micro-organisms were different from organisms visible to the naked eye. The issues raised by spontaneous generation were nothing like as conceptually puzzling and problematic as those raised by sexual reproduction, and yet until they were resolved there could be no satisfactory germ theory of disease, and therefore no real progress in medicine.

Belief in spontaneous generation was not sustained by some insuperable intellectual obstacle. We must look elsewhere for an explanation of its endurance –– to the technical problems associated with experiments to disprove spontaneous generation, certainly, but also to a profound reluctance to accept that what one could see through a microscope could have any relevance to our own lives.

Sydenham, for example, who was as we have seen one of the first to have a modern concept of disease, acknowledged that the key processes that took place within the body must take place on a minute scale. You might think he would immediately have reached for a microscope in order to study them. But no. Writing in 1668, when the first living creature invisible to the naked eye had just been discovered, he dismissed the microscope as irrelevant. How could one hope to dissect such a minute creature and identify its internal organs? No microscope, he said, could possibly see anything so small. Consequently the microscope could not enable us to see any important process going on in our own bodies. The enquiry was abandoned as pointless before it was even begun. I do not think one can call this a rational response, so one has to assume Sydenham was in part unconscious of his own motives in rejecting the microscope.

As long as people assumed one could learn nothing of importance by looking through a microscope the debate over the spontaneous generation of micro-organisms was an intellectual backwater. The microscope became a recognized tool for research in 1830; by 1837 the key experiment disproving spontaneous generation had been performed. If Sydenham and people like him had been willing to reach for their microscopes, the issue could have been resolved at least a hundred years earlier. In saying this I am passing judgement on Sydenham for failing to understand the potential of the microscope. This is inevitable because there is no such thing as an impartial account of the debate provoked by a scientific discovery.

When Oliver Wendell Holmes, giving a farewell address to students and colleagues on his retirement from Harvard University in 1882, referred to ‘the dark ages of medicine’ when bloodletting was the cure for every disease (the quotation is one of the epigraphs to this book), he had earned the right to use such strong language, because he was talking about the medicine in which he had been educated as a young man, and during his career he had fought a series of battles to bring light into darkness. But any historian of the transformation in medicine during Holmes’s career also needs to decide whether they are for or against bloodletting. In the disputes in which Holmes had been involved the two sides had disagreed about what the relevant information was, and about how to interpret it: we have known to expect this since Thomas Kuhn’s Structure of Scientific Revolutions (1962). Since their points of view are radically different, you have to choose between them. You have to take sides.

I first began to understand this when I read a wonderful book, from which I have taken another epigraph, John Tyndall’s Floating Matter in the Air (1882). Tyndall was at the heart of the intellectual revolution associated with the triumph of germ theory. In 1875 he carried out a delicate series of experiments designed to prove the truth of germ theory and disprove the alternative, spontaneous generation. The experiments worked perfectly, and he published the results with pride. A year later he tried to repeat the experiments, and over and over again what he seemed to produce was evidence of spontaneous generation. He just could not get the results he had obtained only a year before. One might think that Tyndall should have changed his mind in the light of the new evidence. Instead he treated the new evidence as an obstacle to be overcome. He refused to give up, he refused to give in, he was determined not to be defeated. And this, every scientist would now say, was the right choice. There can be no impartial account of Tyndall’s refusal to accept the result of his own experiments, of his stubborn persistence in face of the evidence: what one makes of it depends entirely on whether one is a proponent of germ theory or of spontaneous generation.

So don’t be misled by the title of this book. This is neither an attack on the medical profession nor an indictment of modern medicine. When I was young, doctors twice saved my life: I have the scars to prove it.

More recently, a plastic surgeon performed a wonderful operation on my right hand, on which I’d lost the use of two fingers. I’m all in favour of good medicine –– but the subject of good medicine is inseparable from the subject of bad medicine. To think about one, you need to be able to think about the other, and of the two subjects, bad medicine is both the less explored and by far the larger. Before 1865 all medicine was bad medicine, that is to say, it did far more harm than good. But 1865 did not usher in a new era of good medicine. For the three paradoxes of progress –– ineffectual progress, immoral progress, progress postponed –– are still at work. They may not work quite as powerfully now as they did before 1865, but they work more powerfully than we are prepared to acknowledge. There has been progress; but not nearly as much as most of us believe.

In the final chapter of the book I will try to measure the extent of the progress that has taken place. I think most readers will be surprised to discover just how limited the achievements of modern medicine are. And, as we shall see, the paradoxes of progress do not cover the full range of problems we encounter in modern medicine. There is, for example, iatrogenesis, where medical intervention itself creates conditions that need to be treated, a particular case of doing harm when trying to do good. But other subjects, I want to stress now, lie outside the scope of this book, important though they are.

First, this book is not concerned with plain malpractice. There have always been incompetent, careless, and even malevolent doctors, but what I am concerned with in this book is the medical profession at its best. My subject is the bad medicine that was honestly believed to be good medicine. Second, this book is concerned only with physical, not with mental disease: the story of bad psychiatry would require at least a volume to itself. Third, this book is concerned with medicine in Western Europe and America. There were rapid advances after 1865 in the understanding and treatment of tropical diseases, and a chapter on typhoid or malaria might not have been out of place. But I have chosen to concentrate on medicine in those countries which first benefited from a sustained increase in life expectancy, for they are the countries in which we may best assess the impact of medical progress.

But first, I suggest, before we study progress, we must make an effort to understand failure.
