
In the eighteenth century, serious thought about the impact of medical intervention concentrated on one question: the merits of inoculation against smallpox. The procedure was simple: a thread covered with pus from a fresh pock on someone mildly infected with smallpox was pressed into a cut on an arm and on a leg of someone who had not yet had the disease. This usually resulted in a very mild case of smallpox, from which the individual rapidly recovered, and remained henceforth immune against further attacks. Smallpox inoculation had been practised in Turkey, China, and elsewhere, but was first publicized in Northern Europe by Lady Mary Wortley Montagu, who had lived in Constantinople (and who herself was seriously disfigured by smallpox). She persuaded the princess of Wales to have two of her daughters inoculated in 1722. A trial was first made on six condemned prisoners in Newgate, on the understanding that if they survived they would be released. All six survived. As a result inoculation became increasingly widespread thereafter. Inoculation against smallpox raises a number of delicate questions and, although it is easy for us to assume we know what the right answers to those questions were, contemporaries were right to find them difficult and perplexing. At one level the argument was straightforward. The chances of dying from inoculated smallpox were at first estimated at one in a hundred –– in 1723 James Jurin, who was Secretary to the Royal Society, and thus in a position to correspond with experts around the world, did a careful study which produced a figure of one in ninety-one. The chances of dying from normal smallpox were known to be around one in ten (excluding those who died under the age of 2). Most people were exposed to smallpox at some point. It therefore seemed to follow straightforwardly that inoculation would save large numbers of lives.
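The arithmetic behind this conclusion is easy to make explicit. A minimal sketch, using the figures quoted above (one in ten dying of natural smallpox, one in ninety-one of inoculated smallpox); the cohort of 1,000 is an arbitrary round number for illustration, not a figure from the sources:

```python
# Expected smallpox deaths per 1,000 people exposed, using the
# eighteenth-century estimates quoted in the text:
#   natural smallpox killed roughly 1 in 10 of those who caught it;
#   inoculated smallpox killed roughly 1 in 91 (Jurin, 1723).
# The cohort size of 1,000 is a hypothetical illustration.

natural_risk = 1 / 10
inoculated_risk = 1 / 91

cohort = 1000
deaths_natural = cohort * natural_risk        # 100 deaths
deaths_inoculated = cohort * inoculated_risk  # about 11 deaths

lives_saved = deaths_natural - deaths_inoculated
print(round(deaths_natural), round(deaths_inoculated), round(lives_saved))
```

On these figures universal inoculation cuts smallpox mortality roughly ninefold, which is why, before contagion had been factored in, the conclusion seemed to follow so straightforwardly.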
John Arbuthnot, a mathematician and doctor, published in 1722 a statistical analysis that showed that one in twelve deaths in London were due to smallpox, though he still preferred the claim that smallpox killed one in ten of the susceptible population on the grounds that many infants cheated smallpox by dying before it had a chance to infect them. (This line of argument was mistaken: Haygarth and Percival would later show that one quarter of all smallpox fatalities were children under one year of age, and Arbuthnot had probably underestimated the proportion of Londoners dying of smallpox.) Similar but more complex calculations by James Jurin resulted in a more reliable figure of one in seven. Inoculation, it was argued, would save 1,500 lives a year in London alone. In England, despite a few vocal objectors, the case for inoculation was generally found persuasive. In France, on the other hand, inoculation was rejected by the medical establishment, particularly the Paris Faculty of Medicine –– which continued to reject Harvey’s theory of the circulation of the blood, along with one of the few effective drugs to be discovered in the Renaissance, cinchona, from which quinine (for the treatment of malaria) was later to be extracted. French doctors did not practise inoculation, and those in France who wanted to be inoculated had to turn either to laymen or to foreign doctors. Nevertheless, French intellectuals, such as Voltaire and La Condamine, urged their compatriots to copy the English. In 1760 La Condamine’s friend Maupertuis persuaded the great Swiss mathematician Daniel Bernoulli to enter the debate. Bernoulli set out to calculate the increase in average life expectancy that would result from inoculation, and came up with the figure of two years. In reply Jean d’Alembert questioned whether most people were actually prepared to run a significant risk (say one in a hundred) of immediate death in order to gain only two years.
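Bernoulli's and d'Alembert's positions can be read as two summaries of the same gamble. A minimal sketch of the arithmetic: the two-year average gain and the one-in-a-hundred risk are from the text, but the baseline remaining life expectancy of thirty years is a hypothetical assumption for illustration only:

```python
# Bernoulli vs d'Alembert: the same gamble, summarized two ways.
# Figures from the text: an average gain of 2 years and a 1-in-100
# risk of immediate death. The 30-year baseline remaining life
# expectancy is a hypothetical number chosen for illustration.

baseline = 30.0   # years of remaining life without inoculation
avg_gain = 2.0    # Bernoulli's population-average gain
p_die = 0.01      # immediate death risk from inoculation

# For the population average to come out right, survivors must gain
# enough to offset those who die at once:
#   (1 - p_die) * (baseline + survivor_gain) = baseline + avg_gain
survivor_gain = (baseline + avg_gain) / (1 - p_die) - baseline

print(f"average gain: {avg_gain} years")
print(f"gain for the 99 in 100 who survive: {survivor_gain:.2f} years")
print(f"loss for the 1 in 100 who die: {baseline} years, all at once")
```

The average gain of two years is consistent with each survivor gaining barely more than that while one person in a hundred forfeits thirty years at a stroke; d'Alembert's point was that no ratio derived from this arithmetic tells you how to weigh the one outcome against the other.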
The state and the society might gain if everyone was inoculated, but d’Alembert had considerable sympathy with cowardly individuals who had no desire to put their lives at risk by a deliberate act. He saw himself as taking the viewpoint of a mother of a small child rather than a father. You could not reduce, d’Alembert argued, the sort of decision involved in deciding to be inoculated to a simple cost–benefit analysis. There was no simple ratio that would allow one to weigh an increased life expectancy against a risk of immediate death. In the 1750s inoculation techniques in England were greatly improved, the infected matter now being introduced under the skin rather than into the bloodstream. The procedure was safer. The practice of first bleeding and purging people before inoculation, and of subjecting them to a special diet and a period of rest lasting for a month prior to the operation, in order to ensure they were fighting fit, was abandoned. Inoculation thus became cheaper, and consequently more widespread. But a person inoculated against smallpox was capable of giving the disease in its virulent form to anyone they contacted. Early calculations of the number of lives that might be saved by inoculation made no attempt to factor in the number of people who might be infected as a result of inoculation. While smallpox was endemic in large cities such as London, and almost everyone was exposed to it sooner or later, it was epidemic in smaller towns and villages. In Chester, for example, one person died of smallpox in 1773, but 202 died (more than one third of all deaths) in 1774. In small towns and villages some people may have escaped exposure throughout their lifetimes. To inoculate a few people in such a town or village would be to endanger all the others. In a small village or even town this problem could be avoided by inoculating everyone at once, but in a large city this was not feasible.
Thomas Dimsdale, who had been paid £10,000 to inoculate the empress of Russia, and who had carried out a general inoculation of the whole population of Hertford, argued in 1776 that there had been a recent increase in deaths from smallpox in London, and that indiscriminate inoculation was responsible. He thought that the recently founded (London) Society for the General Inoculation of the Poor, which made no attempt to quarantine those it had inoculated, was recklessly endangering lives. John Haygarth agreed. Haygarth was acutely conscious of the problem of contagion: he was the first to stress the importance of isolating patients with fevers from others in order to halt the spread of infections. He first advocated the establishment of separate fever hospitals, but then became convinced that it was easier to manage infections than he had supposed –– that one needed, for example, to come within eighteen inches of a smallpox sufferer in the open air to become infected. So he became convinced that separate wards were all that was required. Dimsdale and Haygarth, however, had very different responses to the possibility that inoculation might save the lives of a few but at the same time spread the infection to many others. Dimsdale was prepared to abandon the poor of London to the natural course of the infection. Haygarth wanted a national campaign to eradicate smallpox entirely, but the basis of such a campaign had to be the effective isolation of the newly inoculated. What was crucial to Haygarth’s grandiose project of 1793 was that he had made a close study of how smallpox actually spread. He had established the length of the infective period. He knew how close you had to be to someone to be infected even if he did not know why (smallpox is spread by droplets –– he thought it was disseminated by a poison or virus dissolved in the air).
Haygarth knew that the disease was normally transmitted between people who were close to each other, but that it did not require physical contact. He had thus been able to draw up Rules of Prevention that were tested out in Chester between 1778 and 1783 and were shown to halt the spread of the disease. The Rules banned anyone who had not had smallpox from visiting anyone who had the disease, or anyone who had the disease from leaving their house until the scabs had dropped off, and even then not before their body and their clothes had been washed. Anything the patient had touched was to be washed, and anyone dealing with the patient must wash their hands before touching anything that was to leave the patient’s house. The Chester experiment immediately became the model for inoculation campaigns in larger cities such as Leeds and Liverpool. Ideally the Rules were to be enforced by a mixture of carrot and stick –– monetary rewards for those who complied, and fines for those who did not (even today in the UK, doctors receive a financial reward for meeting inoculation targets). Applied on a national scale they would, Haygarth was well aware, require an army of government inspectors. But he was convinced that only if one combined inoculation with the Rules could one be confident that one was making progress towards eliminating the disease rather than helping to spread the infection. Haygarth called on another brilliant mathematician, John Dawson, to calculate the impact of his proposals: according to Dawson, they would result in the population of Great Britain being increased by a million within fifty years (a claim which is roughly compatible with Bernoulli’s calculated gain in life expectancy). This whole argument became irrelevant within a few years. Jenner’s cowpox vaccination (1796) provided immunity to smallpox, but cowpox was not itself contagious amongst humans. Vaccination, unlike inoculation, carried with it no risk of spreading smallpox.
It is easy to tell the story of inoculation as if it was clear from the beginning who was right and who was wrong. Lady Mary Wortley Montagu, Voltaire, Arbuthnot, Bernoulli are the heroes of this story and the Paris Faculty of Medicine the villains. If you tell the story this way Dimsdale’s concerns have to be belittled, as they were by some contemporaries. There was always some fluctuation in death rates from smallpox, even in London where they were more stable than elsewhere, so that Dimsdale’s evidence for a rising death rate in the years 1768 to 1775 (which he attributed to more widespread inoculation) was and still is dismissed. In modern accounts, inoculation becomes indistinguishable from vaccination, an obvious benefit to mankind. But this way of telling the story ignores the fact that early campaigners for inoculation gave no thought at all to the risk of spreading the disease. Dimsdale and Haygarth were the first to recognize the possibility of what we now call iatrogenesis, to see that one had to measure the adverse consequences of medical intervention as well as the benefits. (The word iatrogenic dates to 1924, but at first it seems to have been used only in the context of mental diseases exacerbated by psychiatrists; the earliest example the OED gives of its use in the context of a disease that is not psychological in origin dates to 1970.) Those who practised inoculation before the formulation of Haygarth’s Rules were relying on blind faith. They knew that smallpox was contagious, but they did not want to consider the possibility of contagion. Bernoulli felt sure that he could show that society benefited from inoculation, even if no mathematical formula could determine if individuals did –– but his confidence was spurious. Without proper study of the period of infectivity and the mode of transmission the consequences of inoculation were unpredictable and incalculable.
The caution of those such as the Paris Faculty of Medicine who feared that inoculation might do more harm than good was entirely sensible. Haygarth’s achievement in formulating his Rules represents an important moment in the history of medicine, for Haygarth was the first to identify and seek to minimize the unintended consequences of medical intervention. In fully recognizing the risks as well as the rewards associated with inoculation, and in taking the risks as seriously as the rewards, Haygarth provided a model of how medical knowledge and public health policy should progress. But in the textbooks the risks associated with inoculation disappear from the story, and as a consequence Haygarth either disappears from view or becomes an uncritical advocate of inoculation. As we shall see, the Rules are not the only reason why Haygarth deserves to be better known. In sixteenth-century Venice novel therapies were sometimes tested on condemned criminals –– there could, it seemed, be no objection to experimenting with the life of someone who was already about to die. We have seen that a similar trial of smallpox inoculation was conducted in London in 1721. But such tests never compared two competing therapies. There are no reported clinical trials until those described by Ambroise Paré, in his Œuvres of 1575. Paré was a great surgeon, the first to tie off blood vessels during amputations, although unfortunately he did not see the need to apply a tourniquet, which meant that fatalities were still frequent. He describes two occasions (in 1537–8) when he carried out a comparative trial of two therapies.
At stake was the conventional view that bullets did not only tear a hole in you, and gunpowder did not just supply an explosive force: gunpowder, it was held, killed by poisoning as well as burning, and gunpowder residue on bullets introduced poison into the body, and so an anti-venom remedy (oil of elderberry, to which a little theriac had been added, producing a paste usually applied boiling hot) should be administered. On one occasion Paré was called to deal with a drunk who had accidentally set fire to his flask of gunpowder, which had exploded in his face. Following the advice he had earlier received from an old woman on how to treat burns from boiling oil, he applied an ointment of onions to one side of his patient’s face, but the usual remedy to the other: the onion ointment clearly prevented the blistering and scarring which disfigured the side of the face to which the burning paste had been applied. On another occasion Paré ran out of the paste while treating a group of soldiers who had been hit by gunfire, so that some were treated only with a cold dressing of egg yolk, turpentine, and oil of roses, which was normally applied only as a second dressing, after the burning paste, to encourage healing. These patients did far better than those who had received the orthodox treatment, which Paré abandoned thereafter. Paré’s publication in 1545 of a treatise on how to treat gunshot wounds served to kill off the myth that gunpowder acted as a poison. Paré had no doubt that he had learnt something important from his two comparative trials, yet he never conducted any others. Surgery was a relatively empirical and untheoretical discipline, and surgeons were not doctors –– they had not benefited from a university education, which is why in England they are still titled ‘Mr’. Surgeons like Paré were therefore relatively open to innovation. It is also important that gunpowder burns and gunshot wounds were new to Western medical science.
Guns had only been important on the battlefield for a hundred years or so. Paré did not have to question long-established authority in order to abandon theriac. No one was being asked to deny the authority of Hippocrates or Galen. Paré’s trials were accidental and ad hoc. We have to wait more than a hundred years for a proper clinical trial to be proposed. In Oriatrike, Johannes Baptista van Helmont (d. 1644), not himself a doctor and, as we have seen, a bitter opponent of conventional medicine, suggested that five hundred sick people should be randomly divided into two groups, one to be treated with bloodletting and other conventional remedies, and the other by his alternative therapies. Helmont was not nearly influential enough to bring such a trial about, but his followers continued to propose such tests of the efficacy of his remedies. Thus in 1675, Mary Trye claimed that if her medicine for curing smallpox was tested against the conventional remedy of bloodletting, then it would be found that the proportion of her patients that survived would be twice that amongst those receiving conventional therapies. In 1752 the philosopher and doctor of divinity (but again, no doctor) George Berkeley suggested a similar experiment to test the value of tar water in the treatment of smallpox. These are lonely examples that show that there was nothing particularly difficult about conceiving of a clinical trial. But none of these proposals came from doctors, and actual trials never took place. Ironically, had such trials been performed they would have shown that the therapies favoured by the Helmontians and by Berkeley were, like the mercury therapy favoured by MacLean in 1818, little better than conventional treatments. One doctor has become established in the literature as the inventor of the modern clinical trial. James Lind was a Scot, and he qualified first as a surgeon and then as a doctor. In 1753 he published A Treatise of the Scurvy.
Scurvy is a condition that we now know is caused by vitamin C deficiency. The first symptoms are swollen gums and tooth loss. The victim soon becomes incapable of work. Death follows, though not swiftly. The standard medical therapies were (of course) bleeding, and drinking salt water to induce vomiting. A patent remedy was Ward’s Drop and Pill, a purgative and diuretic. In other words, scurvy was understood in humoral terms and the remedies were the conventional ones –– bleeding, purging, and emetics. Scurvy becomes a problem only if you have a diet that contains no fresh vegetables. In the Middle Ages castle garrisons subjected to prolonged sieges came down with it, but it became a major problem only with the beginning of transoceanic voyages: if you are healthy to begin with, you will only show symptoms of scurvy after you have been deprived of vitamin C for some ten weeks. Ancient and medieval ships stayed close to land and came ashore regularly to take on water and fresh food; but once ships embarked on long ocean voyages they needed to carry food supplies which would not perish, usually salted beef and hard tack (dried biscuit, notoriously full of weevils). Any fresh vegetables were hastily consumed before they could perish. In 1740 George Anson commanded a fleet of six ships and two thousand men against the Spanish in the Pacific Ocean. Only about two hundred of the crew survived, and nearly all the rest died of scurvy. During the Seven Years War, 184,899 sailors served in the British fleet (many of them press-ganged into service); 133,708 died from disease, mostly scurvy; 1,512 were killed in action. The normal death rate from scurvy on long voyages was around 50 per cent. One estimate is that two million sailors died of this dreadful disease between Columbus’s discovery of America and the replacement of sailing ships by steam ships in the mid-nineteenth century.
These death rates are so horrific that one is bound to think that only people who were incapable of statistical thinking would have volunteered to sail on long voyages; and yet these same sailors must have been perfectly capable of calculating the odds when betting at cards. It takes further enquiry to establish an even more surprising fact: the medical profession were responsible for almost all these deaths (for, when good arguments are beaten from the field by bad ones, those who do the driving must bear the responsibility). In 1601 Sir James Lancaster had stocked his ship, sailing to the East Indies, with lemon juice. The practice became standard on ships of both the Dutch and English East India Companies in the early seventeenth century. The power of lemons to prevent scurvy was known to the Portuguese, the Spanish, and the first American colonists. By the early seventeenth century the problem of scurvy had effectively been solved. Yet this treatment made no sense to doctors with a university education, who were convinced that this disease, like every other, must be caused by bad air or an imbalance of the humours, and it was under their influence (there can be no other explanation) that ships stopped carrying lemons. This is a remarkable example of something that ought never to occur, and is difficult to understand when it does. Ships’ captains had an effective way of preventing scurvy, but the doctors and the ships’ surgeons persuaded the captains that they did not know what they were doing, and that the doctors and surgeons (who were quite incapable of preventing scurvy) knew better. Bad knowledge drove out good. We can actually see this happening. There is no letter from a ship’s surgeon to his captain telling him to leave the lemons on the dock, but we do know that the Admiralty formally asked the College of Physicians for advice on how to combat scurvy.
In 1740 the College recommended vinegar, which is completely ineffectual but which now became standard issue on navy ships. In 1753 Ward’s Drop and Pill also became standard issue. Historians, far from holding doctors responsible for the two million deaths from scurvy, credit a doctor with discovering the cure. James Lind was a surgeon (and not yet a qualified doctor) aboard HMS Salisbury in 1747, serving in the Channel Fleet. Of his crew of 800, 10 per cent were suffering from scurvy. Of the eighty, he took twelve and put them all on the same diet. He divided the twelve into six pairs. Two were given cider each day; two were given elixir of vitriol; two were given vinegar; two were given salt water; two were given a herbal paste and a laxative; two were given oranges and lemons. Some other sailors were given nothing. Within a week those on oranges and lemons were cured (and the ship’s stock was exhausted, so the remedy could not be tried on others). Those on cider made some slight progress. The rest deteriorated. Lind had conducted the first clinical trial since Paré; indeed he was the first medical doctor to conduct a clinical trial. He had discovered an effective therapy; eventually that therapy was universally adopted. The modern history of therapeutics begins here. Or does it? Lind waited six years, and first qualified as a doctor, before publishing his remedy. His book, A Treatise of the Scurvy, was four hundred pages long, and only four pages were devoted to his clinical trial. The rest were devoted to a complex theoretical argument and to a review of the literature. His basic theory was humoral, although he presented a modernized humoralism in that he stressed the importance of perspiration through the skin for the overall balance of the humours. (Sanctorius, in the early seventeenth century, had been the first to try to measure the amount of fluid lost through perspiration.) The cause of scurvy, Lind argued, was a blockage of the pores caused by damp air.
Lemons, he claimed, had a particular capacity to cut through this blockage: he thought this was something to do with their being acidic, although he admitted that other acids (such as vinegar) lacked the requisite quality. Contemporary readers saw nothing decisive in Lind’s arguments, and one can think of some obvious objections to them. Did sailors not sweat? If the problem was an imbalance of the humours, why should traditional remedies not work? The book was translated and reprinted, but it did not alter the practice of ships’ surgeons. In 1758 Lind was appointed chief medical officer at the Royal Naval Hospital at Haslar, the largest hospital in England. There he was responsible for the treatment of thousands of patients suffering from scurvy. But he treated them with concentrated lemon juice (called ‘rob’), and he concentrated the lemon juice by heating it to a temperature close to boiling. He also recommended bottled gooseberries. In both cases, the heat employed destroyed much of the vitamin C, and Lind conducted no tests to compare his concentrates with fresh fruits: he just assumed they were the same thing. As a result he seems to have gradually lost faith in his own remedy, which had actually become less effective, and he became increasingly reliant on bloodletting. When he published in 1753 he had not heard of the argument of the Polish doctor Johan Bachstrom, who had maintained that scurvy ‘is solely owing to a total abstinence from fresh vegetable food and greens, which is alone the true primary cause of the disease’. By 1773, when he published the third edition of his book on scurvy, he rejected Bachstrom’s claim. It was impossible, Lind insisted, to reduce the cause or cure of scurvy to a matter of diet. Thus Lind himself had no clear understanding of exactly what it was that he had discovered in 1747, and no grasp of the importance of the clinical trial as a procedure.
He conducted various trials of therapies at Haslar, in the elementary sense that he tried them out, but, despite having ample opportunity, he never gave an account of any other trial in which therapies were compared directly against each other. He does say in his Essay on Diseases Incidental to Europeans in Hot Climates (1771) that he had ‘conducted several comparative trials, in similar cases of patients’ on drugs to alleviate fever, but what he actually gives his readers are conventional case histories. Moreover his therapeutic practice remained entirely conventional. We find him on a single day bleeding ten patients with scurvy, a woman in labour two hours before her delivery, a teenage lunatic, three people with rheumatism, and someone with an obstruction of the liver. He was cautious when bleeding people with major fevers, but only because he preferred to use opium or to induce vomiting. If Lind had invented the clinical trial, then he had done a profoundly unsatisfactory job of it. Why then has Lind become a major figure in the history of medicine? The answer is that when the formalized clinical trial for new drug therapies was introduced in the middle years of the twentieth century there was a natural desire to look back and find its origins. In 1951 A. Bradford Hill published his classic article ‘The Clinical Trial’ in the British Medical Bulletin. It contains references to nine scientific publications, all in the previous three years. As far as Bradford Hill was concerned the clinical trial was a brand-new invention, introduced to test drugs such as streptomycin. Streptomycin had been discovered in 1944, and in 1946 the (British) Medical Research Council began testing it on patients with tuberculosis –– the results were published in 1948. Because streptomycin was in short supply it was decided that it was ethical to have a control group that did not receive the drug.
This was claimed at the time to be the first randomized clinical trial reported in human subjects (Pasteur had done clinical trials with silkworms and sheep). It was precisely at this point that Lind was rediscovered to give the randomized clinical trial an appropriate history, with an article by R. E. Hughes on Lind’s ‘experimental approach’ in 1951, and an article by A. P. Meiklejohn in 1954 on ‘The Curious Obscurity of Dr James Lind’. Lind’s importance is entirely the result of backward projection. If Lind did not succeed in curing scurvy, who did? In 1768 Captain James Cook’s Endeavour set sail on the first great voyage of exploration in the Pacific. When it returned more than two and a half years later, not one of the crew had died from scurvy. Cook had primarily relied on sauerkraut to keep the scurvy at bay, and in fact it does contain a little vitamin C. He had also taken on fresh vegetables whenever he made landfall. On his next voyage, which lasted three years, he took a number of remedies –– Lind’s concentrated lemon juice, an infusion of malt called ‘wort’, carrot marmalade, and soda water amongst them. Taken together the remedies worked, and though Cook thought soda water useless, and never administered the carrot marmalade, he had no idea which of the others were effective and which were not. Cook at one point acknowledged that a careful inspection of the ship’s surgeon’s journal might clarify the point; but no such inspection was ever made. On his third voyage, in search of the Northwest Passage, Cook was killed in Hawaii (1779); but his crew returned after a voyage of four and a half years without having lost anyone to scurvy. Cook had shown that long voyages could be undertaken without crews suffering from scurvy, but no one knew exactly how he had achieved this; he did not know himself.
The consensus view, however, supported by appropriately elaborate medical theories, was that it was the wort that had done the trick; when a merchant sea captain wrote to the Admiralty in 1786 informing them that lemon juice mixed with brandy always cured scurvy he was told straightforwardly that he was wrong: ‘trials have been made of the efficacy of the acid of lemons [i.e. rob] in the prevention and cure of scurvy on board several different ships which made voyages round the globe at different times, the surgeons of which all agree in saying the rob of lemons and oranges were of no service, either in the prevention or cure of that disease.’ The first person in the navy, after Lind, to give unconditional support to lemon juice was Gilbert Blane, appointed physician to the West Indies Fleet in 1780. Blane seems first to have established the peculiar efficacy of lemons by trial and error, for he started with both lemons and wort on board his ships. In 1793 a formal trial of lemon juice was made, at Blane’s suggestion, on the Suffolk: the ship was at sea for twenty-three weeks, crossing from England to the East Indies, without taking on fresh food. Lemon juice was administered preventatively, and when scurvy appeared the dose was increased, with satisfactory results. In 1795 lemon juice became part of the daily ration throughout the navy, so that by the end of the 1790s scurvy had been virtually eliminated. It had taken fifty years for Lind’s discovery of the curative power of lemon juice to be generally adopted, and no further controlled clinical trials had been conducted. It was Gilbert Blane, not Lind, who had persuaded the navy to adopt lemons, and the triumph of lemon juice over wort had done nothing to further the idea that therapies should be subjected to systematic comparative testing. Blane had never conducted a comparative test of lemon juice against other therapies.
Lind’s failure to press home the implications of his single trial, and his failure to repeat and extend it, mean that he actually deserves to be left in obscurity. If one wants to identify key figures in the invention of the clinical trial it would be better to look elsewhere. In 1779, for example, Edward Alanson published Practical Observations on Amputation. He recommended new techniques –– the cutting of a flap to close the wound, and the laying together of wound edges to facilitate healing. He compared his results before he adopted his new methods (10 fatalities out of 46 amputations) with his recent results (no fatalities out of 35 amputations). This persuasive statistical argument had a major impact on surgical technique in England and Europe (though the French remained sceptical). Alanson’s control group was historical, and he did not randomly select who was to receive what treatment; but he was using numbers to effect. In 1785 William Withering published a careful account of his use of the foxglove (the active ingredient being digitalis) to treat a total of 163 patients suffering from dropsy (or what we might now diagnose as congestive heart failure). But in practice digitalis, once it had established itself in the pharmacopoeia, was soon being used to treat a whole host of diseases, and was often not used to treat dropsy –– bad knowledge had once again driven out good. Even more important than the work of Alanson and Withering is John Haygarth’s Of the Imagination as a Cause and as a Cure of Disorders of the Body (1800). Haygarth wanted to debunk the claims made on behalf of some instruments, metal pointers called ‘tractors’, presumably because they were used to ‘draw out’ diseases, that had been patented by an American, Elisha Perkins (1744–99) and that were sold at the astonishing price of 5 guineas.
These briefly had an enormous success, particularly in the fashionable and moneyed world of Bath, where Haygarth had retired from the practice of medicine. He set out to show that he could obtain remarkable cures, first of rheumatism and then of other conditions, with pointers that vaguely resembled those of Perkins, but were made of any other substance. What made the cure, he argued, were not the patented tractors, but the demeanour of the doctor and the credulity of the patient. The fact that fake (or as he quaintly called them ‘fictitious’) tractors worked as well as real ones did not only show that the real ones had no peculiar therapeutic quality. It also proved ‘to a degree which has never been suspected, what powerful influence upon diseases is produced by mere imagination’. This was the discovery of the placebo effect, though the word placebo first appears in English rather later, in 1832. One of Haygarth’s colleagues had also elucidated the limits of the imagination in effecting a cure. He had used Haygarth’s fictitious tractors on a woman who suffered from pain in her arm and from an inability to move her elbow joint, which had become locked by an abnormal growth of bone. Her imagination had cured her of her pain, and she thought it had given her new movement in her elbow; but in fact if one watched closely one could see her elbow was still locked, and she was merely compensating more successfully with increased movement in her shoulder and wrist.


26. A set of Perkins tractors.

Haygarth also noticed a small number of cases in which patients got worse not better when the fictitious tractors were applied: thus he showed that imagination could cause as well as cure diseases. In such cases, we might say, the symptoms Haygarth was producing in his patients were psychosomatic in origin; but Haygarth did not pursue this line of thinking. More importantly, he did not reverse it: he did not claim that the symptoms that were cured by fictitious tractors were psychosomatic. This is important because doctors had known since ancient times that emotions could give rise to physical symptoms and that these symptoms could be cured by a change in the patient’s emotional state. Edward Jorden in 1603, for example, had discussed the case of a young man who had fallen out with his father and then fallen victim to ‘the falling sickness’ (epilepsy): he had been cured by a kind letter from his father. Haygarth did not argue that the fictitious tractors only worked to cure conditions that were psychological in origin. Haygarth believed that his experiments with fictitious tractors explained why a famous doctor was often more successful in his practice than someone without an established reputation, and why a new medicine was often more successful when it was first introduced than when it had been around for some time. One doctor or one medicine might be more successful than another because they were more effective in eliciting the cooperation of the patient’s imagination. For real success, he claimed, it was important that both the doctor and the patient should be believers: ‘Medical practitioners of good understanding, but of various dispositions of mind, feel different degrees of scepticism in the remedies they employ. One who possesses, with discernment, the largest portion of medical faith, will be undoubtedly of greatest benefit to his patients.’ Here Haygarth’s conclusion was at odds with his own research.
He had successfully shown that sceptics, using fictitious tractors, could, by pretending to believe, elicit results indistinguishable from those achieved by true believers using genuine Perkins tractors. His doctors had cynically used the patter employed by the advocates of the Perkins tractor, without believing for a moment what they were saying. Why pretend otherwise? One can only assume that Haygarth wanted to protect himself against the charge of encouraging lying and hypocrisy when he asserted, against the evidence of his own trials, that if one wants to touch one’s patient’s heart one must speak what one feels. But what Haygarth had done was suggest that much standard medicine relied entirely on the placebo effect. Within a few years the arguments he had deployed to explain the apparent success of Perkins tractors were to be employed by his medical colleagues to explain the success of homeopathy. Oliver Wendell Holmes’s essay on ‘Homeopathy and its Kindred Delusions’ (1842) contains an extended discussion both of the Perkins tractors and of Haygarth’s fictitious tractors. But the genie was out of the bottle. If the placebo effect could explain the success of Perkins tractors and of homeopathy, what part of orthodox medicine was based on a similar delusion? Haygarth’s importance lies in the fact that he was the first to ask this question. Haygarth does not appear in histories of medicine for his discovery of either iatrogenesis or the placebo effect. They make no mention of Perkins tractors. For them Haygarth is important (if at all) as an advocate of smallpox inoculation. Instead of discussing Haygarth on the power of the imagination, they discuss the commission established by the king of France in 1785 to enquire into the cures achieved by Franz Anton Mesmer. Mesmer had achieved enormous success in Paris by claiming that diseases could be cured by manipulating the patient’s ‘animal magnetism’, a process that induced trances, fits, and faints.
The commission, which reported in 1785, included the great chemist Lavoisier, Benjamin Franklin, and the now infamous Dr Joseph Guillotin. The commissioners set out to subject Mesmer’s claims to tests very similar to those that Haygarth was to devise a few years later to test the Perkins tractor. Thus they had someone trained by Mesmer ‘mesmerize’ a glass of water, and then they offered a patient five glasses of water to drink, including amongst them the mesmerized glass. The patient fainted on drinking from one of the glasses –– but it was a glass of ordinary water. They concluded that Mesmer’s cures were produced by the power of his patients’ imaginations, and their enquiry certainly was one of the first systematic attempts to devise a ‘blind’ test of a therapy: as such it is an important moment in the birth of evidence-based medicine. But, unlike Haygarth, the royal commission did not go on to acknowledge that the power of the imagination must play a role in orthodox medicine; they failed to recognize that what we would now call the placebo effect is present in all medical treatment. They failed to direct their scepticism towards orthodox medical practice. Medical historians would seem to have a similarly blinkered vision. They are interested in Haygarth because he discovered how to prevent infections spreading from one patient to another, and in the French royal commission because it pioneered the blind trial. Yet, as we have seen, what is really important about Haygarth is that he was the first properly to understand (if not to name) both iatrogenesis and the placebo effect: he understood that hospitals spread infections and that inoculation might spread smallpox; and he recognized that conventional medicine relied in large part on the same power of imagination as that evoked by the Perkins tractor.
The initial usage of the word placebo was to refer to a pill (made of flour, sugar, or some other inert substance) given to reassure a patient for whom no effective treatment was available. The first use of the placebo in clinical trials was apparently in Russia in 1832 (coincidentally the year in which the word first appears in English). There trials were being carried out to test the effectiveness of homeopathy. In these tests homeopathy was systematically compared with placebo therapy (pills made of flour paste), and found to be no more effective. This was the first occasion on which the test of effectiveness in a therapy was defined as being more effective than a placebo –– one of the tests employed today (therapies can also be assessed against no treatment, or against alternative treatments). By the early nineteenth century there was thus nothing problematic about the idea of a controlled trial of a medical therapy. In 1816 an Edinburgh military surgeon called Alexander Lesassier Hamilton described, in a published thesis so obscure as to be little read (it was in Latin), a model trial that he had carried out on sick troops in 1809, during the Peninsular War. The troops were randomly divided into three groups, one third receiving bloodletting and two-thirds not. Of the 244 soldiers treated by alternative methods six died, while of the 122 whose blood was let, 35 died. Unfortunately Lesassier Hamilton was an incorrigible liar who led a dissolute life (we know a great deal about him because all his private papers were seized during divorce proceedings, and were preserved for posterity in the archives of his wife’s lawyers), and the story is almost certainly an invention. His detailed diary for 1809 contains no reference to any trial. Nevertheless, Lesassier Hamilton invented only to impress, so his story does tell us what he thought his examiners would want to read.
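The gap between the two mortality rates in Hamilton's reported figures is stark, and a modern reader can quantify it in a way Hamilton himself could not have done. The sketch below (an illustration only, not anything from the period) runs his numbers through a standard two-proportion z test:

```python
from math import sqrt

# Lesassier Hamilton's (almost certainly invented) figures:
# 244 soldiers treated by other methods, 6 deaths;
# 122 soldiers whose blood was let, 35 deaths.
d1, n1 = 6, 244    # no bloodletting
d2, n2 = 35, 122   # bloodletting

p1, p2 = d1 / n1, d2 / n2          # mortality: ~2.5% vs ~28.7%
p = (d1 + d2) / (n1 + n2)          # pooled death rate
se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se                 # two-proportion z statistic, ~7.5

print(f"mortality without bleeding: {p1:.1%}")
print(f"mortality with bleeding:    {p2:.1%}")
print(f"z = {z:.1f}")
```

A z value this large lies far beyond any conventional significance threshold, which is precisely why the figures look too good: Hamilton invented a result no reader could quibble with.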
The most important advocate of the new numerical method was Pierre-Charles-Alexandre Louis, whose Recherches sur les effets de la saignée were first published in a learned journal in 1828, then expanded as a book in 1835. They were translated into English and published in America (1836). Looking back in 1882, Oliver Wendell Holmes, who had studied under Louis, thought this ‘one of the most important written contributions to practical medicine, to the treatment of internal disease, of this century’. In his original article, Louis looked at three conditions, pleuropneumonia, erysipelas of the face, and angina tonsillaris, all classified at the time as inflammatory diseases, and therefore believed to respond well to bleeding. In the case of pleuropneumonia every patient (there were a total of 78) was bled, but some were bled sooner than others and some more frequently than others. Louis could find little indication that when or how often the patient was bled affected the outcome, whether one looked at the proportion who survived or, amongst the survivors, at the amount of time it took to recover to good health. In the case of erysipelas of the face Louis looked at thirty-three patients, twenty-one of whom were bled. Those who were bled recovered three-quarters of a day sooner than the others. In the case of angina tonsillaris Louis looked at twenty-three severe cases, of whom thirteen were bled. In the case of the patients who were bled, recovery took place a day and a quarter earlier than in the case of the patients who were not bled. The conclusion was obvious and even italicized: ‘bloodletting, notwithstanding its influence is limited, should not be neglected in inflammations which are severe and are seated in an important organ’. In the edition of 1835, Louis considered additional cases. Again, every case of pleuropneumonia, now called pneumonitis, had been bled. The conclusion was now rather different from before.
It was ‘that bloodletting has a happy effect on the progress of pneumonitis, that it shortens its duration; . . . that patients bled during the first four days recover, other things being equal, four or five days sooner than those bled at a later period’. (The ‘four or five days’ claim is very puzzling: on Louis’s own figures, the difference was two-and-three-quarters days.) He then ended his book with a survey of recent books on bloodletting. The survey would be wonderfully amusing, were the subject not so deadly serious. Here, for example, is a small part of his discussion of a doctor Vieusseux, who had published a treatise on bloodletting in 1805: Our author, as may easily be conceived, has not been very difficult as to particular examples; and in adducing proof of this I am embarrassed only as to a choice among the cases he states. Thus on the subject of abdominal diseases, which he thinks are often attended with gangrene, he says, ‘I have seen an instance of the alternate use of venesection and leeches in a female thirty years of age, who was subject to pain in the abdomen, and who suffered two or three days without fever and without tenderness on pressure. Suddenly the pain became very violent, and was accompanied with fever and vomiting. She was bled eleven times, and meanwhile had leeches to the anus twice, in the course of seven or eight days; she recovered rapidly, escaping suppuration, which should be avoided at any cost.’ Vieusseux considers this observation neither as short nor as incomplete; he gives it as if it were approved. Now I will ask the reader what is proved by an observation, relative to an abdominal affection, which contains no account of the form and volume of the abdomen, of the condition of the discharges, of the colour of the matter vomited, of the expression of the face, nor of the state of the pulse . . . Louis’s conclusion is ‘that many authors consider facts only as a sort of luxury, to be used as seldom as possible’.
I have carefully outlined Louis’s argument because extraordinary misconceptions appear in the books that discuss him. It is said that Louis had set out to show that bloodletting was pointless. Yet he clearly believes it to have some considerable merit and advocates it in the treatment of all the diseases he studies. One commentator claims Louis had shown bloodletting postponed recovery in cases of angina tonsillaris. Louis believed he had shown the opposite. There was nothing in Louis’s book to persuade doctors to abandon venesection, and it is clear that he did not abandon it himself. On closer inspection it seems that Louis interpreted his data in a fashion that was strangely biased in favour of venesection. His own figures suggested early venesection shortened the disease by 2.75 days, yet he claimed that ‘other things being equal’ it shortened it by four or five days. In fact, he had shown that other things were not equal. Those bled early were on average eight years and five months younger than those bled late, which in itself is sufficient to explain their more rapid recovery. Again, it is said that Louis was concerned to criticize the contemporary use of leeches, which had been strongly advocated by François Broussais. It is true that he seems to have little time either for leeching or vesication (blistering or cupping). But it is quite clear that what he is trying to study is the merits of venesection, and that he believes the only way of establishing how far venesection helps is by comparing cases statistically. Since Louis’s conclusion was that bloodletting, though it never halted a disease in its tracks, was still good for patients, one is bound to look closely at his statistics. After all, if he was right, why do we not still let blood? Table 1 is a simplified version of his table detailing the second group of patients with pneumonitis, twenty-five of whom survived the disease. The top row shows the day on which blood was first let.
Each cell below records the number of days it took a particular patient to recover, until the final row, which shows the average recovery time for patients in that column. I have already suggested that Louis’s argument that those bled early recover more quickly than those bled late is spurious, and that the form in which he presents his figures conceals a correlation between youth and rapid recovery. Table 2 gives the information he supplies reorganized in a way that it never occurred to him to organize it. This suggests that, over the age of 20, the older you are the longer recovery takes. But it is also clear that these tables reflect a hopelessly

TABLE 1. Recovery times of Louis’s second group of patients with pneumonitis


TABLE 2. A reorganization of Louis’s data, showing recovery time in days according to the age of patients


small sample. A few more cases bled early and slow to recover, or bled late and fast to recover, and Louis’s results would be quite different. A few more teenagers who recovered rapidly, and my results would be quite different. Louis has no way of testing to see if the distribution of numbers in columns actually reflects an underlying pattern –– a statistically significant correlation –– or is purely random. Louis, the great advocate of the statistical method, thus played fast and loose with his own statistics. Above all, his approach was hopelessly crude because he had no test of statistical significance.
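What Louis lacked can be illustrated with a modern permutation test: shuffle the group labels many times and count how often chance alone produces a difference in average recovery time as large as the one observed. The recovery times below are hypothetical stand-ins for two small groups of the kind Louis compared (his actual table is not reproduced here):

```python
import random

# Hypothetical recovery times in days for two small groups,
# standing in for Louis's bled-early vs bled-late patients.
bled_early = [12, 14, 15, 17, 20]
bled_late = [15, 17, 19, 22, 25]

observed = sum(bled_late) / len(bled_late) - sum(bled_early) / len(bled_early)

# Permutation test: if group membership were irrelevant, reshuffling
# the labels should often produce a difference at least this large.
pooled = bled_early + bled_late
random.seed(0)  # for reproducibility of the sketch
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:5], pooled[5:]
    if abs(sum(b) / 5 - sum(a) / 5) >= observed:
        extreme += 1
p_value = extreme / trials

print(f"observed difference: {observed:.1f} days, p = {p_value:.2f}")
```

With samples of five, even a difference of several days can plausibly arise by chance; this is exactly the sensitivity to a few extra cases that makes Louis's columns untrustworthy.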

Louis faced plenty of contemporary critics who argued that medicine was an art not a science, and that each case had to be considered individually. He also met critics, such as the great Claude Bernard, who argued that science was concerned with causal connections, and that consequently ‘statistics can never yield scientific truth’. What were needed were experiments conducted in the laboratory. But Louis also faced the criticism of mathematicians. In 1840, Jules Gavarret published his Principes généraux de statistique médicale. There he argued that no results of the sort that Louis was claiming to produce would be reliable unless they were based on several hundred observations. Thus Louis had observed (in a book published in 1834) 140 cases of typhoid fever, with 52 deaths and 88 recoveries, and concluded that 37 per cent of patients died. Gavarret showed that with a sample this size the margin of error was roughly 11 per cent, so that all one could reliably say was that between 26 and 48 per cent of patients died. In 1858, Gustav Radicke also set out to expose the fallacy in using small samples to draw large conclusions. He was particularly interested in samples where you measured something (e.g. days taken to recover) and he argued that it was very important in such cases to establish whether the measurements tended to be homogeneous or heterogeneous.
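Gavarret's margin can be reconstructed. On the usual reading, his ‘limits of oscillation’ correspond to ±2√(2pq/n) (that formula is an assumption here, not stated in the text above); applied to Louis's 140 typhoid cases it reproduces the roughly 11 per cent margin:

```python
from math import sqrt

deaths, n = 52, 140
p = deaths / n   # 52/140, the '37 per cent' figure
q = 1 - p

# Gavarret's limits of oscillation, on the usual reading:
# +/- 2*sqrt(2pq/n), roughly a 99.5% interval in modern terms.
margin = 2 * sqrt(2 * p * q / n)
lo, hi = p - margin, p + margin

print(f"{p:.1%} ± {margin:.1%}  ->  {lo:.1%} to {hi:.1%}")
# ≈ 37.1% ± 11.6%, i.e. roughly 26% to 49%
```

The arithmetic shows why Gavarret wanted several hundred observations: the margin shrinks only with √n, so quadrupling the sample merely halves it.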

If the measurements were homogeneous, a fairly small sample might produce a reliable result, but if they were heterogeneous (in the case of Table 2 above, recovery times vary between 9 and 27 days for teenagers, and between 11 and 28 days for people in their sixties) there was obviously a significant possibility that further measurements would alter the averages and transform the conclusions. He proposed a mathematical test to establish when a result derived from figures of this sort was, and when it was not, trustworthy. Despite the efforts of Gavarret and Radicke to apply sophisticated probability theory to medical statistics, no nineteenth-century medical researcher made use of the methods they proposed. Nevertheless, as we shall see, statistics lay at the heart of the most important achievements of nineteenth-century medicine. In 1801 a reviewer of one of John Haygarth’s books still felt it worth noting that ‘the facts or cases upon which the whole of the reasoning . . . is founded are exhibited in the form of tables . . . This synoptical mode of recording and exhibiting cases in an inquiry of this sort is attended with many advantages.’ We might go so far as to say that the statistical table was the first direct threat that Hippocratic medicine had faced in over two thousand years. By 1860 the revolution represented by the table was complete. ‘Statistics have tabulated everything –– population, growth, wealth, crime, disease’, wrote Oliver Wendell Holmes. ‘We have shaded maps showing the geographical distribution of larceny and suicide. 
Analysis and classification have been at work upon all tangible and visible objects.’ Looking back towards the end of his life, in 1882, over the recent history of medicine, what impressed Holmes was not the rise of the laboratory, but the triumph of statistics: ‘if there is anything on which the biological sciences have prided themselves in these latter years, it is the substitution of quantitative for qualitative formulae’. One of the greatest medical breakthroughs of the nineteenth century, John Snow’s account of the transmission of cholera, was due to the careful use of statistics, and even of maps, but before we look further at the statisticians, we need to look at the alternative intellectual tradition that for a time promised to revolutionize medical knowledge, at experimental physiology.
