In 1879 an American doctor, T. H. Buckler, acknowledged that ‘the lancet, by the common consent of the profession at large, had been sheathed never to be drawn again’. Yet he was writing ‘A Plea for the Lancet’. In 1875 an English doctor, W. Mitchell Clarke, wrote ‘we are most decidedly living in one of the periods when the lancet is carried idly in its silver case; no one bleeds; and yet from the way in which I find my friends retain their lancets, and keep them from rusting, I cannot help thinking they look forward to a time when they will employ them again’. Bloodletting had largely been abandoned because statistical studies had shown that it did not work, and recent developments in physiology had shown that it resulted in reduced haemoglobin concentration, which hardly seemed likely to be beneficial. But doctors clearly regretted sheathing their lancets. The lancet was the symbol of their profession and of their status as doctors –– the leading English medical journal is still called The Lancet. Worst of all, though, the abandonment of the lancet was not compensated for by the introduction of any new therapy that could replace it in general practice. A gap was left, and something was needed to fill it.

By 1892 the leading American physician of his day, William Osler, was writing ‘During the first five decades of this century the profession bled too much, but during the last decades we have certainly bled too little.’ And he proceeded to advocate bloodletting for pneumonia: done early, it could ‘save life’. Similarly, in 1903 Robert Reyburn, an American, was asking ‘Have we not lost something of value to our science in our entire abandonment of the practice of venesection?’ The Lancet of 1911 contained an article entitled ‘Cases illustrating the uses of venesection’ –– the cases included high blood pressure and cerebral haemorrhage. Bloodletting was also recommended for various types of poisoning, from carbon monoxide to mustard gas.
In the trenches in 1916, venesection was the approved method of treating the victims of gas attacks. Heinrich Stern, publishing The Theory and Practice of Bloodletting in New York in 1915, declared that ‘like a phoenix, the fabulous bird, bloodletting has outlasted the centuries and has risen, rejuvenated, and with new vigor, from the ashes of fire which threatened its destruction’ –– he thought bloodletting a useful treatment for drunkenness and homosexuality. Others recommended it for typhoid, influenza, jaundice, arthritis, eczema, and epilepsy.

At the beginning of this book, I said that 1865 was a useful marker, but that medicine was not at once transformed. Just as new, effective therapies were developed only slowly in the years after 1865, so old therapies were only slowly phased out. Hippocratic therapies survived into the 1920s. Why was progress so slow? It was not because doctors like Osler were opposed to modern science, or did not believe in the idea of progress; quite the contrary. We need to look elsewhere for an explanation.

Part of the explanation lies in the way in which people identify with their own skills, particularly when they have gone to great trouble and expense to acquire them. Just as surgeons wanted to go on being surgeons, and so were blind to the possibilities of anaesthetics, so doctors wanted to go on being doctors, and so were reluctant to sheathe their lancets. Another part of the explanation lies in the risk associated with pursuing new ideas. Once germ theory had begun to establish itself, people like Tyndall were convinced that contagious disease could be conquered. But there existed only one model for the defeat of a contagious disease, and that was smallpox vaccination. So most of the effort went into the pursuit of vaccines: anthrax, rabies, and typhoid vaccines were the result. Germ theory could equally have led rapidly to a search for substances that could be injected into the bloodstream to kill germs.
Penicillin could have been developed at almost any point after 1872. But there was no conceptual model for an antibiotic. The risks seemed high and the rewards uncertain.

Thomas Kuhn, in The Structure of Scientific Revolutions, offers two concepts for thinking about this phenomenon. The first is the concept of the paradigm: once penicillin had clarified the concept of an antibiotic, research on antibiotics proceeded rapidly. What was needed was a clear model of how to proceed. The other is the concept of ‘normal’ as opposed to ‘revolutionary’ science. From Hippocrates until the 1870s there was a ‘normal’ therapeutics, which survived because it was believed to work; when its efficacy became doubtful, it continued to be employed because patients expected it and doctors could offer nothing better. Louis assumed that a doctor would let the blood of a dying patient, not because there was any prospect of this saving their life, but because it would enable him to assure the family that everything possible had been done. Conventional therapy had enormous stability because both patients and doctors were educated to trust it. That trust carried bloodletting into the twentieth century.

So much, I think, is easy to see: there were psychological and cultural factors working against innovation. As long as doctors believed they had effective therapies, those factors were sufficient to exclude the microscope from the medical school, and to exclude the theory of animate contagion from practical medicine. From the 1690s to the 1830s the main obstacle to progress in medicine was not some gap in the knowledge, research equipment, or intellectual resources of medical scientists, but rather the psychological and cultural factors which stood in the way of innovation.

The difficult question is whether we need to introduce a further level of argument. After all, precisely when conventional medicine was in a deep crisis, the germ theory came along to rescue it. Was this just luck?
Or does it represent some sort of rational adaptation on the part of medical institutions themselves? How you answer this question depends in part on a further question: do you think institutions have a life of their own? The answer to that question is, I think, yes. Not being a methodological individualist, I do not think all actions can properly be said to be performed by individuals; some actions are performed by institutions, even though individuals have to be involved as the representatives of the institution. Faced with a range of choices, a committee may reach a decision that was nobody’s first choice. In certain circumstances an institution will implement policies that no individual person within the institution thinks are good –– this will happen, for example, if an outside agency controls the institution’s funding and requires that it meet certain criteria that nobody within the institution actually believes in (a situation not unfamiliar in contemporary universities). This situation was the norm under communism, and is commonplace in institutions that rely on government funding. So there are plenty of circumstances in which an institution can take a decision, or pursue a policy, without there being any simple way in which that decision or policy can be said to be that of any individual within the institution. Institutions can thus take on a life of their own.

So it is legitimate to ask whether there were important institutional constraints obstructing progress in medicine. Did university faculties of medicine, hospitals, or doctors’ professional organizations seek to preserve traditional therapies in order to safeguard institutional interests? At the end of Chapter 7 I said that the medical profession turned its back on microscopy: was this the medical profession as a collection of individuals, or as a group of institutions with lives of their own?
All the evidence we have seen suggests there was no need for institutions to act; or, where institutions did act, there was no significant gap between those actions and the views of individuals.

Did doctors know what they were doing when they obstructed progress for a century and a half? I’ve already said that when bad arguments drive out good, those who do the driving must bear the responsibility; but one can be responsible for something one never intended to do –– losing one’s temper, for example. Once doctors decided that they need pay no attention to micro-organisms, they immediately ensured that they would never have to encounter evidence suggesting they had made the wrong choice. Many decisions have the peculiar characteristic of being self-confirming: you never know what would have happened if you had decided differently. It is perfectly sensible to say that doctors had no idea what they were doing, but that they bear a burden of responsibility for the consequences of their actions.

After 1830 the microscope came back into fashion, and progress, effectively halted since the 1680s, recommenced. The new microscopes were much easier to work with than Leeuwenhoek’s had been, and they had the air of being serious scientific instruments. Their introduction coincided with a crisis in therapy provoked by the beginning of serious counting. That crisis deepened over the next few decades. In the 1860s Listerism came to the rescue of the hospitals when they faced an extremely uncertain future. Without germ theory the crisis in the hospitals would never have been resolved, and the hospital as an institution would not have survived. So the story appears to be one of successful adaptation. Is there any sense in which we can say that individuals or institutions pursued a strategy intended to rescue medicine from its crisis? Did the new knowledge serve institutional purposes?
I ask these questions because the story I have been telling might be of the sort that is labelled functionalist: according to functionalist arguments, institutions and social groups react to difficulties by moderating and displacing conflict, allowing their own adaptation and survival. Few people want to be thought of as functionalists, just as few people want to be thought of as dyed-in-the-wool Whigs. And yet, just as there ought to be some histories of progress, so there ought to be some histories that show how the social order sustains itself, and how it sometimes does so without any of those who participate in the process fully understanding what they are doing. Functionalist arguments can be legitimate.

But the account of the revolution in medicine given here is not intended to be functionalist. Individuals and institutions are naturally conservative and risk-averse. Unless circumstances are very unfavourable, they prefer the known to the unknown, continuity to change. Major change requires a crisis of the sort that hospitals were undergoing in the 1860s: adaptation comes late, not early. Even then, change is likely to be easier to bring about in low-status institutions than in high-status ones, on the periphery rather than at the centre. Listerism triumphed first in Glasgow, then in Edinburgh; it established itself quickly in Scotland, but slowly in England. In France, the most advanced centre of medical research in the early nineteenth century, germ theory was slower to establish itself among doctors than in Germany or England. Resistance to innovation is usually most deeply entrenched in those institutions that feel they have most to lose. In such circumstances there is nothing to ensure that institutions will successfully adapt and survive.
It is a remarkable fact that the triumph of germ theory eventually occurred not through a new profession growing up alongside the old profession of medicine, but through doctors adopting the new therapies and (however reluctantly) abandoning the old ones. There was nothing inevitable about this process. Until the discovery of diphtheria serum in 1894, the French medical profession was generally opposed to the new science; then it rapidly converted, and changed the education of doctors to bring it into line with germ theory. Without the discovery of diphtheria serum it is perfectly possible to imagine germ theory continuing to develop in opposition to French medicine, not within it.

From the 1860s on it was clear that germ theory could be applied, whether in silkworm production or in surgery. A positive feedback loop was established between research and practice –– in the case of medicine, between theory and therapy. Once this occurred, progress became inevitable and almost irresistible; but it was not inevitable that existing institutions would successfully adapt to this change. Had they failed to do so, there would have been a revolution anyway, even if it had destroyed the existing institutions and fatally weakened the professions of medicine and surgery. Just as there are still homeopaths, so there might still be doctors practising Ionian medicine in Boston and Paris, London and Berlin. Bigger, newer buildings might have sprung up alongside the decaying hospitals of the Ionian doctors, calling themselves Pasteur Institutes or Lister Institutes and dispensing vaccines and antibiotics. It so happens that conventional medicine adapted to germ theory, and it did so because it was very conscious of already being in crisis. But things could easily have turned out differently.

So although germ theory was adopted by the medical profession to serve its purposes, I do not think the story I am telling is functionalist.
Germ theory succeeded not because doctors adopted it, or because it served the purposes of the medical profession, but because it demonstrated a capacity to prolong life, because it was a more effective medical technology than Hippocratic therapy. Germ-theory-based therapy was thus better than Hippocratic therapy at fulfilling the function that conventional medical therapy had claimed to fulfil. This takes us back to the idea with which the book began: that technologies fulfil functions and establish their own standards of progress. But to agree with that idea is not to commit oneself to ‘functionalism’ as a doctrine in the social sciences. Rather, it is to accept that there can be standards of rationality that are cross-cultural. Certain tasks –– growing crops, cooking food, postponing death –– are common to many cultures. An improved yield, a better irrigation system, a more durable cooking pot, a more effective drug –– all these have a logic which is potentially cross-cultural.

This does not mean that cross-cultural dissemination is easy or automatic. In the eighteenth century both the French and the English used windmills to grind corn. The English invented the fan-tail, which automatically points the windmill into the wind, and so saves labour. The French never adopted it. Perhaps their labour costs were lower, or their winds less variable, or the capital investment seemed too great. Whatever the reason, they certainly understood what the fan-tail was for. So too Lister’s contemporaries might have found germ theory puzzling and unconvincing; but they could perfectly well understand his claim to have reduced mortality and rendered amputations unnecessary. Arguments for the cultural relativity of rationality have their limits: this is one of the lessons to be drawn from the triumph of germ theory.