One could argue that few, if any, medical studies have had the same impact as Andrew Wakefield's study, published in the British journal The Lancet, which attempted to establish a link between the MMR vaccine and autism. Certainly, no study has had such impact with so small a dataset - Wakefield's twelve case studies, eight of which were claimed to show a temporal link between vaccination and autism, were an incredibly small sample for such a groundbreaking claim. Still, from 1998, when the study was published, to 2010, when it was completely retracted (The Lancet had already published a partial retraction in 2004), the study stood as ostensibly sound science. Of course, the scientific community, which had already viewed the small sample size with a jaundiced eye, saw the failure of every other study to replicate Wakefield's findings (Hurley et al., 2010) as the final nail in the coffin of an already suspect idea, but that was not enough to stop the study from launching a new wave of the anti-vaccine movement. This new wave brought with it an increase in the rate of vaccine-preventable disease: measles in Ireland, whooping cough in the US. Declining vaccination rates inevitably brought the return of diseases previously reduced to historical footnotes. A single slip in peer-review will have cost hundreds of lives, perhaps even thousands, by the time the movement runs its course. With this in mind, it is worth examining the course of events that led to the publication of something so resoundingly rejected by the scientific community and so harmful to the world at large.

     When Wakefield started his study, he was working in conjunction with Richard Barr, a lawyer preparing a lawsuit that alleged the MMR vaccine was the cause of autism and sought damages for nearly 1,500 families. Wakefield's proposed bowel-brain syndrome was the centerpiece of the lawsuit; his study thus had the potential to be incredibly lucrative if it reached the desired result. To that end, Barr emailed the families he was representing, asking any whose children's symptoms matched the desired sequence (the MMR vaccine followed relatively closely by intestinal distress and then autism) to come forward. Wakefield's twelve were selected from this group and from patients at the Royal Free Hospital, where Wakefield held a non-clinical position. This should have been the first warning sign: a doctor with an egregious conflict of interest selecting an exceptionally small sample. It would be difficult to imagine a situation more conducive to cherry-picking data.

     Strangely enough, given the opportunity he had to cherry-pick his data, optimistic data selection was not enough to prove Wakefield's point. His hypothesis was that the MMR vaccine brought on bowel-brain syndrome, which then led to regressive autism. From NHS records, we know that his sample of twelve included between one and six cases of regressive autism - given his already small sample size, not enough to even pretend to have reached a conclusion. To sidestep the issue, he committed the cardinal sin of science: he fabricated his data. In his paper, three patients who definitely did not have regressive autism, and five whose symptoms were unclear, were reported as having regressive autism. Nor did he stop there - the next two steps in his hypothesis required that the patients have non-specific colitis, a bowel disorder, and that they experience their first symptoms less than two weeks after receiving the MMR vaccine. Only three of the twelve showed non-specific colitis, and only two showed their first symptoms within two weeks of receiving the vaccine (five even showed symptoms before receiving it). Wakefield reported that eleven had non-specific colitis and that eight experienced symptoms within two weeks of vaccination (Deer, 2011). The data Wakefield started with matched subsequent studies almost exactly; the data he ended with, however, were a different matter.


     How did peer-review miss it? In hindsight, it seems obvious: a doctor being paid to find a link between the MMR vaccine and bowel-brain syndrome found a previously undiscovered link between the vaccine and a previously unknown species of bowel-brain syndrome. Even without knowing that Wakefield had fabricated his results (although that was predictable enough), the conflict of interest should have been enough to prevent the study from being published. In fact, it not only should have been enough, it would have been enough - Wakefield simply never reported it. Despite taking more than $600,000 from Barr for his work on the lawsuit, Wakefield did not disclose the income, and The Lancet went ahead without knowing that the paper's lead author had been paid to reach the conclusion he did. Nor could the problem have been caught by examining the fabricated data: because of privacy concerns, the fabrication only became apparent after an intensive investigation, conducted after the conflict of interest had already come to light and the paper had been retracted.

     Therein lies the problem: peer-review is a system designed for participants who are basically, or at least functionally, honest. As long as the author and reviewers are willing to display a modicum of honesty, or even shame, the system works. Wakefield simply overwhelmed the system (and his coauthor John Walker-Smith, who only narrowly avoided utter disgrace by pleading ignorance of Wakefield's methods) by lying at every turn. Ultimately, a system that could stop the Andrew Wakefields of the world without fail would be intrusive and unwieldy, and in the end it would only limit scientific advances - after all, it is the unpopular ideas, the ones that go against the prevailing paradigm, that contribute the most to science when they turn out to be true. Peer-review could not stop Andrew Wakefield from publishing nonsense because it was never designed to, and it was never designed to because it could not be.

     This is not to say that peer-reviewed science is unreliable - that would be an absurd conclusion - only that it is often misunderstood. A scientist willing to deceive and fabricate to the extent that Wakefield did may well succeed in having his paper published (he might also be caught early and drummed out of the profession, of course). Even if he succeeds, though, his colleagues' inexorable drive to publish something original gives them a powerful impetus to reexamine his work, and if they cannot replicate his results - as happened with Wakefield's study - further scrutiny will follow.

     No, the problem is not with the way science is conducted - despite the occasional setback, its advance is nearly inexorable - but with the way it is perceived. A single study, perhaps even several, might be published despite being junk science, but a retraction will almost inevitably follow. In the case of Wakefield's study, though, the period between publication and retraction saw the rise of a devoted band of followers for whom the eventual rejection of Wakefield's conclusions did not matter. A movement formed around a single study and refused to dissolve when it became apparent how absurd that study had been, and that blind faith, not peer-review, is the source of the problem.

     The solution, then, is not to modify how science is conducted, but how it is communicated. Science is not infallible, and it is rarely simple. Even settled facts - that vaccines do not cause autism, for example - face counterarguments. The mere existence of a counterargument is far from conclusive for a scientist, but for a lay public taught to see science as the absolute truths and pat facts of middle and high school, presented with no hint of counterargument, a point may seem hotly disputed when in reality it is anything but. Worse still, once a movement takes on the aura of infallibility too often incorrectly attributed to science, its members can find themselves caught up in their own pride, with neither the ability nor the desire to think critically about the issue, even after their scientific support has been removed. The only answer is patience, on the part of both the scientific community and the public at large. For scientists, the key is being patient enough to explain what science is, without attempting to quash nonsensical ideas by overstating certainty. Ironically, it was the perceived near-infallibility of science, cultivated to answer challenges to science, that sparked the rejection of science on vaccination. For the public, in turn, the key is waiting for the scientific method to run its course before developing an emotional attachment to a hypothesis (a connection that effectively shuts down critical thinking) by building a movement around it.

     The general lesson to be drawn from the failure of peer-review that allowed Wakefield's dangerous drivel to be published is, put simply, that peer-review is not and cannot be a guarantee of infallibility, and that it would be wise to remember that - but also that a larger body of peer-reviewed work, although still far from infallible, can be far more reliable. Science derives its explanatory power from its ability to be verified by subsequent study; without that verification, it has the same limitations as any other method of inquiry. Inexorable as it may be, science is a process, not an end in itself, and at any given time the body of scientific knowledge will contain errors. The power lies in the process; it cannot be frozen in time.

References:

Deer, B., 2011, "How the case against the MMR vaccine was fixed." BMJ, 342,
     doi:10.1136/bmj.c5347.
Hurley, A.M., Tadrous, M., Miller, E.S., 2010, "Thimerosal-containing vaccines and autism: a review of recent
     epidemiologic studies." J. Pediatr. Pharmacol. Ther., 15(3), 173-81.
