Showing posts with label Science. Show all posts

     One could argue that few, if any, medical studies have had the same impact as Andrew Wakefield's study, published in the British medical journal The Lancet, attempting to establish a link between the MMR vaccine and autism. Certainly, no study has had the same impact with as small a dataset: Wakefield's twelve case studies, eight of which were claimed to show a temporal link between vaccination and autism, were an incredibly small sample for such a groundbreaking study. Still, from 1998, when the study was published, to 2010, when it was completely retracted (The Lancet had already published a partial retraction in 2004), the study stood as ostensibly sound science. Of course, the scientific community, which had already viewed the small sample size with a jaundiced eye, saw the total inability of any other study to replicate Wakefield's findings (Hurley et al., 2010) as the final nail in the coffin of an already suspect idea, but that wasn't enough to stop the study from launching a new wave in the anti-vaccine movement. This new wave brought with it an increase in the rate of vaccine-preventable disease. Measles in Ireland, whooping cough in the US - declining vaccination rates inevitably brought the return of diseases previously reduced to a historical footnote. A single slip in peer review will have brought hundreds of deaths, perhaps even thousands, by the time the movement runs its course. With this in mind, it is worth examining the course of events that led to the publication of something so resoundingly rejected by the scientific community and so harmful to the world at large.

     When Wakefield started his study, he was working in conjunction with Richard Barr, a lawyer, to bring a lawsuit alleging that the MMR vaccine was the cause of autism, and seeking damages for nearly 1,500 families. His proposed bowel-brain syndrome was the centerpiece of the lawsuit; his study thus had the potential to be incredibly financially beneficial if it reached the desired result. To that end, Barr emailed the families he was representing, asking any families whose children's symptoms matched the desired sequence (the MMR vaccine followed relatively closely by intestinal distress and then autism) to come forward. Wakefield's twelve were selected from this group and from patients at the Royal Free Hospital, where Wakefield held a non-clinical position. This should have been the first warning sign: a doctor with an egregious conflict of interest selecting an exceptionally small sample. It would be difficult to imagine a situation more conducive to cherry-picking data.

     Strangely enough, given the opportunity he had to cherry-pick his data, optimistic data selection wasn't enough to prove Wakefield's point. His hypothesis was that the MMR vaccine brought on bowel-brain syndrome, which then led to regressive autism. From NHS records, we know that his sample of twelve included between one and six examples of regressive autism - given his already-small sample size, not enough to even pretend to have reached a conclusion. To sidestep the issue, he committed the cardinal sin in science: he fabricated his data. In his paper, three patients who definitely did not have regressive autism, and five whose symptoms were unclear, were reported as having regressive autism. Nor did he stop there - the next two steps in his hypothesis were that the patients had non-specific colitis, a bowel disorder, and experienced their first symptoms less than two weeks after receiving the MMR vaccine. Only three of the twelve showed non-specific colitis, and only two showed their first symptoms less than two weeks after receiving the MMR vaccine (five even showed symptoms before receiving the vaccine). Wakefield reported that eleven had non-specific colitis, and that eight experienced symptoms less than two weeks after receiving the MMR vaccine (Deer, 2011). The data Wakefield started with matched subsequent studies almost exactly; the data he ended with, however, was a different matter.
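The implausibility of the reported numbers can be made concrete with a rough back-of-envelope calculation - illustrative only, since it treats the twelve cases as independent draws, which a hand-picked sample certainly was not. If non-specific colitis really occurred at the three-in-twelve rate found in the underlying records, the chance that an honest sample of twelve would show eleven or more cases is on the order of one in a million:

```python
from math import comb

# Illustrative only: treats the 12 cases as independent draws with a true
# colitis rate of 3/12 = 0.25, and asks how likely 11 or more cases would be.
def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"{binom_tail(12, 11, 0.25):.1e}")  # roughly 2e-06
```

Nothing about this toy model is rigorous, but it conveys the scale of the discrepancy between the records and the published figures.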


     How did peer review miss it? In hindsight, it seems obvious: a doctor being paid to find a link between the MMR vaccine and bowel-brain syndrome found a previously undiscovered link between the MMR vaccine and a previously unknown species of bowel-brain syndrome. Even without knowing that Wakefield had fabricated his results, the conflict of interest should have been enough to prevent the study from being published. In fact, the conflict of interest not only should have been enough, it would have been enough - had it been known. Wakefield never reported it. Despite taking more than $600,000 from Barr for his work on the lawsuit, Wakefield did not disclose the income, and The Lancet went ahead without knowing that the lead author of the paper had been paid to reach the conclusion he did. The problem couldn't be solved by looking at the fabricated data, either: because of privacy concerns, the fabrication only became apparent after an intensive investigation, conducted after the conflict of interest had already been found out and the paper retracted.

     Therein lies the problem: peer review is a system designed for participants who are basically, or at least functionally, honest. As long as the author and reviewers are willing to display a modicum of honesty, or even shame, the system works. Wakefield simply overwhelmed the system (and his coauthor John Walker-Smith, who only just avoided utter disgrace by pleading ignorance of Wakefield's methods) by lying at every turn. Ultimately, a system that could stop the Andrew Wakefields of the world without fail would be intrusive, unwieldy, and would in the end only limit scientific advances - after all, it is the unpopular ideas, the ones that go against the prevailing paradigm, that, if true, contribute the most to science. Peer review couldn't stop Andrew Wakefield from publishing nonsense because it was never designed to, and it wasn't designed to because it couldn't be.

     This is not to say that the concept of peer-reviewed science is unreliable - that would be an absurd conclusion - only that it is often misunderstood. If a scientist is willing to deceive and fabricate to the extent that Wakefield did, he very well might succeed in having his paper published (he might also be caught early on and drummed out of the profession, of course). Even if he succeeds, though, the inexorable drive of his colleagues to publish something original gives them a powerful impetus to reexamine his work, and if they can't replicate his results, as in the case of Wakefield's study, further scrutiny will follow.

     No, the problem is not with the way science is conducted - despite the occasional setback, its advance is nearly inexorable - it is with the way it is perceived. A single study, perhaps even more than one, might be published despite being junk science, but a retraction will almost inevitably follow. In the case of Wakefield's study, though, the period between the release of the study and its retraction saw the rise of a devoted band of followers, for whom the eventual rejection of Wakefield's conclusions didn't matter. A movement formed around a single study and refused to dissolve when it became apparent how absurd the study had been, and that blind faith, not peer review, is the source of the problem.

     The solution, then, is not to modify how science is conducted, but how it is communicated. Science is not infallible, and it is rarely simple. Even settled facts - that vaccines do not cause autism, for example - face counterarguments. The mere existence of a counterargument is far from conclusive for a scientist, but to a public for whom science has been nothing more than the absolute truths and pat facts of middle and high school, presented with no hint of counterargument, a point may seem hotly disputed when in reality it is anything but. Worse still, once a movement takes on the aura of infallibility too often incorrectly attributed to science, those in it can find themselves caught up in their own pride, with neither the ability nor the desire to think critically about the issue, even when their scientific support has been removed. The only answer is patience, both for the scientific community and for the public at large. For scientists, the key is being patient enough to explain what science is, without attempting to quash nonsensical ideas by overemphasizing certainty. Ironically, it is the perceived near-infallibility of science, built up to answer challenges to science, that gave a spark to the rejection of science on vaccinations. For the public, in its turn, the key is waiting for the scientific method to run its course before developing an emotional attachment (a connection that effectively shuts down critical thinking) to a hypothesis by building a movement around it.

     The general lesson to be drawn from the failure of peer review that allowed Wakefield's dangerous drivel to be published is, put in its simplest form, that peer review is not a guarantee of infallibility, that it cannot be, and that it would be wise to remember that - but also that a larger body of peer-reviewed work, although still far from infallible, can be far more reliable. Science derives its explanatory power from its ability to be verified by subsequent study; without that verification, it has the same limitations as any other method of inquiry. Inexorable as it might be, science is a process, not an end in itself, and at any given time the body of scientific knowledge will contain errors. The power lies in the process; it cannot be frozen in time.





References:

Deer, B. 2011. "How the case against the MMR vaccine was fixed." BMJ 342: c5347.
     doi:10.1136/bmj.c5347.
Hurley, A.M., M. Tadrous, and E.S. Miller. 2010. "Thimerosal-containing vaccines and autism: a review of recent
     epidemiologic studies." J. Pediatr. Pharmacol. Ther. 15(3): 173-81.



     Faster than a speeding bullet, able to soar higher than any plane... no, not a superhero: light. Light’s unique and intriguing properties have long tantalized scientists. Indeed, if the glimpses of potential it presents were ever fully utilized, the possibilities would be nearly endless. Significant advances have been made in harnessing light - the development of fiber optic cables and photovoltaic cells comes to mind - but these are merely ripples on the surface of what could be. The development which could change all that, though, is an increase in the understanding and utilization of photonic crystals. In a general sense, the operation of photonic crystals is extremely simple: they act as optical equivalents of semiconductors. In semiconductors, the periodic arrangement of ions controls the movement of charge carriers. In photonic crystals, the periodic arrangement of regions of high and low refractive index (a measure of how much light is bent when entering or exiting a material) controls the movement of photons. In both cases, bands of allowed energy levels - in the case of light, wavelengths - are created, which govern the motion of either electrons or photons, depending on the substance in question.
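The band-gap idea can be made concrete in its simplest case: a one-dimensional photonic crystal, i.e. a quarter-wave stack of two alternating materials. The indices and target wavelength below are illustrative choices, not values from any particular device; the sketch just shows how the layer geometry sets the wavelengths the crystal rejects:

```python
# Hypothetical quarter-wave stack; the indices (TiO2 ~ 2.4, SiO2 ~ 1.45) and
# the 550 nm design wavelength are illustrative assumptions.
def quarter_wave_thicknesses(n_high, n_low, center_wavelength_nm):
    """Layer thicknesses (nm) placing the stop band at center_wavelength_nm.

    In a quarter-wave stack each layer has optical thickness n*d = lambda/4,
    so reflections from every interface add in phase at the design wavelength.
    """
    return center_wavelength_nm / (4 * n_high), center_wavelength_nm / (4 * n_low)

def stop_band_center_nm(n_high, d_high_nm, n_low, d_low_nm):
    """First-order Bragg condition: lambda = 2 * (n1*d1 + n2*d2)."""
    return 2 * (n_high * d_high_nm + n_low * d_low_nm)

d_hi, d_lo = quarter_wave_thicknesses(2.4, 1.45, 550.0)
print(round(stop_band_center_nm(2.4, d_hi, 1.45, d_lo), 1))  # recovers 550.0
```

At the design wavelength, reflections from every interface add in phase; that "forbidden band" for photons is exactly the behavior the semiconductor analogy describes for electrons.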
   
     In practice, although the theory behind photonic crystals is reasonably complex, applying them is simple. Photonic crystals are used to exert control over the movement of photons, a task they can accomplish much more effectively than other methods of controlling the path of light because of their unique abilities. Much more could be said on the behavior and characteristics of photonic crystals, but that is all we need to know at the moment.

     That ability to change the path of light presented by three-dimensional photonic crystals has incredible potential. Recent research (Guldin et al. 2010) suggests that photonic crystals could be utilized in building more efficient solar cells. Currently, silicon is used in most photovoltaic cells to absorb sunlight and convert it into electricity. The silicon absorbs highly energetic photons and uses their energy to create free charge carriers, which can then be extracted to an external circuit. This process generally works well; however, it can be improved. The silicon does not absorb all the photons that strike it - some pass through. Although the silicon is backed with aluminum to reflect some of the photons which pass through, these are reflected at a steep angle and have a reasonable chance of passing straight back through the silicon and escaping. A three-dimensional photonic crystal, with its ability to control the paths of photons, could reflect more light than aluminum and diffract the light as it did so, causing it to re-enter the silicon at a much shallower angle than it would otherwise and increasing the chance of reabsorption. By attaching photonic crystals, rather than aluminum, to the back of the silicon, the efficiency of the cell could be improved significantly.
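The geometry of the argument can be sketched with Beer-Lambert absorption. The absorption coefficient and film thickness below are made-up round numbers, chosen only to show why an oblique re-entry angle helps:

```python
import math

# Toy Beer-Lambert comparison; alpha = 0.1/um and t = 2 um are illustrative
# assumptions for weakly absorbed near-infrared light, not measured values.
def absorbed_fraction(alpha_per_um, thickness_um, angle_from_normal_deg):
    """Fraction of light absorbed in one pass through a silicon film.

    The geometric path through a film of thickness t traversed at angle
    theta from the surface normal is t / cos(theta); absorption then
    follows Beer-Lambert: 1 - exp(-alpha * path).
    """
    path = thickness_um / math.cos(math.radians(angle_from_normal_deg))
    return 1.0 - math.exp(-alpha_per_um * path)

specular = absorbed_fraction(0.1, 2.0, 10.0)    # mirror-like backing, near-normal
diffracted = absorbed_fraction(0.1, 2.0, 70.0)  # photonic-crystal backing, oblique
print(f"{specular:.2f} vs {diffracted:.2f}")
```

The oblique pass traverses nearly three times as much silicon, so the absorbed fraction roughly doubles in this toy example; real light-trapping gains depend on wavelength and structure.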



Figure 1: A photovoltaic cell with a backing of three-dimensional photonic crystals. From Guldin et al. 2010.

     The increased efficiency inherent in the system is not its only advantage. Notably, unlike aluminum, photonic crystals do not have to be opaque. Technically, nothing prevents the entire solar cell from being completely transparent if photonic crystals are used instead of aluminum. This opens the potential for using solar cells in windows - windows that generate electricity. Given that solar sources cannot provide baseload power, it is as a supplementary power source that solar power is most attractive. Incorporating solar power generation into something as common as windows would provide a heretofore unheard-of capability for solar energy to be used as a supplementary power source, and would remove many of the difficulties (for example, transmission and storage) which plague solar energy.

     Appealing as the concept of applying photonic crystals to increase the efficiency and utility of photovoltaic cells may be, it is not without its challenges. Three-dimensional photonic crystals are difficult to fabricate, whatever purpose they are intended for. A further difficulty arises from the fact that the particular form of photovoltaic cell used - a diblock-copolymer-based dye-sensitized solar cell - is limited by its tendency to crack and delaminate, making it of dubious usefulness at present. Efforts to remedy both of these deficiencies are, however, ongoing. Work on improving diblock-copolymer-based dye-sensitized solar cells shows promise, and it seems more than likely that three-dimensional photonic crystals will be fabricated much more readily in the not-too-distant future, making it more likely than not that photonic crystals will one day make solar cells far more efficient and widespread.

      If there is to be an insurmountable challenge to the use of photonic crystals in solar cells, it will likely come from the invention of other methods of generating energy which make solar power unnecessary. Here, too, photonic crystals appear. Researchers at MIT (Yeng et al. 2011) have been quite successful in fabricating photonic crystals from tungsten and titanium which can withstand temperatures of up to 1200 degrees Celsius. In theory, these can absorb infrared radiation, which is subsequently converted to electricity. The actual process is essentially the same as that used in solar cells; the difference is that the potential applications are much broader. Indeed, the possible applications for such technology are nearly endless: any heat source can be used, even heat sources already exploited for energy generation whose heat is not completely consumed. This would include nuclear, geothermal - here photonic crystals could play a particularly significant role, since the chief challenge to geothermal energy at the moment is collection - coal, oil, and other hydrocarbons, and even processes not intended to generate energy at all, as a means of regaining the heat they lose. Even the heat of a summer day could be converted to electricity, given the right conditions. Perhaps the most appealing option, however, is as a sort of “nuclear battery.” The concept originates from existing ideas: many of NASA’s deep space missions make use of it - the Curiosity rover, for example, uses radioisotope thermal generators to generate electricity. Heat is generated by the decay of a radioactive element such as plutonium, but in present-day radioisotope thermal generators this heat is converted to electricity using a thermocouple, which achieves less than 10% efficiency. That is, less than 10% of the thermal energy produced by radioactive decay in the apparatus is converted to electrical energy by the thermocouple. The process is still useful, but it could be far more efficient with the use of photonic crystals. It is unclear exactly how much photonic crystals could improve efficiency, but the improvement would be significant.
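To put rough numbers on this (the thermal figure is an approximate public value for Curiosity's generator, and the 20% photonic-crystal efficiency is purely a hypothetical for illustration):

```python
# Back-of-envelope radioisotope generator output. The ~2 kW of decay heat is
# an approximate figure for Curiosity's MMRTG; the 6% and 20% efficiencies
# are illustrative assumptions, not measured performance.
def electrical_power_w(thermal_power_w, efficiency):
    """Electrical output of a heat-to-electricity converter."""
    return thermal_power_w * efficiency

thermal_w = 2000.0
thermocouple = electrical_power_w(thermal_w, 0.06)   # present-day thermocouples
photonic_tpv = electrical_power_w(thermal_w, 0.20)   # hypothetical photonic-crystal TPV
print(thermocouple, photonic_tpv)
```

The same plutonium source would deliver several times the electrical power if the conversion stage reached efficiencies in that hypothetical range.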

      Nor are those squeamish about using nuclear energy shut out of the market for batteries that utilize photonic crystals. Burning butane to produce heat has also been proposed, and batteries combining the combustion of butane with photonic crystals to collect the resulting heat could potentially last ten times as long as conventional batteries while providing a cleaner energy source (butane is produced as a byproduct of refining natural gas and is far cleaner than, say, coal or oil). An improvement in battery technology of this magnitude would have a tremendous impact on every field that uses batteries, but most notably on electric cars. It is not unthinkable that in the next several decades cars with the improved efficiency and emissions of electric cars but the range of petroleum-fueled cars could come on the market. Further, it takes little imagination to envision an incredible range of applications for longer-lasting batteries - the implications are positively staggering.
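A crude energy-density comparison suggests where a "ten times as long" figure could come from. Both energy densities below are ballpark textbook values, and the 20% heat-to-electricity conversion efficiency is an assumption:

```python
# Ballpark assumptions: butane's heating value is roughly 45 MJ/kg, and a
# good lithium-ion cell stores roughly 0.9 MJ/kg (~250 Wh/kg).
BUTANE_MJ_PER_KG = 45.0
LI_ION_MJ_PER_KG = 0.9

def deliverable_mj_per_kg(fuel_mj_per_kg, conversion_efficiency):
    """Electrical energy per kg of fuel after heat-to-electricity conversion."""
    return fuel_mj_per_kg * conversion_efficiency

# Even at an assumed 20% conversion efficiency, butane's chemical energy
# density leaves roughly an order of magnitude over a lithium-ion cell.
electric = deliverable_mj_per_kg(BUTANE_MJ_PER_KG, 0.20)
print(round(electric / LI_ION_MJ_PER_KG, 1))  # roughly tenfold
```

The comparison ignores the mass of the burner and converter themselves, so it is an upper bound, but it shows the claim is not physically outlandish.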

      Perhaps most thrilling, there are no clear obstacles to the development of batteries using photonic crystals. The team of researchers at MIT established a simple way of generating durable three-dimensional photonic crystals for the high-temperature environments required. That achievement is the bulk of the theoretical work in designing batteries using photonic crystals. What remains, although important and difficult, is comparatively simple. In fact, the MIT team estimated that such devices would be viable within the next two years. That estimate is perhaps optimistic - the MIT study was the last major development in the field, and that came in 2011 - but the fact remains that the most daunting hurdle has already been cleared.

     In the closely related fields of photovoltaic and thermovoltaic cells there are few developments more exciting than photonic crystals. Use of these materials in collecting energy could, if all went as expected, provide a significantly cheaper and cleaner energy source, particularly for small scale, localized energy production. While putting the theory into practice will require effort and may take a few years, the concept itself represents a tantalizing glimpse into what might be.



References:

Guldin, Stefan, Sven Hüttner, Matthias Kolle, Mark E. Welland, Peter Müller-Buschbaum,
     Richard H. Friend, Ullrich Steiner, and Nicolas Tétreault. 2010. “Dye-Sensitized Solar Cell Based on a Three-Dimensional Photonic Crystal.” Nano Letters 10(7): 2303-2309.
Yeng, Yi Xiang, Michael Ghebrebrhan, Peter Bermel, Walker R. Chan, John D. Joannopoulos, Marin
     Soljačić, and Ivan Celanovic. 2011. “Enabling high-temperature nanophotonics for energy
     applications.” PNAS 109(7): 2280-2285.

     Nuclear power today is defined by the antiquated needs of, of all things, submarines. During the development of the USS Nautilus, the first nuclear submarine, it became apparent that a solid uranium-fueled reactor would not only provide certain benefits when used in a submarine but could also produce weaponizable byproducts and, perhaps most important, could be ready sooner than its many competitors. Admiral Hyman Rickover decided in favor of a water-cooled solid-fuel reactor fueled by uranium oxide enriched in U-235, and in doing so decided the future of nuclear power.

     To modern eyes Rickover's choice seems inexplicable. Up until his decision, thorium appeared to be the future of nuclear power; however, once the water-cooled solid uranium reactor was backed by the deep pockets of Uncle Sam, the contest was essentially over. Thorium, although promising, required development and could not compete with uranium. Although the reasons behind Rickover's choice are no longer relevant, uranium has maintained its ascendancy due to the massive costs associated with building and operating a nuclear reactor.

     The advantages of a thorium-fueled reactor seem almost too good to be true. Thorium is approximately four times as abundant as uranium, a much higher percentage of the energy inherent in that supply can be extracted, and it is often found in conjunction with, and can easily be separated from, the vitally important rare earth elements, making it an attractive long-term option. In an age when the dangers of nuclear proliferation are glaringly obvious, one feature of thorium that to Admiral Rickover was a negative has become one of its most highly touted selling points: a liquid fluoride thorium reactor (LFTR) of the sort proposed by most of the thorium lobby does not produce weaponizable byproducts (Hargraves and Moir 2010). An LFTR produces energy, fresh water, and a very small amount of low-grade waste. It could therefore be installed in places a conventional uranium reactor could not, removing the opportunity for endless foreign policy debates about whether a particular partially unhinged petty dictator is pursuing nuclear power for peaceful or military reasons.

     Thorium's case is further advanced by the nature and amount of the waste produced. An LFTR produces less than 10% of the waste a conventional reactor does, and waste from an LFTR has less than 1% of the radiotoxicity of waste from a conventional nuclear reactor. Further, rather than taking on the order of ten thousand years to become safe, that waste requires closer to one hundred years. These advantages stem from the fact that most of the waste produced by an LFTR is reused in the reactor, leaving only a small, relatively innocuous portion to be disposed of.
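The century-scale timescale is plausible on simple decay arithmetic. As an illustration (not a claim about any specific waste stream), consider a fission product with a roughly 30-year half-life, such as cesium-137 or strontium-90:

```python
# Illustrative decay arithmetic: exponential decay with a ~30-year half-life.
# Real waste is a mixture of isotopes, so this is a sketch, not a model.
def fraction_remaining(years, half_life_years=30.0):
    """Fraction of a radioisotope left after `years` of decay."""
    return 0.5 ** (years / half_life_years)

print(round(fraction_remaining(300), 4))  # ten half-lives: about 0.001
```

A waste stream dominated by such isotopes decays a thousandfold in a few centuries, whereas long-lived transuranics, which an LFTR largely burns up rather than discarding, are what push conventional waste onto ten-thousand-year timescales.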

     Once an LFTR has been built, thorium can also be more than competitive economically. At present, electricity in the United States costs between $0.05 and $0.06 per kWh, while the potential "clean" energy sources - wind and solar - cost between $0.20 and $0.30 per kWh. In contrast, an LFTR has the potential to produce power at a cost as low as $0.03 per kWh (Hargraves and Moir 2010). The difference per kWh looks small, but given that the average home consumes around 10,000 kWh per year, an LFTR could mean the difference between an annual electric bill of $500 or $600 at current rates and a bill of only $300 - a far more meaningful gap.
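The arithmetic is simple enough to check directly:

```python
# Annual household electricity cost at the per-kWh rates quoted above.
ANNUAL_KWH = 10_000  # approximate consumption of an average US home

def annual_bill(rate_per_kwh):
    """Yearly cost in dollars at a given rate per kWh."""
    return ANNUAL_KWH * rate_per_kwh

# Current grid rates versus the projected LFTR rate.
print(annual_bill(0.05), annual_bill(0.06), annual_bill(0.03))
```

At grid scale, a saving of two or three cents per kWh across millions of households adds up to billions of dollars a year.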

     In light of the devastating effects of mismanaged nuclear power at Chernobyl, Three Mile Island, and, more recently, Fukushima, few care about the logistics and viability of a power source if it also carries the potential to irradiate the surrounding countryside. Here thorium continues to shine. A conventional reactor is cooled by pressurized water, creating the potential for a catastrophic leak. A conventional reactor also requires active cooling, meaning that if power is lost and cooling can no longer take place, as occurred at Fukushima, residual heat will continue to build until the reactor melts down. An LFTR is cooled by molten fluoride salt, which is not under pressure, removing the single most dangerous feature of conventional reactors. Additionally, an LFTR will simply shut down if power is removed - unlike a conventional reactor, it requires power not to shut down but to stay running (Hargraves and Moir 2010; Shiga 2011). The LFTR thus presents an extremely attractive option as far as safety is concerned.

     Thorium presents an economical, safe, effective, and "clean" energy source. It can compete with and beat coal and oil on cost. It can be used in areas too unstable to sustain conventional nuclear power and too poor or incompetent to use other conventional fuel sources. Its waste products are not abundant and are relatively innocuous. Although mining and transportation may be accompanied by some pollutant emissions, the reactor itself is not. Why, then, is thorium still an unknown cousin of uranium? The answer, as one might expect, is money and government. A prototype thorium reactor would cost on the order of $1 billion; a commercial model closer to $5 to $10 billion. Very few people are willing to spend that kind of money on a project which is, whatever its potential, still unproven. Further, any investment of that magnitude would have to yield a significant return within a reasonable amount of time. At best, it would take ten years for an investor to begin to see returns, and, crucially, the extent and even the existence of those returns hinges on an uncertain regulatory environment. In countries where the government has demonstrated that it is willing to support investment in thorium research, projects to build thorium-fueled reactors have moved ahead. In countries where the government has not shown such resolve, thorium research has stalled or has never begun. In any case, it is hard to believe that the obvious benefits of thorium will remain hidden for long: it seems far more likely that in thorium we can see what will one day be unequivocally the fuel of the future.


References:

Hargraves, Robert, and Ralph Moir. 2010. "Liquid Fluoride Thorium Reactors." American Scientist 98, no. 
     4: 304-313.
Shiga, David. 2011. "Rescuing Nuclear Power." New Scientist 209, no. 2805: 8-10.

     Science: the word itself conjures up a plethora of images ranging from wild-eyed geniuses in lab coats scribbling esoteric ramblings in disorderly notebooks to the wild, rough world of field disciplines such as geology and biology. Science has brought incredible benefits to the world and has become firmly entrenched as the unquestioned best source of truth about the material world. At the same time, perhaps even as a result of this success, the world today has developed a particularly absurd view of science. It is held with an almost religious fervor by many to be the only method of determining truth. The vast majority of scientists, and even, I would guess, a very large percentage of the population, holds to this view, either explicitly or implicitly. While this view seems quite reasonable—science, after all, is the process of applying reason to learn about the world—it holds a fatal flaw. Science, by its very nature, can only address the physical realm. It cannot provide answers about either the existence or nature of the metaphysical realm. It cannot address any question that does not have as its answer some natural process, and those who cling to science alone for truth must simply posit, without any form of proof, that no such question can exist in the real world.

     The flaw lies in the very foundations of science. It was conceived as a method for acquiring information about the material world, a task for which it is a very effective tool. For any phenomenon which can be explained purely through natural causes, science is unrivaled. In the event, however, that it is faced with phenomena of supernatural origin, science can produce no answer. This results from the fact that science begins by excluding the supernatural. Pure science "cannot allow a divine foot in the door," as Richard Lewontin put it. In essence it asks "what natural causes can one find for this phenomenon?" In asking that question alone it excludes the divine and places facts and empiricism at the feet of philosophy and a priori presuppositions - the very thing it seeks to avoid. Science is thus inextricably intertwined with philosophy and cannot produce a coherent worldview on its own, however much research and effort one puts into it.

     Still, most of the time science produces a reasonable answer. It gave us, as many materialists like to point out, the electric light, the airplane, space travel, and much more. It is true that most of the time the world behaves in a predictable, completely ordinary manner (this is actually an excellent argument for the Judeo-Christian worldview, incidentally), and in these cases science is our best method of finding an answer. If something out of the ordinary has occurred, however, science will produce a nonsensical answer. Nowhere is this more evident than in the subject of origins, an area that cries out that something profoundly extraordinary has taken place. In this realm science cannot produce a reasonable answer. Take cosmic evolution: each step is more improbable than the last, beginning with a quantum fluctuation in nothing that produced everything. Moving further along in the process, excellent work has been done analyzing faster-than-light expansion and subsequent localized contraction, and as of yet no reason for or driving force behind either posited occurrence has been found. Taken as a whole, the hypothesis is disturbingly close to a statistical impossibility and, in fact, is arguably completely impossible. It is a perfect example of the futile flailing of science divorced from all other forms of seeking truth.

     Science is nonetheless the best method we have of learning about the physical world. Divested of this, our best recourse, will we be forced to return to a time when foolish superstition driven by fear suppressed understanding? Certainly not! Indeed, the fallacy that led many in the past to attempt to explain the world around them solely through superstition is, in many ways, the same one that leads many today to attempt to explain everything through science: the search for one process that can answer any question, an ultimate panacea. Instead, we should remember that the search for truth is bigger than any one method and allow ourselves to consider every possibility. That is not only the best way to truly understand the world around us; it is integral to our unique purpose as humans to seek and find truth wherever it can be found.
