Of March and Myth: The Politicizing of Science

May 01, 2017


Scientific integrity, self-correction, and the public


British philosopher of science Karl Popper believed in the doctrine of falsifiability, i.e., that hypotheses can be tested and refuted. That is the difference between the scientific method and other methods of inquiry.
Source: WikimediaCommons.org/Public Domain
Twentieth-century Austrian-born British philosopher of science Karl Popper once wrote, “Science must begin with myths, and with the criticism of myths.” (Conjectures and Refutations, p. 66) For Popper, science is unique in its systematic approach to errors and its emphasis on self-correction. Popper was best known for his doctrine of falsifiability, i.e., the importance of the testability of a hypothesis. In other words, what distinguishes the scientific method from other methods of investigation is that it is a method of attempting to discover the weaknesses of a theory--to “refute or to falsify the theory.” (Popper, All Life is Problem Solving, p. 10) “Science has nothing to do with the quest for certainty or probability or reliability,” wrote Popper. “We are not interested in establishing scientific theories as secure, or certain, or probable…we are only interested in criticizing them and testing them, hoping to find out where we are mistaken…” (Conjectures…, p. 310)

To address some of the issues and difficulties within science, the National Academy of Sciences recently sponsored a three-day Arthur M. Sackler Colloquium, Reproducibility of Research: Issues and Proposed Remedies, organized by Drs. David B. Allison, Richard Shiffrin, and Victoria Stodden, in Washington, DC. The colloquium brought together international leaders from multiple disciplines, including Nobel Prize-winning researcher Dr. Randy Schekman, its keynote speaker; 28 of the lectures can be retrieved on YouTube.com (Sackler channel). (For a summary of these outstanding lectures, see Cynthia M. Kroeger’s http://www.bitss.org/2017/04/03/reproducibility-of-research-issues-and-p....) The subject is clearly timely: Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions, by science journalist Richard Harris, was released recently.


Science journalist Richard Harris explores the crisis in reproducibility in science in his recently published book.
Source: photo by Sylvia R. Karasu, M.D.
The “sloppy science” of which Harris writes includes everything from misconduct, such as falsifying, fabricating, or even plagiarizing data, to distorting, by so-called “spin,” the manner in which a study’s results are reported. Says Schekman, “Sometimes, though, there is a fine line between sloppy science and overt misconduct and fraud.”
Sloppiness also occurs when imprecise language fails in “capturing the science” for the media, says colloquium speaker Kathleen Hall Jamieson, Professor of Communications at the University of Pennsylvania. Without accuracy, scientists are “inviting misinterpretation” and what she calls “narrative infidelity.” Scientists, for example, generate confusion when they write of “herd immunity” rather than “community immunity,” or when they create the “controversial frame” of a “three-parent baby” to describe the technique of obtaining genetic material from a cell’s mitochondria, the energy powerhouses of the cell, says Jamieson. She adds, “Mitochondria do not determine parenthood.”

Jamieson, though, is loath to call the existing scientific narrative a crisis. Rather than accepting the narrative that “science is broken,” Jamieson prefers to view science as “self-correcting” (Alberts et al., Science, 2015), just as Karl Popper had emphasized years earlier. Colloquium organizer Shiffrin also takes issue with the use of “inflammatory rhetoric” that merely increases the public’s skepticism and undermines its trust in science. Shiffrin noted that since so much of science is exploratory, with a need for “successive refinement,” it does not always lend itself to reproducibility. Researcher John Ioannidis, writing recently in JAMA (2017), acknowledges the importance of reproducibility and what we can learn when results cannot be replicated; he also appreciates, though, the complications that can arise from “unanticipated outcomes” in “complex and multifactorial” biological systems that can interfere with reproducibility.


One of the difficulties in reproducing results is that there are literally hundreds of kinds of bias--systematic errors, as opposed to errors by chance--that can creep, knowingly or not, into scientific studies. “Science, though, is a bias-reduction technique and the best method to come to objective knowledge about the world,” says organizer Allison, who is a Distinguished Professor, biostatistician, and director of the Nutrition Obesity Research Center (NORC) at the University of Alabama at Birmingham. “Its validity depends on its procedures but there is always room for improvement,” he adds.
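The distinction matters in practice: errors by chance tend to average out as data accumulate, whereas a systematic error does not. A minimal sketch in Python, using entirely hypothetical numbers rather than anything from the colloquium, illustrates the point:

```python
# Toy illustration: random error shrinks as the sample grows,
# but a systematic error (bias) never averages away.
# All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0
bias = 2.0  # a constant offset, e.g., from a miscalibrated instrument

for n in (10, 1_000, 100_000):
    random_only = true_value + rng.normal(0.0, 5.0, n)       # chance error alone
    with_bias = true_value + bias + rng.normal(0.0, 5.0, n)  # chance error plus a systematic offset
    print(f"n={n:>7,}  mean (random error only)={random_only.mean():7.2f}  "
          f"mean (with bias)={with_bias.mean():7.2f}")

# The first mean converges to 100.0; the second converges to 102.0,
# no matter how many measurements are collected.
```

More data, in other words, make a study more precise but not more accurate if the design itself is biased.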


Another type of bias that can occur, as well, is what Allison and his colleague Mark Cope, in their 2010 papers in both the International Journal of Obesity (London) and Acta Paediatrica, call “white hat bias,” which they define as “bias leading to distortion of information in the service of what might be perceived as righteous ends.” Examples of this kind of bias include misleading and inaccurate reporting of data from scientific studies by “exaggerating the strength of the evidence,” or issuing press reports that distort, misrepresent, or fail to present the actual facts of the research, or that omit any caveats or limitations. Cope and Allison note that white hat bias can be either intentional or unintentional and can ‘demonize’ or ‘sanctify’ research. Sumner et al. (PLOS ONE, 2016), for example, note that press releases “routinely condense complex scientific findings and theories into digestible packets” that may produce “unintended subtle exaggerations” when they use simple language. Regardless of which way white hat bias leans, though, it can be “sufficient to misguide readers,” say Cope and Allison.

William Blake's "Newton," 1795, Tate Britain.
Source: WikimediaCommons.org/Public Domain
Other difficulties that complicate research stem from the present reward system for scientific advancement, which may inadvertently foster an unhealthy climate whereby “journal publications become the currency of science,” writes Harris. One inherent difficulty in the scientific literature, for example, occurs when studies are more apt to be published if their results are perceived as statistically significant, unusually remarkable, or even improbable, particularly in what are called high-impact journals. This is publication bias, i.e., when publication depends more on a study’s outcome than on its overall quality. For Harris, the designation of high impact is “a measurement invented for commercial purposes to help sell ads and subscriptions.” Statistically significant results, incidentally, may have considerably less genuine clinical significance: a trial of a cancer treatment, for example, may show a statistically significant improvement in patient survival, but only by a few weeks--hardly a clinically meaningful gain for an individual patient.
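To make the distinction between statistical and clinical significance concrete, here is a minimal sketch in Python with entirely hypothetical trial numbers (not taken from any study cited above): given enough patients, a survival gain of roughly a week and a half yields a vanishingly small p-value.

```python
# Hypothetical illustration: a tiny p-value despite a clinically trivial benefit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5_000                              # patients per arm (hypothetical)
control = rng.normal(52.0, 10.0, n)    # survival in weeks, control arm
treated = rng.normal(53.5, 10.0, n)    # treatment adds ~1.5 weeks on average

result = stats.ttest_ind(treated, control)
print(f"mean benefit: {treated.mean() - control.mean():.1f} weeks")
print(f"p-value: {result.pvalue:.1e}")  # "statistically significant," yet only ~10 days of added survival
```

Whether an extra week and a half matters to a patient is a clinical judgment that no p-value can make.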

One of the procedures that scientists are beginning to recognize as a prerequisite for reproducibility and scientific integrity is full transparency, with the sharing of data as the default position. “Show your work and share, lessons we all learned in kindergarten,” says colloquium speaker Brian Nosek, a psychology professor at the University of Virginia. Some researchers have suggested that papers that fully disclose their data and methods receive a badge, a kind of seal of approval, as an incentive. Cottrell, who argues that peer review in science should no longer be anonymous and calls anonymity an “historical anachronism” (Research Ethics, 2014), has described science as “struggling with a crisis of confidence.” And since young researchers model themselves after the ethical conduct of their mentors, a laboratory’s culture becomes exceptionally important.

Joseph Wright of Derby (1734-1797): "An Experiment on a Bird in an Air Pump," 1768, National Gallery, London
Source: WikimediaCommons.org/Public Domain
Says Harris, “Getting biomedical research right means more than avoiding the obvious pitfalls…it’s also critical to think about whether the underlying assumptions are correct.” Getting it right, though, sometimes involves admitting errors and retracting papers whose results cannot be replicated, a practice that has grown only within the past fifteen years. Though the majority of journals now have retraction policies in place, actually retracting a paper is a tedious and thankless endeavor, according to Allison. He found many journals unwilling, even overtly resistant, to correct blatant inaccuracies and spurious data that he and his colleagues have discovered while reviewing hundreds of papers weekly for their obesityandenergetics.org website, which is freely available to over 80,000 subscribers. But as Harris says, “There is little funding and no glory involved in checking someone else’s work.”

Perhaps there should be. Mistakes in the literature, of course, are not just of academic interest: they can have far-reaching public health consequences. Many parents, for example, were wrongly discouraged from vaccinating their children because of the fraudulent connection between vaccinations and autism that had been published (and later retracted) in the reputable British journal The Lancet. In recent years, the organization Retraction Watch has emerged to report on retractions and to encourage this self-policing practice.

While scientists are becoming more cognizant of the need to self-police, they are also aware of the need to call public attention to the importance of science. Organizers of the April 22nd March for Science, in 500 cities worldwide, proclaim “Science, not Silence.” While hundreds of thousands of scientists have registered, not all scientists believe the march will accomplish its goal. Coastal geologist Robert S. Young, for example, in a New York Times op-ed back in January (1/31/17), argued that the march will politicize science even further and create a “mass spectacle.” Said Young, “We need storytellers, not marchers.”


The main impetus for the march is the looming threat, in the proposed budget of the Trump Administration, of a reduction of billions of dollars in public funding for scientific research. How very different from other administrations. It was, after all, an Act of Congress, signed in 1863 by then-President Abraham Lincoln, that first established the National Academy of Sciences, perhaps our nation’s most prestigious assemblage of scientific scholars, which now includes almost 500 Nobel Prize winners among its members. How has it come to this, though, that science needs both its marchers and its storytellers?

American humorist Mark Twain attacked science when he spoofed the nonsensical extrapolation of data in Life on the Mississippi. Attacks on science are nothing new.
Source: WikimediaCommons.org/Public Domain
The misguided policies of our current government notwithstanding, science itself, by its very complexity, has come under scrutiny and has become fair game for public assault. There has also been “a tendency to lump all opposition to science, despite topic, into a common ‘anti-science’ camp…with a ‘them vs us narrative,’” write McClain and Neeley (F1000Research, 2014). Whether attacks are worse now, or merely seem more prevalent because of increased social media exposure, is not clear. There have, though, always been attacks. Mark Twain, for example, spoofed the nonsensical extrapolation of data in Life on the Mississippi (1883): “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.” Twain’s quotation later appeared in Darrell Huff’s best-selling 1954 book, How to Lie with Statistics. Scientists themselves, as the recent Sackler Colloquium shows, are now acknowledging the need to confront their own shortcomings. Storytellers, after all, can tell tales that are factual or fictitious.


Marches and stories are useful for generating awareness, but they are not enough. Scientists will convince the public and funding agencies of the importance of their science only by their own unrelenting adherence to a culture that consistently fosters self-scrutiny and the value of self-correction. That is the science Karl Popper defined over fifty years ago.


Karl Popper's classic 1963 book, "Conjectures and Refutations," in which he writes, "Science must begin with myths and with the criticism of myths."
Source: photo by Sylvia R. Karasu, M.D.