Adequate Evidence for What?
Reported by Jeanine DeNoma 

     This article is based upon a lecture titled "Evidence for What? When Values and Rigor Clash" given by distinguished professor of philosophy Dr. Larry Laudan. He spoke at Oregon State University on May 1, 1997 for the Thomas Hart and Mary Jones Horning Lecture and Colloquium Series. The 1996-97 series, titled "What We Know to be True: Argument and Evidence," was sponsored jointly by the OSU Department of History and the Department of Philosophy.

     Laudan is currently a professor at the University of Hawaii and a research scholar at the Universidad Nacional Autonoma de Mexico. His specialty is the history and philosophy of science. He has addressed such questions as: How do scientists resolve controversy? How do scientists come to agreement? What cognitive values determine successful problem-solving among scientists? What makes decisions rational? Laudan has authored over fifty articles and eight books, including Progress and Its Problems, Science and Values, and Science and Relativism. He is currently working on two books, one on the concept of harm and another on how evidence is used.

What constitutes adequate evidence to accept a scientific hypothesis? What constitutes adequate evidence to convict a criminal? Larry Laudan discussed how a hypothesis, considered to be supported by evidence in one context, may be viewed as inadequately supported by the same evidence in another context. He focused on the "cultures" of science, law and regulatory activity and how evidence is evaluated in each of these fields.

     "Many Americans were perplexed and taken aback to discover that O.J. Simpson, having been acquitted of murdering his wife and Ronald Goldman, could then be successfully sued in court for wrongfully causing their deaths," said Laudan. For many this was the first time they had confronted a case where the same evidence supported a hypothesis in one context, but not in another. After all, either O.J. committed the murders or he did not. While many believe one jury or the other was wrong, most accept that a juror might come to diametrically opposed verdicts in the two differing circumstances: that, while there was enough evidence to convict O.J. under the standards of civil law, there might not have been enough under the higher standard of criminal law; or that, while it is more probable than not that O.J. committed the murders, the evidence did not demonstrate it "beyond a reasonable doubt."

     Within the practice of law there are two standards of evidence. In criminal law the accused is presumed innocent. The prosecution must establish that the defendant committed the crime and in doing so they must show, not just that it is more likely than not that the accused committed the crime, but that it is overwhelmingly likely that he did so. The defendant need show nothing. "In civil law, on the other hand, there is no presumption of guilt or innocence, nor does the burden of proof fall more heavily on one party or the other. The judge or jury is simply charged with finding out which hypothesis is more probable," said Laudan.

     The high standard of criminal law was created to prevent wrongful punishment and to avoid sending innocent men to jail. But there is a cost to such a high standard, said Laudan. Under these rules, criminals may be set free when the evidence is inadequate to find their guilt overwhelming. A system designed to prevent false positives will produce many false negatives. Laudan called this the "paradox of confidence": the more confident you are of avoiding a statistical Type I error (a false positive), the more likely you are to commit a Type II error (a false negative).
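The tradeoff Laudan describes can be sketched with a small simulation. The numbers below are purely illustrative assumptions, not from the lecture: innocent and guilty defendants produce "evidence scores" from two overlapping distributions, and raising the bar for conviction cuts false convictions while freeing more of the guilty.

```python
import random

random.seed(0)

# Hypothetical evidence-strength model (invented for illustration):
# innocent defendants yield scores around 0, guilty defendants around 2.
innocent = [random.gauss(0.0, 1.0) for _ in range(100_000)]
guilty = [random.gauss(2.0, 1.0) for _ in range(100_000)]

def error_rates(threshold):
    """Convict whenever the evidence score exceeds `threshold`."""
    false_pos = sum(s > threshold for s in innocent) / len(innocent)
    false_neg = sum(s <= threshold for s in guilty) / len(guilty)
    return false_pos, false_neg

# A low bar ("more likely than not") versus a high bar
# ("beyond a reasonable doubt").
fp_civil, fn_civil = error_rates(1.0)
fp_criminal, fn_criminal = error_rates(3.0)

print(f"civil-style bar:    {fp_civil:.1%} innocents convicted, "
      f"{fn_civil:.1%} guilty freed")
print(f"criminal-style bar: {fp_criminal:.1%} innocents convicted, "
      f"{fn_criminal:.1%} guilty freed")
```

Whatever thresholds one picks, the two error rates move in opposite directions, which is exactly the paradox of confidence.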

     Science, like law, also has two cultures of evidence: basic science and applied science.

     In basic science a hypothesis is evaluated by asking if it fits the facts better than the competing hypotheses. When enough evidence has been collected to convince basic scientists that a hypothesis is more likely than not, it gains acceptance. They do not wait until the evidence establishes it beyond a reasonable doubt. Scientists do not "prove" theories or even establish them beyond a reasonable doubt, Laudan pointed out. If scientists did, "the history of science would not be what it is, that is, a succession of theories that once looked good and were subsequently rejected because they broke down under new evidence."

     Applied science, unlike basic science but analogous to criminal law, starts with a presumption and has an asymmetrical burden of proof. In applied science, a hypothesis is considered false until proven true. For example, take the hypothesis that smoking causes lung cancer. The presumption is that it does not. The hypothesis of a causal link between smoking and lung cancer is not accepted until the evidence for such a link is overwhelming. The burden of proof falls on the advocates of the hypothesis.

     "If you want to make a scientifically respectable claim that smoking causes lung cancer, you have to show it to be...wildly improbable that there is no connection between lung cancer and smoking," said Laudan. When an applied scientist says there is a statistically significant link, he is saying that he is at least 90 to 95% sure there is an association. However, when he says "I haven't found a significant link," that does not mean the two are unrelated; it only means the evidence was not overwhelming. His test did not meet the 90% certainty level required by applied science. If his test showed the hypothesis to have an 80% probability, even though that would seem to establish that it was more likely than not to be linked, it would not be accepted and the scientist would report that no statistically significant link was found.
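Laudan's point about the 90-95% convention can be made concrete with a back-of-the-envelope significance test. The counts below are invented for illustration: they favor a link at roughly the 80% level, which is "more likely than not" but falls short of the convention, so the report would read "no statistically significant link." A minimal sketch using a one-sided two-proportion z-test:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical counts (illustrative only): cancer cases per 1,000 people
# in an exposed group and an unexposed group.
exposed_cases, unexposed_cases, n = 54, 46, 1000

p1, p2 = exposed_cases / n, unexposed_cases / n
pooled = (exposed_cases + unexposed_cases) / (2 * n)
se = math.sqrt(pooled * (1 - pooled) * (2 / n))
z = (p1 - p2) / se

confidence = normal_cdf(z)   # probability-style confidence in a link
p_value = 1.0 - confidence   # one-sided p-value

print(f"confidence in a link: {confidence:.0%}, p-value: {p_value:.2f}")
# The data make a link look more likely than not (~80% confidence),
# yet p > 0.05, so applied-science convention reports no significant link.
```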

     Laudan went on to discuss what happens when the "evidentiary cultures" of criminal law, civil law, basic science and applied science meet, as when scientific experts present evidence in tort law. To illustrate, he used this imaginary scenario: Arlo worked for 20 years as a cook in Alice's restaurant. Alice never installed the recommended ventilation equipment. Arlo, who recently developed lung cancer, is suing Alice. He blames his cancer on the carcinogens in Alice's kitchen. Both Arlo and Alice will bring scientific experts to court to help establish that cooking oil fumes did or did not cause Arlo's cancer.

     From the 1920s until the late 1980s the evidence presented would have been predictable, said Laudan. Civil courts operated under what was known as the Frye Rule, which required scientific experts to limit their testimony to opinions generally accepted within the relevant scientific community. Therefore, no scientist would have been allowed to testify that cooking oil caused cancer unless such a hypothesis had been accepted at the 90-95% certainty level by the scientific community. This despite the fact that the standard of civil law requires only that Arlo show it was more likely than not that it had caused his cancer.

     The insistence on using the standard of applied science "worked enormously to the advantage of powerful interests," Laudan explained. The Frye Rule commonly benefited corporate America, most frequently the defendant, and placed the "little guy," most frequently the plaintiff, at a disadvantage. "The only science [the plaintiffs] could produce on their behalf had to be near-certainty applied science, even though the standards of tort law required merely showing that something was more probable than not," said Laudan.

     "Because of this tension, the Frye Rule began to be abandoned. In its place, I am sorry to report, is a doctrine which errs in the opposite direction and permits virtually any claim to count as scientific evidence," said Laudan. Testimony no longer necessarily reflects the majority view in the scientific community. Anyone trained as an expert can say whatever he chooses; there need not be sufficient evidence to support his claim; and the only rebuttal will be the testimony of an opposing expert.

     "Instead of replacing the overly tough standards of applied science with the rather more lenient standards of civil law allowing for a hypothesis to be supported by the preponderance of evidence, many courts have turned the process of using scientific experts into a three-ring free-for-all," said Laudan. Many scientists are seething over courtroom abuses of scientific arguments, while many frivolous tort cases have been won. "It seems to me there has to be some middle ground," said Laudan.

     Government regulation, like law and science, has two "evidential cultures." These can be seen by comparing the evidentiary standards of the Food and Drug Administration (FDA) to those of the Environmental Protection Agency (EPA).

     The FDA is charged with ensuring the safety of our food and the effectiveness of our drugs. It does so using the standards of evidence of applied science. The FDA works under the presumption that a drug is unsafe until proven otherwise, and it requires this evidence at the 95% certainty level. Manufacturers of drugs and food additives carry the burden of showing their product's "probability of being dangerous is vanishingly small." As a result of these standards, the cost of getting new drugs approved is very high and many useful drugs never make it to market: A drug which has an 80% chance of being effective does not meet the 95% convention and, therefore, will not be approved.

     By contrast, the EPA, which is charged with controlling air and water quality, presumes a chemical is safe until it has been shown to be dangerous. A chemical or pollutant cannot be banned unless it is shown to have a 95% probability of being dangerous (that is, a 5% or less probability of being safe). So if the EPA wanted to ban secondhand cigarette smoke, then under this presumption of safety the burden of establishing danger falls on the EPA, not on the tobacco industry: the EPA would need to show there is less than a 5% probability that secondhand smoke is safe.

     The differences in presumptions and burdens of proof between the FDA and the EPA, as each agency was set up by Congress, have their own consequences. The FDA errs by keeping useful drugs off the market, while the EPA errs by allowing exposure to dangerous chemicals through our air and water.

     "Conflicts of a very bizarre kind often arise from these two standards," pointed out Laudan. "For example, the EPA monitors drinking water; it will allow substances in the drinking water unless they have been proven dangerous. The FDA controls soft drinks and will not allow these same substances to be used as additives in soft drinks until they have been proven safe. Which policy is right?

     "My own view is that presumptions and burdens of proof are artificial and unwelcome impediments to inquiry. Basic science and civil law have, I think, got it right. The central question should always be: Is this hypothesis, in light of the evidence, more likely than not? Should I believe it?" said Laudan.

     He suggested we remove all artificially imposed burdens of proof, start with no presumptions, and let the preponderance of evidence guide what we believe and what actions we should take on that belief. Creating standards sensitive to only one of the two types of error, he argued, ignores the seriousness of both false positives and false negatives. While we want to avoid sending innocent men to jail, we certainly don't want to let guilty men go free. Likewise, while we don't want to prescribe unsafe or ineffective drugs, we also do not want to keep useful drugs off the market.

     "Insisting that it is irrelevant that a belief is more probable than not ... blurs the boundary line between belief and action," said Laudan.

     "I think it is important to distinguish what we should believe in light of the available evidence from what we then do on the strength of that belief when it is a matter of needing to translate a belief into certain forms of action. It is there, I think, that we should make a calculation into the relative costs of making mistakes with respect to false positives and false negatives," Laudan said.
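The calculation Laudan gestures at, weighing the relative costs of the two kinds of mistakes when translating belief into action, can be written down as a simple expected-cost rule. The sketch below is my own illustration, not Laudan's: act once the probability of the hypothesis exceeds the ratio of the false-positive cost to the total cost of both errors.

```python
def conviction_threshold(cost_false_positive, cost_false_negative):
    """Probability of guilt above which convicting has lower expected
    cost than acquitting.  Convict when
        (1 - P) * cost_false_positive < P * cost_false_negative,
    i.e. when P > cost_fp / (cost_fp + cost_fn)."""
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# Equal error costs recover the civil-law "preponderance" bar ...
print(conviction_threshold(1, 1))   # 0.5
# ... while weighting a wrongful conviction ten times worse than a
# wrongful acquittal (the traditional Blackstone ratio) pushes the bar
# near "beyond a reasonable doubt".
print(conviction_threshold(10, 1))  # ~0.909
```

On this view the different evidentiary standards are not arbitrary: each encodes a judgment about how costly the two errors are relative to one another.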

     In the discussion period following Laudan's talk, David Bella, a water resource engineering professor at Oregon State University, raised another set of issues concerning hypothesis testing. "Let's take engineering, I don't think we ever prove anything with 90 to 95% certainty. What we do is rig the calculations with safety factors ... so that we gain desirable outcomes and avoid bad ones in the face of huge amounts of ignorance. And I think the same thing goes for environmental questions." For example, argued Bella, the standards are different when something is biodegradable versus when it is not; or in engineering, when a lower safety standard is set for a temporary warehouse than for a hospital.

     "Lots of times," Bella elaborated, "there are institutional pressures that shape the burden of proof. The tobacco industry, for example, or the salmon crisis. We have a crisis because the burden of proof on complex problems is set so scientists will never be able to meet it."

     Several audience members expressed concern about throwing out standards such as "beyond a reasonable doubt" in criminal convictions. As one audience member expressed it, "It seems to me it is very idealistic as laid out here, and very simplistic... We set evidentiary standards for each of the areas you discussed based on a value which society sets in a much larger context."

     "Is it an oversimplification of what is actually taking place?" mused Laudan in response. "Or is the idealism in the thought that mere intellectual critiques like mine cannot do anything to change or grapple with the problem?"

© 2001 Oregonians for Rationality