Mad Cows, Mad Corn and Mad Communities
By Jeanine DeNoma
Risk perception research is a multi-disciplinary field with insights coming from psychology, sociology, political science, and other fields of study. Early research in psychology showed that people consistently overestimate the occurrences of rare and dramatic causes of fatalities while underestimating the frequency of more common causes of death (Lichtenstein et al. 1978). Deaths from botulism, for example, are estimated to be more common than they actually are, while deaths from diabetes and stroke are estimated as less frequent than they actually are. Other studies show that experts and lay people rate the risks posed by new technologies differently. Lay people consider nuclear power to be a highly risky technology, while experts rank it as relatively safe. On the other hand, lay people perceive little risk from medical x-rays, while experts rank risks from x-rays much higher. These differences sometimes leave both experts and lay people frustrated and angry. Scientists often feel the public is irrational; lay people complain that scientists do not understand what is really important about the risks posed by new technologies. Research into the perception of risk has focused on this frustration, leading researchers to ask why differences between lay people and experts exist, and why these differences persist despite campaigns to educate people about the risks of technology.
Traditional risk assessment
Risk assessments are traditionally done by a small group of experts who review or conduct studies to obtain better technical knowledge about the risks posed by an activity or a new technology. These experts generally define risk as “the chance of injury, damage or loss,” commonly measured by the number of deaths caused, reduction in life expectancy, etc. To lay people, however, risk means something entirely different. Quantitative risk measurements are “absolutely crucial for informed and accountable decision making,” said Melissa Finucane of Decision Research. “Recently, however, social scientists have been rejecting the notion of ‘real’ or ‘objective risk’ and saying, instead, that risk is inherently subjective, that risk is a social construct that means different things to different people.”
In other words, while the actual danger posed by a technology or action is real, the perception of its risk is subjective. Technical experts’ perception of risk correlates closely with estimates of fatalities, while lay people’s risk perceptions include other factors. Understanding how and why individuals evaluate the riskiness of a technology or an activity can help society make informed decisions about acceptable risk levels, and will facilitate communication between lay people and scientists. “Plenty of hard-learned lessons have taught people that risk management without the involvement of the public is a doomed enterprise,” said Finucane.
The unknown and the dreaded
In the early 1980s, Slovic et al. (1985) showed that lay people’s perception of risk is a function of two main factors: unknown risk and dread. Unknown risks are those that are unobservable, unknown to those exposed, new, and delayed in their consequences. Dread reflects the degree to which a technology is perceived to be catastrophic in potential, involuntary, uncontrollable, affecting future generations, and inequitable in its distribution of risks and benefits (i.e., the risks go to those who benefit the least). If a graph is created with dread plotted along the x-axis and unknown risk along the y-axis, a technology’s placement on this graph is predictive of the social reaction, media response, and demand for regulation it will receive, said Finucane (see Figure 1). Consider two current controversies that have provoked public outrage, mad cow disease and genetically modified organisms (GMOs), and their placement in terms of dread and unknown risk.
When it was learned in the mid-1990s that mad cow disease was linked to a variant of Creutzfeldt-Jakob disease in humans, the political, economic, and social consequences were dramatic. Fear of the disease placed it squarely in the domain of high dread and the unknown. The disease rates high in dread: its symptoms are horrific, it is fatal, the risk is involuntary in that exposure can only be avoided by not eating meat, and the risk is seen to be uncontrollable in that cattle are raised by strangers and processed and distributed by large companies. It also ranks high in unknown elements because scientists do not understand the disease well, there is no known vaccine or cure, and the onset of symptoms is delayed for years.
GMOs also rate high in measures of dread and unknown risk. Interestingly, the concern has been over GM foods and not GM medicines, a difference the two-dimensional framework predicts from their respective placements. GM foods concern people because, without adequate labeling, their use is involuntary; they are seen to be inequitable because their benefits go largely to seed manufacturers while the risks go to the consumer. There are elements of unknown risk from GM foods because they are new enough that scientists do not have long-term data about their risk to people and the environment; their effects may be delayed; and they are ‘mysterious’ to consumers who do not understand genetic engineering. GM medicines, on the other hand, are used voluntarily, generally on the recommendation of a trusted doctor, and it is the consumer who receives the benefits.
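The framework can be made concrete with a small sketch. The one below places a few of the hazards discussed here on hypothetical 0-to-1 dread and unknown-risk scales and sorts them by how far they sit into the high-dread, high-unknown quadrant; the scores and the distance measure are assumptions for illustration, not data from Slovic’s or Finucane’s studies.

```python
# Illustrative sketch of the two-factor "dread / unknown risk" space described above.
# The scores are hypothetical placeholders on a 0-to-1 scale, not values from the
# psychometric studies cited in this article; they only show how placement in the
# space can be compared across hazards.
from math import hypot

# (dread score, unknown-risk score) -- assumed for illustration only
hazards = {
    "mad cow disease": (0.9, 0.9),
    "GM foods":        (0.7, 0.8),
    "GM medicines":    (0.3, 0.6),
    "medical x-rays":  (0.3, 0.2),
}

# Distance from the low-dread, low-unknown corner is used here as a rough proxy
# for how strong a public reaction the framework would predict.
for name, (dread, unknown) in sorted(
        hazards.items(), key=lambda kv: hypot(*kv[1]), reverse=True):
    print(f"{name:16s} dread={dread:.1f}  unknown={unknown:.1f}  "
          f"predicted salience={hypot(dread, unknown):.2f}")
```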
GMOs demonstrate another interesting aspect of risk perception: cultural and national differences, said Finucane. GMOs arrived in the marketplace just as the global economy was developing, and this international context adds social and cultural complexity to the genetic engineering debate. Initially, GM foods were accepted in the US, while European countries opposed them. There were reasons why Europeans had a heightened concern not seen in the US. Lack of discussion about European field trials increased the sense of the unknown. Coincidentally, a number of health crises, such as mad cow disease and dioxin contamination in Coca-Cola products, surfaced in Europe just as GM products were set to go on the market. These crises left European consumers less trusting of authorities and technology.
Other factors played into concerns in some countries. France, for example, was concerned about the invasion of the American fast-food industry and was attempting to reassert “culinary sovereignty” over its foods. This threat of “invasion” added cultural concerns that included elements of uncontrollability, involuntariness, and inequity.
Other cultural issues reflected concerns that genetic manipulation is “unnatural.” Some Christian churches said that it is immoral and akin to playing God to cross species that do not naturally mate. The New Zealand government placed a moratorium on GM products due, in part, to concerns expressed by the Maori people who see a spiritual essence in the ecosystem and the relationship among people, flora, and fauna, explained Finucane. They were concerned about the recognition and protection of Maori cultural knowledge, and their spiritual, ethical and cultural values. “The point is,” said Finucane, “there are worldviews and belief systems other than the scientific that need to be taken into consideration and be made more salient if we are to reach any consensus about what is an acceptable risk.”
Apart from the scientific worldview, how else might risk be examined? One way is to look beyond toxicologists’ assessments and insurance actuarial tables and examine how and why non-experts perceive risk as they do. Current research has shown that different segments of the population perceive risk differently. Men, for example, consistently judge risks to be lower than do women.
Gender differences
Traditionally, gender differences in risk perception were explained in one of two ways:
1. Women were thought to be more irrational, more capricious, and less educable than men. Any deviations from the “objective risk estimates” were viewed as irrational and thought to reflect a lack of understanding of complex technical and scientific information. Women, whose estimates deviated the most, were assumed to be less knowledgeable. Recent studies have shown, however, that gender differences persist even between men and women who are highly trained in the fields of toxicology and risk assessment, indicating it is unlikely the difference is due to either education or irrationality.
2. Others have argued the difference is biologically based, possibly reflecting an adaptive advantage for men to accept risky behavior. They suggest that women, who nurture life and are more vulnerable to violence, may have a heightened sense of risk. Research has shown that while biology may be a factor, it cannot be the entire answer, said Finucane.
In a 1994 study by Flynn et al., white males consistently rated the risk of technologies and actions lower than any other group, including white females, males from other ethnic groups, and females of other ethnic groups. In fact, white females’ perceptions of risk were closer to those of males and females from non-white racial groups. Non-white males and non-white females had more similar risk perceptions than did white males and white females. “If biology explained the risk perception differences between men and women then you would expect it would transcend racial boundaries. So while biology might explain some of the differences, it will not account for them entirely,” said Finucane.
The “white male effect”
The observation that white males perceive risks to be lower than other groups do was dubbed the “white male effect.” The question then became what accounts for this difference. One hypothesis was that it reflects social and political factors. Flynn’s research showed “the white-male effect seemed to be caused by about 30 percent of the white male sample that judged risks to be extremely low,” write Finucane and Paul Slovic, also of Decision Research (Finucane and Slovic, 1999). This subset of white males also places more trust in experts and disfavors citizen decision making on risk issues. Perhaps white males perceive technologies as less risky “because they are more involved in creating, managing, controlling, and benefitting from technology...Indeed, risk perceptions seem to be related to individuals’ power to influence decisions about the use of hazards,” write Finucane and Slovic (1999).
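To see how a relatively small subgroup can pull down a whole group’s average, here is a back-of-the-envelope sketch with made-up numbers; only the roughly 30 percent proportion comes from the passage above, while the sample size and rating values are hypothetical.

```python
# Hypothetical arithmetic showing how a low-scoring subgroup shifts a group mean.
# Only the ~30 percent proportion comes from the research described above; the
# sample size and the risk-rating values below are invented for illustration.
n_total = 1000                 # assumed white-male sample size
n_low = int(0.30 * n_total)    # the ~30% who judged risks to be extremely low
n_rest = n_total - n_low

mean_low = 1.5                 # assumed mean rating for the low-risk subgroup (1 = very low risk)
mean_rest = 3.0                # assumed mean rating for the remaining 70%

group_mean = (n_low * mean_low + n_rest * mean_rest) / n_total
print(f"Mean rating without the low-scoring subgroup: {mean_rest:.2f}")
print(f"Mean rating for the whole group:              {group_mean:.2f}")
# With these numbers the whole-group mean (2.55) sits well below the majority's
# mean (3.00), so the overall effect can be driven largely by the extreme subgroup.
```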
Finucane’s recent research has examined the social and political factors that affect risk perception and that explain the white male effect. Finucane et al. (2000) surveyed 1,204 men and women from a variety of racial and ethnic backgrounds, asking them to rate the risks from various activities and technologies. The study over-sampled minority groups, which allowed the researchers to reliably analyze the data for Asian, African-American, and Hispanic respondents by gender. Respondents were also surveyed about their worldviews. Among the findings were:
1. White males were more likely to endorse hierarchical views, as seen in their agreement with the statement “When a risk is very small it is OK for society to impose that risk on individuals without their consent.”
2. White males were more likely to disagree with egalitarian statements, as seen in their disagreement with the statement “What the world needs is a more equal distribution of wealth and power.”
3. White males were more distrustful of community decision making, and more likely to disagree with the statement “People living near a nuclear power plant should have the ability to vote and close that power plant if they believe it is unsafe for them.”
“This provides a sense of the kinds of socio-political attitudes that might come into play to explain some of the differences in risk perceptions,” said Finucane.
Social, political and cultural attitudes define one’s worldview. Hierarchists tend to support the established societal order, trust authorities and experts, and dislike civil disobedience, while egalitarians tend to be distrustful of institutions, support an even distribution of wealth and power, and dislike rank-role differentiation. An individual’s hierarchical or egalitarian attitudes tend to correlate with their risk perceptions. Hierarchists are likely to feel their lack of knowledge leaves them unqualified to make risk assessments and they prefer to rely upon standards set by experts. Egalitarians, on the other hand, tend to want information with which to make their own decisions. “The reason it is important to look at the idea of worldviews for explaining risk,” said Finucane, “is that it has direct implications for which safety standards will be supported.”
Social trust
Research indicates that shared values are one of the foundations of trust. If one perceives that an institution shares one’s values, one will deem it trustworthy; once trust is established, one expects the institution’s future actions to be guided by those shared values. Government agencies and private industry have become concerned about the public’s lack of trust in them. “Social trust—the willingness of someone to rely on an agency or its employees for decisions and policies—is a critical component in people’s risk perception and in the gaps between experts and lay people about what constitutes an acceptable risk,” said Finucane. Agencies and industries would like to improve their trust relationship with the public, and understanding how the public perceives risk is an important step toward that goal. Risk perception research will also improve communication among individuals with different worldviews and help communities come together to confront risks.
REFERENCES AND READING
Bennett, R. 2000. Risky business. Science News 158:190-191.
Dake, K. 1991. Orienting dispositions in the perception of risk: An analysis of contemporary worldviews and cultural biases. Journal of Cross-Cultural Psychology 22:61-82.
Finucane, M.L., E. Peters, and P. Slovic. (in press). Judgment and decision making: The dance of affect and reason. In S.L. Schneider and J. Shanteau (Eds.), Emerging perspectives on judgment and decision research. Cambridge, England: Cambridge University Press.
Finucane, M.L., P. Slovic, C.K. Mertz, J. Flynn, and T.A. Satterfield. 2000. Gender, race, and perceived risk: The “white male” effect. Health, Risk & Society 2(2):159-172.
Finucane, M.L., and P. Slovic. 1999. Risk and the white male: A perspective on perspectives. Decision Research, 1201 Oak St., Eugene, OR 97401. (Also published in Swedish in Framtider, Journal of the Institute for Future Studies 19(2):8-9. Stockholm, Sweden. 1999.)
Flynn, J., P. Slovic, and C.K. Mertz. 1994. Gender, race, and perception of environmental health risks. Risk Analysis 14(6):1101-1108.
Gupta, A. 2000. Governing trade in genetically modified organisms: The Cartagena Protocol on Biosafety. Environment 42(4):23-33.
Lichtenstein, S., P. Slovic, B. Fischhoff, M. Layman, and B. Combs. 1978. Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory 4:551-578.
Paarlberg, R. 2000. Genetically modified crops in developing countries: Promise or peril? Environment 42(1):19-27.
Sheehy, H., M. Legault, and D. Ireland. 1996. Consumers and biotechnology: A synopsis of survey and focus group research. Ottawa, ON: Office of Consumer Affairs, Industry Canada.
Slovic, P., B. Fischhoff, and S. Lichtenstein. 1985. Characterizing perceived risk. In Perilous Progress: Managing the Hazards of Technology. R.W. Kates, C. Hohenemser, and J.X. Kasperson (Eds.), pp. 91-125. Boulder, CO: Westview.
Slovic, P. 1987. Perception of risk. Science 236:280-285.
Slovic, P., B. Fischhoff, and S. Lichtenstein. 1980. Facts and fears: Understanding perceived risk. In Societal Risk Assessment: How Safe is Safe Enough? R.C. Schwing and W.A. Albers, Jr. (Eds.), pp. 181-216. New York: Plenum.
Slovic, P., T. Malmfors, C.K. Mertz, N. Neil, and I.F.H. Purchase. 1997. Evaluating chemical risks: Results of a survey of the British Toxicology Society. Human and Experimental Toxicology 16:289-304.