A Distinctive Family of Errors
By Phil Pennington
Phil Pennington, a physicist, taught on the faculty of Portland State University and has a special interest in problems of education and the misunderstanding of physics. The Fall 1995 and Winter 1996 issues of Pro Facto carried Parts I and II of this series.
One dark night in the mid-1950s I was driving some Sierra Club desert-peak climbing enthusiasts over remote dirt back roads in the Mojave Desert. The climber in the front passenger seat had given up trying to navigate using my topographic maps. We weren’t lost, but each junction seemed to have five different roads taking off in six different directions. “These roads are like physics: impossible to understand because there’s so much to remember,” complained my ex-navigator.
That comment struck many dissonant chords, not least because he was a language major, a Dane taking a doctorate in French at UCLA. He spoke remarkably flawless English, as well as several other languages. Learning French, or any other language, demands monumental rote memorization. The elementary physics course he was taking, on the other hand, consisted of a handful of simple principles and involved almost no memorization. What’s more, a map is a simple, easy-to-use representation of terrain.
Different people might use a map, conceptualize a physics principle, or learn a foreign language in ways so different that communication is virtually impossible. Science teaching persistently fails to communicate. This problem is extensive. Richard Feynman once reviewed all the K-12 science texts offered for evaluation to the California Board of Education. In the chapter “Judging Books by Their Covers” in Surely You’re Joking, Mr. Feynman!, he wrote, “Everything was written by somebody who didn’t know what the hell he was talking about, so it was a little bit wrong, always! ... the books are lousy, UNIVERSALLY LOUSY! ... that’s the way all the books were. They said things that were useless, mixed-up, ambiguous, confusing, and partially incorrect. How anybody can learn science from these books, I don’t know, because it’s not science.”
These teachers and authors communicated science as they saw it, but what they saw was often wrong. Yet they didn’t realize it. Feynman observed a serious problem that gets little attention: Much of science is simply not what it at first seems. As a student and a professor of physics I repeatedly observed teachers failing to communicate scientific concepts and students having difficulties comprehending even the most basic principles of physics. Let’s examine the source of some of these misunderstandings. Let’s try to boost communication by sharing and examining our ...
Examples, models, and exemplars
The prevalent misconception that air temperature affects actinicity of sunlight, and the offset between the days of earliest sunset and latest sunrise (see Pro Facto, Winter 1996) are examples of “obvious yet unobserved” phenomena—one aspect of the problem. The error of “flip-flop thinking,” the failure to recognize multiplicative interaction, thereby mixing one correct statement with one incorrect statement, is illustrated with the exemplar: The area of a corn field is not produced by its length; it’s produced by its width. (Exemplars are easily understood examples we can use to illuminate more obscure cases). The agreement between a map and the terrain is a useful exemplar for the elements of “correctness” or “truth” as used in science, as well as an example of the “obvious, yet largely unobserved” persistence of errors. Human vision is a revealing model for understanding how we do and do not understand the more abstract principles of science and mathematics. Color vision also provides examples, models, and exemplars for many “obvious, yet unobserved” phenomena and “simple, but difficult” concepts.
What do I see? What do you see?
Vision is the entry port for much of what we know. It doesn’t reveal all; in fact, it reveals little. Nevertheless, we usually feel it is complete. That feeling of completeness is an illusion.
In the words of Stephen Jay Gould (The New York Review of Books, April 4, 1996), human vision is “a substantial set of largely independent attributes.” For example, our vision has an image focused by a lens of adjustable focal length, edge detection in the retina, a primitive polarization-detection system, limited wavelength discrimination (color), depth from stereopsis (the image differences produced by the separation of our two eyes), plus many other kinds of visual information processed by the brain, such as motion detection and line-attitude discrimination. Our ancestors, interacting with their surroundings, gradually acquired these attributes through evolution, although different species perceive somewhat different aspects of the world.
Gould, however, was not describing vision but intelligence. His was an emphatic rejection of the “old notion of a single scalar quantity recording overall mental might.” The fallacy of a scalar measure of intelligence becomes obvious if we consider the parallel fallacy of using visual acuity to test for color vision versus colorblindness. A single scalar quantity, visual acuity or IQ, cannot represent a multicomponent entity. We must measure largely independent attributes independently. Vision is easy to understand; intelligence is more difficult. That is the power of the model.
The most obvious attribute of measurement is the ordering it leads to. Line up the kids in the class by height: the closer two kids are in height, the closer together they are in the line. That’s a rank order. Now do the same with colors. Arrange a large number of paint samples so that proximity and similarity correlate. Putting them in a line won’t work. If you have normal color vision you need a three-dimensional array. If you have one of the common “missing cone” color blindnesses (protanopia or deuteranopia), you need a two-dimensional array. Only if you are totally colorblind (this is very rare) can you place the color samples in a line. Multicomponent color is an exemplar for multicomponent measurement.
Virtually everyone automatically associates a unique rank order with any measure. Such ordering is transitive: If Fredrick is larger than Johnny and Johnny is larger than Bob, then Fredrick is larger than Bob. That’s OK if “larger” always means “taller.” Is the flaw obvious? If the boys are on the wrestling team, “larger” could mean “heavier.” Fredrick is a bean pole, taller than Bob but much lighter. “Larger-smaller” might have more than one component. Transitivity vanishes with multicomponent measure, and rank orders are not unique.
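A toy sketch in Python (the boys and their numbers are invented for illustration) makes the point concrete: sort the same three boys by height and then by weight and you get two different, equally valid rank orders.

# A minimal sketch of the point above: with a multicomponent measure
# ("larger" = taller? heavier?) there is no single rank order.
# The boys and their numbers are hypothetical.
boys = {
    "Fredrick": {"height_cm": 190, "weight_kg": 62},   # the bean pole
    "Johnny":   {"height_cm": 180, "weight_kg": 78},
    "Bob":      {"height_cm": 175, "weight_kg": 88},
}

by_height = sorted(boys, key=lambda b: boys[b]["height_cm"], reverse=True)
by_weight = sorted(boys, key=lambda b: boys[b]["weight_kg"], reverse=True)

print("larger = taller: ", by_height)   # ['Fredrick', 'Johnny', 'Bob']
print("larger = heavier:", by_weight)   # ['Bob', 'Johnny', 'Fredrick']
# Two perfectly valid orderings, and they disagree: the measure
# (height, weight) is a two-component vector, not a scalar.

Neither ordering is wrong; the mistake is expecting a single ordering to exist at all.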
Furthermore, the problem is not which component or rank order is “correct.” All could be correct; the problem is making the measure complete. Confusing validity with completeness is a common error. Human color vision is not complete. It has only three components out of a possible infinity. Spectroscopic color requires an infinity of dimensions to arrange all colors so that proximity and similarity correlate. Our limitation blinds us to much. We are, for example, blind to the profound differences between line spectra, like that of a sodium or mercury vapor lamp; continuous spectra, like that of an ordinary household tungsten bulb; and mixed continuous and line spectra, like that of a fluorescent lamp. The exact color matches we make with our human color vision, like matches between a flower petal and a paint mixture, are almost always no match at all to a spectroscope: We are colorblind to those differences. Evolution gave us only the faintest shadow of what color vision could be.
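This “colorblindness” of three-component vision can be made concrete with a small numerical sketch. The cone-sensitivity numbers below are invented for illustration; the point is only that any two spectra differing by something the three cone responses cannot detect look identical to us (a metameric pair), however different they are to a spectroscope.

# Two physically different spectra that three hypothetical cone types
# cannot tell apart.  All numbers are invented for illustration.
import numpy as np

# Hypothetical sensitivities of three cone types at 6 sample wavelengths.
cones = np.array([
    [0.1, 0.3, 0.9, 0.6, 0.2, 0.0],   # "short" cone
    [0.0, 0.2, 0.6, 0.9, 0.5, 0.1],   # "medium" cone
    [0.0, 0.1, 0.3, 0.7, 0.9, 0.4],   # "long" cone
])

spectrum_a = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 0.5])

# Any direction in the null space of the 3x6 cone matrix changes the
# spectrum without changing what the cones report; the SVD supplies one.
_, _, vt = np.linalg.svd(cones)
null_vec = vt[-1]                     # a direction the cones cannot see
spectrum_b = spectrum_a + 0.5 * null_vec

print(np.allclose(cones @ spectrum_a, cones @ spectrum_b))   # True
print(np.allclose(spectrum_a, spectrum_b))                   # False
# Identical to the eye, different to the spectroscope.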
From seeing to reasoning
Science has taken us far beyond the evolution-determined edges of human perception. Extensions in reasoning accompanied extensions in perception. Much of science’s reasoning involves relationships of multiple elements and much misunderstanding of science involves the failure to sense such relationships. The “New Age” use of quantum mechanics and relativity to support the hypotheses of “mind over matter” is an example of such misunderstanding. In Part I (Pro Facto, Fall 1995) we saw that New Age misunderstanding of Heisenberg’s uncertainty principle misses the point that position and momentum constitute inseverable elements of a multi-element measurement. The uncertainty principle merely states that the information content of that measurement cannot be infinite. There is nothing strange about that. The mystery lying at the edge of human comprehension is in the (observed) wave-like nature of matter, not in (unobserved) psychic waves emanating from our brains.
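For reference, the standard textbook statement of the principle (the formula is not from the original article) bounds only the product of the spreads in the two inseverable components, where Δx and Δp are the spreads in position and momentum and ħ is Planck’s constant divided by 2π:

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}$$

Either spread alone can be made as small as you like; what cannot happen is for both to shrink without limit at once, which is all that the finite information content of the joint measurement amounts to.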
New Age use of Einstein’s special relativity also misses Einstein’s actual accomplishment. He solved profound puzzles about electricity and magnetism, which proved to be merely components of an inseverable whole: the electromagnetic interaction of charged particles. And his well-known discovery that “time is the fourth dimension” added a time component to the three inseverable spatial components of position.
Multicomponent-ness has many important consequences, to which New Age writing usually seems oblivious. For example, individual components change when we change our frame of reference, but physically significant quantities do not. The coordinates of the end of a stick change when we rotate the coordinate system, but the length of the stick does not change. If we don’t realize each coordinate is incomplete without its mates, we could get the impression our arbitrary choice of coordinate system is a “mind over matter” influence. Einstein based his theory of relativity on an a priori presumption that such “mind over matter” cannot be. Einstein did not show that “everything is relative” to our individual viewpoints. That’s a discovery every normal child makes around the age of seven when he realizes how to draw a picture as seen from a viewpoint different from his own. And long before Einstein, Galileo wrote equations that describe the relativity of viewing from a different viewpoint. Einstein’s relativity was a theory for predicting the interactions between charged particles (electrons, protons, and the like). It’s beyond the edge of obvious human comprehension, but it contains no evidence for mind over matter.
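A minimal numerical sketch of that invariance (the numbers are invented): rotate the coordinate frame and the coordinates of the stick’s end change, but its length does not.

# Components change with the frame of reference; the length does not.
import math

x, y = 3.0, 4.0                        # end of the stick in one frame
theta = math.radians(35)               # an arbitrary rotation of the frame

x2 = x * math.cos(theta) + y * math.sin(theta)
y2 = -x * math.sin(theta) + y * math.cos(theta)

print(x, y)                            # 3.0 4.0
print(round(x2, 3), round(y2, 3))      # different numbers in the new frame
print(math.hypot(x, y), math.hypot(x2, y2))   # length is 5.0 in both frames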
Correct perception of our multiple-element world recognizes the need to weigh and interrelate all relevant evidence, including disconfirming evidence. If we sense each thing in life as deriving from “The Cause,” rather than a complex web of causes and effects, we become open to one of the most common and most serious errors: pointing to confirming evidence as proof. We have found The Cause! Advertising and arguments in jury trials are suffused with this sense, with this nonsense.
The psychic tells you to look under the book after you have picked the number “seven.” Under the book is a note reading: “I knew you would pick ‘seven’—proof that I’m psychic.” Of course reading all the notes hidden around the room might lead a skeptic to a different conclusion. This magician’s trick is an exemplar for the fallacy in “accept confirmations and ignore disconfirmations.”
Another example is that of the man in the supermarket checkout line who exults, “Look! I’ve won two dollars on a one-dollar lottery ticket,” focusing on the one win and willingly ignoring his other 11 losing tickets. State lotteries are reliable mechanisms for moving money from billfolds to state treasuries. They are a clever tax on logical and statistical naivete. The mathematics of lotteries predicts the outcome of playing with ever-increasing accuracy the more times one plays. The outcome is a loss proportional to the total amount played, usually a loss of about 40% to 70%. Statistical behavior is erratic with occasional extremes, but the big picture is reliably predictable. The error of focusing on the desirable while ignoring the undesirable is devastating to the gambler.
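The arithmetic behind that reliability fits in a few lines. The ticket price, prize table, and odds below are invented for illustration; real lotteries differ, but the expected-value calculation is the same.

# Back-of-the-envelope expected value for a hypothetical $1 scratch ticket.
price = 1.00
prizes = {              # prize amount -> probability of winning it (invented)
    2.00:   1 / 12,
    20.00:  1 / 100,
    100.00: 1 / 1_000,
}

expected_winnings = sum(amount * p for amount, p in prizes.items())
expected_loss = price - expected_winnings

print(f"expected winnings per $1 ticket: ${expected_winnings:.2f}")
print(f"expected loss per ticket:        ${expected_loss:.2f}")
print(f"loss rate: {expected_loss / price:.0%}")
# One $2 win among a dozen tickets is about what the odds say to expect,
# and so is losing roughly half of everything wagered in the long run.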
From reasoning to knowing
Science has standards for truth, validity, and correctness. As an exemplar for scientific truth, consider maps. If you need to get from home to a place in town you’ve never been, you need a map that accurately shows the streets. Maps are like the equations and theories of science. They represent the world in ways that, once you understand how to use them, let you interact with the world successfully: you can select among alternative actions so that you correctly anticipate the outcomes of your actions with probability as far above pure chance as possible.
Most published maps are like Feynman’s science texts: “Universally lousy!” They show entire grids of nonexistent streets; they mistakenly show boundaries of various kinds of streets; they leave off important streets. They commit creative cartography!
Inaccurate maps are like the misconceptions of science. They mislead. They miss important subtlety and detail. Using such a map limits success in route-finding. Scientific theories are maps of abstract spaces. They are useful to the extent that they are correct, sufficiently complete, and understood. Maps, science, and any other useful knowledge need some criterion of correspondence with reality. That correspondence is what makes knowledge useful.
From knowing to understanding
Map errors are bizarrely persistent. Those lousy maps remain uncorrected edition after edition, often for decades. Their errors seem permanent. Part of this peculiar persistence may stem from difficulties many people have reading maps. A runner’s booklet I made for our YMCA described routes in words, and pictured them on maps. I didn’t describe the more complex routes, but suggested that runners consult the maps. Objections! “The map means nothing to me,” complained a fellow running committee member. “Only the verbal descriptions are useful.” (I, on the other hand, can’t use the verbal descriptions—too much to memorize.) Maps are meaningless to many.
Even stranger, to anyone who frequently relies on maps, is that many people do not see agreement between map and terrain as critical to the suitability of a map. They defend those terribly inaccurate maps! “That map is useful to a city planner because it shows ‘dedicated’ streets which could be made into streets,” one city planner told me of a map that was virtually unusable to someone trying to get around town. “That map has correct neighborhood boundaries,” a Fire Bureau employee told me of the same inaccurate map on which the neighborhood boundaries had been drawn. A successful user of knowledge recognizes that the errors matter; these defenders agreed the map was full of errors and then argued that it was an acceptable map anyway. Some crucial comprehension is missing.
When we apply logical elements a proponent has overlooked, we may find that the proponent’s hypothesis lacks some of the truth or validity claimed for it.
From understanding to reaching out
The Danish student of French language became a professor at a major university. I suspect he still doesn’t find either maps or physics very useful. He works miracles with language. He demonstrates the great complexity and variety in human mental skills.
Another example of this complexity and variety was a student in my elementary physics course. He was a “D student.” He found physics very difficult, but he was one of the best at understanding the objectives of teaching and at designing and evaluating small, problem-solving-oriented units of instruction. I suggested he look over my shelves for texts that might help him understand physics. He selected The Feynman Lectures on Physics as the most helpful. That surprised me; most physicists consider it to be in a class of difficulty all its own. Nevertheless, he really did understand much of Feynman’s peculiar style. He actually surpassed many professional physicists in some of the thinking that made Feynman so outstanding. That student flunked out of college the following semester. We did not reach out to him. Our traditional academic setting doesn’t deal well with the “substantial set of largely independent attributes” contributing to learning and intelligence, to knowing and using knowledge. It remains rooted in that “old notion of a single scalar quantity recording overall mental might.”
The organizing theme of the “distinctive family of errors” is the failure to recognize multi-element relationships in our interaction with the world. These relationships, beyond the edge of easy human comprehension, are as important as they are hidden.
Feynman’s Tensors
“Physicists always have a habit of taking the simplest examples of any phenomenon and calling it ‘physics’ ... Since most of you are not going to become physicists but are going into the real world ... sooner or later you will need to use tensors.” (The Feynman Lectures on Physics, vol. 2)
In one brief statement Richard Feynman 1) called attention to the pervasiveness in daily life of multicomponent measures; 2) exposed the invisibility, to virtually everyone, of these measures; and 3) demonstrated the “magical” Feynman insight with his perception of these abstract entities.

Tensors are a family of multicomponent measures. The sequence scalar, vector, tensor describes measures of increasing complexity. A scalar is a single number, a vector is a list of numbers, and a tensor is a matrix of numbers (and the matrix can have many dimensions). Scalars are what the non-Feynmans of the world see when looking at any of these measures. Physicists like simplicity, so physics has a lot of scalars. But Mother Nature likes complexity, so physicists can’t avoid it. Vectors exist, but just at the edge of human comprehension, so beginning physics students persistently see scalars when their professors show them vectors. A scalar is a special case of a vector, and a vector is a special case of a tensor.

Speed, cost, and IQ are scalars. The speed is 30 mph above the limit, the cost to the red-faced driver is $500 for the traffic ticket, and the driver’s IQ is 70. Speed is not always the critical concern. The fastest airliner is rarely the best choice. Direction matters. A plane’s velocity is a “three-vector,” which might be given as v_north = 20 m/sec, v_east = 20 m/sec, v_up = 5 m/sec. That plane is travelling northeast and climbing. To specify the shade of red in the above speeder’s face we need three parameters: a common choice is the Munsell system’s hue, value, and chroma. J.P. Guilford finds 17 components for human intelligence. Velocity, color, and intelligence are vectors.

For most physics measures, it is dependence on direction that forces us to use tensors (or vectors) rather than scalars. Direction might be irrelevant, however. Human color vision requires a three-component vector because we have three kinds of cones on our retinas. The “three-vector” describing color could alternatively be chosen as the responses of each of our three types of cones. We understand colorblindness, then, to be a failure of one or more of these “dimensions,” a failure of one or more cone types. To remind us of the importance of genetics and evolution, we should note that birds’ color vision is believed to have four, five, or six components. Compared to birds, we humans are all colorblind.

Spectroscopic color has an infinity of dimensions because the spectroscope spreads light into the spectrum of all wavelengths. The “color vector” for colored light is a list of intensities for each of the infinity of wavelengths: that is, it is a mathematical function of intensity versus wavelength. Colored surfaces are different from colored light. We might expect the reflectance of a surface to be a mathematical function that tells the percentage of light reflected at each wavelength, another infinity vector. But a surface can fluoresce: light arriving at a single wavelength might be re-emitted over a range of wavelengths. We need, for each incident wavelength, a function over the outgoing wavelengths telling us how that single wavelength is reflected. This measure is a “rank-two, order-infinity” tensor because it has an infinite number of values in each of two dimensions. (A three-vector, like force or velocity, is a “rank-one, order-three” tensor; a scalar, a single directionless number, is a rank-zero tensor.) Tensors might be a little intimidating, but they are a source of exciting new discoveries.
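A small numerical sketch (invented numbers, a coarse grid of only four wavelengths) shows why fluorescence forces the jump from vector to tensor: the reflectance becomes a matrix, and a fluorescent surface is precisely one whose matrix has off-diagonal entries.

# Scalar, vector, and rank-two tensor at a made-up resolution of 4 wavelengths.
import numpy as np

speed = 28.0                                  # a scalar: one number
velocity = np.array([20.0, 20.0, 5.0])        # a three-vector: north, east, up

incident = np.array([1.0, 2.0, 2.0, 1.0])     # light hitting a surface:
                                              # intensity at each wavelength

# Reflectance as a rank-two tensor: entry [i, j] says how much of the light
# arriving at wavelength j comes back at wavelength i.
plain_paint = np.diag([0.8, 0.6, 0.4, 0.2])   # no fluorescence: diagonal only
fluorescent = plain_paint.copy()
fluorescent[2, 0] = 0.3                       # short-wavelength light re-emitted
                                              # at a longer wavelength

print(plain_paint @ incident)                 # [0.8 1.2 0.8 0.2]
print(fluorescent @ incident)                 # [0.8 1.2 1.1 0.2]
# A single "percent reflected" scalar could never describe the second surface.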
Several of the Nobel prizes in economics (to Wassily Leontief and Kenneth Arrow, for example) have been awarded for recognition of the tensor-like nature of economic systems. —P.P.