“What is this new theory?” the long-retired New York University cognitive psychologist Lloyd Kaufman asked me.
We were sitting behind the wooden desk of his cozy home office. He had a stack of all his papers on the moon illusion, freshly printed, waiting for me on the adjacent futon. But I couldn’t think of a better way to start our discussion than to have him respond to the latest thesis claiming to explain what has gone, for thousands of years, unexplained: Why does the moon look bigger when it’s near the horizon?
He scooted closer to his iMac, tilted his head and began to read the MIT Technology Review article I had pulled up. I thought I’d have a few moments to appreciate, as he read, the view of New York City outside the 28th floor window of his Floral Park apartment, but within a half-minute he told me, “Well, it’s clearly wrong.”
It wasn’t even my theory, yet I felt astonished. It described two researchers—Joseph Antonides (an undergraduate) and Toshiro Kubota (a computer scientist), of Susquehanna University in Pennsylvania—who had constructed a perceptual model in which the sky was contiguous with the horizon, so that the moon was placed, as it were, in front of the sky, occluding it. Since our depth perception also places the moon farther away from us than the horizon, we are faced with a perceptual dilemma. The scientists reasoned that the horizon moon’s enlargement is a product of the brain trying to solve this dilemma.
It’s wrong, he told me, because “you can get the illusion if you have only one eye. Simple!”
The moon illusion is a sort of Rip Van Winkle figure in the history of science. Unlike other astronomical puzzles, the moon illusion, wrote Rutgers University philosopher Frances Egan, “has persisted through massive changes both in our overall physical theory, and in our very conception of the scientific enterprise.”
The earliest known mention of the moon illusion was impressed almost 3,000 years ago, in cuneiform script, upon a clay tablet housed in the royal library of Nineveh.
Later, in the second century A.D., Ptolemy argued that it was the result of the magnifying properties of the atmosphere’s moisture and haze. “It is just like the apparent enlargement of objects in water, which increases with the depth of immersion,” he wrote. On the strength of something like divine authority, this physical or “refraction” account went unchallenged for more or less 1,000 years—a shame, since Ptolemy also offered an alternative physiological account that went largely ignored until Newton’s time.
Today, this physiological account is known as the “angle-of-regard” hypothesis, for the angle that our eyes (or head) make relative to the horizon. The more your eyes are angled upward, the thinking goes, the smaller something looks, due to the physiology of our visual system. Angle-of-regard sat dormant for hundreds of years after Ptolemy, until the Irish philosopher George Berkeley revived it, in 1709, as part of a debate with the then-new geometrical optics of philosophers like René Descartes and Nicolas Malebranche.
They took the moon illusion to support their contention that vision is inherently three-dimensional, and that we can compute size and distance using vision alone. In his “Essay Towards a New Theory of Vision,” Berkeley opposed this view, pointing out that the moon illusion could be explained away using the angle-of-regard hypothesis; and claiming that there is nothing inherently three-dimensional about what we see—that instead we learn about how far and how big things are by moving around in the world, hands-on, as it were. Descartes, though, didn’t accept the angle-of-regard dismissal of the moon illusion. Instead he held to the “apparent distance” hypothesis, according to which the horizon moon seemed larger because we judged it to be farther away.
“It’s the challenge of solving a problem the likes of Galileo and Newton couldn’t handle.”