Don’t Have Faith in Poll Workers, Or Anyone Else for That Matter, To Distinguish Odd from Even Numbers

Back in 2012, I had a piece in Slate on the Hunter case (which I talked about further in The Voting Wars), in which voters were disenfranchised because of poll worker error.  Among the errors: a poll worker who could not tell that the number 798 was even and sent the voter to the odd-numbered precinct.

The piece inspired UW Madison psychology professor Gary Lupyan to conduct experiments testing people’s ability to distinguish odd from even numbers, in the service of a larger point about the sorts of things that are easy versus difficult for biological computational systems to compute, as distinct from digital computers.
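To make the contrast concrete, here is a purely illustrative sketch (my own, not from Lupyan’s paper) of how trivial this categorization is for a digital computer: parity is just the remainder after dividing by two.

```python
def parity(n: int) -> str:
    """Classify an integer as 'even' or 'odd' using the remainder mod 2."""
    return "even" if n % 2 == 0 else "odd"

# The classification that tripped up the poll worker is a one-step,
# context-free computation for a machine:
print(parity(798))  # even
print(parity(400))  # even -- to a computer, no number is "more even" than another
```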

The results are pretty depressing, including the finding that a “sizeable minority” of people believe the number 400 is more even [corrected!] than the number 798!  The article is Lupyan, G. (2013). The difficulties of executing simple algorithms: Why brains make mistakes computers don’t. Cognition, 129(3), 615–636. doi:10.1016/j.cognition.2013.08.015.  Here is the abstract:

It is shown that educated adults routinely make errors in placing stimuli into familiar, well defined categories such as TRIANGLE and ODD NUMBER. Scalene triangles are often rejected as instances of triangles and 798 is categorized by some as an odd number. These patterns are observed both in timed and untimed tasks, hold for people who can fully express the necessary and sufficient conditions for category membership, and for individuals with varying levels of education. A sizeable minority of people believe that 400 is more even than 798 and that an equilateral triangle is the most ‘‘trianglest’’ of triangles. Such beliefs predict how people instantiate other categories with necessary and sufficient conditions, e.g., GRANDMOTHER. I argue that the distributed and graded nature of mental representations means that human algorithms, unlike conventional computer algorithms, only approximate rule-based classification and never fully abstract from the specifics of the input. This input-sensitivity is critical to obtaining the kind of cognitive flexibility at which humans excel, but comes at the cost of generally poor abilities to perform context-free computations. If human algorithms cannot be trusted to produce unfuzzy representations of odd numbers, triangles, and grandmothers, the idea that they can be trusted to do the heavy lifting of moment-to-moment cognition that is inherent in the metaphor of mind as digital computer still common in cognitive science, needs to be seriously reconsidered.
