Yesterday, Nate Silver posted a column entitled “Measuring the Effects of Voter Identification Laws” on his popular (and informative) FiveThirtyEight page of the New York Times. Rick linked to the column here, calling it an “absolute must-read.”
If you do read it, read with caution. There should really be a question mark after that headline. Two reasons why, after the jump.
One: Nate is examining the impact of voter ID on turnout. There’s nothing wrong with that … but in the press of picking winning candidates and losing candidates, it’s easy to forget that this is not the only impact of relevance. In 2008, there were about 80 million eligible American citizens who did not cast a ballot; in 2010, there were about 120 million such people. Comparative turnout measures don’t include any of those people, which is an awful lot of potential effect to ignore. It’s fine to debate turnout effects, but “the likely effect of ID laws” is not the same thing as “the likely change in turnout.”
Two: Nate is using past turnout studies to estimate the effects of voter ID laws. I’m a big fan of Nate’s usual analysis. But Robert Erikson and Lorraine Minnite have written what’s still the best explanation around for why it’s statistically troublesome to assert that the existing studies (including the ones that Nate used as a baseline) can accurately assess the impact of ID laws on turnout. And I didn’t see anything in Nate’s analysis that addresses those concerns.
Even beyond the statistical objections explained in the Erikson/Minnite article, there are two other problems with using the studies Nate relies on to draw solid conclusions about the actual effects of ID laws on the votes cast in an election. (The studies themselves generally acknowledge these issues.) First, most of these studies draw their conclusions from the Census’s Current Population Survey, which, as Alvarez et al. explain, is “about as close to a canonical dataset as political scientists have.” But the Current Population Survey asks people whether they voted, which introduces a self-reporting effect that (unlike most such effects) can be expected to change significantly over time. Specifically, a voter who casts a provisional ballot that is never counted because of a change in the underlying law will still report having voted; the provisional ballot acts as a placebo, and it is a placebo that will affect the results. The 2 million people who cast provisional ballots in 2008 are likely counted as voters in turnout studies, even if their ballots never counted. Which means that any CPS turnout analysis is likely to systematically undervalue the “effect” on real votes of any law that increases the number of provisional ballots that aren’t counted.
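The mechanism is easy to see with a toy calculation. Every number below is invented purely for illustration (nothing here comes from the CPS or any actual study); the point is only that self-reported turnout can hold perfectly steady while the counted vote drops:

```python
# Toy illustration: self-reported turnout vs. counted votes.
# All figures are hypothetical, chosen only to show the mechanism.

def measured_vs_counted(eligible, counted_ballots, uncounted_provisionals):
    """Return (self-reported turnout rate, counted-vote rate).

    A survey respondent who cast a provisional ballot typically reports
    having voted, whether or not the ballot was ultimately counted --
    so self-reported turnout lumps uncounted provisionals in with real votes.
    """
    self_reported = (counted_ballots + uncounted_provisionals) / eligible
    counted = counted_ballots / eligible
    return self_reported, counted

# Before a strict ID law: of 1,000 eligible voters, 600 cast counted
# ballots and 5 cast provisional ballots that are rejected.
before = measured_vs_counted(1000, 600, 5)

# After the law: 20 of those voters lack the required ID and are shunted
# to provisional ballots that never count -- but they still tell the
# survey they voted.
after = measured_vs_counted(1000, 580, 25)

print(before)  # (0.605, 0.6)
print(after)   # (0.605, 0.58)
```

In this hypothetical, a survey-based turnout comparison would report no change at all (0.605 both times), while the share of eligible voters casting a vote that actually counts fell by two points.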
Second, the studies that Nate used all examine “identification laws” generally. Alvarez et al. had the most comprehensive scale, examining the differences in laws ranging from “you must state your name” and “you must sign in at the polls” to “you have to show particular photo ID documents or your vote doesn’t count.” The new 2011-12 laws are mostly (but not entirely) in that last category. But for states with laws of that type, there are only two data points for presidential turnout: Georgia and Indiana in 2008. (Both states were also new presidential battlegrounds, which should be expected to skew measurements of turnout change.) In order to get more data (which is necessary to tell signal from noise), most of the existing studies rely on the turnout effects in states without strict photo ID laws. It’s possible that the incremental effect of asking voters to sign their name in a pollbook tells you something about the incremental effect of preventing voters without an ID from casting a valid ballot. But that’s an assumption I have yet to see justified.
To be clear, this isn’t a critique of the studies themselves — they’re examining the available data, and pointing out the limits of their conclusions. The fact that the data aren’t up to the task of answering the right questions is not the researchers’ fault. But using those studies to show what those studies don’t show is more “truthiness” than truth.