February 04, 2008

Bob Pastor Responds to My Carter-Baker Oped Criticism

Following up on this post, I received the following email from Bob Pastor, which he asked me to post on my blog:
There are other possible explanations for the failure to find people impersonating others. I recall being denied a chance to vote several years back when I lived in Georgia because my name had been checked as having voted, even though I had not voted. That could have been a clerical mistake, or it could have meant that someone impersonated me. Who knows? I complained, but I doubt my complaint was registered because we did not have a procedure, as many countries have, for registering complaints at the precinct and allowing analysts to examine them to determine the magnitude of particular problems.

With regard to our Voter ID study, Rick said there were "some serious methodological questions," but he doesn't say what they are. We commissioned a survey of three states, and found a very low number -- 1.2 percent -- of registered voters who did not have photo IDs. I was very surprised by that, and posed dozens of questions to the people who did the polls. I then assembled a group of five senior methodologists that included Rob Santos, the senior methodologist at the Urban Institute, and four experts from American University, who had not been involved in the study and are known for their skills and independence. Only after they had done extensive reviews of the data, and declared it sound methodologically, did we decide to issue the report. This took us about three months. Based on their very thorough evaluations, which are summarized in the report, we issued the report.

It is true that the paper has not yet been refereed by a journal, but we will submit it soon, and are confident -- based on what we have already done -- that its methodology will hold up. The Carter-Baker report said: "There is no evidence of extensive fraud in U.S. elections or of multiple voting, but both occur, and it could affect the outcome of a close election." The question remains how to generate confidence.
In our recent survey, we found that the perception of fraud is quite high -- nearly one-fifth of registered voters in the three states saw or heard of it in their precincts, and nearly two-thirds heard of it in other places. It seems unlikely that the reality is as bad as the perception, but to instill confidence in the system, we need to find ways to alter the perception.

Our survey found that more than two-thirds of registered voters in all three states thought that the electoral system would be more trusted if voters were required to show photo IDs. The Carter-Baker report, the CDEM study, and the op-ed all underscore the importance of applying such ID requirements in a manner that would increase -- not reduce -- the number of voters. In the op-ed, Carter and Baker state that the current laws do not do that. Our study suggests that the problem is not IDs per se, but the fact that certain groups of people -- minorities, the elderly -- are not registered to vote. With an affirmative effort by the states, as recommended by the Carter-Baker Commission, and repeated in the op-ed, we could use the ID requirement to expand the registered base and provide free photo IDs at the same time. Carter and Baker argue for changes in the laws that would have states play an "affirmative role" to go out to areas where certain groups of people -- the elderly, minorities -- may be less likely to be registered or have photo IDs and do both.

That is the real importance of the Carter-Baker oped. It criticizes Republicans for legislating IDs without making it easier to get those IDs, and it criticizes Democrats for abdicating from the debate instead of trying to craft the laws to expand the voter base while acknowledging concerns about ballot integrity. It is only when that gap is bridged that we will have a better electoral system.

Finally, Rick criticizes the op-ed for not doing an extensive review of the literature.
I can't think of a lot of 700-word op-eds that do that, but if he wants a review, he should read our report. We summarize and analyze numerous studies, including one that he apparently cites that suggests that the percentage of registered voters in Indiana without photo IDs is very high. We raise some specific questions about the paper's methodology, which is not available, though we requested it. That paper also was not refereed, though I don't think Rick mentions that.

Robert A. Pastor

I have a few responses to Bob.

1. I have read the report. Perhaps Bob doesn't know how hyperlinks work, but in my recent post I linked to this earlier post noting that now all three of the main studies on voter id and turnout -- this one, Milyo, and Barreto -- have been used in the Crawford case on one side or the other, but none has gone through peer review. And I wondered specifically, regarding the CDEM study, about the use of telephone surveys and the sampling problems that might come with trying to reach people who don't have ID.

2. The report itself does have a fair discussion of the Barreto paper. But that's not the point. The Carter-Baker oped (which I now believe more than ever must have been drafted by Bob) does not mention that there are other conflicting studies out there, and presents the 1.2% figure as gospel. Though Bob presents the "real importance" of the oped as trying to move beyond partisanship, I see a more partisan project at work here: to convince the Supreme Court justices in the Crawford case to accept that 1.2% figure as gospel and uphold the Indiana law. But maybe, given my own amicus brief in the case, this view of the Carter-Baker oped is to be expected.
Bob Pastor replies:

2. While the op-ed does not have a literature review, it does say this about the 1.2 percent figure: "While the sample was small, and the margin of error was therefore high [4.5%], we were pleased that so few registered voters lacked photo IDs. That was pretty good news. The bad news, however, was this: While the numbers of registered voters without valid photo IDs were few, the groups least likely to have them were women, African-Americans, and Democrats. Surveys in other states, of course, may well present a different result."

3. I think this is further proof that there is no partisan project to their recommendation, and it is hard to see how Rick could describe the 1.2 percent figure as "gospel" in light of the qualifications and caveats above. Unlike Rick, neither Carter, Baker, nor I chose to take a position in the Indiana case.

Bob Bauer comments:
Still more, this one from one of the study's authors:
Our mixed-mode survey design for the Voter ID study explicitly included registered voters with telephone access as well as those without telephone access (the latter we captured via a mail-in component of the survey). And we clearly stated that the survey excludes non-registered voters (i.e., adult citizens who are eligible to be registered but are not; these folks might reasonably be expected to have a higher rate of no ID).

With regard to the telephone component, you mentioned in your blog a concern about self-selection from interviewing only people who happen to be at home with telephone access. But our survey data collection protocol employed a rigorous minimum 8-callback rule that spanned all days of the week, thus avoiding altogether an 'easy' quota sample (just getting the 'easiest-to-reach' respondents and stopping our fieldwork when the targets were achieved). The 8-callback rule represents the conventional social science industry norm for telephone sample surveys. Moreover, we reported our response rates and included explicit weighting adjustments to compensate for potential nonresponse and noncoverage biases.

Finally, we substantially increased the size of our margins of error to appropriately reflect the disproportionate stratified sample design (i.e., the oversampling of registered voters with telephone access). The net result was that we report margins of error of +/- 4.5% rather than +/- 2.2% (the latter represents the margin of error for an equivalent-sized simple random sample). From a scientific/statistical design perspective, there are few if any ways to increase rigor (within available budget constraints) beyond what was done. In our analyses we took great pains to avoid overstating the survey results, specifically taking into account the margins of error of survey estimates.
We also took great pains to explicitly discuss the limitations of our research (see Section F), and we wish others who publish their research findings would aspire to such transparency. We are not saying that our study is without shortcomings, as all studies have limitations. We have been open about our methodology and limitations so that others, including you and your readers, can judge for yourselves the rigor and applicability of the survey findings.

Robert Santos
Senior Institute Methodologist
The Urban Institute
Washington, DC

My response is this: This study may well be a worthy one. But as the editor of a peer reviewed journal, I can tell you that many papers that look fine on the surface can have important methodological problems that only come to light through that process. What bothers me is not the paper at all, but the idea that this paper (and the others I've mentioned) is being used to sway the Supreme Court on the pages of the NY Times before it has undergone the rigors of peer review.

Posted by Rick Hasen at February 4, 2008 10:23 PM
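As an aside on the numbers: Santos's point about inflating the margin of error for the stratified design can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is illustrative only -- the sample size n = 2,000 is my assumption, chosen because it roughly reproduces the +/- 2.2% simple-random-sample figure he cites; the actual sample sizes are in the report. The ratio of the two margins of error, squared, gives the implied "design effect" of the oversampling.

```python
import math

def moe_srs(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n,
    evaluated at the most conservative proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# Assumed sample size for illustration (not a figure from the report):
n = 2000
print(f"SRS margin of error: +/- {moe_srs(n):.1%}")  # about +/- 2.2%

# The reported +/- 4.5% then implies a design effect of roughly:
deff = (0.045 / moe_srs(n)) ** 2
print(f"Implied design effect: {deff:.1f}")          # about 4.2
```

In other words, reporting +/- 4.5% instead of +/- 2.2% treats the stratified sample as if it carried only about a quarter of the information of a simple random sample of the same size, which is the conservative adjustment Santos describes.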