Eversole Responds on Discrepancy in Military Voter Turnout Data, Questions FVAP Data

Following up on this post, I received the following response via email from Eric Eversole, who is the Executive Director of the Military Voter Protection Project and Visiting Assistant Clinical Professor at Chapman University.  It raises questions about the accuracy of the FVAP data.  (Of course, I would be very interested in posting any response from FVAP.)

The data used by the Military Voter Protection Project (MVP Project) in its June 2011 report is the same data that is used and collected by the Election Assistance Commission (EAC) in its biennial military voting report.  As required by federal law, states have an obligation to collect and report data on the number of military voters (including active duty service members and their spouses) who request and return absentee ballots in federal elections.  The states count real ballots from real absentee military voters and report that data to the EAC.

Not surprisingly, the accuracy of the state data depends on the state and local officials who collect and report it.  Prior to the 2008 election, the EAC data was often incomplete, and it was difficult to draw many conclusions from it.  Many state and local jurisdictions refused to report military voting data altogether, or they lumped military voting data in with overseas civilian voting data in their UOCAVA totals.

While the overall quality of pre-2008 EAC data was not great, there were clear exceptions where state and local election officials provided good, accurate data.  For example, the military voting data from Florida, Nevada and Colorado in 2006 is remarkably complete.

Since 2006, the quality of the EAC data has improved considerably.  This improvement was due, in large part, to the EAC’s outreach efforts before the 2008 presidential election to work with state and local election officials.  Congress also kicked in $10 million in grant money for the states to improve data collection.  Finally, the Voting Section of DOJ did its part by bringing several UOCAVA lawsuits to ensure compliance with the reporting requirements (see http://www.justice.gov/crt/about/vot/litigation/recent_uocava.php#vt_uocava and http://www.justice.gov/crt/about/vot/misc/al_uocava_comp08.php).

The end product in 2008—and especially in 2010—is a more complete picture of absentee military voting in the U.S.  Last week, the EAC released its 2010 post-election report showing that approximately 4 percent of military voters (100,557 of 2.5 million) were able to successfully vote by absentee ballot.  (see: http://www.eac.gov/assets/1/Documents/EAC%202010%20UOCAVA%20Report_FINAL.pdf).

A report issued by my organization, MVP Project, in conjunction with Chapman University (see: http://www.mvpproject.org/MVPProject_study_download.pdf) reaches a similar conclusion.  The MVP Report found that approximately 4.6 percent of military voters were able to cast an absentee ballot that counted.  However, as emphasized in the report, the MVP Report was an early snapshot of the EAC data and was limited to data from 24 states.

In deciding what state data to include in the MVP Report, we specifically excluded data from several states where there were questions about the quality of the data.  For example, when Hawaii reported a surprisingly low number of requested absentee ballots, we followed up with state election officials.  They explained that they only included absentee military ballots that were sent 45 days before the election in their UOCAVA totals.  Because such a number would underreport actual military voter participation, Hawaii’s data was not included in the final report.  This is why MVP’s percentage is slightly higher than the EAC’s.

None of this is intended to say that the EAC state data is perfect—something the EAC acknowledges in its report.  The EAC data does not account for service members voting in person.  Nor are the states entirely consistent in how they collect and report data, which makes it difficult in some cases to draw comparisons across states.  However, the EAC state data is an important tool for determining whether military members and their spouses are able to vote by absentee ballot and whether there have been any improvements in absentee voting rates.

Focusing on the absentee voting data, the MVP report found little or no evidence indicating an improvement based on a comparison of the 2006 and 2010 EAC data.  As noted above, while much of the 2006 data was incomplete, there were several good data sets from state and local jurisdictions (e.g., the data from Florida) that showed little or no improvement in 2010 as compared to 2006.

More importantly, the 2010 data (which is drastically improved) does not support the absentee military voting rates claimed by FVAP.  According to FVAP, 29% of 1.4 million active duty military members voted in 2010, and 67% of those voters (approximately 272,000 military members) voted by absentee ballot.  FVAP also claims that 34% of 1.1 million military spouses voted in 2010 and that 40% of those voters (approximately 150,000) voted by absentee ballot.  Taken together, these totals suggest that roughly 425,000 military members and their spouses voted by absentee ballot in 2010.

Of course, if this were true, it would mean that the EAC state data, which reported about 108,000 returned absentee ballots from military voters in 2010, misses the mark by more than 300,000 absentee military ballots.  That is simply unbelievable.
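For readers who want to check the arithmetic, the implied totals can be reproduced in a few lines using only the figures quoted above (a quick sketch; the 425,000 figure in the text appears to round the component numbers):

```python
# Sanity check of FVAP's claimed 2010 absentee totals against the EAC's
# reported count, using only the percentages and populations quoted above.

active_duty = 1_400_000   # active duty members (FVAP figure)
spouses     = 1_100_000   # military spouses (FVAP figure)

# FVAP: 29% of active duty members voted; 67% of those voted absentee.
members_absentee = round(active_duty * 0.29 * 0.67)    # ~272,000

# FVAP: 34% of spouses voted; 40% of those voted absentee.
spouses_absentee = round(spouses * 0.34 * 0.40)        # ~150,000

# ~422,000 combined (rounded to ~425,000 in the text above)
fvap_implied_total = members_absentee + spouses_absentee

# EAC 2010 report: roughly 108,000 returned military absentee ballots.
eac_reported = 108_000
gap = fvap_implied_total - eac_reported                # well over 300,000
```

The exact gap depends on rounding, but under any reasonable reading of FVAP's percentages it exceeds 300,000 ballots, which is the discrepancy the email describes.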

Nor can the disparity between FVAP’s and the EAC’s data be explained by the reasons FVAP provides.  FVAP argues that state officials make two errors that affect the totals reported to the EAC: (1) some local jurisdictions do not respond to the EAC’s survey; and (2) local election officials only count military voters as UOCAVA voters when they submit a Federal Post Card Application (FPCA).

With respect to the first point, it is true that some jurisdictions fail to report data.  However, as noted above, the instances of non-reporting have decreased dramatically since 2008.  The amount of non-reporting in 2010 appears to be relatively small and, at best, would only move the needle by a few percentage points.  It certainly wouldn’t account for 300,000 missing ballots.

The second point is far more disconcerting.  FVAP seemingly suggests that states are failing to recognize nearly 75% of absentee military voters as UOCAVA voters.  Taken to its logical conclusion, FVAP is suggesting that states are not providing a vast majority of military voters with the protections guaranteed by UOCAVA.  If that were true, it should be a national outrage.

But it is not true.  In my conversations with state election officials, a couple of points are repeatedly made: (1) a vast, vast majority of absentee ballot requests from military voters are FPCAs due, in part, to the fact that most states use the FPCA as their official absentee ballot request form (in fact, most state websites link to the FPCA or to FVAP); and (2) even in cases where a state form is used, most local election officials can identify UOCAVA voters (because of APO/FPO or military base addresses) and classify them as such.

In fact, some states have a block on their state absentee ballot forms for voters to indicate whether they are a UOCAVA voter.  In order to comply with the MOVE Act, many states adopted this block to allow UOCAVA voters to identify their UOCAVA status to qualify for electronic delivery options.

For example, Nevada’s state form has a block that allows the voter to indicate whether he or she is a domestic or overseas military voter (http://nvsos.gov/Modules/ShowDocument.aspx?documentid=1711).  Did this make a difference in Nevada’s EAC data for 2010?  No.  Nevada still reported that less than 6% of military voters requested an absentee ballot in 2010.

Nor is FVAP’s position sustainable based on its survey responses.  The survey item is vague and confusing: it actually asks two questions, (1) whether the voter submitted an FPCA and (2) whether the FPCA is required under state law.  It assumes (wrongly, in my view) that military voters know the difference between a state form and a federal form, especially if the form was downloaded from a state website.  Most voters would not know the difference in that circumstance, or in any circumstance.

At the end of the day, there is simply no tangible evidence from the states supporting FVAP’s conclusions, and that gap raises serious questions about the accuracy of FVAP’s survey.



