All posts by Ned Foley

Total Vote Runoff: A Majority-Maximizing Form of Ranked Choice Voting

I’ve posted another paper on SSRN. Here’s its abstract:

Total Vote Runoff (TVR) is an electoral system designed to be identical to Instant Runoff Voting (IRV), which is the most commonly understood and implemented form of Ranked Choice Voting in the United States, except for one key detail. Like IRV, TVR sequentially eliminates the weakest candidate on the ranked-choice ballot when no candidate is ranked first on a majority of ballots. Unlike IRV, however, TVR identifies the weakest candidate to be eliminated based on the total votes each candidate receives on all the ballots, rather than just the number of first-place votes (as IRV does). A candidate’s total votes from each ballot are defined as the number of other candidates the candidate is ranked higher than on the ballot—as being ranked higher than another candidate is equivalent to securing a vote against that candidate, given that ranked-choice ballots can be conceived as mathematically equivalent to a round-robin election among all the candidates on the ballot. TVR has the advantage, compared to IRV, of always electing a candidate whom a majority of voters prefer to each other candidate on the ballot and who thus would be the undefeated winner of the round-robin election. More generally, TVR improves upon the instant runoff nature of the IRV process by using all the information from each ranked-choice ballot, rather than just first-choice preferences, to determine which candidate most deserves to be eliminated in the instant runoff procedure. A comparison of TVR and IRV in the context of the most recent midterm elections in the United States shows that TVR potentially could perform better than IRV in redressing the increased polarization affecting American politics, resulting in elections that better represent the preferences that a majority of voters record with their ballots.
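
To make the abstract’s definition concrete, a candidate’s total votes can be computed directly from the ranked ballots. The sketch below is my own illustration of the rule described above, not code from the paper; the function name is hypothetical, and it assumes every ballot ranks all candidates.

```python
def total_votes(ballots, candidates):
    """Total votes under TVR: on each ballot, a candidate earns one
    vote for every other candidate ranked below it on that ballot."""
    totals = {c: 0 for c in candidates}
    for ranking in ballots:  # each ranking lists candidates, best first
        for position, candidate in enumerate(ranking):
            # number of candidates this one is ranked higher than
            totals[candidate] += len(ranking) - position - 1
    return totals

ballots = [["A", "B", "C"], ["B", "A", "C"], ["C", "A", "B"]]
print(total_votes(ballots, ["A", "B", "C"]))  # → {'A': 4, 'B': 3, 'C': 2}
```

Here A earns 4 total votes (two from the first ballot and one from each of the others), which matches counting A’s head-to-head wins in the round-robin reading of the same ballots.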

This paper will be published in the University of New Hampshire Law Review (an earlier draft was presented at its symposium). As with the Self-Districting paper posted earlier today, I very much welcome comments on this paper while it continues to go through the editing process. Moreover, this paper, like my other work on alternative electoral systems, is part of a larger project on the relationship of democratic and constitutional theory to electoral system design, including in particular the egalitarian foundation for majority rule as well as the alternative means of specifying majority rule within a constitutional republic committed to the basic principle of popular sovereignty. I welcome feedback and input on the larger project as well as on this particular paper.

SELF-DISTRICTING: The Ultimate Antidote to Gerrymandering

I have posted this new paper on SSRN. Here’s the abstract:

It is possible to end gerrymandering by removing the power to draw district lines from government officials and giving this power to the voters themselves. In a self-districting system, as described herein, each voter chooses which constituency the voter wants to join for purposes of legislative representation. These constituencies can be geographically based as in traditional districting systems, but they also can be based on other attributes—whatever associational communities the voters themselves wish to form. If enough voters join a constituency to form more than one district based on the constitutional principle of equally populated districts, then this constituency can be subdivided into districts based on strict computer-implemented geographical criteria without any possibility for gerrymandering. This self-districting system not only complies with the U.S. Constitution; it is also consistent with the Act of Congress that requires single-member districts. Thus, there is no federal law obstacle to prevent states from adopting a self-districting system for both their congressional delegation and their own legislative chambers. In fact, self-districting is a way to avoid the problem of minority vote dilution, a task likely to become more difficult given anticipated changes in the U.S. Supreme Court’s jurisprudence on the topic. Because of the increasingly pernicious nature of gerrymandered districts, which cause voters—and most especially minority voters—serious representational harms, the alternative of self-districting, where voters are empowered to make these representational decisions for themselves, deserves serious consideration.

The paper is being published by the Kentucky Law Journal (an earlier draft was presented at its symposium). Comments are very much welcome as it undergoes the editing process.

Election Reflections: A roundtable of election law experts


This Friday, December 9 at noon (ET), Election Law at Ohio State is hosting a roundtable of election law experts, as we have done previously. This time the panel will reflect on the 2022 midterms and consider key issues confronting our election systems in the months and years ahead. Panelists include Rebecca Green, Lisa Manheim, Derek Muller, Nate Persily, and Charles Stewart. My colleague Steve Huefner will moderate, and I look forward to participating.

For more information about the webinar and to register, click here.

“Total Vote Runoff” & Baldwin’s method

Since publication of the Washington Post column describing the “Total Vote Runoff” variation on Ranked Choice Voting, questions have arisen about the relationship of this procedure to what is known in the electoral systems literature as Baldwin’s method. The two are very similar, and even mathematically equivalent, but they are not operationally identical. The operational distinctiveness of the “Total Vote Runoff” procedure is significant for purposes of both law and policy.

An Additional Detail about “Total Vote Runoff”

As noted when I blogged about the Washington Post column on the “Total Vote Runoff” variation of Ranked Choice Voting (co-authored with Eric Maskin), a mathematical feature of this method is that it will elect a Condorcet Winner: the candidate who is ranked higher on more ballots than each other candidate when compared head-to-head. The Washington Post column illustrated this point with the example of the recent Alaska special election, where Nick Begich was the Condorcet Winner based on all the ranked ballots cast and would have won under the Total Vote Runoff had that procedure been used, but was eliminated through the regular “instant runoff” method that Alaska employs.

Although it does not affect the outcome of the Alaska special election under the TVR procedure, one detail must be included to ensure that the mathematical property of electing a Condorcet Winner holds in any future election. The detail concerns ballots on which a voter leaves more than one candidate unranked. As noted in the Washington Post column, when a voter leaves only one candidate unranked, doing so is equivalent to ranking that candidate last, and any candidate ranked last on a ballot receives zero votes from that ballot under the Total Vote Runoff procedure (because the voter does not prefer that candidate to any other candidate). But when a voter leaves more than one candidate unranked, the unranked candidates must be treated as tied for all the positions on the ranked-choice ballot that are left unfilled.
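
One natural way to implement “tied for the unfilled positions” is to give each unranked candidate the average of the votes attached to those positions. The sketch below is my own illustration under that assumption, not the column’s implementation, and the function name is hypothetical.

```python
def ballot_votes(ranking, candidates):
    """Votes each candidate earns from one ballot under TVR, treating
    all unranked candidates as tied for the unfilled positions."""
    n = len(candidates)
    votes = {}
    for position, candidate in enumerate(ranking):
        votes[candidate] = n - position - 1  # beats everyone ranked below
    unranked = [c for c in candidates if c not in votes]
    if unranked:
        # The unfilled positions carry votes n-1-len(ranking) down to 0;
        # tied candidates split that pool evenly.
        pool = sum(n - p - 1 for p in range(len(ranking), n))
        share = pool / len(unranked)
        for c in unranked:
            votes[c] = share
    return votes

# Four candidates; the voter ranks only A then B:
print(ballot_votes(["A", "B"], ["A", "B", "C", "D"]))
# → {'A': 3, 'B': 2, 'C': 0.5, 'D': 0.5}
```

Note that when exactly one candidate is unranked, the single unfilled position carries zero votes, so this rule reduces to treating that candidate as ranked last, consistent with the point made in the column.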

10th anniversary of “blue shift” research

On Election Night in 2012, I first noticed the phenomenon that I termed the “blue shift,” which since 2020 has often been called “the red mirage.” While watching the returns come in, I was attempting to calculate what size lead would put a state outside the margin of litigation, recognizing the possibility that Election Night leads might shift in subsequent days depending on the counting of provisional ballots, valid absentee ballots not previously counted, and other adjustments during the canvassing process. As a result of doing those calculations, I realized that for elections since 2000 the changes in vote totals during the canvassing period tended to favor the Democratic candidate significantly. Hence the “blue shift” term.

I reported these findings in a blog post on December 17, 2012, and then published the first “blue shift” law review article the following year. With Charles Stewart, I continued to do research on the “blue shift” phenomenon. I’m pleased that Charles and other political scientists continue to conduct significant “blue shift” research using statistical methods that are far beyond the capacity of this election law professor. I also used this “blue shift” research to predict in 2019 (before Covid occurred) that Trump would attempt to discredit valid ballots counted after Election Day, with the possibility that he might extend his effort to subvert the outcome of the 2020 presidential election all the way to the January 6, 2021 joint session of Congress.

I engage in this reflection on a decade of “blue shift” scholarship not as an exercise in self-promotion, but to ask this question: what will it take for the public as a whole to be inoculated against conspiracy theories aimed at using the “blue shift” to discredit election outcomes? In the last few days, as the media prepares for what might happen tomorrow night and in its aftermath, there is renewed fear that another reiteration of the “blue shift” in significant races will cause candidates to claim fraud wrongfully and their supporters to distrust the valid results of those races. (Today’s Washington Post editorial is one example among many expressing this fear.)

I’m cautiously hopeful that, as the “blue shift” phenomenon becomes increasingly familiar to more and more voters (because it occurs election after election), it will become understood and accepted by the average citizen in the same way it is by election administrators (and now journalists): as just the routine operation of our nation’s vote-counting process insofar as it relies more heavily on absentee and provisional ballots, with counting rules that inevitably cause those ballots to be counted after Election Night.

But if this acceptance of the “blue shift” phenomenon does not take hold among ordinary voters, regardless of their partisan leanings, and instead the “blue shift” continues to foster conspiracy theories that further erode public confidence in the electoral process, those of us who care about the ongoing operation of the democratic process will need to come up with a different solution. We can’t continue to have a huge chunk of the electorate not believing in the validity of the election, because they do not understand–and thus do not accept–that large numbers of ballots, which potentially could make the difference in the election, are entitled to be counted after Election Night.

We need to solve this crisis-of-confidence problem one way or another. I’m less concerned with the specific way in which we solve it. I just worry that if we don’t solve it, we will allow the cancer of election denialism to become ever more malignant, to the point where the body politic can no longer function as a democracy.

More related to Ranked Choice Voting

I had the opportunity to discuss Ranked Choice Voting on WOSU All Sides with Ann Fisher. (Mike Thompson was substituting for Ann Fisher today.) I was fortunate to be joined in the conversation by Nate Atkinson, who teaches at the University of Wisconsin Law School and who also published in the Hill his own op-ed on Ranked Choice Voting (co-authored with Scott Ganz of Georgetown’s business school), which is very much in line with my Washington Post column with Eric Maskin, about which I blogged yesterday. Broadly speaking, both Nate’s piece and mine argue for an adjustment to Alaska’s RCV system that would counteract the way in which its particular “instant runoff” method elevates more extreme candidates at the expense of candidates who, although having fewer first-place votes, have broader appeal across the entire electorate.

The conversation that Nate and I had on the All Sides program is, I think, useful for anyone interested in the pros and cons of alternative versions of RCV, and in how these alternatives compare to the current prevailing way of conducting elections. The conversation focused on basic principles concerning the purposes of holding elections, and especially how those purposes can best be achieved in the current context of polarization. The program will be available in podcast form for anyone wishing to listen.

“Total Vote Runoff” tweak to Ranked Choice Voting

Eric Maskin and I have written a Washington Post column explaining how a Total Vote Runoff version of Ranked Choice Voting is a small but significant adjustment to the “instant runoff” method used in Alaska (and elsewhere). The only change is the method of identifying the candidate to be eliminated when no candidate has a majority of first-place votes. The Total Vote Runoff eliminates the candidate with the fewest total votes from all the ranked-choice ballots, instead of the candidate with the fewest first-place votes, and the column explains how to calculate a candidate’s total votes. For those interested in the technical aspects of this procedure, a candidate’s total votes are equivalent to the candidate’s Borda score, and sequentially eliminating the candidate with the lowest total votes (Borda score) will never fail to elect a Condorcet winner (the candidate who beats every opponent in a head-to-head matchup). A more detailed explanation of the Total Vote Runoff procedure is contained in the presentation I gave at the University of New Hampshire Law Review symposium last month (the paper for which will be published in the review’s symposium issue).
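
In outline, the procedure works like this: if some candidate is ranked first on a majority of ballots, that candidate wins; otherwise the remaining candidate with the lowest total votes (Borda score) is eliminated and the tally repeats. The sketch below is my own illustration, not the column’s code; the function name, the stylized electorate, and the simple tie-breaking are assumptions, and complete rankings are assumed.

```python
from collections import Counter

def tvr_winner(ballots):
    """Total Vote Runoff: eliminate the lowest total-vote (Borda-score)
    candidate until someone holds a majority of first-place rankings."""
    remaining = set(c for b in ballots for c in b)
    while True:
        # First-place tallies among the remaining candidates.
        firsts = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, top = firsts.most_common(1)[0]
        if top * 2 > len(ballots):
            return leader
        # Total votes (Borda scores) among the remaining candidates.
        scores = {c: 0 for c in remaining}
        for b in ballots:
            order = [c for c in b if c in remaining]
            for i, c in enumerate(order):
                scores[c] += len(order) - i - 1
        remaining.remove(min(remaining, key=lambda c: scores[c]))

# Stylized nine-voter electorate where M is the Condorcet winner
# but has the fewest first-place votes:
ballots = (
    [["L", "M", "R"]] * 4 +
    [["R", "M", "L"]] * 3 +
    [["M", "L", "R"]] * 2
)
print(tvr_winner(ballots))  # → M
```

In this electorate M beats L head-to-head 5–4 and beats R 6–3, yet ordinary IRV would eliminate M immediately for having only two first-place votes; TVR instead eliminates R (the lowest total-vote candidate) and then elects M.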

What Worked Before May Work No Longer

An astute reader of my previous blog post, The Problem with Pendulum Politics, inquired why it might be necessary to abandon America’s longstanding electoral system of partisan primaries followed by plurality-winner general elections, when that system served the country well for decades without causing the kind of calcification-plus-parity that characterizes the nation’s politics today. If the electoral system is not responsible for the current problem, and functioned satisfactorily before the current problem materialized, why not seek some other cure to the current problem besides altering the longstanding electoral system?

The question is a fair one, and I cannot address it fully here. But I can point towards a potential answer. There may be developments in America’s overall political system that require modifications to aspects of that system that functioned in an appropriate equilibrium prior to pressures from other sources that knocked the system out of balance. One recent piece of political science along these lines is the essay Madison’s Constitution Under Stress: A Developmental Analysis of Political Polarization by Paul Pierson and Eric Schickler. They argue that changes in various social and cultural forces–including how interest groups interact with parties, how state parties interact with national parties, and the new media environment–make electoral competition within the Madisonian system no longer work as it previously did. Because it’s not possible to undo the external forces that destabilized the system, it’s necessary to adjust the system in order to restore its previous equilibrium.

Schickler also co-authored a separate essay with Nolan McCarty, On the Theory of Parties, which likewise emphasizes the importance of the changing relationship of state and national parties within the overall federalist system. For example, if voters increasingly base their votes for US Senator not on which candidate on the ballot they prefer, but on which national party they would prefer to hold the Senate majority, that type of electoral behavior might reinforce the kind of calcification described in The Bitter End, because voters would be less persuadable by the attributes of the specific candidates competing for their Senate seat. Senate elections might differ from gubernatorial elections in this regard, for example, even though both types of elections are statewide races involving the same electorate. (We’ve arguably seen some evidence of less partisan tribalism in gubernatorial elections than in US Senate elections in recent years, and it will be interesting to see whether this year’s midterms follow that pattern.)

Ultimately, the merits of an electoral reform proposal must be based on an overall assessment of pros and cons. In conducting that assessment, it’s necessary to acknowledge that just because the cost-benefit analysis would have yielded one outcome under previous conditions does not mean it yields the same outcome under new and significantly changed conditions. Just as climate change may require novel mitigation measures that would have been unthinkable previously because the climate change itself is irreversible, so too may larger social forces that are irreversible require novel electoral reforms that previously would have been considered unwarranted.

The Problem of Pendulum Politics

Another must-listen podcast (in my judgment): this one from Ezra Klein, interviewing John Sides and Lynn Vavreck about their new book, The Bitter End (also co-authored with Chris Tausanovitch). I’ve started reading the book, and so far it’s a must-read, but meanwhile I urge all ELB readers to listen to the podcast.

Here’s the central idea conveyed on the podcast: the nation’s politics have reached a precarious position that’s characterized not just by polarization, which has been developing for decades (as is well known). There are two additional features beyond polarization that put the country’s politics in a particularly difficult situation. One is the calcification of polarized views, meaning the rigidity of partisans on each side of the polarized divide (in other words, less persuadability at the level of the individual voter). The other is the knife’s edge parity of the red-blue divide, both nationally overall and in battleground states.

The consequence of calcification and parity combined is that election outcomes turn–and, more importantly, swing between extremes–on a shift from one side of the divide to the other of a relatively tiny number of persuadable voters who remain in the middle.

We can expect this kind of electoral oscillation (what some have called leapfrog representation) to continue, and even worsen as the swings of the pendulum become increasingly extreme, as long as the nation retains its prevailing electoral system. If the system of partisan primaries followed by a plurality-winner general election works as designed, the Blue and Red primaries will produce Blue and Red nominees whose views align with the median voter in each primary, and the general election winner will depend on which side of the knife’s-edge parity the overall electorate’s median voter happens to come down. But as the Blue and Red median voters increasingly diverge and calcify in their divergence, the amplitude of the oscillation between a temporary electoral victory for the median Blue position and a temporary electoral victory for the median Red position will become increasingly large.

If this electoral oscillation between extremes is problematic, then the combined phenomenon of calcification and parity would seem to require reform that aims for the kind of centripetal electoral system I mentioned in a previous ELB blog post. A centripetal electoral system would not choose between two alternatives selected by the median voter of each partisan primary, but instead would choose between two candidates positioning themselves to be more attractive to the overall electorate’s median voter, even recognizing that the center of the overall electorate has been hollowed out by the polarization-plus-calcification process.

The analysis in this blog post necessarily must remain tentative while I continue to read, and learn from, The Bitter End. Still, I’m sure about one thing: anyone who does listen to Ezra Klein’s podcast with John Sides and Lynn Vavreck should keep listening all the way to the very end. That’s when Lynn Vavreck expresses deep concern for the future of American politics if we don’t figure out a way to rectify the problem their research has identified.

The Missing Middle?

Two weeks out from Election Day, many of the most closely watched and competitive statewide races are between a MAGA Republican and a Democrat. Here are a few examples:

Arizona’s gubernatorial race between Kari Lake and Katie Hobbs

New Hampshire’s Senate race between Don Bolduc and Maggie Hassan

Pennsylvania’s Senate race between Mehmet Oz and John Fetterman

In each of these races, if the MAGA Republican ends up winning, it will be seen as a victory for Donald Trump and the predominance of his MAGA movement within the current Republican Party.

But in each of these races, there was a non-MAGA candidate who fell just short of winning the Republican primary. In Arizona, Karrin Taylor Robson lost to Kari Lake by only a few points (43% to 48%). New Hampshire’s GOP Senate primary was even closer, with the non-MAGA Chuck Morse a point behind Bolduc (36% to 37%). In Pennsylvania, David McCormick’s primary loss to Oz was a proverbial photo finish that went to a recount (with the final margin less than 0.1%).

Thus, if the MAGA candidate wins the general election, it’s reasonable to ask whether the non-MAGA Republican would have won the general election also. In fact, the non-MAGA Republican might have won the general election by an even wider margin than the more polarizing and divisive MAGA candidate. For example, I’ve heard many pundits predict that David McCormick would have had an easier time beating Fetterman than Oz does; and so if Oz wins, presumably McCormick would have as well but by a wider margin.

Thus, in each of these cases, it would be wrong to interpret a victory for the MAGA candidate as meaning that the general election voters subscribed to the MAGA movement. Instead, in each instance, the general election voters might have preferred the non-MAGA Republican but were deprived of that option by the way the partisan primary limits who’s on the general election ballot.

When the dust settles on this year’s midterms, will we be in a position to evaluate whether the electoral system is structured to enable the general election voters to choose the candidate whom they most prefer?

If Pennsylvania used Alaska’s electoral system…

ELB readers know that throughout this year I’ve urged us all to process election-related news with this question in mind: what if the relevant state used Alaska’s new electoral system? (Obviously, this question doesn’t apply to Alaska itself.) Tonight’s debate between Fetterman and Oz in Pennsylvania’s US Senate election is another useful reminder of the ongoing relevance of this question. If Pennsylvania used Alaska’s “top 4” system, the current election would involve four candidates: David McCormick and Conor Lamb, in addition to Fetterman and Oz. With Alaska’s system, voters would have the opportunity to rank their preferences among these four alternatives. I can imagine that many Pennsylvania voters right now might wish that they had the opportunity to express their relative preferences among these four candidates, rather than having to choose only between Fetterman and Oz.

“Running an Election in the Heart of Election Denialism”

A must-listen episode of The Daily (NY Times podcast). If you want a sense of what election administration is like right now (and, most especially, how it has evolved, meaning deteriorated, in the months since January 2021), you can’t do better than to listen to this episode, which is an extended interview with Stephen Richer, who runs elections in Maricopa County, Arizona. It’s very sobering, and it’s unclear how to improve the situation as we complete this year’s midterms and prepare for 2024. But diagnosis comes before prescription, and this episode is essential for understanding the current conditions.
