All posts by Nicholas Stephanopoulos

“Finding Condorcet”

I just posted this paper, written for the Washington & Lee Law Review’s symposium on voting rights in a politically polarized era. The paper uses a new dataset of about two hundred foreign instant-runoff voting (IRV) elections to assess the performance of IRV in practice. The abstract is below:

Instant-runoff voting (IRV) is having a moment. More than a dozen American localities have adopted it over the last few years. So have two states. Up to four more states may vote on switching to IRV in the 2024 election. In light of this momentum, it’s imperative to know how well IRV performs in practice. In particular, how often does IRV elect the candidate whom a majority of voters prefer over every other candidate in a head-to-head matchup, that is, the Condorcet winner? To answer this question, this article both surveys the existing literature on American IRV elections and analyzes a new dataset of almost two hundred foreign IRV races. Both approaches lead to the same conclusion: In actual elections—as opposed to in arithmetical examples or in simulated races—IRV almost always elects the Condorcet winner. What’s more, a Condorcet winner almost always exists. These findings help allay the concern that candidates lacking majority support frequently prevail under IRV. The results also reveal an electorate more rational than many might think: voters whose preferences among candidates are, at least, coherent in virtually all cases.
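To see concretely what the paper is measuring, here is a minimal sketch (not the paper's replication code) of the two winning criteria, applied to hypothetical complete rankings. The invented ballots illustrate the rare “center squeeze” scenario in which IRV and Condorcet diverge:

```python
from collections import Counter

def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every rival head-to-head, or None.

    Assumes every ballot ranks all candidates (no truncation).
    """
    n = len(ballots)
    for c in candidates:
        if all(2 * sum(b.index(c) < b.index(d) for b in ballots) > n
               for d in candidates if d != c):
            return c
    return None  # a preference cycle: no Condorcet winner exists

def irv_winner(ballots, candidates):
    """Repeatedly eliminate the candidate with the fewest first choices."""
    remaining = set(candidates)
    while len(remaining) > 1:
        firsts = Counter(next(c for c in b if c in remaining) for b in ballots)
        remaining.remove(min(remaining, key=lambda c: firsts.get(c, 0)))
    return remaining.pop()

# Hypothetical "center squeeze" electorate: B is everyone's compromise choice.
ballots = ([["A", "B", "C"]] * 35 +
           [["C", "B", "A"]] * 33 +
           [["B", "A", "C"]] * 32)
candidates = ["A", "B", "C"]
print(condorcet_winner(ballots, candidates))  # B (beats A 65-35, C 67-33)
print(irv_winner(ballots, candidates))        # A (B is eliminated first)
```

In the real-world elections the paper examines, this divergence almost never occurs: the IRV winner and the Condorcet winner nearly always coincide.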


RPV Near Me – Updated with Asian and American Indian/Alaska Native Estimates

This is a guest post by Ruth Greenwood, Director of the Election Law Clinic at Harvard Law School:

With coalition claims in the election law news again, I wanted to flag an update to a resource that I shared a year ago. The tool, RPVNearMe (developed by the Election Law Clinic at Harvard Law School and Christopher T. Kenny), provides estimates of racially polarized voting for every county in the country. Note that the tool only offers estimates for certain statewide and federal elections — i.e., not the local analysis you would need for VRA litigation — but it still gives a guide to possible coalition voting.

The site now includes six racial/ethnic categories (white, Black, Hispanic, Asian, American Indian/Alaska Native, and other) for every county in the country.

You can see there are some places where multiple minority groups clearly vote as a coalition (e.g., Virginia Beach, where I was one of the attorneys representing the Holloway plaintiffs in their successful tri-coalition claim on behalf of Black, Latino, and Asian voters). There are also places where those three communities don't clearly coalition vote (e.g., Kings County, New York, where the estimates suggest cohesive voting between Black and Latino voters but not with Asian voters). Those charts are copied below. All the underlying data can be downloaded from the website, and the code is on GitHub.
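For intuition about what these racially polarized voting estimates capture, here is a toy sketch using Goodman's classic ecological regression on invented precinct data. (RPVNearMe's own estimates rest on more sophisticated ecological inference methods; this shortcut is only illustrative.)

```python
import numpy as np

# Hypothetical precinct data: minority share of voters in each precinct,
# and the vote share won there by the minority-preferred candidate.
group_share = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
cand_share = np.array([0.22, 0.33, 0.45, 0.58, 0.69, 0.80])

# Goodman's ecological regression: candidate share = a + b * group share.
slope, intercept = np.polyfit(group_share, cand_share, 1)

print(f"estimated support within the group:  {intercept + slope:.0%}")
print(f"estimated support outside the group: {intercept:.0%}")
```

When the two estimates diverge sharply, as in this invented example (roughly 92% versus 14%), voting is highly polarized along group lines.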


“Using big data to make elections fairer”

Kosuke Imai and Ruth Greenwood wrote this column for CommonWealth on some of their data visualization projects for redistricting.

We run two projects that provide tools to help people, courts, and legislators understand how and why that is so. The Algorithm-Assisted Redistricting Methodology (ALARM) Project and the Election Law Clinic are housed in Harvard University’s Institute for Quantitative Social Science and Harvard Law School, respectively.

The Election Law Clinic partners with PlanScore to offer visualizations of the partisan biases of redistricting plans. The site includes data from 1972 to 2022 for every state, and allows users to easily see the partisan skews of congressional, state house, and state senate plans. . . . [ALARM] relies on a method developed by one of us (Professor Imai) to randomly create thousands of congressional district plans for every state with more than one district. . . .

Thankfully, the fight against gerrymandering continues in the states—and benefits from the work of PlanScore and ALARM. PlanScore’s scoring tool has been used to score over 389,000 plans in the 2020 redistricting cycle. The resulting evaluations of maps have been cited by experts, discussed by courts, introduced to redistricting commissions, and covered by journalists. These assessments have shown when proposed plans are highly skewed and should be vigorously opposed. They’ve also revealed when maps are fair and should be commended.

Likewise, one of us (Professor Imai) has used the methodology underlying ALARM as an expert in several cases, including one being argued in the Supreme Court this week. In that case, the technique supports the conclusion that South Carolina’s First Congressional District was racially gerrymandered. That district has an artificially smaller Black population than almost all randomly generated districts in the Charleston area. Outside the litigation context, activists in Ohio relied on the ALARM findings to write a constitutional amendment to end partisan gerrymandering. Signature gathering is now underway and that proposal is likely to be on the ballot in 2024.
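One of the measures behind PlanScore's partisan-bias visualizations is the efficiency gap, which compares the parties' “wasted” votes: all votes cast for losing candidates, plus winners' votes beyond the 50% needed to win. A minimal sketch with a made-up four-district plan (the sign convention is stated in the comments; PlanScore reports several complementary metrics as well):

```python
def efficiency_gap(results):
    """Efficiency gap from district-level two-party vote counts.

    results: list of (dem_votes, rep_votes) tuples, one per district.
    Sign convention here: positive = Democratic voters waste more votes,
    i.e., the plan skews Republican.
    """
    wasted_dem = wasted_rep = total = 0
    for dem, rep in results:
        needed = (dem + rep) / 2            # votes needed to win
        if dem > rep:
            wasted_dem += dem - needed      # winner's surplus votes
            wasted_rep += rep               # all of the loser's votes
        else:
            wasted_rep += rep - needed
            wasted_dem += dem
        total += dem + rep
    return (wasted_dem - wasted_rep) / total

# Hypothetical plan: Democrats win 45% of votes but only 1 of 4 seats.
plan = [(70, 30), (35, 65), (35, 65), (40, 60)]
print(f"{efficiency_gap(plan):+.0%}")  # +15%, a large pro-Republican skew
```

In this hypothetical plan, Democrats win 45% of the statewide vote but only one of four seats, and the 15% efficiency gap flags that skew.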


The Contributions of Politics as Markets

I’ve long thought of Politics as Markets as the most important contribution to election law in memory (noting that I’m using Politics as Markets as a metonym for the whole series of related articles by Rick and Sam). What made Politics as Markets so groundbreaking? At least three things. The first was Rick and Sam’s declaration of independence for election law. Mainstream constitutional law might continue to balance burdens on individual rights against countervailing state interests in areas like substantive due process, equal protection, and the First Amendment. But election law, said Rick and Sam, should be different. Election law should abandon rights-versus-interests balancing and replace it with a direct focus on how electoral regulations affect structural democratic values. This proposal raised the profile of election law. It could, and should, be its own intellectual domain, free of the doctrinal frameworks that govern constitutional law. The proposal also had immense appeal for scholars (like me) interested in how electoral rules affect values we care about. In Rick and Sam’s model, functional impact would be the touchstone of legal and policy analysis.

The intellectual arbitrage of Politics as Markets was also revelatory. Rick and Sam extensively cited the corporate law and antitrust literatures—not the usual reading lists of public law scholars. In these literatures they discerned a move from first-order to second-order regulation: ensuring that the marketplace as a whole is properly structured as opposed to policing individual firms or transactions. In a flash of insight, Rick and Sam realized that the same move could be made in election law (indeed, in all of public law). Electoral systems could also be structured to ensure their dynamism and resilience, in which case courts could step back from adjudicating disputes one by one. This application of private law ideas to public law contexts was exceptionally creative. It was also persuasive for those (again like me) with a preference for wholesale over retail electoral regulation.

Politics as Markets was pioneering, lastly, in its emphasis on a single democratic value: electoral competition. (Too) much work in this area observes that many democratic values exist, often pointing in different directions, and then demurs from reaching firm conclusions in the face of this multiplicity. In contrast, Rick and Sam bit the bullet and argued that competition should be the primary concern of scholars, judges, and policymakers. This argument was notable for its elegant simplicity, collapsing a welter of considerations to just one factor. It also strengthened the connection between election law and corporate and antitrust law, where (a different kind of) competition is the predominant objective. Competition is distinctive, too, as Rick and Sam pointed out, in that it’s attractive both intrinsically (for its own sake) and instrumentally (because it promotes the achievement of other democratic values, like responsiveness and accountability).

Of course, I have my quibbles with Politics as Markets. (What academic wouldn’t?) Its fixation with competition arguably reflects its era, when uncompetitive U.S. House races, in particular, were seen as a major national problem. Today, we face a host of democratic threats unrelated to lack of competition, like pervasive misinformation and a waning commitment (among some) to free and fair elections. I also wish that Rick and Sam had fleshed out their proposal in certain respects. For instance: How exactly should competition be measured? Should competition be conceived only as a sword (to attack anti-competitive practices) or also as a shield (to defend pro-competitive practices)? And what’s the empirical evidence that specific practices actually are anti- or pro-competitive? Most fundamentally, I diverge from Rick and Sam in the priority I place on competition. I certainly think it’s an important democratic value. But more vital still, I argue in several articles and a forthcoming book, is alignment between governmental outputs and popular preferences. A polity can still be democratic (I think) if its elections are uncompetitive but its government largely does what its people want. But if the link between public policy and public opinion is broken (in my view) so is democracy itself in any meaningful sense.

To be clear, these are cavils—not foundational disagreements—with Politics as Markets. In my alignment work, in particular, I endorse Rick and Sam’s move from rights-versus-interests balancing to structuralist, functionalist analysis. I also share their interest in competition, just as a driver of alignment rather than the ultimate desideratum for scholars, judges, and policymakers. Put differently, if Politics as Markets is now the central cleavage of election law, I know on what side of that divide I stand. It’s Rick and Sam’s side.


The laudable Pico decision

Justin and Rick have already noted the California Supreme Court’s major decision yesterday about the CVRA. I wanted to flag a few reasons why the decision is commendable — a model for state voting rights acts (and courts construing state voting rights acts) in other states. First, the court properly held that liability can’t be established based on racially polarized voting alone. It would be quite troubling if this were the only element of a racial vote dilution claim. Racially polarized voting is very common in American elections, so if this were all that had to be shown, few electoral systems would be safe from plausible challenges. Additionally, the existence of racially polarized voting doesn’t necessarily mean that some other system would improve minority representation. If a minority group couldn’t secure more representation under any other system, it’s a stretch to say its vote is diluted under the status quo.

Second, the court held that a plaintiff must identify a lawful alternative system under which the plaintiff’s class would be better represented. This approach nicely solves the well-known problem of the benchmark for assessing vote dilution. Under this approach, the benchmark isn’t proportional representation or maximal representation or (as under the federal VRA) the representation that would follow from drawing reasonably configured majority-minority districts. Instead, the benchmark is simply whatever lawful alternative system a plaintiff specifies. Dilution is present if the plaintiff’s class would, in fact, be better represented under that system than under the status quo.

Finally, the court made clear that a plaintiff can identify an alternative other than a single-member-district map. In particular, a plaintiff can put forward a system of proportional representation using cumulative, limited, or ranked-choice voting. A couple of California cities have recently switched from at-large elections to systems of proportional representation after being threatened with CVRA lawsuits. These systems promise better minority representation than single-member districts, along with fewer policymaking pathologies. It’s wonderful news that the court explicitly recognized these systems as viable alternatives to the status quo. Hopefully this will encourage more CVRA litigants to consider these systems as remedies when violations are found.
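The representational promise of these systems can be made precise with the “threshold of exclusion”: the vote share above which a cohesive group can guarantee itself a seat no matter how everyone else votes. A quick sketch using the standard formulas for cumulative and limited voting:

```python
def threshold_cumulative(seats: int) -> float:
    """Vote share above which a cohesive group can guarantee a seat
    under cumulative voting, by concentrating ("plumping") its votes."""
    return 1 / (seats + 1)

def threshold_limited(seats: int, votes_per_voter: int) -> float:
    """The same threshold under limited voting (fewer votes than seats)."""
    return votes_per_voter / (votes_per_voter + seats)

print(f"cumulative, 7 seats:       {threshold_cumulative(7):.1%}")   # 12.5%
print(f"limited, 7 seats, 1 vote:  {threshold_limited(7, 1):.1%}")   # 12.5%
print(f"limited, 7 seats, 3 votes: {threshold_limited(7, 3):.1%}")   # 30.0%
```

Under winner-take-all at-large elections, by contrast, the threshold is a full majority, which is why cohesive minority groups can be shut out entirely.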

I should also note that the court’s holdings are perfectly consistent with the recommendations that Ruth Greenwood and I offered in our recent article on state voting rights acts. We advised that (1) liability should be based on both racially polarized voting and minority underrepresentation; (2) the plaintiff should be responsible for identifying the benchmark relative to which vote dilution is determined; and (3) systems of proportional representation should be favored as remedies for vote dilution. In all these ways, the CVRA now corresponds to our conception of best practices for state voting rights acts.


Amicus brief on computer algorithms in racial gerrymandering cases

Harvard Law School’s Election Law Clinic filed this amicus brief today, on behalf of Jowei Chen and me, in Alexander v. South Carolina State Conference of the NAACP. The brief explores the uses of redistricting algorithms in racial gerrymandering litigation. Here are some excerpts from the introduction:

[T]he brief explains why computational redistricting can be probative in the racial-gerrymandering context. The basic logic is that racial-gerrymandering claims focus on the intent of mapmakers, and computational redistricting can be a helpful way to produce evidence of mapmakers’ intent. Consider a district attacked as a racial gerrymander and defended on the basis that one or more of nonracial criteria A, B, and C predominantly account for the district’s creation. A computer algorithm can be instructed to incorporate criteria A, B, and C—but to ignore racial data—and to churn out large numbers of districts in the vicinity of the disputed district. If these computer-generated districts significantly differ demographically from the disputed district, that’s supportive evidence for the inference that race predominantly drove that district’s formation. Had race not been the primary factor, that district would likely have had a different demographic makeup, one in the range of the computer-generated districts.

Critically, the emphasis on intent in racial-gerrymandering claims distinguishes this context from other areas where this Court has been skeptical of computational redistricting. In Rucho, ensembles of computer-generated maps were offered as the benchmark for determining partisan effect—a “baseline from which to measure how extreme a partisan gerrymander is.” 139 S. Ct. at 2505. Likewise, in Allen, Alabama argued that “millions of possible districting maps for a given State” should constitute the “race-neutral benchmark” relative to which the effect of racial vote dilution should be assessed. 143 S. Ct. at 1506. The Court properly rejected Alabama’s claim on several grounds, one of which was that racial vote dilution “turns on the presence of discriminatory effects, not discriminatory intent.” Id. at 1507. Unlike racial vote dilution, though, racial gerrymandering turns on the presence of racial intent, not racial effects. Computational redistricting can therefore be probative here for precisely the reason it was inapt in Allen—its ability to shed light on mapmakers’ motives.

It’s true, as the Court pointed out in Allen, that it’s generally infeasible for computer algorithms to enumerate every lawful map for a jurisdiction. See id. at 1514 (“What would the next million maps show?”). But mathematical proofs show that modern algorithms can produce—and mounting empirical evidence demonstrates that they often do produce—representative map ensembles with the same statistical properties as the entire map universe. See, e.g., Benjamin Fifield et al., The Essential Role of Empirical Validation in Legislative Redistricting Simulation, 7 Stat. & Pub. Pol’y 52 (2020) [Fifield et al., Essential Role]. The Court was also correct in Allen that the inclusion of different criteria in algorithms can “yield different benchmark results.” 143 S. Ct. at 1513. But in racial-gerrymandering cases, experts rely on the criteria specified by jurisdictions, not whichever parameters they happen to prefer. Experts also should and do conduct robustness checks to investigate whether their conclusions hold when they vary the instructions for their algorithms.
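Once an ensemble is in hand, the statistical comparison the brief describes is straightforward. A minimal sketch with synthetic numbers standing in for real algorithm output (actual ensembles come from redistricting samplers run on a state's precinct data, not from a normal distribution):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for real output: Black voting-age population (BVAP) shares of
# 1,000 districts generated with criteria A, B, and C but no racial data.
ensemble_bvap = rng.normal(loc=0.21, scale=0.015, size=1000)
enacted_bvap = 0.17  # hypothetical enacted district

# Where does the enacted district fall in the race-blind distribution?
share_lower = (ensemble_bvap < enacted_bvap).mean()
print(f"{share_lower:.1%} of race-blind districts have a lower BVAP share")
```

If essentially no race-blind district looks demographically like the enacted one, that is evidence that race, not criteria A, B, and C, predominantly drove the line-drawing.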


Quick Thoughts on Moore

1. It’s striking how the entire Court accepts Arizona State Legislature and its holding that the Elections Clause doesn’t preclude independent redistricting commissions. Roberts spends two pages discussing the case without suggesting any criticisms (even though he authored a bitter dissent in Arizona State Legislature). Even more strikingly, Thomas relies on Arizona State Legislature’s broad definition of “Legislature” as “the lawmaking power as it exists under the State Constitution.” Given these moves, independent redistricting commissions seem quite safe going forward.

2. A potentially important line in Kavanaugh’s concurrence (quoting Rehnquist’s concurrence in Bush v. Gore) is that, “in reviewing state court interpretations of state law, ‘we necessarily must examine the law of the State as it existed prior to the action of the [state] court.'” This suggests that, to avoid skeptical federal court review (and potentially reversal), state courts should make major changes to their election law jurisprudence in cases involving state elections. If and when these changes are later applied in cases involving federal elections, the changes will no longer be new. Instead, they’ll be part of “the law of the State as it existed prior to the action of the state court.” And so they’ll be significantly more likely to survive federal court review.

3. The final pages of Thomas’s dissent (joined by Gorsuch) present criticisms of federal court review in this context with which the three left-of-center justices likely agree. “[I]t is difficult to imagine what this inquiry could mean in theory, let alone practice.” Federal courts “are not equipped to judge whether a state court’s partisan-gerrymandering determination surpassed ‘the bounds of ordinary judicial review.'” “[T]his framework will have the effect of investing potentially large swaths of state constitutional law with the character of a federal question not amenable to meaningful or principled adjudication by federal courts.” And so on. Given these criticisms, it should be difficult to find five votes for actually reversing any state court ruling under state law about federal elections. Thomas and Gorsuch would presumably oppose any such reversal. And so, probably, would the three left-of-center justices, at least in all but the most egregious circumstances. If that’s right, then the door opened by Moore may not be opened very wide. In almost all cases, it may mean that federal courts will uphold challenged state law decisions.


Lifting the Fog on the Census, Differential Privacy, and Swapping (Greenwood)

This is a guest post from Ruth Greenwood:

Civil rights groups, redistricting litigators, political scientists, and computer scientists largely fell into two camps when it came to privacy protections for the 2020 census: Team TopDown and Team Swapping.* The acrimony between the groups arose over concerns that the 2020 decennial census data would either not protect privacy, not produce sufficiently accurate and unbiased estimates, or both. This led to lawsuits and a general mistrust between the teams and the Census Bureau (“Bureau”).

The concerns on both sides were legitimate and had real-world consequences: if the Bureau cannot guarantee the privacy of its data, then people won’t answer the census and it will produce inaccurate and biased estimates. If inaccuracies are random and small, then census data can still be fit for use. But if population counts, and racial demographic data in particular, are inaccurate in a biased way, then everything from federal funding to political representation could be skewed. And, as we all should know by now, if political power is skewed it is almost inevitable that it will tilt in favor of white people and away from people of color.

Jeff Zalesin and I had a hunch that with a bit more data we could find out whether the two methods produced accurate and/or unbiased estimates. So, we dipped our toe into the pond, and after many discussions with experts like Cynthia Dwork, Gary King, Terri Ann Lowenthal, and Terry Ao Minnis, we decided that getting the intermediate files used to create the decennial data, the noisy measurements files (“NMFs”), could allow really smart people who know how to use the data (i.e., not us) to investigate the accuracy and bias questions.

It turns out that getting the files was a little harder than we thought: first we (Cynthia, Gary, and I) asked publicly; then we (us three plus around 50 academics) asked directly; then we (the Election Law Clinic, on behalf of Prof. Justin Phillips) asked formally via a FOIA; then we (the Election Law Clinic and Selendy Gay Elsberg PLLC) filed a lawsuit to enforce that FOIA; then we found out that half the data we needed had been deleted by the Bureau; and finally, once the Bureau had recreated and released that first half of the data, we settled the lawsuit on the condition that the other half of the data would be forthcoming.

Thankfully, this Odyssean adventure seems to have paid off.

Kenny et al. have analyzed the first NMF released and answered the questions of whether TopDown and Swapping are accurate and unbiased. Happily, what they find is good news for both teams. They report that TopDown and Swapping are “similarly accurate in terms of . . . bias and noise,” and that “[t]hese patterns hold across census geographies with varying population sizes and racial diversity.” In terms of racial demographics, TopDown and Swapping produce data with almost identical (and extremely small) levels of inaccuracy and bias. It also turns out that TopDown’s known weak spot (higher error rates for racial demographics in small geographies, like census blocks) applies when Swapping is used as well.

The main concern raised by the Kenny et al. paper is that people who select Hispanic/Latino for their ethnicity, or who select multiple races, tend to get much noisier (less accurate), but not necessarily more biased, numbers regardless of whether TopDown or Swapping is used. This is a problem associated with the separate ethnicity and race categories. And there is a whole separate debate about how that should be resolved.

Two other comments from the Kenny et al. paper are that TopDown introduces errors that can be relatively large in geographies with small populations (while Swapping does not add these errors), and that the NMF itself has too much noise to be used in place of the final decennial data at any level.
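That small-geography pattern follows from first principles: privacy noise is scaled to the privacy-loss budget, not to the size of the place, so the same absolute error looms far larger over a block than over a county. A simplified sketch (the 2020 TopDown system actually uses a discrete Gaussian mechanism plus post-processing; plain Laplace noise, the textbook differential-privacy mechanism, is used here only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
scale = 5.0  # noise scale fixed by the privacy budget, not by population

for true_count in (40, 4_000, 400_000):  # block-, tract-, county-sized
    noisy = true_count + rng.laplace(0.0, scale, size=10_000)
    rel_err = np.mean(np.abs(noisy - true_count)) / true_count
    print(f"population {true_count:>7,}: mean relative error {rel_err:.3%}")
```

The identical noise that is invisible at the county level can swamp a block-level count, which is why error concentrates in small geographies.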

The 2020 NMF was just released today, so, provided it doesn’t show that some errant result occurred with the application of TopDown in 2020 (I look forward to the next in the series of Kenny et al. papers), we can rest easy knowing that the 2020 decennial census data is as accurate and unbiased as prior decades’ data, while still protecting privacy.

Does this mean that Team TopDown v. Team Swapping was all a lot of sound and fury signifying nothing? To the contrary. It seems likely that Kenny et al.’s earlier paper, along with detailed submissions from groups like MALDEF and AAJC, put pressure on the Bureau to revise their TopDown process to improve accuracy and reduce bias. And without the use of TopDown, there were real concerns that increasingly sophisticated external actors could have launched a successful reidentification attack on the new census data.

The real mystery here turns out to be why the Bureau took two years and a lawsuit to release data that would have quelled fears and improved relations with all involved. Why didn’t they listen to groups like the Leadership Conference on Civil and Human Rights when they sought clarity on whether the proposed DAS changes would cause people of color to be even more underrepresented than they already are in the decennial census data? Why didn’t they respond when over 50 academics asked for the data they needed to verify claims the Bureau had made?

Perhaps if the Bureau is a little more transparent and responsive in the leadup to the 2030 census, we can all be more confident it will produce a fair and accurate count.

* A quick note on terminology: The Census Bureau refers to its work to meet the statutory requirement of privacy protection for census responses as its “Disclosure Avoidance System” (DAS). The DAS for 2020 is referred to as “TopDown” and includes both the application of a differentially private algorithm and post-processing. The term “Swapping” refers to the DAS used by the Bureau in 1990, 2000, and 2010 (whereby households whose information is likely to lead to the identification of individuals are swapped with households in nearby census blocks). A recent paper notes that Swapping can “satisf[y] the classic notion of pure differential privacy.”


“Why the Supreme Court Declined an Opportunity to Diminish the Voting Rights Act”

Isaac Chotiner of the New Yorker did this Q&A about Milligan with Ruth Greenwood:

So, yes, I am completely shocked. There is no version of what happened today that I had predicted. We had fifteen different possible outcomes about how to prepare for the decision, and this was nowhere on my bingo card. My most cynical take is that the Court hasn’t released its affirmative-action decision, and perhaps Roberts is seeking some cover to completely eviscerate affirmative action by saying “Hey, but look—I left you with the Voting Rights Act.” It may also be that during the argument he saw Justice Ketanji Brown Jackson talk about the importance of these statutes to communities of color and maybe hearing that articulated by a Black woman affected him. . . .

I don’t think I’m in a place to be hugely optimistic about the rest of election law. We still have the Moore v. Harper case that will potentially come out, with the independent-state-legislature theory. If not in that case, it will come out in another case. I don’t think that this case represents an amazing new direction, but it is a very clear protection of a really important civil-rights statute. So, for what it is, it’s incredible.


“Lee Drutman responds to Steven Hill: ‘Yes, Fusion does offer a new horizon for US Politics'”

Lee Drutman defends fusion voting as a policy that modern reformers should push for:

As a political scientist who studies electoral systems and the role of political parties in our democracy, I want to explain here why I see fusion voting as a powerful way to in fact address some of the “troubling toxicities” in the immediate term. I like it in the immediate term because it can create an instant home for anti-MAGA Republicans who want to support Democratic candidates (or punish Republicans) without supporting the Democratic party, managing the most immediate existential threat to democracy.

I like it for the long term because I also believe fusion voting creates a particularly promising pathway to fundamentally alter the destructive winner-take-all system. To move beyond the winner-take-all system means moving beyond the two-party system. Fusion is a pro-parties strategy that builds more parties. More parties can move us towards proportional representation.


“New York Court Hears Arguments to Redraw the State’s Congressional Maps in 2024”

Politico on litigation that might lead to the redrawing of New York’s congressional map, with major implications for the composition of both New York’s delegation and the House as a whole:

A legal challenge that could eventually give New York Democrats a second crack at drawing new congressional district lines continued to work its way through the courts on Thursday, with a mid-level state appellate court hearing arguments that could restart redistricting by the end of the summer. . . .

“The IRC has a constitutional obligation to finish drawing New York’s congressional map,” said attorney Aria Branch of the Elias Law Group, a Democratic-aligned firm which brought the case. The court “drew a map in emergency circumstances for the 2022 elections only. That emergency is now over.”

If they win, the entire process would presumably start over. A reconstituted redistricting committee would hold hearings throughout the state this fall and produce new plans by January. If two sets of the maps are voted down, Democrats in the state Legislature could have a new chance to pick up the pen and draw more advantageous lines.


“Counties Irate over Legislature’s Plan to Change Election Law”

NY State of Politics on the proposed New York bill to move some local elections on-cycle. Of course, county elected officials who won their positions in off-cycle elections oppose the change. By the same token, the beneficiaries of gerrymandering always oppose redistricting reform.

Most county leaders across the state are furious as lawmakers are expected to pass legislation at the last minute Friday to move most town and county elections to even-numbered years.

Supporters point to national research showing a more than 18% increase in voter turnout during presidential election years.

Bill sponsor Sen. James Skoufis says the change will maximize voter participation and improve New York’s democracy.

“As it stands right now, in a lot of these local, town and county elections, you have 20 or so percent of voters deciding the outcome for the entire jurisdiction,” he said. “Why are you so afraid of 50, 60, 70 percent of voters determining who should hold these local positions?” . . .

“Moving local elections to even-numbered years dramatically increases voter turnout,” said Ben Weinberg, Citizens Union’s director of public policy. “And it also makes the electorate more representative of the population.”
