Bruce Cain: Back to Institutional Basics (Rucho Symposium)

The following is a guest post from Bruce Cain, part of the symposium on Partisan Gerrymandering after Rucho:

Given all the various ways that the Republicans have gamed election rules in recent years to stay in power, I fully understand the frustration that many Democrats have regarding the majority decisions in Rucho v. Common Cause and Benisek v. Lamone.  But setting politics aside, I believe that the opinions reached the right outcome.  They mercifully ended the doomed quest for a magical US Supreme Court solution to a complex political problem and usefully directed us back towards state-level institutional reforms.

As an election law “ancient,” I have witnessed countless efforts over the years to end political gerrymandering. Every decade brings new players into the reform game proposing novel ways to persuade the Court to end partisan gerrymandering with a standard as clear and simple as “one person, one vote.”  The fresh faces this cycle included mathematicians working on new ways to compute compactness scores, political scientists reworking symmetry measures into less sophisticated efficiency measures, computer scientists developing innovative plan simulation algorithms, and law professors with new legal framings.

Broadly speaking, the partisan redistricting reform argument has shifted over my lifetime from a heavy reliance on the Fourteenth Amendment to incorporate the logic of the First Amendment plus Article I, Sections 2 and 4, and from proportionality to symmetry as the possible base fairness principle.  Predictably, it was all for naught given the Court’s composition and legal precedent.  Prior to Rucho and Benisek, the door to discovering a manageable federal partisan gerrymandering standard had been left ajar. There was no existence proof, but there was also no nonexistence proof.  Now the door seems firmly closed.

The majority opinion’s arguments regarding the complexity and futility of composing a partisan gerrymandering standard out of the US Constitution’s thin air echo many of the arguments that I and many others have made for decades.  There is no reason to debate this further.  I prefer to think about what can be done institutionally to solve this problem at the state level.  I have five recommendations.

First, let’s go back to advocating for and improving the Independent Redistricting Commissions (IRCs).  They represented a step forward in curbing past redistricting excesses, but we have learned important lessons along the way that should be incorporated into improvements on the existing models.  As I have argued in the past, IRCs were well designed to handle the conflict of interest problem associated with politicians drawing their own district lines, but less equipped to handle partisan polarization as it has spread in the electorate.  I believe the solution starts with supermajority rules that require consent from all three partisan factions (Democrats, Republicans, Independents/Minor Party identifiers) with the prospect of the matter going to the courts in the case of deadlock. 

Second, for both IRCs and other redistricting bodies, the redistricting criteria used to draw district lines need to be formally ranked or ordered by tiers to give more clarity as to their priority.  With respect to partisan fairness, the underlying standard should be defined as explicitly as possible.

Third, with respect to the above point, we should not fool ourselves: in the end, the only measure that really matters to people in politics is the ratio of seats to votes.  If we believe otherwise, we are being naïve.  It is fine to look at as many different fairness measures as we might like (and I think that gives us a fuller idea of what is going on), but in the end, the debate will always come down to the seats gained and lost relative to the votes cast.
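The seats-to-votes comparison described above is easy to make concrete.  A minimal sketch in Python, using hypothetical vote totals (not drawn from any real plan), shows how a party can win a clear majority of votes while a packed district costs it seats:

```python
# Toy illustration of the seats-to-votes comparison.  The vote totals
# below are hypothetical and chosen only to show how packing one
# district (67-33) depresses a party's seat share relative to its votes.

def seats_votes_shares(district_results):
    """district_results: list of (party_a_votes, party_b_votes) per district.
    Returns (statewide vote share, seat share) for party A."""
    total_a = sum(a for a, b in district_results)
    total_b = sum(b for a, b in district_results)
    vote_share = total_a / (total_a + total_b)
    seat_share = sum(1 for a, b in district_results if a > b) / len(district_results)
    return vote_share, seat_share

# Party A wins 55% of the statewide vote but only 2 of 4 seats.
results = [(60, 40), (45, 55), (48, 52), (67, 33)]
votes, seats = seats_votes_shares(results)
```

The ratio of those two numbers is the quantity that, as the post argues, ultimately drives the political debate, whatever other measures are also on the table.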

Fourth, we need to incorporate simulation methods into our plan fairness assessments, for two reasons.  Simulation provides the necessary context for identifying outliers, or what I would call unreasonable partisan bias.  It also neutralizes the gaming of fairness measures.  As a line drawer, I would have enjoyed torturing my late adversary Tom Hofeller by using the efficiency gap as a pretext to concentrate Republican votes in bizarre ways to compensate for the natural Democratic concentration in urban areas.   Simulation combined with outlier fairness standards would “cabin” my mischievous intent.
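The two pieces named here, a fairness measure and an outlier test against a simulated ensemble, can be sketched as follows.  This is an illustrative toy assuming two-party districts and a pre-computed ensemble of scores; real simulation work samples plans from actual precinct data with Markov-chain methods:

```python
def efficiency_gap(district_results):
    """Efficiency gap: (wasted_A - wasted_B) / total votes.
    Wasted votes are all of the loser's votes, plus the winner's votes
    beyond the 50% needed to carry the district."""
    wasted_a = wasted_b = total = 0.0
    for a, b in district_results:
        n = a + b
        total += n
        threshold = n / 2
        if a > b:
            wasted_a += a - threshold
            wasted_b += b
        else:
            wasted_b += b - threshold
            wasted_a += a
    return (wasted_a - wasted_b) / total

def is_outlier(enacted_score, ensemble_scores, pct=0.95):
    """Flag the enacted plan if its score is more extreme than `pct`
    of the simulated ensemble (a crude outlier test; the 95% cutoff
    is an assumption, not a legal standard)."""
    more_extreme = sum(1 for s in ensemble_scores
                       if abs(s) >= abs(enacted_score))
    return more_extreme / len(ensemble_scores) < (1 - pct)
```

The point of pairing the two functions is exactly the "cabining" described above: a plan that games one raw measure still has to look unremarkable against the distribution of simulated alternatives.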

Finally, I advocate using simulation as a threat mechanism to break partisan deadlock.  Returning to my first point, both IRCs and state governments (especially but not exclusively the divided ones) struggle to find agreement.  In the end, legislators and citizens who have devoted many months to deliberating over new district lines do not want to cede their task to anyone, including the courts.  So court intervention is both an option and an inducement to act.

However, when a panel of partisan-nominated judges hands the matter over to an appointed Master, or worse, takes a hands-on approach to line drawing, political controversy follows.  Do this enough times, and the reputation of the courts will suffer.

Suppose instead that we simulate a large enough population of plans in a random way (such that the odds of picking any plan mirror the likelihood of randomly drawing it) and stipulate that, if the redistricting body fails to adopt a plan, the court then chooses randomly from this population of constitutionally acceptable plans.  This uncertainty will incentivize agreement.  Of course, it doesn’t have to be a court that adopts this alternative; the state auditor or another government agency could do the job, too.
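The fallback mechanism can be sketched in a few lines.  Here `candidate_plans` and `meets_constraints` are hypothetical stand-ins for a real simulation pipeline and the legal criteria a plan must satisfy:

```python
import random

def fallback_plan(candidate_plans, meets_constraints, seed=None):
    """If the redistricting body deadlocks, select uniformly at random
    from the simulated plans that satisfy the constitutional criteria.
    `candidate_plans` is a pre-simulated ensemble and `meets_constraints`
    a predicate encoding the legal criteria (both hypothetical stand-ins)."""
    acceptable = [p for p in candidate_plans if meets_constraints(p)]
    if not acceptable:
        raise ValueError("no constitutionally acceptable plan in ensemble")
    rng = random.Random(seed)  # seedable for auditability
    return rng.choice(acceptable)
```

Because no faction can predict which acceptable plan the random draw will produce, every faction has an incentive to settle on a negotiated plan instead, which is the whole point of the threat.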

So, I say to those who want to do something meaningful about curbing partisan redistricting: let’s get back to improving on and advocating for institutional fixes before the next cycle of well-meaning, politically naïve, legal crusaders enter the fray.
