Moon Duchin: How to Reason from the Universe of Maps (The Normative Logic of Map Sampling) (Rucho symposium)

The following is a guest post from Moon Duchin, part of the symposium on Partisan Gerrymandering after Rucho:

Justice Kagan’s dissent in Rucho states in its opening paragraphs:  “The majority’s abdication comes just when courts across the country, including those below, have coalesced around manageable judicial standards to resolve partisan gerrymandering claims… They do not require—indeed, they do not permit—courts to rely on their own ideas of electoral fairness, whether proportional representation or any other.”

She’s talking about something called the “ensemble method” or the “extreme outlier approach.”  I think there’s still a fair amount of confusion in the election law community about the logic of ensembles, and (with some notable exceptions!) scholars have been surprisingly slow to take up the challenge of tailoring a legal approach to fit this newly sharpened evidentiary tool. 

The essence of the ensemble approach is just this: no measurable properties of districts come with obvious “neutral baselines”; rather, baselines can only be discovered by comparing a map to the whole universe of alternatives that hold the state’s rules and voters constant.  For instance, you can construct a representative sample of North Carolina congressional plans that meet the rules and are drawn without partisan goals, and check to see whether 10-3 is an extreme outcome (it is) and whether individual districts have partisan compositions that are extremely skewed relative to the alternatives (they do).  From there it is logical to argue that votes have been diluted—instrumentalized to realize partisan goals, and not given their full weight, power, and value—and that outlier status gives evidence of intent as well as effect.  See the Amicus Brief of Mathematicians, Law Professors, and Students.
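To make the comparison concrete, here is a toy sketch of the outlier check, with entirely made-up numbers (nothing below is real North Carolina data): given seat outcomes from an ensemble of neutrally drawn plans, ask what fraction of the ensemble matches or exceeds the enacted plan.

```python
# Toy outlier check against an ensemble of neutrally drawn plans.
# The seat counts below are hypothetical, for illustration only.
ensemble_seats = [7] * 120 + [8] * 450 + [9] * 330 + [10] * 95 + [11] * 5
enacted_seats = 10

# What share of neutral plans give the favored party at least this many seats?
more_extreme = sum(1 for s in ensemble_seats if s >= enacted_seats)
share = more_extreme / len(ensemble_seats)
print(f"Plans with >= {enacted_seats} seats: {share:.1%} of the ensemble")
```

The smaller that share, the stronger the inference that the enacted outcome was engineered rather than a natural consequence of the state’s rules and voting patterns.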

As Justin Levitt wrote at SCOTUSblog:  “Dilution depends on knowing what the baseline should be. You only know that a drink is diluted when you know it falls outside a normal range of what it should taste like. You only know that a district is diluted when you know it falls outside the normal range of what its composition should be.”  Good news, Justin!  There’s a tool for that.

Why does the dissent back away from other metrics favored by advocates? 

If you think proportionality is a non-starter because it prescribes S = V (seat share equal to vote share), then you are unlikely to prefer the efficiency gap, which prescribes S = 2V - ½.
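A small sketch with invented district totals shows where S = 2V - ½ comes from: under the equal-turnout simplification, a zero efficiency gap is algebraically the same as that seats-votes relation (sign conventions vary in the literature; one convention is used below).

```python
# Efficiency gap on hypothetical equal-turnout districts: (party A, party B) votes.
districts = [(60, 40), (55, 45), (45, 55), (40, 60), (70, 30)]

def wasted(votes_for, votes_against):
    total = votes_for + votes_against
    if votes_for > votes_against:
        return votes_for - total / 2   # winner wastes votes beyond 50%
    return votes_for                   # loser wastes every vote

wasted_a = sum(wasted(a, b) for a, b in districts)
wasted_b = sum(wasted(b, a) for a, b in districts)
total_votes = sum(a + b for a, b in districts)
eg = (wasted_b - wasted_a) / total_votes   # one sign convention: positive favors A

S = sum(a > b for a, b in districts) / len(districts)   # A's seat share
V = sum(a for a, b in districts) / total_votes          # A's vote share

# With equal turnout, eg equals S - (2V - 1/2), so eg = 0 forces S = 2V - 1/2.
print(round(eg, 4), round(S - (2 * V - 0.5), 4))
```

So the efficiency gap does not escape the baseline problem; it just swaps one prescribed seats-votes line for another.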

And unfortunately, partisan symmetry is just as much a standard that requires predominant attention to the careful arrangement of district lines around partisan data.  All partisan symmetry-based scores (partisan bias, mean-median, etc.) report an ideally fair plan whenever there is symmetry in the pattern of vote shares by district: no matter what state you are assessing and what its rules and voting patterns may be, an outcome of 37%, 47%, 57%, and 67% in four districts earns the plan a suite of perfect symmetry scores, while one of 37%, 47%, 57%, and 60% does not.  (See the Katz-King-Rosenblatt preprint for a survey; this follows from Definition 1 and Assumption 3.)
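The claim is easy to check on the paragraph’s own numbers using the mean-median score (mean district vote share minus median district vote share, here in percentage points):

```python
# Mean-median score on the two four-district vote-share patterns from the text.
from statistics import mean, median

def mean_median(shares):
    return mean(shares) - median(shares)

print(mean_median([37, 47, 57, 67]))  # 0.0 -> a "perfect" symmetry score
print(mean_median([37, 47, 57, 60]))  # -1.75 -> flagged as asymmetric
```

The evenly spaced pattern scores as ideally fair regardless of the state’s actual geography, which is exactly the kind of judge-imposed vision of fairness the dissent disclaims.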

This makes those scores problematic in the logic of the dissent:  “Judges should not be apportioning political power based on their own vision of electoral fairness, whether proportional representation or any other.”  (dissent p14)

And again:  “Contrary to the majority’s suggestion, the District Courts did not have to—and in fact did not—choose among competing visions of electoral fairness. That is because they did not try to compare the State’s actual map to an “ideally fair” one (whether based on proportional representation or some other criterion). Instead, they looked at the difference between what the State did and what the State would have done if politicians hadn’t been intent on partisan gain. Or put differently, the comparator (or baseline or touchstone) is the result not of a judge’s philosophizing but of the State’s own characteristics and judgments.”  (dissent p22-23)

Does the ensemble approach also devolve to prescribing a seats outcome?

No.  Kagan cites the Massachusetts example (see Math Brief p25-26) at length (dissent p24), which shows that the ensemble method requires neither proportionality nor symmetry from a state whose political geography is the driver of non-intuitive outcomes.

A relative standard?

In other words, doesn’t this approach produce baselines that vary among states and over time, because the rules and vote patterns themselves will vary? 

Yes: “But that is a virtue, not a vice—a feature, not a bug.” (dissent p25)

How many standard deviations?

“And we can see where the State’s actual plan falls on the spectrum—at or near the median or way out on one of the tails? The further out on the tail, the more extreme the partisan distortion and the more significant the vote dilution.” (dissent p19)

This is one place where Kagan’s phrasing could invite a problematic interpretation: that the median is the ideal.  The logic of the ensemble method works best when it disallows outliers but does not choose among the vast body of remaining options.  At best, it proscribes rather than prescribes.
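The proscribe-don’t-prescribe distinction can be sketched as a tail test (hypothetical numbers and a hypothetical 1% tail cutoff, chosen only for illustration): a plan is flagged only if it sits in an extreme tail of the ensemble, and everything else passes with no single outcome anointed as ideal.

```python
# Flag only extreme tails of the ensemble; do not prefer the median to
# any other non-outlier outcome.  All numbers here are hypothetical.
def is_extreme_outlier(ensemble, observed, tail=0.01):
    n = len(ensemble)
    at_most = sum(1 for x in ensemble if x <= observed) / n
    at_least = sum(1 for x in ensemble if x >= observed) / n
    return min(at_most, at_least) <= tail   # deep in either tail

ensemble = [8] * 480 + [9] * 440 + [10] * 70 + [11] * 10
print(is_extreme_outlier(ensemble, 11))  # flagged
print(is_extreme_outlier(ensemble, 9))   # passes; no "ideal" seat count imposed
```

Any plan inside the broad middle of the distribution is left to the state’s discretion, which is the sense in which the method proscribes rather than prescribes.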

And finally, it’s no more incumbent upon a court to set a global threshold or limit here than it is for, say, population deviation, where 19-person deviation could be problematic with respect to one set of facts, but 76-person deviation could be acceptable for another.

A word of caution and a path forward

Not all computer methods are equally sound!  The science of map sampling has made enormous strides in the last two years and has crystallized around the use of Markov chains, which come with strong mathematical guarantees.  Legal scholars and litigators should collaborate with academics to pair the best science (representative sampling with an appropriate distributional design) with the best legal thinking.  As we move to state-level challenges and to best practices for commissions, this gives us a clear path forward.
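For readers who have not seen a sampling chain before, here is a deliberately tiny caricature of the idea, not a real redistricting chain: six precincts in a row are split into two contiguous “districts” by a single movable boundary, and a Markov chain random walk explores the valid splits, recording the seat outcome at each step.  All vote numbers are invented.

```python
# A toy Markov chain over the five contiguous 2-district splits of a
# 6-precinct line; hypothetical party-A vote percentages per precinct.
import random

random.seed(0)
votes_a = [40, 70, 35, 65, 45, 55]

cut = 3          # current state: boundary after precinct `cut`
samples = []
for _ in range(10_000):
    proposal = cut + random.choice([-1, 1])   # propose nudging the boundary
    if 1 <= proposal <= 5:                    # keep both districts nonempty
        cut = proposal                        # symmetric walk -> uniform target
    left = sum(votes_a[:cut]) / cut
    right = sum(votes_a[cut:]) / (6 - cut)
    samples.append((left > 50) + (right > 50))  # A's seats under this split

print({s: samples.count(s) for s in sorted(set(samples))})
```

Real chains run on precinct adjacency graphs with population, contiguity, and other constraints, but the logic is the same: the chain’s guarantees are what let you treat the collected outcomes as a defensible baseline distribution.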
