New paper on modeling election outcomes

From Danny Ebanks, Jonathan Katz, and Gary King comes “If a Statistical Model Predicts That Common Events Should Occur Only Once in 10,000 Elections, Maybe it’s the Wrong Model”:

Political scientists forecast elections, not primarily to satisfy public interest, but to validate statistical models used for estimating many quantities of scholarly interest. Although we have learned a great deal from these models, they can be embarrassingly overconfident: Events that should occur once in 10,000 elections occur almost every year, and even those which should occur once in a trillion-trillion elections are sometimes observed. We develop a novel generative statistical model of US district-level congressional elections, validate it with extensive out-of-sample tests, and use it to compute the first correctly calibrated probabilities of incumbent losses, one of the most important quantities for evaluating a democracy. We find that even when marginals vanish, incumbency advantage grows, and other dramatic changes occur, the risk of an out-party incumbent losing a midterm election contest has been high and essentially constant since the 1950s. We then develop a broader theory of American democracy consistent with the results from our generative model and discuss the broader implications of our generative modeling strategy.
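The “once in 10,000 elections” point is easy to see with a toy calculation (my illustration, not the paper’s actual model): if district-level swings are heavy-tailed but an analyst models them as normal with a matching standard deviation, swings that the heavy-tailed world produces routinely get assigned vanishingly small probabilities. A minimal sketch in Python, assuming Student-t tails purely for illustration:

```python
import numpy as np
from scipy import stats

# Toy comparison (assumed for illustration, not the authors' model):
# swings are truly Student-t with df = 3, but the analyst fits a
# normal distribution with the same standard deviation.
df = 3
t_sd = np.sqrt(df / (df - 2))  # sd of a standard t with df = 3

for k in (3, 4, 5):
    # Two-sided tail probability of a "k-sigma" swing under each model.
    p_norm = 2 * stats.norm.sf(k)
    p_t = 2 * stats.t.sf(k * t_sd, df)
    print(f"{k}-sigma swing: normal model says 1 in {1 / p_norm:,.0f}; "
          f"heavy-tailed truth says 1 in {1 / p_t:,.0f}")
```

In this toy setup, a 4-sigma swing that the normal model calls a 1-in-16,000 event happens about 1 in 160 times under the heavy-tailed truth; with 435 House contests every two years, that is a couple of “impossible” events per cycle, which is roughly the embarrassment the abstract describes.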

More here. I look forward to digging in!
