Politics, Meet Big Data: Demystifying the World of Political Modeling

Note: All data from models for the midterm elections were taken in early October.

The Gallup Poll. The Approval Rating. The Consumer Confidence Index. Big data and smart statistics have allowed us to quantify our thoughts. Or that’s the hope. How good of a job is Obama doing? Which candidate has the best chance of winning in November? And perhaps most important as we head into the autumn election blitz: Who will control the Senate next January? Ask Gallup, or the Washington Post’s Election Lab, or FiveThirtyEight. They all say they’ve got the answer, but their answers are all different. The New York Times says Republicans have a 63 percent chance of retaking the Senate. The Huffington Post has it at 60 percent. The Princeton Election Consortium says just 36 percent. So which is it? And how do they get there?

Since George Gallup formed the American Institute of Public Opinion in 1935 and correctly predicted the landslide reelection of President Franklin Roosevelt the following year, scientific polling has been a mainstay of American politics. Even though the often-embarrassing failures of polling have led many politicians to dismiss pollsters as charlatans, the influence of the public opinion poll has only grown. By the 1980s, national campaigns were testing public opinion in nightly tracking polls, yet the fundamentals of polling—randomly calling landline telephones with a bit of demographic weighting thrown in (to ensure, say, the appropriate balance of men and women)—have scarcely changed since President Roosevelt defeated Alf Landon.

The revolution began quietly enough. In late November 2007, under the username “poblano,” baseball statistician Nate Silver published a diary entitled “HRC [Hillary Rodham Clinton] Electability in Purple States” on Daily Kos, a liberal blog. By the Super Tuesday primaries in February, Silver was publishing projections on the primary battle between then-Senators Clinton and Obama using a model that incorporated past voting behavior, demographics, and polling data. After predicting that Obama would carry the day’s delegates 859-829 (in actuality Obama did so by a slightly narrower margin, 847-834), Poblano received a shout-out from Weekly Standard editor Bill Kristol on the editorial page of the New York Times. Three months later, after starting his own blog, FiveThirtyEight.com, and publishing spot-on projections in Indiana and North Carolina, Silver shed his anonymity and began work on an ambitious model for the November election between Obama and Senator John McCain.

Based on the principles that had underpinned PECOTA, the algorithm that Silver built to project baseball player performance, FiveThirtyEight brought a never-before-seen level of statistical sophistication to political polling. Pollsters’ historical records were scrutinized, and those with scarce or inaccurate results were given less weight. Polls were assigned a “half-life” so that their weight decayed over time. When a state lacked recent polling, a regression analysis that incorporated past voting behavior, fundraising, and demographics assumed more of the burden. Instead of assuming that margins of error in different states were independent, Silver introduced a more nuanced approach: if North Dakota voted four points more Republican than expected, for instance, similar states like South Dakota and Montana would be more likely to do so as well. By the time the election rolled around, FiveThirtyEight had exploded, drawing nearly four million unique visitors in October alone. And the model didn’t disappoint: it correctly predicted every state except Indiana, which Obama won narrowly, and all thirty-five Senate elections on the ballot nationwide.
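
To make the weighting idea concrete, here is a minimal sketch in Python of how a poll’s influence might decay with age while also reflecting its pollster’s track record. The 30-day half-life, quality scores, and sample polls are illustrative assumptions, not FiveThirtyEight’s actual parameters.

```python
def poll_weight(age_days, pollster_quality, half_life_days=30.0):
    """Weight a poll by pollster quality, halved for every half-life of age."""
    return pollster_quality * 0.5 ** (age_days / half_life_days)

def weighted_margin(polls):
    """polls: list of (dem_margin_pts, age_days, pollster_quality)."""
    weights = [poll_weight(age, quality) for _, age, quality in polls]
    total = sum(weights)
    return sum(margin * w for (margin, _, _), w in zip(polls, weights)) / total

# Three hypothetical polls of the same race.
polls = [
    (+4.0, 2, 0.9),   # fresh poll from a strong pollster
    (+1.0, 25, 0.9),  # older poll from the same tier
    (-2.0, 5, 0.4),   # fresh poll, weak track record
]
print(round(weighted_margin(polls), 2))  # ~1.88: the fresh, high-quality poll dominates
```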

But models are only one facet of the revolution that has been sweeping through political prognostication. With response rates falling drastically—Pew Research has reported a 26 percent drop in response rate to landline surveys over the past 15 years—and money growing tighter in traditional media, “gold-standard” polls with live interviewers and repeated call-backs are becoming scarcer. In their place, cheap robo-polls from pollsters like Rasmussen Reports and Public Policy Polling have proliferated, as have online polls from YouGov and others. And while these methods are here to stay, their value is hotly debated. With so many methodologies flying around, cherry-picking reigns: in the heat of the 2012 campaign, a website called UnskewedPolls.com used Rasmussen data to “reweight” polls toward Mitt Romney. Silver, then writing for the New York Times, sarcastically explained on Twitter, “Romney’s winning in the Gallup poll if you reweight the sample to have more Romney than Obama voters.”
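
The mechanics behind that joke fit in a few lines. Below is a toy illustration—with made-up responses and weights, not Gallup’s data—of how the same raw sample can produce opposite toplines depending on the weights applied, which is why modest demographic weighting is routine while partisan “unskewing” is not.

```python
def weighted_share(responses, weights):
    """responses: 1 = supports candidate A, 0 = candidate B."""
    return sum(r * w for r, w in zip(responses, weights)) / sum(weights)

responses = [1, 1, 1, 0, 0, 0, 0]  # raw sample: A trails, 3 votes to 4

# Modest corrections for under- or over-sampled demographic groups.
demographic_weights = [1.1, 0.9, 1.0, 1.0, 1.0, 0.9, 1.1]
# "Unskewing": upweight A's supporters until A wins.
partisan_weights = [2.0, 2.0, 2.0, 0.5, 0.5, 0.5, 0.5]

print(weighted_share(responses, demographic_weights))  # ~0.43, close to the raw sample
print(weighted_share(responses, partisan_weights))     # 0.75, A now "leads"
```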

The track records of internal pollsters who work for political campaigns have been fiercely debated—successes like Mark Mellman’s spot-on numbers in Senate Majority Leader Harry Reid’s upset reelection in 2010, and failures like John McLaughlin’s poll that showed House Majority Leader Eric Cantor cruising to a thirty-four-point victory before his primary defeat in 2014. In a country where demographics are shifting, landlines are being disconnected, and voters are tuning out a never-ending blitz of surveys and advertisements, can the data even be trusted?

If you have enough of it, the minds behind the curtain of President Obama’s reelection would argue. In swing states like Ohio, the president’s campaign gave every eligible voter a “support score”—how likely they were to support the president—and a “turnout score”—how likely they were to vote—based on demographics, publicly disclosed donations, past voting behavior, and responses to campaign volunteers, among other metrics. Democratic voters deemed less likely to vote were given extra attention by volunteers, as were voters on the fence. The campaign studied anonymized data from cable providers to figure out what persuadable voters were watching on television—and blitzed cable channels like TV Land that had never been targeted by campaigns before.
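
A score like this can be thought of as the output of a probability model over voter-file features. The sketch below shows the general shape of such a model—a logistic function over hand-picked features—using invented feature names and coefficients; the campaign’s actual model and inputs were proprietary.

```python
import math

def support_score(features, coefs, intercept=0.0):
    """Toy logistic model: returns 0-100, like the scores described above."""
    z = intercept + sum(coefs[name] * value for name, value in features.items())
    return 100.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients a campaign might fit from volunteer-call data.
coefs = {
    "registered_democrat": 2.0,
    "voted_in_2008": 0.4,
    "donated_to_obama": 1.5,
}

voter = {"registered_democrat": 1, "voted_in_2008": 1, "donated_to_obama": 0}
print(round(support_score(voter, coefs, intercept=-1.0), 1))  # 80.2
```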

The campaign’s more than two million volunteers were exhorted to share their Facebook friend lists with headquarters and then told which friends to reach out to based on a powerful voter mobilization model. To determine where best to allocate resources, the campaign ran 66,000 iterations of its own proprietary election model every night, filled with data that its media counterparts could only dream of. In the end, the data was shockingly accurate. Of 103,508 early voters in Hamilton County, Ohio, 56.4 percent were given a support score of over 50.1; 56.6 percent actually voted for the president, a degree of accuracy that dwarfed the capabilities of even the best-designed polls. In the simple words of Romney’s Ohio state director, “It is remarkable to see what they did, in the rearview mirror.”
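
What do 66,000 nightly “iterations” look like? Almost certainly something far richer than this, but the skeleton of such a Monte Carlo model fits in a few lines: simulate the election many times, perturbing each state’s projected margin with random error, and count how often each side wins. The margins, error sizes, and shared national error term below are illustrative assumptions—the shared term is what makes state outcomes move together, echoing the correlation idea in Silver’s model.

```python
import random

PROJECTED_DEM_MARGINS = {"OH": 2.0, "FL": 0.5, "VA": 1.5}  # hypothetical, in points

def simulate(n_sims=66_000, national_sd=2.5, state_sd=3.0):
    """Estimate each state's Dem win probability via Monte Carlo."""
    wins = dict.fromkeys(PROJECTED_DEM_MARGINS, 0)
    for _ in range(n_sims):
        national_error = random.gauss(0, national_sd)  # shifts all states together
        for state, margin in PROJECTED_DEM_MARGINS.items():
            if margin + national_error + random.gauss(0, state_sd) > 0:
                wins[state] += 1
    return {state: count / n_sims for state, count in wins.items()}

print(simulate())  # e.g. {'OH': 0.69, 'FL': 0.55, 'VA': 0.65}
```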

While Democrats are seeking to tap into the extraordinary pool of data assembled two years ago and Republicans are rushing to catch up, the big news this cycle has been the proliferation of models across major news outlets. Six years after Silver shed his pen name, “Poblano” is no longer a solitary blogger. After a three-year stretch at The New York Times, where on the eve of the 2012 election Silver garnered one-fifth of all visits to the newspaper’s website, FiveThirtyEight has relaunched as a subsidiary of ESPN. In its place, the Times has formed its own data-driven political blog, the Upshot, led by former New Republic analyst Nate Cohn. Meanwhile, the Washington Post snapped up the Monkey Cage, a blog run by numbers-focused political scientists. And established blogs have also gotten in on the action: the Huffington Post has absorbed Pollster.com and its founder, Mark Blumenthal, while Daily Kos has brought on Drew Linzer, a former political scientist at Emory University known for his “Votamatic” model.

The differences between the models—especially debates about whether to include “fundamentals” alongside polling data and how to properly measure uncertainty—have occasionally bubbled over into Twitter battles and blog-based broadsides. Silver launched a cutting criticism in September against a model produced by Sam Wang, a professor at Princeton University, saying flatly, “That model is wrong… because it substantially underestimates uncertainty [and] makes several assumptions about how polls behave that don’t check out.” Wang, whose model is currently the only one that picks the Democrats as favorites to retain the Senate, counters that layering fundamentals—such as President Obama’s poor approval ratings and the Republican lean of many states with competitive races—on top of polling data is misguided. “For most of the year, polls have shown that Republicans are slightly underperforming, relative to those expectations,” he argued. “That’s the real story.” The dispute between Silver and Wang is certainly not the only one: Cohn has contended for over a year—dating back to his time at The New Republic—that Public Policy Polling, a prolific Democratic polling firm, has a deeply flawed methodology despite its solid results in the past several cycles; the quarrel soon drew in Blumenthal and Linzer, among others, on Twitter, and remains ongoing.
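
At bottom, the disagreement is about how much weight a fundamentals-based prior should get relative to the polls. One standard way to formalize the blend—offered here only as a sketch of the idea under dispute, with hypothetical numbers—is to treat each as a noisy estimate and combine them in inverse proportion to their variances:

```python
def blend(poll_mean, poll_sd, fund_mean, fund_sd):
    """Precision-weighted mix of a polling average and a fundamentals prior,
    both expressed as Dem margin in points."""
    w_poll, w_fund = 1 / poll_sd**2, 1 / fund_sd**2
    return (poll_mean * w_poll + fund_mean * w_fund) / (w_poll + w_fund)

# Polls say Dem +2; fundamentals (approval, state lean) say Dem -3.
print(round(blend(2.0, 3.0, -3.0, 4.0), 2))  # 0.2: pulled well toward the fundamentals
```

Wang’s objection, put in these terms, is that when the polls and the prior disagree for months on end, it is the prior that should give way.
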
Nerd fights aside, we need only look at this year’s Senate races to see the judgment calls these models must make. Democrats currently hold four-point leads (in simple polling averages) in the Senate races in North Carolina and Michigan, but they are hardly the same race: Michigan twice went easily for President Obama and over 15 percent of its voters remain undecided, while North Carolina, where a Libertarian is drawing as much as 8 percent in polls, narrowly voted for Mitt Romney and has less than half as many undecided voters as Michigan. And if no candidate reaches 50 percent in November in Georgia or Louisiana, the top two move to a runoff that could easily determine control of the Senate.

All of these questions, once the sole domain of old-school analysts like Rothenberg and Cook, have entered the realm of the quants and emerged as the sharpest differences between analysts like Silver and Wang. Some of the answers will become clearer as we draw closer to Election Day, and many of the models will converge as a result. But others won’t: we need only remember how shocking election night was two years ago to Republican pollsters, even as the results unfolded just as Obama’s campaign had projected. This fall’s election night will not only pit Democrats against Republicans and pundits against analysts: models will also be going head-to-head.
