
The (Nonpartisan) Polls Were Fine, Actually

In 2022 the professional polling outfits did a pretty good job. The polling averages were skewed by the spamming of bad Republican polls.
November 30, 2022

Heading into the 2022 election, the conventional wisdom held that the Republican Party was going to have a good night. Traditional midterm indicators (such as Biden’s low approval ratings) favored Republicans, and over the last few weeks of the cycle, the GOP’s poll numbers had improved in a number of races. But in the end, Democrats held the Senate and came within a whisker of winning the House.

So were the polls wrong? Sort of, but also, not really.

The nonpartisan polling was actually pretty good in 2022. Most of the phantom Republican strength in pre-election statewide polling was a function of junk firms with poor data quality and low transparency spamming the polling averages with bad polls.

In reality, an aggregation of nonpartisan polls predicted the correct winner in every Senate battleground and would have predicted the margins substantially more accurately than the partisan GOP pollsters that flooded the averages in almost every major race.

These Republican firms, such as the Trafalgar Group, overstated Republican strength by roughly 3 points relative to the nonpartisan polls. Trafalgar, for instance, showed the GOP with a closing advantage in New Hampshire, Nevada, Pennsylvania, Georgia, and Arizona. In states that saw Democratic blowouts, such as Washington and Colorado, it (incorrectly) showed tight races within the margin of error. In nearly every case, these firms appear to have missed the mark substantially.
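The mechanics of the skew are simple arithmetic. As a minimal sketch with invented numbers (not actual 2022 poll results), here is how a handful of polls leaning a few points toward one party can drag an unweighted polling average toward a phantom lead:

```python
# Illustrative sketch with made-up margins; positive = Democrat leads.
# Shows how flooding a simple unweighted average with partisan polls
# that run ~3 points more Republican shifts the aggregate.

def polling_average(margins):
    """Unweighted mean of poll margins."""
    return sum(margins) / len(margins)

# Hypothetical nonpartisan polls: Democrat up ~2 points on average.
nonpartisan = [3.0, 1.0, 2.0, 2.0]

# Hypothetical partisan polls, each ~3 points more Republican.
partisan = [0.0, -2.0, -1.0, -1.0]

clean_avg = polling_average(nonpartisan)            # 2.0
flooded_avg = polling_average(nonpartisan + partisan)  # 0.5

print(f"nonpartisan-only average: D+{clean_avg:.1f}")
print(f"flooded average: D+{flooded_avg:.1f}")
```

With these invented figures, four spammed polls cut an apparent D+2 race to D+0.5 — a "margin of error" race that was never actually close in the nonpartisan data.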

What happened?

There were signs even before the election that something was off. Trafalgar’s polling methodology, which Split Ticket investigated in great detail, did not adhere to industry standards in data reporting, methodology, or transparency. For example, its calculation of response rates did not fit any of the numerous accepted AAPOR definitions.

Trafalgar’s self-described approach was akin to artificially adding Republicans to account for a supposed built-in bias in polling. But this adjustment did not fix the underlying problems; as Nate Cohn noted, the crosstabs of Trafalgar polls were unbelievable, with Pennsylvania crosstabs suggesting a tight race despite Democrats winning white voters, and with the heavily Democratic Philadelphia metro area shown voting the same as the rest of the state. Both sets of numbers strained credulity even before the election and were not explainable by the variance inherent to poll crosstabs. They could thus have been dismissed outright by anyone familiar with the state’s voting patterns. But residual scarring from 2020 may have left many observers wary of looking into this in detail.

There were similarly egregious misses in Georgia, where the aggregate of nonpartisan polls showed a Warnock lead while the partisan polls showed Herschel Walker up. The polls that showed Walker winning, however, all shared one key element: They kept showing an extremely Republican electorate in which Walker was pulling around 20 percent of the Black vote. This would have been a phenomenal showing for any Georgia Republican, especially considering that Trump won only 6 percent of Black Georgians in 2020. When polls show seismic demographic shifts like this, they are worthy of deeper examination and occasional skepticism, especially when the shift is only observed by partisan pollsters (many of whom do not adhere to standard data-quality procedures). Instead, they appear to have been taken at face value by many observers worried that polls would always overestimate Democrats.

That is a dangerous approach, because over the long term, polling error is much more random than people think. Historically, it does not correlate cycle to cycle. So the best approach to using polls to understand the political landscape probably remains the same as it always has been: Trust the outfits that pair sound methodology with transparent data collection and reporting. And be skeptical of pollsters with partisan motivations who won’t show their work.

Lakshya Jain and Dan Guild

Lakshya Jain is a computer scientist, lecturer at UC Berkeley, and a cofounder of and analyst at the elections website Split Ticket. He has also written for Sabato’s Crystal Ball. Twitter: @lxeagle17.
Dan Guild is a lawyer and project manager who lives in New Hampshire. He has written for Bleeding Heartland, CNN, and Sabato’s Crystal Ball, and contributed to the Washington Post’s 2020 primary simulations. Twitter: @dcg1114.