Don’t use the exit polls to draw conclusions about election results
The data are unreliable and can lead people to learn the wrong lessons
Election night is like early Christmas for those of us who conduct political analysis. For months, we look for any small bit of evidence we can find to gauge the electorate’s mood and understand how people might vote in the fall general election—opinion polls, candidate fundraising data, special elections, economic indicators, and much more. Finally, all this speculation materializes in hard, concrete results: on election night, evidence of how Americans are really thinking begins trickling in to fill out a puzzle of the country at the state, district, and county levels.
But those results are far from the last word. Questions remain about who voted, how they voted, and why they voted that way. In the weeks and months following Election Day, more data becomes available, offering clearer answers to those questions. For example, states make their voter files available for purchase a few months after the election. Commercial organizations will buy that data, combine it with other sources, and weight it to the results to produce a more accurate look at who cast a ballot and model how those people voted.1
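To make that last step a bit more concrete, here is a deliberately simplified sketch, in Python with pandas, of what “weighting to the results” can look like: rescaling matched respondents’ weights so that the weighted vote shares line up with certified totals. The column names, the county-level grain, and the simple cell-weighting approach are all assumptions for illustration, not a description of any firm’s actual methodology.

```python
# Simplified illustration of "weighting to the results": take survey respondents
# who have been matched to a voter file and rescale each person's weight so the
# weighted candidate shares in every county match the certified totals.
# Column names and the county-level grain are hypothetical placeholders.
import pandas as pd

def weight_to_results(respondents: pd.DataFrame, certified: pd.DataFrame) -> pd.DataFrame:
    """Rescale respondent weights so weighted candidate shares match official results.

    respondents: columns ['county', 'candidate', 'weight']
    certified:   columns ['county', 'candidate', 'official_share'] (shares sum to 1 per county)
    """
    resp = respondents.copy()

    # Weighted share of each candidate within each county in the raw sample
    cell_weight = resp.groupby(["county", "candidate"])["weight"].transform("sum")
    county_weight = resp.groupby("county")["weight"].transform("sum")
    resp["sample_share"] = cell_weight / county_weight

    # Scale every respondent so their cell's weighted share equals the certified share
    resp = resp.merge(certified, on=["county", "candidate"], how="left")
    resp["weight"] = resp["weight"] * resp["official_share"] / resp["sample_share"]
    return resp.drop(columns=["sample_share", "official_share"])
```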
Some outlets will analyze trends from individual voting precincts and compare them to previous elections. By overlaying that data with demographic information from the census, they can show the extent to which voters in different communities shifted relative to past elections.
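As a toy illustration of that kind of precinct analysis (not any outlet’s actual pipeline), the sketch below merges two cycles of hypothetical precinct returns, computes the change in the Democratic margin, and joins census demographics to summarize which communities shifted. The file and column names are invented placeholders.

```python
# Toy sketch of a precinct-over-precinct comparison overlaid with census data.
# The CSV files and column names are invented placeholders, not real sources.
import pandas as pd

prev = pd.read_csv("precincts_prior.csv")     # precinct_id, dem_votes, rep_votes
curr = pd.read_csv("precincts_current.csv")   # precinct_id, dem_votes, rep_votes
census = pd.read_csv("precinct_census.csv")   # precinct_id, pct_college, median_age, ...

def dem_margin(dem: pd.Series, rep: pd.Series) -> pd.Series:
    """Democratic margin as a share of the two-party vote."""
    return (dem - rep) / (dem + rep)

merged = prev.merge(curr, on="precinct_id", suffixes=("_prev", "_curr"))
merged["shift"] = dem_margin(
    merged["dem_votes_curr"], merged["rep_votes_curr"]
) - dem_margin(merged["dem_votes_prev"], merged["rep_votes_prev"])

# Overlay demographics and summarize the shift by, for example, college attainment
overlay = merged.merge(census, on="precinct_id")
overlay["college_tercile"] = pd.qcut(overlay["pct_college"], 3, labels=["low", "mid", "high"])
print(overlay.groupby("college_tercile", observed=True)["shift"].mean())
```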
Another source of data that many national pundits rely on is the exit polls—surveys that historically have been conducted as voters leave (or exit) their polling place.2 They ask, among other things, whom respondents voted for and why, and they collect certain demographic information about them. This data allows news networks to call some races as soon as the polls have closed, identify people’s top voting issues, talk about how various groups voted, and much more. Unfortunately, these polls are also often unreliable, and sometimes they are just plain useless.
The problems with the exit polls have spanned multiple election cycles. Perhaps most famously, early exits indicated in 2000 that Al Gore would win the key battleground state of Florida, leading networks to call it for him. However, these polls only surveyed people who voted in person on Election Day. As the state began tabulating absentee ballots, the picture looked more complicated, with Floridians who cast those ballots likely favoring George W. Bush. The networks were subsequently forced to retract that call later in the night.3
Then, in the 2002 midterms, the exits were so flawed that the organization responsible for administering them pulled the plug entirely, saying the results could not be trusted. Even after the networks ended the old exit poll consortium and formed a new one ahead of the 2004 presidential election with the firm Edison Research, some of the same reliability problems occurred yet again, with exits showing John Kerry ahead in three swing states that ultimately went for Bush.
More recently, the exit polls misrepresented the makeup of the 2016 electorate by significantly underestimating the share of non-college white voters and overestimating the share of college-educated white voters, which likely led both parties to misunderstand how much sway each of these groups had. The continual flaws in the exit polls finally prompted two organizations—the Associated Press and Fox News—to break off from the existing consortium with Edison after 2016 and start a new venture called the AP VoteCast. This project surveys voters who cast ballots (and even some who do not) using a more comprehensive methodology in hopes of cutting down on the errors that have plagued the traditional Edison exit poll.4
Despite these well-documented issues, many networks, reporters, and institutions insist on continuing to draw early conclusions using the Edison exits. For example, on election night this year, coverage from CNN and MSNBC pointed out that exit poll data suggested fully 27% of voters nationally identified abortion as the most important issue to their midterm vote, just barely behind the top issue of inflation (31%). If true, this would be a huge story and a sign that the Supreme Court’s Dobbs decision may have been singularly responsible for Democrats’ surprisingly strong midterm performance. The New York Times—the paper of record—also published a story using the exit polls to make this claim.
What these outlets did not make clear, however, is that the Edison exit polls only offered respondents five issues to choose from, which necessarily flattened voters’ choices. Missing from the list were issues like healthcare, democracy, climate change, and the COVID pandemic. This made it more likely that the shares of voters choosing the five listed options were inflated, including for abortion: that 27% figure was a stark departure from pre-election polling, which consistently showed abortion trailing the economy and inflation by wide margins, registering at about 10% or less.
In fact, the AP VoteCast, which gave respondents nine options (versus Edison’s five), showed that a strong plurality of voters (48%) said the economy was the top issue, with abortion at just 10%—in line with pre-election trends. This phenomenon also materialized at the state level: the Times’ analysis noted that the exits showed abortion was actually the top issue in states like Michigan and Pennsylvania, where Democrats had strong performances in competitive races. But in those states, too, the AP VoteCast found that abortion was a distant second behind outright majorities who chose inflation.
So essentially, the Edison data on this question didn’t pass the smell test, and yet many in the national media have used it anyway to tell the story of the midterm.
Another narrative that has taken hold in the past few weeks is that it was young voters who saved Democrats. Much of this has been based on an early post-election report from an organization called the Center for Information & Research on Civic Learning and Engagement (or CIRCLE), which is considered the authority for research and analysis about youth voting trends. One of the headline takeaways from this report was that young people turned out at the second-highest rate in decades and that their support for Democrats was pivotal in key contests around the country. This reporting has been picked up by major outlets ranging from USA Today to the PBS NewsHour.
However, these stories failed to mention a few important things. First, this age bloc’s purportedly high turnout rate was just 27% (and only 31% in key swing states), meaning barely one in four eligible voters aged 18–29 cast a ballot in the midterm. Additionally, after the 2018 midterms, CIRCLE initially estimated that youth turnout hit its highest point in over two decades at 31% nationally. But that figure was revised downward to 28% after the voter file was released, so it’s reasonable to think the same could happen this time as well. Perhaps the biggest issue, though, is the fact that this analysis relied on—you may have guessed it—exit polls. And yet, this hasn’t stopped pundits from forming an early narrative that young voters were the key to Democrats’ history-defying performance. The framing from PBS even suggested that this finding will change how the parties approach major political issues.
All these issues point to a problem not only with the exit poll data itself but also with the media’s reliance on it for immediate post-election analysis: once flawed narratives get baked in, it can be difficult to change them—even if later, more accurate data tell a different story. This can also have practical consequences down the road. If Democrats believe they defied the historical odds in 2022 because they pounded the table on social issues that motivated younger, more liberal voters, it may inform how they approach future elections, even if factors other than abortion (such as poor candidate quality on the Republican side) helped them find relative success, or groups other than young voters were actually more decisive.
Here’s the thing: there is nothing wrong with exhibiting some humility and acknowledging that we can only infer so much from the results we currently have. We will get better data in the coming months, at which time we can draw more accurate conclusions about what happened in this election. It’s very possible that the early exits showing young voters were decisive will prove to have been correct! Unfortunately, there seems to be an incentive to offer answers as soon as possible rather than waiting for more clarity. But doing so with dubious data at this early time can mislead the public, candidates, and parties. It is incumbent upon those with large platforms to understand this and adjust their reporting accordingly.
Organizations that conduct this type of analysis include the Pew Research Center and the Democratic data firm Catalist.
Increasingly, these also include telephone surveys.
After three cycles, the AP VoteCast seems to be producing more reliable findings than the Edison exits, though the validated voter analyses from Pew, Catalist, etc. are still the gold standard for this type of data.
And, in fact, CIRCLE's estimates of youth turnout were later revised downward, from 27% to 23%: https://circle.tufts.edu/latest-research/2022-youth-turnout-race-and-gender-reveals-major-inequities