WSJ: “The Polls Are Dead, Long Live Politics”
Washington Post: “Polling seems to be irrevocably broken.”
Vanity Fair: “WRONG, WRONG, WRONG: Can the American Polling Industry Survive Its 2020 Meltdown?”
The Hill: Frank Luntz, “Polling is done.” “Devastating to my industry.”
The Guardian: “Polling industry the night’s big loser as 2016 debacle repeats itself.”
Following the 2020 election, there appears to be near-universal agreement that polling was gravely flawed, leading many to expect (and maybe even depend on!) outcomes that never came to fruition. What ISN'T agreed upon is how the polls could still be so far off, and why they went wrong.
So…What Went Wrong?
A helpful piece from the New York Times by Nate Cohn (one of the top 3-5 Nates in the polling industry) poses a few broad, early theories about the failure. In a concluding remark, the article reports that an analysis “suggests a fundamental mismeasurement of the attitudes of a large demographic group, not just an underestimate of its share of the electorate.”
Piecing Together What’s Missing
After studying people and why they do what they do for nearly 30 years, I want to lay out what I believe is the missing element in this analysis of national importance.
First, let’s start with this bit of research reality: People don’t always do what they say they do.
Relying solely on asking people about their beliefs or commitments, as surveys do, is a mistake under most circumstances. If you only want a rough idea of what's going on, maybe it's fine. But to truly understand people's attitudes, surveys should be preceded or followed by qualitative research that reveals the underlying issues which set up, counter, or dilute the statistical survey design or results. The trouble is that qualitative research (focus groups, online communities, and the like) is cumbersome to run at the scale required to cover the many demographic variables in the voting public.
Second, survey results will tell you what people said and how many said it. But they won't tell you WHY they said it. This is a simplistic comparison to the highly sophisticated statistical analysis that pollsters conduct, but it makes the point: key elements are missing if you're trying to understand voters.
The only way to determine how and why citizens will vote is to analyze for the values that drive our decisions and for our emotions, that is, how we feel about something. Values and emotions are the indicators of true intent. Further, in the right hands, text analysis now provides deeper, more meaningful insight into what's really motivating people than asking them directly, and it can handle data at large scale.
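To make that concrete, here is a minimal sketch of what lexicon-based value and emotion scoring of open-ended text can look like. The lexicons, categories, and sample response below are illustrative assumptions, not a production vocabulary or the author's actual toolset.

```python
# Minimal sketch: scoring an open-ended response against hand-built value and
# emotion lexicons. The word lists and the sample text are illustrative only.
from collections import Counter
import re

VALUE_LEXICON = {
    "safety":      {"safe", "security", "protect", "crime", "border"},
    "environment": {"climate", "environment", "clean", "green", "emissions"},
    "thrift":      {"afford", "cheap", "price", "cost", "budget"},
    "equality":    {"equality", "fair", "diversity", "inclusion", "rights"},
}

EMOTION_LEXICON = {
    "pride": {"proud", "pride"},
    "fear":  {"afraid", "scared", "worried", "fear"},
    "trust": {"trust", "reliable", "honest"},
    "joy":   {"happy", "happiness", "excited", "hopeful"},
}

def score_text(text, lexicon):
    """Count how many words in `text` fall into each lexicon category."""
    tokens = Counter(re.findall(r"[a-z']+", text.lower()))
    return {category: sum(tokens[w] for w in words)
            for category, words in lexicon.items()}

if __name__ == "__main__":
    response = ("I'm worried about crime in my neighborhood; I just want my "
                "family to be safe, and I need someone I can trust.")
    print("values:  ", score_text(response, VALUE_LEXICON))
    print("emotions:", score_text(response, EMOTION_LEXICON))
```

A real pipeline would use far richer models than word counts, but even this toy version shows how a respondent's own words can be turned into a profile of what they value and how they feel.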
What Do Values and Emotions Tell Us?
● Values can tell you that it was a belief in environmental causes that led a buyer to an electric car. And the buyer felt pride in that decision.
● Values can tell you that a shopper normally values thrift, but that an I-gotta-have-it desire overrode thrift when it came to buying a Burton snowboard.
● Values can tell you that a need for safety drives a vote for Trump, and that an emotional need for trust demands no deviation from that choice.
● Values can tell you that a belief in diversity and equality means a vote for a mixed-race vice-presidential candidate, cast with a feeling of pure happiness.
Extrapolate from these broad examples into a detailed analysis of large numbers of people (specific groups with diverse needs, interests, and backgrounds) and you can learn so much more. It is from this point that analysts can begin to identify and segment voting groups, by looking first at what really matters to them.
Finally, CNBC assessed the challenge of polling. Among its findings are two main areas to investigate: the record turnout, and why those voters turned out. Research and polling should be asking: “Who is being left out of the surveys? Why might they be reluctant to talk to pollsters? Who doesn’t get contacted? How might one reach them?” Making use of other types of data for an initial analysis of voter attitudes and behaviors would provide a stronger foundation for more accurate polling.
Using Emotions to Sort Voting Groups
It goes without saying: we are not just red and blue. The country is now wildly diverse, and lumping entire ethnic groups into a single voting bloc, or all suburban moms together, just won’t cut it. The Washington Post quoted a tweet by Nikole Hannah-Jones to this point: “One day after this election is over I am going to write a piece about how Latino is a contrived ethnic category that artificially lumps white Cubans with Black Puerto Ricans and Indigenous Guatemalans . . .”
Combining polling data with text analysis of qualitative research and social media data for values and emotions would give a well-rounded, more complete, and perhaps more nuanced view of a voter’s intentions. We can get closer to real answers by analyzing the many kinds of context-rich, text-based data that represent all voting citizens, processing them with tools that can identify values and emotions, and then sorting respondents into voting groups. Yes, it takes more time, but the stakes of a presidential election warrant it.
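As a rough illustration of that last step, the sketch below sorts respondents into groups by the mix of values and emotions in their profiles rather than by demographics. The score vectors are made-up placeholders (imagine them as output of lexicon scoring like the earlier sketch), and k-means is just one of many clustering choices.

```python
# Minimal sketch: grouping respondents by their value/emotion profiles.
# The profiles below are fabricated score vectors for illustration only.
import numpy as np
from sklearn.cluster import KMeans

# Columns: safety, environment, thrift, equality, pride, fear, trust, joy
profiles = np.array([
    [5, 0, 1, 0, 1, 4, 3, 0],   # safety/fear-driven respondent
    [4, 0, 2, 0, 0, 5, 2, 0],
    [0, 5, 0, 3, 2, 0, 1, 4],   # environment/equality-driven respondent
    [1, 4, 0, 4, 1, 0, 1, 3],
    [1, 0, 5, 1, 0, 1, 2, 1],   # thrift-driven respondent
    [0, 1, 4, 1, 0, 2, 1, 1],
], dtype=float)

# Normalize each row so clusters reflect the mix of values and emotions,
# not how many scored words a respondent happened to produce.
row_sums = profiles.sum(axis=1, keepdims=True)
normalized = profiles / np.where(row_sums == 0, 1, row_sums)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
groups = kmeans.fit_predict(normalized)
print("group assignments:", groups)
```

The point is not the particular algorithm; it is that segments defined by what people care about and feel can then be compared against, and used to re-weight, conventional polling samples.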
My belief: Decontextualizing the data from the people or the broader culture is the failing we’re trying to fix. A strong understanding of and insight into the core issues of voting factions – the values that drive their decisions and how they feel about their lives, jobs, and country – would set up the next round of polling for far more success.