What Can Political Pollsters Learn From Marketers To Improve Their Data? | AdExchanger

By Dev April 01, 2022

Baseball legend Yogi Berra once famously said, “It’s tough to make predictions, especially about the future.”

Berra wasn’t referring to the 2020 presidential election, of course, but the quote is apt when it comes to the polling data released in the days and weeks leading up to Election Day, which, according to most reports, was way off base and predicted an easy win for former Vice President Joe Biden.

Instead, the race – as of this writing – is too close to call.

The Atlantic said on Wednesday that surveys which “badly missed” the results amounted to a disaster for the polling industry and for the media outlets and analysts that package and interpret polls for public consumption, such as FiveThirtyEight, The New York Times’ Upshot and The Economist’s election unit.

Still, polling data and analytics are complex. And the stakes are higher in politics, with pundits under pressure to produce firm predictions and narratives.

“It’s a good reminder for marketers of how we have more data at a drastically higher scale, so we should be taking advantage of machine learning and marketing analytics whenever possible,” said Chris Kelly, CEO of California-based Upwave, a leading analytics platform for brand marketers. “Nothing we do in marketing and analytics should look like political polling because we have mountains of additional data – we should be doing much more sophisticated techniques, so we should be proud of what we’re doing and push all of our technology partners to do that.”

Kelly lauded Nate Silver’s FiveThirtyEight, which has recently been criticized for missing the mark. Compared with the marketing industry, and even sports, the data available for political polling is simply not as robust.

“Based on the data they have – and the thinness of the data, they’re modeling something that happened four years ago under totally different circumstances – they’re probably as sophisticated as they can be,” he said. Even sports leagues have more data than politics, Kelly added – and sports models are often wrong.

“We should put pretty wide error bars on a lot of political projections because they have even less data than you have in sports, and way less data than we have in marketing,” he said. “It’s just hard to have accurate turnout models.”

Conversely, in the marketing world, sampling and polling aren’t always the right approach to driving more sales and optimizing spending, said Rick Bruner, CEO of Central Control Inc., which helps companies run experiments to measure advertising ROI. The better strategy, he said, is to run experiments on ad campaigns and measure them against granular sales outcomes.
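As a rough illustration of the experiment-over-polling approach Bruner describes, the sketch below compares sales from users exposed to a campaign against a randomly held-out control group. It is a minimal example under assumed inputs, not Central Control’s methodology; the sales figures and the confidence-interval shortcut are placeholders for illustration.

```python
# Minimal sketch of an ad-exposure holdout experiment; not Central Control's
# actual methodology. The per-user sales figures and the simple normal-
# approximation confidence interval are illustrative assumptions.
import math
import statistics

# Hypothetical per-user sales outcomes (e.g., dollars spent during the campaign).
exposed = [12.0, 0.0, 35.5, 8.0, 0.0, 22.0, 15.0, 0.0, 40.0, 5.0]
holdout = [10.0, 0.0, 30.0, 0.0, 0.0, 18.0, 12.0, 0.0, 25.0, 0.0]

def mean_and_se(values):
    """Return the sample mean and its standard error."""
    m = statistics.mean(values)
    se = statistics.stdev(values) / math.sqrt(len(values))
    return m, se

exp_mean, exp_se = mean_and_se(exposed)
hold_mean, hold_se = mean_and_se(holdout)

# Incremental lift per user and a rough 95% confidence interval on it.
lift = exp_mean - hold_mean
lift_se = math.sqrt(exp_se ** 2 + hold_se ** 2)
print(f"Lift per user: {lift:.2f} +/- {1.96 * lift_se:.2f}")
```

The point of the design is that the randomized holdout, rather than a survey sample, carries the causal weight: any difference between the two groups can be attributed to the campaign itself.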

Still, experts say that political polling outlets can learn from the tech innovations driven by the marketing industry to create more accurate data and analytics models.

Measure behavior

Measuring behavior can result in more accurate data, according to Jeff Bander, a professional researcher and US head of Eye Square, a survey and market research company. His firm applies neuroscience in its client research through its market technology platforms, an approach he said could also be used in political polling.

“You can show a commercial of one of the candidates, and through the camera, with permission, do facial coding and measure their emotions – joy, disgust, anger – and they don’t have to say a word,” he said. “Facial coding is one way to read emotions.”
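To make the facial-coding idea concrete, here is a minimal sketch of how frame-level emotion scores from a viewer watching a spot might be rolled up into a summary. The emotion labels and scores are hypothetical placeholders; producing them would require a facial-coding model, which is not shown here.

```python
# Illustrative aggregation of frame-level facial-coding output into a
# per-ad emotion summary. All scores below are hypothetical; in practice
# they would come from a facial-coding model applied to camera frames.
from collections import defaultdict

# Each entry: (timestamp in seconds, {emotion: intensity 0..1}) for one viewer.
frames = [
    (0.5, {"joy": 0.1, "disgust": 0.0, "anger": 0.0}),
    (1.0, {"joy": 0.4, "disgust": 0.0, "anger": 0.1}),
    (1.5, {"joy": 0.7, "disgust": 0.1, "anger": 0.0}),
    (2.0, {"joy": 0.6, "disgust": 0.0, "anger": 0.0}),
]

totals = defaultdict(float)
for _, scores in frames:
    for emotion, intensity in scores.items():
        totals[emotion] += intensity

# Average intensity per emotion across the viewing session.
summary = {emotion: total / len(frames) for emotion, total in totals.items()}
print(summary)  # e.g., {'joy': 0.45, 'disgust': 0.025, 'anger': 0.025}
```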

Eye Square’s “System 0” eye-tracking solution, for example, helps marketers better predict which ads will perform best on Amazon and in ecommerce, without the subject being aware of what’s being tested.

Watch out for biases

Bander said one of the difficulties of interpreting data from a modeling and statistical perspective is the bias of the data collectors themselves.

“A lot of times in polling, people want to tell a story more than they want to understand what’s actually going on,” he said. “We see that when people want certain results and ask questions designed to get those results. That’s not going to work if you want real results. It really comes down to who you ask and how you ask the question to get accurate polling.”

As the election results trickle in, Bander said that political pollsters can take a page from the marketing world’s playbook.

“Typically, in marketing, you don’t give conclusions before the testing is done,” he said. “It’s very easy to get any answer you want if you ask certain people and ask it a certain way. It takes skill and talent to ask unbiased questions.”

He added that pollsters using the same methodologies from 20 years ago are going to be wrong.

“There are so many different biases right now and because the world’s changed, you have to adapt to what’s going on,” he said. “And I think some of the older polling companies have been slower to do that.”

Sample size matters

According to the Pew Research Center, different measures of an election race, such as a single candidate’s level of support versus the gap between the candidates, carry different margins of error.

Central Control’s Bruner was surprised that, after the 2016 election, some of the surveys routinely cited in the media didn’t account for their sample sizes. Voters may also simply change their minds between the time they take a survey and Election Day. More accurate polling, he said, comes down to larger samples, recruiting respondents in ways that remove bias, and paying closer attention to what the polls represent, such as battleground states and an “appreciation of the electoral map.”

“You’re trying to make projections about what is true in a large body – in this case voters in America – from a small sample, people who respond to your survey invitation,” Bruner said. “And mathematically, if you are able to sample a perfect representation of the larger body, it should suffice. But for something like an election, how big the sample is matters a lot.”
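As a back-of-the-envelope illustration of how sample size drives precision, the sketch below applies the textbook margin-of-error formula for a simple random sample. The sample sizes and the 95% confidence level are assumptions made for the example, not figures from the article.

```python
# Rough sketch of the standard margin-of-error calculation for a simple
# random sample. The sample sizes and the 95% confidence level (z = 1.96)
# are illustrative assumptions.
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Margin of error for an estimated proportion from a simple random sample."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

for n in (500, 1000, 2500, 10000):
    moe = margin_of_error(n)
    print(f"n={n:>6}: +/- {moe * 100:.1f} points")
```

Real election polls layer weighting and turnout models on top of this, so their effective error can be wider than the textbook formula suggests, which is part of Bruner’s point about paying attention to what the sample actually represents.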

Still, Bruner said that even an election result that flips from what a poll predicted could fall within the range of what researchers would call an “accurate” prediction.
