Why OTS-Based Exposure is Problematic in Brand Effectiveness Measurement

By Jeremy O'Brien, January 05, 2021


Do you remember what days you wore green last week? What about the roads you drove yesterday? You might have a general idea (“I think I wore my new sweater Wednesday”) or perhaps the options blur together (“I pick the kids up on Academy St. but what errands did I run…”). You may be surprised to learn that high-stakes media buying decisions are made based on answers to questions just like these. Fortunately, there is a better way.
Many brands and media partners regularly include OTS, or “opportunity to see,” questions when measuring campaign effectiveness. The idea behind OTS questions is to ask respondents about their media consumption rather than passively observing it through deterministic signals like tags or return path data, which require more effort and technical expertise to acquire. It’s worth understanding exactly why using OTS questions as a stand-in for exposure is problematic.

How OTS questions work

In brand campaign measurement, respondents are assigned to either an exposed or a control group. Statistical tests are then performed to determine whether there is a causal relationship between exposure and lifts in brand metrics like awareness, consideration, or intent, as revealed by survey questions.
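For intuition, here is a minimal sketch of the kind of test involved: a two-proportion z-test comparing an awareness rate between exposed and control respondents. The sample sizes and response counts below are hypothetical, chosen purely for illustration.

```python
# Minimal sketch: two-proportion z-test for brand lift.
# All counts are hypothetical, for illustration only.
from math import sqrt, erf

def two_proportion_z_test(aware_exposed, n_exposed, aware_control, n_control):
    """Test whether awareness is higher in the exposed group than in control."""
    p1 = aware_exposed / n_exposed          # exposed awareness rate
    p2 = aware_control / n_control          # control awareness rate
    pooled = (aware_exposed + aware_control) / (n_exposed + n_control)
    se = sqrt(pooled * (1 - pooled) * (1 / n_exposed + 1 / n_control))
    z = (p1 - p2) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided: exposed > control
    return p1 - p2, z, p_value

lift, z, p = two_proportion_z_test(aware_exposed=420, n_exposed=1000,
                                   aware_control=360, n_control=1000)
print(f"lift = {lift:.1%}, z = {z:.2f}, p = {p:.4f}")
```

The test itself is routine; everything hinges on whether the respondents in each group were actually exposed or not.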
Within the time constraints of a single survey, it’s impractical to ask respondents detailed questions about exactly what media they consumed, where, and when. As a result, OTS questions meant to serve as proxies for exposure stay broad and instead assume regular, habitual patterns of media consumption. OTS questions aim to define that pattern, extrapolate it into a grid of potential exposures, and overlay it on the media plan.
Say you were a respondent for just such a survey. You might be asked when you watch TV in a typical week or be presented with a grid of dayparts to fill out. Then you might be asked to select from a (very long) list of channels all those you typically watch. If you responded “weeknights from 8-10PM” and “WB, CBS, and Fox News,” those responses would then be cross-referenced with an actual commercial schedule. Since you had a self-reported “opportunity to see” ads during that period, you would be marked as ‘exposed’ to any ads aired Monday through Friday from 8-10PM on WB, CBS, and Fox News.
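In code, that assignment logic might look something like the following sketch. The schedule entries, dayparts, and names are invented for illustration and are not any vendor’s actual implementation.

```python
# Illustrative sketch of OTS-based exposure assignment.
# Schedule entries and respondent answers are invented for illustration.
from dataclasses import dataclass

@dataclass
class AdAiring:
    channel: str
    day: str      # e.g. "Mon"
    hour: int     # 24-hour start time

# Self-reported "typical week" answers from the survey.
typical_days = {"Mon", "Tue", "Wed", "Thu", "Fri"}   # weeknights
typical_hours = range(20, 22)                        # airings starting 8-10 PM
typical_channels = {"WB", "CBS", "Fox News"}

schedule = [
    AdAiring("CBS", "Tue", 20),
    AdAiring("Fox News", "Thu", 21),
    AdAiring("CBS", "Sat", 14),   # outside the reported pattern
]

def opportunities_to_see(schedule):
    """Every airing that overlaps the self-reported pattern counts as an OTS."""
    return [a for a in schedule
            if a.channel in typical_channels
            and a.day in typical_days
            and a.hour in typical_hours]

ots = opportunities_to_see(schedule)
# The respondent is marked 'exposed' to ALL matching airings,
# even though they almost certainly did not see every one.
exposed = len(ots) > 0
print(f"{len(ots)} opportunities to see -> exposed = {exposed}")
```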

Why OTS-based exposure is problematic

Of course, there’s no way you would’ve seen every ad aired across three major networks every single weeknight, but that’s what OTS-based exposure would lead a brand to believe. This type of measurement is problematic for a few key reasons.
Delayed
One weakness of using OTS-based exposure is logistical: it relies on post-hoc media schedules of when and where ads actually aired. For channels like linear television, actualizing media schedules can take months, meaning that results may not arrive until later in a campaign — or, worse, after it’s finished. That doesn’t leave any room for real-time campaign optimization; if your ads didn’t work as expected, you can’t correct for that until the next campaign.
Imprecise
Another critique of OTS-based exposure is that it is imprecise: it rests on the idea that media consumption is routine and consistent. Until just a few years ago, Nielsen had panelists in local markets keep a paper diary to log their TV viewing from the preceding week. But media effectiveness surveys almost never have the luxury of weekly check-ins or diarized records. Because they need to capture media consumption information during the course of a single interview, they usually frame OTS questions in terms of a ‘typical week.’ Audiences today have more media choices than ever, making it harder to accept that one week of media consumption looks like the next.
Inaccurate
Perhaps the most common argument against OTS-based exposure is that it is inaccurate: it relies on respondents’ long-term memory of what media they consume and when. Psychological research has catalogued many ways in which bias affects memory. For example, we tend to give greater weight to recent events (availability bias) and project them backwards in time (consistency bias). There’s volition in memory — we remember things that confirm our hypothesis of how things are (confirmation bias) and conform to the view we want to have of ourselves (egocentric bias). Any one of these biases is troublesome, but combining them with the ambiguity of a ‘typical week’ makes it challenging to compute — much less systematically correct for — the error introduced by basing exposure on OTS.
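These recall errors don’t just add noise; they shuffle respondents between the exposed and control groups. A back-of-the-envelope simulation (all rates below are assumptions chosen purely for illustration) shows how that misclassification attenuates the lift a study can measure:

```python
# Hypothetical illustration: OTS misclassification attenuates measured lift.
# All rates are assumed for illustration, not real campaign data.
import random

random.seed(0)
N = 100_000
BASE_AWARENESS = 0.30   # awareness without exposure
TRUE_LIFT = 0.08        # genuine effect of seeing the ads
MISCLASSIFY = 0.30      # chance that recall puts someone in the wrong group

true_buckets = {"exposed": [], "control": []}
ots_buckets = {"exposed": [], "control": []}

for _ in range(N):
    truly_exposed = random.random() < 0.5
    aware = random.random() < BASE_AWARENESS + (TRUE_LIFT if truly_exposed else 0)
    # OTS flips group membership with probability MISCLASSIFY
    ots_exposed = truly_exposed != (random.random() < MISCLASSIFY)
    true_buckets["exposed" if truly_exposed else "control"].append(aware)
    ots_buckets["exposed" if ots_exposed else "control"].append(aware)

def lift(buckets):
    rate = lambda xs: sum(xs) / len(xs)
    return rate(buckets["exposed"]) - rate(buckets["control"])

print(f"true lift:     {lift(true_buckets):.1%}")   # ~8 points
print(f"measured lift: {lift(ots_buckets):.1%}")    # noticeably smaller
```

With symmetric 30% misclassification, the measured lift shrinks to roughly 40% of the true effect (a factor of 1 - 2 × 0.30). That attenuation is exactly the kind of distortion that can make an effective campaign look ineffective.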

An opportunity to improve

At Upwave, we believe brand effectiveness measurement should rely on more than memory and assumption. Particularly when conducting causal analysis, rampant contamination of exposed and control groups can easily make an effective campaign look ineffective, and vice versa. That’s why Upwave uses exclusively deterministic data to measure exposure across Linear TV, Addressable TV, Connected TV, OTT, display, and audio. The result is the industry’s only single, deterministic exposure dataset, providing a stronger signal for detecting brand lift and validating campaign targeting.