Ad Data Normalization: Measure What Matters

Ditch proprietary metrics and normalize ad data into a common language.

I talk to a lot of digital marketers every single day. And while there are plenty of issues with digital media they all agree need to be fixed (another blog, another day), I’m always more interested in the issues they aren’t talking about.

And one of the biggest issues that isn’t getting enough attention is ad data normalization.

Data normalization can be a scary term for digital marketers. So much is new and changing in measurement technology that trying to keep up with it all seems like a fool’s errand. All of the platforms use different metrics to measure success, so why even bother? I’m not a data scientist, I’m a marketer for crying out loud!

Digital marketers shouldn’t need a Master’s Degree in Data Science to do their jobs, and the concept of data normalization isn’t as scary as people think. The benefits of analyzing normalized data allow you to reduce waste, increase performance and execute better campaigns.

With every new player in the digital ad ecosystem comes an additional set of proprietary performance metrics. It makes sense — who knew how to measure ads on mapping apps like Waze before they started getting data back?

But that second part is exactly the issue. As platforms began monetizing their digital ad inventory, they noticed particular trends in the way users viewed and/or interacted with their content, and those interactions didn’t always fit nicely into a set of standardized digital ad metrics… or (more importantly) agencies’ media plans.

All of these proprietary metrics are fine, as long as you can confidently say that the platforms are providing performance data that can be measured against your business objectives.

Why Proprietary Metrics Complicate Things.

If you can’t figure out how the performance of a specific ad unit or tech partner stacks up against your other platforms, it’s important to ask yourself the tough question — does that metric even matter? Does it help my business meet its marketing objectives?

Take video ads as an example. Facebook has turned itself into one of the major players in the video ad business, even though Facebook was never necessarily a video platform. It was mostly a place to communicate with friends and share updates and photos.

As digital video became more accessible and popular, Facebook began aggressively building out its video capabilities so it could compete with the other digital video platforms — namely YouTube.

As more Facebook video content became available, Facebook (predictably) started monetizing it, but there was a problem…

People go to Facebook to catch up with friends and scroll through their newsfeed. The constant scroll does not bode well for video engagement metrics.

So, as Facebook started building its video ad buying platform, it had to build proprietary video ad metrics to match the way people were consuming content on its platform. Until last fall’s release of Facebook’s ThruPlay metric, the default measurement of video ad performance on Facebook was a 3-second video view.

Yes, Facebook encouraged marketers to optimize their ad spend toward driving as many 3-second video views as possible.

The problem is that proprietary metrics are owned by individual platforms. A more honest name for them would usually be “metrics that make our platform’s performance look better.” But that would be bad marketing, right? So they’re “proprietary metrics.”

So, by using a platform’s proprietary metrics to measure said platform’s performance, you’re enabling them to manipulate your data to make their performance look better.

How One Metric Turns Into Three.

Here are the definitions of “video view” according to Google and Facebook:

+   Facebook’s ThruPlay: A video ad in which a viewer watches 15 seconds of the video ad (or the duration if it’s shorter than 15 seconds).

+   YouTube’s TrueView: A video ad in which a viewer watches 30 seconds of the video ad (or the duration if it is shorter than 30 seconds) or engages with your video, whichever comes first. Engagements include clicks to visit your website, call-to-action overlays (CTAs), cards and companion banners.

In other words, Facebook calls a “video view” a 15-second view and YouTube defines it as a 30-second view or a click.

For marketers distributing a 30-second video ad, that causes some issues when doing cross-platform campaign analysis. Add in any in-stream video display ads that are most likely using quartile reporting (my preference) and you’ve got three different definitions of a video view within a single video initiative.

So, if you pull “default” video reporting for these three platforms, you’ll end up comparing three totally different metrics.
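To make that concrete, here’s a minimal sketch with hypothetical watch-time data (the thresholds come from the definitions above; YouTube’s engagement clicks aren’t modeled) counting “views” of a 30-second ad under each platform’s default definition:

```python
# Hypothetical seconds-watched records for one 30-second video ad.
AD_LENGTH = 30
watch_times = [2, 5, 15, 16, 30, 30, 8, 22, 30, 1]

# Each platform's default "view" definition gives a different count:
thruplay_views = sum(t >= min(15, AD_LENGTH) for t in watch_times)   # Facebook ThruPlay: 15 seconds
trueview_views = sum(t >= min(30, AD_LENGTH) for t in watch_times)   # YouTube TrueView: 30 seconds
completed_views = sum(t >= AD_LENGTH for t in watch_times)           # quartile reporting: 100% complete

print(thruplay_views, trueview_views, completed_views)  # → 6 3 3
```

Same ad, same audience, three different answers to “how many views did we get?” — which is exactly why the default numbers can’t be compared side by side.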

Normalizing Data Shows the Truth.

As a savvy marketer, you undoubtedly set up a measurable KPI before the campaign went live. Let’s say you went with a vCPV (viewable cost per view) goal of <$0.08.

Whether you defined it or not, your team is most likely considering a “complete view” as a “view to 100% completion.” Assuming you’re running a 30-second video ad, we can define this KPI as “cost per completed viewable 30-second video view.”
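The math behind that KPI is simple: spend divided by the views that survive your normalized definition. A quick sketch (the function name and the spend/view figures here are hypothetical):

```python
def vcpv(spend, completed_viewable_views):
    """Cost per completed, viewable 30-second video view."""
    return spend / completed_viewable_views

# Hypothetical: $1,200 of spend driving 20,000 completed viewable views.
print(vcpv(1200.0, 20000))  # → 0.06, under the $0.08 goal
```

The hard part isn’t the division — it’s making sure the denominator means the same thing on every platform.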

Great, now on to the report. Here’s how that data would look if you pull default video metrics out of the individual platforms:

Not bad, right? We didn’t get viewability metrics in this data dump, but Facebook and YouTube are 100% viewable and we were running a 70% viewability filter on pre-roll.

Facebook is a bit more expensive than YouTube, but that makes sense given that YouTube is a video-only platform. Yes, pre-roll is driving completed views at a much cheaper rate, but we’re okay paying a premium for high-quality YouTube video views.

Based on these numbers, this campaign appears to be performing well. We’re at an overall CPV of $0.06, which is less than our outlined goal of $0.08.

However, when you pull the metrics based on your actual KPI, things look a bit different:

This changes the picture a bit. Counting only viewable video views removed about 30% of the display plays, and filtering out the YouTube “views” that were really engagement clicks rather than completed views dropped those numbers by about 13%.

The biggest change came on Facebook though, which is clearly an inefficient channel for driving 30-second video views — $0.85 is 10x the goal and should probably be removed from the plan. More importantly, across all platforms, our effective vCPV is $0.09 which doesn’t meet our goal of $0.08.
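The rollup behind that conclusion can be sketched like this (all spend and view counts below are hypothetical, chosen only to echo the rough shape of this example):

```python
# Hypothetical per-platform results after normalization: only viewable,
# 100%-completed 30-second views count toward the KPI.
platforms = {
    "Facebook": {"spend": 500.0, "completed_viewable_views": 588},
    "YouTube":  {"spend": 400.0, "completed_viewable_views": 6000},
    "Pre-roll": {"spend": 300.0, "completed_viewable_views": 6745},
}

# Per-platform vCPV on the normalized definition.
for name, p in platforms.items():
    print(name, round(p["spend"] / p["completed_viewable_views"], 2))

# Blended vCPV across the whole initiative.
total_spend = sum(p["spend"] for p in platforms.values())
total_views = sum(p["completed_viewable_views"] for p in platforms.values())
print("effective vCPV:", round(total_spend / total_views, 2))
```

On numbers like these, one expensive channel can drag the blended vCPV past the goal even when the other platforms are comfortably under it.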

This example was not meant to be an indictment of Facebook; it’s just the nature of that platform — people aren’t typically watching videos to completion while browsing their newsfeed or messaging friends.

I’m also not condemning YouTube for including “user engagements” (a.k.a. clicks) in their definition of a TrueView. There’s real value in getting someone to click from a video ad (assuming they didn’t just miss the “skip” button) and I’m happy to pay for that.

But what normalized data helped us uncover in the example above is that we’re probably better off shifting our Facebook budget into the better-performing platforms.

And that’s all it comes down to. I’m not saying one platform or another is better for video campaigns. All I’m saying is that normalized ad data uncovers the truth and helps marketers better meet and exceed their business objectives.

If you let Facebook or YouTube or the myriad other ad-buying platforms tell you how your ad dollars are performing on their platform, they’re going to do everything they can to make their data look better. Take control of your analysis by harnessing the power of data normalization.