My great-grandfather Lawrence “Chubby” Woodman invented the fried clam on July 3, 1916, in front of his restaurant, Woodman’s, on the North Shore of Massachusetts. When the Depression hit, the restaurant business was slow, and Chubby tried his hand at gambling. He won, but knew his wife wouldn’t approve, so he hid his winnings in the potato slicer. Unfortunately, he forgot about them, resulting in an incriminating pile of green potatoes the next morning. With his gambling efforts thwarted, Chubby refocused his attention on driving business to the restaurant and came up with a promotion: a free cup of clam chowder with any purchase of fried clams. He hired some local kids to distribute the coupons and paid them a nickel for each coupon redeemed.
At the end of the promotion, one of the kids had earned much more than the others. As Chubby paid that kid, he asked him the secret to his success.
“Easy,” the kid replied. “The other kids rode their bikes around town stuffing mailboxes. I stood outside the door of the restaurant and handed the coupons to people as they walked in.”
To be effective, advertising data measurement should be held to the same standard as any other scientific trial.
The stories of Chubby, the fried clams, and even the money in the potato slicer are true. You might recognize the coupon scenario as an adaptation of a popular marketing anecdote. It does a good job of illustrating that attribution measurement was created to assign credit to ad touchpoints, not to scientifically measure ad effectiveness.
The current state of digital marketing measurement is similarly problematic, and it too started long ago. In the early days of digital marketing, ad servers like DoubleClick used a third-party cookie ID that identified a browser across advertisers and publishers. DoubleClick also ran a performance ad network and needed a way to pay publishers based on their contribution to “measured performance.” This was the beginning of attribution measurement such as last-click/last-touch attribution (LTA), which would later lead to multi-touch attribution (MTA).
The attribution method was developed to facilitate publisher payment and, without much critical analysis, was adopted by marketers as an acceptable form of measurement. Marketers were seduced by the personalization that came with digital marketing, which had been so elusive with traditional forms of advertising. This led digital marketers to overlook a critical flaw in attribution measurement that still drives the lion’s share of digital marketing decisions today.
The attribution methods observe the ad exposures prior to the consumer conversion and give credit to either some, all, or only the last observed ad touchpoint (depending on the method). These methods assume a causal relationship between ad exposure and conversion and do not take into account that the conversion might have occurred regardless of ad exposure. In other words, these consumers could have been handed coupons on their way into the restaurant.
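The mechanics of this credit assignment can be made concrete with a small sketch. The snippet below is illustrative only (the function names and journey data are hypothetical, not from any real attribution product): a last-touch model hands all credit to the final observed ad, while a linear multi-touch model splits it evenly. Neither model ever asks whether the conversion would have happened without any ad at all.

```python
# Hypothetical sketch of two common attribution rules.
# A "journey" is the ordered list of ad touchpoints observed
# before a conversion; names and data here are illustrative.

def last_touch(journey):
    """Give 100% of the conversion credit to the final touchpoint."""
    return {journey[-1]: 1.0}

def linear_multi_touch(journey):
    """Split the conversion credit evenly across all touchpoints."""
    share = 1.0 / len(journey)
    credit = {}
    for touch in journey:
        credit[touch] = credit.get(touch, 0.0) + share
    return credit

# A converting customer saw three ads before purchasing.
journey = ["display", "social", "search"]
print(last_touch(journey))          # all credit goes to "search"
print(linear_multi_touch(journey))  # each channel gets one third
```

Note what is missing from both functions: any counterfactual. Credit is distributed purely over ads that happened to be observed, which is exactly the coupon-at-the-door problem.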
This type of measurement will give credit to any ad that occurs along the customer journey, regardless of the ads’ influence on the consumer. It creates a perverse incentive for the media platform to place the smallest, least viewable, least expensive ads in front of users (human and nonhuman) that have the highest propensity to convert, even though the conversion probably would have occurred in the absence of the ad.
The problem is compounded when a marketer allows walled gardens and other media platforms to use machine learning to optimize to the metrics produced by that platform’s own self-serving measurement. The platform’s algorithms are given both the opportunity and the incentive to maximize media spend on whatever shows the highest ROI under that measurement, which then justifies an increase in the marketing budget.