Digital advertisers deliver campaigns to targeted audiences across a complex mix of marketing channels. Not surprisingly, they are challenged with answering basic questions: who saw their campaign, where they saw it, and how they responded compared to consumers who were targeted but did not see the campaign. These questions, while fundamental, are not simple to answer.
To answer them quantitatively, Acxiom begins by onboarding the initial distribution file of individuals or cookies that were targeted, the ad exposure file of cookies that viewed impressions, and the response file of consumers who converted or made purchases. Before consumers can be matched across files, every source file must first have its individual or cookie identity information mapped to a common individual identifier.
Ideally, the individual identifiers are persistent, unique and match across all files, whether they are sourced from offline consumer information or from online cookie information. Because the distribution, exposure and response files come from different entities and different domains (online or offline), they typically contain different identity information for the same consumers, e.g., name and address in the response file, name and email in the distribution file and email only in the exposure file. It is not sufficient to assign an identifier based on these disparate pieces of information; instead, one needs an identity graph that will recognize these identities as a single individual and assign a persistent, unique and anonymous common identifier to fuel effective measurement.
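As a sketch of what such a graph does, the following toy example (hypothetical, not Acxiom's actual implementation) uses a union-find structure to resolve records that share any identity key (name, address, email, cookie) to one anonymous common identifier. All names and values are illustrative.

```python
class IdentityGraph:
    """Toy identity graph: union-find over identity keys so that records
    sharing any key resolve to one anonymous person ID. A real identity
    graph also handles fuzzy matching, hygiene and scale."""

    def __init__(self):
        self.parent = {}  # key -> parent key (union-find forest)
        self.ids = {}     # root key -> assigned person ID

    def _find(self, key):
        """Return the root of a key's component, with path halving."""
        self.parent.setdefault(key, key)
        while self.parent[key] != key:
            self.parent[key] = self.parent[self.parent[key]]
            key = self.parent[key]
        return key

    def link(self, keys):
        """Union all identity keys observed together on one record."""
        roots = [self._find(k) for k in keys]
        for r in roots[1:]:
            self.parent[r] = roots[0]

    def person_id(self, key):
        """Common identifier for any known key. (Link all records before
        querying, so IDs stay stable within a run.)"""
        root = self._find(key)
        if root not in self.ids:
            self.ids[root] = "PID-%06d" % len(self.ids)
        return self.ids[root]


graph = IdentityGraph()
# One record per file, each carrying different identity touchpoints:
graph.link(["name:jane doe", "addr:1 main st"])    # response file
graph.link(["name:jane doe", "email:jane@x.com"])  # distribution file
graph.link(["email:jane@x.com", "cookie:abc123"])  # exposure file

# All three records resolve to the same anonymous common identifier.
pid = graph.person_id("cookie:abc123")
```

Because the distribution and exposure records share an email, and the distribution and response records share a name, the graph recognizes all three as one individual even though no single pair of keys appears in every file.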
The identity mapping process can be especially tricky for online exposure files, which require non-distribution partners to report impressions and map cookie IDs to individual information that can then map to a common individual identifier. When impressions are distributed and reported by separate entities, their matchable cookie pools may overlap by a large percentage, but never completely. Cookies outside that overlap are unreportable; i.e., we will never know whether they received an impression. The remaining cookies may receive an identifier, but it is unique to that file and unlikely to match identifiers in the distribution or response files. Cookies that do not receive persistent, unique and common identifiers are unmatchable: we may know they received an impression but will never know whether they were in the initial distribution or whether they responded to the campaign.
The impact of what can’t be measured
What is the significance of unreportable and unmatchable consumers, and how does this impact measurement? Typically, we count distribution individuals who are identified and reported as exposed, and we count distribution or reported exposed consumers who are identified and matched as responders. We then normalize these counts, dividing by the total distribution or the total reported exposed population to get exposure or response rates, which can then be compared and tested for significance. Problems arise when the rate denominators include large numbers of individuals who are not reportable or matchable. We know some unreportable and unmatchable individuals were exposed to the ad and were responders, but we can’t identify and count them, so they can’t contribute to the rate numerator. However, they are included in the rate denominator, resulting in observed rates that are lower than the actual rates we would see if these individuals were excluded.
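The attenuation works mechanically like this: responders among unmeasurable individuals cannot enter the numerator, yet those individuals remain in the denominator. A minimal sketch (the function name and figures are illustrative, not from any production system):

```python
def observed_rate(population, actual_rate, measurable_fraction):
    """Observed rate when only a fraction of the population is
    reportable/matchable but the full population stays in the
    denominator."""
    # Actual responders (or exposed) in the full population.
    responders = population * actual_rate
    # Only responders among measurable individuals can be counted...
    counted = responders * measurable_fraction
    # ...but the denominator still includes everyone, so the observed
    # rate is attenuated: observed = actual * measurable_fraction.
    return counted / population
```

For example, with an actual response rate of 0.02 and only 30% of the population matchable, the observed rate falls to 0.006, regardless of population size.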
As an example, assume the actual ad exposure rate is 0.35 and the actual response rate is 0.02 for exposed consumers and 0.01 for unexposed consumers. If the exposure reportable rate is 0.8 and the exposure matchable rate is 0.3, then the observed ad exposure rate will be 0.28 (0.35 x 0.8) and the observed exposed and unexposed response rates will be 0.006 (0.02 x 0.3) and 0.003 (0.01 x 0.3), respectively. If the distribution and response files also contain unmatchable individuals, as they commonly do, then the observed rates could be even lower. The end result is that the actual exposed and unexposed response rates would most likely test as significantly different, whereas the observed rates would not, leading us to conclude, wrongly, that the campaign had no effect.
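The loss of significance can be demonstrated with a quick two-proportion z-test using the rates above. The sample size of 2,000 per group is a hypothetical choice for illustration; with it, the actual lift clears the usual 1.96 threshold while the attenuated observed lift does not.

```python
from math import sqrt

def two_proportion_z(p1, p2, n1, n2):
    """Two-proportion z-statistic with a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

n = 2000          # hypothetical consumers per group
matchable = 0.3   # exposure matchable rate from the example

actual_exposed, actual_unexposed = 0.02, 0.01
observed_exposed = actual_exposed * matchable      # 0.006
observed_unexposed = actual_unexposed * matchable  # 0.003

z_actual = two_proportion_z(actual_exposed, actual_unexposed, n, n)
z_observed = two_proportion_z(observed_exposed, observed_unexposed, n, n)

print(round(z_actual, 2), round(z_observed, 2))  # prints 2.6 1.42
```

At these sample sizes the actual rates differ significantly (z ≈ 2.60 > 1.96) while the observed rates do not (z ≈ 1.42), exactly the false "no effect" conclusion described above.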
How do we improve measurement?
The first step is to work with an onboarding partner like LiveRamp that can maximize the reporting overlap and maximize ID persistence, uniqueness and matchability across all files. The second step is to identify which IDs are not matchable or reportable and exclude them from the populations used to calculate exposure and response rates. You may end up with a smaller population, but your results should be more accurate, more consistent and more testable.