Identity graph generation, it seems, is being pushed down the road to commoditization. I get it: the nature of identity brings a scientific, ones-and-zeros approach to the surface, and the availability of broad, high-coverage third-party data creates an easy path toward a one-size-fits-all answer.
The baseline approach historically has been standardizing first-party data and adding third-party data to fill the gaps. The third-party data takes one of two forms: linkages or feeds. Given my background at Acxiom, linking first-party data to LiveRamp’s AbiliTec® is my comfort zone, but many other referential graph providers can be used in a linkage scenario.
The other common approach is to add third-party data files (credit bureau files, suppression feeds, prospect lists, etc.) within the ID graph itself. Name your focus, and I suspect there will be a provider that claims to cover the need. If you find a good provider, these assets do a great job establishing connections and filling coverage gaps, but by nature third-party data does not offer the ability to tailor the system to the brand.
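To make the two approaches concrete, here is a minimal sketch, not any provider's actual implementation: it standardizes first-party records, prefers a first-party match, and falls back to a hypothetical third-party referential lookup to fill coverage gaps. All names (`REFERENTIAL_GRAPH`, `link_record`, the IDs) are invented for illustration.

```python
# Illustrative sketch only: standardize first-party records, then fill
# identity gaps with a hypothetical third-party referential lookup.

def standardize(record):
    """Normalize fields so matching is consistent across sources."""
    return {
        "name": record.get("name", "").strip().lower(),
        "email": record.get("email", "").strip().lower(),
    }

# Hypothetical referential graph: email -> persistent third-party person ID.
REFERENTIAL_GRAPH = {
    "jane.doe@example.com": "PID-001",
}

def link_record(record, first_party_index):
    """Prefer a first-party linkage; fall back to third-party data."""
    std = standardize(record)
    if std["email"] in first_party_index:       # first-party match wins
        return first_party_index[std["email"]]
    return REFERENTIAL_GRAPH.get(std["email"])  # third-party gap fill

first_party = {"jane.doe@example.com": "CUST-42"}
print(link_record({"name": "Jane Doe", "email": "Jane.Doe@example.com "}, first_party))
```

The key design point this toy version shares with the real pattern: third-party data only fills gaps the brand's own data cannot, which is exactly why it cannot, on its own, tailor the system to the brand.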
The new science being applied is machine learning (ML). Depending on whom you ask, ML is either a broad set of principles Acxiom has been applying to identity capabilities for more than 20 years or, as I believe, a more tightly defined set of activity areas Acxiom is only beginning to touch. What is widely agreed upon is that these ML principles have endless applications, and those not already moving down that path are falling behind. However, ML itself is not the total answer either. While ML is by no means a single cookie-cutter, it is still a one-size-fits-all approach, limited by the assets used to establish “the truth” through some form of training.
My questions include: Do these approaches go far enough to serve the brand’s needs? How can they without considering the brand in the first place? How can the brand take the next step toward creating truly personalized experiences with a commoditized identity as a starting point?
I gravitate toward a blog from a fellow Acxiomite published this time last year, “It’s All About the Who – Linda Harrison, April 30, 2019.” Linda outlines the marketing need as all about the five Ws of journalism, particularly focused on the “who.” The “who” is identity. As Linda notes, the brand’s overall goal is to:
“…blend art and science to create engaging, meaningful messages for a unique audience … based on real people – the right real people.”
Having focused on identity for most of my career, I take Linda’s statement a bit further: “the right people” is a characteristic specific to the brand, not something that can be answered in a one-size-fits-all manner.
This can only truly be solved by layering an artistic perspective, one centered on the brand’s unique point of view, on top of the scientific models.
You do not have to look hard to see that consumers are signaling their desire to be treated personally, transparently and fairly. The most obvious signal is the drive for transparency through privacy regulation, which has the industry’s attention and is generating major shifts in activity, including limiting (or eliminating) many third-party data usage models (see Acxiom’s POV response to Google’s decision to retire third-party cookies for an example). At the same time, consumers are still clearly expressing a desire to be “wooed” by a personal experience, at the right time, on the right device, without being overloaded with content.
The artistic aspects of brand-centric identity enable the push toward that final goal. The first part of this blog series, “Building an Identity Graph – Pain Tolerance,” covered how no two clients are looking for the same answer, nor do any two clients bring the same data assets to the mix. The real need is to talk to clients, identify their use cases and make specific decisions to meet those specific needs.
Another part of the art is knowing when, where and how to use every component in the mix. The scientific tools are there to leverage third-party data, ML, matching algorithms, brand data, client use cases and the brand’s specific characteristics.
Crafting a system that lands at the right point on the reach-versus-precision spectrum does not mean treating every input the same. Some data assets are simply better than others; those should be used to build the spine of the graph. Other assets should only learn from the graph, without influencing it over time. The decisions of where and when to use each component depend on many factors, such as the level of trust you have in the data asset itself, or the usage restrictions attached to data assets, whether from the brand’s perspective or from third-party assets brought into the mix. Building a system that accounts for all of these factors is not something a commoditized identity model can achieve.
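The spine-versus-learn-only decision can be sketched as a simple policy. This is a hypothetical illustration, not a real product's logic; the trust scores, the `spine_threshold` value, and the asset names are all invented, and a real system would weigh far more factors.

```python
# Hypothetical sketch: tier data assets by brand-assigned trust and usage
# restrictions. High-trust, unrestricted assets build the graph "spine";
# everything else only learns from the graph, never reshapes it.

from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    trust: float            # 0.0-1.0, the brand's confidence in the asset
    usage_restricted: bool  # e.g., contractual limits on a third-party feed

def classify(asset, spine_threshold=0.8):
    """Decide whether an asset builds the spine or only learns from the graph."""
    if asset.usage_restricted or asset.trust < spine_threshold:
        return "learn-only"  # consumes graph IDs, never influences linkages
    return "spine"           # trusted enough to establish core linkages

assets = [
    DataAsset("crm_customers", trust=0.95, usage_restricted=False),
    DataAsset("prospect_list", trust=0.60, usage_restricted=False),
    DataAsset("bureau_feed", trust=0.90, usage_restricted=True),
]
for a in assets:
    print(a.name, classify(a))
```

Note that in this sketch the restricted bureau feed stays learn-only even though its trust score is high: usage restrictions override trust, which mirrors the point that these decisions are brand-specific judgments, not something a commodity model can make for you.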
In the next post, I’ll dive deeper into crafting the full system, with a focus on the pitfalls I’ve encountered and how to avoid them through the build-out process.