This article was originally published by The Drum.
Bias has become a villain in the AI story, but is its reputation deserved? Graham Wilkinson, chief innovation officer at Acxiom, explains why bias in data-driven marketing is, to some extent, both necessary and unavoidable. He discusses what brands can do to minimize the potential negative impact of AI bias and make AI-powered marketing fairer for more people.
Since artificial intelligence (AI) entered the marketing conversation, bias has been getting a lot of attention. It’s portrayed as something intrinsically negative, and it certainly can be when it starts to unfairly impact people’s daily lives – something we all should remain vigilant against. But from a data-focused perspective, there are two things to consider regarding bias. Firstly, it’s integral to pattern recognition. Secondly, it’s impossible to completely erase.
The need for bias in pattern recognition
All data sets contain some form of bias. And, from a purely scientific perspective, this bias is necessary for patterns to emerge, enabling us to make data-based predictions. As a simple example, the fact that the climate is warmer in summer than in winter is a type of bias that contributes to our ability to forecast the weather.
Bias becomes a problem when people start to be treated unfairly as a consequence of it. As AI amplifies the bias – and any resulting unfair treatment – it’s only natural that we want to eradicate it. But that’s often just not possible.
The impossibility of solving bias
Bias is a moving target: address it in one area and the chances are you inadvertently create it in another. Bias can’t be eliminated altogether, only shuffled around, so it’s a pipe dream to think the world can be bias-free. Unfortunately, there will always be someone, somewhere, who’s disadvantaged by it.
Rather than seeing bias as something to solve, we should see it as something to be conscious of, to monitor, and to factor into decision-making. We might never be able to make AI bias-free, but we can help it make better, fairer decisions for more people.
Audience building as a bias incubator
Audience building is a prime example of a marketing activity where bias can easily creep in unnoticed. This leads to brands unwittingly excluding certain groups of people from offers and opportunities.
A brand might start with a data set that includes tens of millions of lines of data and thousands of attributes. The brand might build an audience using 10 attributes relating to behavioral, demographic, or socioeconomic factors. But in selecting those attributes, the brand could unintentionally cause others to disappear entirely from the data set. For example, it might exclude entire zip codes. Without being aware of every attribute that exists in the data, and how those attributes interrelate, it’s virtually impossible not to introduce some element of bias when creating audience segments.
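The zip-code effect described above is easy to demonstrate. The sketch below uses a tiny, entirely hypothetical data set; the column names and thresholds are illustrative assumptions, not any brand’s actual criteria.

```python
# A minimal sketch (synthetic data) of how selecting a handful of audience
# attributes can silently drop entire zip codes from a data set.
import pandas as pd

# Hypothetical consumer records: zip code plus two targeting attributes.
records = pd.DataFrame({
    "zip":       ["10001", "10001", "60601", "60601", "73501", "73501"],
    "income":    [95000,   120000,  80000,   110000,  42000,   38000],
    "homeowner": [True,    True,    False,   True,    False,   False],
})

# Build an audience with two attribute filters, as a brand might.
audience = records[(records["income"] > 75000) & (records["homeowner"])]

# Which zip codes disappeared entirely as a side effect of those filters?
excluded_zips = set(records["zip"]) - set(audience["zip"])
print(sorted(excluded_zips))  # prints ['73501']
```

Neither filter mentions geography, yet one zip code vanishes from the audience entirely, which is exactly the kind of unintended exclusion that goes unnoticed without an explicit check.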
Initially, this bias may not be overtly negative from a societal perspective. But as more data is pumped through the advertising ecosystem and used by AI to power tactics such as lookalike targeting, it can start to have a detrimental impact.
Tackling bias in audience creation
There are many practical things brands can (and should) do to guard against bias in building AI models. They can implement rigorous bias testing and adopt explainable AI (XAI) — tools that explain their reasoning to humans. They can ensure ethical rigor in data sourcing and establish comprehensive AI governance frameworks with well-defined roles and responsibilities.
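At its simplest, bias testing means breaking a model’s headline accuracy down by subgroup instead of reporting a single number. The sketch below uses hypothetical predictions and group labels; in practice they would come from a brand’s own model and test data.

```python
# A minimal sketch of a bias test: compare a model's accuracy across
# demographic subgroups rather than reporting one overall figure.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return accuracy per subgroup, e.g. {'men': 1.0, 'women': 0.5}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical test set where the model is far better on one group.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
groups = ["men"] * 4 + ["women"] * 4

print(accuracy_by_group(y_true, y_pred, groups))
# prints {'men': 1.0, 'women': 0.5}
```

A model like this would look respectable on aggregate accuracy (75%) while being no better than a coin flip for half the audience, which is why the per-group breakdown matters.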
But tackling bias also requires a change in the way we think. Returning to the audience building example, brands can start by simply questioning why they’re selecting the attributes they are and considering what the potential impact of those selections might be. They can examine the data that’s been discarded from the model to find out who’s been left behind and learn from that insight. And they can pay more attention to the outcome of their marketing efforts – did they reach the people they expected to, or just a subset?
To take it a step further, they can adopt a strategy of constant experimentation and exploration at the edge of their customer universe. In addition to proven tactics like remarketing and lookalike audiences, they can try to reach out to a more diverse and perhaps fairer group of people, people they’ve never reached before but who are still likely to be interested in their products or services. The impact may only be small, but it’s these tiny incremental changes that will help minimize the impact of bias – perhaps even igniting a few sparks of creativity in the process.
For those who are fully committed to tackling bias, there’s also the option to force change in the system. By re-weighting the variables and pushing bias in a different direction, brands can influence what AI algorithms learn and produce different results.
As an example, we worked with a luxury automotive brand whose lookalike audience models were far more accurate at predicting behavior for men than for women. That’s not because women aren’t interested in driving and buying luxury cars, but simply because the model had more behavioral data about men doing so than women, a feedback loop that reinforced the bias in an ever-shrinking bubble. By re-weighting attributes like gender in the model, we were able to significantly increase the accuracy of predictions for women, while only slightly decreasing the accuracy for men. As discussed earlier, bias can’t be eliminated, only moved around, so there has to be some give and take.
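One common way to implement this kind of re-weighting (a sketch of the general idea, not Acxiom’s actual method) is inverse-frequency weighting: each record is weighted by how under- or over-represented its group is, so the smaller group contributes as much total signal to training as the larger one.

```python
# A minimal sketch of re-weighting: weight each record inversely to its
# group's share of the training data, so an under-represented group
# (e.g. women in the luxury-auto example) is not drowned out.
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each record by n_total / (n_groups * n_group)."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Hypothetical training data: six men, two women.
groups = ["M"] * 6 + ["F"] * 2
weights = inverse_frequency_weights(groups)
print(weights)  # men each get 8/12, women each get 2.0
```

The total weight per group comes out equal (4.0 each here), and the resulting list could be passed to a model’s per-sample weight parameter during training, such as the `sample_weight` argument many scikit-learn estimators accept.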
Wielding the bias stick increases the problem
So why aren’t brands taking these relatively simple steps to address bias and make the world a little fairer for everyone?
The answer – at least in part – is the way bias is portrayed. Rather than seeing the opportunity to minimize its impact and act as fairly as possible, we’re frightened of being chastised for its very existence. As soon as we put a little bit of thought into managing bias, we’re acknowledging its presence and potentially taking responsibility for it.
For marketers, it’s easier to stick with what’s worked in the past and select the 10 attributes without considering the consequences. If they don’t ask the question, how can they be accused of bias? For brands, it makes sense to use an out-of-the-box AI model from a third party, because then they’re not the ones responsible for any bias it perpetuates. Many would get better results from building their own AI models, trained on their first-party data, but then there would be nowhere to deflect the bias blame.
Of course, none of this helps consumers. They still feel the negative impact of bias, no matter who is deemed responsible.
Managing bias takes a mindset shift
Minimizing the impact of bias to help AI make better, fairer decisions requires testing, governance, and ethical rigor. But it also takes a shift in mindset.
We must appreciate that bias is necessary and unavoidable. We must see it as something brands should be rewarded for managing rather than punished for acknowledging. And we must adopt a culture of continuous learning and experimentation. Until we start to think about bias differently, it will continue to disadvantage people far more than it needs to.