
Six steps to guard against bias in AI results

Jordan Abbott

Chief Privacy Officer

January 22, 2024


January 28 is Data Privacy Day, when businesses and individuals alike reflect on, and take action to protect, privacy and data. With artificial intelligence (AI) dominating the marketing and technology stage, I think it’s critical to address AI as it relates to data privacy.

The recent CES event in Las Vegas might just as well have been called CES AI. To say AI was the focus is an understatement, and understandably so: AI has opened up exciting possibilities for brands, namely improved efficiency and enhanced customer experiences. AI-driven marketing, of course, relies on data and is only as good as the data available to it. This has rightfully raised concerns about the potential introduction of bias into the results. Let’s examine the causes of bias in AI results, outline the negative outcomes it can have, and discuss steps brands should consider to avoid these problems while safeguarding data privacy.

What is bias in AI?

First, what do we even mean by bias in AI results? AI systems learn from the data they are trained on. If the input data lacks diversity, reflects societal biases, or is otherwise skewed, the AI technology may perpetuate or amplify those biases in its outputs. Decisions made during data selection and preprocessing can also introduce bias, often unintentionally.

An incomplete or unrepresentative dataset can lead to biased models, which might produce unfair outcomes or discriminatory practices that adversely impact marginalized communities. For example, if geographic location is an input to an AI-generated marketing offer, and the location of an individual is highly correlated with ethnicity, then an AI-powered algorithm could unfairly perpetuate bad decisioning, such as offer suppression. Business location decisions, healthcare, financial services, hiring, and even the justice system come to mind as areas where AI bias should be a concern.
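To make the location example concrete, here is a minimal, hypothetical Python sketch (the records, regions, and group labels are invented for illustration). It shows how an offer policy keyed only on region can still produce unequal outcomes by group when region and group membership are correlated:

```python
from collections import defaultdict

# Hypothetical records: (region, demographic_group, received_offer).
# Group "A" lives mostly in the north, group "B" mostly in the south.
records = [
    ("north", "A", True), ("north", "A", True),
    ("north", "A", True), ("north", "A", False),
    ("south", "B", False), ("south", "B", False),
    ("south", "B", False), ("south", "B", True),
]

def offer_rate_by(records, key_index):
    """Fraction of records that received the offer, grouped by one field."""
    totals, hits = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec[key_index]] += 1
        hits[rec[key_index]] += rec[2]  # True counts as 1
    return {k: hits[k] / totals[k] for k in totals}

by_region = offer_rate_by(records, 0)  # the targeting input
by_group = offer_rate_by(records, 1)   # protected attribute, audit only

print(by_region)  # {'north': 0.75, 'south': 0.25}
print(by_group)   # {'A': 0.75, 'B': 0.25}
```

Even though the group attribute is never an input to the offer decision, the group-level offer rates diverge because region acts as a proxy for it; auditing outcomes against protected attributes, not just inputs, is what surfaces this.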

Also, the implicit biases of the developers who create and train AI models can influence a system’s behavior. Privacy architect Carl Mathis said, “We, as humans, are creating our AI algorithms, so therefore our AI algorithms inherit any biases that we might have.” If these implicit biases are not deliberately considered and addressed, they can unintentionally be reflected in the results.

It strikes me that bias, which is really just the perpetuation of exaggerated or missing information, has been a problem for humankind throughout history. What is new with AI is the speed at which biased information can be replicated. If a biased AI system is used to automate decisions with legal effects, such as whether someone is hired, approved for credit, or able to rent an apartment, it can rapidly reinforce existing inequalities, perpetuating social inequities and denying opportunities.

In a very crowded market, brand perception is job number one. Failing to address biased AI systems can lead not only to bad decision-making but also to negative publicity, harming brand reputation and eroding public trust. People are increasingly aware of data privacy issues and will, no doubt, tend to favor companies that demonstrate ethical AI practices.

How can brands address bias in AI? 

I think brands should respond essentially the same way successful societies have always addressed misinformation or exaggerated information: with robust education, thoughtful reflection, careful expert collaboration, and yes, legislation. Here are six things brands can do right now to avoid bias in systems that harness AI technology.

  1. Diverse and Representative Data: Strive for diverse and representative datasets when training AI models. Aim to include different demographics and perspectives to reduce the risk of perpetuating existing biases.
  2. Robust Testing and Evaluation: Conduct thorough testing of AI models to identify any biases in their outputs. Evaluation should focus on fairness across various demographic groups to catch hidden biases and discrepancies.
  3. Regular Audits and Monitoring: Implement regular audits and ongoing monitoring of AI systems to detect and correct any emerging biases. This ensures continuous improvement and alignment with ethical and privacy guidelines.
  4. Collaborate with Domain Experts: Involve domain experts, ethicists, and diverse voices during the development and training processes to provide valuable insights and challenge potential biases.
  5. Transparency and Explainability: Strive for transparency by ensuring AI systems are explainable to both users and developers. Understanding the factors that contribute to a system’s decisions can help identify and mitigate biases effectively.
  6. Support a National Data Privacy Law: We’ve seen multiple states take steps to address the overarching issue of data privacy. It’s time to settle on a uniform national law that addresses many of these issues. Doing so is a win-win for people and for business.

Data privacy concerns in AI systems go beyond safeguarding personal information. Bias in AI results poses serious threats to individuals and society as a whole. By addressing the causes of bias and implementing measures to prevent its introduction, brands can avoid potential negative outcomes. In doing so, they not only protect individual privacy but also cultivate a trustworthy and responsible AI ecosystem. Remember, safeguarding data privacy and mitigating biases are ongoing commitments that require collaboration, scrutiny, and continuous improvement to promote fairness, equality, and ethical AI practices.

Jordan Abbott

Chief Privacy Officer

Jordan Abbott is Chief Privacy Officer of Acxiom. He advises key stakeholders on legal, data governance and compliance policy as well as handling government relations, where he provides strategic insight on proposed legislation at the state and federal levels.
