

Why Chief Diversity Officers Must Lean In More On AI Data Bias

What is the role of a Chief Diversity Officer (CDO)?

The primary role of a CDO is to promote and advocate for diversity, equity, and inclusion within an organization. They work to create a more inclusive environment by implementing strategies, policies, and programs that foster diversity and equality. They collaborate with different stakeholders to develop programs that address bias, discrimination, and underrepresentation. A diversity leader also facilitates training and education on cultural competency and unconscious bias, and they often support recruitment strategies to ensure diverse talent pools are targeted and sustained. Ultimately, their goal is to cultivate an inclusive workplace where diverse individuals can thrive and contribute their unique perspectives and talents.

Diversity officers can play a crucial role in addressing AI data bias.

“Data bias” is a term that refers to the effects of AI algorithms run on biased data. While machine learning programs do not make human errors on their own, they are “trained” on data entered by ordinary human beings. If the training data set is biased, the software simply reinforces those human errors and biases.

Bias in data is an error that occurs when certain elements of a dataset are overweighted or overrepresented. Biased datasets do not accurately represent an ML model's use case, which leads to skewed outcomes, systematic racial prejudice, and low accuracy. Algorithms simply encode and amplify human biases, fueling many issues: skewed legal sentencing, the overly rapid automation of jobs such as journalism (which is significantly affected by data bias), and an inability to sort out cultural and contextual relevance because of skewed data sets.
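To make the overrepresentation point concrete, here is a minimal sketch in Python. The group names, counts, and population shares are all hypothetical, invented for illustration; the idea is simply to compare each group's share of the training data against its share of the real-world population the model is meant to serve.

```python
from collections import Counter

# Hypothetical training set: each record is the demographic group of one example.
# Group "A" is heavily overrepresented relative to the population the model serves.
training_data = ["A"] * 900 + ["B"] * 80 + ["C"] * 20
population_share = {"A": 0.60, "B": 0.25, "C": 0.15}  # assumed real-world mix

counts = Counter(training_data)
total = sum(counts.values())

for group, pop_share in population_share.items():
    train_share = counts[group] / total
    # A ratio below 1 means the group is underrepresented in the training data.
    ratio = train_share / pop_share
    print(f"group {group}: train share {train_share:.2f}, "
          f"population share {pop_share:.2f}, ratio {ratio:.2f}")
```

In this toy example, group C makes up 15% of the population but only 2% of the training data, so a model fitted to it will see very few examples of that group and tend to perform worst exactly where the data is thinnest.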

“While appreciation that algorithms and ML programs are not immune to bias is increasingly mainstream, ongoing plans to correct for bias in said programs among businesses that use them are not,” concluded a study by Progress Software Corp. based on 640 interviews with IT professionals in 12 countries. The lack of focus on data-bias controls when sourcing AI vendors, or in closely monitoring AI software development programs, is an area for improvement, and many board directors and C-suite leaders are not skilled in this area, exacerbating the problem.

Many leading researchers have been advocating for improving AI data sets. MIT researcher Dr. Joy Buolamwini, for example, found while working with leading facial analysis software that it could not detect her face, because the algorithm had not been trained on a broad range of skin tones and facial structures. You can watch her research video here.

With the explosion of AI, most recently ChatGPT, models trained on historical data sets saturated with social and cultural biases only amplify what we do not want to bring forward into the new, AI-enhanced world.

So I tried an experiment with ChatGPT yesterday. As I am heading down to Seattle to speak on generative AI at the Wellness Redefined Conference, sponsored by Amazon, I asked ChatGPT this question:

What is the role of Chief Diversity and Inclusiveness Officers in AI data bias?

I was very pleased to see how logical and thorough the ChatGPT response was.

As AI systems are trained on large datasets, they can inherit and perpetuate biases present in the data, which can lead to unfair or discriminatory outcomes. Diversity officers, with their expertise in promoting diversity and inclusion, can help identify and mitigate biases in AI systems.

Here are some ways diversity officers can contribute:

  1. Bias identification: Diversity officers can work closely with data scientists and AI developers to identify potential biases in the training data. They can bring awareness to different dimensions of diversity and ensure that biases related to race, gender, ethnicity, age, or other protected characteristics are considered and addressed.
  2. Ethical guidelines and policies: Diversity officers can collaborate with AI development teams to establish guidelines and policies that promote ethical AI practices. These guidelines can address issues such as bias detection and mitigation, data collection and labeling, algorithm transparency, and fairness in AI decision-making.
  3. Diverse representation in AI development: Diversity officers can advocate for diverse representation within AI development teams. By ensuring diverse perspectives and experiences are included in the development process, the likelihood of biased outcomes can be reduced. They can also encourage the inclusion of underrepresented groups in data collection and annotation processes to minimize biases in the training data.
  4. Continuous monitoring and evaluation: Diversity officers can establish mechanisms for ongoing monitoring and evaluation of AI systems to identify and address any emerging biases. Regular audits and assessments can help detect bias and provide insights for improvement.
  5. Stakeholder engagement and transparency: Diversity officers can engage with internal and external stakeholders to raise awareness about AI bias and its potential impact. They can facilitate dialogue and transparency, ensuring that concerns and feedback from diverse communities are considered in AI system development and deployment.

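Point 4 above, continuous monitoring, can start as simply as a recurring fairness audit of a model's decisions. Below is a minimal sketch of such an audit; the groups, decision data, and threshold are illustrative assumptions (the 80% cutoff echoes the widely cited "four-fifths rule" of thumb for disparate impact), not a production tool.

```python
# Hypothetical audit data: (group, selected) pairs from a model's decisions,
# e.g. outcomes of an automated hiring screen. All names and numbers are invented.
decisions = (
    [("men", True)] * 60 + [("men", False)] * 40 +    # 60% selection rate
    [("women", True)] * 40 + [("women", False)] * 60  # 40% selection rate
)

def selection_rates(decisions):
    """Return each group's share of positive decisions."""
    totals, positives = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def four_fifths_flags(rates):
    """Flag groups whose selection rate is below 80% of the best-treated group's."""
    best = max(rates.values())
    return {g: rate / best < 0.8 for g, rate in rates.items()}

rates = selection_rates(decisions)
flags = four_fifths_flags(rates)
print(rates)  # {'men': 0.6, 'women': 0.4}
print(flags)  # women flagged: 0.4 / 0.6 ≈ 0.67, below the 0.8 threshold
```

Run on a schedule against fresh decision logs, a check like this gives a diversity officer a concrete, repeatable signal to bring to the data science team rather than a one-off impression.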
Conclusion:

Chief Diversity Officers have a very important role to play in mitigating the risks of AI data bias, and they need to work collaboratively with diverse stakeholders, data scientists, ethicists, cybersecurity experts, and legal and policy experts. The EU has written a good policy research paper on AI data bias.

As research from the EU Agency for Fundamental Rights (FRA) highlights, the use of AI can affect many fundamental rights. While algorithms can be a force for good, they can also violate the right to privacy or lead to discriminatory decision-making, which has a very real impact on people’s lives. A cautionary tale about the possible negative effects of biased algorithms in practice comes from the Netherlands. In 2020, it came to light that the Dutch tax authorities had used algorithms that mistakenly labelled around 26,000 parents as having committed fraud in their childcare benefit applications. Many of these parents had an immigration background. They were required to pay back large sums, which led to great financial and psychological difficulties for the families concerned. The data protection authority concluded that the processing of data by the AI system in use was discriminatory.

Chief Diversity and Inclusiveness Officers must increase their digital literacy and understanding of AI, and engage more actively with their CIOs, CTOs, and Chief Data Science Officers. They, after all, have a great deal to lose if data-bias risks in large data sets are not curtailed more rapidly. Leaning in more on AI practices will also raise their governance leadership profile in an area critical to ensuring that diversity and inclusiveness flourishes rather than being marginalized in old paradigms.

Research Sources:

OpenAI: ChatGPT (chatbot)

EU Report on AI Data Bias in Algorithms
