Fighting Unconscious Bias In AI With Diversity - Insights From SXSW 2021 (Part 2)
Workplace | 09 Apr 2021 | By Guest Author

It was an honor to represent Indonesia at SXSW 2021! We made wonderful connections from around the world and visited innovative booths by startups from various industries.

The most exciting part, though, was getting valuable insight straight from subject matter experts on key issues in people and technology.

In this special series, we want to share with you the essence of some of the most brilliant conference sessions and keynote speeches from SXSW 2021.

Fighting Unconscious Bias In AI With Diversity

On the fourth day of SXSW 2021, four thought leaders delivered an insightful talk on bias, particularly in the development of AI. The session, Fighting Unconscious Bias In AI With Diversity, was especially interesting to us given the role of AI in Dreamtalent and how harmful bias can be in recruitment and assessment.

The panel consisted of Beena Ammanath, Executive Director of the Deloitte AI Institute; Alka Patel, Head of AI Ethics Policy in the US Department of Defense; Jana Eggers, CEO of Nara Logics; and Kellie McElhaney, Founding Director and Professor at the Center for Equity, Gender, and Leadership (EGAL) at Berkeley Haas.

AI is already a powerful tool, capable of anything from automating tasks to making decisions, and its future potential seems limitless. But for AI to reach its full potential, there can be no bias involved in its creation or operation.


For AI to reach its full potential, there can be no bias.


However, the talk opened with the stark realization that humans are biased by nature. This creates a real risk of unconscious bias even when we are aware of it and try to be as fair and impartial as possible. That unconscious bias is then carried over into the code we write, and our AI ends up unwittingly biased as well.

How can we take measures against unconscious bias in the development of AI, and how can diversity help us achieve this?

Mitigating Bias in the US Department of Defense

Alka shared the US Department of Defense's (US DoD) systems-based approach to mitigating bias in AI. A systems-based approach means not taking things in isolation: looking not only at the data side, but also at the human aspect of AI, the team behind it.

This means looking at the entire AI development life cycle, including but not limited to:

  • Design
  • Development
  • Deployment
  • Usage
  • Evaluation

With this approach, we realize that the responsibility to prevent unconscious bias cannot lie with technologists and developers alone. Policy teams as well as test and evaluation teams must share this responsibility.

She noted that the human brain takes shortcuts to make decisions easier (it's simpler to form opinions and make decisions based on personal experience than on elaborate logic), and this is how unconscious bias arises.

With that in mind, one way the US DoD mitigates unconscious bias in AI is to bring in a diversity of perspectives, backgrounds, and ways of thinking by involving the whole team throughout the entire AI project. It is by engaging this human aspect of AI development that we can fight bias in AI.

Training AI Leaders in Equity Fluency

Kellie built on the opening remarks about humans' biased nature, saying there are two types of people: those who are biased and realize it, and those who are biased but don't realize it. The latter group, the unconsciously biased, poses a danger to the health of an entire organization: to its products, to its culture, and ultimately to the AI it develops.


There are two types of people: those who are biased and know it, and those who are biased and don't know it.


How can we make the unconsciously biased conscious? Making people aware of their bias requires difficult conversations, which in turn require psychological safety: an environment where it's safe to be open and vulnerable.

Since culture comes from the top, it's important that leaders of AI projects be fluent in equity so they can understand and shape a culture of psychological safety. When it becomes okay to address unconscious bias, it becomes easier to eradicate it from your team and, therefore, from your AI.

Diversity in Hiring

The panel agreed that you need actual diversity to be truly bias-free, especially in AI development. While teaching equity and psychological safety is important, hiring with diversity in mind is far more effective in this respect.

Diversity here means more than race, sex, and gender; it also includes diversity of thought and background. There was an interesting suggestion to diversify the hiring pipeline: instead of limiting yourself to hiring from top universities, look toward other schools and universities, because talent is everywhere, and the different backgrounds and ways of thinking will further contribute to your AI team's diversity.

Yet the panel acknowledged that even hiring involves bias, and the biggest risk arises during small talk. No doubt this is where multi-measure assessments can help recruiters stay objective and bias-free.

Finally, the panel discussed the important role of incentives in driving anti-bias behavior among the team until it becomes muscle memory and an integral part of your company culture.

Insights From SXSW

Dreamtalent plans to cover one more incredible talk from our experience at SXSW 2021, so stay tuned for the final article in this series!

Read part 1 here: The Empathetic Workplace