Last week, The New York Times published an article titled “Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement,” highlighting 12 people who helped bring about the modern AI movement. Women are notably absent from the list.
The list has been rightly criticized for omitting well-known women in AI, including Dr. Fei-Fei Li. It has also served as a springboard for discussing the field’s broader diversity problem. In 2021, only 21.3% of AI PhD graduates in North America were women. Fewer than one-third of AI-related positions at US tech companies are held by women, according to data Revelio Labs shared with Charter.
We recently spoke with Dr. Olga Russakovsky—an associate professor in the computer science department at Princeton researching computer vision and a co-founder of AI4ALL, a nonprofit dedicated to making AI more diverse and inclusive—about the lack of gender and racial diversity in AI. Here’s an excerpt from our conversation, edited for length and clarity:
What do you see as the root causes of the lack of diversity in computer science, and AI specifically?
Some of this is the correlation with socioeconomic groups and access, lack of role models, and lack of visibility. Some of this is around implicit bias in hiring and recruiting. And then there's the issue of fear of AI, which is different from the rest of computer science. There's the issue of job displacement, and the Hollywood killer robots, things like that. If that's the way that AI is being portrayed, then students who don't see themselves represented in the field and have to fight to get into it—why would they fight? Why would they go through the trouble if what they see is that this is a field that's going to take away jobs and kill us all?
One of the things we try to do at AI4ALL is talk about all of the ways in which AI can actually change the world for the better. Many of these students are passionate about climate, mental health, solving poverty, solving resource allocation and distribution, and solving disaster relief and response accessibility. We talk about ways in which AI is going to be transformative for all of these fields, and help solve some of the problems they see in their communities. That’s the connection between what's going to happen with the field and how you inspire a more diverse group of students to join.
If we think about what's going on in the field right now, I think we're definitely underutilizing the power of this technology to solve some of these problems. There aren't enough people who have different training, who come at it from different perspectives and different backgrounds, who would be excited about solving this wide array of problems, and who are really passionate about bringing their particular perspectives, backgrounds, training into the field.
Your faculty page mentions a 2015 study published in Science that found that women and Black people are underrepresented in academic fields whose practitioners believe that innate talent is the main requirement for success. How do you see that playing out in the field of AI?
There are many loud voices who are talking about ‘brilliant’ and ‘genius’ advancements. I don't know if anybody has done research on where AI is on that spectrum of perceived brilliance, but given some of the news coverage, my guess is it would be pretty high up there. [There’s this idea] that you have to have a brilliant mind to make progress in it and you have to have some of the training that starts in middle school these days for some students. You kind of feel like you're already behind in terms of entering the field or making any kind of contribution. And that certainly contributes to driving folks [out]. We're implicitly discouraging them from even trying.
Charter recently published the research playbook “Using AI in ways that enhance worker dignity and inclusion,” which provides frameworks for implementing AI in the workplace in ways that ensure more workers benefit from the technology. Here are two of those frameworks that are particularly relevant to diversifying AI efforts:
- Recognize that discourse around AI can be exclusionary, and set a more inclusive tone. Conversations about AI are often full of jargon, leaving people on the sidelines. Many of the concepts underpinning AI can be taught to anyone. “We explain the basics of machine learning to high school students and [it’s] totally fine,” says Russakovsky. Here’s a great podcast episode to share with your colleagues who want to learn more about AI: Explained: The conspiracy to make AI seem harder than it is!
- Prioritize inclusive AI engagement by involving people in groups that are least likely to be using AI today. Here’s a short checklist of what organizations can do:
- Audit hiring practices. Who is being screened out because of their existing skills, when your organization might instead bring them along and train them to use AI tools?
- Highlight AI benefits that are specific to a diverse set of users. Explain how, with human oversight, AI can make jobs easier, faster, and safer. Tailor messaging to resonate with relevant audiences based on their needs and concerns.
- Provide extensive AI training and support, including clearing time in staff schedules for participation. Training should take place at multiple knowledge levels, from AI basics to hands-on workshops, and be available for synchronous and asynchronous participation.