What do you think about when you think of artificial intelligence? Perhaps the Terminator movies, where the machines rise up and destroy us. Or perhaps you don’t think AI is worth worrying about until Siri is capable of playing the song you actually asked for. Many people fall into one of two camps: Either they have a fear of some future robot apocalypse or they feel that AI is tech-bro baloney that’s irrelevant to normal people. The trouble is that neither camp is engaging with it, so neither sees all the ways AI is affecting our lives right now.

Tech experts agree that AI will be as transformative for the world as the internet has been. But the story of AI—from Alan Turing’s work in the past century to “Godfather of AI” Geoffrey Hinton’s multi-layered neural networks—has so far been written by men. “It’s true that the field is shaped disproportionately by white dudes, and this is the case in both academic and commercial spaces,” says Brittany Smith, associate fellow at Cambridge University’s Leverhulme Centre for the Future of Intelligence. “This means that biases can be encoded into AI systems in lots of ways.”

Since OpenAI launched the widely used ChatGPT at the end of last year, tech giants such as Microsoft and Google have been in a race to introduce similar products. The ensuing frenzy has caused panic that we’re moving too fast. A group of prominent computer scientists signed an open letter insisting that AI development be paused because “human-competitive intelligence can pose profound risks to society and humanity,” from spreading misinformation and automating jobs to more catastrophic future risks that we can’t yet imagine. Even Hinton has warned that in the wrong hands, this technology could be devastating. Meanwhile, Christopher Nolan says that his film Oppenheimer, about the development of the atomic bomb, contains “very strong parallels” with AI, and the whole of Hollywood was recently on strike for fear of what AI might mean for jobs across the industry. It’s scary to think that the genie might be out of the bottle, and it’s little wonder that so many of us take a head-in-the-sand approach. But there is a bright spot in this story: the women working to create positive change and make AI a force for good.

One of the areas where AI most affects our day-to-day is the workplace, particularly when it comes to recruitment. “It’s one of the civil-rights issues of our time,” says Hilke Schellmann, an Emmy Award-winning investigative reporter and author of the upcoming book The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted and Fired and Why We Need to Fight Back Now. “We are automating all of the inequities we already have, and it goes unnoticed.” If an AI system designed to scan CVs has been trained on those of previously successful candidates, it can absorb historical prejudices. For example, if the people who succeeded in a particular role tended to be middle-class white men, then the algorithm will favour certain words, schools or even particular hobbies. So a woman might be perfect for the job, but her CV would be rejected because it doesn’t meet the nebulous criteria based on who has held that role in the past.

Then there are issues with facial-recognition technology, which is being used for recruitment, with AI analysing a person’s words, intonation and facial expressions. But what if these are misread? What if you’re prone to “resting bitch face,” where your neutral expression is perceived as irritated? This afflicts many of us but is almost always attributed to women. “A computer might assume that because you’re smiling, you’re a happy person, but that’s not scientifically rigorous,” says Schellmann. “And what happens if we’re being compared with successful candidates who are white men? Do they have expressions that perhaps women, or women of colour, don’t?”

It’s no coincidence that the earliest critics of AI have been women of colour, who quickly realized that their experience of being marginalized in life was showing up in AI tools. In 2018, Joy Buolamwini published a study called Gender Shades, about how facial-recognition systems are less accurate on darker skin. “That form of inaccuracy can lead to people being falsely accused of crimes, which has a direct effect on their life,” says Smith. “If you’re in jail for the weekend while you wait to see a judge on Monday morning, you might lose your job or not see your kids.”

Worrying about a sci-fi-style apocalypse is something of a distraction from the very real issues that are affecting us right now. “So much of the public imagination around AI is focused on speculative existential harms,” says Smith. “Will AI systems become more intelligent than us and decide to harm us? Maybe. But your life is already being affected by AI systems in ways that we can solve. Our ability to intervene successfully today is directly connected to bigger, more complex interventions we might need in the future. Ignoring what’s right in front of us would be bizarre.” The answer, adds Smith, is in public awareness and participation. “We need more public engagement and deliberative democracy,” she insists. “Some AI companies have launched efforts to gauge what people think are appropriate uses of AI. This is great, but we shouldn’t be solely dependent on the goodwill of companies to do the right thing for our democracy. This is why it’s so promising that governments are paying much closer attention now and why it’s important that we all think about how to maximize benefits while minimizing risks.”

Within the industry itself, women are shaping the future of AI. Partnership on AI is a non-profit global alliance of more than 100 organizations, including Meta, Apple, Google and IBM. “Our goal is to advance the development of AI so that it benefits people and society,” says CEO Rebecca Finlay. “We were founded in 2016 when people were starting to see that although AI has the potential to be extraordinarily positive, it can also be biased and harmful.” And if you think that you can somehow go through life without engaging with it, Finlay has news for you. “If you’re on TikTok, you’re using AI. If you use Google Maps, you’re using AI. I’d encourage people to acknowledge that and take a look at those systems and think: How is this working to help me? And how, potentially, could it be hindering me? Because you can’t just disengage. The reality is that we’re all engaging with it every single day.”

And while Finlay acknowledges the issues, she is optimistic about the future. “It’s a field where there has been a lot of good work,” she says. Some of that work has been around finding ways to regulate these systems. Lila Ibrahim is COO of DeepMind, the AI-focused arm of Google. She started her career as an engineer on Intel’s Pentium processor in 1993, so she has seen extraordinary change over the past 30 years. “The responsibility to ensure that this technology is representative, inclusive and equitable falls on us all across tech, government and civil society,” she says. “Like any transformative technology, AI demands exceptional care. That’s why we prioritize diverse perspectives and bring outside voices in—to hold us accountable.”

She uses the example of DeepMind’s AlphaFold database, a system that predicts a protein’s 3-D structure based on its amino-acid sequence, accelerating biological research. “Before releasing it, we sought input from more than 30 experts across biology research, security and ethics to help us understand how to best share the technology with the world.” She says that AI has the potential to help solve some of humanity’s biggest challenges. “But it must be built responsibly, with diversity, equity and inclusion as a priority, not a bolt-on,” she insists. “There’s no silver bullet, so we must work to make sure these voices have a seat at the table.”

The absence of that silver bullet means we’re lucky that there are smart, focused and ethical individuals working in this field, one of whom is Verity Harding, a globally recognized expert in AI, technology and public policy. Formerly of DeepMind, she is the founder of tech-consultancy firm Formation Advisory and an affiliated researcher at Cambridge University’s Bennett Institute for Public Policy. She rejects the narrative (as referenced by Nolan) that likens AI’s advent to that of the atomic bomb. “In a democracy, we can all have a say,” she says. “I’d encourage anyone who’s concerned or excited about the potential uses of AI to get involved in trying to shape it, whether by writing to their MP with their views, demanding that schools and workplaces have clear AI policies or simply reading more about the subject to understand the clear links between politics and technology.”

Harding’s debut book, AI Needs You: How We Can Change AI’s Future and Save Our Own (out in March 2024), points to an achievable future in which AI serves purpose rather than profit and is firmly rooted in societal trust. “The good news is that what we call ‘AI’ is actually just people,” she says. “It’s people building new technology and making decisions about what it should look like and how it should be used.”

Finlay, Ibrahim, Harding and other women are helping to create a more diverse AI workforce, one that broadens the range of perspectives shaping these systems. Now is the time to start influencing the narrative around AI and how this technology will affect our lives. Humans are still the ones in control here, and we’re already seeing great advances in science and medicine. Forget the idea of a robot apocalypse (for now, anyway). Because if we were to overlook the benefits of AI out of fear or complacency, that would be the more imminent catastrophe.