
Women and AI: Overcoming Potential Bias and Optimizing the Best of Both Worlds

Artificial intelligence (AI) technologies are prevalent in everyday life, whether users are aware of them or not.1 Facial recognition on mobile phones, e-mail spell checks, voice assistants, and GPS navigation systems are a small snapshot of AI at work in daily life.2, 3

At the Web Summit, one of the leading global technology conferences, held in Qatar from February 26 to 29, 2024, discussions of AI dominated the agenda across the four days.4

The significance of AI can be seen in the myriad ways AI software has revolutionized and increased operational efficiency in healthcare, banking, communication, entertainment, and modern life in general. Legal, moral, and ethical concerns accompany much of the buzz about AI’s capabilities.5 One such concern is potential AI bias against women.6

The 2023 AI Risk Management Framework from the National Institute of Standards and Technology (NIST) groups the sources of potential AI bias into three categories: systemic, computational and statistical, and human-cognitive.7 Systemic bias is embedded in the datasets used to train AI and in the norms and practices of the institutions that build and deploy it. Computational and statistical bias arises when non-representative samples are used to construct AI datasets. Human-cognitive bias occurs when a person or group interprets an AI system’s output in order to reach a decision. While bias is not intrinsic to AI, training it on data that is not representative leads to potential bias.8

When AI and machine-learning tools are trained on non-representative or discriminatory data, the automated recommendations they generate for real-life policies, practices, and processes may negatively affect women and reinforce gender bias.
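One practical safeguard this implies is auditing a training set’s demographic makeup before any model is built. The sketch below is a minimal, illustrative check in pure Python; the field name `sex`, the toy records, and the 50/50 reference population are all assumptions for the example, not values from any cited study:

```python
from collections import Counter

def group_shares(records, group_key):
    """Proportion of each demographic group in a dataset."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(shares, reference, tolerance=0.05):
    """Flag groups whose dataset share falls below their
    reference-population share by more than `tolerance`."""
    return [g for g, ref in reference.items()
            if shares.get(g, 0.0) < ref - tolerance]

# Hypothetical training set: 80% male, 20% female records.
train = [{"sex": "male"}] * 80 + [{"sex": "female"}] * 20
shares = group_shares(train, "sex")
print(flag_underrepresented(shares, {"male": 0.5, "female": 0.5}))
# ['female'] — the skewed sample under-represents women
```

A check like this will not catch every source of bias (labels and features can carry discrimination even when group counts are balanced), but it makes the most basic representativeness gap visible before training begins.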

Distorted output and bias in AI systems hinder both men and women from realizing the full potential of AI’s capabilities in economic activity such as business, healthcare, communication, retail, and employment.

AI bias against women has been identified in various instances. In healthcare, for example, datasets that under-represent women have the potential to “skew predictive AI algorithms.”9 In a 2022 research study, AI predictive models that used blood tests to identify liver disease were found to be “twice as likely to miss disease in women as in men.”10
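Findings like the liver-disease result come from evaluating a model’s error rates separately for each sex rather than in aggregate. A minimal sketch of such a sex-stratified check follows; the labels and predictions are invented toy data, not the study’s:

```python
def false_negative_rate(y_true, y_pred):
    """Share of actual positives (label 1) the model misses."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    misses = sum(1 for t, p in positives if p == 0)
    return misses / len(positives) if positives else 0.0

def stratified_fnr(y_true, y_pred, groups):
    """False-negative rate computed separately per demographic group."""
    out = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        out[g] = false_negative_rate([y_true[i] for i in idx],
                                     [y_pred[i] for i in idx])
    return out

# Illustrative labels and predictions: all eight patients have the disease.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]
rates = stratified_fnr(y_true, y_pred, groups)
# rates["M"] is 0.25 and rates["F"] is 0.75 on this toy data:
# an aggregate miss rate of 0.5 would hide the disparity entirely.
```

The point of the disaggregation is visible in the last comment: a single overall metric can look acceptable while one group bears most of the missed diagnoses.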

In 2015, an AI recruiting program used by a tech company was found to be biased against women when reviewing CVs for technical positions.11 The tool had learned from the male-dominated pattern of applications submitted over the previous decade, and as a result it penalized CVs containing the word “women’s.” The company attempted to make the tool impartial to such words, but it later disbanded the team that built that specific AI program.

While AI is a useful and efficient tool in various sectors, such as medical diagnostics and recruitment, these automated tools need to be investigated for potential bias. Curbing the gender gaps in AI models requires four things. Firstly, more female experiences, perspectives, and insights need to be adequately captured in the datasets of different fields (medicine, business, etc.) so that AI is built more responsibly and reliably on inclusive data.

Secondly, there needs to be more female representation in AI talent, including more venture capital (VC) funding for women founders of AI-related startups. In 2022, only 1.9% of VC funding in the US went to female startup founders.12 Women are underrepresented in technical and leadership roles in tech companies: according to the World Economic Forum’s 2023 Global Gender Gap Report, only 30% of AI professionals were women in 2022.13 While that figure is a four-percentage-point increase from 2016, significant improvement is still required.

Women’s attendance at tech conferences also matters because it signals women’s participation in the conversations that shape discourse on AI. At the Web Summit in Doha, women represented 37% of attendees, 31% of participating startups, and 30% of speakers.14 Conferences are where industry-shaping perspectives are shared, novel ideas are acquired, and mentorship, relationships, and networks are fostered. Fewer women than men at such events means fewer women harnessing those benefits. The Web Summit has recognized the importance of female participation and has launched its “Women in Tech” initiative to attract more women to its events and improve the gender ratio.

Thirdly, it is important to have gender diversity across the pipeline of AI research, development, and deployment, including more female data scientists.15 Increasing women’s representation in such positions requires ensuring that access to science, technology, engineering, and mathematics education is provided equally to women and that any barriers to entry are dismantled.16 Women’s representation in building AI responsibly is significant, and it starts with educational opportunities that ultimately lead to including more women at the different stages of designing AI solutions.

Fourthly, guardrails that do not prohibit innovation, in the form of laws, regulations, and controls, need to be put in place to safeguard AI.17 Future AI regulation will require that, at every point in an AI system, individuals be able to understand and audit its decision-making processes and outcomes (the concept of explainability).18 There is currently a level of public skepticism about the use of AI: a 2023 survey found that “52% of Americans are more concerned than excited about AI in daily life.”19 Explainability matters because, as AI becomes more immersed in people’s lives across industries, automating decisions and recommendations, its users and those affected by its output should be able to interpret and understand how an algorithm reached a particular result.20 This builds trust, transparency, and accountability in the use of AI tools.21
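As a toy illustration of what explainability can mean in practice, a linear scoring model can be decomposed into per-feature contributions that a user, auditor, or affected applicant can inspect. The loan-scoring framing, weights, and feature names below are entirely hypothetical:

```python
def explain_linear(weights, bias, features):
    """Per-feature contribution to a linear model's score,
    ranked by absolute impact — a minimal form of explainability."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical loan-scoring weights and one applicant's features.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
score, ranked = explain_linear(
    weights, bias=0.1,
    features={"income": 2.0, "debt": 0.9, "years_employed": 1.0})
for name, contribution in ranked:
    print(f"{name}: {contribution:+.2f}")
# income has the largest absolute contribution, followed by debt.
```

Simple models like this are explainable almost for free; the harder regulatory question the text raises is how to demand comparable interpretability from complex models whose internal reasoning is not directly inspectable.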

The research, development, and deployment of AI-enabled technologies need to be balanced with safety and security, audits, transparency, fairness, and privacy laws.22 To regulate AI adequately and holistically, key stakeholders — such as technology companies and AI startups, AI researchers and data scientists, legal experts, policy makers, and civil society — need to collaborate.23

Women and girls are essential to AI’s future. To optimize AI’s unique capabilities, the full range and spectrum of women’s experience must be represented at the different stages of AI development, so that optimal AI solutions can be deployed to address societal challenges and ultimately achieve social and economic progress.

  1. IBM, “What is artificial intelligence (AI),” Think Topics, ↩︎
  2. Apple, “About Face ID advanced technology,” Apple Support, January 10, 2024, ↩︎
  3. Arianna Johnson, “You’re Already Using AI: Here’s Where It’s at in Everyday Life, from Facial Recognition to Navigation Apps,” Forbes, April 14, 2023, ↩︎
  4. “Web Summit Qatar,” Web Summit, ↩︎
  5. Vincent C. Müller, “Ethics of Artificial Intelligence and Robotics,” Stanford Encyclopedia of Philosophy, April 30, 2020, ↩︎
  6. Kate Bravery and Radhika Punshi, “Are Women Right to be Wary of AI?”, Mercer, ↩︎
  7. National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” U.S. Department of Commerce, January 2023, ↩︎
  8. Cheyenne DeVon, “‘AI Doesn’t Know Good from Wrong,’ Says Tech Expert — Why AI Bias Happens, and How to Fix It,” CNBC, December 16, 2023, ↩︎
  9. IBM, “Shedding light on AI bias with real world examples,” IBM Data and AI Team, October 16, 2023, ↩︎
  10. Isabel Straw and Honghan Wu, “Investigating for Bias in Healthcare Algorithms: A Sex-Stratified Analysis of Supervised Machine Learning Models in Liver Disease Prediction,” BMJ Health & Care Informatics 29 (2022): e100457. ↩︎
  11. Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women,” Reuters, October 10, 2018, ↩︎
  12. Dominic-Madori Davis, “Women-Founded Startups Raised 1.9% of All VC Funds in 2022, a Drop from 2021,” TechCrunch, January 19, 2023, ↩︎
  13. World Economic Forum, “Global Gender Gap Report 2023,” June 20, 2023, ↩︎
  14. “Women in tech,” Web Summit, ↩︎
  15. Ekaterina Hertog, “AI, Automation in the Home and Its Impact on Women,” Interview by Elizabeth Fetterolf, University of Oxford, March 7, 2024, ↩︎
  16. “Breaking the Gender Divide: How Business Can Build Future Career Paths for Women in AI and Emerging Tech,” March 8, 2024, ↩︎
  17. Department for Science, Innovation & Technology, “A pro-innovation approach to AI regulation,” March 2023, ↩︎
  18. The Royal Society, “Explainable AI: The Basics,” policy briefing, November 2019, ↩︎
  19. Michelle Faverio and Alec Tyson, “What the Data Says about Americans’ View of Artificial Intelligence,” Pew Research Center, November 21, 2023, ↩︎
  20. Alejandro Barredo Arrieta et al., “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges towards Responsible AI,” Information Fusion 58 (June 2020): 82-115. ↩︎
  21. IBM, “What is explainable AI,” Think Topics, ↩︎
  22. Chanley Howell, “The Most Critical Factors for AI Legal Compliance: Transparency and Explainability,” JD Supra, September 19, 2023, ↩︎
  23. Howell, “The Most Critical Factors for AI Legal Compliance: Transparency and Explainability.” ↩︎