Understanding AI Bias: Key Insights from Our Learning Lunch

By LegalTech in Leeds

Wed, 04 Dec '24

On October 30, 2024, we hosted a thought-provoking learning lunch titled "AI, Ain't I a Woman?" led by Dr. Emily Roach, Senior Lecturer in Law at The University of Law. With over twelve years of experience in international law firms and a PhD focused on the intersections of law, technology, and culture, Dr. Roach guided an insightful discussion on the important issue of biases in artificial intelligence (AI). The event explored the implications of AI across various sectors, including healthcare, finance, and law. Here are the key takeaways from this engaging event.

Introduction to AI and Bias

The session began with an overview of AI and bias, highlighting the critical need to understand algorithmic biases. A notable reference was made to a video by Code.org®, a nonprofit that seeks to expand access to computer science in schools, increasing participation by young women and students from other underrepresented groups. Dr. Roach highlighted comments from Dr. Amanda Askell, who discussed how already underprivileged individuals are particularly vulnerable to AI.

Systemic Biases and AI Algorithms

The discussion examined how systemic societal biases—particularly those linked to race, gender, and economic status—can influence human biases, which are then encoded into AI algorithms. Dr. Roach explained that these algorithms often perpetuate existing inequalities and are challenging to identify and mitigate due to their complexity and 'black box' nature. Recognising and understanding these biases is therefore the first step towards mitigating them.

Historical Context for Ethical AI

Sojourner Truth’s speech, 'Ain’t I a Woman?' was a powerful call for racial and gender equality, challenging societal norms that dismissed Black women. Dr. Joy Buolamwini, founder of the Algorithmic Justice League, references Truth in her spoken word poem ‘AI, Ain’t I a Woman?’ which gave the discussion its title. Dr. Roach explained how Buolamwini identified racial bias in modern facial recognition technology and emphasised that, while technology advances, underlying social inequalities still need addressing. When these are replicated in AI unchecked, these biases can reinforce discrimination on a wide scale.

Sector-Specific Risks and Use-Cases of AI Bias

AI has transformative potential across sectors, but its application also brings significant risks, especially when biased algorithms impact critical areas such as healthcare, finance, and law. Ensuring that AI systems use representative data and undergo rigorous testing is essential to prevent harm to vulnerable populations. Each sector presents unique challenges and opportunities for ethical AI integration:

  • Healthcare: AI can revolutionise health assessments and diagnostics, but it is also susceptible to bias due to incomplete or unrepresentative data sets. For example, the Framingham Heart Study revealed racial biases in heart disease prediction tools, illustrating the need for diverse data in healthcare models to avoid inaccurate or unfair outcomes.
  • Financial Services: A study by the FCA and the Bank of England published in 2024 suggests that 75% of financial services firms are already using AI, with a further 10% planning to use AI over the next three years. As such, AI is increasingly central to consumer decisions. Biased algorithms can lead to unfair lending practices or discrimination. Regulatory clarity and thorough testing are crucial to align AI practices with anti-discrimination laws and ensure ethical standards.
  • Legal Sector: AI tools are becoming more prevalent in law, offering efficiencies and value in client services. However, biases embedded in historical legal precedents and judicial decisions must be scrutinised when implementing AI in legal systems. Rigorous testing, with ethical considerations at the forefront, is necessary to prevent unintended biases that affect areas such as the criminal justice system.

Encouraging Diversity and Mitigating Bias in AI

The event underscored the importance of fostering diversity within the AI workforce, particularly by encouraging more women and other underrepresented groups to engage with AI tools and careers. Addressing barriers to entry and establishing support structures are essential steps toward creating a more equitable workforce. Dr. Roach highlighted research by the World Economic Forum which suggests that employers can tackle potential long-term diversity issues through upskilling their workforce and understanding bias mitigation.

Dr. Roach highlighted research by Dr. Arlette Danielle Román Almánzar, David Joachim Grüning and Professor Laura Marie Edinger-Schons which emphasises the critical role programmers play in reducing biases in AI systems. Dr. Roach explained that this research shows that programmers with a strong social responsibility mindset tend to produce less biased tools. Proactive measures—including rigorous testing and clear policies—are also vital to mitigate AI bias and ensure ethical development.

Although work has been undertaken to tackle a lack of diversity in the AI global workforce, statistics from organisations such as UN Women and the World Economic Forum suggest that women and global majority individuals remain underrepresented at all stages of the AI workforce cycle.

Public Education and Policy Makers

The necessity for public education on AI tools and their limitations was underscored, emphasising that informed individuals can better hold developers accountable. Finally, it was noted that while challenges exist, it's not too late for policymakers to address AI bias through collective efforts. The AI Now Institute offers valuable resources for public education on AI.

Final Thoughts

The event concluded with a call for continued discussions and proactive engagement with AI to ensure that its development benefits everyone, particularly marginalised communities.

For those looking to delve deeper into the subject of AI bias, Dr. Roach works closely with colleagues at The University of Law who have been developing courses and further training on this topic and others, with a particular focus on the use of AI in Higher Education.

"It was a pleasure to be invited to discuss such an important topic by @Whitecap Consulting. LegalTech in Leeds is making great strides in driving these conversations forward, which is essential to ensuring that AI integration across business sectors centres ethical considerations from the outset. AI bias does not arise from a glitch in the technology; it reflects global inequities and power imbalances that exist in the world around us. Understanding this is an essential part of the democratisation of AI: recognising that its failings are human failings, and striving to ensure that developing technologies are sustainable, accessible and transparent."

If you're an AI Technology provider, AI Researcher, or part of a legal team adopting AI and would like to join future discussions, we invite you to express your interest by messaging us on LinkedIn or emailing us at leedslegaltech@whitecapconsulting.co.uk