The Flawed Promise of AI: Tackling Bias in Justice, Ethics, and Beyond
Artificial intelligence (AI) has long promised to modernise industries, improve efficiency, and solve societal challenges. But behind the glossy exterior of this technological marvel lies a pressing issue—bias.
Whether in facial recognition, healthcare, or criminal justice systems, AI has shown troubling patterns of perpetuating and amplifying discrimination.
Understanding how bias creeps into algorithms and addressing its consequences is essential for ensuring these tools benefit everyone.
Bias in the Building Blocks of AI
Bias in AI isn’t just about faulty data; it’s deeply rooted in the human elements behind the scenes.
The creators of algorithms bring their own implicit biases to the table, shaping how systems are designed and deployed. The landmark Gender Shades study by Dr. Joy Buolamwini and Dr. Timnit Gebru showed that commercial facial recognition systems misclassify darker-skinned women at far higher rates than any other group.
This wasn’t just a matter of insufficient data representation but also a reflection of the biases of those designing these systems.
Even when efforts are made to improve diversity within datasets, challenges remain. Adding more representation doesn’t automatically erase bias in the deployment of these tools.
For instance, facial recognition systems are disproportionately used in communities of colour, further entrenching systemic discrimination.
Simply fixing the data isn’t enough; the entire process—from design to implementation—requires scrutiny.
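One way to see why aggregate performance figures can mask these failures is to disaggregate error rates by subgroup, the kind of evaluation Gender Shades popularised. The sketch below, in Python, uses invented counts and group labels purely for illustration; the numbers are not from the study.

```python
# Illustrative only: disaggregating accuracy by subgroup with made-up numbers.
# Counts and error figures are hypothetical, not results from Gender Shades.
results = {
    # group: (correct classifications, total faces evaluated)
    "lighter-skinned men":   (990, 1000),
    "lighter-skinned women": (965, 1000),
    "darker-skinned men":    (940, 1000),
    "darker-skinned women":  (660, 1000),
}

total_correct = sum(correct for correct, n in results.values())
total_seen = sum(n for _, n in results.values())
print(f"Overall accuracy: {total_correct / total_seen:.1%}")  # looks respectable

for group, (correct, n) in results.items():
    print(f"{group:>22}: error rate {1 - correct / n:.1%}")
# The aggregate figure hides the fact that one subgroup bears almost all of the errors.
```

An overall accuracy near 89% looks acceptable until the per-group breakdown shows where the failures are concentrated.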
Healthcare Algorithms: A Lesson in Oversight
The health sector offers another stark example of AI bias, as seen in the Optum algorithm used to decide which patients should be prioritised for additional care. Because it treated past healthcare spending as a proxy for medical need, the algorithm overlooked critical systemic inequities.
Black patients, who face historical and institutional barriers to accessing healthcare, were systematically deprioritised.
The algorithm failed to account for the mistrust many Black Americans feel toward the healthcare system, shaped by decades of unequal treatment and cultural misunderstanding.
The Optum case highlights the dangers of ignoring institutional factors when designing algorithms. A purely cost-based approach seemed neutral but ignored the broader realities of systemic racism in healthcare. This led to decisions that reinforced existing disparities instead of addressing them.
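A stripped-down sketch makes that mechanism concrete: when past spending stands in for medical need, patients whose access barriers suppressed their spending fall down the priority list. The figures, features, and ranking rule below are invented for illustration and are not taken from the Optum system.

```python
# Illustrative sketch: ranking patients by past spending instead of medical need.
# All numbers are invented; this is not the Optum model.
patients = [
    # (id, chronic_conditions, past_annual_spending_usd)
    ("A", 4, 12_000),   # good access to care, so need shows up as spending
    ("B", 4, 4_500),    # equal need, but barriers to access mean lower spending
    ("C", 1, 3_000),
    ("D", 2, 6_000),
]

# Proxy target: prioritise the patients predicted to cost the most.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)

# What actually matters: prioritise the patients with the greatest need.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

slots = 2  # suppose only two care-management places are available
print("Selected by cost proxy:", [p[0] for p in by_cost[:slots]])   # A, D
print("Selected by actual need:", [p[0] for p in by_need[:slots]])  # A, B
# Patient B, with the same level of need as A, is dropped when cost stands in for need.
```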
Bias in Criminal Justice Algorithms
One of AI’s most controversial applications lies in the criminal justice system, particularly in risk assessment tools intended to inform bail and sentencing decisions.
These tools were initially celebrated as a step toward reforming wealth-based pretrial systems.
However, as civil rights organisations have increasingly pointed out, their reliance on historical criminal justice data leads these systems to perpetuate existing inequities.
Even when race is not an explicit factor in these algorithms, proxies like arrest records or neighbourhood crime data often correlate strongly with racial disparities.
Arrests, for instance, may reflect discriminatory policing practices rather than actual criminal behaviour, leading to skewed outcomes.
The phrase “bias in, bias out” captures this cycle—algorithms trained on biased data cannot produce unbiased results.
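The sketch below makes "bias in, bias out" concrete with synthetic data: the scoring rule never sees race, yet a recorded-arrest feature generated under unequal policing intensity carries the disparity straight into the scores. Everything here is fabricated for illustration and does not represent any deployed risk-assessment tool.

```python
# Illustrative sketch of "bias in, bias out": the model never sees race,
# but an arrest-count feature shaped by unequal policing acts as a proxy.
# All data are synthetic and the scoring rule is invented for illustration.
import random

random.seed(0)

def simulate_person(policing_intensity):
    # Same underlying behaviour for everyone; only the level of surveillance differs.
    offences = random.randint(0, 3)
    # Heavier policing converts more of the same behaviour into recorded arrests.
    arrests = sum(random.random() < policing_intensity for _ in range(offences))
    return arrests

def risk_score(arrests):
    # A "race-blind" rule: the score depends only on the recorded arrest history.
    return min(10, 2 * arrests)

group_a = [risk_score(simulate_person(policing_intensity=0.3)) for _ in range(10_000)]
group_b = [risk_score(simulate_person(policing_intensity=0.7)) for _ in range(10_000)]

print(f"Average score, lightly policed group: {sum(group_a) / len(group_a):.2f}")
print(f"Average score, heavily policed group: {sum(group_b) / len(group_b):.2f}")
# Identical behaviour in, unequal scores out: the arrest record carries the bias.
```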
The Myth of AI Neutrality
One of the most dangerous misconceptions about AI is the belief that it is inherently neutral. Algorithms are often perceived as objective tools, but this illusion masks the biases embedded within their design and data.
Racial bias, in particular, can be obscured in layers of data, making it challenging to identify and address.
The lack of transparency surrounding AI datasets further compounds the problem. Without public access to the datasets used to train these tools, evaluating their fairness becomes nearly impossible. This opacity allows biased systems to operate unchecked, disproportionately affecting marginalised communities while maintaining an illusion of fairness.
A Never-Ending Challenge
Addressing bias in AI is far from straightforward. While some progress can be made by diversifying datasets and auditing algorithms, these measures are often reactive rather than proactive.
Bias emerges not only from the data but also from the institutional systems and human behaviours that shape the world.
For instance, algorithms cannot account for systemic inequities like institutional racism in healthcare or justice systems unless explicitly designed to do so—a task fraught with complexity.
Experts caution that eliminating bias entirely may be an unrealistic goal. The focus, instead, may need to shift toward continuous review and improvement.
Every algorithm must be scrutinised for potential biases, from its inception to deployment, creating a cycle of accountability that evolves alongside the systems it seeks to improve.
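In practice, that cycle can take the form of a simple, recurring check run before every release: compute an error metric separately for each group and hold back deployment if the gap is too wide. The metric, threshold, and data format in the sketch below are assumptions for illustration, not an established standard.

```python
# Minimal sketch of a recurring fairness check: compare false positive rates
# across groups and flag the model for review if the gap is too wide.
# The threshold and record format are illustrative assumptions.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_positive, actually_positive)."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

def audit(records, max_gap=0.05):
    rates = false_positive_rates(records)
    for group, rate in rates.items():
        print(f"{group}: false positive rate {rate:.1%}")
    gap = max(rates.values()) - min(rates.values())
    return gap <= max_gap  # release only if the disparity stays within the gap

# Hypothetical audit data: (group, predicted high risk, actually reoffended)
sample = [("group_x", True, False)] * 12 + [("group_x", False, False)] * 88 \
       + [("group_y", True, False)] * 30 + [("group_y", False, False)] * 70
print("Passes audit:", audit(sample))
```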
If AI systems are to gain public trust, transparency must become a priority. Opening datasets for external review can help identify biases and provide insights into how algorithms function.
However, transparency alone is not enough. Developers, policymakers, and communities must actively collaborate to ensure AI serves as a tool for equity rather than exclusion.
Artificial intelligence has tremendous potential to improve society, but without addressing the biases embedded within its systems, it risks deepening the very inequalities it promises to remedy.
- Barabas, Chelsea. "Beyond Bias: Re-imagining the Terms of 'Ethical AI' in Criminal Law." Georgetown Journal of Law & Modern Critical Race Perspectives 12 (2020): 83.
- Ferrer, Xavier, et al. "Bias and Discrimination in AI: A Cross-Disciplinary Perspective." IEEE Technology and Society Magazine 40.2 (2021): 72-80.
- Gauthier-Moulton, Oliveigha. "The Risk of Using Artificial Intelligence in the Canadian Criminal Justice System." National Journal of Constitutional Law 83.