Introduction
In the midst of the excitement surrounding artificial intelligence (AI) and digital transformation, it is essential to ask whether we are too quick to assume that technology holds the answer to all our problems. Are we neglecting the societal and human harms that can arise from our enthusiastic embrace of AI? These are the questions raised by Meredith Broussard in her latest book, ‘More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.’
Broussard’s book adds to a growing body of work that explores bias and the wider social implications of our rush to adopt AI. Other notable works include Cathy O’Neil’s ‘Weapons of Math Destruction,’ Safiya Noble’s ‘Algorithms of Oppression,’ and Broussard’s previous book, ‘Artificial Unintelligence.’
The Problem of Technochauvinism
At the core of Broussard’s argument is the concept of ‘technochauvinism’ – the belief that technological solutions are always superior to social or other means of driving change. Broussard illustrates the idea with the example of a ‘stair-climbing machine’ proposed by technologists to improve the lives of disabled people, when a simpler solution, such as a ramp or an elevator, would often be more practical and more desirable for wheelchair users.
Broussard also discusses the ‘disability dongle’ – a term for gadgets dreamed up by able-bodied engineers that offer quick technological fixes instead of addressing complex societal issues. She advocates choosing the right tool for the job, which is not always the most advanced technology or algorithm.
The Difference Between Mathematical and Social Fairness
Broussard delves into the distinction between mathematical fairness and social fairness. Computers can compute mathematically equal outcomes, but those outcomes may not match social realities. Broussard shares a childhood anecdote about splitting a cookie: dividing it exactly in half is mathematically fair, yet the result can still feel unfair to the people involved. True fairness often requires negotiation and compromise, which are social rather than computational processes.
Thus, Broussard suggests that computers are best reserved for mathematically oriented problems rather than leaned on heavily for societal challenges.
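To make the distinction concrete, here is a minimal, hypothetical sketch (not from the book): a split can satisfy one measure of fairness while failing another that people actually care about. The quantities and field names are invented for illustration.

```python
# Illustrative sketch (hypothetical values): a split can be "fair" by one
# metric (equal mass) while being unequal by the metric the people involved
# actually care about (chocolate chips per half).

halves = {
    "half_a": {"grams": 15.0, "chips": 6},
    "half_b": {"grams": 15.0, "chips": 1},
}

equal_mass = halves["half_a"]["grams"] == halves["half_b"]["grams"]
equal_chips = halves["half_a"]["chips"] == halves["half_b"]["chips"]

print(f"Mathematically equal split (mass): {equal_mass}")   # True
print(f"Feels fair to the kids (chips):    {equal_chips}")  # False
```

The point of the sketch is simply that "fair" depends on which quantity you measure, and choosing the quantity is a social decision, not a computational one.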
The Role of AI in the Future of Work
Broussard addresses concerns about AI replacing human workers, particularly writers and journalists. She argues that AI-generated content lacks essential human qualities such as creativity and the ability to generate genuinely new ideas. Rather than replacing writers, AI can augment their creativity and help them organize their work.
Ethical Considerations in Computer Vision
One area of AI that Broussard finds particularly worrying is computer vision and how it treats people differently by race, gender, and other factors. Facial recognition technology, for example, is markedly less accurate for women and for people with darker skin, which raises serious ethical concerns in areas such as policing. Broussard argues that facial recognition should not be used in policing, because its misidentifications fall disproportionately on communities that are already over-policed.
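As a rough illustration of the kind of disparity at issue, the hypothetical sketch below uses made-up match records (not data from the book) to show how a system can look acceptable on average while producing very different false-match rates for different groups; the group labels and numbers are invented.

```python
# Illustrative sketch with made-up numbers: the same system can have very
# different false-match rates across groups, which is the kind of disparity
# Broussard highlights for facial recognition in policing.

# (group, true_match, predicted_match) for a hypothetical matching system
records = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 0, 1),
]

def false_match_rate(group):
    # Among people who are NOT a true match, how often are they flagged anyway?
    negatives = [(t, p) for g, t, p in records if g == group and t == 0]
    return sum(p for _, p in negatives) / len(negatives)

for group in ("group_a", "group_b"):
    print(f"{group}: false-match rate = {false_match_rate(group):.0%}")
# group_a: 25% vs. group_b: 75% — a system that looks fine "on average" can
# still misidentify one group far more often.
```

In a policing context, a higher false-match rate for one group translates directly into more wrongful stops and arrests for that group, which is why Broussard treats this as a question of harm rather than a tuning problem.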
A Balanced Approach to AI
Broussard acknowledges the potential benefits of AI but cautions against overhyping its transformative power. She notes that AI is not a replacement for human creativity but a tool to enhance it. She emphasizes the importance of organizations, institutions, and campaign groups in addressing the real harms experienced by individuals and ensuring fair and responsible AI development.
Conclusion
While some may hold a more optimistic view of AI’s potential, Broussard urges caution and emphasizes the need for critical examination of AI’s societal impact. She highlights the necessity of involving diverse voices and considering the real-world implications of AI applications. Ultimately, a balanced and responsible approach is essential to ensure that AI benefits society without exacerbating existing biases or inequalities.