AI is all around us, from the apps on our phones to the self-driving cars that are starting to hit the road. But as AI becomes more powerful, so does the risk of AI-driven discrimination.
AI discrimination can arise in a number of ways. For example, an AI-powered hiring tool might be more likely to recommend male candidates for jobs than female candidates, or an AI-powered facial recognition system might be more likely to misidentify black people than white people. Here are some real-world examples of AI discrimination:
Amazon's hiring algorithm: In 2018, Amazon was found to have been using an AI-powered hiring algorithm that was biased against women. The algorithm was trained on a dataset of resumes that was heavily skewed towards male candidates, and as a result, it was more likely to recommend male candidates for open positions.
BBC recruitment tool: In 2019, it was reported that an AI-powered recruitment tool used by the BBC was biased against women. The tool was found to be more likely to recommend male candidates for open positions, even when the female candidates were equally qualified.
Met Police facial recognition: In 2020, it was reported that an AI-powered facial recognition system used by the Metropolitan Police was biased against black people. The system was found to be more likely to misidentify black people than white people.
Barclays credit scoring: In 2021, it was reported that an AI-powered credit scoring system used by Barclays was biased against people from ethnic minority backgrounds. The system was found to be more likely to give them lower credit scores, even when their financial history matched that of white applicants.
Facebook ad targeting: AI-powered ad targeting systems can also be biased against certain groups of people. For example, a study by ProPublica found that Facebook's ad targeting system was more likely to show ads for high-paying jobs to white people than to black people.
AI discrimination can have many negative consequences. As the above examples show, it can lead to people being denied jobs, loans, or other opportunities, and have a major effect on people's lives. It can also reinforce negative stereotypes about certain groups of people.
So, what can we do to address AI discrimination? There are a few places to start:
Use unbiased data to train AI systems
Design AI systems with fairness in mind
Have human oversight of AI systems to ensure that they are not being used in a discriminatory way
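One concrete way to combine fairness-aware design with human oversight is to measure a system's outcomes across groups before and after deployment. The sketch below is a minimal, illustrative example (all names and data are hypothetical, not drawn from any of the systems above): it computes the "disparate impact ratio" between two groups' selection rates, a simple fairness check sometimes paired with the four-fifths rule of thumb, under which ratios below 0.8 are flagged for human review.

```python
# Hypothetical sketch: auditing a hiring tool's recommendations for
# adverse impact. All data and thresholds below are illustrative.

def selection_rate(recommended, group, target):
    """Fraction of candidates in `target` group that the tool recommended."""
    outcomes = [r for r, g in zip(recommended, group) if g == target]
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(recommended, group, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. A ratio of 1.0 means both groups are recommended at the
    same rate; lower values suggest possible bias against `protected`."""
    return (selection_rate(recommended, group, protected)
            / selection_rate(recommended, group, reference))

# Illustrative model outputs: 1 = recommended for interview, 0 = rejected.
recommended = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
gender      = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]

ratio = disparate_impact_ratio(recommended, gender,
                               protected="F", reference="M")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.4 / 0.6 = 0.67

# Four-fifths rule of thumb: flag ratios below 0.8 for human review.
if ratio < 0.8:
    print("Flag for human review: possible adverse impact on female candidates")
```

A check like this is not a guarantee of fairness on its own (a single metric can be satisfied while other disparities remain), but routine measurement gives the human overseers something concrete to act on.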
Organisations such as the ICO are working to address AI discrimination in a number of ways. This includes issuing guidance and practical toolkits to help AI developers ensure their algorithms treat people and their information fairly, as well as researching the risks of AI discrimination and developing tools to help organisations mitigate those risks.
AI is a double-edged sword. It can improve our lives in many ways, but it also has the potential to cause harm. It is important to be aware of the risks of AI discrimination and to take steps to mitigate them. We can use AI to create a more just and equitable world, but only if we are careful not to use it to perpetuate discrimination.