
Why “AI Bias” is a good thing

When data-driven algorithms tell you that you have a problem that cannot just be put down to “a few bad apples”, the opportunity is there for a healthy conversation. Should you choose to take it.


Every year millions of job interviews take place in New York.

It’s a fantastic place that everyone should grab the chance to work in if they can. But it doesn’t take a deep understanding of US society to know that prejudice has long played a role in who gets what jobs. Prejudice against different waves of immigrants — the Irish! the Italians! the Jews! the Costa Ricans! — good old-fashioned sexism, or bias on grounds of sexuality or skin colour have all had an impact on those job interviews. New York prides itself on being a place where ambition and ability matter most, and those prejudices have — largely — been reduced, if not eradicated.

And from April, New York job interviews will be audited for bias.

Or, more precisely, the automated tools that support the job interview process will be. These carry out anything from reading and sorting CVs to delivering interviews via video. The audit details are still being worked out, but the purpose is clear: AI tools need to be checked for bias.
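What might such a check look like? Here is a minimal sketch in Python, assuming the audit compares selection rates across demographic groups and flags any group falling below the “four-fifths” threshold long used in US employment practice. The groups, numbers and threshold are purely illustrative, not details of the New York rules.

# Illustrative only: a simple disparate-impact check of the kind a
# bias audit might run. Groups, counts and the 0.8 threshold are
# assumptions for the sake of the sketch.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate relative to the best-off group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening results from a CV-sorting tool.
outcomes = {
    "group_a": (120, 400),  # 30% selected
    "group_b": (45, 300),   # 15% selected
}

for group, ratio in impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")

Run as written, group_b comes out with an impact ratio of 0.50 and gets flagged. The hard part of a real audit is what happens after the flag.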

What has driven this?

AI tools typically use historic data — in this case, data on past recruitment decisions — to drive their decision algorithms. This raises the issue that historic decisions have negatively impacted certain categories of people, for example on grounds of gender or ethnicity. Most famously, Amazon had to withdraw its recruitment algorithm after it became clear that a historic bias towards white or Asian male engineers meant that it marked female applicants far more harshly than their male counterparts for equivalent CV achievements. No amount of code hacking or data weighting could cover for the embarrassment of the historic record.
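The mechanism is easy to demonstrate. Below is a minimal synthetic sketch in Python (emphatically not Amazon’s actual system): historic hiring decisions rewarded skill but also penalised CVs containing a gendered proxy term, and a model fitted to those decisions faithfully learns the same penalty.

# Illustrative only: how historic bias leaks into a screening model.
# The data is synthetic; the gendered proxy term is an assumption
# based on public reporting of the Amazon case.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                # true ability, identical across groups
proxy_term = rng.integers(0, 2, size=n)   # 1 if the CV contains e.g. "women's"

# Historic decisions rewarded skill but also penalised the proxy term.
hired = (skill - 1.0 * proxy_term + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, proxy_term])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill:      %+.2f" % model.coef_[0][0])
print("coefficient on proxy term: %+.2f" % model.coef_[0][1])
# The second coefficient comes out strongly negative: the model has
# learned yesterday's prejudice and stands ready to apply it at scale.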

AI tools provide a mirror to the real world.

Examples abound. Google’s image library was so racially skewed that its image categoriser mis-labelled African-American people as gorillas. Chatbots have repeatedly juddered towards racism or sexism, given the corpus of Internet chunter they were trained on. These issues have real-world impacts — for example in healthcare, where cancer-hunting tools deliver different (potentially life-threatening) results based on skin pigmentation. These events tend to be held up as calamities and signs of the great dangers that AI will impose on the world.

To my mind they are one of the great blessings of the AI age.

Should we choose to act on it.

Faced with hard mathematical evidence of issues from AI tools, the conversation about what to do about them in the real world will be part of the discourse of the very technical and managerial elite who are building the 21st Century. Clearly this may not always be top of their agenda — although bias undermines the very AI tools that will build their companies. (Data bias could equally stem from missing images that might, for example, weaken the ability of a quality-control camera to spot broken widgets, of an automated maintenance robot to avoid catastrophe, or of a compliance tool to function properly.)

However, the very fact that AI bias is one of the few aspects of the AI age that politicians and regulators have the patience and affinity to grasp means that this is not a luxury that they will be able to enjoy indefinitely. New laws — in the EU, California or New York — will bring them back to the reality that these issues are being watched and discussed, and will drive both reputational damage and regulatory action.

After all, it took AI for New York to take another look at those interview outcomes. “AI Bias” — it’s going to make the world a better place.

PS One way to handle this is to get an AI Explainability Statement drawn up. It will help drive the right discovery, governance and regulatory action.
