Why an AI recruiter can be as biased as the humans who built it

IBM toolkit aims to level the playing field for job applicants

Algorithmic bias may sound like the kind of deep-geekery that only hardcore techies get excited about, but in fact it is increasingly significant in all our working lives. AI computer systems now make suggestions about everything from what ads and newsfeeds we see when we go online, to whether we get a job interview or a callback from a recruiter, and the influence of AI is only set to rise as such systems become more common in the workplace.

The thinking is that computers are less fallible than human beings when it comes to bias or built-in prejudice against certain groups of people. But last year one large online retailer was forced to withdraw an AI recruiting tool that showed bias against women candidates.

Such biases arise partly as a result of the way that AI systems are programmed, says Francesca Rossi, IBM’s AI Ethics Global Leader and a member of the European Commission’s High-Level Expert Group on Artificial Intelligence. “Why should AI recommend something that is discriminatory? It’s because we do not tell an AI system exactly how to solve a problem with a set of rules. Instead we give examples of what the solution looks like and the AI learns from these examples.”

If those examples, known as training data (such as a list of past hiring decisions), unwittingly have bias built in, then the AI system is likely to pick up on that bias and amplify it. “If you are not careful about which examples you give, the AI system could learn something discriminatory, such as gender having something to do with the decision,” she says.
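To see how that can happen, here is a rough sketch using synthetic data; the features, numbers and model below are invented purely for illustration. Because the historical decisions carry an unfair penalty against one group, a simple classifier trained on them ends up placing real weight on gender:

```python
# Rough sketch: train a simple classifier on synthetic, biased hiring
# decisions and inspect what it learns. Everything here is invented
# for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Two inputs: years of experience (job-relevant) and gender (0 or 1).
experience = rng.normal(5, 2, n)
gender = rng.integers(0, 2, n)

# Historical decisions driven by experience, but with an unfair penalty
# applied to one group. That penalty is the bias built into the examples.
hired = (experience - 1.5 * gender + rng.normal(0, 1, n)) > 4

X = np.column_stack([experience, gender])
model = LogisticRegression(max_iter=1000).fit(X, hired)

print("weight on experience:", round(model.coef_[0][0], 2))
# The weight on gender is clearly non-zero: the model has learned that
# gender "has something to do with the decision".
print("weight on gender:    ", round(model.coef_[0][1], 2))
```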

Mitigating these concerns through an ethical and people-centred approach to AI is key, says Rossi. “It’s important to build a system of trust – trust in the technology and in the people delivering it. AI can do things that our brains cannot, such as derive insight from large amounts of data. But without trust the technology will not be adopted and the many benefits it brings could be lost.”

Another potential source of bias is the minds of the programmers themselves – people can be subject to as many as 180 different types of bias (the four most common of which are described in the box below), and AI developers are no less human than anyone else. “IBM puts a lot of effort into algorithms to mitigate bias, but also into initiatives to educate developers so that they think about bias in their jobs,” Rossi says. One such initiative is IBM’s Everyday Ethics for Artificial Intelligence guide, released last year for the developer community.

The AI system could learn something discriminatory, such as gender having something to do with the decision

At IBM, Rossi and her team have identified four characteristics that are required to deliver human-centred ethics in AI. Fairness – does the system treat all the people affected by its decisions fairly? Explainability – can the system explain why it has made a particular decision? Robustness – is the system’s error rate kept to an acceptable minimum, recognising that the mistakes AI makes can be very different from the kinds people make? And transparency – are the design choices made in developing the system fully communicated to all the stakeholders involved?
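The fairness question in particular can be put into numbers. One long-standing rule of thumb in recruitment, the so-called four-fifths rule, simply compares selection rates between groups; the figures in this sketch are made up for illustration:

```python
# Rough sketch of a selection-rate comparison (the "four-fifths rule"
# often used in adverse-impact analysis). The counts are made up.
applicants = {"men": 200, "women": 200}
hired = {"men": 60, "women": 36}

selection_rate = {g: hired[g] / applicants[g] for g in applicants}
impact_ratio = min(selection_rate.values()) / max(selection_rate.values())

print("selection rates:", selection_rate)       # men 0.30, women 0.18
print("impact ratio:", round(impact_ratio, 2))   # 0.6

if impact_ratio < 0.8:  # the conventional four-fifths threshold
    print("potential adverse impact: one group is selected at less than "
          "80% of the rate of the most-favoured group")
```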

To this end, IBM has developed a set of tools to minimise the likelihood of bias arising in the first place, and to spot and compensate for any biases that already exist. These include Adverse Impact Analysis, which uses IBM’s Watson AI to identify gender, race and education biases in an organisation’s recruiting practices; the AI Fairness 360 toolkit, which gives developers algorithms that can compensate for bias in training data; and Watson OpenScale, an open environment that helps organisations automate and operationalise their AI and detect bias in real time.
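The AI Fairness 360 toolkit is open source, so the kind of check-and-mitigate step it supports can be sketched directly. The snippet below is a minimal illustration rather than IBM’s production pipeline: the toy hiring table is invented, and Reweighing is just one of the toolkit’s pre-processing algorithms.

```python
# Minimal sketch with the open-source AI Fairness 360 toolkit (aif360).
# The tiny hiring table is invented; Reweighing is one of several
# mitigation algorithms the toolkit provides.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy training data: 'gender' is the protected attribute, 'hired' the label.
df = pd.DataFrame({
    "gender":     [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "experience": [6, 4, 7, 5, 3, 6, 4, 7, 5, 3],
    "hired":      [1, 1, 1, 0, 1, 1, 0, 0, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

# Disparate impact compares selection rates between groups; 1.0 means parity.
before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("disparate impact before:", before.disparate_impact())  # 0.5 here

# Reweighing adjusts the weight of each training example so that outcomes
# are balanced across groups before any model is trained.
reweighed = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)

after = BinaryLabelDatasetMetric(
    reweighed, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("disparate impact after:", after.disparate_impact())  # approx. 1.0
```

Reweighing works on the training data itself; other algorithms in the toolkit intervene during model training or adjust a model’s predictions afterwards.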

Ultimately, says Rossi, better AI requires the full involvement and trust of everyone touched by the recommendations the system makes. “It needs a multi-stakeholder approach and is not something that can be done only by developers. It also has to be done by talking to the people who are going to be affected by that AI system.”

Four top biases

Four common ways our brains trick us into making false assumptions

1. Stereotyping: oversimplified and misleading beliefs about the habits or characteristics of a certain group of people. ‘Only males play video games’ is a commonly encountered stereotype, but in fact nearly half of video gamers are women.

2. Confirmation bias: favouring information that confirms preconceived beliefs. If someone believes that left-handed people are more creative, for example, every creative left-hander they encounter is proof, whereas they are likely to regard creative right-handers as exceptions that can be ignored.

3. Halo effect: using a single physical or personality trait to form an overall judgment of an individual. So an attractive or well-dressed person is more likely to be perceived as honest, hardworking and trustworthy than someone less attractive or untidily dressed.

4. Like me effect: the tendency to favour people who are most like us. So when recruiting we may prefer candidates with similar – rather than different – backgrounds, interests or education to our own.

How IBM is helping

For more than a century IBM has been creating solutions that build collaboration and trust between people, businesses and machines. IBM is now building AI systems designed to detect and mitigate bias, helping business leaders make better, more ethical decisions. By aligning AI with the values and ethical principles of the organisation and community it affects, we can build AI systems that everyone can trust.

Find out more here