AI Risks Jobseeker Bias: Safeguards Needed

Artificial intelligence (AI) tools are rapidly transforming the world of work - not least, the process of hiring, managing and promoting employees.

Author

  • Natalie Sheard

    McKenzie Postdoctoral Fellow, The University of Melbourne

According to the most recent Responsible AI Index, 62% of Australian organisations used AI in recruitment "moderately" or "extensively" in 2024.

Many of these systems classify, rank and score applicants, evaluating their personality, behaviour or abilities. They decide - or help a recruiter decide - who moves to the next stage in a hiring process and who does not.

But such systems pose distinct and novel risks of discrimination. They operate at a speed and scale that cannot be replicated by a human recruiter. Job seekers may not know they are being assessed by AI, and the decisions of these systems are inscrutable.

My research examined this problem in detail.

I found the use of AI systems by employers in recruitment - for CV screening, assessment and video interviewing - poses serious risks of discrimination for women, older workers, job seekers with disability and those who speak English with an accent. Legal regulation is yet to catch up.

The rise of artificial interviewers

To conduct my research, I interviewed not only recruiters and human resources (HR) professionals, but also AI experts, developers and career coaches. I also examined publicly available material provided by two prominent software vendors in the Australian market.

I found the way these AI screening systems are used by employers risks reinforcing and amplifying discrimination against marginalised groups.

Discrimination may be embedded in the AI system via the data or the algorithmic model, or it might result from the way the system is used by an organisation.

For example, the AI screening system may not be accessible to or validated for job seekers with disability.

One research participant, a career coach, explained that one of his neurodivergent clients, a top student in his university course, could not get through personality assessments.

He believes the student's atypical answers resulted in low scores and prevented him from moving to the next stage of recruitment processes.

Lack of transparency

The time limits for answering questions may be too short, or may not be communicated to candidates at all.

One participant, also a career coach, explained that not knowing the time limit for responding to questions had resulted in some of her clients being "pretty much cut off halfway through" their answers.

Another stated:

[…] there's no transparency a lot of the time about what the recruitment process is going to be, so how can [job seekers with disability] […] advocate for themselves?

New barriers to employment

AI screening systems can also create new structural barriers to employment. To undertake an AI assessment, job seekers need a phone, a secure internet connection and digital literacy skills.

These systems may result in applicants deciding not to put themselves forward for positions or dropping out of the process.

The protections we have

Existing federal and state anti-discrimination laws apply to discrimination by employers using AI screening systems, but there are gaps. These laws need to be clarified and strengthened to address this new form of discrimination.

For example, these laws could be reformed so that there is a presumption in any legal challenge that an AI system has discriminated against a candidate, putting the burden on employers to prove otherwise.

Currently, the evidential burden of proving such discrimination falls on job seekers. They are not well placed to do this, as AI screening systems are complex and opaque.

Any privacy law reforms should also include a right to an explanation when AI systems are used in recruitment.

The newly elected Albanese government must also follow through on its plan to introduce mandatory "guardrails" for "high risk" AI applications, such as those used in recruitment.

Safeguards must include a requirement that training data be representative and that the systems be accessible to people with disability and subject to regular independent audits.

We also urgently need guidelines for employers on how to comply with these laws when they use new AI technologies.

Should AI hiring systems be banned?

Some groups have called for a ban on the use of AI in employment in Australia.

In its Future of Work report, the House of Representatives Standing Committee recommended that AI technologies used in HR for final decision-making without human oversight be banned.

There is merit in these proposals - at least, until appropriate safeguards are in place and we know more about the impacts of these systems on equality in the Australian workplace.

As one of my research participants acknowledged:

The world is biased and we need to improve that but […] when you take that and put it into code, the risk is that no one from a particular group can ever get through.


Natalie Sheard receives funding from the University of Melbourne as a McKenzie Postdoctoral Fellow. This research was funded by a La Trobe University Graduate Research Scholarship and a La Trobe University Transforming Human Societies Research Scholarship.
