AI Revolutionizes Hiring: From Bias to Balance

Macquarie University/The Lighthouse
A study of HR professionals shows inclusion-focused AI can reduce disability discrimination and improve fairness in real-world recruitment scenarios.

Artificial intelligence is rapidly reshaping how organisations hire. From screening resumes to shortlisting candidates, AI is often promoted as a tool that can remove human bias and make recruitment more objective.

On the flip side, removing the human element from hiring also means candidates risk being overlooked if they don't fit a certain 'type'.

New research from Macquarie Business School suggests there is a way through this dilemma.

Bias is strongest when decisions are hardest

The study, published in Human Resource Management Journal, focuses on a persistent and often overlooked issue – disability bias in hiring. Despite growing awareness of diversity and inclusion, candidates with disabilities are still less likely to be selected even when they are equally qualified.

"Disability bias remains one of the most persistent hurdles in modern recruitment. It isn't always about conscious exclusion... it's often a byproduct of a high-pressure environment where the human brain defaults to 'safe' stereotypes over objective data."

When hiring decisions are simple, managers tend to focus on concrete skills and qualifications. But when decisions become more complex, people are more likely to rely on mental shortcuts, or stereotypes.

The more cognitively demanding the decision, the more likely bias is to influence the outcome.

In one experiment involving 238 HR professionals, disabled candidates were selected just 34 per cent of the time in complex hiring scenarios, compared to a neutral benchmark of 50 per cent. In simpler decisions, that gap narrowed significantly.

Not all AI is created equal

AI is often positioned as a solution to human bias. But the research shows standard AI tools – those focused purely on efficiency or technical criteria – don't necessarily fix the problem.

Instead, the study tested a different approach: 'inclusion-focused' generative AI.

Rather than simply processing information, this type of AI actively guides decision-makers. It prompts them to focus on job-relevant competencies, highlights fairness considerations and reduces the tendency to rely on stereotypes.
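
The study's actual prompts aren't reproduced here, but a minimal Python sketch may help make the idea concrete. Everything below – the prompt wording and the build_evaluation_prompt helper – is an illustrative assumption, not the researchers' tool:

    # Illustrative sketch only: the prompt text and structure are
    # assumptions about what 'inclusion-focused' guidance could look
    # like, not the prompts used in the Macquarie study.

    INCLUSION_SYSTEM_PROMPT = (
        "You are assisting a hiring evaluation. Assess candidates ONLY on "
        "job-relevant competencies and evidence. Before scoring, list the "
        "specific skills the role requires, then map each candidate's "
        "concrete evidence against them. Flag any reasoning that leans on "
        "assumptions about disability or other non-job-relevant attributes."
    )

    def build_evaluation_prompt(job_description: str, resume: str) -> list[dict]:
        """Assemble a chat-style prompt that steers attention toward
        competencies and evidence rather than stereotypes."""
        return [
            {"role": "system", "content": INCLUSION_SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Job description:\n{job_description}\n\n"
                f"Candidate resume:\n{resume}\n\n"
                "Rate fit on each required competency, citing evidence, and "
                "state what job-relevant information is still missing."
            )},
        ]

    # The resulting messages can be passed to any chat-completion API;
    # the final decision stays with the human evaluator.

The design choice matters: the model's output is framed around evidence and competencies, which is exactly the concrete framing the evaluator is then prompted to adopt.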

In complex hiring decisions, inclusion-focused AI significantly increased the likelihood of selecting disabled candidates compared to standard AI. In some cases, it nearly doubled hiring rates for disabled applicants.

Even in simpler decisions, the inclusion-focused approach consistently reduced bias.

"AI shouldn't just be a filter, it should be a coach," says Yang. "By using inclusion-focused generative AI, we aren't just processing resumes faster, we are actively guiding the human brain back to fairness by highlighting competency."

Why it works

The research draws on Construal Level Theory, which explains how 'psychological distance' shapes decision-making. When people feel distant from a decision – because it's complex or abstract – they are more likely to think in broad, simplified terms. That's when stereotypes tend to dominate.

Inclusion-focused AI reduces that distance.

By prompting evaluators to focus on concrete details like specific skills, qualifications and evidence, it shifts attention away from abstract assumptions and toward individual merit. It actively changes how people think about candidates.

"The power of this approach lies in its ability to disrupt how we process information," says Yang. "It prevents the mind from drifting into broad generalisations and keeps the focus laser-targeted on the individual's unique merits and qualifications."

A powerful tool – but not without risks

While the findings are promising, the researchers offer a caveat.

In some scenarios, the inclusion-focused AI appeared to 'overcorrect', leading to higher-than-neutral selection rates for disabled candidates. This raises the possibility of inverted bias, where efforts to reduce discrimination unintentionally swing too far in the other direction.

This doesn't negate the benefits, but it does point to the need for careful calibration.
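
One way to ground that calibration is to audit selection rates against the study's neutral benchmark. The sketch below is an assumption about how such a check could work; the function name, tolerance and flagging logic are illustrative, with only the 50 and 34 per cent figures taken from the study:

    # Illustrative calibration check. In a paired design where a disabled
    # and a non-disabled candidate are equally qualified, unbiased
    # selection should sit near the study's 50 per cent neutral benchmark.
    # The tolerance and function name are assumptions for this sketch.

    NEUTRAL_BENCHMARK = 0.50
    TOLERANCE = 0.05  # deviation allowed before flagging for review

    def audit_selection_rate(outcomes: list[bool]) -> str:
        """outcomes: True where the equally qualified disabled candidate
        was the one selected."""
        rate = sum(outcomes) / len(outcomes)
        if rate < NEUTRAL_BENCHMARK - TOLERANCE:
            return f"{rate:.0%}: possible bias against disabled candidates"
        if rate > NEUTRAL_BENCHMARK + TOLERANCE:
            return f"{rate:.0%}: possible overcorrection, recalibrate"
        return f"{rate:.0%}: within tolerance of the neutral benchmark"

    # With the study's complex-scenario figure (34 per cent):
    print(audit_selection_rate([True] * 34 + [False] * 66))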

To genuinely improve fairness, AI tools should:

  • Prompt evaluators to focus on job-relevant competencies
  • Embed diversity and inclusion principles into decision workflows
  • Make reasoning transparent and auditable
  • Support, rather than replace, human judgment

In this model, AI becomes part of a broader 'fairness infrastructure' – working alongside structured interviews, standardised criteria and accountability processes.

"The goal isn't to replace human judgment with an algorithm, but to build a 'fairness infrastructure' that supports it," says Yang. "When we calibrate AI to prioritise inclusion, we create a more equitable landscape where talent isn't overlooked due to a checkbox."
