AI Overload: Gig Workers Struggle With Info Flood

Macquarie University/The Lighthouse
Macquarie University research shows efforts to make AI management more transparent may be increasing mental strain for gig workers.

For millions of gig workers who drive for companies such as Uber Eats, DoorDash and Deliveroo, there is no human manager to call, no supervisor to appeal to and no office to walk into. Decisions about pay, performance, penalties and access to work are made by algorithms.

Increasingly, those algorithms are trying to explain themselves. This push towards 'explainable AI' is often presented as an obvious good. If workers understand why an algorithm made a decision, the thinking goes, they are more likely to see it as fair, accept the outcome and trust the system behind it. Regulators, policymakers and platform companies have largely embraced this logic.


But new research suggests the reality is more complicated. In some cases, explaining too much can actually make things worse.

A large experimental study involving more than 1100 gig workers examined how different types of AI explanations affect workers' acceptance of algorithmic decisions and their relationship with the platform. The research found that transparency helps up to a point, but piling on layers of explanation can overwhelm workers, reduce trust and damage management relationships.

"We often assume transparency is a universal remedy for AI scepticism," says Miles Yang, an Associate Professor at Macquarie Business School. "But our findings suggest when you indiscriminately layer explanations, you aren't empowering the worker. You are just increasing their cognitive burden."

When transparency becomes a burden

Gig workers operate under constant time pressure, often juggling multiple apps and income streams. They make rapid decisions in traffic, on the phone or between jobs. Crucially, they have little or no access to human managers when something goes wrong.

In this context, AI systems don't just support management – they are management.

The study examined how workers respond to different explanation styles commonly used in algorithmic systems. Some explanations are local, offering detailed, case-specific information, such as exactly how late a delivery was or which metric triggered a penalty. Others are counterfactual, describing hypothetical alternatives – for example, what would have happened if a worker had taken a different action.

Individually, both types of explanation can be useful. Local explanations provide clarity. Counterfactual explanations help workers learn how to avoid similar outcomes in the future.

The problem arises when platforms combine both at once.


"When you ask a worker to analyse the specific reason for a penalty while simultaneously processing a hypothetical scenario of how they could have avoided it, the mental effort required outweighs the benefit of the information," says study co-author, Associate Professor Ying Candy Lu.

When workers are presented with highly detailed performance data and asked to process 'what-if' scenarios at the same time, their acceptance of the AI decision drops. Instead of feeling informed, workers feel overwhelmed. Perceptions of fairness weaken, and the relationship between worker and platform suffers.

Acceptance or explanation?

Acceptance of AI decisions plays a central role in shaping management relationships. Workers who accept algorithmic decisions – even negative ones – are more likely to view the platform as fair, trustworthy and legitimate.

But acceptance is not driven by how much information workers receive. It is driven by whether the explanation is cognitively manageable.

This challenges a core assumption shaping current debates about AI governance: more transparency does not automatically lead to better outcomes. Evidence suggests poorly designed transparency can backfire, particularly in high-pressure work environments like the gig economy.

"Trust isn't built on the volume of data provided, but on the clarity of the communication," says Associate Professor Yang. "When the explanation becomes a puzzle the worker has to solve, the perception of fairness evaporates."

A regulatory blind spot

These findings have important implications for Australia's ongoing debates about gig work regulation and algorithmic management. Recent reforms give regulators greater power to scrutinise fairness, transparency and control in platform work. Yet much of the regulatory focus remains on whether platforms explain decisions, rather than on how those explanations are delivered.

An AI system can comply with transparency requirements and still make work harder by overloading workers with complex information. From a regulatory perspective, the system appears accountable. From a worker's perspective, it is exhausting.

If AI is functioning as a workplace manager, then explanation design should be treated as a management practice – one capable of supporting or undermining worker wellbeing.

"If AI is going to act as a boss, it needs to exhibit the qualities of a good boss. One that communicates clearly and concisely, rather than one that dumps a raw data file on your desk and walks away," says Associate Professor Ying Candy Lu.

Rethinking explainable AI for work

None of this suggests explainable AI is a bad idea. Transparency still matters, particularly in systems that affect income and job security. The research suggests explainability needs to be selective, contextual and informed by how people actually process information under pressure.

Just as good human managers know when to explain, when to summarise and when to step back, AI systems need to be designed with an understanding of human limits. More information is not always better information.

Associate Professor Miles Yang and Associate Professor Ying Candy Lu work in Macquarie Business School's Department of Management.
