As debates about artificial intelligence and employment intensify, new research suggests that even warnings of near-term job automation do little to shift public attitudes.
In a survey-based study, political scientists Anil Menon of the University of California, Merced, and Baobao Zhang of Syracuse University examined how people respond to forecasts that "transformative AI" could arrive as early as 2026 or as late as 2060.
The study will appear in The Journal of Politics.
The researchers found that shorter timelines made respondents slightly more anxious about losing their jobs to automation, but did not meaningfully alter their views on when job losses would occur or their support for government responses such as retraining workers or providing a universal basic income.
In the survey of 2,440 U.S. adults, respondents who read about the rapid development of large language models and other generative systems, similar to those behind ChatGPT and text-to-image programs, did predict that automation might arrive somewhat sooner. Yet their policy preferences and economic outlooks remained essentially unchanged. When all informational treatments were combined, respondents showed only modest increases in concern about technological unemployment.
"These results suggest that Americans' beliefs about automation risks are stubborn," the authors said. "Even when told that human-level AI could arrive within just a few years, people don't dramatically revise their expectations or demand new policies."
Menon and Zhang say their findings challenge the assumption that making technological threats feel more immediate will mobilize public support for regulation or safety nets.
The study draws on construal level theory, which examines how perceived temporal distance shapes people's judgments of risk. Participants told that AI breakthroughs were imminent (arriving in 2026) were not significantly more alarmed than those given a distant timeline (2060).
The survey, fielded in March 2024, was quota-representative by age, gender and political affiliation. Respondents were randomly assigned to one of four groups: a control group or one of three treatment groups exposed to short-term (2026), medium-term (2030), or long-term (2060) automation forecasts. Each vignette described experts predicting that advances in machine learning and robotics could replace human workers in a wide range of professions, from software engineers and legal clerks to teachers and nurses.
After reading the vignette, participants estimated when their jobs and others' jobs would be automated, reported confidence in those predictions, rated their worry about job loss, and indicated support for several policy responses, including limits on automation and increased AI research funding.
While exposure to any timeline increased awareness of automation risks, only the 2060 condition significantly raised worry about job loss within 10 years, perhaps because that forecast seemed more credible than claims of imminent disruption.
These results arrive amid widespread debate over how large language models and other generative systems will reshape work. Tech leaders have predicted human-level AI may emerge within the decade, while critics argue that such forecasts exaggerate current capabilities.
The study by Menon and Zhang shows that the public remains cautious but not panicked, an insight that may help policymakers gauge when and how citizens will support interventions such as retraining programs or universal basic income proposals.
The authors noted several caveats. Their design focused on how timeline cues influence attitudes but did not test other psychological pathways, such as beliefs about AI's economic trade-offs or the credibility of expert forecasts. The researchers also acknowledge that their single-wave survey cannot track changes in individuals' perceptions over time. Future research, they suggested, could use multi-wave panels or examine reactions to specific types of AI systems.
"The public's expectations about automation appear remarkably stable," they said. "Understanding why they are so resistant to change is crucial for anticipating how societies will navigate the labor disruptions of the AI era."