The headlines are scary, reporting one round of mass layoffs after another from companies including Amazon, Microsoft, HP, General Motors, and UPS. Although the latest report from the U.S. Labor Department showed a slight uptick in hiring last month, the job market is far from stable, experts warn. Hiring remains at a record low, with 1.28 million fewer people getting hired in 2025 than in 2024.
Meanwhile, new and powerful artificial intelligence agents are emerging that can produce in seconds what it once took teams of people months or years to achieve—most notably, OpenAI's GPT-5.3 Codex and Anthropic's Claude Opus 4.6. The release of both models on Feb. 5 triggered a flood of posts on Reddit and other platforms from users who fear these bots and agents will take their jobs.
With these widespread concerns as a backdrop, Johns Hopkins University will co-host a forum titled "Will AI Make Work Obsolete?" at 6:45 p.m. on Wednesday, Feb. 25, at the Hopkins Bloomberg Center in Washington, D.C.
Participants will debate the hotly contested question, with former presidential candidate Andrew Yang, who founded the Forward Party, and Simon Johnson, a Nobel Prize-winning economist, arguing that AI will make work obsolete. Facebook cofounder Chris Hughes and Rumman Chowdhury, the founder and CEO of Humane Intelligence and an expert in responsible AI, will take the other side, arguing that AI will not replace human workers.

Image caption: Ritu Agarwal and Richard Smith
The debate is the third installment of the Hopkins Forum, a series of eight debates hosted in partnership with Open to Debate, the leading nonpartisan media platform steering the national conversation around the art of debate and the importance of free speech, and the SNF Agora Institute at Johns Hopkins University.
Ahead of that discussion, which is free and open to the public but requires advance registration, the Hub spoke with two experts from the Johns Hopkins Carey Business School who have spent decades studying how individuals and organizations adapt to technological change—Ritu Agarwal, a professor of information systems and health, who cofounded and codirects the Center for Digital Health and Artificial Intelligence at Hopkins, and Richard Smith, a professor of practice and director of the university's Human Capital Development Lab.
In the conversation below, Agarwal and Smith weigh in on the effects of AI on workers across industries at various levels and what the evidence suggests about the evolving scenario.
Note: The conversation has been edited for flow and clarity.
People are on edge, with a Pew Research Center survey showing that 64% of Americans believe AI will lead to fewer jobs over the next 20 years. But is the picture that clear-cut—and can we blame recent layoffs and the struggling labor market on AI?
Ritu Agarwal: New technologies have long altered how we work, and humans have adapted. Just think of the telephone, the Xerox machine, the fax machine, and the internet—all game changers that altered our work but didn't replace us.
That said, I've never seen a technology as revolutionary as AI. It's significantly changing how we work and will continue to do so, but that doesn't mean humans will no longer be needed. Humans will steer the ship and be needed in different capacities.
When people say so many of the job losses are attributable to AI, I have a hard time swallowing that because the cause and effect aren't clearly established. There's a lot of media hype, and the data is patchy at best. For example, there's the economic argument that companies are retrenching employment in the face of tariffs and the post-pandemic hiring boom—that they're bloated and under pressure from shareholders to retrench and save on labor costs. Usually, organizations and companies cut jobs for a variety of reasons, and those variables need to be teased apart and analyzed before jumping to a single conclusion. This will take some time and lots of good data.
Rick, you recently co-authored an opinion piece in The Wall Street Journal titled "AI Means the End of Entry-Level Jobs." The headline speaks for itself, but can you break this down for us? Are entry-level jobs over and done with?
Rick Smith: First, let me say: The title is alarmist and doesn't reflect the overall message, which is more nuanced. But it does reflect something happening in the workforce.
In our studies on well-being for the Human Capital Development Lab, we've noticed that younger workers are experiencing less well-being on the job. They're less engaged and fulfilled. They feel less autonomy, purpose, and belonging. We've also noticed unemployment rates rising for this demographic—specifically for 22- to 25-year-olds—in AI-affected sectors like manufacturing, finance, software engineering, marketing, and law. Unemployment rates are more stable, however, for older, more experienced workers in these areas. So we wondered: Why the discrepancy? What role is AI playing here that can help us understand what younger workers are facing?
Historically, younger workers have embraced new technologies, while older workers tended to resist change. Now, however, the opposite is happening because employees with more experience and knowledge are needed to implement AI and figure out how to take advantage of its capabilities. Junior employees don't have that knowledge and experience.
At the same time, AI is replacing some of the routine automation work performed by junior employees—tasks like data entry, customer service, coding, design and formatting, and document reviews. So it's a double whammy. Junior workers are, in some cases, being replaced by AI, and they're not able to manage AI because they don't have the experience.
We need to remember: AI isn't just another tool—it's a shift in how people work. It requires judgment and strategy, which is often less developed in younger workers. This, in turn, presents a conundrum. For example, what will happen when we need to replace the more experienced people managing AI? How do we build up the career pipeline with people who have that level of experience? That's a problem that will need to be solved.
If the traditional bottom rung of the career ladder disappears, then we will need to find ways for junior employees to learn alongside more senior employees—mentor-intensive programs where they can develop judgment, strategy, and other skills that AI can't fully replicate.
Ritu, you've published studies suggesting that AI will not replace human workers. What's your take on whether AI is coming for our jobs?
Agarwal: I think Rick and I are more aligned than not, and I don't think there are any absolutes just yet. The jury is still out. I do believe, as Rick mentioned, that certain routine and tedious tasks are easily relegated to AI. And it makes sense from both a business perspective and an individual productivity perspective to leverage the power of this technology to make work faster and more efficient.
At the same time, knowledge work is complicated. As Rick mentioned, it requires many human qualities that AI doesn't necessarily replicate so well. This is why I think of AI as augmenting rather than replacing what humans can do, and especially so in knowledge work.
In some professions, though, the situation may differ. For example, the new plugin for Claude released earlier this month produces extremely high-quality computer code. And a lot of coding work is being relegated to this software.
Does that mean we no longer need human coders? No, it means we need software engineers with different kinds of skills. We don't need them to write the Python code or whatever programming language they're using. Instead, we need them to see if the appropriate guardrails have been put into the software, or we need them to test the product, or to ensure its compliance with regulations.
I do envision replacement for a small, limited number of tasks. For instance, many factories are using AI-based robots to pick up and move stuff from one pallet to the next. That task is probably best done by the AI, but in corporate work and knowledge work, I don't envision AI as a complete substitution for the end-to-end activity and responsibilities of an executive.
You talk a lot about teaming in your work, Ritu. What does that mean and look like?
Agarwal: Essentially, it's when humans and AI team up on a job, and it plays out differently in each context, depending on the nature of both the task and the individual. For instance, you might envision a team made up of one or two individuals and four or five AI agents, as opposed to a five-person team. This takes the teaming discussion and team dynamics challenge to a whole new level. Who delegates tasks? Who leads? Who checks output? Who does the quality control? The roles will need to get redefined and redistributed.
Rick, what's your take on teaming, or the idea of augmentation versus automation? Is that where we're heading—toward more teaming and augmentation, as opposed to all-out replacement?
Smith: We're still at the stage of AI development where we rely on human judgment—a human filter—to make sure we're on the right track and that the AI isn't hallucinating, as agents can do. But I'm concerned that some organizations are replacing humans with AI without fully understanding the long-term impact not only on people but also on the skills companies will need to succeed as we move into a new kind of work world.
What about jobs that involve creativity, such as writing and journalism, design, and entertainment? Are those jobs being slashed by AI?
Smith: Some may argue that AI approaches creativity in a way that differs from humans—and I would argue that could be a good thing. I can imagine breakthroughs with creativity, whether in writing or other areas, and it's important to keep an open mind, asking the question: How might I partner with AI to create something new?
Agarwal: I think there will always be an appetite for human creativity, meaning it will still have a place. But it will be enhanced with powerful AI and go in the bucket of augmentation.
In some ways, we're all locked into our mental models of thinking about creativity as a purely human characteristic. I'm willing to be open-minded and say that if you take the creativity of millions of individuals, which is really what's captured in AI, then you could have a synergistic creativity that is possibly greater than that of any one individual.
I'm not ready to say that AI is not creative. What I worry more about as an educator and a professor is what AI does to learning and skill development. For instance, studies have now started coming out about the atrophy of skills that can come with cognitive offloading, and I suspect that the interaction of AI with learning will be a huge area of concern and research in an AI-powered world.
What can employees and jobseekers do to protect themselves during this time of uncertainty?
Smith: The reskilling and upskilling imperative is so critical right now that we have the hackneyed phrase, "AI won't take your job, but somebody who knows AI will take your job."
Today's employers want to hire people who have developed the skills and ability to work with AI. In corporate and professional settings, they want people who are comfortable with the tools and understand how to use them judiciously for certain tasks, but not others. These are learned skills and capabilities. As educators, we have to build in this training for our students and faculty. And as managers and leaders, we have to spend the time and resources required to train our existing workforce.
I encourage people to think about an old parallel: the rollout of the calculator, when everyone warned us against introducing them in classrooms. They said calculators would replace people's math skills, without considering the many things we could do with a calculator in our hands. The same happened when laptops and typing replaced cursive handwriting. So it's important to think about progress.
How do you reduce the fear and anxiety? You do that by understanding the technology, by building your AI literacy. And that would be my advice to individuals, managers, and leaders.
Agarwal: I now think of the following statement as an axiom: A new configuration of skills and knowledge will be needed. And that involves upskilling. I agree with Rick that developing AI literacy is critical. We can't wait. The technology is already here. China just passed new legislation mandating that AI will be taught in schools, while the AI literacy bill in the U.S. has been knocking around Congress for a while now, leading to an executive order from the president that hasn't resulted in much yet. The message is to start learning it now, to embrace it and discover how it can be deployed to effectively advance individuals, businesses, and societies. That's our collective future.