
The Southeastern Pennsylvania Transportation Authority piloted a new enforcement tool in Philadelphia in 2023: AI-powered cameras mounted on seven of its buses. The results were immediate and dramatic: In just 70 days, the cameras flagged over 36,000 cars blocking bus lanes.
Author
- Murugan Anandarajan, Professor of Decision Sciences and Management Information Systems, Drexel University
The results of the pilot gave the transportation authority, also called SEPTA, valuable data on bus lane obstruction and insights into how technology can help combat the problem.
In May 2025, SEPTA and the Philadelphia Parking Authority officially launched the program citywide. More than 150 buses and 38 trolleys across the city are now fitted with similar artificial intelligence systems. The onboard cameras use computer vision to spot vehicles blocking bus lanes and scan license plates to identify the vehicles breaking the rules. If the system flags a possible infraction, a human reviewer confirms it before a fine is issued: US$76 in Center City, $51 elsewhere.
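To make that workflow concrete, here is a minimal sketch, in Python, of a human-in-the-loop pipeline like the one described above. It is purely illustrative: the function names and data fields are my assumptions, not SEPTA's or the Philadelphia Parking Authority's actual software; only the fine amounts come from the program itself.

```python
from dataclasses import dataclass
from typing import Optional

# Fine amounts cited above: US$76 in Center City, $51 elsewhere.
CENTER_CITY_FINE = 76
STANDARD_FINE = 51

@dataclass
class Detection:
    plate: str           # license plate read from the camera frame
    in_center_city: bool
    confidence: float    # model's confidence that the lane is blocked

def detect_violation(frame: dict) -> Optional[Detection]:
    """Hypothetical stand-in for the computer-vision step that spots a
    blocked bus lane and reads the plate from a camera frame."""
    if frame.get("lane_blocked"):
        return Detection(frame["plate"], frame["in_center_city"], frame["score"])
    return None

def issue_citation(detection: Detection, reviewer_confirms: bool) -> Optional[int]:
    """Human-in-the-loop safeguard: no ticket is issued unless a human
    reviewer confirms the AI's flag. Returns the fine amount, or None."""
    if not reviewer_confirms:
        return None
    return CENTER_CITY_FINE if detection.in_center_city else STANDARD_FINE

# Example: the camera flags a car, a reviewer confirms, a $76 fine results.
frame = {"lane_blocked": True, "plate": "ABC1234", "in_center_city": True, "score": 0.97}
detection = detect_violation(frame)
if detection is not None:
    print(issue_citation(detection, reviewer_confirms=True))  # prints 76
```

The design point worth noticing is the final step: in this kind of system, the AI only flags a possible violation, and a person makes the call.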
This rollout comes as SEPTA faces a $213 million budget shortfall, with imminent service cuts and fare hikes.
I'm a professor of information systems and the academic director of LeBow College of Business's Center for Applied AI and Business Analytics at Drexel University. The center's research focuses on how organizations use AI, and what that means for trust, fairness and accountability.
In a recent survey the center conducted with 454 business leaders from industries including technology, finance, health care, manufacturing and government, we found that AI is often rolled out faster than the governance needed to make sure it works fairly and transparently.
That gap between efficiency and oversight is especially common in public-sector organizations, according to our survey.
That's why I believe it's important for SEPTA to manage its AI enforcement system carefully to earn public trust, while minimizing risks.
Fairness and transparency
When cars block a bus lane, they clog traffic. The resulting delays can mess up a person's day, causing missed connections or making riders late for work. That can leave riders feeling they can't rely on the transit system.
So, if AI enforcement helps keep those lanes clear, it's a win. Buses move faster, and commutes are quicker.
But here's the issue: Good intentions aren't enough if the system feels unfair or untrustworthy. Our survey also found that more than 70% of responding organizations don't fully trust their own data. In the context of public enforcement, whether by transit agencies or parking authorities, that's a warning sign.
Without trustworthy data, AI-powered ticketing can turn efficiency into costly mistakes, such as wrongly issued citations that must be refunded, lost staff time correcting errors, and even legal challenges. Public confidence matters here because people are most likely to follow the rules and accept penalties when they see the process as accurate and transparent.
Furthermore, this finding from our survey really caught my attention: Only 28% of organizations report having a well-established AI governance model in place. Governance models are the guardrails that keep AI systems trustworthy and aligned with human values.
That's troubling enough when private companies are using AI. But when a public agency like SEPTA looks at a driver's license plate and sends the driver a ticket, the stakes are higher. Public enforcement carries legal authority and demands a higher level of fairness and transparency.
The AI label effect
One may ask, "Isn't this ticketing system just like red-light or speed cameras?"
Technically, yes. The system detects rule-breaking, and a human reviews the evidence before a citation is issued.
But simply labeling the technology as AI can transform how it's perceived. This is known as the framing effect.
Just calling something AI-driven can make people trust it less. Research has shown that whether a system is grading papers or hiring workers, the exact same process draws more skepticism when AI is mentioned than when it isn't. People hear "AI" and assume the machine is making judgment calls, so they start looking for flaws. Even when people believe the AI is accurate, the trust gap never fully closes.
That perception means public agencies need to pair AI-based enforcement with transparency, visible safeguards and easy ways to challenge mistakes. These measures increase trust in AI-based enforcement.
We've seen what can go wrong, and how quickly trust can erode, when an AI-based enforcement system malfunctions. In late 2024, AI cameras on Metropolitan Transportation Authority buses in New York City wrongly issued thousands of parking tickets, including nearly 900 cases where the drivers had actually followed the rules and parked legally.
Even if such errors are rare, they can damage public confidence in the system.
Build trust into the system
The Organization for Economic Cooperation and Development, the international body setting AI policy standards across dozens of countries, has found that people are most likely to accept AI-driven decisions when they understand how those decisions are made and have a clear, accessible way to challenge mistakes.
In short, AI enforcement tools should work for people, not just on them. For SEPTA, that could mean the following:
- Publishing clear bus-lane rules and any exceptions, so people know what's allowed.
- Explaining safeguards, like the fact that every bus-camera violation is reviewed by Philadelphia Parking Authority staff before a ticket is issued.
- Offering a straightforward appeals process, including management review and a right to further appeal.
- Sharing enforcement data, such as how many violations and appeals are processed.
These steps signal that the system is fair and accountable, helping shift it from feeling like a ticketing machine to feeling like a public service people can trust.
Murugan Anandarajan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.