USC AI Masters Unknown Through Self-Learning

University of Southern California

For years, the guiding assumption of artificial intelligence has been simple: an AI is only as good as the data it has seen. Feed it more, train it longer, and it performs better. Feed it less, and it stumbles.

A new study from the USC Viterbi School of Engineering, accepted at IEEE SoutheastCon 2026 (taking place March 12-15), suggests something far more surprising: with the right method in place, an AI model can dramatically improve its performance in territory it was barely trained on, pushing well past what its training data alone would allow.

The method was developed by Minda Li, a USC Viterbi undergraduate who has been pursuing research since her freshman year, working alongside her advisor Bhaskar Krishnamachari, a Faculty Fellow and Systems Professor in the Ming Hsieh Department of Electrical and Computer Engineering, with a joint appointment in the Thomas Lord Department of Computer Science at the USC Viterbi School of Engineering and the USC School of Advanced Computing. Together, they tested GPT-5's ability to write code in Idris, an extraordinarily obscure programming language with a fraction of the online presence of mainstream languages like Python. The results were striking: by giving the AI feedback on its errors and letting it try again, Li pushed the model's success rate from a dismal 39% all the way to 96%.

"Our AI tools are now able to transcend their initial training. Used to be, maybe a year or two ago, you would say an AI model is only as good as the data it has seen. This paper is saying something different." — Prof. Bhaskar Krishnamachari

A Language So Obscure, Even the Researchers Didn't Know It

Python, the world's most popular programming language, has over 24 million code repositories publicly available online, a vast library that AI models like GPT-5 learn from during training. Idris, the language Li and Krishnamachari chose to test, has approximately 2,000, roughly 10,000 times fewer.

The choice of Idris was deliberate, and, as Krishnamachari describes it, a little playful. "We were hunting for a language so obscure that we hadn't heard of it," he said. "I think we were just in my office together, googling around, trying to find some crazy language that no one's ever heard of." They found Idris, a dependently typed functional programming language used by a small community of specialists, and decided it was the perfect test case.

Crucially, neither researcher could write a line of it themselves. "Neither Minda nor I had ever coded in it, and frankly, we could not tell you if the code was correct or wrong," Krishnamachari admitted. That is part of what makes the findings so striking: Li was guiding an AI to master a language that its own guides could not speak.

The Breakthrough: A Feedback Loop That Changes Everything

Li started by simply asking GPT-5 to solve 56 Idris coding exercises on Exercism, a popular coding practice platform. Out of the box, the model solved only 22 of them, a 39% success rate, far below its 90% success rate in Python and 74% in Erlang.

She then tried several approaches to improve performance: providing documentation, error manuals, and reference guides. These helped somewhat, pushing the success rate to the low 60s, but never dramatically.

The breakthrough came when she implemented what they call a compiler feedback loop. A compiler is the software that translates human-written code into instructions a computer can execute. When code is wrong, the compiler says so, precisely and in technical detail. She began capturing those error messages and feeding them directly back to GPT-5, asking it to fix the specific problems identified and try again, up to 20 times per problem.
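The loop described above can be sketched in a few lines of Python. Everything here is illustrative: `generate` stands in for a call to the language model and `compile_check` for invoking the Idris compiler; the study's actual prompts and tooling are not detailed in this article.

```python
MAX_ATTEMPTS = 20  # the study allowed up to 20 retries per problem

def feedback_loop(generate, compile_check, task, max_attempts=MAX_ATTEMPTS):
    """Ask the model for code, compile it, and feed errors back until it passes.

    generate(task, feedback) -> candidate code (feedback is None on the first try)
    compile_check(code) -> (ok, error_messages)
    Returns (solution, attempts_used), or (None, max_attempts) on failure.
    """
    feedback = None
    for attempt in range(1, max_attempts + 1):
        code = generate(task, feedback)   # model produces a candidate solution
        ok, errors = compile_check(code)  # compiler verdict plus its error text
        if ok:
            return code, attempt          # success: return solution and tries used
        feedback = errors                 # next prompt includes the raw errors
    return None, max_attempts             # gave up after exhausting the budget
```

The key design point is that the only "knowledge" injected between attempts is the compiler's own error output, which is exactly the kind of clear, objective feedback the researchers argue generalizes beyond code.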

"I thought we'd probably get a 10% jump," said Li, who designed and ran the experiments. "I was surprised that just that alone, seemingly one simple thing, just keep recompiling, keep trying, was able to get to 96%."

Beyond Code: Why This Changes Everything

What Li and Krishnamachari built is essentially a method for unlocking capability that was always there but inaccessible. By engineering the right kind of structured feedback, they found a way to get far more out of an AI model than its training data alone would ever produce.

Krishnamachari envisions this approach being applied far outside the world of software and niche programming languages. "Imagine you're trying to get an AI to build 3D models of buildings," he said. "You have something that gives feedback: this model is structurally unsafe, it doesn't have the right distribution of materials, it's too expensive to build. Whatever it is, it just gives feedback on everything the AI generates, every iteration. What I've learned from this project is that so long as you can figure out how to provide that kind of clear and correct feedback, there's a chance we can now significantly improve the quality of AI outputs."

He also sees applications in mathematical reasoning and even legal logic, any domain with rules clear enough to generate objective feedback. "If you asked an AI agent to produce a proof of a theorem, it should be fairly easy to say this is incorrect, and here's why, and have it take another crack at it."

The research may also have meaningful implications for endangered and low-resource human languages. Krishnamachari's former Ph.D. student Jared Coleman has been working on Owens Valley Paiute, a Native American language with very limited written data, exploring whether AI can assist in translation with minimal training, mirroring the same core challenge Li tackled with Idris.

One Problem Down, One to Go

Li is already thinking about what comes next. The current system essentially brute-forces its way to a solution, trying and failing until something works, but does not retain what it learned from problem to problem. She wants the model to get smarter with each problem rather than starting from scratch every time.

For Krishnamachari, the bigger picture is about what AI is becoming. "Part of the craziness of all of this is getting an AI tool to do a task that we cannot do ourselves," he said. "We are building tools that are, in some sense, more powerful than we are." That doesn't worry him; it excites him. AI, he believes, will allow us to execute ideas we previously thought were out of reach, freeing us from the grunt work and putting the onus back where it belongs: on having good ideas in the first place.

It started, after all, with two people in an office, googling around, wondering what would happen if they tried something a little bit crazy.
