For the first time, researchers have used an advanced AI model that understands both images and language to model dyslexia, paving the way for potential new treatments.
Dyslexia, the world's most common learning disorder, impairs reading, spelling and writing and is estimated to affect up to 20% of the global population. Traditional approaches to studying it, such as behavioral and neuroimaging methods, have provided valuable insights but remain limited in their ability to test the underlying mechanisms of reading impairments.
Now, researchers from EPFL's NeuroAI Lab, part of the Schools of Computer and Communication Sciences and of Life Sciences, have modelled dyslexia using next-generation Vision Language Models, which can capture the whole pipeline from seeing words to processing and understanding their context.
In a paper presented this week at the 2026 International Conference on Learning Representations, one of the fastest-growing artificial intelligence conferences in the world, the researchers outlined how, using stimuli from cognitive neuroscience, they found that parts of the AI digital-twin brain process written words in the same way a human brain does. They then changed how those parts worked.
Similarities in behavior
"When we had identified the visual word form areas of the AI brain, areas that respond strongly to written words, we "turned these parts off" to see if the model had neurons that form a whole network and that would do the same thing as a human brain. We found that the AI had trouble reading, but it could still understand images and language in general... just as with people affected by dyslexia," explained Melika Honarmand, a doctoral assistant in the NeuroAI Lab and lead author of the paper. "It did work and showed us similarities in behavior."
This research comes on the back of almost a decade of work using vision and language models independently and has only been possible with the very latest generation of combined Vision Language Models.
"Vision only and language only models do predict some brain activity and some behaviors, but they are far from perfect," said Professor Martin Schrimpf, head of the NeuroAI Lab. "The Vision Language models we used were only released a few months before we started this project and even the earlier combined models were not capable enough for us to target dyslexia. This research has been made possible by the exponential recent advances in AI," he continued.
In addition to being the lab's first project using a next-gen Vision Language Model, the research takes a step away from past work modelling a healthy population to focus on a patient population.
"I'm not aware of any other work done in the neuro AI field that has done this," added Schrimpf. "Dyslexia kept coming up as a disorder or a dysfunction that is very difficult to describe, and nobody really knows the mechanism, but ethically you can't go into a human and knock out all those neurons. In the model, we can do mechanistic perturbations, changing anything the way we like, so it just felt like a big opportunity."
After completing this part of the study, the researchers turned to assessing different fonts, some commonly used and others developed specifically for those impacted by dyslexia.
A general framework for investigating brain disorders
"We checked how accurately the model could define texts using different fonts without anything else changing and we found that it did significantly better with the fonts specifically made for dyslexic people and worse on those known to be problematic for them. We are now using the model to create the best font possible for itself and by extension, those with dyslexia," said Honarmand.
"Although they are not made with exactly the same architecture, these models do try to replicate the brain so of course we expect to see some similarities. But it was surprising that a hypothesis about a specific human brain component, that when impaired results in a particular behavior, happened in exactly the same way in the model," she continued.
Beyond the study results, perhaps one of the most exciting outcomes of the research is the establishment of a computational framework for investigating brain disorders.
"We've applied this to dyslexia, but we think the general framework that we have developed is broadly applicable to a range of different brain dysfunctions. Currently we are investigating visual hallucinations linked to Parkinson's and more broadly, depression. Of course, not every clinician will immediately be convinced by the use of AI-based digital brains but at some point, you cannot reject the evidence anymore. If the results are there then at the very least, it just might be a useful tool," concluded Schrimpf.