The field of artificial intelligence is moving at a staggering clip, with breakthroughs emerging in labs across MIT. Through the Undergraduate Research Opportunities Program (UROP), undergraduates get to join in. In two years, the MIT Quest for Intelligence has placed 329 students in research projects aimed at pushing the frontiers of computing and artificial intelligence, and using these tools to revolutionize how we study the brain, diagnose and treat disease, and search for new materials with mind-boggling properties.
Rafael Gomez-Bombarelli, an assistant professor in the MIT Department of Materials Science and Engineering, has enlisted several Quest-funded undergraduates in his mission to discover new molecules and materials with the help of AI. “They bring a blue-sky open mind and a lot of energy,” he says. “Through the Quest, we had the chance to connect with students from other majors who probably wouldn’t have thought to reach out.”
Some students stay in a lab for just one semester. Others never leave. Nick Bonaker is now in his third year working with Tamara Broderick, an associate professor in the Department of Electrical Engineering and Computer Science, to develop assistive technology tools for people with severe motor impairments.
“Nick has continually impressed me and our collaborators by picking up tools and ideas so quickly,” she says. “I particularly appreciate his focus on engaging so carefully and thoughtfully with the needs of the motor-impaired community. He has very carefully incorporated feedback from motor-impaired users, our charity collaborators, and other academics.”
This fall, MIT Quest celebrated two years of sponsoring UROP students. We highlight four of our favorite projects from last semester below.
Squeezing more energy from the sun
The price of solar energy is dropping as technology for converting sunlight into energy steadily improves. Solar cells are now close to hitting 50 percent efficiency in lab experiments, but there’s no reason to stop there, says Sean Mann, a sophomore majoring in computer science.
In a UROP project with Giuseppe Romano, a researcher at MIT’s Institute for Soldier Nanotechnologies, Mann is developing a solar cell simulator that would allow deep learning algorithms to systematically find better solar cell designs. Past efficiency gains have come from evaluating new materials and geometries with hundreds of variables. “Traditional ways of exploring new designs are expensive, because a simulation only measures the efficiency of that one design,” says Mann. “It doesn’t tell you how to improve it, which means you need either expert knowledge or lots more experiments to improve on it.”
The goal of Mann’s project is to develop a so-called differentiable solar cell simulator that computes the efficiency of a cell and describes how tweaking certain parameters will improve efficiency. Armed with this information, AI can predict which adjustments from among a dizzying array of combinations will boost cell performance the most. “Coupling this simulator with a neural network designed to maximize cell efficiency will eventually lead to some really good designs,” he says.
Mann is currently building an interface between AI models and traditional simulators. The biggest challenge so far, he says, has been debugging the simulator, which solves differential equations. He pulled several all-nighters double-checking his equations and code until he found the bug: an array of numbers off by one, which was skewing his results. With that obstacle cleared, Mann is now looking for algorithms to help the solver converge more quickly, a crucial step toward efficient optimization.
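The idea behind a differentiable simulator can be sketched in miniature. In this toy example, the "efficiency" function, its two design parameters, and its gradient are all invented stand-ins for the real device physics; the point is only that once a simulator reports derivatives alongside the efficiency value, simple gradient ascent can climb toward a better design:

```python
# Toy sketch of gradient-based design optimization in the spirit of a
# differentiable simulator. The efficiency model and both parameters
# ("thickness", "doping") are hypothetical, not actual solar cell physics.

def efficiency(thickness, doping):
    # Hypothetical smooth model that peaks at thickness=2.0, doping=1.0.
    return 0.5 - 0.1 * (thickness - 2.0) ** 2 - 0.05 * (doping - 1.0) ** 2

def gradient(thickness, doping):
    # Analytic derivatives: the extra information a differentiable
    # simulator provides beyond the efficiency value itself.
    return -0.2 * (thickness - 2.0), -0.1 * (doping - 1.0)

def optimize(thickness=0.5, doping=0.2, lr=0.5, steps=200):
    # Gradient ascent: repeatedly nudge each parameter uphill.
    for _ in range(steps):
        g_t, g_d = gradient(thickness, doping)
        thickness += lr * g_t
        doping += lr * g_d
    return thickness, doping

t, d = optimize()
print(round(t, 3), round(d, 3), round(efficiency(t, d), 3))  # → 2.0 1.0 0.5
```

A non-differentiable simulator would only return the final efficiency number, forcing the search to probe many designs blindly; the gradient turns each evaluation into a direction for improvement.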
Teaching neural networks physics to identify stress fractures
Sensors deep within the modern jet engine sound an alarm when something goes wrong. But diagnosing the precise failure is often impossible without tinkering with the engine itself. To get a clearer picture faster, engineers are experimenting with physics-informed deep learning algorithms to translate these sensor distress signals.
“It would be way easier to find the part that has something wrong with it, rather than take the whole engine apart,” says Julia Gaubatz, a senior majoring in aerospace engineering. “It could really save people time and money in industry.”
Gaubatz spent the fall programming physical constraints into a deep learning model in a UROP project with Raul Radovitzky, a professor in MIT’s Department of Aeronautics and Astronautics, graduate student Grégoire Chomette, and third-year student Parker Mayhew. Their goal is to analyze the high-frequency signals coming from, say, a jet engine shaft, to pinpoint where a part may be stressed and about to crack. By training neural networks on numerical simulations of how materials break, they hope the networks will absorb the underlying physics and learn to identify the particular points of failure.
Working from her off-campus apartment in Cambridge, Massachusetts, Gaubatz built a smaller, simplified version of their physics-informed model to make sure their assumptions were correct. “It’s easier to look at the weights the neural network is coming up with to understand its predictions,” she says. “It’s like a test to check that the model is doing what it should according to theory.”
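The core of a physics-informed model is a loss function with two parts: one that fits the data, and one that penalizes violations of a governing equation even where no data exists. The sketch below is a deliberately tiny stand-in, assuming an invented decay model u(t) = a·exp(b·t), made-up noisy samples, and the ODE du/dt = -u as the "physics"; a coarse grid search replaces gradient-based training:

```python
import math

# Toy sketch of a physics-informed fit. The data, the model form, and the
# governing equation (du/dt = -u) are illustrative assumptions, not the
# project's actual solid-mechanics simulations.

data = [(0.0, 1.05), (1.0, 0.36), (2.0, 0.14)]   # noisy samples of exp(-t)
collocation = [0.25, 0.75, 1.25, 1.75]            # points where physics is enforced

def u(t, a, b):
    return a * math.exp(b * t)

def du_dt(t, a, b):
    return a * b * math.exp(b * t)

def loss(a, b, lam=1.0):
    # Data term: mismatch between model predictions and measurements.
    data_term = sum((u(t, a, b) - y) ** 2 for t, y in data)
    # Physics term: residual of du/dt + u = 0 at collocation points,
    # nonzero whenever the model disobeys the equation -- even between
    # data points, which is what constrains the fit.
    physics_term = sum((du_dt(t, a, b) + u(t, a, b)) ** 2 for t in collocation)
    return data_term + lam * physics_term

# Coarse grid search stands in for training a neural network.
best = min(((loss(a / 20, b / 20), a / 20, b / 20)
            for a in range(1, 41) for b in range(-40, 0)),
           key=lambda triple: triple[0])
_, a_hat, b_hat = best
print(a_hat, b_hat)  # → 1.05 -1.0
```

The physics term pulls the decay rate to exactly b = -1 even though the three noisy data points alone would tolerate other values, mirroring how a physics-informed network is steered by theory as well as data.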
She picked the project to try applying what she had learned in a course on machine learning to solid mechanics, which focuses on how materials deform and break under force. Engineers are just starting to incorporate deep learning into the field, she says, and “it’s exciting to see how a new mathematical concept may change how we do things.”
Training an AI to reason its way through visual problems
An artificial intelligence model that can play chess at superhuman levels may be hopeless at Sudoku. Humans, by contrast, pick up new games easily by adapting old knowledge to new environments. To give AI more of this flexibility, researchers created the Abstraction and Reasoning Corpus (ARC), a visual-reasoning dataset meant to push the field toward new techniques for solving problems involving abstraction and reasoning.
“If an AI does well on the test, it signals a more human-like intelligence,” says first-year student Subhash Kantamneni, who joined a UROP project this fall in the lab of Department of Brain and Cognitive Sciences (BCS) Professor Tomaso Poggio, which is part of the Center for Minds, Brains and Machines.
Poggio’s lab hopes to crack the ARC challenge by merging deep learning and automated program-writing to train an agent to solve ARC’s 400 tasks by writing its own programs. Much of their work takes place in DreamCoder, a tool developed at MIT that learns new concepts while solving specialized tasks. Using DreamCoder, the lab has so far solved 70 ARC tasks, and Kantamneni this fall worked with master of engineering student Simon Alford to tackle the rest.
To try to solve ARC’s 20 or so pattern-completion tasks, Kantamneni created a script to generate similar examples to train the deep learning model. He also wrote several mini programs, or primitives, to solve a separate class of tasks that involve performing logical operations on pixels. With the help of these new primitives, he says, DreamCoder learned to combine the old and new programs to solve ARC’s 10 or so pixelwise tasks.
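A primitive of the kind described above can be very small. The sketch below assumes a simple encoding of grids as nested lists of 0s and 1s and invents two primitive names; DreamCoder's actual representation differs, but the idea is the same: give the system one reusable operation that applies a logical function to two grids cell by cell:

```python
# Illustrative "pixelwise" primitives: the grid encoding and the names
# land/lxor are assumptions for this sketch, not DreamCoder's actual API.

def pixelwise(op, grid_a, grid_b):
    # Apply a binary logical operation to corresponding cells of two grids.
    return [[op(a, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(grid_a, grid_b)]

land = lambda a, b: a & b   # cell is filled in both grids
lxor = lambda a, b: a ^ b   # cell is filled in exactly one grid

left = [[1, 1], [0, 1]]
right = [[1, 0], [0, 1]]

print(pixelwise(land, left, right))  # → [[1, 0], [0, 1]]
print(pixelwise(lxor, left, right))  # → [[0, 1], [0, 0]]
```

A program-synthesis system can then compose such primitives with ones it already knows, rather than rediscovering pixel logic from scratch for every task.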
The coding and debugging was hard work, he says, but the other lab members made him feel at home and appreciated. “I don’t think they even knew I was a freshman,” he says. “They listened to what I had to say and valued my input.”
Putting language comprehension under a microscope
Language is more than a system of symbols: It allows us to express concepts and ideas, think and reason, and communicate and coordinate with others. To understand how the brain does it, psychologists have developed methods for tracking how quickly people grasp what they read and hear. Longer reading times can indicate when a word has been improperly used, offering insight into how the brain incrementally finds meaning in a string of words.
In a UROP project this fall in Roger Levy’s lab in BCS, sophomore Pranali Vani ran a set of sentence-processing experiments online that were developed by an earlier UROP student. In each sentence, one word is placed in such a way that it creates an impression of ambiguity or implausibility. The weirder the sentence, the longer it takes a human subject to decipher its meaning. For example, placing a verb like “tripped” at the end of a sentence, as in “The woman brought the sandwich from the kitchen tripped,” tends to throw off native English speakers. Though grammatically correct, the wording implies that bringing rather than tripping is the main action of the sentence, creating confusion for the reader.
In three sets of experiments, Vani found that the biggest slowdowns came when the verb was positioned in a way that sounded ungrammatical. Vani and her advisor, Ethan Wilcox, a PhD student at Harvard University, got similar results when they ran the experiments on a deep learning model.
“The model was ‘surprised’ when the grammatical interpretation is unlikely,” says Wilcox. Though the model isn’t explicitly trained on English grammar, he says, the results suggest that a neural network trained on reams of text effectively learns the rules anyway.
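The quantity behind a model being "surprised" is surprisal: the negative log-probability the model assigns to the next word given the words so far. The sketch below uses an invented next-word distribution for the garden-path sentence above, not output from a real language model, just to show how an unexpected word earns a large surprisal:

```python
import math

# Hypothetical next-word probabilities after "The woman brought the
# sandwich from the kitchen ..." -- invented for illustration only.
next_word_probs = {"to": 0.60, "and": 0.30, "tripped": 0.02}

def surprisal(word, probs):
    # Surprisal in bits: -log2 P(word | context). High surprisal means
    # the model found the word unexpected, paralleling the longer human
    # reading times measured in the experiments.
    return -math.log2(probs[word])

print(round(surprisal("to", next_word_probs), 2))       # → 0.74
print(round(surprisal("tripped", next_word_probs), 2))  # → 5.64
```

Comparing a model's per-word surprisal with human reading times at the same words is one standard way to test whether the model has picked up the relevant grammatical expectations.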
Vani says she enjoyed learning how to program in R and to write shell scripts for shells like dash. She also gained an appreciation for the persistence needed to conduct original research. “It takes a long time,” she says. “There’s a lot of thought that goes into each detail and each decision made during the course of an experiment.”
Funding for MIT Quest UROP projects this fall was provided, in part, by the MIT-IBM Watson AI Lab.