Major law firms across the globe are investing in artificial intelligence (AI) platforms designed to streamline their work. But AI can also play a more socially conscious role in the legal arena: helping expand access to legal services.
Scott Shapiro, the Charles F. Southmayd Professor of Law, co-founded the Yale Legal AI Lab at Yale Law School two years ago to do just that.
"There was all this talk at the law school about what to do with AI, questions about the ethics of AI, the regulation of AI, the pitfalls, the promises," Shapiro said. "What was missing, I thought, was actually trying to build tools that are intellectually, ethically, and professionally responsible."
Shapiro has worked closely with Ruzica Piskac, a professor of computer science at Yale School of Engineering & Applied Science and co-founder of the lab, to learn about the latest techniques in AI and figure out which are best suited for different kinds of legal problems. His fascination with and enthusiasm for AI sometimes puzzles people - isn't he worried, they ask, that the technology might wind up robbing his law students of job opportunities?
"My response is, 'You're acting like 99% of Americans have access to lawyers. They don't,'" he said. "We can use these tools to help people access public housing and other types of benefits or to protect themselves against eviction. Cases in which it's either AI or nothing."
Yale News sat down with Shapiro, who is also a professor of philosophy in Yale's Faculty of Arts and Sciences, to talk in more detail about the focus of the AI lab. Here are five key takeaways.
Shapiro has a background in and passion for computer science.
As an undergraduate at Columbia University, Shapiro majored in computer science for a time before switching to philosophy. (It was there, in the 1980s, that he first encountered the then-nascent concept of artificial intelligence.) He revisited his love of the subject for his most recent book, "Fancy Bear Goes Phishing: The Dark History of the Information Age, in Five Extraordinary Hacks" (Farrar, Straus & Giroux, 2023).
Shapiro is also founding director of the Yale CyberSecurity Lab, which provides cybersecurity and information technology teaching facilities. And he and Piskac recently began co-teaching a course, "Law and Large Language Models," about how AI can be applied to legal reasoning.
The AI lab is using theorem provers to build legal reasoning tools.
"People think that AI is ChatGPT, whereas AI is so many other things," Shapiro said.
ChatGPT is a large language model - a model trained on enormous amounts of data to generate human-language responses to questions and other tasks. That model acts as a highly sophisticated guessing machine of sorts, which may be appropriate for certain types of legal questions but is not the best approach for the rule-based legal problems the lab is working on. For those, the lab is using theorem provers, which function more like calculators.
"These are things that most people haven't heard of, but they make your computer work," he said. "The reason why your computer doesn't crash all the time is because they take all the computer programs and put them through these checkers that look for bugs in the code. What we're doing is, instead of using theorem provers on computer programs, we're using them on legal code."
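To make the analogy concrete: where an LLM guesses a likely answer, a theorem prover checks a rule against every possible case and either proves a property or produces a counterexample. The sketch below is purely illustrative - the eligibility rule is invented, not a real statute, and this is not the lab's actual tooling. With only three yes/no facts, exhaustive checking stands in for the symbolic proof a real prover would perform.

```python
from itertools import product

# Hypothetical eligibility rule for a housing benefit (illustrative only;
# not a real statute and not the lab's actual encoding).
def eligible(income_below_limit, is_resident, has_eviction_judgment):
    # Rule: a resident with qualifying income is eligible,
    # unless an active eviction judgment disqualifies them.
    return income_below_limit and is_resident and not has_eviction_judgment

# Property to verify: an applicant with an active eviction judgment is
# never found eligible. A theorem prover would establish this symbolically;
# with three booleans we can simply check all 8 cases.
def property_holds():
    for income, resident, eviction in product([False, True], repeat=3):
        if eviction and eligible(income, resident, eviction):
            return False  # counterexample found
    return True

print(property_holds())  # True: the rule never contradicts the property
```

The point of the exercise is the guarantee: unlike a language model's answer, the result holds for every input the rule can see, which is why prover-style checking suits rigid benefits rules.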
The technology has the potential to provide tools for navigating the complicated rules that often act as barriers to accessing public benefits, he said.
The lab has already created some prototype tools.
They will begin rolling out their models once they find the right partners. One challenge in that regard is determining how best to make the tools available to the appropriate audiences. Is a tool like this best distributed as an app? And if so, how do you help people find the app? Do you work with legal aid organizations and provide them with QR codes to disseminate to their clients? These are all questions the lab is wrestling with.
"One idea I find really exciting is that we could give these kinds of tools to pro bono attorneys," Shapiro said. "Lawyers who want to donate some time on weekends, but don't know the area of law where help is needed. If they could use a tool that gives them accurate answers to questions, they could then explain it to clients who are in desperate need of legal services."
One prototype was developed in cooperation with Yale's human resources department.
The lab built a "concierge" named Alfred to help HR streamline tedious tasks, including developing guidance for hiring fellows and addressing employee benefits. In a demonstration video, an employee asks Alfred whether she can use the funds in her health flexible spending account to cover various expenses associated with a serious accident that kept her in the hospital for several weeks. Alfred responds with a series of questions designed to lead to the legally accurate answer.
Alfred provides answers to the kinds of questions that HR staff are asked all the time. "We learned an incredibly important lesson there, which is that it's not the hard problems that make the lives of the people at HR and elsewhere miserable," Shapiro said. "It's the easy problems, the things they get asked over and over and over again."
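An interaction like the one in the demonstration video can be pictured as a short chain of rules, each of which either settles the question or prompts the next one. The sketch below is a hypothetical reconstruction - the questions and rules are invented for illustration, not Yale HR's actual logic or Alfred's implementation.

```python
# Minimal sketch of a rule-driven Q&A flow of the kind Alfred demonstrates
# (hypothetical questions and rules, for illustration only).
QUESTIONS = [
    ("incurred_in_plan_year", "Was the expense incurred during the plan year?"),
    ("qualified_medical", "Is it a qualified medical expense (e.g., hospital care)?"),
    ("already_reimbursed", "Has it already been reimbursed by insurance?"),
]

def fsa_decision(answers):
    # Each rule short-circuits to a definite answer, mirroring how a
    # concierge asks only the questions it needs to reach a conclusion.
    if not answers["incurred_in_plan_year"]:
        return "Not reimbursable: expense falls outside the plan year."
    if not answers["qualified_medical"]:
        return "Not reimbursable: not a qualified medical expense."
    if answers["already_reimbursed"]:
        return "Not reimbursable: already covered by insurance."
    return "Reimbursable from your health FSA."

print(fsa_decision({"incurred_in_plan_year": True,
                    "qualified_medical": True,
                    "already_reimbursed": False}))
```

Routine questions like these are exactly the "easy problems" Shapiro describes: each one is trivial on its own, but encoding them once spares staff from answering them over and over.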