'Legal AI is a bit of a Wild West right now'

A growing number of AI tools are being developed for the legal sector, to help professionals search lengthy texts or check court rulings. Leiden SAILS researcher Masha Medvedeva, an expert on the technical development of these systems, warns: 'Users should know what's under the hood.'

Could you tell us a bit more about your research on AI tools for the legal sector?

'I have technical expertise in building AI systems and I've been embedded in various law faculties. My research focuses on technical design choices in systems that may have downstream implications for whoever ends up using them. These choices can have consequences for law as a whole, for legal practice or for individuals. Legal search tools, for instance, have been around for a very long time and are generally well made. But they can omit certain things from their search results, rank those results in different ways and so on. So if you're a legal professional using these systems, you should understand how they work at least well enough to decide whether or not to rely on a particular tool.'

You're actually quite skeptical about the development of AI tools for legal purposes. Why is that?

'In my experience, the developers who build these systems often have little understanding of the sector they're developing for. That was also one of the findings in my PhD research on systems that predict court decisions. Furthermore, tools are often never properly evaluated. I'll give you an example, again with systems that predict court decisions. Many scientific papers claim huge successes for their prediction models. But 93 percent of those academic papers evaluate their models on data that only becomes available after the judge has made a decision. That is of course not the kind of information that would be available to somebody who actually wants to predict a court decision. So in reality, these systems are predicting what the decision was rather than what it will be. Out of 171 papers in English on predicting court decisions, only twelve actually work with data that is available prior to the decision. The same goes for models built to generate legal reasoning. These models are often evaluated as if they were translation models: in essence, by counting how many words of the generated reasoning also appear in the original reasoning it's compared to. That doesn't necessarily tell you whether the model's reasoning is actually sound.'
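To make that last point concrete, here is a minimal Python sketch of a translation-style, word-overlap evaluation. It is not taken from the interview or from any specific paper: the metric is a crude stand-in for BLEU-like scoring, and the example sentences are hypothetical. It shows how such a metric can reward generated reasoning that reaches the opposite legal conclusion.

from collections import Counter

def unigram_overlap(candidate: str, reference: str) -> float:
    # Crude BLEU-like unigram precision: the fraction of candidate
    # tokens that also appear in the reference text.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    matched = sum(min(count, ref[token]) for token, count in cand.items())
    return matched / max(sum(cand.values()), 1)

# Hypothetical reference reasoning taken from a judgment.
reference = "the appeal is granted because the evidence was obtained unlawfully"
# Generated reasoning that reuses the words but flips the legal outcome.
generated = "the appeal is not granted because the evidence was obtained lawfully"

print(round(unigram_overlap(generated, reference), 2))  # 0.82: high overlap, opposite conclusion

A high overlap score here says nothing about whether the conclusion is legally correct, which is exactly the gap Medvedeva points to.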

What would your advice be to legal professionals who really want to use AI tools?
