AI's Impact on Higher Education, Criminal Justice Explored

The following story explores conversations taken from "It's (Probably) Not Rocket Science," a new podcast produced by The University of New Mexico.   

It was nearly a year ago that OpenAI publicly launched its generative artificial intelligence platforms ChatGPT and DALL-E. The availability of this new technology immediately sent shockwaves through industries across the world, and AI became part of the public's lexicon.

The sudden availability of the technology has left many with questions about how it may change their fields. Episode one of It's (Probably) Not Rocket Science, a University of New Mexico podcast, explores the impact artificial intelligence may have on the world.

Leo Lo, dean of the College of Learning and Library Sciences at UNM, has made AI, its impact, and its potential use in education a research focus in the months since he first encountered it.

"It kind of shocked everybody and it shocked me, definitely, that it can produce all these amazing written essays and very human-like responses," Lo said on a recent episode of UNM's "It's (Probably) Not Rocket Science" podcast.

As an expert in learning, Lo quickly jumped in to see how ChatGPT might change higher education and improve workflows for students and faculty alike. He took a course offered by the University of Oxford, began surveying colleagues in the University Libraries, and worked to develop best practices for the technology's use. The importance of AI literacy was immediately apparent to him.

"Every field from arts to business will be touched by people who can and know how to use AI," Lo said. "There is this is saying out there, humans are not going to be replaced by AI, at least in the short term, but will be replaced by people who use AI"

Lo recommends everyone explore the technology and weigh how it might be used in their workflows. Prompt engineering, or the ability to ask generative AI the right questions, will become a necessary skill for future-proofing a résumé. Lo developed and published the CLEAR framework for how to best prompt AI.

Here's how it works:

  1. Concise: Be brief and focused. Don't overload the AI with unnecessary information.
  2. Logical: Structure prompts logically with a clear progression of ideas.
  3. Explicit: Clearly state the expected length and format of the content.
  4. Adaptive: Adjust the words and phrasing until you're satisfied with the output.
  5. Reflective: Continuously evaluate and refine your question based on the response.
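
To make the framework concrete, here is a minimal sketch of what a CLEAR-style prompt might look like in practice, assuming the OpenAI Python SDK. The model name, topic, and prompt wording are illustrative only and are not drawn from Lo's published examples.

```python
# A minimal sketch of a CLEAR-style prompt, assuming the OpenAI Python SDK.
# The model name and prompt wording are illustrative, not Lo's own examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Concise and Logical: brief, with a clear progression (topic -> audience -> task).
# Explicit: the expected length and format are stated outright.
prompt = (
    "Summarize the benefits of AI literacy for university students. "
    "Audience: first-year undergraduates. "
    "Format: exactly three bullet points, each under 20 words."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

# Adaptive and Reflective: read the output, then adjust the wording
# (for example, tighten the audience or change the format) and re-run.
```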

Lo described himself as optimistic about the future of AI, citing its potential to help synthesize information and tailor learning for individual students, as well as save time. Already, Lo uses ChatGPT to help him respond to emails, being sure to disclose its use to any recipients and edit the writing as needed.

Still, he has concerns about the technology's uses in higher education. With discussions about plagiarism happening all over the country, Lo wants to first ensure faculty understand AI. He recommends teachers and professors provide explicit guidelines to students on how the technology may or may not be used for classwork. He also cautions users, students or not, about a phenomenon known as "hallucinations," in which the technology generates text that is entirely fictional and presents it as fact.

"In the libraries we are getting a lot of students coming in with fake citations and we have to tell them that they don't exist," Lo said. "We are using these opportunities to teach students the chatbot is great in some ways and terrible in others."

It's not just students falling for made-up information generated by AI. Just a few months ago, lawyers in New York City were sanctioned for citing fake cases in a legal brief they generated with ChatGPT. These and other legal issues are top of mind for Sonia Gipson Rankin, a UNM School of Law professor and computer scientist, who is featured in the episode.

Gipson Rankin also expressed concerns about data vulnerabilities and an inability to hold artificial intelligence systems accountable in a court of law.

She explained that governments and criminal justice systems have already employed third-party algorithms and artificial intelligence to help decide everything from whether someone may have committed fraud to whether they should be released from jail on bail.

"An algorithm is hard-coded. That is, if a user does something, then the software is programmed by a human to do something in response," Gipson Rankin said. "Artificial intelligence is a system that is able to predict or determine what would be the next right thing to do."

This difference is crucial in the legal space. When one of these systems' decisions comes into question, an algorithm and its code can be examined, and its coder can be put on the stand to explain how the technology was designed; artificial intelligence, by contrast, makes its own predictions.

"Who do I put on the stand when I need to find out why the AI decided to do this," Gipson Rankin said. "It understands how to get to an outcome. We don't have enough information on the process and that gets concerning under the law."

Despite her concerns, Gipson Rankin is ultimately optimistic about the technology and utilizes it a few times a week for fun and to explore its capabilities. Her family even had ChatGPT write a personalized song for her uncle's 80th birthday. She compared the current state of AI to the use of cars before they were made safer with seat belts.

"I'm very excited for new mechanisms and tracking measures so individuals can have appropriate recourse under the law," she said. "It's really great that we can have these ways to expand our own ideas, but we do want to do this in ways that best protect people's privacy."

Check out It's (Probably) Not Rocket Science to hear these topics explored in greater detail. Subscribe on Spotify or Apple Podcasts.
