Design Justice AI Initiative Encourages Global Collaboration on Emerging Questions

Researchers from the UConn Humanities Institute are part of a new initiative exploring questions about bias in AI technologies

The rapid expansion of AI technologies has largely happened without due consideration of the ethical implications of these models. Design Justice AI will fund researchers from across the globe to address these issues. (Pixabay)

Spam filters, Face ID, Netflix recommendations: these and many more everyday services are powered by artificial intelligence (AI).

The rapid development of AI technologies in recent years has raised important ethical questions about how these tools are built and used.

Design Justice AI, a new multi-institution effort that includes the UConn Humanities Institute, is bringing together humanities scholars from around the globe to address issues of bias and the far-reaching impacts of technologies that now operate in domains once reserved for humans.


Lauren Goodlad, chair of the Critical AI Initiative at Rutgers University, is leading the effort. Design Justice AI is supported by a $250,000 grant from the Andrew W. Mellon Foundation. The group includes international collaborators at the University of Pretoria in South Africa and at the Australian National University.

The UConn team includes Michael P. Lynch, director of the Humanities Institute and Board of Trustees Distinguished Professor of philosophy; and Yohei Igarashi, associate director and coordinator of digital humanities and media studies for the Humanities Institute and associate professor of English.

The UConn Humanities Institute's inclusion in this work reflects its standing as a global leader in the digital humanities.

"The institute has a long-standing research commitment to the ethics of AI," Lynch says. "And that is a way of trying to grapple with the changes that algorithms are bringing to our society, in particular the changes that they're bringing to how we think, how we treat each other, and how we distribute research."

The rise of "generative AI" tools such as ChatGPT, which can produce remarkably human-sounding text, and the image generator DALL-E has also sparked conversations about the nature of creativity, culture, knowledge, and learning.

AI technologies are generally trained on data scraped indiscriminately from the internet, meaning they absorb the human biases inherent in that data. This has led to, for example, Microsoft's 2016 "TayTweets" chatbot experiment learning to spew hate speech on Twitter.

"These models are trained on the internet, which raises all sorts of problems of bias and banality," Igarashi says. "So that's one of the core issues - what do humanists have to contribute to making artificial intelligence work for us in positive ways."

Design Justice AI will fund up to 20 interdisciplinary scholars to study questions that, rather than rejecting generative AI outright, explore how these technologies can be inclusive and have a positive impact on human communication and creativity.

"One of the things we need to hurry up and do is to try to figure out how we can actually use this technology in a way that reflects our better selves, rather than those parts of us that are undemocratic, non-inclusive - the worst parts of us," Lynch says.

Researchers at the University of Pretoria will examine the relationship between under-resourced languages, such as those spoken across the African continent, and technologies largely developed by English-speaking engineers. The aim is to uncover and think critically about the kinds of assumptions that developers in the Global North make when designing AI technologies.

Design Justice AI is a thoroughly interdisciplinary effort that will foster conversation between humanities and STEM researchers in this field.

Funded researchers will disseminate their findings through Critical AI's public-facing blog, interdisciplinary peer-reviewed publications, and other channels.

The effort will conclude with a meeting at the University of Pretoria next summer.

Lynch says he sees this effort as the beginning of new collaborations between researchers studying questions that will only become more important as AI technology becomes more ubiquitous and complex.

"It's not the end. It's the start of something," Lynch says. "My hope is to form a stable sustainable research network between these universities on these topics."
