PWGIA Event Spotlights AI in Academic Integrity

Since generative AI exploded in late 2022, Cornell faculty have questioned the impact it will have on assessment, critical thinking, creativity, and the classroom writ large. Members of the Provost's Working Group on Innovation in Assessment (PWGIA) will address these questions at "Academic Integrity and Artificial Intelligence: Strategies for Responding," from 2:00-3:15 p.m. Nov. 12 in Rm. G64, Kaufman Auditorium, Goldwin Smith Hall.

"We've heard from many faculty that AI has challenged them to rethink their assignments. We can all benefit from thoughtful ideas for assignment design that tap into student's motivation to learn, and that incentivize students to do the hard work that meaningful learning entails," said Rob Vanderlan, CTI executive director.

"AI and Academic Integrity" will focus on promoting student responsibility. M. Elizabeth Karns, provost fellow and senior lecturer of statistics and data science at the Cornell Ann S. Bowers College of Computing and Information Science, will deliver the event's keynote, sharing guidelines and practical tips for both faculty and student audiences.

A faculty panel discussion will follow the keynote and focus on how students can take ownership of and responsibility for their learning. The panel will feature Tim Riley, professor of mathematics in the College of Arts & Sciences and director of the Active Learning Initiative; and Kate Navickas, senior lecturer in the John S. Knight Institute for Writing in the Disciplines and director of the Cornell Writing Centers, A&S. Q&A sessions will follow both the keynote and the panel.

As a provost fellow, Karns has developed recommendations to help faculty communicate their generative AI-related expectations to students and respond proactively and productively to possible academic integrity violations. For students, she has also developed a Canvas course module to help them reflect on their personal values and focus on what they want to get out of their courses.

Riley and Navickas will discuss examples from their own teaching, including how they've worked to create cultures of ownership and accountability in their courses, and what that looks like in practice. Riley will discuss how he changed his approach to student assessment in his Introduction to Analysis course. Instead of basing students' grades on two prelims and a final, he now requires a homework portfolio and gives quizzes that students can take and retake to improve their scores.

Navickas will discuss how she uses a labor-based grading contract to create a culture of accountability in her First Year Writing seminar. This approach credits students for the work they do and the scope of their improvement as they revise their work, instead of simply grading the quality of their final product. She'll also discuss how she uses scaffolding and student reflections throughout the semester to encourage her students to evaluate their own writing and develop a deeper understanding of how the writing process works.

Other topics that may be discussed include formal and informal policies for generative AI and academic integrity in courses, and how students have responded to the presenters' course policies, assessment strategies and grading methods.

This event builds on the work of the PWGIA, which is dedicated to working towards alternative and authentic forms of assessment and recently welcomed ten Cornell faculty into its 2025-2026 cohort of faculty fellows. It also has published new case studies from the 2024-2025 cohort that demonstrate novel assessment approaches incorporated into Cornell courses over the past year.

"Academic Integrity and Artificial Intelligence: Strategies for Responding" is open to Cornell faculty and graduate students.
