TILT Master Teacher Initiative

The Master Teacher Initiative (MTI) is a university-wide program to enhance the quality of teaching within CSU’s colleges and libraries.

Visit TILT’s collection of Teaching Tips and the CNS collection of Teaching Tips.

September 12, 2024

My teaching tip this week is another in the series on generative AI (ChatGPT, etc.). The series is based on information posted by Joseph Brown, Director of the Academic Integrity Office, CSU TILT. My department is wrestling with how to approach the use of generative AI in our senior thesis course, and while looking for information and guidance I ran across Dr. Brown’s very useful blog on the TILT website. Rather than dump all the information on you at once, I am sending each of his several posts separately. Below is a post from Joseph on the issues and opportunities posed by generative AI, followed by the first portion of a description of tools for detecting generative AI use.

The Coming Homework Apocalypse

Professor and writer Ethan Mollick (Wharton School, UPenn) recently published “The Homework Apocalypse” on his site, One Useful Thing, a succinct and clear-eyed appraisal of what educators will face this fall as generative AI engines become better and more widely available. To paraphrase him: this fall will be a moment of immediate disruption, but also of massive opportunity.

What I think is most valuable about this piece for faculty is that he identifies the most common assignments that educators give students and that AI will disrupt, and he lays out, in the clearest terms, our options for approaching the disruption and threat that AI engines pose as they become free, good, and ubiquitous. His most optimistic options involve innovating to include some component of AI competency alongside or within our traditional assignments, and then holding students accountable for the accuracy of the final products.

Like others writing in this space recently, Mollick frames the advent of free and accessible generative AI as a moment for creativity and innovation in pedagogy. The examples he provides are helpful and concrete, and they can serve as a springboard for your own creativity in designing assignments and assessments.

The Homework Apocalypse threatens a lot of good, useful types of assignments, many of which have been used in schools for centuries. We will need to adjust quickly to preserve [what] we are in danger of losing, and to accommodate the changes AI will bring. That will take immediate effort from instructors and education leaders, as well as clearly articulated policies around AI use. But the moment isn’t just about preserving old types of assignments. AI provides the chance to generate new approaches to pedagogy that push students in ambitious ways. 

Read more here: (LINK).

Comparing AI Detection Tools: One Instructor’s Experience

Comparison of different programs that claim to detect AI-generated text

The post below was written by Dr. Ellie Andrews, an instructor in the Department of Anthropology and Geography. In it, Dr. Andrews shares her experience trying to verify authentic student writing in the two large-section courses she taught in the spring of 2023: the pedagogical challenges she faced creating AI-resistant assignments, and the varying results she got from the AI detection tools available to instructors on the internet. Thank you to Dr. Andrews for sharing this work. Her bio can be found at the bottom of this post.

| Student paper | Sapling | Copyleaks | ZeroGPT | OpenAI Text Classifier | CrossPlag | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 100% | 89% | 87% | OK | 100% | admitted to using “paraphrasing tools” |
| 2 | 100% | 92–95% | 96% | OK | OK | admitted using AI |
| 3 | 100% | 92% | 11% | “unclear” | 98% | did not respond to requests to meet |
| 4 | 100% | 87–90% | 7% | OK | 100% | admitted to “copy and pasting” |
| 5 | 77% | 99% | 53% | OK | 100% | admitted using AI |
| 6 | 75% | 80–93% | 89% | OK | OK | boyfriend wrote paper, he denied using AI |
| 7 | 74% | 99% | 92% | OK | OK | did not respond to requests to meet |
| 8 | 89% | OK | 33% | OK | OK | denied completely |
| 9 | 76% | OK | 7% | OK | OK | I did not contact |
| 10 | 71% | OK | 18% | OK | OK | admitted to using “paraphrasing tools” |
| 11 | 25% | 63–86% | 23% | OK | OK | used outside sources for “connective phrases” |
| 12 | 24% | 82% | 22% | OK | 12% | denied using AI, used Reverso “rephraser” |
| 13 | 24% | OK | 23% | OK | OK | I did not contact |
| 14a | 12% | OK | 11% | OK | OK | admitted to using AI; 14a is the entire essay, 14b is its AI-generated paragraphs |
| 14b | 100% | 98% | 46% | “unclear” | 100% | |

(Percentages are each tool’s estimate of AI-generated content; “OK” means the tool did not flag the paper.)

Takeaways

  • I looked for signs of AI-generated text in student essays; most of the essays flagged by the AI-detection programs showed one or more of the following:
    • a lack of quotation marks
    • formulaic or bland concluding paragraphs
    • bland, overly generic statements
    • information that did not appear in the article students were asked to analyze
    • different formatting or tone in different parts of the text
    • Note: I formulated these signs after spending 1–2 hours experimenting with ChatGPT.
  • Programs are not very consistent with one another in detecting AI-generated text, meaning that instructors may want to use more than one.
  • Some programs are typically more “suspicious” (Sapling, Copyleaks) and others more “generous” (OpenAI Text Classifier).
  • All of the programs have trouble flagging shorter AI-generated passages within longer essays, likely because of word limits (row 14a shows the results for the entire essay; row 14b shows only the two AI-generated paragraphs from that essay). It is probably best to copy and paste only the text that seems problematic; a rough scripted version of this passage-by-passage, multi-tool approach is sketched after this list.
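For instructors comfortable scripting this workflow, here is a minimal Python sketch of the passage-by-passage, multiple-tool idea from the takeaways above. Everything tool-specific in it is a placeholder: the endpoint URLs, the request and response shapes, and the score_passage helper are invented for illustration, since each real detector (Sapling, Copyleaks, ZeroGPT, etc.) has its own API, authentication, and response format.

```python
# A sketch, NOT a real integration: the endpoints and JSON fields below are
# hypothetical stand-ins for whichever detection tools you actually use.
import requests

# Hypothetical detector endpoints; substitute real API URLs and add whatever
# authentication (API keys, headers) the real services require.
DETECTORS = {
    "detector_a": "https://example.com/detector-a/score",
    "detector_b": "https://example.com/detector-b/score",
}


def split_into_paragraphs(essay: str) -> list[str]:
    """Split an essay on blank lines so each passage can be checked alone."""
    return [p.strip() for p in essay.split("\n\n") if p.strip()]


def score_passage(passage: str) -> dict[str, float]:
    """Ask every configured detector for an AI-likelihood score (0-100).

    Assumes each (hypothetical) endpoint accepts JSON {"text": ...} and
    returns JSON {"score": ...}; adapt to the real tool's documentation.
    """
    scores = {}
    for name, url in DETECTORS.items():
        resp = requests.post(url, json={"text": passage}, timeout=30)
        resp.raise_for_status()
        scores[name] = float(resp.json()["score"])
    return scores


if __name__ == "__main__":
    with open("student_essay.txt", encoding="utf-8") as f:
        essay = f.read()
    for i, paragraph in enumerate(split_into_paragraphs(essay), start=1):
        results = score_passage(paragraph)
        # Disagreement between the tools is itself useful information.
        summary = ", ".join(f"{name}={score:.0f}%" for name, score in results.items())
        print(f"paragraph {i}: {summary}")
```

Checking short passages one at a time sidesteps the word-limit problem noted above, and printing every tool’s score side by side makes the inconsistency between detectors visible rather than hiding it behind a single number.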

I teach two sections of Introduction to Geography with the help of two TAs (thank you, Tom Chittenden and Tanmoy Malaker!). For the latest writing assignment, I revised a prompt from the previous semester, asking students to apply three political-economic terms/theories to a contemporary situation described in one of four articles of their choosing. (The full assignment description is at the end of this article; many thanks to Dr. Heidi Hausermann for the original assignment.) I knew the prompt wasn’t AI-proof, in part because ChatGPT may have access to the articles: all of them were published before 2021, the cutoff for ChatGPT’s training data, although they may or may not have actually been included in that data.

My takeaways so far:

  • I am not happy with a numerical percentage as the output for deciding whether a student used generative AI to produce or edit a writing submission. In contrast, plagiarism-detection software like Turnitin lets you make a side-by-side comparison of the student’s writing and the putatively plagiarized source.
  • Generally, I don’t believe that prohibition is ever effective (if anything should be illegal it is cigarette sales, and yet education has been much more effective than prohibition in reducing the number of people who smoke cigarettes in the U.S.).
  • Therefore, I agree with Joseph Brown that the best approach is educating our students on the limitations of generative AI and its proper uses.

I hope you find these posts helpful for your teaching. As always, I appreciate your questions, comments, and feedback on this and other teaching-related topics. I am particularly interested in hearing from those who have been wrestling with the use of generative AI in software coding assignments.

Cheers, Paul