ACADEMIC INTEGRITY OFFICE
May 17, 2023
ALL ACADEMICS AT UC SAN DIEGO
GenAI, Cheating and Reporting to the AI Office
Dear Colleagues,
As expected, the Academic Integrity Office is receiving a growing number of integrity violation allegations that students have submitted AI-generated content as their own work. When students submit work for academic credit that they did not do, or did not do according to the assessment specifications, instructors are right to report it to the AI Office, as required by Senate Policy. However, since GenAI is a rather new tool, I thought it timely to offer some advice on documenting these types of violations.
While some instructors have hoped to rely on AI detectors (e.g., GPTZero, Originality.ai, or OpenAI's classifier) to document integrity violations, these detectors cannot be trusted: their results vary, their identifications are unreliable, and they can be fooled. Consider one test in which the US Constitution was run through three detectors: one identified it as fully AI-generated, one as partially generated, and a third as human-generated. According to OpenAI (the creators of ChatGPT), “the results [of their classifier] may help, but should not be the sole piece of evidence when deciding whether a document was generated with A.I.” (https://platform.openai.com/ai-text-classifier). GPTZero has issued a similar disclaimer and warning.
Instructors should also not rely on asking ChatGPT itself whether it generated a particular text. While ChatGPT might hallucinate and answer “yes I did” or “no I didn’t,” it cannot actually detect AI-generated content. It was trained to finish sentences; it does not think, understand, or analyze, and it does not know truth from fiction, a limitation ChatGPT-4 itself will readily acknowledge.
Even though the output of AI detectors is insufficient evidence to support an integrity violation allegation, we understand that instructors appreciate having some verification of, or support for, their suspicion of a violation. So, if instructors decide to use AI detectors, we suggest they take the following steps:
- update their academic integrity policy to explain to students when, why and how it is dishonest to use GenAI for completing assignments
- tell students up front and in writing (e.g., in the syllabus) that their work may be submitted to AI detectors, including when this might occur (e.g., for all papers or only those where cheating is suspected), how the output will be used, and which detectors will be used
- use multiple detectors to compare the outputs
- use the outputs as the beginning, not the end, of the investigation
- if, after seeing the output, the instructor still suspects an integrity violation, arrange a conversation with the student: ask the student about their writing process, the choices made in the text, and their use and choice of references, and probe their understanding of the content
- if the student cannot explain their paper or their choices, or admits to using GenAI, document this and add it to the integrity violation allegation
For instructors who are not banning the use of GenAI in assessment completion, see the AI Office’s official statement issued in January 2023 for alternative ways of responding to the impact of GenAI on assessments and learning. We regularly update this statement, so I encourage instructors to bookmark it and return to it often for guidance, resources, and FAQs. I am also available for consultation, both on particular or suspected integrity violations and on rethinking assessments and pedagogies in the era of generative AI. Instructors can contact me at aio@ucsd.edu.
If you have any questions, or would like me to come speak with your department, please let me know.
Sincerely,
Tricia Bertram Gallant, Ph.D. Director, Academic Integrity Office