Unauthorized AI detection tools threaten students’ privacy rights

Graphic by Keven Vaillancourt / The Concordian

In the absence of clear policy, some professors have used unapproved AI detection tools, risking leaks of students’ information. 

Professors using unauthorized AI detection software to check for plagiarism may be putting students’ privacy rights at risk, according to university guidelines.

Concordia’s Guidelines for Teaching with Generative AI, published in October 2023 and updated in August 2024, provide a framework of best practices on integrating AI in teaching and evaluation practices. They were developed by the Centre for Teaching and Learning (CTL).

The guidelines prohibit instructors from using AI detectors like GPTZero which have been “known to be unreliable – commonly producing both false positive and false negative results.” 

The use of unauthorized online AI detectors could also violate the university’s privacy policy and a part of the software acquisition process known as the Privacy Impact Assessment. These policies ensure that students’ information, such as names, email addresses, student numbers or personal content, is handled safely by software providers.

Generative AI tools like ChatGPT are known for their ability to write like humans, raising professors’ concerns about student plagiarism. In response to these concerns, AI detectors have emerged, promising to identify such plagiarism.

For example, professors can upload a student assignment to GPTZero and receive a score indicating how likely it is that the text was generated by AI. 

Mike Barcomb, an education technologist at the CTL, developed the guidelines after a survey of over 200 instructors and consultations with pedagogy specialists and the university’s legal department.

Barcomb’s research showed that instructors were adopting generative AI tools in interesting ways, from helping refine ideas and research problems to cleaning up text.

Barcomb said that instructors were also concerned about the potential negative impact on student learning, as well as ethics and plagiarism. 

“From students damaging their critical thinking skills, not being able to reason their way through things, [and] other ethical concerns like the labour issues involved in AI,” he said. “The fact that it’s trained on text from the internet and, well… Who’s that representative of? Whose voice is that? And, more importantly, whose voice is left out?”

Nearly 12 per cent of plagiarism cases in the Faculty of Arts & Science since Fall 2022 were linked to generative AI, said Anthony Noce, associate professor and academic code administrator for the faculty. ChatGPT was launched to the public in November 2022. 

Noce also confirmed that professors in Arts & Science have used AI detectors when reporting plagiarism cases. He said that he was also aware of similar reporting at the John Molson School of Business through discussions among other code administrators. Noce was “not completely certain how widespread this knowledge is that professors should not be using AI detectors.”

However, when such reports come to Noce, he usually demands more than the AI detector’s verdict alone, which is simply a likelihood score that the text was AI-generated. 

“So I would inform professors that if they’re only relying on an AI detector to conclude that a student has used AI in the generation of work, then I just go back to the professor and I say this is not acceptable evidence,” he said. “If there’s nothing else about that, then I don’t even interview the students.” 

Noce and his counterparts in other faculties have agreed on an informal policy that AI detectors by themselves are not material evidence. When asked about CTL’s AI guidelines forbidding the use of unauthorized AI detectors, he was unsure if they were definitive.

“It’s not an adopted university policy as of yet,” Noce said. “Because it hasn’t gone to the Senate and the Senate hasn’t adopted it.” 

Barcomb acknowledged that the guidelines were mostly recommendations, but he said that the section prohibiting the use of AI detectors came directly from Concordia’s legal department and was therefore binding. 

He insisted that CTL guidelines did not introduce a policy, but rather referenced established university policies on student information privacy, which extend to AI detectors. 

“You can’t just bring a technology into class, you need to go through [the Privacy] Impact Assessment,” he said. 

Noce said that professors were aware of university policies on protecting student information. As far as he recalled, even when professors used AI detectors to report students, he had not seen them upload information such as student IDs or names. 

He said that it would be unfair to expect professors to apply existing policies to new technologies. 

“I wait for the university, who has legal advisors, to then take that legal advice submitted to the Senate for adoption. And then use that new regulation, new policy… but if they’re just recommendations, that’s what it is,” he said.

From the students’ perspective, Angelica Antonakopoulos, academic coordinator of the Arts & Science Federation of Students, said she believes it is “a troubling double standard” when students are expected to follow the university’s code while faculty consider themselves exempt from it.

“As an individual in a position of academic advocacy, I find it incredibly concerning that professors are not only violating the students’ privacy but also superseding the university’s regulations using these softwares,” she said.
