As concern over students using A.I. chatbots rises, teachers must prepare to deal with the issue constructively.
OpenAI, a leading artificial intelligence research laboratory, recently launched ChatGPT, a free text-generating tool open to everyone. The chatbot can understand and answer questions posed through prompts, and it is quickly becoming popular among students.
Text bots like ChatGPT can rescue last-minute assignments on anything from Shakespearean poetry to calculus. As students exploit the A.I. this way, teachers are looking for ways to detect this new form of plagiarism.
“We clearly need to come up with new ways to evaluate learning if we want to avoid these bots to be used to fake student work,” said Bérengère Marin-Dubuard, an A.I. enthusiast and teacher in interactive media arts at Dawson College.
Marin-Dubuard also weighed in on the quality of the text the A.I. produces.
“The text generated is interesting, but in the end I’d be surprised if many people just don’t do the work,” she said. “It’s probably even more work to set it up.”
Marin-Dubuard encourages her class to embrace the new technology as a tool, but she remains wary of the threat of plagiarism.
ChatGPT’s technology relies on natural language processing — a subfield of computer science based on the interaction between computers and human language.
“One part of how ChatGPT works is by learning complex patterns of language usage using a large amount of data,” said Jackie CK Cheung, an associate computer science professor at McGill University and the Associate Scientific Director at Mila A.I. Institute of Quebec.
“Think at the scale of all the text that is on the internet,” Cheung added. “The system learns to predict which words are likely to occur together in the same context.”
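The idea Cheung describes can be seen in miniature. ChatGPT itself relies on large neural networks, but the simplest possible version of learning which words tend to occur together, a bigram model, can be sketched in a few lines (the corpus and names here are purely illustrative):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "all the text that is on the internet."
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which: a bigram model, the simplest
# version of learning which words are likely to occur together.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Return the word most often seen after `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

A real model replaces these raw counts with a neural network trained on billions of such examples, but the underlying prediction task is the same.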
He explained that the A.I. will keep improving as researchers train it on more data, using a technique known as “deep learning,” in which layered neural networks pick up patterns from massive datasets.
Cheung acknowledged that the easily accessible ChatGPT and related models could increase students’ temptation to plagiarize. He noted that instructors will have to adapt their methods of evaluation, perhaps by relying more on in-person or oral assessment. Cheung added:
“There could also be innovations in which ChatGPT-like models can be used as an aid to help with improving the learning process itself.”
Questions of ethics remain as A.I. develops in art and writing, and both image and text generators have been accused of plagiarism. Last month, artists flooded art-hosting websites with protest images to disrupt A.I. image generators. Last week, a Substack blog was outed as A.I.-written by one of the writers it plagiarized.
Julia Anderson, who explores new ways of interacting with emerging technology and has collaborated with the Montreal A.I. Ethics Institute, said that A.I. should not simply be used to do the work for you. She believes ChatGPT and similar models could instead serve as tools to help conceptualize projects or to aid in teaching and supporting students. A.I. tools like LEX already offer support in conceptualizing ideas, something Anderson suggested teachers could use when building a curriculum.
“You can make a similar argument with other technologies, like Google Translate,” Anderson said. “But it’ll be at the discretion of the user to decide what to edit.”
With schools beginning to look for ways to detect A.I. plagiarism, Edward Tian, a 22-year-old computer science major at Princeton University, developed GPTZero, a program that estimates whether a piece of writing was produced by the OpenAI software.
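Tian has described GPTZero as scoring text on measures such as “perplexity” and “burstiness,” the variation in sentence structure. A toy proxy for burstiness, offered purely as a sketch and not as GPTZero’s actual code, might measure how much sentence lengths vary:

```python
import statistics

def burstiness(text):
    # Crude proxy: A.I.-generated text is often said to have more
    # uniform sentence lengths than human writing, so high variation
    # ("burstiness") hints at a human author. Real detectors rely on
    # model-based scores, not this simple heuristic.
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    return statistics.pstdev(lengths)

human = ("I ran. Then, after a long and winding afternoon, "
         "I finally wrote the essay in one exhausting burst.")
print(burstiness(human))  # 7.0: sentence lengths of 2 and 16 words
```

Uniform text, such as a string of equally long sentences, would score 0.0 under this measure.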
Other ways to dissuade students from plagiarizing, according to Anderson, could include digital watermarks and requiring payment to copy generated text.
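One watermarking approach proposed in the research literature has the generator secretly favor a pseudorandom “green list” of words, which a detector holding the secret key can later check for. The sketch below is a hypothetical illustration of that idea, not OpenAI’s actual scheme; the key and function names are invented:

```python
import hashlib

SECRET_KEY = "demo-key"  # hypothetical shared secret

def is_green(word):
    # Hash the key together with the word so that roughly half of all
    # words land on the "green list," in a way only the key-holder can
    # reproduce.
    digest = hashlib.sha256((SECRET_KEY + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text):
    # Fraction of words that are green. Ordinary text should hover
    # near 0.5; text from a generator that preferred green words
    # would score noticeably higher.
    words = text.split()
    return sum(is_green(w) for w in words) / len(words)

print(green_fraction("students must prepare to deal with the issue"))
```

A detector would flag a document whose green fraction is statistically too high to be chance.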
Nonetheless, Anderson acknowledges that such measures cannot fully guarantee a text’s legitimacy.
“Going forward I’m sure there’s going to be more problems,” she said. “At the end of the day, it comes down to human discretion.”