AI programs can produce college papers. Should professors be worried?

Katie Langley, News Editor

As schools nationwide continue to ban online artificial intelligence programs such as ChatGPT, Quinnipiac University community members said the technology should be embraced, not used to cheat on class assignments.

ChatGPT, a free online AI model launched in 2022 by OpenAI, takes its name from Generative Pre-trained Transformer. Similar to programs such as articleforge, ChatGPT can generate answers to prompts based on a few keywords.

This means that chatbots can “write” college-level essays with a click of a button, which English professor Valerie Smith said threatens academic integrity and the important process of learning to write.

“Writing is not just scribbling on a piece of paper, marking symbols and that sort of stuff,” Smith said. “It’s thinking… It takes a long time to develop thoughts and ideas, and then figure out how to revise them into something that is going to be clear.”

AI programs like ChatGPT are large language models, meaning they are trained to emulate human writing using large amounts of data that are publicly available on the internet, said Jonathan Blake, professor of computer science and software engineering.

“The way these programs work is that they’re largely trying to guess the next word to put into a sequence of words based on how often these words end up showing up next to each other,” Blake said.
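Blake's description of guessing the next word can be illustrated with a toy sketch. This is not how ChatGPT actually works, which relies on a neural network trained over vastly more text and context, but it shows the same underlying idea of predicting the next word from how often words appear next to each other:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the web-scale text an LLM trains on.
corpus = "the cat sat on the mat and the cat ran".split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def guess_next(word):
    """Guess the word most often seen after `word` in the corpus."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # prints "cat" -- it follows "the" twice, more than any other word
```

Repeating that guess, feeding each predicted word back in as the new prompt, strings together fluent-looking text without any understanding of what the words mean.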

Unlike other computer programs, chatbots don’t simply take in the data fed to them by programmers; they can learn from anything that is published online. Blake said this means that AIs pose an issue for those who write online content such as blogs or articles.

“ChatGPT is using your words to learn how to write,” Blake said. “So as a producer of this text that you’re copyrighting, it’s your stuff, you should be very upset that it’s being used without your knowledge.”

Illustration by Shavonne Chin

While programs like ChatGPT use a process called supervised learning to correct wrong answers to prompts, Blake said that AIs that use unsupervised learning will not correct themselves, which could increase the possibility that the information they produce is false or offensive.

“If we send this large language model off on the internet to learn, what do you think it’s going to learn?” Blake said. “It’s going to learn how to swear. It’s going to learn how to make fun of people. It’s going to learn a lot of really bad things.”

When it comes to academic integrity, Provost Debra Liebowitz told the Chronicle that the university is not planning on changing its policies to specifically ban AI chatbots.

However, the current academic integrity policy already prohibits the “possession or use of unauthorized device or materials,” such as unapproved online sources, on assignments. Liebowitz said this policy covers AI tools if professors prohibit them on their syllabi or otherwise tell students they are not allowed.

Liebowitz said that AIs can possibly help teach revision, since humans have a better understanding of context and audience than computers. Blake agreed that AI-generated papers can serve as an example of what not to do.

“I think that faculty members can maybe even potentially embrace this notion of ChatGPT,” Blake said. “So you’re in an English class, you have (students) write papers… you have ChatGPT papers produced. You hand them out, and you say, ‘Edit this.’ Students are going to say, ‘Well this is crap.’”

Liebowitz said faculty members are in conversations about how to deal with AI in the classroom.

“People have used the calculator analogy; this is different than a calculator, but calculators in class were much less common a couple of generations ago,” Liebowitz said.

While AI could have a place in the classroom, Smith said that the struggles of the writing process are part of developing an important skill, while AI tools encourage students to take the easy way out.

“Think of the analogy between the three-year-old who’s like, ‘I don’t want peas and carrots, I want candy corn,’” Smith said. “As the parent, do you just say, ‘Okay, here, have some candy corn for dinner.’ Or do you say, ‘Well, I think that will stunt you.’”

Because ChatGPT draws on pre-existing internet data, it can produce material similar to articles already on the web, which raises plagiarism concerns, Smith said.

To combat plagiarism, OpenAI is currently working on a prototype watermarking tool that would identify all content produced by the AI, according to techmonitor. In addition, several AI detection programs could allow educators to check whether assignments were produced by AI.

Junior software engineering major Christopher Rocco tried ChatGPT when it first came out in November 2022 and decided to show it to his professors, who were astonished by the technology.

“I was playing around with (ChatGPT),” Rocco said. “I’m like, this is something that could change the future of jobs, if you can just be outsourced to a bot.”

Rocco said that ChatGPT should be used with discretion, but it can be a useful research tool because of how quickly it gathers information about a given topic.

“In my major, software engineering, it’s an endangerment,” Rocco said. “You can look up how to code certain things and it’ll give you full scripts. But the thing about that is, if you don’t know what any of it means, it’s useless.”

However, Blake and Smith both agreed that papers produced by ChatGPT do not show the same understanding and critical thinking as those written by humans.

“I don’t think ChatGPT is ever going to write the next Pulitzer Prize novel, but it could write the next space opera,” Blake said.