A Teacher’s Worst Nightmare: On ChatGPT in the Classroom


By Simon Li

“I submitted a 4000 word essay in December, most of it was written with ChatGPT… I got an email today stating I have been found in violation of the student conduct of plagiarism… [what] do I do?”
— Posted by an anonymous graduate student on Reddit


OpenAI, an artificial intelligence company, released ChatGPT (GPT stands for Generative Pre-trained Transformer) late last year, on November 30th. Built as a chatbot, ChatGPT can generate detailed responses to nearly any prompt in a way that mimics human conversation. In just five days, the service had amassed over a million users and was already being widely praised for its far-reaching capabilities.


With the introduction of ChatGPT, fears over cheating have only intensified. Some have gone as far as to predict the total upending of education as we know it. After all, seemingly anything is possible—all you have to do is type in a prompt or question and it will generate a response in just a few seconds. Everything from writing and debugging code, to composing a poem about astrophysics in the style of André Breton, to solving difficult math problems, to explaining hard-to-understand concepts, to writing full-length essays is a breeze for the AI.


Unsurprisingly, the essay-writing capability caught people’s attention the most. For English classes, and for other humanities courses in which essay writing is crucial, the danger is clear. A student could paste an essay prompt into the chatbot, receive a response in seconds, and submit it as their own work. As obvious as it may seem, text written by ChatGPT is not a student’s original words or thoughts, and submitting it is therefore considered plagiarism by schools (and in general). The question then arises: given their disruptive nature, how should schools deal with ChatGPT and other AI technologies that could help students cheat on essays and other assignments?


The solution is clear to some schools—simply ban it from school networks and devices. After all, cheating has long been a serious thorn in the side of schools. A majority of students, if I am being truly honest here, have cheated before or have been tempted to cheat on an assignment. Just recently, a bitter cheating scandal in an AP class roiled the halls of Williamsville East. Already, students, like the graduate student quoted above, are being caught using ChatGPT to cheat.


To my chagrin, our own school district, the Williamsville Central School District, banned it over winter break. While doing research for this very essay, I noticed I wasn’t able to access the website in school. New York City public schools, the biggest school district in the country, also banned ChatGPT on January 3rd, 2023, one day after winter break. Jenna Lyle, a spokesperson for the New York City education department, released a statement justifying the ban, saying, “While [ChatGPT] may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.” Operating on the same principle, other schools have already followed suit, and more will. An overwhelming number of districts across the country have banned the chatbot, including the Los Angeles Unified School District (CA), Charlotte-Mecklenburg public schools (NC), Clifton public schools (NJ), Seattle public schools (WA), Baltimore County (MD), Loudoun County (VA), Fairfax County (VA), and Montgomery County (AL) public schools.


For others, however, a ban is out of the question. If a ban were instituted, then in addition to blocking ChatGPT on school networks, schools would also have to use technologies that can detect AI-generated writing—like GPTZero—to catch cheating. However, in the article “Princeton should think twice before banning ChatGPT” for The Daily Princetonian, Mohan Setty-Charity warns that “[S]uch a policy choice would open up the door to a technological arms race between the University and its students.” It stands to reason that in light of a ban, students will simply find ways to circumvent it, such as by using personal devices with VPNs or cellular data. At home, students can use other AI technologies, such as Quillbot, a free AI-based grammar and paraphrasing tool, to rewrite a ChatGPT-generated essay and evade detection. A ban would thus not prevent cheating; it would indirectly teach students how to get away with it, driving them even further from learning.


To prevent a “technological arms race,” schools need to learn to coexist with AI, integrating it into classrooms in a way that facilitates learning. Kevin Roose, a technology columnist for The New York Times, acknowledges this in “Don’t Ban ChatGPT in Schools. Teach With It.” In the article, he suggests that we should treat ChatGPT more like a calculator than a cheating device, writing that schools should “[allow] it for some assignments, but not others, and [assume] that unless students are being supervised in person with their devices stashed away, they’re probably using one.” While ChatGPT is treated as revolutionary, capable of ushering in a new era of cheating, Roose suggests that we should instead normalize the technology. Just like a calculator, ChatGPT can give students answers at the press of a button. And while ChatGPT is being banned nationwide, students today all use calculators in high school math classes; the calculator is a crucial tool without which many tests and assignments would be virtually impossible to complete. On tests and assignments that allow calculators, the focus is not on the ability to do arithmetic but on the ability to understand the concepts and topics taught in class. Students are not punished, for example, for being unable to multiply 3235 by 0.9867 by hand; they are graded on how they apply that value to the rest of the problem. In a similar sense, ChatGPT could be used to prompt questions, generate ideas, and give feedback while still leaving most of the thinking to students.


When the modern calculator was first introduced, similar debates raged about its role in the classroom. Looking back at the rhetoric, it is hard not to draw parallels. In a 1986 Los Angeles Times article describing the debate over calculators, David G. Savage writes that “These math teachers and others like them are pushing hard to shift how math is taught—away from paper-and-pencil computations and toward solving more complex problems, possibly aided by calculators or computers.” As technology progresses, education must adapt with it. We went from what Savage calls paper-and-pencil computations to testing the ability to understand and apply mathematical concepts, without completely abandoning mental arithmetic—could the same not be done with ChatGPT? Of course, just as no-calculator tests still exist, traditional essays will not disappear; they will simply be mixed in with ChatGPT-allowed assignments.


What complicates this approach is that ChatGPT can simply do more than a calculator. For instance, while calculators cannot solve math word problems, ChatGPT can; its scope is mind-boggling. If the novel technology is introduced into classes without proper guidance, schools run the risk of essentially sanctioning cheating. I must also acknowledge that calculators do not generate learning (which is partly why some colleges ban them in lower-level calculus courses). If I were to compute an integral on my TI-83 graphing calculator, say the integral of f(x) = x from 1 to 3, I would get the answer, 4. I would not get an answer as to what integrals are, how to compute them, or even why the answer is 4. I cannot understand integrals from a calculator. ChatGPT is somewhat similar for other fields—while it can provide step-by-step feedback and even answers, it cannot impart that knowledge to students.
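For reference, the arithmetic hiding behind that single button press is the standard first-semester computation:

```latex
\int_{1}^{3} x \, dx \;=\; \left[\frac{x^{2}}{2}\right]_{1}^{3} \;=\; \frac{9}{2} - \frac{1}{2} \;=\; 4
```

The calculator reports only the final number; the steps in between are exactly the understanding a student is supposed to demonstrate.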


Assignments, then, should be created with these facts in mind. Essay writing especially poses a serious challenge—how can teachers incorporate ChatGPT to help students write? Roose suggests one solution: use the AI to create outlines, then have students write the essays by hand from those outlines. Describing one teacher who used ChatGPT this way for an essay comparing two books, he writes that “The process… had not only deepened students’ understanding of the stories. It had also taught them about interacting with A.I. models, and how to coax a helpful response out of one.” Given a starting point, students are free to focus their attention on analysis, on connecting different strands of evidence, and on simply writing. They can, of course, tweak the structure of the essay as they go—but at first, that would not be their primary concern. This is just one way ChatGPT can be utilized, but its success shows that the technology is viable in the English classroom.


There is a problem, however: ChatGPT often gives inaccurate answers. Before we continue, I would like to explore how exactly ChatGPT operates. ChatGPT is what is known as an LLM, or large language model. Basically, this means that ChatGPT was fed a staggeringly large amount of text, which it draws on every time it generates a response. Ermira Murati, Senior Vice President of Research and Product at OpenAI, discusses this for Daedalus in “Language and Coding Creativity,” writing, “As the machine goes along creating text, it asks itself over and over (and over): with the text I have been given, what word is most likely to come next?” Notice how Murati says “most likely to come next,” not “correct.” Like all LLMs, ChatGPT is a machine that can only predict the most likely next word in a sentence, which is not necessarily the correct one. If we go back to the integral and ask ChatGPT to solve it, it confidently gives an answer of 4.5. Here, the AI simply does its arithmetic wrong, concluding that (½)(3²) − (½)(1²), or 4.5 − 0.5, equals 4.5 rather than 4. Furthermore, responding to the prompt “What is the role of NADPH in the Krebs cycle,” ChatGPT spits out an answer that is completely wrong: that NADPH is crucial for the Krebs cycle (NADH, not NADPH, is used). These are simple mistakes, yet they expose ChatGPT for what it truly is: fallible.
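To make Murati’s description concrete, here is a minimal sketch of next-word prediction. ChatGPT itself is not publicly available, so this uses the open-source GPT-2 model from the Hugging Face transformers library as a stand-in; it illustrates the general technique, not OpenAI’s actual system.

```python
# A minimal sketch of "what word is most likely to come next?" using the
# open-source GPT-2 model as a stand-in for ChatGPT (which is not public).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The integral of x from 1 to 3 is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits   # a score for every possible next token

# Turn the scores at the final position into probabilities and list the five
# tokens the model considers most likely to come next. The model is ranking
# plausible continuations of the text, not checking whether they are true.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")
```

Whatever ends up at the top of that list is there because it is a statistically likely continuation of the prompt, not because the arithmetic has been verified, which is how such a model can confidently state 4.5 instead of 4.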


The implications of this cannot be ignored. Firstly, if ChatGPT is banned from schools, students will be left to experiment with it at home, where they are unlikely to question its answers. In a controlled environment, these inaccuracies could be turned into teaching moments, with teachers and students identifying what went wrong and why. Secondly, these inaccuracies mean that, for now, ChatGPT most likely will not upend education. Professor Daniel Lametti of Acadia University observes that ChatGPT’s writing is riddled with errors. Describing a writing prompt that he gave the AI, he writes for Slate that “Not only did [ChatGPT’s] response lack detail, but it attributed a paper I instructed it to describe to an entirely different study…. If one of my students handed in the text ChatGPT generated, they’d get an F.” Lametti isn’t saying he would give a failing grade because of plagiarism; rather, the writing produced by ChatGPT is just bad. Students relying on it for assignments or essays will produce answers that are sometimes completely wrong, which teachers can easily detect. As the technology evolves, though, the accuracy of AI will undoubtedly improve. Schools, then, should take that into consideration and evolve accordingly.


Schools will also need to shift assignments to better test critical-thinking skills. Setty-Charity argues for this approach: moving away from the exams and tests of old toward formative assignments such as discussions and practice problems, which he writes would allow students to “engage more throughout the course, focusing on developing an understanding of the material.” This would mark a radical transformation in education, from a focus on tests to a focus on dialogue between teachers and students. These methods are merely suggestions for now, not yet a reality. Judging from their actions today, however, many school districts have indicated that none of this is up for debate—ChatGPT must be banned because students will use it to cheat.


Given that this technology is so new, any predictions of its impact on education are likely overblown, all the more so because people have historically feared change. Plato, speaking through the mouth of Socrates, famously disparaged writing, predicting that relying on written records rather than memory would induce forgetfulness. Yet, ironically, it is because of writing that we know of Plato and Socrates at all. Were the actions schools took against ChatGPT driven by a concern for cheating and student success, by a fear of change, or by a combination of the two? Whatever the rationale, we cannot ignore the technology; AI is here to stay for the foreseeable future. We must critically re-evaluate how we teach. By shutting down discussion of ChatGPT, we will only remain blind and hilariously unprepared for the future.