By Hank Bartholomew, Faisal Faisal, Jay Panesar, Raj Panesar, Nico Ray, and Will Stark
Introduction
In April of 2025, in the Williamsville Central School District’s central office, keys clacked, servers hummed, and processors whirred. A luminous screen contrasted sharply with the dark characters upon it. Millions of pixels combined to form letters, then words, then sentences. Symbols blotted out a perfectly white digital sky. Phrases came into existence. Thoughts were given voice. Several hundred words later, the task was complete.
The final product? A Non-Instructional/Business Operations memo to district school officials. These three sheets of black and white ink, bold and affixed with the district seal, laid out WCSD’s philosophy and strategy regarding what The Atlantic has dubbed an “arms race on campus”: the use of Artificial Intelligence in schools.
The existence of this memo makes it clear that Williamsville public schools—and therefore Williamsville East—are not immune from the phenomenon of mass AI use in education. But what is not clear is the extent of AI use at East, or the potential lack thereof. In an effort to quantify AI’s presence at East, The East Side News launched a survey to gauge opinions on AI use and ethics.
This investigation produced three primary findings. The first is that, on the subject of AI usage, East is heterogeneous: there are some trends, but AI debates have split East into ideological factions. The second is that the largest disparity lies between the opinions of students and those of teachers. The third, and perhaps most significant, is that no clear recourse has emerged.
Background
AI is still relatively new, having been introduced to the general public in the form of OpenAI’s “ChatGPT” in November of 2022. Since this introduction, AI usage rates in the United States have soared, with one Pew Research Center study finding that roughly 60% of Americans interact with AI regularly.
WCSD hasn’t been passive during this rapid AI growth. In the spring of 2025, the district announced a partnership with ZeroEyes, a security firm that “uses artificial intelligence and human analysis to detect weapons (i.e. guns) and threats that are identified on school campuses,” per the district website. And in 2024, Williamsville East then-senior Grant Wang developed the “ML Offense App,” software that uses AI to detect harmful language on social media platforms.
Suffice it to say, AI does exist at East. But the prevailing attitudes towards it were still unclear.
Part of this is attributable to regulation—or rather, a lack thereof. The aforementioned memo is active but unclear; it makes references to “responsible AI integration” without explaining what exactly those practices would look like.
Some of this opacity might be explained by numerous and potentially contradictory guidelines; school principal Mr. Swenson noted that regulations are “a blend of federal, state, and district. AI, as you know, evolves so much quicker than policy and procedure, so the states gave us some guidelines and we interpret those guidelines and try to determine what we think is best. Some teachers,” he added, “use it more than others.”
Perhaps the word that best encapsulates East’s AI situation is “unclear.” In the following pages, The East Side News seeks to explain feelings and trends regarding the subject—hopefully offering some degree of clarity.
Teacher Perspectives
As AI tools become more accessible, many teachers have begun incorporating them into the classroom. Our survey found that 63.7% of the eleven teachers who responded use AI in some fashion, in capacities and contexts that include the creation of assignments, grading, and other aspects of teaching. This increasing reliance on AI in the classroom raises important questions about how AI should be used ethically. Through our interviews, we gained insight into how teachers at East regard the use of AI by both teachers and students.
Social Studies teacher Dr. Redmond, for instance, strongly urges students against using AI, suggesting that it removes the process of learning. Dr. Redmond explains, “You learn by teaching, and the reason for that is that you learn when you explain things to people. If you’re having someone else produce it, you’re not learning things. That goes for teachers as well as students. That’s a creative aspect that I think is important.” For teachers like Dr. Redmond, learning is a continuing process that never stops, and the use of AI significantly limits critical thinking.
Chemistry and Anatomy honors teacher Mr. Harrison echoes this perspective. According to Harrison, “When you put a prompt into a chatbox, you are cheating yourself out of learning. Students are trying to find themselves in high school. These years are pivotal to your development, and when you are relying on AI, it takes away from your creativity and your ability to think.” In Mr. Harrison’s view, relying on AI to do your thinking keeps education on training wheels, and those training wheels prevent independence. Relying heavily on AI risks weakening the critical thinking and creativity that education is meant to develop; if students and teachers continue to depend on this technology as a shortcut, they may sacrifice the independence that is essential for long-term success.
Student Perspectives
Many students feel that all they have heard in assemblies are the threats of AI usage in classrooms, and that none of these conversations were centered on how to properly use AI for learning. Within our data, we found that a significant number of students at East (44.8%) use AI often on their assignments. But are they using it to cheat? It depends on how you define cheating. Most teachers define cheating as completing assignments without the process of thinking for yourself; in other words, using AI on a test or an assignment just to get an answer is cheating.
With reference to proper AI usage, one case study is Drishtant (a senior), who uses AI to help him with his coding projects outside of school. This is how most students believe AI should be used: to help students learn. Later in his interview, Drish revealed that he had never used AI on an exam, only on assignments to help him comprehend a difficult topic. Drish is a prime example of the proper usage of AI in school settings, at least as defined by students. He is not the only student who uses AI ethically within the bounds of this definition; Sage (junior) and Maryalice (junior) both said that they “use it sometimes to check my work or make a quiz for studying.” Judging from this data, it appears that upperclassmen define proper AI usage as a tool to learn more about a topic, though, of course, proper AI usage is a matter of perspective.
Yet this definition of ethical AI usage is not uniform. One student, who spoke on condition of anonymity, stated, “I use AI to cheat on assignments.” This individual doesn’t use AI to understand a topic, but solely to get an answer and receive a good grade. This seems to reveal a deeper underlying problem within our school: that some students care only about grades. But if school is about learning and not about grades, then students using AI simply to get an answer are cheating themselves out of the true meaning of school, as Dr. Redmond stated in his interview. It appears that when students put greater pressure on themselves to get good grades, they tend to turn to resources such as AI to achieve them.
Proper AI usage comes down to the student’s definition of ethical use. While some use it for learning, others use it simply to get an answer. Without clear ethical guidelines on AI usage, learning will remain impaired for the near future.
Comparison and Discrepancies
As expected, students and teachers had different views on AI use in school, but they surprisingly also shared many feelings about how AI can be used. One of the most striking survey results was that 73% of the 157 students who responded felt that teachers should have to follow the same restrictions regarding AI as students do, while 73% of the eleven teachers who responded felt they should have their own, less strict guidelines.
Dr. Redmond pointed out that, “it’s hypocritical for teachers to punish students using AI, but then use it to write tests or lesson plans.”
But other teachers commonly believe that there are situations where they should be able to consult AI to aid their teaching. Mr. Harrison believes it is fine for teachers to use it when “new curriculums are pushed out to teachers without any test questions for students to learn.” On this issue, there is some common ground between teachers and students.
Many students pointed out that if AI use is “beneficial for the kids, then they should [use AI].”
Several students emphasized that many teachers are extremely strict with AI use on their end, and teachers should have the same strict restrictions on when they themselves can consult AI in school. Students seem to want to use AI as a tool, but also want to make sure that if teachers use it as a tool, AI is not doing their job for them.
A common consensus between both groups is that AI has great potential in education, but can be dangerous to the school environment when it is used to an extent where students are not learning or teachers are not teaching. AI has potential as a tool for improvements in school, but both teachers and students seem to agree that it should not come in the way of learning.
AI Checking Software
Another commonality between students and teachers was a lack of trust in AI checkers. A clear trend emerged in the survey, where students were asked, on a scale of one to five, how much trust they placed in AI checkers, whether for checking their own work or for teachers checking students’ work. According to the data, most students (58.4%) did not trust AI checkers, while only about 16.8% did. Will Crooks, a freshman, summed up the opinion of the student body: “They don’t work. AI checkers don’t work.” On the whole, students at East do not believe that AI checkers are an accurate means of verifying the true origin of any assignment or task.
Interestingly, a similar trend emerged among teachers. A majority of teacher respondents (64%) reported that they did not trust AI checkers to accurately detect AI use among students, while no teachers said they did. The remaining respondents were unsure.
“No, I’m not confident in AI checkers,” said English teacher Ms. LoVullo. “I make a point of collecting each student’s handwriting pieces early on, so that later I am usually able to tell the difference when students use other resources on assignments.”
Mr. Harrison responded similarly, saying that he uses writing style as a way to determine authenticity. It’s most difficult to detect AI at the beginning of the year, Harrison said: “I don’t know their voice.”
With AI-detecting software out of the question, teachers opt to rely on recognition of student voice and writing style to determine authorship. However, concerns inevitably arise given the subjectivity of this approach.
One student shared an experience about such a concern. “I was accused of using AI by a teacher to do an assignment,” they said. The teacher cited “common phrases AI uses” and an unfamiliar writing style as evidence that this student had cheated. “To prove my innocence, I basically just sat down with them and I had a very long conversation with them. I explained everything I did, all my research. I just talked with them about how I would never do such a thing. It definitely was a very long conversation.” This student also said that the teacher used AI checking software to reaffirm their suspicion, making it more difficult to prove they completed the assignment themselves.
Despite widespread distrust in AI checkers, the task of determining true authorship remains. There are no perfect solutions; even recognition of writing style will lead to false positives, which are inevitable regardless of how an assignment is assessed.
“It’s almost an arms race. Certain AI checkers use certain details to check AI, then AI companies eliminate these tells. Then you run into these dilemmas. Maybe one day AI will become accurate, but it’s hard to tell,” summarized Dr. Redmond.
Conclusion & Looking to the Future
AI isn’t just a concept anymore—it’s a present tool reshaping societies globally. Ignoring this huge advancement in technology would slow progress; instead, those who use AI responsibly and strategically will likely see significant benefits. According to the responses, East is divided in the ways you’d expect, with both teachers and students strongly opinionated with regard to their own usage. Teachers favored restrictions on AI, believing they would encourage students to use their critical thinking skills in preparation for college. Students, on the other hand, believe that times are changing, and that if we don’t keep up, there may be drawbacks, even though their usage is relatively minimal.
A possible solution could be to establish a middle ground, a set of boundaries for both parties. AI, it appears, shouldn’t be specific to teachers, nor should it be to students. Instead, based on the data gathered, perhaps it ought to be utilized for those who understand ethical boundaries and seek to use it in order to learn, develop skills, and develop a foundation for creativity.
The more industries adopt automated, AI-generated content, the more backlash it faces. This is mirrored in the world of education, where AI usage has surged. Although schools such as East continue to establish new regulations to restrict this use, AI continues to advance and remains one of the most controversial topics in technology worldwide.
Conversations about AI have generally focused on students’ usage, but what about teachers’ usage of AI? When we interviewed multiple teachers from different departments, our group found that most teachers haven’t used it on the student end. One of our math teachers uses AI to create diagrams and to help with the clerical side of being a teacher, but has never used it on the student end. While some teachers use AI to help with clerical work, one of our science teachers claims to be quite opposed to AI usage in his work: he loves his job because of the autonomy he is allowed, and believes using AI takes away one of his favorite parts of being a teacher. His philosophy is not a one-off; 36.4% of teachers surveyed have never used AI in the classroom.
The varying usage of AI among our teachers raises the question of whether teachers should have different AI privileges than students. When we asked this question in our survey, we found that the results contrasted sharply between students and teachers: 72.7% of teachers said that they should have different AI privileges than students, but 73.7% of students said teachers shouldn’t.