What would you do if we told you that this article was written by AI? Would you take it less seriously? What would you think of the Kaleidoscope? How would it reflect on Ward Melville, or on us personally?
This article wasn’t written by AI, but the rise of artificial intelligence in the classroom has been remarkably swift. According to Statista, a staggering 86% of students used AI to complete their schoolwork in 2024. Similarly, Forbes reported in 2024 that 60% of teachers had incorporated AI into the classroom. Along with this, 98% of the survey respondents identified a need for education on ethical AI usage.
It’s clear that AI will be an enduring component of education. Yet today, Three Village students are punished if caught using AI technology, and the school district does not currently have any AI-specific academic honesty policies. Considering all of this, where should we draw the line when it comes to AI usage? To us, the answer is simple: the district should adopt a concise, standardized process for determining whether usage of such tools is ethical.
Today, AI offenses committed by students are categorized under plagiarism. Plagiarism is listed under the Academic Dishonesty bracket of our Student Handbook, defined as “the work(s) or idea(s) of another, not necessarily those of a student, including the work of artificial intelligence applications when not specifically directed to use by teachers, as one’s own through the deliberate omission of acknowledgement or reference.”
Three Village Assistant Director of Technology Gary Young clarified that “if you’re using AI to do your work for you, that’s basically plagiarism. You’re using somebody else’s information and work and trying to pass it off as your own.” Consequences of this offense might include parental notification, oral reprimand, written reprimand, no public recognition of academic honors, ineligibility for or removal from honor societies, and withdrawal of District-sponsored scholarships.
We do not necessarily disagree with this policy. Maintaining an academic environment that prioritizes original work is the bare minimum for any proper education. However, it’s important that district policy reflects that, meaning we need an AI-specific honor policy, or at least recognition of AI issues in the Code of Conduct. The consequences of lacking such a policy are apparent in the disparity between how students and teachers are treated when using these resources, in questions surrounding AI-detection software and in the blurry line between proper and improper usage.
Through conversations and interviews with both Ward Melville students and educators, it is clear that teachers are encouraged to take advantage of modern technology like AI, while students are reprimanded and forced to face inconsistent and unclear punishments. In an interview, Young said, “Knowing teachers and their mindset, I don’t think they would be using [AI] for essays and things like that,” casting doubt on whether teachers would use the resource in grading essays or writing feedback. Yet our own investigation found at least a dozen teachers who may have used AI in lessons this academic year, witnessed by multiple students.
One Ward Melville senior, who asked to remain anonymous, wrote that a teacher “told [students] directly that she was going to use AI to leave comments when we were writing an essay about short stories in her class.” They added: “I don’t think it is okay for a teacher to be leaving comments like this, especially when we are AP students preparing to go to college.”
Still, not all AI use in the classroom is negative. The core issue is that the district has yet to distinguish between good and bad. AP Literature and Composition teacher Ms. Jacqueline Shaw said she used AI to her benefit in certain scenarios, explaining, “I’ll plug work in and say, what are the general strengths of my students? Where do you see areas for improvement for my students? Sometimes when you’re grading and are so in the thick of it, it’s hard to look at the big picture.”
Ms. Shaw did reject the abuse of this resource by educators, noting how there is a time and place where they shouldn’t be using it. “If they’re using it to do all the grading and they’re not reading anything, how could you possibly help a student in that way, if you don’t know the writing that they have done for you?”
While one teacher’s opinion on the topic isn’t always universal, it is distressing for a student to be unsure how their school district treats AI-related offenses, especially when their own teachers are encouraged to use the same platforms. Students can be punished under the plagiarism category, but this punishment depends solely on the teacher in their classroom. Regarding policies or punishments put in place for students, Young said, “When it comes to a punishment for using AI, it depends on how it was used, what rules were put in place by the teacher on how it could be used … there should be a punishment if you’re breaking the rules that were set by the teacher in the classroom.”
Not only is the punishment different case by case, but the district has demonstrated a clear double standard for who is allowed to access AI resources by installing the Brisk AI Chrome extension for all teachers. Brisk specializes in automating various tasks for teachers. It is capable of planning lessons, creating teaching resources, differentiating materials and providing personalized feedback. Brisk can also scan students’ documents to see whether their text was copied and pasted, helping teachers identify work that may be AI-generated or plagiarized. Having teachers use Brisk AI to discourage students from using AI is contradictory and fuels the very problem it is trying to prevent.
Our district knows of AI’s availability and convenience and has recently shut down several websites on student-issued Chromebooks and email accounts. Young touched on this topic during our interview: “At first, we were allowing access to ChatGPT and Gemini. We have shut those down since. That was something that the Board of Ed specifically wanted us to do.” The district’s AI detector should be viewed as an AI tool in its own right, just as ChatGPT and Gemini are. Prohibiting high schoolers from accessing these websites is a tangible statement by Three Village that it is against AI use. It’s ironic that the same district provides its teachers an AI resource to use in the classroom.
Regarding the Board of Education, we were able to converse with Dr. Stanley Bak, one of the Three Village Board of Education’s newest trustees. Although his stated views do not reflect any official stance of the Three Village Board of Education, we wanted to understand the perspective of a Board member. “I believe AI has unfortunately had a generally negative impact on education so far,” Bak said, “The goal generally is not to provide specific punishments, but rather make clear what is and is not acceptable. Is brainstorming ideas using AI tools allowed? Outlining an argument? Creating a draft document? Polishing a few sentences? At what point do you need to acknowledge your use of AI? A clear policy prevents misunderstanding.” He also provided an example of a sample district AI policy — one that Three Village has yet to implement.
The rise of artificial intelligence in education has been quick and widespread, leaving considerable ambiguity about which uses of AI are acceptable and which are not. We aren’t sure what the answer is, but there needs to be a definite one. As Ward Melville students, we believe that spelling out in the Code of Conduct when AI resources may and may not be used would limit the misunderstandings we have about where Three Village stands on modern technology.