An article in New York Magazine titled “Everyone Is Cheating Their Way Through College” gained traction in mid-2025, and I often find myself reflecting on it—especially when those persistent ads pop up, promising that “with our AI tool, you can cheat without consequences.” It’s a disheartening reality.
However, the core issue isn’t students using AI; I’m a user myself, and everyone needs to learn how to work with this technology. The real dilemma arises when students claim AI-generated work as their own. At its heart, the conversation about academic integrity and artificial intelligence in higher education is not about technology at all; it is about ethics.
I genuinely appreciate generative artificial intelligence and apply it in various aspects of my life: workouts, recipes, article outlines, lecture revisions, crafting multiple-choice questions, coding for data visualization in R, citation tracking, and beyond. The potential applications are boundless. When employed judiciously, AI amplifies productivity; misused, it magnifies folly. The discussions surrounding academic integrity and AI compel us to confront our identity and intentions.
This debate has divided us into unproductive factions. One side likens AI to a calculator, while the other proclaims it the end of human thought. Both miss the crux of the matter. The “just a calculator” argument ignores how much of the burden of quantitative reasoning calculators and their software successors have already lifted from us. The comparison holds some truth, but that should unsettle rather than comfort us: knowing the keystrokes that produce a parabola is not the same as understanding what a parabola is or why it matters. Conversely, the “end of thought” camp overlooks AI’s potential as an empowering tool when used wisely. Is it an assistant? Fantastic. Is it a substitute? That’s a different story altogether.
Ultimately, the issue lies not with the tool but with its user. Individuals can wield AI either wisely or maliciously—just as with any other instrument. An axe can serve as a resource for a lumberjack like Paul Bunyan or become a weapon in the hands of a horror movie villain like Jason Voorhees.
In 2023, as we began to interact with our new AI companions, I penned a response to cynical students asking, “when am I ever gonna use this?” about the humanities and other non-vocational subjects. My response remains unchanged: “every time you make a decision.” Why? Because the choices you make reflect who you are, and that identity is molded by your experiences and influences. Engaging with history, philosophy, literature, economics, and the liberal arts fosters meaningful connections and cultivates a person who has wrestled with the best that has been thought and written across time, the kind of person who can be entrusted with significant decisions. It’s about refining the art of judgment.
Yet, in a world where outsourcing our cognition to ChatGPT and Gemini is alarmingly simple, we can practice this art poorly. Let’s draw an analogy. If you haven’t yet seen the film Aliens, I implore you to do so—it’s a timeless classic. For those who have, recall the climax where Ellen Ripley dons a P-5000 power loader suit to confront the alien queen. This tool amplifies her strength, allowing her to achieve what would be impossible otherwise.
Regrettably, many students approach AI as if they were donning Ripley’s power loader to hit the gym. Sure, the suit might “lift” 5,000 pounds, but believing that it is making you stronger is comical self-deception. It’s equally absurd to think you could lift that weight unaided, or to convince others that you can. When you submit work generated primarily by AI, you’re not building muscle or learning to lift; you’re merely racking up impressive numbers while your intellectual capacity atrophies.
Of course, using AI can sometimes resemble having a spotter while lifting weights. Personally, I utilize AI as a fitness coach, guiding my next exercise. That’s one productive approach, but far too many students treat AI as if it were lifting the weights for them.
Tools like ChatGPT, Gemini, Grok, and Claude should enhance our capacity for higher-order work rather than obscure our inability to engage with it. Technology has significantly boosted my productivity: I drafted the initial version of this essay on my phone using voice dictation and revised it with the assistance of Gemini and Grammarly. What distinguishes that from submitting AI-generated content? Using dictation tools and AI to refine an essay is akin to employing Ripley’s power loader to shift heavy items. Conversely, utilizing AI to create text and passing it off as your own is like using the power loader suit to feign a workout.
I extend my gratitude to ChatGPT, Gemini, Grammarly Pro, and GPTZero.me for their editorial assistance.