The Responsibility of Educators in the AI Era
As significant institutions declare their intentions to implement and profit from transformative changes in education, workplaces, and everyday life, it is imperative that we critically assess whether these advancements genuinely serve our best interests. Failing to scrutinize these developments invites the loss of our autonomy and responsibility.
The urgency of this discussion is especially pronounced in the context of artificial intelligence (AI). There is a concerning trend among educators to rapidly adopt AI tools like ChatGPT without fully grasping their limitations. Such enthusiasm often overlooks the foundational elements of true education.
Recent research suggests that those most eager to embrace AI tend to be the least informed about it. Many remain unaware of AI's substantial energy costs, the ethical problems with how its training data is sourced, corporate ambitions to replace rather than assist workers, and the risks of expanded surveillance and disrupted democratic processes around the world.
Furthermore, there is little awareness of how often AI gets its facts wrong. Two cited studies found error rates above fifty percent, which means AI output typically requires human fact-checking. And despite the common assumption that accuracy will steadily improve, some experts argue that performance may actually be declining.
At its core, AI merely generates responses based on statistical patterns; it lacks the ability to think or possess genuine knowledge. Hence, relying on AI for writing tasks is particularly problematic. Educator John Warner emphasizes that “the essential unit of writing is not the sentence but the idea.” Writing requires constructing meaning and persuading readers, a task AI cannot authentically perform. As one novelist aptly put it, students using this type of software won’t learn effective writing skills any more than they would achieve physical fitness by using a forklift to lift weights for them.
Additionally, having AI read on our behalf undermines the intrinsic value of literature and the learning experience it offers. Imagine a scenario where the interaction with books and articles is reduced to simplistic summaries generated by computers. Even in contexts where information extraction is the goal, doing so manually often leads to unexpected revelations and insights.
Tech commentator Cory Doctorow has described a future in which people, out of busyness or complacency, use AI software to expand a few data points into a polished document, which the recipient then uses AI to compress back into a summary. This cyclical dependence illustrates the phenomenon of MOBS, Machines On Both Sides, an alarming reality already unfolding in educational settings.
This cycle is perpetuated by various stakeholders: administrators encourage teachers to use AI to draft lesson plans (plans often criticized for fostering rote memorization), while students harness chatbots to complete assignments and are then labeled "cheaters" for using tools that lack real educational merit. Teachers, in turn, use AI to grade those submissions, and the loop closes as students seek additional help from chatbot "tutors."
Empirical evidence of educational benefits from AI remains scarce, and some experts contend that the existing research is poorly designed or misrepresented. Meanwhile, evidence of harm is accumulating: a 2024 study found that high school math students who used ChatGPT initially outperformed their peers but ultimately did worse than students who worked without AI assistance, because they never developed conceptual understanding. Another study documented a measurable "cognitive cost" to writing essays with AI, finding that heavier AI use correlated with weaker critical thinking skills.
So why is AI so appealing? For many, education is perceived as a mechanical process of earning credentials, focusing on completing set tasks for grades. This transactional approach to education is dangerously reinforced by AI technologies.
For those convinced of AI's benefits, experimentation may be justifiable. But we should take care not to let companies that prioritize their own interests over ethical considerations dictate the narrative of AI's inevitability.
Training students in AI usage can diverge significantly from vital educational practices such as critical reasoning, in-depth reading, and the coherent organization of thoughts. Worse yet, it may inadvertently teach students how to circumvent these essential skills. Instead, educators should instill a critical perspective on AI, helping students discern our inclination to project human-like qualities onto these sophisticated algorithms while recognizing their fundamentally generic output.
Those concerned about chatbots in educational spaces should voice their apprehensions and find solidarity with like-minded colleagues. Just as the ubiquitous email signature announces "Sent from my iPhone," imagine a sign in classrooms proclaiming: "This school operates without AI; teaching and learning here are conducted solely by human beings."