The environmental impact of artificial intelligence (AI) is a growing concern as the technology becomes more prevalent in daily life. Every time we chat with an AI assistant, even just to ask for a recipe, we unknowingly contribute to energy consumption and carbon emissions. The scale of this impact is vast, with billions of AI interactions occurring worldwide every day.
Unfortunately, major AI providers do not disclose complete, verifiable per-query energy and emissions data. This lack of transparency makes it difficult for consumers to make informed choices, regulators to set evidence-based policy, and companies to be held accountable for the environmental cost of scaling AI services. Estimates for a single AI query vary widely, ranging from 0.03 to 68 grams of COâ‚‚, making it challenging to understand the true impact without context.
The Hugging Face AI Energy Score aims to address this gap by offering a standardized benchmarking initiative that rates AI models on energy efficiency across common tasks. However, the leaderboard is primarily populated by open-source models, as most major commercial providers have chosen not to participate.
The energy consumption of AI systems is directly related to the number of tokens they generate, with more tokens requiring more computational cycles, time on energy-intensive chips, and heat dissipation in data centers. Researchers use token counts as a standardized measure to compare energy consumption across models, similar to how miles per gallon allows for comparisons between cars with different engines.
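The token-based comparison described above can be sketched as a small calculation. The per-token energy figures below are hypothetical placeholders for illustration, not measured values for any real model; the point is that hidden tokens must be counted alongside the visible answer:

```python
# Hypothetical energy figures (Wh per 1,000 generated tokens).
# Real values vary by model, hardware, and data center; these are
# placeholders for illustration only.
WH_PER_1K_TOKENS = {
    "small-model": 0.3,
    "large-model": 2.0,
    "reasoning-model": 2.0,  # same per-token cost as large-model
}

def query_energy_wh(model: str, visible_tokens: int, hidden_tokens: int = 0) -> float:
    """Estimate energy for one query from total tokens generated.

    Reasoning models also burn energy on hidden 'thinking' tokens,
    so a fair comparison counts them, not just the visible answer.
    """
    total = visible_tokens + hidden_tokens
    return WH_PER_1K_TOKENS[model] * total / 1000

# A 500-token answer, with a reasoning model generating 4,000 hidden tokens first:
print(query_energy_wh("large-model", 500))            # → 1.0 Wh
print(query_energy_wh("reasoning-model", 500, 4000))  # → 9.0 Wh
```

Even at identical per-token cost, the reasoning model's hidden tokens multiply the total, which is why token counts, not response length alone, drive the comparison.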
While some companies, such as OpenAI and Google, have disclosed specific per-query energy figures, many others operate closed models whose operational details are closely held. This lack of transparency hinders efforts to understand and mitigate the environmental impact of AI technology.
The best available data comes from multiple sources that approach the problem differently, providing estimates of energy and carbon emissions per standard AI text query by provider. However, without standardized disclosure and transparency, it is challenging to accurately assess the true environmental impact of AI services.
One example of the challenges in assessing AI’s environmental impact is xAI’s Grok chatbot, which was initially touted as the most eco-friendly chatbot. However, further investigation revealed that the infrastructure behind Grok, particularly the Colossus facility in Memphis, Tennessee, operated with methane gas turbines without proper air pollution permits. This facility emitted significant amounts of nitrogen oxides, making it one of the largest industrial emitters in the area.
Overall, the environmental impact of AI technology is a complex and evolving issue that requires greater transparency, standardized disclosure, and accountability from major providers. As AI continues to advance rapidly, its environmental implications must be part of the conversation, alongside a push toward more sustainable practices in the industry.

Model choice matters, too. Standard models generate a response one token at a time with no extra work, while reasoning models produce thousands of hidden tokens before emitting a visible answer. In one benchmarking study, reasoning models consumed significantly more energy than standard models on the same task: o3 and DeepSeek-R1 each consumed over 33 watt-hours for a single long prompt, while GPT-4.1 nano consumed significantly less energy for the same task.
The University of Rhode Island’s AI lab estimated that GPT-5, which integrates reasoning capabilities, consumes an average of over 18 watt-hours per medium-length response. With extended reasoning mode enabled, energy consumption can increase five- to tenfold, potentially exceeding 40 Wh per query. As the industry moves towards reasoning models as the default, the energy implications of this trajectory should be a part of the public conversation.
Estimates of the carbon footprint of common AI tasks reveal that tasks such as asking about the weather or drafting a 500-word email have relatively low carbon costs, while tasks that involve deep research or extended thinking can cost significantly more. The energy cost of AI varies with the model type, token count, and the task at hand. By choosing the right model for the right task, you can reduce your AI carbon footprint significantly.
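Converting per-query energy into carbon is a one-line calculation once you know the grid's carbon intensity. The energy figures and the 400 g/kWh intensity below are illustrative placeholders; real intensity varies widely by region and energy source:

```python
def grams_co2(energy_wh: float, grid_intensity_g_per_kwh: float) -> float:
    """Convert per-query energy (Wh) to emissions (g CO2) for a given grid.

    grid_intensity_g_per_kwh is grams of CO2 emitted per kWh generated;
    it varies widely by region and by the provider's energy mix.
    """
    return energy_wh / 1000 * grid_intensity_g_per_kwh

# Illustrative values: ~0.3 Wh for a short query, ~40 Wh for an
# extended-reasoning query; 400 g/kWh is a rough placeholder intensity.
print(grams_co2(0.3, 400))   # ≈ 0.12 g CO2
print(grams_co2(40.0, 400))  # ≈ 16.0 g CO2
```

The same query emits very different amounts of CO₂ depending on where and how the data center sources its power, which is one reason per-query energy alone understates the disclosure problem.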
Efficiency improvements in AI systems are real: Google reports a 33x reduction in energy per median prompt over one year. But the industry's total compute demand is growing rapidly, driven by reasoning models, continuous AI agents, AI-generated video, and AI embedded in every app. Individual users can still reduce their footprint:

- Right-size your model to the task and write efficient prompts. Output tokens cost three to five times more energy than input tokens, so asking for a three-sentence summary instead of an open-ended response significantly reduces energy consumption.
- Consider the energy source behind the provider: Google's carbon-free energy rate and Apple's on-device processing have very different environmental profiles.
- Audit your AI agent usage. Always-on agents can consume excessive amounts of energy, so evaluate whether continuous operation is actually necessary.
- Demand transparency from AI providers by asking for standardized environmental metrics, such as per-query energy data and data center energy sourcing.
- Use resources like the Hugging Face AI Energy Score leaderboard to benchmark model efficiency.
- Finally, engage AI only when necessary. Not every question requires a large language model, and the most sustainable query is the one that was never made.
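The output-versus-input asymmetry can be made concrete with a small sketch. The absolute per-token energy below is a hypothetical placeholder; only the ratio (output tokens costing roughly three to five times input tokens) comes from the text, and the 4x multiplier is simply the middle of that range:

```python
# Hypothetical base cost per input token, in Wh. The 4x output
# multiplier sits in the middle of the 3-5x range cited above.
E_INPUT_WH = 0.0005
OUTPUT_MULTIPLIER = 4

def prompt_energy_wh(input_tokens: int, output_tokens: int) -> float:
    """Estimate query energy, weighting output tokens more than input."""
    return E_INPUT_WH * (input_tokens + OUTPUT_MULTIPLIER * output_tokens)

# The same 200-token question, with two different response lengths:
concise = prompt_energy_wh(200, 60)      # "answer in three sentences"
open_ended = prompt_energy_wh(200, 800)  # unconstrained response
print(f"{concise:.2f} Wh vs {open_ended:.2f} Wh")
```

Because output tokens dominate the total, capping the response length cuts energy far more than trimming the prompt, which is why the three-sentence-summary tip works.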

