Lately, I’ve been hearing the same anxious question again and again from university staff: “Is my use of AI contributing to climate change?”
Tools like ChatGPT, Copilot and Gemini are being widely used to get through mounting workloads. Academics are using AI to prepare teaching materials. Professional services staff are using it to manage their inboxes. Researchers are using it to summarise journal articles. Yet the growing wave of headlines about AI's energy and water use is making some wonder whether all that labour-saving comes at too high an environmental cost.
I work in digital sustainability for Jisc, a non-profit organisation supporting UK further and higher education. My job is to help colleges and universities understand the environmental impacts of digital tools – the good, the bad and the inconveniently complex.
Let’s start with the good. When used wisely, digital technology can reduce environmental impact. Cloud-based systems, when thoughtfully deployed, are often more energy-efficient than ageing on-site servers. Online collaboration tools can cut unnecessary travel. Smart systems for heating, cooling and lighting have helped institutions measurably reduce energy waste.
But here’s the uncomfortable reality: our digital lives come with a rising environmental footprint. Every streamed video, email or notification consumes electricity. The recent boom in AI-powered tools adds a new layer to this, embedding additional high-energy processes into everyday routines and placing increased pressure on the infrastructure that powers them.
This, however, is where the guilt starts to get misplaced.
An academic using ChatGPT to structure a lecture outline or refine a coursework brief does consume energy. One commonly cited estimate puts it at about 3 watt-hours per query – though experts caution that such figures are rough estimates at best, largely because of a lack of transparency in the data provided by AI companies. Still, in the context of everyday digital activity, the energy use is likely modest – certainly no more than the energy used for a short video call or a few minutes of video streaming. For reference, boiling a full kettle uses many times that amount of energy.
Let me be clear: this isn’t whataboutism, and it’s not about saying “AI is fine because Netflix and Nespresso exist.” I’m not suggesting that individual use of generative AI tools is environmentally irrelevant. But we do need to weigh those tools’ impact proportionally. If we’re not routinely questioning the footprint of watching YouTube videos, doomscrolling social media, or leaving webcams on in online conferences – all of which rely on the same energy-intensive infrastructure – then is it fair to single out those using AI tools simply to manage an overwhelming workload?
And the pressure on those in higher education is very real. A recent sector-wide survey found that less than half of academic staff felt able to comfortably manage their workload, and just 44 per cent felt that their well-being was adequately supported at work. In that context, turning to AI tools to save time isn’t indulgent – it’s a practical response to relentless pressure.
This is where the concept of green shifting comes in – a cousin of greenwashing. Instead of exaggerating an organisation’s environmental claims, green shifting subtly redirects responsibility downwards: from corporations to consumers, from systems to individuals. When the climate burden lands on the shoulders of the individuals using AI – rather than the tech giants building the infrastructure or the governments regulating it – our focus is in the wrong place.
We should absolutely be concerned about AI’s energy and water use – but let’s focus our concern where it can make the biggest difference. That means demanding accountability from AI developers: transparent energy reporting, cleaner infrastructure and real investment in renewable power. It means governments stepping up with regulation, standards and incentives for greener data centre operations.
It’s also worth recognising that not all AI is created equal. Generative chatbots like ChatGPT represent only a small fraction of total AI-related energy use. The real heavy energy consumption is happening elsewhere: in video generation and analysis, targeted advertising, recommendation engines and training new AI models – not just chatbots but also the large, general-purpose AI systems that are on the horizon.
The substantial environmental threat, then, isn’t the individual user relying on ChatGPT to summarise meeting notes or using Copilot to draft an email. It’s the unchecked expansion of opaque infrastructure, the systemic absence of emissions data and the lack of guard rails to ensure sustainable deployment.
Individual awareness is good. It helps drive informed choices, build pressure for change and hold institutions to account. But individual blame? That misses the point entirely. Guilt won’t fix the problem – but regulation, transparency and system-level accountability just might.
Cal Innes is a digital sustainability specialist at Jisc.