AI And The Climate: Minimising Climate Harm When Using Big Tech's Newest Invention

AI is destroying the environment. Is there anything we can do? Here I talk about the strategies I use to get the most out of AI while using it as little as possible.

A smartphone displaying a folder titled 'AI' with Gemini and ChatGPT apps installed
Photo by Solen Feyissa / Unsplash

There’s been a lot of encouragement at my workplace to get more involved with AI - to play around with it, figure out how to work alongside it and so forth. I’m still not wholly convinced, but I’m willing to give it a try anyway. I guess you can label me a reluctant early adopter. Reluctant as I am though, I can’t deny that it has its uses, particularly when I need help parsing large amounts of data (though that’s not without its pitfalls) or when I need to brainstorm some content. To be clear, its ideas are usually terrible, but they make for a good sounding board of sorts.

But among all this, one question looms over me like a storm cloud more than any other: How much energy am I consuming by using AI?

It’s no secret that AI is incredibly energy-hungry. All that processing power, running in high-spec data centres that collectively consume more energy than some entire countries, feels… wrong somehow. Like an option I shouldn’t even be entertaining.

The reality, though, is different. I’m using AI regularly, and I’m sure many of you are too. AI is in everything, to the point where it feels almost shoehorned in where it doesn’t belong. So if AI is here to stay, what am I doing to minimise my environmental impact when using it? Well, a few things.

I actively avoid using it for as long as possible.

That’s probably a no-brainer, but I find it’s important not to fall into the trap of thinking it’s just another tool like Google or YouTube that I can incorporate into my regular rotation. The thing is, it’s not - ChatGPT is like a gas-guzzling SUV, whereas Google is a Honda Civic. Both have their uses, but my default is always going to be the Civic over the SUV, unless I need to tow something heavy.

Same goes for AI. Anything solvable with a Google search, that’s my go-to. If that fails, then I’ll turn to AI for help.

Although, considering Google have now begun injecting AI summaries into some of their search results, a plain search is probably not as energy-efficient as it used to be.

I customise the AI before using it.

A man in a black t-shirt writing in a notebook with his laptop open beside him.
Photo by Kit (formerly ConvertKit) / Unsplash

Customising my ChatGPT is actually really fun, because it flexes my creative muscle. What kind of AI do I need to interact with today? When customising, I make sure to describe, clearly and in detail, exactly what role I want it to play and how I expect it to respond to me.

This really helps because it cuts down on the back and forth - the less you need to provide follow-up context or ask for clarification, the less energy you wind up using. Simple, hey?
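
For anyone who prefers doing this programmatically rather than through the ChatGPT settings screen, here’s a rough sketch of the same idea using OpenAI’s Python SDK. The instruction text and model name are placeholders I’ve made up for illustration; the point is that the context gets sent once, up front, instead of dribbled out over several follow-up messages.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

# The "customisation" lives in a detailed system message: the role the model
# should play and how it should respond, stated once and up front.
CUSTOM_INSTRUCTIONS = (
    "You are an editor for a sustainability blog. "
    "Respond in plain English, in under 200 words, as bullet points. "
    "If my request is ambiguous, say so rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Suggest three titles for a post about reducing AI energy use."},
    ],
)

print(response.choices[0].message.content)
```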

I take my time writing a prompt.

Sometimes I’ll take half an hour to come up with a really solid prompt. Is that long? It feels long to me - it’s like spending thirty minutes trying to come up with an answer to an icebreaker at work, silently panicking as your turn draws nearer and nearer as you realise you are, in fact, the most boring person in the world and don’t have anything remotely interesting to say about yourself wouldn’t that just be craaaaazy hahah-

Anyway. I spend a while making sure I get it right. There’s no time limit on a prompt, no person on the other side getting real impatient at how long it’s taking me (although…). Spending more time writing a solid prompt noticeably reduces the number of follow-up prompts I have to make, which means less energy used.

I ask it to ask me questions.

Short-coated brown dog sitting with one paw raised up like it's about to ask a question
Photo by Camylla Battani / Unsplash

Again falling into the bucket of context-setting: I like to ask my AI to ask me some follow-up questions about a prompt before it gives me its answer. Usually three, sometimes five, these questions shave away any last pieces of ambiguity left over from a less-than-perfect prompt and let the AI do its best work.

There have been times when I’ve been so pleasantly surprised by the questions it asked that, in future prompts, I’ve started anticipating them and answering them before they’re even asked. This cycle of refinement has really helped me drill down on what makes a good prompt.
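
If you’re working through the API instead of the chat window, the same trick looks roughly like this - again just a sketch with made-up prompt text, showing the two-step flow: the model asks its clarifying questions first, then answers once you’ve replied.

```python
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Before answering, ask me exactly three clarifying questions and wait for my replies."},
    {"role": "user", "content": "Help me plan a workshop on low-energy AI habits."},
]

# First call: the model should come back with its three questions.
first = client.chat.completions.create(model="gpt-4o", messages=messages)
questions = first.choices[0].message.content
print(questions)

# Append the questions and my answers, then ask again for the real response.
messages.append({"role": "assistant", "content": questions})
messages.append({"role": "user", "content": "1) Remote. 2) One hour. 3) Non-technical colleagues."})

final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)
```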

I accept the imperfect.

This one’s subjective. To me, AI will never generate the perfect answer. Even with all the context-setting in the world, nine times out of ten I’m getting a response that makes me shrug and go eh, close enough.

Now, I could keep asking for refinement. I could refine endlessly, and every prompt could get me closer to that perfect response. But it’s just not worth it. Almost every time, that imperfect answer is good enough, and solves my problem.

A man in a white shirt and tie holding a blue transparent plastic folder
Photo by Invest Europe / Unsplash

Resources that might interest you:

AI already uses as much energy as a small country. It’s only the beginning. - Vox
The energy needed to support data storage is expected to double by 2026. You can do something to stop it.

Generative AI in Search: Let Google do the searching for you - Google
With expanded AI Overviews, more planning and research capabilities, and AI-organized search results, our custom Gemini model can take the legwork out of searching.

Custom instructions for ChatGPT - OpenAI
We’re rolling out custom instructions to give you more control over how ChatGPT responds. Set your preferences, and ChatGPT will keep them in mind for all future conversations.

Prompt Engineering - OpenAI
This guide shares strategies and tactics for getting better results from large language models (sometimes referred to as GPT models) like GPT-4o.

This AI startup claims to automate app making but actually just uses humans - The Verge
Who could have seen that coming?