This claim is a lie. It ran in The Washington Post, in an article provocatively titled “A bottle of water per email: the hidden environmental costs of using AI chatbots”. ChatGPT almost certainly does not consume a bottle of water when writing one email and never has. Those cited as the authority for this claim are well informed enough to know it isn’t true. They either deliberately lied to a newspaper with millions of readers or allowed that newspaper to claim their authority for this statement without issuing a correction or qualification of any kind.
Why it matters
Being correct matters. If you are wrong about what is causing a problem, you will have a much harder time fixing it.
LLMs replying to users simply do not use much water. Some water is sometimes used for cooling, but the amount is negligible. Most of the water attributed to LLMs is consumed generating the electricity they run on, because power generation requires water. Querying an LLM generally uses less power, and therefore less water, than making toast or leaving one of your lights on for a few minutes.1
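To put numbers on that comparison, here is a quick back-of-the-envelope sketch in Python. The per-query figures are the paper’s own estimates (discussed at the end of this piece); the appliance wattages are typical values I am assuming for illustration, not measurements.

```python
# Back-of-the-envelope comparison, not a measurement.
# Per-query figures: the paper's estimates (0.004 kWh central,
# 0.016 kWh pessimistic). Appliance wattages: assumed typical values.

def kwh(watts: float, minutes: float) -> float:
    """Energy in kWh used by a device of the given wattage over `minutes`."""
    return watts * (minutes / 60) / 1000

llm_query_central = 0.004      # kWh per query, paper's central estimate
llm_query_pessimistic = 0.016  # kWh per query, paper's pessimistic estimate

toaster_run = kwh(watts=1000, minutes=3)     # one run of a ~1 kW toaster
bulb_few_minutes = kwh(watts=60, minutes=4)  # 60 W bulb left on 4 minutes

print(f"toaster, one run:        {toaster_run:.3f} kWh")
print(f"60 W bulb, 4 minutes:    {bulb_few_minutes:.3f} kWh")
print(f"LLM query (central):     {llm_query_central:.3f} kWh")
print(f"LLM query (pessimistic): {llm_query_pessimistic:.3f} kWh")
```

One toaster run comes out above even the pessimistic per-query estimate, and a 60 W bulb left on for four minutes matches the central one.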
Focusing on the behavior of individual LLM users is counterproductive for reining in the environmental impacts of AI. Because AI companies are large, and because some of them make a point of behaving unethically to move faster, they can do substantial environmental harm. Focusing on permitting and enforcement, especially around power generation, could mitigate these impacts. Focusing on whether specific people are using an LLM is extremely unlikely to ever help.
This claim about water use has been republished in dozens of other outlets. It is probably the single most influential statistic in discussions of AI’s impact on the environment. Anyone who believes it will be trying to solve a problem that doesn’t exist.
If you want to understand the power and water use of AI or LLMs in more detail, I would recommend Andy Masley’s writing about AI's environmental impact or The MIT Technology Review’s series on the subject.
Power generation is worth paying close attention to as we contemplate grid expansion and new power plants for some of these companies. I am very grateful that those outlets keep track of it in detail, because it spares me from feeling like I should do so myself.
Why it’s a lie
AI as a whole uses enough energy, and sometimes enough water, to be worth keeping track of, but it does not yet consume an absurd amount of either. Querying ChatGPT or any other LLM to write an email uses almost no energy or water whatsoever.
Awkwardly, the Washington Post does not publish the reasoning behind its headline, nor do any of the other media sources covering this claim. It does publish a link to a paper by the researchers it worked with, and from that paper and other media quotes by those researchers we can try to figure out how you could possibly arrive at 519 ml of water per email.
For a worst-case estimate using the paper’s assumptions, if:

- you query ChatGPT 10 times per email,
- you include water used to generate electricity,
- the datacenter hosting it is in the state of Washington,
- the datacenter uses the public power grid or something close to it,
- water evaporated from hydroelectric power reservoirs could otherwise have been used productively for something other than power generation,
- and LLMs were no more efficient when they were being sold for profit in 2024 than they were in 2020, when they had never been used by the public,2

then it is true that an LLM uses 500 or more milliliters of water per email.
You can reach a similar figure by other routes, since the paper breaks out water use by state. If the datacenter hosting ChatGPT is not in Washington, for example, it will have a higher carbon footprint but a lower water footprint, and you will have to query it 30 or 50 times to use up an entire bottle of water. This is not what anyone imagines when they hear “write a 100-word email”.
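Here is a minimal sketch of how that arithmetic has to stack up, in Python. The only number below taken from the paper is its pessimistic 0.016 kWh per query; the query counts and water intensities (liters of water consumed per kWh of electricity generated) are placeholders I am assuming purely for illustration.

```python
# Back-solving the headline figure, not endorsing it.
# Only the 0.016 kWh/query figure comes from the paper; the water
# intensities below are assumed placeholders for illustration.

KWH_PER_QUERY = 0.016  # paper's pessimistic per-query estimate

def water_liters(queries: int, liters_per_kwh: float) -> float:
    """Water attributed to `queries`, nearly all of it charged to
    power generation rather than to datacenter cooling."""
    return queries * KWH_PER_QUERY * liters_per_kwh

# Scenario 1: hydro-heavy Washington grid, charging reservoir
# evaporation to the LLM (assumed ~3.1 L/kWh).
print(water_liters(queries=10, liters_per_kwh=3.1))  # ~0.50 L per email

# Scenario 2: a lower, more typical water intensity (assumed 1.0 L/kWh),
# where you need ~30 queries per email to empty the same bottle.
print(water_liters(queries=31, liters_per_kwh=1.0))  # ~0.50 L per email
```

Every factor has to sit at or near its worst case simultaneously before a single email “drinks” a bottle of water.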
That study’s authors are well aware that none of these assumptions is realistic. Information about how efficiently LLMs run when they are served to users is publicly available. People do not generally query an LLM fifty times to write a 100-word email.
It is completely normal to publish, in an academic context, a worst-case estimate based on limited information or to pick assumptions which make it easy to form an estimate. In this setting your audience has all the detail necessary to determine if your worst-case guess seems accurate, and how to use it well.
Publishing a pessimistic estimate that makes this many incorrect assumptions in a newspaper of record with no further detail is just lying to readers.
Even if the figure were true, the reporting is incredibly misleading. It fails to note that most of the claimed water use is water used for power generation. “AI is using up too much power” is not nearly as interesting a headline: people can compare the power you claim it takes to write an email to their toaster or their PC (both use more). But people often do not know that electricity generation consumes water, and presenting water-use statistics without clarifying that they come mostly from electricity generation is bound to confuse anyone who doesn’t.
Comparisons of resource usage to farming, here and in other articles citing the same researchers, consistently underplay the impact of farming on water availability. Perversely, this seems to exonerate the farming practices actually causing crises in water-poor regions in pursuit of scoring extra points against LLMs.
If you care about the environment, these things seem like table stakes. You should avoid lying in the newspaper of record. You should especially avoid doing so in a way that blames customers instead of corporations, or that blames entirely the wrong category of business for a major environmental problem. We aren’t going to get any meaningful change if we lead people to solve the wrong problem.
Why it has spread
This still leaves me with an itch about this specific claim. How did it become the dominant story in spite of being a complete lie?
The article is well-written and lays out its data with very good graphics. It is persuasive, and millions of people will have read it. Even those who do not read it will see the headline, “a bottle of water per email”, which makes sure the message lands: the Washington Post is claiming that ChatGPT uses up a bottle of water per email.
The article, compellingly, centers on the morality of the customer’s actions. You, the end user, are held responsible for consuming half a liter of water every time you use an LLM. If this were true, you could, clearly, have a meaningful impact by boycotting ChatGPT. It would also be important to try to prevent other people from using ChatGPT, since they are directly responsible for using up a lot of water.
Personal moral choices make for a compelling story. They are also, unfortunately, a very good way to deflect attention from the business to the customer. Passing moral judgement on people we know or talk to is satisfying in a way that criticizing a business is not. It is more difficult to be morally outraged at a corporation.
Why would you lie about this?
In short? For attention.
Newspapers at large seem to prefer negative coverage of tech companies. Negative coverage generates a lot of attention, and we are, famously, living in an attention economy. (Also, the tech companies frequently deserve it.)
When covering a story about two or more people, it is a normal and expected journalistic practice to at least contact everyone involved for comment. When covering a story about a technology, it is apparently considered acceptable to consult one “expert”. If this person lies to you, or is willing to let you lie under their name, you can publish the story. You will probably get more attention on the story if what they tell you is inflammatory, so you have good reason to seek out inflammatory experts.
Experts, in turn, are also part of an attention economy. If they work within academia, their prominence within their field depends upon how important their work is perceived as being. If they do not, their income depends directly on their reputation among potential clients or on their ability to attract subscriptions.
Quite possibly everyone involved thought they were doing good work. You could maybe argue that even if the headline isn’t true and the numbers are made up, the story still helps to raise awareness about environmental problems. This seems like a weak justification. We would most likely be better off on this issue if they had said nothing about it at all.
This is true even if the paper I am criticizing is correct in its estimate of 0.004 kWh per query, remains true at their pessimistic estimate of 0.016 kWh per query, and is overwhelmingly true if their estimate is too high, which it definitely appears to be.