In this episode, I'm joined by colleagues James Camilleri and Hélène Sauvé to explore the complex topic of AI sustainability. It's a conversation that spans the environmental impact of AI, from carbon emissions to water usage, and examines whether the industry narrative matches reality.
We delve into the trade-offs between energy and water, the role of lifecycle analysis, and how approaches such as federated learning and smaller models could help. We also ask whether "green AI" is more than marketing hype, and what practical steps organisations can take to make AI genuinely sustainable.
Useful links for this episode
- Technology Carbon Standard – Open-sourced by Scott Logic
- Balancing AI Innovation and Sustainability: Our presentation at HM Treasury ID25 – Oliver Cronk
- Greener AI – what matters, what helps, and what we still do not know – James Camilleri
- Introducing AI-related emissions into the Technology Carbon Standard – Hélène Sauvé
- Mapping the carbon footprint of digital content – Hélène Sauvé
- Sustainability in AI – why we must listen to the smaller players – Hélène Sauvé
- Beyond the Hype: Can technology sustainability really make a difference?
- Beyond the Hype: Can we do better than "carbon aware" computing?
- Artificial intelligence threatens to raid the water reserves of Europe's driest regions – Marianne Gros and Leonie Cater, Politico
- Will new models like DeepSeek reduce the direct environmental footprint of AI? – Chris Adams
- Empire of AI – Wikipedia
- V-Bucks – Fortnite Wiki
Subscribe to the podcast
Transcript
Please note: this transcript is provided for convenience and may include minor inaccuracies, and typographical or grammatical errors.
Oliver Cronk
Welcome to Beyond the Hype, a monthly podcast from the Scott Logic team where we cast a practical eye over what's new and exciting in technology – everything from AI to APIs, carbon-aware systems to containerisation, Kafka to Kubernetes, and message queues to quantum computing. We look beyond the promises, buzz and excitement to guide you towards the genuine value. My name is Oliver Cronk, Scott Logic's Technology Director.
Each month on this podcast, we bring together friends, colleagues, and experts for a demystifying discussion that aims to take you beyond the hype. In this episode, I'm joined by my colleagues James Camilleri and Hélène Sauvé to explore the complex topic of AI sustainability. It's a conversation that spans the environmental impact of AI, from carbon emissions to water usage, and examines whether the industry narrative matches reality. We dig into trade-offs between energy and water, the role of lifecycle analysis, and how approaches such as federated learning and smaller models could help.
We also ask whether green AI is more than marketing hype, and what practical steps organisations can take to make AI genuinely sustainable. I start by setting the scene for the discussion. This is something we've all been exploring quite a bit recently, particularly myself: the topic of AI sustainability. And yeah, there are a number of different dimensions, which we'll get into, but I think the large portion of what we're going to be talking about in this episode is the perception that the technology providers put out about AI versus some of the reality. So, what is being reported versus what isn't – will AI solve climate challenges – and of course, there are question marks about enabled emissions from using AI to do extractive things and perhaps environmentally harmful things.
So, as I said, this is a massive topic, because sustainability covers so many areas and there's also so much happening in the AI space, which makes this quite challenging to get your head around. We'll cover a spectrum of issues, but I think the social issues are probably ones we're going to have to park for this episode; otherwise, the episode would be way too long.
So, we're going to focus mainly on the kind of environmental impacts, and this will build on some of our previous episodes that many of our listeners will have listened to, including the episode we recorded with Hannah Smith about Grid Aware, and also the conversation we had with Jeremy Axe about technology sustainability, which we recorded a while back now. For those that haven't checked out those episodes, I highly recommend them as introductory episodes to the topic we're going to talk about.
And I'm really excited and actually heartened by the fact that mainstream media, the BBC and newspapers, have been reporting on the impacts of AI and what its unchecked growth might look like and what it might do. So, yeah, we'll obviously get back into that. But before we do, let's kick off with some introductions. So perhaps, James, if you can kick us off with an intro, please, that'd be great.
James Camilleri
Sure, I'm James Camilleri. I am a Lead Developer at Scott Logic, and I've been doing some research on this subject on and off for the last year and a half, I suppose. I've contributed to the Tech Carbon Standard and been a software developer for a good number of years.
Oliver Cronk
Great. Thanks, James. And Hélène, welcome back to the podcast.
Hélène Sauvé
Thank you. I'm Hélène Sauvé. I'm a Software Developer in the Bristol office, and I've been researching the environmental impact of AI since July this year.
Oliver Cronk
Brilliant. Thank you. And James, actually, I wanted to start with what you just touched on in your intro, right? Because we've been wanting to do something in this space for a while, having the Tech Carbon Standard and also doing AI projects.
We've been wanting to connect up the dots somewhat and ask: what is the narrative around AI sustainability, and what practically can we do about it? What can our clients do about it? So perhaps you can kick us off with a bit of the work that you've been doing to look at sustainable AI research.
James Camilleri
So, we got the team together, which includes Hélène, a few months ago, back in July. And the first thing we wanted to do was a literature review, because when we spoke about this a couple of years ago, there wasn't a huge amount of literature available. Not many people had done a lot of research, because I think at the time we were still focused on the "how" and the "can we", not on what the effects of this are – none of the "should we".
So, that's changed in the last few years. In fact, since we finished the literature review, I'm pretty sure there's been a pile more papers published. I think the day I wrote a blog about it, someone went, "Oh, why didn't we include this one?" And, yeah, that one came out a few days later.
But yeah, what we were trying to do was get an idea of the state of knowledge around this. And we were looking at what methods people were using to evaluate these impacts, what kind of tooling they were using, and whether or not they'd come up with any mitigation strategies around it.
Oliver Cronk
Yeah. And Hélène, you are also looking at this, and looking at the kind of beyond-the-obvious things that are happening, right? Not just the energy usage, but also things like water, right?
Hélène Sauvé
Yes, absolutely. So, the first thing is, following the review that James mentioned, we introduced a new category called Content to account for the content and data that foundation models are trained on, but also digital content as a whole. And as you pointed out, I personally came into the research looking for carbon, and I did find carbon, but I also found water. And it's an interesting topic, because it is less talked about and perhaps less obvious, but it's actually a very pressing issue that needs to be talked about alongside the carbon footprint of AI.
But the conclusion that I drew from that research is that the carbon footprint and the water footprint need to be thought about together. They're not substitutable for each other, and in fact, optimising for one might hinder the other.
Oliver Cronk
Yeah, it's quite an interesting relationship. And I think before we get into this more, maybe we should just unpack: why water, right? Because unless you know how data centres work and know about the power grid, you won't necessarily know what the role of water is. So James, perhaps you could help by unpacking where water is involved in AI?
James Camilleri
Okay, so it's in two main places. The obvious one is the water cooling used in data centres. So, they'll take in an amount of water, pipe it around the data centre, along with other coolants, to keep the machines nicely within operating temperatures, and then do something with that water.
Now, what happens next kind of determines their water usage, and as I understand it, there are two primary types of system that are used: open and closed. So, in an open system, you take in water from the main water supply, you cool all your stuff, and then you dispose of it and keep pulling in fresh water, and…
Oliver Cronk
Maybe we should touch on the two key ways that happens, right? Because you can obviously dump that water into a watercourse of some kind, which has environmental knock-on implications for any wildlife living in that watercourse, because that water's often very warm and the water it's going into may be quite cool. Secondly, the other thing, I guess, is evaporative cooling, isn't it?
Where you have the cooling towers and you essentially create clouds by evaporating water into the sky to get rid of the heat.
James Camilleri
So, there's a couple of problems this can cause, as you've just alluded to. Changing the temperature of the water can have huge impacts on the wildlife. I know there was a place in France that I was told about where they started having algae blooms, which then of course suffocated the fish, because there wasn't meant to be huge amounts of algae in that part of the river. It can also potentially deprive the local water system of a supply.
So, if you do this in a water-scarce part of the world that needs it for agriculture, it's possible the data centre is actually depriving local farmers of the water they need to irrigate their crops.
Oliver Cronk
There are some specific examples of this happening in Spain, and we can perhaps put a link in the show notes to those, because that's been quite a famous example in Europe. But yeah, this is happening all over the world, right, wherever there is water scarcity. But sorry James, I jumped in because you were talking about the main, most obvious one for those that perhaps know about data centres and know they need to be cooled down. But there is another one, which is less direct, isn't there?
James Camilleri
Well, before we move on from data centres, there's also the closed loop, which we should mention briefly as well, which is essentially where you take the water in and it stays more or less in a closed system. So, they'll pipe it through, cool it, and then pipe it through again. And there's very little wastage in that, but it's a more expensive solution, so not all data centres have got it. And it's got its own issues – of course it's still radiating large amounts of heat – so it's not a silver bullet solution, but it's definitely better than the open water cooling.
Oliver Cronk
But I guess when you say better, it's probably better from a water perspective, but I'm guessing it probably uses more electricity.
James Camilleri
I imagine so. I haven't done enough reading about it because I don't…
Oliver Cronk
But I'm pretty sure you were talking about the sort of trade-off earlier, and there is definitely a trade-off going on between water and electricity, right? Some of the cheaper ways of cooling a data centre, from an electricity-use point of view, often involve effectively being more wasteful with water, is my understanding.
James Camilleri
Yeah, that's mine too. And this leads nicely into the other source of water usage that's less obvious, and that's the water usage at the power station. So, if you are using more electricity – most power stations involve water in power generation at some point.
Most, not all. And if you look at different forms of power generation, they have different water usages associated with them. So, interestingly, if you look at nuclear, it's got the lowest carbon emissions but the highest water usage, because they have to pump all that water around to keep everything at the correct temperatures, for everything to work correctly and to do some of the transformation into electricity. So again, we see the trade-off between carbon and water, even within the power station.
But of course, if you've got a power station that you now put a data centre next to and start pulling gigawatts of extra electricity, the power station will start using lots more water to go along with that. So, we call that the source water.
Oliver Cronk
And there's an obvious example as well, I suppose, with hydro – hydro plants are using dammed water, right? So, if you want more power, you need to release more water down that dam, which means more water is going downstream. But I guess there are other power sources, like renewables – wind and solar – that obviously don't typically use water in their operation. So, I guess there isn't always a correlation of the lower the carbon, the more water is used.
I think, in general, the correlation I would expect would be the more carbon intensive, the more water is used, because it's typically used by fossil-fuel-fired power plants. Because, to your point, I used to work in energy, and there's this concept of thermal power plants: the point is that you basically use heat to create steam, and that steam turns a turbine round, and that turbine is attached to a generator, and as the turbine spins, the generator spins and creates electricity, right? That's how power has been generated for many, many years. It's just that renewables have come online and changed the game, but a wind turbine is exactly the same.
It's just spinning the generator itself rather than needing, you know, hot steam to turn the blades round; it's using the wind. So, I think sometimes some of these things require you to understand a bit more about the energy system than the average person does, and the interconnectedness of these things, right?
So, it's tricky. But Hélène, I wanted to come to you next, because of course we've spoken quite a bit about nuclear, and I know the French have got a lot of cheap, abundant nuclear power, which is often why, in Europe, they're perhaps the place that people are picking for running AI. And I know Mistral's been a bit of a poster child, obviously a French AI model, and I wondered if you wanted to talk a little bit about the motivations, I suppose, that the French had for developing their own LLM.
Hélène Sauvé
Yes, of course. So, Mistral is one of the startups that was selected by a programme in France called French Tech. It's an investment plan for innovative startups in strategic sectors, AI included, but there's also cybersecurity, quantum computing and other sectors.
And the purpose of it is to build France's sovereignty in terms of tech and other sectors, but also to increase economic attractiveness, of course. And I think that, more broadly, it's also to build the tech sovereignty of the European Union. So, Mistral is a good example, as are other similar companies that drive innovation in tech whilst also meeting the requirements of the European Union.
Oliver Cronk
Yeah, it's really interesting. I mean, I guess I slightly picked on you for that question because obviously you're French. And I did also wonder how much the French language played into this, because the French are obviously very passionate about their language. And I have to be a little bit careful here as an English person, because I'm incredibly lucky that English is generally used in a lot of parts of the world as the business language.
And so, I completely understand other language communities wanting to be quite defensive, but I got the sense that the French, being very protective about their language, wanted to perhaps have a model that was French-language centric rather than English-centric. Because, let's face it, Silicon Valley is often very English-first – in fact, American English, not even the British form or other forms of English.
I wonder whether that played into it as well, do you think? Or is that just me being a classic Brit and stereotyping French people?
Hélène Sauvé
I think that's possible. Yeah. I think, maybe more than protecting the French language, it's also the attraction of a product that is made in France.
Oliver Cronk
Yeah, before we move on though, we potentially skipped over something quite important. You mentioned some of the updates you've made to the Tech Carbon Standard. I wonder if you could talk a little bit more about those.
Hélène Sauvé
Yes, sure. So, when we made this latest update of the Tech Carbon Standard, we introduced AI-related carbon emissions. And in order to do that, we looked at the different actors and also the different scenarios of where AI is used. And that includes training – pre-training a foundation model, fine-tuning it – but also self-hosting and, of course, the case of inference, so querying an AI model.
As I said, we introduced a new category called Content. And we also included new practical guides on how to reduce carbon emissions – credit to my colleague, Sarah, who wrote those guides. The guides introduced are on how to reduce carbon emissions in AI, but also how to reduce carbon emissions in testing.
Oliver Cronk
Great. And James, on that, I wanted to come back to you to talk about the difference between training and inference, because I remember that was quite a prominent thing out of the research that all of you working on this came up with, right? There was this sort of awareness of the difference, and perhaps some of the myths around it.
James Camilleri
Absolutely. I mean, I'm not sure that there are myths as such, but I think the story – well, perception and reality – has changed a little bit over the last six months to a year, I'd say. So, when LLMs first kind of made the big time, there was an implicit understanding that the bulk of the energy was being used in training, right?
Because, I mean, intuitively that's true. It takes many hours and many thousands of documents and a grid of GPUs and whatnot in order to train a model, and that's going to soak up a lot of electricity. But what's happened over that time, with the AI hype and the AI bubble and all the rest of it, is that the uptake of these services has been massive.
And there was a really interesting paper where they had a look at the cost of making an inference – a query to an LLM. And more specifically, the kind of query, because not all queries are created equal, right? So, generating words of text and generating an image are quite different in the amount of processing power required. And you also get a difference across how big the model is – how many parameters the model was trained on has an impact.
There are a few different aspects to how this is done, and a couple of the papers we found made a good job of splitting those up and had a good methodology for measuring them. But the conclusion at the end of it was that we've hit the point now where inference is dwarfing training in power consumption, simply because training is kind of a one-off cost. So, it's a big hit right at the beginning – it's your big investment – but inference happens constantly.
So, I read something that said this summer ChatGPT was processing something like 2.5 billion queries a day. So, even if each one is only using a teaspoonful of water, how much is 2.5 billion teaspoons of water?
I can't actually wrap my head around that number, about how much that must be. I can't visualise it.
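As a rough illustration of the scale James is gesturing at here (assuming a metric teaspoon of about 5 ml, and taking the 2.5 billion queries a day figure at face value; the teaspoon-per-query is his illustration, not a measured value), the arithmetic works out like this:

```python
# Back-of-envelope check on "2.5 billion teaspoons of water a day".
QUERIES_PER_DAY = 2.5e9
TEASPOON_ML = 5                   # assumed metric teaspoon
OLYMPIC_POOL_LITRES = 2_500_000   # a 50 m pool holds roughly 2.5 million litres

litres_per_day = QUERIES_PER_DAY * TEASPOON_ML / 1000  # millilitres -> litres
pools_per_day = litres_per_day / OLYMPIC_POOL_LITRES

print(f"{litres_per_day:,.0f} litres per day")   # 12,500,000 litres per day
print(f"~{pools_per_day:.0f} Olympic pools per day")  # ~5 pools per day
```

So a teaspoon per query would already be on the order of five Olympic swimming pools every day, before counting the source water at the power stations discussed earlier.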
Oliver Cronk
Yeah, the scale is pretty staggering, right? And it's interesting thinking about the different aspects of the AI journey, I suppose, or supply chain or value chain. And we've had some recent conversations with colleagues of mine on the GDSA (Government Digital Sustainability Alliance) AI Working Group.
And I think that also kind of drew out the point about lifecycle analysis as well, didn't it, James? That's an interesting piece that we need to make sure we don't overlook, because there are some fundamental sustainability approaches that are really important to stick to. And this is where I think it's not helpful when the cloud vendors talk about AI being sustainable, but often say that just through a lens of talking about the operational energy. So, the fact that they might have renewables, or they've purchased renewables or invested in renewables, perhaps allows them to claim that the energy they're using in that data centre is low carbon.
But in some cases, it's more of an accountancy thing rather than a full lifecycle analysis type thing, isn't it?
James Camilleri
I mean, most of the studies we looked at used some form of lifecycle analysis, which makes perfect sense because it's a tried and tested way of looking at the full impact of whatever the system under scrutiny is. So, for those who don't know, a lifecycle analysis goes right from dragging the materials out of the ground, through manufacturing and distribution, then using the product, whatever it is, and then finally disposing of it. So it's a proper cradle-to-grave way of splitting up the problem and looking at each section. The problem is that when you're looking at tech especially – although I suspect almost every industry has this problem – it's really hard to get data on that whole spectrum, because we only exist in this tiny part of it. We often don't see where our goods go after we manufacture them, and after we write software, we don't know much, in the grand scheme of things, about where it's run, how long it's run for and that kind of thing.
And it's quite hard to get data on all of those stages for even the laptop and the microphone that we're using right now. So, sometimes the data does exist, but how reliable is it?
Oliver Cronk
Yeah. And that kind of strikes a chord with the conversation I had at Treasury – and this is definitely something we can put in the show notes – the talk I gave, which covered the importance of measurement. And of course, I talked about the Tech Carbon Standard being one possible way of having a standardised approach to measurement, but also the importance of not losing the human-centric approach.
So, I think there is also a general theme, I suppose, that I'm seeing – I'm sure others are seeing it as well – of trying to justify these very expensive, quite brute-force investments with the idea that we can just replace a whole load of people in the future, or that at some unspecified point in the future we'll be able to replace people. I think the reality for those that work in technology is that we know that's fairly unlikely, particularly with the current generation of technologies, right? Large Language Models are really impressive, of course they are, but they are largely language mimickers – they're kind of reconstructing content, which can be very impressive, but it's not really intelligence. To think that we can just hand over decision-making and various other quite complex tasks, particularly those in high-risk, highly regulated areas, is a bit of a fallacy.
So, part of my talk track there was about being more human-centric and empowering people to pick the right tools for the job. And I guess this starts to come on to solutions, right, James? In terms of thinking about using smaller models, or using models that are more refined to the task at hand.
James Camilleri
Yes. So, one of the things that came out of the research was that using smaller models that were more targeted in their capability used up less energy – they take less to train and they also take less to infer, because they've got a smaller knowledge base to draw on. And if you think about it, that's intuitively true: if you've got a system that's made for answering questions about Harry Potter, it doesn't need to know the capital of Canada or have read the complete works of Shakespeare, because that's outside of the knowledge it requires to answer those questions. And equally with programming tools: if it's seen enough examples of good programs, it doesn't need some of this general knowledge that goes around it.
So, the idea of using LLMs for simple code generation just seems a strange direction to take, given the extra power required in the training and, more importantly, in the inference. Because you could make the argument that the Large Language Models have already been trained, so that ship has sailed – that carbon is embodied, right? So, using them for as many different things as possible is another way of looking at it, because then you're not repeating the training cost. But because the inference cost is also higher, it becomes a scale problem.
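A hedged sketch of why the model-size point matters at inference time: a common back-of-envelope approximation for dense transformers is roughly 2 FLOPs per parameter per generated token, so compute (and hence energy) per token scales roughly linearly with parameter count. The 7B and 70B sizes below are illustrative, not measurements of any specific product:

```python
# Rule of thumb: dense transformer inference costs ~2 FLOPs per parameter
# per generated token (a widely used back-of-envelope approximation).
def flops_per_token(n_params: float) -> float:
    return 2 * n_params

small = flops_per_token(7e9)    # e.g. an illustrative 7B-parameter model
large = flops_per_token(70e9)   # e.g. an illustrative 70B-parameter model

# Energy per token tracks compute roughly, so a 10x smaller model is
# roughly a 10x saving on every single query, at whatever request volume.
print(f"large/small compute ratio per token: {large / small:.0f}x")
```

Multiplied by billions of queries a day, that per-token ratio is exactly the scale problem James describes.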
Oliver Cronk
This feels to me like a classic trade-off, 'it depends' type situation, right? Is it worth creating a specific model? Well, it sounds like it depends on how much inference there is – is the inference saving going to wash its face, and so on and so forth. But Hélène, was there anything else in the research that struck you as quite interesting or surprising?
Hélène Sauvé
Yes. The question of small language models compared to large language models totally makes sense, and I like to compare it to commuting to work by plane when it's a short journey – it's absolutely not necessary and it's not efficient at all. Another important element that I took away from the research was questioning the use of AI in the first place, and that AI shouldn't be the default tool.
We need to carefully think about when and how it actually benefits the situation.
Oliver Cronk
So, being more conscientious about its use. Yeah, I really like that. The other thing, James, that I've been looking into – and I think we've talked about this before, when we did the Green Tech Southwest event, which again is something we can put in the notes – is the sort of financial sustainability, right?
And I think the cost of these models is potentially going to be what drives better behaviours, because I just personally can't see that the economics stack up. And so I'm kind of hopeful – and I don't know about you, do you want to give your perspective on this? – that the financial model and the need for cost saving might drive reduced impact, because they'll have to drive for efficiencies.
James Camilleri
Yes, absolutely. At the moment, the cost of AI has been massively subsidised by the investment that's going into it – the billions that are going into it. So, almost everybody's operating at a loss. Now, some of the agentic AI tools I've been using recently have – I naughtily call them V-Bucks – some kind of virtual representation of how much processing power you're using.
And they're used partially for billing, and also to give you an idea of the context size, so that if it gets too big, it encourages you to create a new session so that you don't get the context wrong, where the LLM kind of forgets things that you've already told it. And essentially, I think these are kind of good to see. Obviously, while the cynical bit of my brain goes, 'oh yeah, that's a nice way of making some money by just having this counter go up', it also forces you to actually think about how much you're spending and how much you are expending as far as carbon and water are concerned. If I had one ask of these services, it's that they make the environmental impacts transparent as well.
If they can put together a V-Buck to represent the billing, then I don't see why they can't put together something similar for carbon and water. I do think that companies are also very cost-conscious, and I actually think this extends beyond AI. If you look at cloud usage and data centre usage, generally the more you're spending, the more you're using. It is a rough correlation – you might not be able to compare your spend to someone else's spend, because the rates might be different – but certainly the more you spend, the more processing you are using. So, by actually looking at your usage and trying to lower your costs, a natural extension is that you should normally lower your environmental impact as well. I don't think it'll always be true, but it's a decent rule of thumb.
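A minimal sketch of the cost-as-proxy idea James describes: multiply spend per service category by an emissions intensity factor to get a rough ranking of where to look first. The spend figures and the kg CO2e per pound factors below are invented placeholders for illustration – real intensities vary by provider, region and service, so treat the output as a rule-of-thumb ranking, not a measurement:

```python
# Hypothetical monthly cloud spend per service category (GBP).
SPEND_GBP = {"compute": 12_000, "storage": 3_000, "ai_inference": 9_000}

# Placeholder intensity factors (kg CO2e per GBP spent) - illustrative only.
KG_CO2E_PER_GBP = {"compute": 0.25, "storage": 0.10, "ai_inference": 0.40}

footprint = {svc: spend * KG_CO2E_PER_GBP[svc] for svc, spend in SPEND_GBP.items()}

# Rank services by estimated footprint: optimise the biggest first.
for svc, kg in sorted(footprint.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{svc}: ~{kg:,.0f} kg CO2e/month")
```

As the discussion notes, rates differ between providers, so this only directs attention in roughly the right order; it is not a substitute for measurement.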
Oliver Cronk
Yeah. No, it's a bit like using cost as a proxy. It doesn't always work, but if you don't have any other data, it's not a bad place to start.
You've got something that you can use as a metric, but it's not perfect by any means.
James Camilleri
Especially when it results in less usage – you know, being able to shut down cloud appliances and things that the…
Oliver Cronk
…are unused. Yeah. Or…
James Camilleri
…yeah, exactly. So, again, it's intuitive that by using less, you emit less.
Oliver Cronk
I wanted to talk a bit about the sort of shift that I've also been speaking about recently, which is this move away from big, centralised compute – because you talked earlier, James, about the thinking around training and around inference. Now, clearly, for those that don't know, training AI models is a lot easier if you have a big, centralised compute cluster; it's challenging to try and do that in a distributed setting. But what strikes me is a lot of the stories I'm seeing about Apple in particular – they have some amazing capability on their devices for running AI locally; the M4 and M5 series of chips are particularly good at running AI models.
And so, I suppose I wanted to chat to you both a bit about running AI locally potentially being a step in the right direction. And again, it's a trade-off, right? I'm sure there'll be other things, but what occurs to me in this space is that, as per our work with clients on the Tech Carbon Standard, 50–60% of an organisation's emissions can be from that hardware purchase – typically the end user hardware that they're buying if they are a large organisation. And so, can we find more ways to sweat that asset to get more bang for our carbon buck, as it were?
Wouldn't that make sense? So, wouldn't it be great if we could run some more of the AI inference workload locally, because that's existing hardware we already have? We don't have to build more and more centralised hardware if we can actually leverage hardware that's already going to be on people's desks or in people's hands. And then the other thing on this is the sort of diffuse nature of that power.
And Chris Adams at Green Web Foundation talks a lot about this, that actually part of the problem we have with the large data centres, particularly in Europe and the UK, is getting a new grid connection for a big data centre is nigh-on impossible, right. Itâs, itâs not impossible; it takes many years, though, to get those grid connections. And we talked earlier about water being a sort of tricky resource to sort of manage where we might be taking water and giving it to a data centre, or, or, or agriculture. Thereâs a similar thing here around energy, if we approve a big data centre or two, that might mean we canât approve, , housing development because of the electricity that, there isnât enough to go round, particularly in the southeast of England, the South England.
So, I kind of wondered what your thoughts really are on things like Hugging Face and the potential shift to using open source models. Either of you want to kick off on that one?
HélÚne Sauvé
Just when you were talking about centralised training – part of the research that we did was also around federated learning. So, basically, a move away from centralised learning towards federated learning, which means training AI models on data distributed across millions of remote clients. Which also, in turn, benefits data security.
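HélÚne doesn't go into the mechanics on air, but the approach she describes is commonly implemented as federated averaging: each client trains on its own private data, and only the model weights travel to a central server, which averages them. A minimal simulated sketch – the linear model, client datasets and hyperparameters here are illustrative assumptions, not anything from the episode:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: each "client" holds private data drawn from the
# same underlying relationship y = 3x + noise.
def make_client_data(n):
    x = rng.normal(size=(n, 1))
    y = 3.0 * x + 0.1 * rng.normal(size=(n, 1))
    return x, y

clients = [make_client_data(n) for n in (50, 80, 120)]

def local_train(w, x, y, lr=0.1, steps=20):
    """A few gradient-descent steps on one client's private data."""
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(x)
        w = w - lr * grad
    return w

# Federated averaging: only weights leave the clients, never raw data.
w_global = np.zeros((1, 1))
for _round in range(10):
    local_weights, sizes = [], []
    for x, y in clients:
        local_weights.append(local_train(w_global.copy(), x, y))
        sizes.append(len(x))
    # The server aggregates, weighting each client by its dataset size.
    total = float(sum(sizes))
    w_global = sum(w * (n / total) for w, n in zip(local_weights, sizes))

print(float(w_global[0, 0]))  # should land close to the true slope of 3.0
```

The data-security benefit HélÚne mentions falls out of the structure: the raw `x`, `y` pairs never leave each client; only the small weight array does.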
Oliver Cronk
Yeah, because I guess if you can keep the data where it's stored already, there are fewer concerns about that data being put in places that are more public, like the cloud. And I think there are a whole bunch of other benefits. I'm sure there are also drawbacks as well – certainly, there are more challenges coordinating training when it's federated.
But yeah. I mean, James, what's your take? Do you see a medium- to longer-term future where a lot more AI inference is running on people's devices rather than in the cloud?
James Camilleri
Yeah, I mean, if we look historically at technology, it's an obvious direction for it to move in. So, take computer graphics as a really great example: back in the nineties, if you wanted to produce really high-quality 3D graphics which rendered in real time, you needed something like a Silicon Graphics workstation. This was a huge piece of hardware that movie studios used to run in order to do special effects.
And the idea of doing something like that on a home computer was just Space Age, you know? Whereas then we brought out graphics cards that could produce higher quality output, and the processors got faster so we could do those 3D calculations – 'Doom' came out – and then after that you end up with dedicated 3D cards that just take that on. And then past that, we integrate the 3D capability back into the main graphics card.
So, it becomes one thing again. As time goes on, the industry gets better at packing this stuff down, moving it out of the server and into the home. So, I think it's only a matter of time before we see something like an AI card on the shelf that you can just plug into your PC.
Oliver Cronk
Which I guess is not far away from where we are with GPUs right now. I'm running quite a bit of AI locally, but I'm aware not everyone has a mid-range or high-end gaming computer. That is a bit of a niche, and it's a bit of a privilege, quite frankly, to have that. But I guess my point is it's going to become more widespread – every phone will have that kind of capability, because phones are designed to be efficient. I mean, they're very powerful, right?
Our phones have got more compute power than a lot of mainframes and supercomputers of past eras. But they're also designed to be super-efficient, so they don't drain the battery. So, it's the one area, I suppose, where there has been more focus on efficiency.
I think there's a danger that, as developers, we sometimes trade off developer efficiency against machine efficiency. I think we do that a lot. But in the mobile space, with things like ARM processors and TPUs, there is a general push towards efficiency, because I think people have realised: wow, this workload is one of the most inefficient, power-hungry workloads we've ever created – we'd better get creative about how we run it.
Anyway, we've been chatting quite a bit here, and I wanted to start moving us towards the classic question: is it hype or is it not hype? So, do you think sustainable AI, as it's currently being touted, is hype? And if you think it is hype, do you think it'll always be a pipe dream, or do you see signs that it can be made more sustainable?
And HélÚne, do you want to have a go at that first?
HélÚne Sauvé
Sure. Yeah. So, I've been wondering if sustainable AI is actually possible. And I think, to answer that question, we first need to understand the impacts AI has on the environment and on society.
And we also need to ask ourselves what future we envisage for AI and what role we want AI to play in our lives. I do think that putting profit before people will always lead to unsustainable practices. But I'm also hopeful that the research I've done on different ways of doing AI – one strand being about doing AI frugally – can lead to more sustainable, more mindful ways of doing AI.
Oliver Cronk
So, you are optimistic. But do you think the current way AI is being pitched as sustainable, as kind of green, by some of the big tech companies – do you think that's hype?
HélÚne Sauvé
Yes. So, I don't think AI is currently being developed sustainably – the demand for new data centres at the moment cannot be met in a sustainable way. I wouldn't say I am hopeful, but I know that there are people working towards alternative ways of doing AI. So, it depends on whether we build AI in resource-constrained environments that benefit people, or whether we just carry on building AI for the benefit of the few at the expense of people and the planet.
Oliver Cronk
Yeah – I'll come back to that point about constraints in a minute when I wrap up, because it's a really important point which we've not talked about yet. James, what's your take?
James Camilleri
I do think that there's a lot of hype around this. More specifically, what I look at is the way that it's being reported. I mean, Hugging Face are amazing, because they've got these open source models, they've got a published methodology as to how they measure the environmental impact of them, and they've got a system whereby you can upload a model and test it – which, to me, is the current gold standard.
Contrast that with the private companies, the likes of OpenAI and Grok and things like that. They either try not to tell us anything at all, or when they do, it's soundbites that lack context and method. So, there's no real way to judge how impactful those systems are, because they're not giving us enough information to actually make an informed decision as consumers – which I think is deliberate.
So, that part I definitely think is hype. I think the other strand of this is that you often hear that AI will pay for itself environmentally by somehow saving the planet – that it'll process some workload that will make life better. And that's plausible. But if there's one thing we've learned about LLMs in the last few months, it's that you have to word your requests to them very, very carefully.
So, I think it's all very dependent on somebody actually using this technology in a purposeful way, and I do think there are some real opportunities there. For example, with code generation, we could actually start building in rules that encourage the AI to generate sustainable code. At that point, maybe the savings from more systems being run in a more efficient way might offset the inference cost of generating that code in the first place. But again, without methodology and openness about the costs, how do we make that value choice?
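The "building in rules" idea James raises could be as simple as a standing instruction injected ahead of every code-generation request. A hypothetical sketch – the rule text, the message shape and the `build_messages` helper are all illustrative assumptions, not something from the episode or any specific vendor's API:

```python
# Hypothetical sketch: prepend a standing sustainability rule to every
# code-generation request before it reaches a model. The rule wording and
# the chat-style message shape are illustrative assumptions.
SUSTAINABILITY_RULES = (
    "When generating code, prefer: algorithms with lower time complexity, "
    "streaming over loading whole datasets into memory, batching of I/O "
    "and network calls, and the smallest dependency set that does the job."
)

def build_messages(user_request: str) -> list[dict]:
    """Wrap a user's request with the standing sustainability rule."""
    return [
        {"role": "system", "content": SUSTAINABILITY_RULES},
        {"role": "user", "content": user_request},
    ]

msgs = build_messages("Write a function that deduplicates a 10 GB log file.")
print(msgs[0]["role"], "->", msgs[1]["content"][:30])
```

Whether the resulting code is actually more efficient still needs measuring, of course – which is exactly the methodology gap James goes on to describe.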
Oliver Cronk
And my worry at the moment is: unless you prompt them to say "give me the most sustainable, efficient code", they're just going to give you the average of what's in their training set, right? And even if you do ask them for more efficient code, that's not necessarily going to give you the right answer, because these things are working on word probability rather than truly understanding the concept of what it means to write sustainable code. But anyway, that's a whole other rabbit hole we don't have time to go down.
So, I did want to come back to that point about constraints, because I think it's a really important point that HélÚne was touching on. And again, it's part of the conversations I've been having with people, which is to reframe constraints and regulation. Regulation, and our "lack of abundance" – in air quotes – in Europe and the UK, is often seen as a negative. I actually think it's not. I think it's what keeps us quite conscientious – quite focused on trying to do things efficiently.
We have a massive scientific community in the UK and a big engineering community, and I think we pride ourselves on making things efficient. ARM is a great example of this, right? ARM came out of the UK, and ARM processors are super-efficient designs. So, I think this constraints thing is really interesting. But my take, it won't surprise you to hear, is very similar to the pair of you, right?
I think there is an awful lot of hype around sustainable AI. A particularly bad example I saw – I won't name the vendor; you can go and find it if you want to – claimed that if you move your AI workloads to the cloud, the workload will be greener by some impressively large percentage than if you run it on-premise. And when you dug into it, the on-premise example was just crazy – a really, really bad baseline.
And the cloud just looked amazing because they'd cherry-picked the most low-carbon region and all that sort of stuff. The reality – and other people in the GDSA community have called this out as well – is that it can actually be worse. It's not guaranteed that moving an AI workload, or any workload, to the cloud will make it greener. And I think we've touched on that in previous episodes of the podcast.
Anyway, this is a dangerous topic, because I'm very passionate about it and you are both very knowledgeable about it as well, from the work you've been doing over the last few months and years. So, there's a danger we could go on forever – but unfortunately, we can't, so we'll bring the episode to a close. Before I do, any last points or things that you wanted to mention before we wrap up?
James Camilleri
Yeah, I think the other thing that kind of came out of our studies over the last few weeks is that when you are designing a system – and this really applies to any system, but it seems to be especially important for AI – you need to look at it from different perspectives. So, you need to try to right-size the tool at hand, and you need to try and understand the consequences of using it. That means looking at not just the immediate effects this is going to have on your bank account and on your IT estate, but also at what effects it's going to have upstream and downstream from where you are. How is this going to affect the local area?
How's it going to affect the power grid? There are all these questions, and I'm not saying you need to consider absolutely everything. But just shifting that perspective for a moment, and taking a little while to consider how this impacts end users and their lives, or upstream vendors and what they're supplying, could cause you to make a decision that makes a big difference.
Oliver Cronk
Yeah. The one thing I guess we should probably touch on just briefly before we close is that there are also recommendations we've come up with, right? We'll definitely point to those in the show notes, because whilst we've spent a lot of the episode talking about the challenges, there are also things you can do – and we've touched on some of them: choosing smaller models, thinking more conscientiously about how you use AI, and all that sort of stuff.
So, we'll make sure we include some links to those, so it's balanced. But look, with that, thank you both. That's been a great conversation, and hopefully those of you listening have found it interesting. And yeah, thanks so much.
HélÚne Sauvé
Thank you.
James Camilleri
Thank you.
Oliver Cronk
And so that brings us to the end of this month's episode on AI sustainability. If you've enjoyed this discussion, we'd really appreciate it if you could rate and review Beyond the Hype. It'll help other people tune out the noise and find something of value, just as this podcast aims to do. If you want to tap into more of our thinking on technology delivery approaches and tech sustainability, head on over to our blog at scottlogic.com/blog.
So, it only remains for me to thank HélÚne and James for their insights and to you for listening. Join us again next time as we go Beyond the Hype.



