Managing AI’s Carbon Footprint
What are the immediate impacts of AI on climate?
Artificial Intelligence is on every business leader’s agenda. How do we make sense of the fast-moving new developments in AI over the past year? Azeem Azhar returns to bring clarity to leaders who face a complicated information landscape.
This week, Azeem joins Sasha Luccioni, an AI researcher and climate lead at Hugging Face, to shed light on the environmental footprint and other immediate impacts of AI, and how they compare to more long-term challenges.
They cover:
- The energy consumption and carbon impact of AI models — and how researchers have gone about measuring it.
- The tangible economic and social impacts of AI, and how focusing on existential risks now hurts our chances of addressing the immediate risks of AI deployment.
- How regulation and governance could evolve to address the most pressing questions of the industry.
Further resources:
- Power Hungry Processing: Watts Driving the Cost of AI Deployment? (Alexandra Sasha Luccioni et al., 2023)
- The Open-Source Future of Artificial Intelligence (Exponential View, 2023)
- AI is Dangerous, But Not For the Reasons You Think (TED, Sasha Luccioni, 2023)
AZEEM AZHAR: Hi, I’m Azeem Azhar, founder of Exponential View and your host on the Exponential View podcast. When ChatGPT launched back in November 2022, it became the fastest-growing consumer product ever, and it catapulted artificial intelligence to the top of business priorities. It’s a vivid reminder of the transformative potential of the technology. And like many of you, I’ve woven generative AI into the fabric of my daily work. It’s indispensable for my research and analysis. And I know there’s a sense of urgency out there. In my conversations with industry leaders, the common thread is that urgency. How do they bring clarity to this fast-moving, noisy arena? What is real and what isn’t? What, in short, matters? If you follow my newsletter, Exponential View, you’ll know that we’ve done a lot of work in the past year equipping our members to understand the strengths and limitations of this technology and how it might progress. We’ve helped them understand how they can apply it to their careers and to their teams and what it means for their organizations. And that’s what we’re going to do here on this podcast. Once a week, I’ll bring you a conversation from the frontiers of AI to help you cut through that noise. We record each conversation in depth for 60 to 90 minutes, but you’ll hear the most vital parts distilled for clarity and impact on this podcast. If you want to listen to the full unedited conversations as soon as they’re available, head to exponentialview.co. Today I wanted to chat to Sasha Luccioni. She’s an AI researcher and climate lead at Hugging Face, which is the cutest-named company in the world, with the cutest corporate logo. It’s an open-source platform for machine learning models, and she’s published some really interesting work on the potential climate impacts of AI, in particular generative AI, the technology that’s got everyone so excited during 2023. We’re speaking just as the COP28 climate conference is coming to a close, so it’s great to have Sasha here. How do you feel about COP28?
SASHA LUCCIONI: Well, let’s say I didn’t have the highest expectations of COP28, and I think that, in general, there are so many moving parts and so many actors involved that it’s going to take a while for the dust to settle and for us to really see the results of this meeting of so many different nations.
AZEEM AZHAR: I, like you, sort of felt, well, gosh, this is the 28th time of asking, but I guess it’s also the case that you only fail when you stop trying. So you put out a paper, and that was the trigger for me wanting to speak to you a couple of weeks ago, looking at the climate impacts of AI. And to make sense of that, there’s been a growing understanding that AI is not this dematerialized thing. Yes, it may be a dematerialized algorithm, but it runs on physical, complex hardware that sits at the top of a very complicated supply chain. I think my friend Kate Crawford, who’s been on the podcast before, talks about this very well in her book, The Atlas of AI. And then of course, every time we ask an AI to do something, there is a hot process that runs on some servers that then need to be cooled down, and all of that uses energy. And I think today, correct me if I’m wrong, three to four percent of all electricity goes into data centers, and that proportion is growing at 30 to 40% a year because the demand is so insatiable. So would you say it’s become a really live and pressing issue?
SASHA LUCCIONI: I think that people don’t consider it to be as pressing an issue as it should be. I mean, AI is being deployed in pretty much everything. The thing is, typically when we talk about climate change and greenhouse gases, we tend to think about verticals like transportation and agriculture. We have these very well-defined verticals, and AI doesn’t really fit into any one of them. It’s typically put in the ICT box. But it can be used in a bunch of different verticals, a bunch of different domains. And so I think it’s really hard for policymakers and for the general public to put AI in a specific climate box. And I think that’s part of the issue: first of all, it’s dematerialized, and second of all, we don’t really know where to put it and how to categorize it in our brains.
AZEEM AZHAR: Yeah. Oh, that’s a really good point. I think about the five buckets that we normally see when we look at the sources of emissions, and there isn’t one for as general-purpose a technology as AI. I guess people have looked at this question before, but in your recent paper, I love the punning title, “Watts Driving the Cost of AI Deployment?”, where it’s spelled W-A-T-T-S. A nice little physics pun; we don’t get too many of those. You were really looking at particular tasks and how energy intensive they were, and then, from that, how carbon intensive they were. Just talk us through that, because you do quite a nice like-for-like comparison that allows us to get a sense of what these things entail.
SASHA LUCCIONI: So far, research has mostly focused on the training energy and carbon emissions of AI models. And inference, the actual deployment, has always been harder to get a handle on because it’s often distributed; people have their own AI models running in all sorts of places. Training is relatively well contained: you press start, you press stop. So inference has been overlooked, and people also use different hardware and all sorts of different models and tasks. What we tried to do is essentially paint a broad picture of AI model deployment for different tasks and different types of models, and also to compare models that are specifically trained, fine-tuned, for a single task with these general-purpose “models” that are supposed to do several different tasks. The goal is really to compare these two categories of models, and also to compare the different tasks and show that some tasks are more energy intensive than others.
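A minimal sketch of what that kind of per-task measurement can look like, assuming the open-source codecarbon package and Hugging Face transformers; the task/model pairs and prompts below are invented placeholders, not the paper’s actual benchmark setup:

```python
from codecarbon import EmissionsTracker
from transformers import pipeline

# Hypothetical task/model pairs and prompts, standing in for the
# paper's actual benchmark of ten tasks.
TASKS = [
    ("text-classification", "distilbert-base-uncased-finetuned-sst-2-english"),
    ("text-generation", "gpt2"),
]
PROMPTS = ["The movie was great.", "A cat sat on the"]

for task, model_name in TASKS:
    model = pipeline(task, model=model_name)
    tracker = EmissionsTracker(project_name=f"{task}:{model_name}", log_level="error")
    tracker.start()
    for prompt in PROMPTS:
        model(prompt)  # one inference per prompt
    kg_co2eq = tracker.stop()  # estimated emissions for the batch, in kg CO2eq
    # Per-run energy (kWh) is also written to codecarbon's emissions.csv log.
    print(f"{task}: {kg_co2eq / len(PROMPTS):.2e} kg CO2eq per query")
```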
AZEEM AZHAR: Right. And the important point there is that hundreds of millions of people are using these services, as they already do today. I had this rough moment where I realized more people have access to generative AI tools today than have access to clean sanitation. Wow, right? Because we are not rolling out flushing toilets and clean water to large parts of the world, whereas if you’ve got a smartphone, you’ve got access to these tools, which is pretty remarkable. So inference is being used billions of times every hour right across the globe. And the interesting piece, the headline that everyone grabbed onto, was that when we go and mess around with DALL-E or Midjourney, it uses a substantial amount of electricity; I think it was characterized as fully charging a smartphone. Just help us understand that a little bit.
SASHA LUCCIONI: Right. So what we found is that generative tasks are more energy intensive in general than, for example, discriminative ones. If you have an image and you want to say whether it’s a cat or a dog, that’s a discriminative task: you have a set number of categories to choose from. And if you only have, for example, two categories, it’s a relatively straightforward task that doesn’t require a lot of energy. If you’re doing generation, either image generation or text generation, it uses more energy because, on a fundamental level, you have more choices. Even if you’re generating only text, you can pick any word in the English language, or sometimes multiple languages. So you have more choices than just, for example, two categories. That’s the first thing. And then what we also found is that generating images is more energy intensive than generating text, and these increasingly high-definition models are creating images of very, very good quality. The smartphone charge is a figure given by the EPA in the United States. They have these calculators with standardized equivalency metrics, like emissions per mile driven in a car, and one of those metrics is a smartphone charge. What we found is that generating one high-definition image with Stable Diffusion takes as many kilowatt-hours of energy as the EPA’s figure for a smartphone charge. Now, the high-definition Stable Diffusion model is probably not the one being used by Midjourney, and DALL-E has a completely different architecture, but it just goes to show that the difference between, for example, a text-based task and an image-based task is really significant. So I was hoping that people would focus on those kinds of relative differences, the fact that between the 10 tasks we looked at, there are significant differences.
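For scale, the comparison rests on simple arithmetic. In this sketch, all three figures are rounded, assumed stand-ins rather than the paper’s exact measurements:

```python
# Rounded stand-in figures, not the paper's exact measurements.
EPA_KWH_PER_PHONE_CHARGE = 0.012  # approximate EPA equivalency factor
KWH_PER_HD_IMAGE = 0.012          # assumed energy per high-definition image
KWH_PER_TEXT_QUERY = 0.00005      # assumed energy per short text query

print(f"One image ~ {KWH_PER_HD_IMAGE / EPA_KWH_PER_PHONE_CHARGE:.1f} phone charges")
print(f"One image ~ {KWH_PER_HD_IMAGE / KWH_PER_TEXT_QUERY:.0f} text queries")
```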
AZEEM AZHAR: I love the way that you’ve qualified that, because of course we don’t want to get into a world where we say your use of a particular tool is not valuable, even if you think you want to use it, and my use is valuable. But there is clearly this idea that constructing a funny cartoon, half horse, half penguin, which I did today as a sort of test before we spoke, doesn’t feel as useful as transcribing the conversation between a doctor and a patient from audio into text. You sort of feel that there’s a different level of utility, but we don’t want to get into that world of trading these things off. I love the-
SASHA LUCCIONI: A critique I’ve often heard is, well, how about a human artist creating this image? There’s actually a preprint that tries to do this comparison, but I’m always like, well, first of all, you can’t compare humans and machines. Fundamentally speaking, you can’t take a person’s carbon footprint for a year, divide it by 365, and divide it by the amount of time it takes a graphic artist to make an image. That arithmetic doesn’t make sense; there’s a lot more to human beings than just their jobs. But for a machine, you can actually isolate the energy. And there’s even more to it than that if you want to be more complete. There are things like life cycle analysis, which takes into account, above and beyond the couple of minutes when you use the GPU, the whole life cycle that comes around it, which also has a carbon footprint.
AZEEM AZHAR: And what do you think is the purpose of understanding these types of outputs, the carbon footprint or the energy cost of a process like this? I mean, what do you want it to turn into?
SASHA LUCCIONI: The way it started for me was, I was working on climate-positive AI applications, things like tracking biodiversity. There’s actually a lot of very impactful work that can be done using AI, for example to make better climate predictions. And then someone came to me and said, “Well, have you ever considered the trade-offs? What is the cost of the research that you’re doing? And is there a way of saying, in this case it’s worth it, in this case it’s not?” And at the time, this was like four years ago, there wasn’t. So that’s when I started working on a better understanding of the different factors that influence the environmental impact of AI. We created a tool called CodeCarbon; it runs in parallel to any program, it doesn’t have to be AI, and it gives you the amount of energy used and the carbon emissions. Essentially, since then I’ve been trying to answer the question that person asked me, and to help AI practitioners and policymakers make informed choices. In an ideal world, the way I could see this research being used is: we don’t want to preclude people from using AI just based on environmental impacts, but if you’re using a model that’s deployed 24/7 and used by, I don’t know, 10 million people a day, then maybe there should be some kind of legislation that says, per inference, per query, it should use less than, say, 0.02 kilowatt-hours of energy, something like that. And if you only run a model once a week because you’re doing climate prediction, then it’s okay if it uses more, right? So I think there are different gradations.
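A minimal sketch of how the CodeCarbon package is typically used; the workload and project name here are placeholders:

```python
from codecarbon import EmissionsTracker

def workload() -> int:
    # Stand-in for any program; the tracker is not AI-specific.
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="inference-audit")
tracker.start()
workload()
kg_co2eq = tracker.stop()  # estimated emissions in kg CO2eq
print(f"Estimated emissions: {kg_co2eq:.6f} kg CO2eq")
# Energy consumed (kWh) and hardware details are logged to emissions.csv.
```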
AZEEM AZHAR: The way I thought about it was fuel efficiency guidelines on cars. With petrol cars, the European Union, or European countries, pushed out fuel efficiency standards ahead of the US, and we ended up with much more fuel-efficient cars. And arguably, if you’ve driven a Mercedes or an Audi, it drives better than the typical American car; you ended up with better cars. But until you were measuring that in a standardized way, you didn’t really have a way of arguing for those types of standards or directions or guardrails. So I looked at this and I thought, “Hey, this is kind of interesting.” Because ultimately, if a company wants to offer generative AI and charge the consumer less than it costs them in electricity, that’s their own decision; you won’t build a sustainable business that way. But there could be a role for this turning into standards, and a sort of standards process that even starts to label the tools according to these types of behaviors.
SASHA LUCCIONI: Yeah. I like the analogy of energy star ratings as well. Per load of laundry, you have a relative rating of different appliances, and I think for AI it’s the same. I would actually try to do it per token, per word generated, or per image generated, to make it more standardized. Because it’s true that if you’re comparing a very efficient model to a very inefficient one across different tasks, it doesn’t really make sense; you have to really home in on a specific task and try to define that somehow. Honestly, I would love to see that, and my upcoming work is going to be more around how you actually define ranges for different tasks. Of course things will evolve, and you’ll have different data sets and different models, so it’s going to be hard to have something that stands the test of time. But having some standardized testing would be really great, because when you’re looking at leaderboards, comparing different models and choosing one, accuracy or performance doesn’t tell the whole story. Efficiency should be part of it. If a model has, I don’t know, 99% accuracy, but it’s consuming 10 times more energy than another model that has 97%, maybe that 2% difference is not worth it.
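A toy version of that efficiency-aware ranking, with invented leaderboard numbers mirroring the 99%-versus-97% example above:

```python
# Invented leaderboard entries for one fixed task.
entries = [
    {"model": "model_a", "accuracy": 0.99, "kwh_per_1k_queries": 10.0},
    {"model": "model_b", "accuracy": 0.97, "kwh_per_1k_queries": 1.0},
]

# Rank by accuracy per kWh rather than accuracy alone.
ranked = sorted(entries, key=lambda e: e["accuracy"] / e["kwh_per_1k_queries"], reverse=True)
for e in ranked:
    score = e["accuracy"] / e["kwh_per_1k_queries"]
    print(f"{e['model']}: accuracy={e['accuracy']:.2f}, accuracy per kWh={score:.2f}")
```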
AZEEM AZHAR: Let me confess one thing. I looked at some of the original work on the training of AI models and their climate impact, and it was all framed as: you could build five cars for the amount of energy used to train a particular large language model. Reading your work, I also just sat and wondered, well, electricity is already decarbonizing rapidly, and lots of data centers are 100% solar or wind powered. When I look at my energy work, the general view would be that electricity is a solved problem from a carbon emission standpoint in a way that cement isn’t, right? We know how to solve electricity. And if you look at the amount of solar-based power that’s coming on board, in Europe it grew 40% year-on-year from ’22 to ’23. Does the provenance of the energy actually matter? Because I think what you did was a sort of watts calculation, which is essentially electricity. But if that was happening in a data center powered by solar power, apart from the embodied carbon from the panels, it wouldn’t really have any carbon impact.
SASHA LUCCIONI: There are actually very few, I actually don’t know of any, AI-specific or high-performance computing (HPC) data centers that run on 100% renewable energy. Most of them have some portion of renewable energy. Some providers have on-premise solar panels or some other source, but it’s usually 10 to 30% maximum of the actual consumption. And most of the carbon-neutrality claims in data center press releases are based on either renewable energy credits or power purchase agreements, which is essentially accounting. I mean, it’s offsetting-based, right? But when you look at where data centers are concretely located, and I’m actually doing a study on this, most of those places are in the mid to high range of carbon intensity per kilowatt-hour of electricity. The thing is that it’s quite costly to build a data center somewhere like Quebec, where I live and where there’s hydropower, because since there are no existing ones, you have to build all the infrastructure and all the connections and agree with the government to purchase energy at a certain scale. But if you’re building somewhere like Iowa, which has this data center alley with, I don’t know, dozens of data centers in the same zone, it’s relatively cheap because the infrastructure is there, the cables are there, et cetera. So what we’re seeing is that even in recent years, the same locations keep growing in data center capacity, and it’s quite concentrated: some places in Europe, mostly places in the United States, and less so elsewhere. And with renewable energy there’s also, sadly, the challenge of always delivering enough capacity. For example, if you have solar panels, what happens if everyone wants to train an AI model and it’s cloudy? So there are challenges to be solved with regard to batteries.
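The location effect she describes reduces to one multiplication: emissions equal energy consumed times the grid’s carbon intensity. The figures in this sketch are rough, assumed ballparks used only for illustration:

```python
# Rough ballpark grid intensities (gCO2eq per kWh); illustrative only.
GRID_INTENSITY = {
    "quebec_hydro": 30,
    "us_midwest_mix": 450,
}
WORKLOAD_KWH = 10_000  # assumed energy for a hypothetical training run

for region, g_per_kwh in GRID_INTENSITY.items():
    tonnes_co2eq = WORKLOAD_KWH * g_per_kwh / 1_000_000
    print(f"{region}: {tonnes_co2eq:.1f} tCO2eq")
```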
AZEEM AZHAR: Yeah. But my challenge to that would be that when I look at my solar research and I talk to people as they revise their forecasts upwards, there is a sense that the electricity system will be 100% renewable at some point. The more aggressive people say it could be 20 years; others say it might take a few more decades. But there’s a path to it, which is very different, as I said, from other sectors. And I’ll add another piece to this, Sasha: does work like this actually illuminate the importance of having fully priced energy? In a sense, it’s a pity that it’s falling on you to embed the carbon price in this activity, rather than it being embedded at the point of purchase by whoever’s buying it. In other words, I’d pay more for energy that comes from a fossil fuel system than from a renewable system, because there’s a carbon load on top of it, right? These are structural things further back in the energy value chain that you are having to deal with because they haven’t been dealt with at the point of origination.
SASHA LUCCIONI: I think there’s also a transparency issue; people don’t factor this in. Very few people have the on-premise compute to deploy large-scale AI models, right? There’s probably a handful of companies that really dominate the space. And as a consumer, you’re not really exposed to this information, even if they’ve started to create dashboards with, for example, the carbon intensity. In my experience, every time I want to request a recent GPU, I’m very, very limited to a handful of compute regions that will have a couple of, I don’t know, A100s for me to use. Whereas if I wanted to use something in Montreal, because it’s hydroelectricity, it doesn’t have the high-powered compute I need to train an AI model. And if you multiply that up for training an AI model, there are very few regions that have, what, a thousand GPUs if you wanted to train an LLM. And all of them are in places like Iowa or Oregon; they have concentrations, right?
AZEEM AZHAR: Actually, that’s super interesting. So that constructs the right sort of pressure on the hyperscalers to articulate more authentically where their energy sources are and to make those decisions around renewables. Although, as I understand it, they’re really struggling to find locations now because the electricity demand is so great.
SASHA LUCCIONI: Exactly. And it fluctuates, and you need a constant supply. It’s not even cyclical, because demand is global; you need it 24/7. And if you’re using solar power, it’s going to be hard to have nighttime electricity. You need really big batteries if you want to do that.
AZEEM AZHAR: Yeah, I mean, we’ll get there, right? You gave this great TED Talk, which I enjoyed, on risks, and I really wanted to explore one of the comments you made, because I find it really fascinating. You said, “Focusing on AI’s future existential risks is a distraction from the current very tangible impacts and the work we should be doing right now, or even yesterday, for reducing these impacts.” And we’ve talked about one of those impacts, the climate impact. But I’m really curious about this existential risk question, and I’ll be talking to a few people on all sides of this aisle over the course of the podcast to unpick it. Why do you think that thinking about existential risks is a distraction? Is it because you don’t buy the logic that takes us there, or is it because you think the logic makes some sense, but we are so far away from that point that it’s a bit like worrying about overpopulation on Mars, as somebody once said?
SASHA LUCCIONI: Maybe I’m a bit too pragmatic, but I see a lot of things that are already wrong with AI today: the way that we train these models, the way we source our data, the copyright issues, all these things I talked about in my TED Talk. And I really see ways that we can be addressing those concrete impacts, and that by doing so, we are helping create guardrails for these models, which indirectly will help make sure that they don’t kill everyone, if that’s what people believe. But you can’t really create guardrails for existential risk, because it’s a very nebulous term; first you have to define exactly how you think AI models could wipe us out. A lot of the discussion has been going around that: how do we make sure that we know when we hit AGI? These almost philosophical questions, how do we actually define this stuff? Whereas I see it more simply: let’s focus on what we already know is harmful to populations, things like facial recognition being flawed and leading to false arrests, things a lot of people already know are an issue. And I really believe that while working on these things, we will get a better understanding of these technologies, first of all because generative technologies are new, and we’ll learn more about them and about how to make them safer, et cetera. That would be the first step to take. Existential risk, for me, is more of a philosophical question, and the more we talk about it, the less we’re focusing on the current-day specifics; we’re focusing on philosophical questions that are hard to answer. It’s a lot of back and forth and a lot of debate that’s not necessarily grounded in fact.
AZEEM AZHAR: I mean, it’s the kind of discussion you can have in a bar after the fourth whiskey.
SASHA LUCCIONI: Yeah. And currently what we’re seeing is that it’s taking up a lot of space. I’m not saying we shouldn’t talk about it; I’m just saying that it shouldn’t be the only thing we talk about. For example, the recent AI safety summit was mostly focused on this kind of AGI risk. Since we had all these high-powered people in the same room, it would’ve been really helpful if a good chunk of that summit had focused on, okay, how do we make sure that predictive policing is better regulated? Or how do we make sure that we watermark the outputs of AI models, so that we can prove that this is a deepfake and not a real video of the mayor of London, or whatever that was? Let’s focus on these things in parallel to existential risk.
AZEEM AZHAR: So what I hear you saying is that, in a sense, it’s a process of what we might call praxis, right? This sort of future is unknowable. We can understand from our philosophical games, explorations, and thinking that there are paths that could take us to this existential risk. And I roughly think the structure runs like this: is it possible for there to be non-human intelligence that is more capable than us? If it is possible, is it possible for that to be engineered or evolve somewhere-
SASHA LUCCIONI: Emergent.
AZEEM AZHAR: … or emergent, or evolve somewhere other than this planet? If that is the case, can we guarantee that it would act in our benefit? And if we can’t guarantee that, could it act in ways that are catastrophic to us? You can walk through each of those, and you can say, well, unless you’re literally a dualist and believe that there’s something unique to our biochemistry that gives rise to our intelligence and consciousness, which I think is quite an extreme position to take, it’s quite hard not to walk through that chain of logic. But I guess what you are saying is, maybe that’s the case, but so much of this is unknowable, there are so many leaps of faith at each of those points, that the way we actually construct our ability to understand, and to ask the next more sensible question, is by moving through that frontier of the uncertain. And we can do that in particular because, as you say with facial recognition, improving the parameters around which we use it would surely add to the discipline and the practice of delivering safe systems.
SASHA LUCCIONI: I agree with you, but I would add one specific point. For regulation and legislation in particular, this is a really important time for defining what is acceptable in terms of AI deployment. Should people really be using ChatGPT to provide mental health therapy? And if so, should there be laws that govern that kind of usage? Right now we’re at a really crucial time when these models are already being deployed, and that’s why, for regulation specifically, we should be focusing on the things that are already happening. Of course, research has its place, and discussions have their place. I definitely don’t think people shouldn’t be working on this from an ethical or philosophical perspective. But what I feel particularly frustrated about is that when legislation should have come out yesterday, we’re still talking about existential risk 5 to 10 years out.
AZEEM AZHAR: Right. I can hear that. I’m sympathetic to the perspective that one should think about tail risks; when we don’t think about tail risks, we end up in things like the global financial crisis. But I’m also sympathetic to the idea that we’ve already dealt with systems that we can’t control for a really long time, the global financial crisis being a good example. Or frankly, it’s much harder to control ExxonMobil than it is to control a six-week-old puppy, and those are quite hard to control as well. So there’s, what I would say, a space of validity in a lot of these perspectives. And then the question is, how much of the airtime and how much of the policymakers’ agenda is it reasonable for these to take at a given time? One of the things I often challenge people on, on this particular subject, is I say, “If you look at climate change, there’s a really, really high degree of concordance among the scientists about what the paths look like.”
SASHA LUCCIONI: But currently people are saying that AGI is a bigger risk than climate change. I’ve heard that said several times, and I take offense at that particular statement.
AZEEM AZHAR: But there’s also no concordance among the scientists there. I mean, there is not an agreement among scientists that AGI is an imminent risk. When you look at surveys, Nature did one in September this year, something like 15% of AI scientists said they thought this was the case-
SASHA LUCCIONI: But in terms of airtime and media coverage and legislation discussions and things like that, it’s a lot more than 15%, right? It’s not proportional. It’s more like 80 to 85%, I would say, of the discourse is around existential risk.
AZEEM AZHAR: Absolutely. And I think that has definitely distracted from, among other things, the benefits of the technology. I mean, you talked about the drawbacks of someone using ChatGPT for mental health. But what’s exciting is that lots of people, including psychologists, can now experiment around this, and then you have to move really quickly to put in the regulation.
SASHA LUCCIONI: Right.
AZEEM AZHAR: But as you say, this airtime has been sucked up by the discussion, driven predominantly by the very large companies.
SASHA LUCCIONI: And I honestly see it as a kind of magic trick. Existential risk is this visceral thing that people react to quite strongly. If you talk about, I don’t know, predictive policing or facial recognition, that doesn’t grab people by the heartstrings the way existential risk, and Terminator, and the singularity and all that do. I feel that people tend to focus on that; maybe it’s a cognitive bias, maybe it’s because it’s so very, very big and scary.
AZEEM AZHAR: Or could it be that they think it affects them in a way that they don’t think that predictive policing or facial recognition might affect them?
SASHA LUCCIONI: Maybe. But honestly, we also tend to focus on the latest and greatest of AI, which is, I guess, language models and generative AI. But AI has been used in our societies for over 10 years, I would say 15 years, for all sorts of stuff. And I think people don’t really think about the fact that every time they use Google Maps navigation, that’s AI, and even email is AI. And you only notice when things go wrong. That’s why Joy Buolamwini’s work, where she realized that facial recognition doesn’t work for women of color, was such a big eye-opening moment for her.
AZEEM AZHAR: And was that eight years ago or something?
SASHA LUCCIONI: That was ages ago, yeah. And we’re still using it, and every couple of months, I feel like there’s a new story about facial recognition gone wrong. So for me, it’s a no-brainer. Let’s make sure there’s some systematic testing, and a human in the loop; you can’t just arrest someone based on an algorithm recognizing them. These are really, for me, the basic things we should be regulating right now, given that this kind of work was done eight years ago. And maybe in eight years, when we have a better idea of AI safety, as this field is coming to be called, we can start regulating that side of things.
AZEEM AZHAR: Well, thanks for listening. What you heard was an excerpt of a much longer conversation. To hear the rest of it, go to exponentialview.co. Members of Exponential View and the community get access to the full recording as soon as it is available, and they’re invited to continue the conversation with me and other experts. I do hope you join us. In the meantime, you can follow me on LinkedIn, Threads, and Substack for daily updates. Just search for Azeem, A-Zed-E-E-M, or if you’re in the US and Canada, A-Z-E-E-M. Thanks.