Azeem’s Picks: AI, Accountability, and Power with Meredith Whittaker
Discrimination and bias have influenced the development of artificial intelligence. How can we account for that as we implement AI?
Artificial Intelligence (AI) is on every business leader’s agenda. How do you ensure the AI systems you deploy are harmless and trustworthy? This month, Azeem picks some of his favorite conversations with leading AI safety experts to help you break through the noise.
Today’s pick is Azeem’s conversation with Meredith Whittaker, president of the Signal Foundation. Meredith is a co-founder and chief advisor of the AI Now Institute, an independent research group looking at the social impact of artificial intelligence.
They discuss:
- How discriminatory culture influences the values embedded in AI.
- Whether ethics officers can actually influence firms and their products.
- Why responsible AI design must include recognition of the shadow workforce helping to create it.
AZEEM AZHAR: Hi there. It’s Azeem Azhar, Founder of Exponential View. We are moving into an age of artificial intelligence. These tools of productivity, efficiency, and creativity are coming on in leaps and bounds, even if they remain incomplete and immature today. Implementing AI is becoming a priority for top execs at the largest firms all over the world. Now, one big question is: how do you make sure your AI systems behave ethically and fairly? It’s a huge issue, and it’s one I’ve been exploring since 2015 in my newsletter, Exponential View. And over the years I’ve hosted some of the leading experts on this subject on this very podcast. I know that ethical AI implementation is top of mind for leaders like you. So to help you think through the questions of responsibility, accountability, and power in the context of AI development, I’m bringing back some of my previous conversations over the next five weeks. This week I want to bring back my 2019 conversation with Meredith Whittaker, President of the Signal Foundation. Meredith is a co-founder and chief advisor of the AI Now Institute, an independent research group looking at the social impact of artificial intelligence. Previously, she worked at Google and was one of the instigators of the huge 2018 Google walkout, when 20,000 staff protested against a culture of harassment and discrimination at the company. In this conversation, we reflect briefly on that event and then go deeper into how discrimination creeps into AI systems and what organizations can do about it. We cover the history, the power dynamics, and the accountability practices of AI implementation. Here’s my conversation with Meredith. Meredith, welcome to the Exponential View podcast.
MEREDITH WHITTAKER: Thank you so much. I’m happy to be here.
AZEEM AZHAR: We are recording about a year since the famous Google walkout. Could you take us back to that day? What happened?
MEREDITH WHITTAKER: Well, on that day, I got very little sleep. I woke up at maybe 5:00 AM. I met some of my co-organizers in a park near the Google New York office, and we started to try to make a makeshift stage so that we could give speeches, and realized that that was going to be hard because the park service didn’t allow us to move the tables and chairs. We finally got something set up and we waited. The walkout was timed so that, globally, every office that was participating walked out at 11:00 local time. So we had rolling thunder, starting in Tokyo and Singapore and moving all the way to Mountain View in California. Around 11:00 AM in New York, a few people arrived, and then suddenly hundreds and thousands of people filled the park, filled the streets around the park, just completely took over an intersection on the west side of New York.
And I realized at that moment that this was not the couple hundred people I had hoped for in my most optimistic vision. This was something much bigger. This was evidence that there was a movement here and that people were really looking for new options to change the structures of these companies and to change the way decisions were made and to make sure that those decisions benefited everybody.
AZEEM AZHAR: It was nearly 10% of the staff and the contractors, so it was really an enormous number. Were you being kept informed on your mobile phone and your instant messaging apps, so that you could get a sense of the momentum that was building?
MEREDITH WHITTAKER: Yes, and that still didn’t quite give a clear picture until after the fact. It was a massive day. There was a core team of organizers, of which I was part: eight people who acted as a switchboard. And then there were thousands of organizers across the company, across the world, who were organizing their local offices. It was around that time that it hit me just how monumental having a global labor action of this proportion was, let alone having it at arguably one of the handful of most powerful institutions in the world right now.
AZEEM AZHAR: So, for those who don’t know, what was the mission behind the walkout?
MEREDITH WHITTAKER: The walkout was catalyzed by a whistleblower report in the New York Times that detailed not only a culture of harassment, racism, and discrimination at Google, something that many of us were very well aware of, but also reported on a $90 million payout that was given to Andy Rubin, who is often called the patriarch of Android. And this was given to him even though there were claims of sexual assault against him. So this was the catalyst. We are no longer able to argue about whether there is a problem. The problem is on full display. Now the task at hand is what we do about it. My dear friend and colleague Claire Stapleton floated the idea of a walkout on an internal mailing list called the Mom’s Mailing List, and that caught on, and a number of folks around the company who’d already been engaged in labor organizing around a number of different problems joined in to help make that happen. And ultimately, there were thousands of organizers who just stepped up in a matter of days and pulled it off.
AZEEM AZHAR: Well, help us understand that labor culture within these organizations. Google is only a couple of decades old. It doesn’t have labor unions, but it does have very well-trained, well-paid employees and you also have these internal tools where you can express your ideas and you can organize, and these tools are also, I think in some cases, public to the rest of the company. So give us a flavor for how that operates in this young but very dynamic and powerful firm.
MEREDITH WHITTAKER: There is a long and heady tradition of discourse inside Google. There are mailing lists, there are fora to express ideas, and until about a year ago, it was possible to ask a question directly to executives during a weekly meeting as long as you were in Mountain View. So there’s certainly a tradition of expressing ideas. You can see reflections of this in internet message board culture and the way in which techie cultures often have spaces for discussion online. The PR arm of Google has pointed to this as like, “Look, we have an open culture. We allow extensive freedom for our workers.” What became really clear in the last couple of years is that the ability to say what you think is not the same as having power to change the structures within the company. You can point out inequities, you can point out unethical decision-making, but simply naming them is not enough.
AZEEM AZHAR: Many people, yourself included, have cataloged the issues of misogyny, lack of diversity, and harassment within many of the larger tech companies. Why, as an industry, does it have such a dire track record in these areas?
MEREDITH WHITTAKER: I think there are a number of answers to that, but I would point back to the history of computing itself and note that these stark issues of racism and misogyny weren’t always true in computing. Historians like Mar Hicks and others have documented that for a long time, computing was considered women’s work; it was menial, it was low paid. As it became more prestigious, more well compensated, the gender balance changed and programming became synonymous with a type of genius. If you look at Wall Street in the 80s, if you look at other large sectors that were equally non-diverse, that had equal issues with racism and misogyny, I think you need to look at power and not at the particularities of whatever a given industry or sector does. Whereas 20 years ago, parents would tell their children to become a doctor or a lawyer, now parents are like, “Become an engineer if you want a steady paycheck.” As the field has become more prestigious and more highly compensated, you’ve seen the gender and racial balance become increasingly lopsided. We need to look at the cultures that continually allocate power to primarily men, primarily white folks, at least in the West, and that find different excuses for marginalizing people who don’t fit that mold.
AZEEM AZHAR: That power dynamic does get reflected in the product as well, doesn’t it?
MEREDITH WHITTAKER: Absolutely. And this is something that we at AI Now documented.
AZEEM AZHAR: Sorry, AI Now is your new research institute, yeah.
MEREDITH WHITTAKER: We look at the social implications of AI, so we ask questions about who benefits and who’s harmed. So yeah, absolutely this culture is reflected in the technologies that come out of this industry, and it’s also reflected in the way in which these technologies are used. Who has the power to use these technologies, and on whom are they used? Across the AI industry, there is an increasing pile of examples where we see that these systems embed biased and discriminatory logics. In almost every case, these biases are effectively replicating histories of discrimination, so against women, against Black people, against trans people, et cetera. I have never seen an AI system that is biased against white men as a standalone category.
AZEEM AZHAR: Can you give me an example of an algorithmic system that is discriminating in the way that you’ve described?
MEREDITH WHITTAKER: There have been a number of documented examples, and it seems almost every week there is another one. There was a paper just published by some machine learning researchers at Google that showed that sentiment analysis software, which uses natural language processing to identify hate speech or negative sentiment in text, was consistently flagging discussion about disability and people with disabilities as negative or even violent. You had another report recently that looked at a melanoma detection AI system. These use machine vision technologies to detect whether, say, a mole on someone’s skin is pre-melanoma, whether it should be checked out. Well, it turns out it only works for people with lighter skin, not darker skin. That could be harmful for people with darker skin if this is a technology that is implemented on the front lines of clinical diagnosis.
AZEEM AZHAR: This is why you refer to this as a power issue. It falls back to this question of who is doing the designing, and in the case of the melanoma system, is it that the teams are not diverse? Is it that they believe it’s too expensive to get training data across people with multiple skin colors?
MEREDITH WHITTAKER: Going back to your assessment that this is an issue of power, that’s exactly right. These are issues that are not going to be fixed by simply tweaking an algorithm or, in many cases, even augmenting the data sets that are being used. These are problems that are going to be fixed when the culture that would think it was normal to collect only samples of white skin to train a melanoma detection algorithm changes, when those structures change.
AZEEM AZHAR: Do you need to effect that change through culture change, or can it be done through regulation, then compliance with that regulation, and enforcement if those regulations are not complied with?
MEREDITH WHITTAKER: Regulation is absolutely part of the picture. We needed regulation 10, 20 years ago, and now is a good time to start. The AI Now Institute has suggested just common-sense regulations. If you are using AI technologies to make socially significant decisions, like whether someone should receive a job or whether they should receive benefits or resources, then those technologies should not be protected from scrutiny under trade secrecy. Trade secrecy should be waived so that we are able to examine the mechanisms at work within these systems and some of the claims that are made by the people selling these systems, often to governments and large businesses. We have also recommended that truth-in-advertising laws be applied to these systems and enforced. If you say it can do something, then it needs to do that, or there need to be penalties. I think that doesn’t take away from the fact that these systems are part of our existing culture and social institutions. They are drawing on the history and the present, and thinking of them as somehow more neutral or objective or separate from those systems fails to take that into account and will lead to us not addressing the complexity of the problem we’re seeing.
AZEEM AZHAR: Yes, this idea of the neutrality of technology is a convenient fallacy, and it seems to me that the history of Silicon Valley over the last 40 years or so has really been to drive this false notion that the technologies are neutral, that they’re hard for ordinary people to understand, inspect, or query, and that they’re necessary for us to get beneficial outcomes in society.
MEREDITH WHITTAKER: We have been under the mistaken assumption that technical innovation, as defined by increased revenue from Silicon Valley companies, equals a type of social and economic progress, and that to regulate Silicon Valley, to interfere in the extraordinarily complex workings of these systems and the people who build them, would be in effect to arrest progress. What’s important about these systems is where they’re being used, and in what context. Who is benefiting? Who is being harmed? How are they being tested? How are they being integrated into our core social institutions, and what are the effects of that integration? And for some reason, these questions have been left by the wayside, considered marginal afterthoughts, as if you could hire one or another trained ethicist for one day a week to give seminars to engineers and thereby answer those extraordinarily complex questions. But what’s really important, the thinking goes, is to continue to drive the pace of so-called innovation forward and not hinder that. I think one thing we need to do is also look at how the narrative of AI itself is playing into this, so just a quick detour to look at the history of AI as a discipline. When I talk to a lot of people who aren’t in the field, they’re often surprised to learn that it’s not particularly new, that it’s over 60 years old and has been ongoing for a long time, because their introduction to artificial intelligence beyond Skynet and Terminator has come within the last decade. And I think we have to look at what resources are required to create AI. Why did it suddenly ascend to be the center of everything? If you look at it from that perspective and begin asking those questions, you get back to the commercialization of internet technologies in the nineties, which led to today, where five companies have the resources that are needed to create AI. They have the infrastructure that is already designed to collect and process large amounts of data. They have the market reach to continually collect and process that data, which is not something that is easy to get. And they’re able to pay the highly trained technical talent to create these systems. So a lot of the algorithms that are being used now are not new. They’re decades old. What is new is this concentration of resources in the hands of a few actors. I see AI as the way that these companies are answering the question: how do we continually expand and grow our revenue, given that advertising and search are not going to allow us to continue that increased growth forever?
AZEEM AZHAR: There’s a lot in there. Unpacking a little bit of this: 91% of all PhDs in AI, according to a report from Element AI, are employed by five American tech firms. And to your point about how well treated they are, I note that the median salary at Goldman Sachs, which is not the most unprofitable company in the world, is $135,000 a year. But the median salary at Alphabet, which is Google’s parent company, is close to $200,000, and at Facebook it’s $240,000 a year. So fully half of the employees at Facebook are making essentially a quarter of a million bucks or more per annum. And what I understand, and perhaps you have better evidence for this than I do, is that if you’re an AI star, you’re most likely going to be north of that.
MEREDITH WHITTAKER: Yeah, absolutely. Because the commodification of AI, the interest in AI at these companies, is fairly recent, there hasn’t been time to get a new crop of PhDs interested in writing elegant linear algebra into the workforce. So there are 3,000 to 4,000 folks with this training, that’s a guesstimate, and there are a number of outliers toward the top, where these companies will often hire AI talent almost without having a job for them, just to make sure that another company doesn’t employ them. And again, you can’t just spin up massive data centers and massive data stores. There are only five companies that have these infrastructures, and they represent an AI monopoly. If you look at the AI industry outside of that, a crop of startups: first off, most of those are vying to be acquired by the big companies. You start a startup hoping you get bought. And second, all of them are licensing their infrastructure, at least in the Western context, from Amazon, Microsoft, or Google, in that order. You don’t run your own data centers or compute, and oftentimes you’re buying data and even licensing AI APIs and repackaging them from these same companies.
AZEEM AZHAR: There are a number of these flywheels, which I think are being described as a rich-get-richer effect. So you acquire the talent, and that means that you attract more talent that the other competitors don’t have. When you lease out your computing infrastructure, as Amazon does through AWS or Google through GCP, you get to see emerging use cases, the ways that clever founders might be using infrastructure for novel applications, and that’s market insight that you then have that other people don’t. And of course, the data network effects that you get by acquiring the data create such a moat. I found it fascinating that in the week we’re recording this, there were two stories about Google acquiring health data by the tens of millions and thinking about launching financial services products, a checking account that had no fees attached to it. And of course, health and finance data are two of the most valuable types of data that currently don’t flow as well as they might through the infrastructure of these firms.
MEREDITH WHITTAKER: I think this also shows the ambitions of these companies to effectively provide the infrastructure for every part of our lives and institutions. One of the things that was really striking to me about the health data dust-up this week was that people weren’t clear whether this was standard or not, and I think that gets back to the way in which a lot of these things are happening in obscurity. What was clear was that patients were not informed that their incredibly sensitive data was being transferred to Google. What was clear was that a lot of people were extremely uncomfortable with it. What wasn’t clear was whether that was in or outside the bounds of the law. What that shows us is we have this massive accountability gap, right?
AZEEM AZHAR: Right.
MEREDITH WHITTAKER: We know that this happened because of a whistleblower. Someone was uncomfortable enough to come out and say, “This doesn’t feel okay. The public needs to know.” But we don’t know what else we don’t know because most of those contracts are confidential. The fact that that contract even exists is often itself confidential, and that means that we can’t trace the deployment of these technologies through our social institutions. It’s very difficult to determine whether there’s bias or harm or exploitation that is being optimized through these technical systems because we’re left in the dark about even the most basic relationships between big tech and other large institutions.
AZEEM AZHAR: We’ve recently started to see companies invest in AI ethics training and invest in ethics officers. Is that a sensible thing for them to do or is that just an attempt to signal to the market that they kind of care about these issues?
MEREDITH WHITTAKER: Whether it’s sensible or not is not clear now. There’s a lot of PR and fanfare that goes into pointing to these new positions and these ethical commitments, but these commitments don’t have teeth. They’re not backed by oversight. I do think in some senses it is effectively providing air cover for business as usual. Take the example of Microsoft and AnyVision, an Israeli startup they invested in that is effectively providing real-time tracking and surveillance of Palestinian residents. It is pretty clear that the facial recognition system AnyVision is providing, which turns public space into panoptic surveillance, is violating democratic rights and violating Microsoft’s six principles for facial recognition. And yet whatever ethics process they have did not catch that.
AZEEM AZHAR: This is an interesting observation about where this function should actually take place and how it should manifest itself. In the financial services industry, the rules are laid out by regulators, and then the banks themselves don’t make the decision about whether or not something is ethical. The compliance departments essentially ensure the application of those regulations, and even then, we have problems in the financial services industry. The way the tech industry has generally approached it has been a rather self-regulatory standard, where they’ll say, “Well, listen, we have an ethics team, we’ve done ethics training, and someone signs off on it.” But there’s no external judge, and that seems to be an essential component of having due process.
MEREDITH WHITTAKER: Oh, I absolutely agree. This cannot be a substitute for regulation. It cannot be a substitute for democratic oversight, and frankly, we need to recognize that people who are ensconced in conference rooms in San Francisco or Mountain View are not in a position to judge the ethics of the use of technology halfway around the globe.
AZEEM AZHAR: One of the things that might surprise people who don’t work in the tech industry is that it’s really a two-tier workforce. We have the prize ponies on their $200,000, $300,000-a-year salaries who are full-time employees, and then there’s a very large group of people working in technology who are contractors and who don’t have the same terms and protections. How has that come about, and is there a problem with it?
MEREDITH WHITTAKER: There is absolutely a problem with it, and it’s something we’re seeing in tech, but also across the US labor force: there is this loophole in labor law where there are two kinds of workers. This is something that is happening across the industry, and in a number of cases it appears to be extremely illegal, because the law that allows contract workers allows it in cases where those workers are not core to the business. You would hire contractors for, say, janitorial work, but you wouldn’t hire contractors for engineering work if you’re an engineering company. Now, I think we can contest that: people who do janitorial work are core to the business and should be paid and given benefits and treated as full workers. But even within that division that is laid out in the law, you see across these companies people who are contractors working alongside full-time workers on engineering teams, on product teams, on design teams, companies just filling the gaps with contractors who are cheaper and who can often be exploited more readily than somebody who has more of the protections of labor.
AZEEM AZHAR: What’s the dynamic of being a contract worker rather than a full-time worker, say, if you’re working on an engineering team?
MEREDITH WHITTAKER: People often apply for these jobs thinking that there is a path to full-time employment, but you see an incredibly troubling dynamic that works along racial and gender lines. When you look at who makes up the contract workforce, you have a much larger percentage of people of color and of women than you do in the full-time workforce. I do want to mention that beyond the product development and engineering workforce, there is a huge amount of precarious labor that is required to create AI to begin with. You have a huge number of click workers who label the data that is used to train AI systems. AI doesn’t work without this type of labeled data, and so this is an irreducible workforce, one that often works outside of the US, in India and Pakistan and other places, making extremely low wages doing this work. You also have content moderators, the people who have to clean up the mess where algorithms won’t suffice, and researchers like Sarah Roberts have looked closely at the lives of these workers. There’s often a significant amount of trauma involved in that work. At Amazon, we have these warehouses where the working conditions are just dire. You are seeing people managed by algorithmic systems, given impossible performance metrics they have to meet, who are subject to chronic injury and chronic stress because these working conditions are so dire. We have to look beyond the shiny office parks to recognize the dependencies of these companies and the way in which this exploitation is actually built into the production of these systems.
AZEEM AZHAR: Now, we could wait for the law to catch up, but the law is, in some sense, hostage to the politics, and that’s pretty volatile in different parts of the world. Is there some mechanism by which these internal systems change through the activism that you were part of triggering a year or so ago?
MEREDITH WHITTAKER: Worker organizing is critical here. And if you look at history, it’s rare that labor laws change without significant worker organizing. It’s critical both for changing the working conditions within these companies and the exploitative pipelines these companies rely on. It’s also critical to changing the development of harmful, biased, and exploitative technical systems that are being used outside of these companies, like AI-enabled weaponry, or facial recognition that is being used by racist and biased police forces. There are a number of workers within these companies who are effectively saying, “I don’t want to be exploited at work, and I don’t want to contribute to the exploitation of people outside of these walls.” I’m really grateful they’re there, because at this point I don’t see many other levers that will be capable, in the time we have, of checking some of these problems within the tech industry.
AZEEM AZHAR: Can we think about some of the AI applications that you have come across that get you excited? You’ve got a fantastic foundation from your time at Google and obviously now through AI Now. What are the types of things that are coming through that make you think that this is a technology that could be deeply beneficial for human wellbeing?
MEREDITH WHITTAKER: I’m going to give you a disappointing answer to this, because while I can see that there are potential positive applications of these systems, to me it’s less about the technology than about how we build the guardrails that make sure the technology is used safely. How do we make sure that, say, image recognition technology that is used to survey some area after a natural disaster is only used for that purpose and isn’t also used for military applications? These get back to core issues about politics and power, and have less to do with the potential use of these technologies than with their probable use, given who’s going to use them, who’s going to profit from them, and how that might play out.
AZEEM AZHAR: Meredith, it’s important work that you are doing at AI Now Institute. Thanks again for your time.
MEREDITH WHITTAKER: It was wonderful. Thank you.
AZEEM AZHAR: Well, thank you for listening. If you found this conversation with Meredith valuable, I recommend you listen to last week’s conversation with Laetitia Vitaud in which we discussed the new world of work. I’m Azeem Azhar and this podcast was produced by Fred Casella and Marija Gavrilov. The audio engineer was Max Miller. The audio editor was Bojan Sabioncello. The researchers were Elise Thomas and Florian Dangle. Exponential View is a production of E to the Pi I Plus One, Limited.