WASHINGTON AI NETWORK PODCAST TRANSCRIPT
Ep. 21
Host: Tammy Haddad
Guest: Madhumita Murgia, Financial Times AI editor and author of the new book, Code Dependent: Living in the Shadow of AI
June 28, 2024
WASHINGTON, DC
Tammy Haddad: [00:00:00] Welcome to the Washington AI Network podcast. I'm Tammy Haddad, the founder, and today we meet the Financial Times AI editor and the author of the new book, Code Dependent: Living in the Shadow of AI, Madhumita Murgia, in which she writes about how people are affected by AI. She has done scientific work herself in biology, working on an AIDS vaccine. We'll ask her about that. She's been covering tech for Europe, but today, as the Financial Times AI editor, she's covering the world. In this book, she reports how people's lives have changed and will change in the blazing hot world of AI innovation. We'll talk to her about the book, what she's hearing from the innovators, researchers, and civil society leaders. And of course, we'll ask her about regulation. First, welcome to the Washington AI Network podcast. Thank you for coming.
Madhumita Murgia: Thank you for having me.
Tammy Haddad: Thank you for crossing the pond for this great [00:01:00] book. And as I said, you are talking about people and how they're affected. Congratulations on focusing on people and AI. You are the only one doing that. How did you come to this?
Madhumita Murgia: I've been writing about AI for 10 years, and I started doing it at a time when it was this sort of maverick fringe science, really, that a few crazy people were trying to do. I remember writing stories about chips being implanted into the brains of paralyzed people that allowed them to move a robotic arm just by thinking about it. We didn't really call it AI at the time, but that's what it was: algorithms that were able to do these seemingly magical things. But over the decade that I wrote about it, as I grew up from being a baby journalist, the technology grew up as well. It became far more sophisticated, but also embedded in our day-to-day lives in a really widespread and yet very insidious way. So I knew that this was the case, that this wasn't just an advanced technology that was [00:02:00] being tested in Silicon Valley. I knew it had reached all these places outside of California, and I felt that the missing link was stories of people, ordinary people like you or me, and to tell through their stories how AI is changing our lives today, not at some amorphous point in the future, but today, already. So I decided to go looking, and the one thing I really wanted was for it to be a global book. So many of the stories we tell about AI are basically in the West, mostly American or maybe the UK at most, but I wanted to cast the net wider. And so this book has ended up traveling to nine countries, including Argentina and Kenya and elsewhere. I went looking for people who'd collided with these AI systems. And it all actually started here in D.C. with an American researcher called Karl Ricanek, who developed some of the early facial recognition systems for the U.S. Navy, but has come to a point now where he has this ethical dilemma [00:03:00] because he feels he's invented something that's harming his own community. I found that sort of dilemma really fascinating, and I wanted to explore the gray areas of AI, not just is it good or is it bad, but everything in between.
Tammy Haddad: And what did he tell you?
Madhumita Murgia: When he started out, he, like all technologists and engineers, was innately optimistic, and he believed he was going to build a technology that was going to save lives. He wanted it to catch criminals, find missing children.
In the early days, he explained, it was just so inaccurate, and there were all these flaws in the technology. It's not as good at recognizing women as it is men, and it also struggles with people of color compared to Caucasian faces. He knew all of these limitations of the technology, but it was just experimental, so he never really considered the real-world impacts of it. But fast-forward 10 or 15 years, and this technology is being used by police, by shops and bars, to track people all over the world, and he suddenly [00:04:00] realized that it is having an impact. It still has those same flaws and limitations that he knows are inherent, but it's being rolled out on the streets, onto citizens, right? And so you're seeing the effects of that. He was seeing the effects of mistaken identities, where the wrong people were caught, and he became really worried that, in the heat of the moment, using this technology to identify someone as a criminal could result in people being hurt.
Tammy Haddad: You tell that story in other chapters of the book as well.
Madhumita Murgia: Exactly. I think he's now having a moral reckoning and thinking about how this technology should be regulated. He has really turned his perspective around, from being so optimistic about it to saying, actually, I do see how this can harm people and why we need regulations and lines drawn around AI technologies. So that's where it started. And then I went looking for other examples of AI systems around the world.
Tammy Haddad: I would like you to spend time talking about AI in medicine, because I think it's the most [00:05:00] critical area where there's opportunity but also hazard.
Madhumita Murgia: It's so hard to jump between these, because AI has become this umbrella term for so many different technologies. But really, it's ultimately a software system that can look at a huge amount of data and find patterns in it. With ChatGPT, it takes the form of words: it can predict patterns of words, right? But equally, it works in healthcare, where you can look at millions of examples of chest x-rays and find a disease that you're looking for, be it tuberculosis or pneumonia or COVID. Similarly with cancers, too. You can train these AI systems on scans, and they learn to look for tumors really early on. They're better than pretty much most doctors, most radiologists, at diagnosing things like breast cancer now, and doing it early, so you can save lives. So for me, that is one of the most promising and exciting applications of AI. In my chapter on [00:06:00] healthcare, I actually go to rural India and spend time with a doctor there who is training and implementing AI that helps to diagnose TB amongst her tribal population of patients. She's a great example of how you need the human doctor who cares for their patients, but you can augment that, or even use it where there are no humans, to save lives and to change healthcare access. And that's not just true of rural India. It's true of parts of the US, where you have communities that can't access healthcare. It's true of the NHS in the UK, where we're now struggling with a lack of doctors, of radiologists. I think that this will be the first place where we can really see how AI is augmenting what we can do with what we have today.
Tammy Haddad: Van Jones, who's an African American activist in the United States, talks about those who are left behind, those who say, oh, that's not for us.
For the first time, we'll have [00:07:00] complete access to all of these opportunities. But you're really talking about what's actually happening today, which is that some people get this and other people are not going to have these services, right?
Madhumita Murgia: Yeah, well, this book has really been a sort of exploration of what it looks like today when AI is rolled out. I went in very agnostic, right? I didn't go in with a hypothesis that this is bad, or pessimistic. I actually went in very optimistic about what it can do and how it's going to improve society. The reality was, it kept falling over. Take the example in Amsterdam, where you have an AI system predicting whether a child will go on to commit a crime. Good intentions: it was not meant to put these kids in jail; it was supposed to go in and help these families. In practice, you were basically putting a mark on these young boys' backs, breaking apart families and blaming the parents, creating huge social instability. And examples [00:08:00] like this come up again and again, where you go in with good intentions of giving somebody better access to public services or improving their access to education or whatever it is. But because of how AI is implemented today, which is without putting the human at the center of it, but essentially thinking it can solve any problem, it's actually harming people rather than helping them. And those it harms are the most vulnerable among us. Again, without planning it, the people that I talk about are single mothers, migrants, black and brown people, Uyghur Muslims. These are all already marginalized and vulnerable populations, and their lives are made much worse by how AI is implemented. So for me, it was a wake-up call: how can we change this? How can we take a good technology, a powerful one, and make it work for the benefit of everyone, not at the expense of those who need it the most?
Tammy Haddad: And have you found any government or any institution, anyone or any group, [00:09:00] that is actually doing that?
Madhumita Murgia: So with this book, I was like, I'm not going to go to Silicon Valley. I'm staying away from there. This is not a book about those people, who are fascinating and clever and are changing our world, and who get just reams of ink spilled on them.
Tammy Haddad: Right, right.
Madhumita Murgia: And I'm talking about the founders and the entrepreneurs. I wanted to look at the other end of the chain, at the rest of us who are being impacted by it. But you can't escape.
Tammy Haddad: But what about government?
Madhumita Murgia: You can't escape the fact that there are corporations that have power, and there have to be other third-party public representatives that help to limit that power. So even though this was a book about ordinary people, it became about power and how you keep it in check. And that's where governments come in. Throughout this book, what I found is that the few corporations that create AI do hold just an inordinate amount of the data, the investment, the knowledge, [00:10:00] right? And it means that governments feel locked out, because they feel like they don't understand how it works, or it's changing too quickly. But I think that's a mistake, because whether the technology changes or not, we can still measure the impacts of it. We can look at the harms. We can look at existing laws and regulations that say you can't discriminate on the basis of gender and race and so on. And you can draw a line in the sand. And for a long time, governments didn't do that with social media, right?
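Murgia's point that regulators can measure impacts without understanding a model's internals is worth making concrete: an outcome audit needs only a system's decisions and the ground truth, not its code. A minimal sketch in Python, with invented records standing in for a real evaluation log (nothing here comes from the book or any actual system):

    # Toy outcome audit of a face-matching system, in the spirit of the
    # "measure the impacts" argument above. All records are invented for
    # illustration; a real audit would use logged decisions and verified
    # ground truth.
    from collections import defaultdict

    # Each record: (demographic_group, system_said_match, truly_a_match)
    records = [
        ("group_a", True, True), ("group_a", True, False),
        ("group_a", False, False), ("group_a", False, False),
        ("group_b", True, False), ("group_b", True, False),
        ("group_b", False, False),
    ]

    counts = defaultdict(lambda: {"false_pos": 0, "non_matches": 0})
    for group, said_match, is_match in records:
        if not is_match:  # only true non-matches can produce false accusations
            counts[group]["non_matches"] += 1
            if said_match:
                counts[group]["false_pos"] += 1

    rates = {g: c["false_pos"] / c["non_matches"] for g, c in counts.items()}
    for group, rate in sorted(rates.items()):
        print(f"{group}: false positive rate = {rate:.0%}")

    # A gap between groups is measurable evidence of disparate impact,
    # whatever is going on inside the model.
    print("disparity ratio:", round(max(rates.values()) / min(rates.values()), 2))

The same comparison works for any scored decision system, which is why existing anti-discrimination law can apply even when the model itself is a black box.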
Madhumita Murgia: You've had the whole growth of the internet economy, the app economy, and social media without any regulation, because again, governments came to it too late.
Tammy Haddad: You think they see it now?
Madhumita Murgia: I do think that they see it. I think that the U.S. is very aware. You've already seen the White House put out an executive order that looks very practically at the impacts of AI today, rather than some sort of futuristic Terminator scenario where it destroys humanity. They're looking really at the kinds of things I report in my book. This is the same in the [00:11:00] UK, and the EU has an AI act, already passed, that also looks very specifically at areas like healthcare and surveillance. So I think governments are much more alive to the threat of this. Now you're already seeing antitrust, not so much action, but definitely investigations in this area, led by the FTC, the DOJ, and the CMA in the UK. They're looking at the impact of the partnerships between these companies, how they concentrate power, and therefore how they will impact the human on the other end. Governments will have a role to play, in a smart way that doesn't choke innovation, which is obviously what technologists always fear and talk about. I think we all understand what the potential benefits could be, including governments, and they don't want to cut that off. But this is the time to come up with a smart way to regulate.
Tammy Haddad: And what do you think about the European Union's new AI policies? Do you think they're going to make a big difference? Because here in the U.S., frankly, people are like, no, we [00:12:00] won't do that. Everything is hand-wringing, and this is really bad, and it's too much, and it's going to stop innovation.
Madhumita Murgia: That's generally the view towards regulation, right? GDPR as well; there was a lot of hand-wringing there. Eventually it came out, and now there are rules around what you can do with people's data. You can't collect people's health data and sell it, and the rules mean something. Not that it's the best law, but at least it's a starting point, a blueprint. That's what the AI Act is too. From a regulatory perspective, it at least shows what you could do as a law. And of course, the U.S. can decide; they can pick and choose. There was a lot of lobbying from the companies, but they did spend time on it, years on it. This started prior to ChatGPT, and they brought in experts from around the world. So I think it's a good starting point. It's not a perfect law. Companies, if they want to operate in Europe, will have to toe the line with it.
Tammy Haddad: They are, right? They don't have a choice. One of your chapters is called "Your Boss." Tell us about that.
Madhumita Murgia: This chapter is about [00:13:00] work, and gig work in particular. I thought it was really fascinating how not only is an AI system making a decision that a human would otherwise make, it actually takes on the role of your boss, but a faceless boss, one that you can't complain to, one whose door you can't knock on to say, that's not fair. Just this invisible, Kafkaesque system, where you'd never understand why it's handing out these decisions. And so I wanted to find somebody who'd explored that idea. And I came across Armin Samii, who's this amazing combination of being both a Berkeley-trained computer scientist who's worked in tech, and also, for a short period, an Uber Eats driver in Pittsburgh.
And because of his unique combination of skills, he was able to see what it meant to be locked in this opaque system, and what it meant that it was controlling what work he did and what wages he got. And he just found that really frustrating. It's a fascinating story. He realized one day that he was being underpaid, because he was [00:14:00] sure he'd gone a longer distance, and all you're told by Uber is what distance you've traveled. There's no equation that helps you calculate your wage. They can pay you whatever they like today; tomorrow, they can change it, and they do, based on data. But what he managed to do is reverse-engineer his receipts to really find out how far he had traveled, compared to what Uber told him he had traveled, and he showed there was a disparity there. So he had been underpaid, and he put this tool out into the world for other Uber Eats drivers and couriers to use, and thousands of people around the world discovered that they had been underpaid, with no explanation for it. That story really shows why you need to have rules. I'm not particular about what those rules specifically look like, but I think we can all agree that there needs to be some accountability. If I'm paid something today and something else the next month, I should be able to ask somebody why, and whether I've been underpaid or not. This is something that's not available to millions of gig workers, because all they get told is, this is [00:15:00] what the algorithm says. And I think a really interesting phenomenon has come out of gig work, because it's such a global thing now. It started out in the U.S. with Uber, but it's now spread across Europe, Asia, Africa. I talked to Uber drivers in Nairobi, in Buenos Aires, here in D.C., people who'd come from Ghana. It's a global industry now. And in Asia, you have the whole rise of apps that do food delivery and so on, in Indonesia and India. But an interesting thing that's come out of this disempowerment is unionization. When Uber started out, it was like, oh, you guys are self-employed; we don't employ you; you just work for yourself. It was all about flexibility. But I think there's now been a realization that that's not the case. The companies, the algorithms, collect your data and make decisions like any employer would. And so you're seeing collective action, people coming together, and unions fighting for transparency in this relationship, as you would get in [00:16:00] any employer-employee relationship. That, for me, was a really optimistic thing to see: humans coming together and fighting the machine.
Tammy Haddad: They're fighting the machine. But then that also makes companies fight the unions back, which is an even larger disruption, right? People don't feel like they're in control of their lives. And then they think sometimes the companies are undercutting them, or, what is the government doing? Why are they taking this part of it? And all the institutions are being questioned, which I think causes massive disruption. I'm going to totally change the conversation to elections, because we have had these global elections. With your AI editor hat on, what have we learned, if anything? Are there any rules that could be put in place, while we're going through these global elections, that'll protect citizens and the elections themselves?
Madhumita Murgia: It's a really hard one, because essentially you can't control the flow of information in a democracy to that level.
You [00:17:00] can't cross the line over into censorship, so you can't just suspend the algorithms for a day or anything like that. But there is this threat of deepfakes. If you see what the images and the videos look like today, they are so realistic that you often can't distinguish with the naked eye if that really is Modi or Biden saying something, or whether it's been manipulated. And I think even the existence of doubt, as you say, sows chaos and mistrust in institutions, because the very fact that it could be fake means that people can use that as an excuse. And that did happen in India, where people were saying about certain videos, oh, that's a deepfake, when it was real. Nobody knows what the truth is, and we're not on the same page, and that, I think, creates more divisiveness than having a common truth we can all agree on. There are technical solutions: if things are AI-generated, companies can help to identify the provenance of that content and say, this is AI, this is not real. And I think that is their responsibility. They've created the technology that allows you to [00:18:00] make this stuff, so we should at least be able to tell what's AI and what isn't. And I think that's a good first step.
Tammy Haddad: Well, OpenAI has come out and said that they are going to go after anyone that's using their synthetic material, and they've named a variety of countries that are violating all the way along the line. Do you think that's enough? No one else has said that. And do you think they're strong enough to do that?
Madhumita Murgia: This is not the first time, you know, that you've had technology that's able to disrupt the democratic process. We've had this with social media.
Tammy Haddad: Isn't this the first company, OpenAI, that's actually stood up and said, no, we're going to do everything?
Madhumita Murgia: They've been very closely monitoring how people are using their AI-generated content. And in fact, recently they said they were surprised that it hadn't been as much of an issue, particularly in some of the elections that have occurred already, like India. So I don't think OpenAI are the only ones who've come out and said it. Twitter for years has been reporting.
Tammy Haddad: Has been. Yeah. Not anymore.
Madhumita Murgia: Yeah, [00:19:00] obviously they've got new leadership and change, so I don't know how they will approach it now. But it's good that OpenAI has joined the rest of the tech platforms in saying, we will also take a stand here. I'm not sure it's enough to stop it. I will say that even identifying this kind of material, and saying we will continue to do it, is a great first step, but we need to keep on top of it. It's not good enough to say, oh, Meta says it's fine, nothing to see here.
Tammy Haddad: That's the fear: that everyone's just passing the buck, and that it's going straight down the line, as opposed to, this is what you need to focus on. The U.S. government has really focused, for our election, on working with the secretaries of state who run elections, to try to move that forward. But as we're watching these world elections go along, it sounds like you've seen some results, but not really any grand plan to protect the rest of those elections in the democracies.
Madhumita Murgia: No, but I think it can't just be tech companies who we decide are responsible for it, right? They [00:20:00] can create technical solutions, sure. But I think the big, big job is to educate people.
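The provenance tools Murgia describes generally work by attaching signed metadata to a file at generation time, so anyone can later check whether an "AI-generated" label is intact. A stripped-down sketch of the principle, using a shared HMAC key for brevity; real standards such as C2PA use certificate-based public-key signatures, and the generator name and key below are invented:

    # Toy content-provenance scheme: the generator signs a manifest declaring
    # the file AI-generated; verification detects any tampering with the file
    # or the label. Illustrative only -- real provenance standards use
    # public-key certificates, not the shared secret used here.
    import hashlib
    import hmac
    import json

    SIGNING_KEY = b"demo-key-not-for-production"  # invented for this sketch

    def make_manifest(media: bytes) -> dict:
        manifest = {
            "generator": "example-image-model",  # hypothetical tool name
            "ai_generated": True,
            "content_sha256": hashlib.sha256(media).hexdigest(),
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return manifest

    def verify(media: bytes, manifest: dict) -> bool:
        claimed = {k: v for k, v in manifest.items() if k != "signature"}
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        signature_ok = hmac.compare_digest(manifest["signature"], expected)
        content_ok = claimed["content_sha256"] == hashlib.sha256(media).hexdigest()
        return signature_ok and content_ok

    image = b"...stand-in for image bytes..."
    manifest = make_manifest(image)
    print(verify(image, manifest))         # True: label intact and verifiable
    print(verify(image + b"!", manifest))  # False: content was altered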
Madhumita Murgia: Until quite recently, my mom was saying, "Oh, I saw Elon Musk talking about how you can make money from home doing this really easy day trading thing." I was like, pretty sure that's fake. She said, "Oh, but it was definitely him." I was like, oh my goodness, you've literally read my book; you've got to apply this. But it is because it's so hard to mistrust what you see with your own eyes. So I think a big part of it is just that education amongst the general public: this stuff is really good, but a lot of it is fake. And it's the same with audio; it's not just images.
Tammy Haddad: Audio in a way is even more powerful than video, because it's a feeling.
Madhumita Murgia: Exactly. It's so visceral, and you're already seeing scams that use audio deepfakes. You can't completely immunize people to this and say, oh, we'll just cut off all the technology from them, or we'll brand everything as AI. People have to apply their own critical thinking as well. And that is part of what government can do [00:21:00] as well: education, as well as the technical solutions, is required.
Tammy Haddad: One of the things you do in the book is really explain, in the simplest terms, how companies use your data. Can you just walk us through it? Because I've never heard it said that way.
Madhumita Murgia: That's where this whole journey with AI started for me, in 2013, because it was a time when there were no privacy laws. There was no GDPR or California's data law; none of that existed. So really, anybody could collect your data and sell it to anybody else, pretty much, broadly. And I was trying to figure out, who are these companies that are basically bartering in data, that are essentially the economy of the internet? It's what allows the internet to be free; that's the business model. But nobody really knew who these companies were, or where that data ended up, or what it meant. So I decided to track it, and to figure out if I could find what companies know about me, and if it's true or not. And I did, though it was much harder because, at the time, without laws, there was [00:22:00] no way for me to request my own data. So I had to use startups to help me find it, and I did piece together this kind of profile of myself. It was so deep and detailed: not just my ethnicity or my job or my age, the obvious demographic stuff, but things like, I don't own furniture, or I live in a rented flat in a certain part of London, or I have a cleaner who lets herself in when I'm at work because I'm a young professional. That kind of stuff tells you something about someone. Or where I buy my groceries, with the note that it's not because she's loyal to the brand, it's because it happens to be on her way home from work. You really can build up a picture of someone's stage in life, but also their personality, their propensity to take risks, their ambition. And that was what they were selling. And that is the bedrock of AI, right? It's taking this data and then being able to predict what comes next. It started with advertising and targeting ads, but it went on to essentially [00:23:00] encompass health data and shopping data and financial data. And it's now gone on to encompass all of the words that we've created that are on the internet, to now write and make art and video and music and stuff using technology. But it all started with that data scraping that helps you to profile somebody, target them, and predict what comes next.
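The arc Murgia traces, from scraped data to predicting what comes next, is literal in the case of language models. Production systems use neural networks trained on enormous corpora, but the core objective can be shown with a toy frequency model; the corpus string below is invented, and this is a deliberate simplification, not how modern models are built:

    # Toy next-word predictor: count which word follows which in a corpus,
    # then predict the most frequent successor. Large language models replace
    # these counts with a neural network and billions of documents, but the
    # training signal -- predict what comes next -- is the same.
    from collections import Counter, defaultdict

    corpus = "the model predicts the next word and the next word follows the text"

    successors = defaultdict(Counter)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        successors[current][following] += 1

    def predict_next(word: str) -> str:
        if word not in successors:
            return "<unknown>"
        return successors[word].most_common(1)[0][0]

    print(predict_next("the"))   # -> "next", the most common successor here
    print(predict_next("next"))  # -> "word"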
Tammy Haddad: Do you think that these frontier models should be regulated in the sense of what they can use? Many would argue that they've already pulled all the data and put it in. But do you think there'll ever be any kind of regulation or rules like that? So you can't take my personal data, or you can't use WikiLeaks, or pick something. Would any country do that? Would any company say, this is toxic, we won't put it in? Or is everything just going to keep going into these frontier models?
Madhumita Murgia: Well, it's already [00:24:00] all of the internet. And I resist the characterization that it's all of human knowledge, but that's what lots of people in the Valley tell me: that it's all of human knowledge. What about oral traditions or stories? There's a lot of human wisdom and knowledge outside of the internet. But it's a pretty good representation, and it's already in there. I think the cat is out of the bag, the horse has bolted from the stable, or whatever the metaphor is. I don't think that we're going to go back now and say, oh, you have to remove this bit of data. And this is why you're seeing media companies and record labels struggling to figure out what to do now. Do we make a deal and get them to pay us for data they've already got? Or do we sue them, which is obviously what the New York Times is doing, and try and decide once and for all who owns this data and what it's worth?
Tammy Haddad: Has the FT made a deal with OpenAI yet?
Madhumita Murgia: Yeah, they have. So they already have a deal.
Tammy Haddad: And the Washington Post, where you used to work, it's unclear what they're going to do. But isn't that almost like the internet? I've heard Scott Galloway tell this story: he was on the board of the New York Times when the internet and Google became so hot, and then they went to them and made a deal, and he tells this great [00:25:00] story of how he said, don't give your information away for free. You're giving it all away. And it feels a little bit like that moment. We're going to give you this piece of change in exchange for your information, but you're never getting all of that knowledge back, all of that work.
Madhumita Murgia: Yeah, I mean, it's really interesting, because you're right. There was a previous era of disintermediation. We've seen this movie before; it happened with social media. Now we're dependent on these platforms to be the pipes that bring our stories to you. Everybody has deals with Google.
Tammy Haddad: Did it bother you when the FT signed with OpenAI?
Madhumita Murgia: No, I mean, I see it in an analytical way rather than an emotional way. I see that there are many different ways in which people are approaching it. And I think the FT's view is: the data's already been scraped; we should be compensated for it; therefore we will make a deal, so we're paid for it. What I find interesting is that it's so hard to put a number, a value, on all of a newspaper's archives, [00:26:00] because how do you value that kind of knowledge and information over decades? I do believe that we should be compensated. I don't know that there's a right and a wrong way of doing it. I think everybody's just figuring it out as we go along.
Tammy Haddad: Is anyone trying to put a number on that? Any of the companies?
Madhumita Murgia: Really, they all have. The FT has. News UK's publications, and The Journal, I think, have done deals. There have been quite a few deals announced now; it's just that nobody's said the numbers out loud. That's for sure.
And I think that's a heavily protected thing, because the way I understand it, every deal is different. Some of it is the entire archive, but some of it might be just the archive and not the future stories. So each one is being architected in this unique way that gives each news publication different benefits, but obviously, because it's so live, nobody's sharing that. And so I think they have put numbers on it, but it remains to be seen whether they undersold or not.
Tammy Haddad: What about the tie-ups, like between Apple and OpenAI? Does that worry you?
Madhumita Murgia: No, I think that was the natural course of things. [00:27:00] Yes, 100 million people have signed up to ChatGPT, but why would you keep visiting a different website when you could just access it through your phone? So it's very obvious that would be the next frontier. I think it's interesting that they've done that deal when, obviously, OpenAI is Microsoft's main partner.
Tammy Haddad: What is your take on them, Microsoft and OpenAI?
Madhumita Murgia: Microsoft has been their biggest backer since the beginning. But it's clear that Microsoft is diversifying now, right? They've invested in Mistral, which is the French AI company. They've now hired Mustafa Suleyman, who was running Inflection and who previously co-founded DeepMind, which is now part of Google; he now works for Microsoft.
Tammy Haddad: Were you surprised by that?
Madhumita Murgia: What, him working for them? Yeah. Everybody kind of agrees that Microsoft has made a really good bet, and they've been really smart about how they've come at this AI era, going in early with OpenAI and funding the infrastructure to allow something like ChatGPT to exist. In that [00:28:00] context, it makes sense to go work there and see how they're building these partnerships. They're at the forefront of this because they also own the infrastructure, the cloud computing, right? It's an amazing place to be, and obviously Inflection wasn't working out; they weren't making money, and so it made sense not to lose all of that talent. But it is interesting, because he co-founded DeepMind, which was bought by Google; he has worked for Google, has been on the inside of some of the AI systems there, and has now left and gone over to Microsoft. There's a lot of movement there, and from an analytical perspective, it's interesting.
Tammy Haddad: What about that competitive nature right now, and even the reporting of, oh, Microsoft's ahead of Google? Because Google, I mean, they were the first to develop the transformer, right? The basis of ChatGPT. Do you think that's going to stimulate more innovation? Because who wants to read every day that Microsoft is ahead of you, or Amazon is ahead of you, on this or that? Or is it the same old thing?
Madhumita Murgia: I'm not [00:29:00] convinced that it's going to stimulate more innovation, because really there's a race right now, and what that means is that you have to put a product out or you die. So what it's really doing is speeding up the consumerization of AI, while AI itself is still quite nascent as a technology. It's still a science in many ways; it's being developed. We don't quite understand how it works, how it scales up, what the emergent properties of this are going to be. So while it's being developed at the frontiers, it's also being put into consumer devices and software and reaching billions of people. So it's certainly speeding things up on that front.
But if innovation is scientific research that is thoughtful and measured, I'm not sure that is being impacted in the most positive way, because it's also reduced open publishing. Up to this point, all of these companies were publishing openly all of the work that they were doing on AI development. And that helped to bring the technology to this [00:30:00] point, because they shared that knowledge. That's how science works: you grow from there, because you're sharing what you've learned, and the research is built upon and built upon. I was talking to Yann LeCun, who's the chief AI scientist at Meta, and he said the only reason we've come this far and made so much progress is because we were publishing our work as scientists and learning from it as a community. But that's not the case anymore. Because of the competitive dynamics, the sort of who's-going-to-win, people have become more closed. So actually, I think it might have the inverse effect: if you're not publishing as much in papers anymore, the field doesn't progress as quickly.
Tammy Haddad: So you're for open source, clearly. And you've mentioned that you used to be a scientist.
Madhumita Murgia: I'm not anymore.
Tammy Haddad: Yeah, I want to hear about that.
Madhumita Murgia: Well, it was a long time ago, looking at the AIDS vaccine. It was a very long time ago, prior to my journalism days. But from having studied immunology and respecting the scientific process of peer review and [00:31:00] believing in it, I definitely agree with why openness is a good thing for creating a stronger technology, because that's how you have third parties without skin in the game who can come in and evaluate these systems and say, here's where it's going wrong. And that's what my book is: examples, dozens of them, of how things are going wrong. And why has nobody else found that out yet? Why do companies not know about it? It's because there are no independent parties that are looking. When you have open publishing and peer review, that's what it does: it dusts away the cobwebs and brings sunlight to these problems, and you end up with a stronger technology without as many flaws. So I'm not saying I think all AI should be open source, but I do fall more on that end than on closing things away into walled gardens.
Tammy Haddad: I think it's interesting that you work for the FT and they were fine with you focusing on people and the repercussions of this technology, as opposed to Elon versus that [00:32:00] person.
Madhumita Murgia: It's funny, because people who know my work well as a journalist picked up my book and were surprised by it, because they expected it to be a book about the business of AI, and maybe its impacts, because I've always been interested in the impact of technology on society. It's why I've written about surveillance, about data, about privacy, over a decade, even at the FT. I made a place for that, which they were very supportive of. But I always did it through a corporate lens. So I think people expected that would continue, and were surprised, in some cases happily surprised, hopefully in all cases, that this is a different lens. It's a reportage approach, but I think it tells you a lot about the corporate world as well, and about issues of power, the place that government has to regulate, and ultimately, you know, what kind of future we want for ourselves.
Tammy Haddad: Right, it forces people to focus on it. What is your take on the work of the U.N.,
the World Bank, the IMF, [00:33:00] these global institutions that try to elevate society and economies and people all around the world? Do you think they have any sense of it, or are working in the right direction on AI?
Madhumita Murgia: I've met with members of the UN; they had a high-level group on AI, and they brought together a really brilliant group of people from across the spectrum, including Mira Murati from OpenAI and also Father Paolo Benanti, who's the Pope's advisor on AI and who's also in my book: two very different people from very different worlds, but together in this group to create a set of principles. So I think it's amazing that they are engaging with it and really looking at bringing in different perspectives. I don't know what happens with those principles, who adopts them, and how much weight they hold in a world where it is companies that are self-governing and making these decisions, which is why I think there's a role for actual government regulation. But I find it fascinating that they have the convening power to bring together [00:34:00] people who aren't just technologists, who are economists or educators or priests.
Tammy Haddad: Yeah, we have to talk about the Pope, because you do write about him in your book. So tell us that story.
Madhumita Murgia: It was while I was writing my book and looking for interesting people that I came across Paolo Benanti. It was kind of a random podcast; it might have been somebody here in Washington who'd done this conversation. I was curious what a Franciscan priest was doing talking about AI, and I basically just called him up. He had studied at Georgetown; he studied tech ethics and bioethics, because Franciscans have day jobs as well; they don't just work as priests. So I went to visit him in Rome and discovered this amazing story of how he was advising one of the senior archbishops and the Pope on AI and what he thought were the ethical and moral dilemmas around it. They had come up with something called the Rome Call, which is a set of governing principles to preserve human [00:35:00] dignity within a very fast-changing technological world. But the greatest thing about it, I thought, was that he'd also brought companies to the table. He got Brad Smith, the president of Microsoft, and IBM's chief scientist, Dario Gil, to the table, as well as the Italian government and the UN's FAO: all these different stakeholders and institutions, including the companies, coming together to create this kind of agreement. I wrote about it then, and a year later, I went back to the Vatican, and for this occasion, Father Paolo had also brought in heads of two other religions, two other Abrahamic religions. He'd got the chief rabbi from Israel and an Islamic scholar of jurisprudence from the UAE. And basically, they were coming together with the Pope to jointly sign a declaration for human rights in [00:36:00] an AI world. What they were worried about was the harms of AI, including harming the vulnerable or AI being used in war; the Ukraine war, the Russian invasion, had just broken out at the time. It was just an amazing moment to witness: parties that don't agree on many things in the outside world coming together on this topic. And I think it was the first time the leaders of these three religions had ever signed any sort of declaration together, on any topic. So it's pretty historic.
It comes at the end of my book, and it might seem like a departure from the stories and the companies and the technology. But for me, it was an example of bringing new voices into the debate, to help us imagine new futures and solve problems in different ways, but also to figure out what we care about. Companies have their own priorities and their own incentives, right? And that's fine; that's what corporations are. But with a technology that's so powerful and can have so many benefits, you need other incentives. You need parties that have the [00:37:00] public's best interests at heart, among other things. And of course, religions aren't perfect. So my point is not at all that religions have it down and religious leaders know exactly what to do. But I write about this where I spoke to one of the rabbis, David Rosen, who heads up interreligious affairs for the American Jewish Committee. And he was saying, we are an example of what happens when you have too much power. Religion shows how things can go wrong when you do have too much power and you misuse it.
Tammy Haddad: That's a profound statement.
Madhumita Murgia: Yeah. And he was saying it's a lesson, because what he feels tech companies are today is the role that religious leaders occupied centuries ago, and he's saying, learn from our mistakes. You can't just do it on your own. And that is the message of my book: to bring all of us, readers of this book, people who don't think they understand AI or have a stake in it, to the table, to have a voice in deciding what we want it to look like, and not just walk into it blindly.
Tammy Haddad: And what do you think [00:38:00] will happen economically? You talk about Uber drivers and all the various changes in society as these algorithms, I don't want to say take over, but play such a prominent role. How do you think governments will have to change economically? Will there have to be some sort of universal basic income? Will things have to change? Because what you're talking about is something out there that controls how people live their lives, right? So if you're the government, how do you protect society? That's part of your job. Or do you have any predictions of what might come as a result of all of this?
Madhumita Murgia: On the one hand, it's very difficult to predict what kind of wealth it can generate, because it's a bit like trying to put a number on the internet. It's not one thing that changed one part of our lives; it's just infused into everything we do. The wealth created by the internet would be very difficult to quantify, right? Similarly, and maybe even on a larger scale, AI [00:39:00] might help invent new materials, or new types of drugs to treat diseases, increase longevity. And obviously, in the shorter term, productivity and efficiency is what tech companies are focusing on.
Tammy Haddad: Shareholder value.
Madhumita Murgia: Exactly, and lifting the bottom line. We're not seeing that yet; I think it's still early days. People are very much still experimenting with it. But the hope is that, because of how it can change scientific invention and healthcare and all of these huge problems, climate and energy, maybe it will help us to solve nuclear fusion. These are all maybes, but this is what we hope it can do. Then it's just going to have untold impact. So it's hard to quantify that. But I think in the shorter term, of course, there will be an impact on labor markets. People at the IMF and elsewhere have been trying to look at what governments would have to do.
And there is a lot of discussion of, oh, is it going to replace jobs or just tasks? You know, we'll still have our jobs; we just won't have to do all the boring tasks. But I do think that, of course, in the short term, it's much easier to just replace. We've already seen that with customer service. For example, Klarna, [00:40:00] the fintech company that lends money, has already said that they've moved over from human to AI customer service; I think 20 percent of those queries are answered by AI now.
Tammy Haddad: And we've seen that for years too.
Madhumita Murgia: Yeah, and it's getting better, right? There was a point where you couldn't say you'd want to replace it, but I think it is getting good enough. And that's true of copywriting for ads; that's going to be true of graphic design and illustration, video-making. I was at this literary festival in London, and there was a famous Indian film director there called Shekhar Kapur. He made this film called Mr. India, I think it was in the 80s, which was a huge hit, and he continues to be very famous. He announced that he was going to make an entire film with AI, starting with the script but going all the way through to the filming, the production, the post-production, on his laptop. And the audience were a bit like, that [00:41:00] doesn't sound good at all. He should have read his audience, or kind of noted that he was telling a bunch of creatives, we're just going to replace you. But that's the way things are going. You can do things like that; you can create films, to that point.
Tammy Haddad: How do you think you can convince people, which is really what I told you Van Jones is trying to do: this is really good for you, you should jump in here now and learn this and make it a part of your life, because your life will be better, don't let it pass you by? Do you have any sense, since you've traveled the world and talked to all these people?
Madhumita Murgia: I don't think it's a given that today AI is just going to make everyone's lives better. I think we're figuring out how it's going to make our lives better. Is it going to be health? Is it going to be energy? But the short-term promise is: just use it. If I'm being honest with myself, I wouldn't be able to go out to everybody and say, this is definitely brilliant for you and you should do it. But I believe that we can only figure that out by trying it out for ourselves, by educating ourselves, by playing with [00:42:00] it, experimenting with it, and then figuring out what it's good for and what it isn't. And for that reason, it's kind of, don't knock it till you've tried it. You can't just remove yourself from the technology and then criticize it, or be worried it's taking your job. I think the more you engage with it, the more authority you have to say, yeah, it's pretty good at doing this part of my job, but actually, it's really terrible at this thing. And then you can say, you can't replace this. And that helps businesses, I think, to understand what AI can and can't do. But if we don't engage with it, I think it will be very easy, if you want a cost-cutting measure, to just say, oh, we'll just replace it for now, and then you'll see the repercussions of that down the line. So for that reason, I think, if you want to be more empowered, more educated, and have more authority when you're discussing these issues at your work or at your kids' school or wherever, you've got to try it out and understand it for yourself. So that would be my advice for why you should engage with it.
Tammy Haddad: That's a beautiful way to end it, but I do have one more question. With all your reporting [00:43:00] and writing this beautiful book, do you have any sense that we'll ever find out exactly how AI works? What's inside the black box, and how those decisions are made?
Madhumita Murgia: I think that is the big open scientific question today. And for me, that's the exciting part. Of course we can try. We don't know today; as far as I know, everybody I've spoken to says we don't quite understand. But there are people studying how you can put in filters to learn how the algorithms are learning things. And I think that's what we'll be doing for the next 10 years: trying to break it open and adapt it for us, rather than be enslaved to a system we don't understand. So yeah, I feel optimistic that scientists are going to work that out.
Tammy Haddad: Oh, good. I'm so glad to hear you say that. And thank you so much for this great book. It's Code Dependent: Living in the Shadow of AI. Thank you so much for being here.
Madhumita Murgia: Thanks for having me.
Tammy Haddad: Thank you for listening to the Washington AI Network podcast. Be sure [00:44:00] to subscribe and join the conversation. The Washington AI Network is a bipartisan forum bringing together the top leaders and industry experts in Washington to discuss the biggest opportunities and the greatest challenges around AI. The Washington AI Network podcast is produced and recorded by Haddad Media. Thanks for listening.