WASHINGTON AI NETWORK PODCAST TRANSCRIPT EPISODE 3: CAT ZAKRZEWSKI SEPTEMBER 5, 2023
Tammy Haddad: [00:00:00] Welcome to the Washington AI Network Podcast. I'm Tammy Haddad, the founder of the Washington AI Network, where we discuss innovation, ideas, and issues shaping artificial intelligence policy. And this week: what, and who, will regulate advanced AI systems. It took government 30 years to regulate phones, 20 for radio, five for TV, and 10 for the Internet. And as Congress returns this week, the leaders of top tech companies from Elon Musk to Mark Zuckerberg will meet with senators from both parties, led by Majority Leader Schumer, alongside labor leaders and civil rights advocates. This may not be the cage match dangled by Musk and Zuck, but industry and government are wrestling with what kind of regulation and who should lead. My guest today will take us behind the scenes before the meetings convene. Cat Zakrzewski is a Washington Post technology policy reporter and was the first anchor of the Post's Technology 202 newsletter. Who better to guide us through the collision [00:01:00] course between the tech industry and government? Cat, thank you for joining us. Cat Zakrzewski: Thank you so much for having me on the show, Tammy. Tammy Haddad: So Cat, I worked with you a long time at the Washington Post, so I know you're deep, deep, deep in all kinds of tech. Did you ever think you'd see the day that Congress, the White House, and the tech companies would come together on regulation? Cat Zakrzewski: This is really unprecedented. In the past five years I've been working in Washington, we've seen a number of hearings involving these tech CEOs, but never an event like this where you have everyone from Musk to Zuckerberg to top civil rights leaders all in the same room talking about these issues. Even if you think back to some of the high-profile hearings we had in 2020, one, they happened on Zoom, and two, they were on topics like antitrust.
This AI moment and the release of ChatGPT has really spurred Washington into action in a way we haven't seen before. Tammy Haddad: So what is their plan? Cat Zakrzewski: So right now, the plan is to have a full day of meetings [00:02:00] with this long list of CEOs, everyone from Microsoft, Google, OpenAI, Meta, and they will all be in person with Schumer and lawmakers. And one of the things that's really interesting about this is if you think about the way a tech hearing normally works, you have lawmakers walking in and out of the room only catching, you know, the answers to the questions they ask themselves. The goal with this forum is to actually have many members of the Senate present, listening to a full day of briefings about this technology so that they can really start to understand what are we talking about here? What are the threats? And what action does Congress need to take to respond? Tammy Haddad: And why do you think the industry's here? I mean, Mark Zuckerberg never wants to go on Capitol Hill. Elon Musk doesn't come into town unless it's to salute an Air Force general or something. Cat Zakrzewski: So I think you see the industry learning from some of the mistakes that they made with social media. There was really a resistance for many years for [00:03:00] Silicon Valley to have any sort of outreach to Washington. And with issues like Cambridge Analytica and the fallout of the 2016 election, they saw how that kind of blew up in their face. And so I think there's a real recognition of, okay, with this new technology, we need to be more proactive with our outreach. There's a sense that other countries are moving on regulating this technology. Certainly, the European Union has been working on that for years. And so these CEOs see Congress as a place where they can probably find some friendlier regulation in order to also spur greater innovation in the US and not just limit it. 
And the third thing is, when you talk to these CEOs, I mean, I know you watched, I'm sure, Sam Altman's testimony on Capitol Hill. There's a real fear among some of these executives about what they're building and where this could go, and a real feeling that they have a responsibility to warn lawmakers about those potential future threats. And so I think that's one of the other reasons we're [00:04:00] seeing a different type of alignment on AI than we've seen in the past. Tammy Haddad: Fear from a tech CEO…Hmm. These meetings are gonna be very interesting. No reporter is allowed, right? Cat Zakrzewski: That's right. They're going to be closed to the press, unfortunately. If any Schumer aides are listening to this podcast right now, I'd love it if they'd change that. So we'll obviously be reporting closely from our sources within Congress and also within the companies to get quick readouts on what's going on in the meetings. One way these meetings will differ from a typical hearing, where you see a lot of fireworks and attempts to get sound bites that will go viral on social media afterward, is that the closed-door nature of these discussions could certainly change the tenor of how lawmakers and tech executives speak to one another. Tammy Haddad: And what about the civil rights leaders? And you've got Randi Weingarten coming from the Teachers' Union, you've got a lot of people, the AFL-CIO, Liz Shuler. I mean, they're usually not in the room with Elon Musk. Is Senator Schumer looking for a little [00:05:00] fireworks here? Cat Zakrzewski: You're certainly going to have some tension in this room. If you think of the relationship that the civil rights leaders have had with Elon Musk since he took over Twitter, now X, and all the changes he's made to the platform, it's very tense, and Schumer has said that he wants this process to be one where you bring opposing views into the same room to kind of hash this out.
I think that's a big question, though, of how that's going to actually play out in practice, especially at this level with this number of people. You know, how much will they be able to accomplish in a few hours? But certainly the wide range of civil rights advocates, and you mentioned labor, teachers, just underscores how far-reaching the effects of this technology are. You've got people concerned about what will happen to workers, what happens to schoolwork as kids go back to the classroom this week, and, you know, so many other effects beyond that for our democracy. Tammy Haddad: Well, I was gonna go back to President Biden's statement and the commitment from [00:06:00] the leading AI companies, the seven companies, that one of the goals is to address society's greatest challenges. That's as far from business, I would argue, as you could get, if you have these CEOs lined up who are worried about, you know, dividends and quarterly earnings. How is that gonna get squared? Cat Zakrzewski: I think one thing we've seen from the recent debates over social media is that these business discussions aren't divorced from those societal discussions. If you look at the reputational toll that some of the recent scandals, certainly the fallout of the 2016 election, took on these companies, they're aware that they can't ignore the broader societal effects their products have. And so I think that's why we're seeing a greater willingness for them to come and take pledges like the one we saw at the White House, which was totally voluntary. The other thing I would say here is that the companies are being very forward-looking in their discussions. They're often [00:07:00] trying to talk about how do we regulate this moving forward, and not so much how do we regulate the products that we've got out in the market right now. Tammy Haddad: But no one can even explain how it works. So how do you do that? How do you gain the trust of the public?
Cat Zakrzewski: It's a great question, and some of the CEOs in this room don't have a great track record on gaining the trust of the public, if you look at a company like Meta. And so I think that is why there's this attempt right now to have a more conciliatory approach: take voluntary pledges, show up for these joint meetings. But the big question is whether that trust and kind of feel-good atmosphere will remain as this technology becomes more ubiquitous, and certainly as we head into the 2024 election, given the problems that could arise with AI. Tammy Haddad: Oh, we're gonna talk about the election, Cat, but first the survey that came out asking scientists, computer scientists, and university experts what kind of regulation they prefer. [00:08:00] Most of them prefer the thing that no one in Washington seems to want: a new federal AI agency. 37% want that. 14% say AI cannot be regulated. 16% want Congress. Do you have a sense of where these conversations are going: to have existing agencies regulate, or to create this new federal agency? Cat Zakrzewski: So what we've seen is existing agencies are already looking at what authorities they have right now that can be used and applied to AI. One thing we saw earlier this summer, actually, we had a report in the Washington Post that the Federal Trade Commission is investigating OpenAI and whether it is violating existing consumer protection laws, specifically looking at whether its data security practices run afoul of some of the commitments it has made publicly to people, and also asking questions about whether its products are generating defamatory statements. And [00:09:00] so you've seen, as there's been this greater push around crafting new legislation, existing agencies, I'm thinking of places like the CFPB, the EEOC, the Housing Department, the DOJ, looking at, well, how is AI already changing the industries and issues that we regulate? So you have that going on.
But at the same time, there's really this feeling that no single agency in Washington is up for the task of regulating these companies and the new technologies. Both OpenAI and Microsoft have endorsed this idea of creating a new government agency, but, certainly, we've seen real resistance to that idea among some lawmakers, and even civil society advocates have warned that if you had a single agency tasked with regulating this, it could become subject to industry capture. We know these companies have extremely sophisticated lobbying operations here in Washington, and so would it be too easy for the industry to [00:10:00] influence that agency and the direction it takes? Tammy Haddad: Can you explain what Democrats want? And then I'm gonna ask you to explain what Republicans are looking for. Cat Zakrzewski: It's a good question, and I think we're really at the beginning stages of lawmakers just figuring out what they want in this debate. One of the things that's most difficult is that you have this effort from Schumer to craft this broader AI package, and then at the same time you have lawmakers in their individual committees introducing individual bills on different things related to how we detect fake images heading into elections, or even things around IP and copyright and making sure people's intellectual property isn't totally copied and replicated by these systems. And so I think one of the challenges in Washington right now is that AI is going to have such wide-ranging effects that there really is a lack of focus among lawmakers about where to even start. And when you talk to people about that, I think that's one of the goals of having forums [00:11:00] like this. Certainly, Senator Schumer has talked about the need around AI safety and the concerns that these systems could become biased against certain protected classes. We've already seen Democrats introduce legislation on that front.
It seems to me that the uniting theme that is maybe bringing Democrats and Republicans together on this issue is the threat of competition and what's at stake for US competitiveness with China. Tammy Haddad: I think you're right. It starts and ends with China. But what does that actually mean operationally? Cat Zakrzewski: Certainly, we're going to see that come up as a theme in what the CEOs say to legislators about this issue. And you know what? We've seen some reception on Capitol Hill of this idea that if you go too far in regulating these technologies, that could really harm our ability to compete on the world stage with China. And so there's going to be this constant push and pull as we have this debate over legislation: [00:12:00] although, you know, lawmakers, especially Democrats, may want to address the safety risks of these systems, create new requirements for business, or tackle how these systems could potentially discriminate against different groups, there will always be the question of, well, if you do that, will that harm these companies' ability long term to compete on the global stage?
One thing is you can't use their API to create a chatbot that would respond to people as a politician. So if you think of what a Trump bot would look like, where voters would be interacting with it and asking it questions. They were also wary of microtargeting, and this is an issue that's come up a lot over the years in the debate around social media: what happens when you have such a fragmented political system that every voter is getting targeted with a different personalized message? And you can think about how that could be abused if you had an authoritarian leader, or political opponents who wanted to say different things to different groups. Maybe you make a different promise to suburban [00:14:00] moms versus, you know, people in their twenties living in the city versus older adults living in a rural area. And so if you think about that kind of situation, it raises the question of how do we track what politicians are telling people? And OpenAI's ChatGPT can generate all of this really quickly, in a matter of seconds: it can write ads, it can write political emails, it can ask for donations. So it just creates this supercharged ability to write political messages in a way we've never seen before. Tammy Haddad: Well, we have seen all of that, just not supercharged, in 2016 and in 2020. So really what you're saying for the 2024 election is that nothing has really changed, except there's new technology and there are no regulations right now. So even though all these meetings are gonna take place as we go into this election year, there's really no additional regulation, right? Cat Zakrzewski: And so you're looking to the [00:15:00] companies themselves to develop rules and enforce them. And what our reporting found is that even though OpenAI had taken this time to create this process of developing these very specific rules, it doesn't actually have any mechanisms in place to enforce them.
So when I went in and asked ChatGPT to write these highly targeted messages for different groups, no matter which candidate I asked for, Biden, Trump, even down-ballot candidates, it would just keep generating new messages. It would pull sometimes-accurate information about these candidates' policy positions. We've also seen it's incredibly accurate at predicting survey results, and so it can effectively say, oh, okay, you're talking about this voting demographic, and these are the types of issues that will appeal to them. And so that would allow campaigns to do something, as you already mentioned, they've been doing for years. Now they can do it much more cheaply, more efficiently, and with fewer people. And that opens up, you know, whole new issues and potential [00:16:00] risks for democracy. Tammy Haddad: Sounds like there's gonna be an AI forum just on election issues. Cat Zakrzewski: We don't know all of the topics yet for the AI forums, but we do know this meeting that Schumer is having with the CEOs is just the first of several, and so I would not be surprised if election issues were a major focus in the future. We've already had some representatives introduce legislation specifically addressing the issue of deepfakes. So these AI-generated images and videos that could potentially be used to dupe voters are a big issue, top of mind for them. Tammy Haddad: What about some of these ideas? Like, YouTube has this idea of putting on what I would call a stamp, or some people call it a watermark. I think Google's talking about it. Is there really any movement to have some sort of standard, and to do that right away? Because, to your point, this is affecting people today, voters, not just in 2024. There's an election in [00:17:00] November. Is there anything that's going to become dominant or prominent in a way that will impact elections in a safer, more fair way?
Cat Zakrzewski: For years there have been efforts to develop this kind of watermarking technology, but what researchers have found is that it's very easy for bad actors to manipulate a watermark. There are things they can do, like flipping the image, or other simple steps that often render it ineffective. What we've seen now is Google in the last couple of weeks actually released a tool that would enhance watermarking technology. It's available for anyone using the Google Enterprise products to generate images. And so that's an interesting first step in this space. We've also seen Microsoft create an industry coalition to try to develop a standard so that you don't have a situation where Google has one type of watermark, Adobe has another, and Microsoft has something totally [00:18:00] different, and none of them end up meaning anything. So there are these industry-wide efforts, and we've seen the pledges from the big companies to the White House to work on this technology. The big problem, though, is that this only encompasses the models that are being built by these big Fortune 500 companies. There are also now a number of other models being built by startups that haven't necessarily taken this pledge and aren't in any way committing to use this type of technology. I'm thinking of models like Stable Diffusion, and I think we're at a point right now that unless you have federal regulators step in, there's no way to ensure that there's a common type of stamp or watermark technique used across the industry. Tammy Haddad: And what about photos and likenesses? We've already seen a President Obama viral video, a President Trump viral video. What is the mechanism, if there is one today? If another one [00:19:00] rolls out, who is the regulator? Is that the FTC? Is that the CFPB? I mean, who's in charge of that now, or no one? Cat Zakrzewski: It's a big open question.
There's currently a debate within the Federal Election Commission about what its role is when it comes to regulating generative AI, and civil society groups have asked the FEC to actually embark on a rulemaking in order to make sure there are ground rules for how campaigns can use this technology. But right now there isn't a clear agency that would take up that issue. The other challenge is we know how slowly these regulatory agencies move, especially when it comes to tech cases. If you had an image or a video come out, especially something in the month or weeks before Election Day, there's no clear way that the government would be able to quickly respond to that. We're actually in a moment right now where, after the 2016 election, there were a variety of efforts [00:20:00] to improve communication between the tech industry and the government so that they could more rapidly respond to false rumors online, but there have recently been a number of conservative lawsuits and also investigations in Congress that have brought new scrutiny to those activities. And we're hearing that, you know, in many ways they're getting rolled back, and so... Tammy Haddad: Right. Completely rolled back. So we're actually back to, say, 2012, but with 2024 AI tools. Cat Zakrzewski: That's correct, and you can think of all the ways that could possibly go wrong over the next several months and year. Tammy Haddad: Oh yeah. That's painful. Let's turn to another painful area, and that's for workers who are very afraid that they're going to be replaced by AI systems. How will the Schumer meetings discuss that? We know who's gonna be there for this upcoming meeting, but how much time do you think will actually be spent, or are you hearing will be spent, on the issues with employment and workers? Cat Zakrzewski: Well, I think Liz Shuler, [00:21:00] president of the AFL-CIO, her presence at the meeting is an indication that this is a priority for lawmakers.
And in a lot of ways, if you think about what we've been seeing over the last 10 years in Washington, the issues related to the tech industry often take a backseat to other issues related to the economy. Certainly, we saw this big push to change antitrust laws and draft new laws around social media, and then, amid the pandemic and amid efforts to pass infrastructure legislation, there were so many other things going on in DC that that often took a backseat. With AI, one of the things that I think is different from other tech topics is the impact this is going to have on the broader economy and on workers. And lawmakers can't afford to ignore that, right? I mean, when they're going back to their districts and thinking about the races that they have to run in 2024, this is going to be an issue that's top of mind for many Americans. So I think it would be impossible for lawmakers not to focus [00:22:00] on that. I think the question is how the tech CEOs themselves will respond to that. And… Tammy Haddad: You mean the tech CEOs who've laid off thousands of employees? Those ones, right? Cat Zakrzewski: The tech CEOs who have laid off thousands of employees. And, you know, if you also look at these companies who are developing the models, in some ways they can be pretty divorced and disconnected from the average American worker. The topics that they're working on… Tammy Haddad: Burning Man? Cat Zakrzewski: Right? Yeah. As long as they're not stuck in the mud somewhere in Nevada at the moment and are able to work on this. But I think the presence of the AFL-CIO at this meeting, and certainly what's been going on in Hollywood. The actors who are striking are part of the AFL-CIO, and there's a… Tammy Haddad: The Writers Guild's gonna be there, too. Cat Zakrzewski: The Writers Guild. Yeah. I mean, there's a real recognition that these are issues that are already affecting workers in many different industries. And so I think it's certain that will come up.
Tammy Haddad: Now the UK has said they're [00:23:00] going to have an AI summit in early November. They're not taking the same stance as Europe; they're more aligned with the US. What impact do you think that will have, if any, on US efforts? Cat Zakrzewski: It's really interesting to watch where the UK is after Brexit, because they're a country that really is at the center of a lot of AI innovation. If you think about DeepMind, which is part of Google, some of the most innovative work on AI, they're based there. OpenAI just opened their European headquarters there, and you have a university system with schools like Cambridge that really has made a lot of the discoveries that allow this technology to exist. So they're coming to this debate from a place that's more similar to the US, where they're trying to balance the need for innovation in their country with how do we create responsible guardrails. And so I do think you'll see a greater alignment moving forward between the US and the UK on these matters. And we have seen a [00:24:00] greater focus in international bodies on how we talk about AI and make sure we're aligned. The UN has been very active recently. This has been a topic that's come up at events like the G7, so this UK summit, in a lot of ways, is an extension of that work and an attempt to say, "Hey, okay, there's going to be these different approaches to regulating this technology, but how can we ensure that we're on the same page when it comes to some of the greatest threats that AI could pose?" Tammy Haddad: Wait a minute. The scenario I see now is that the US and the UK make a deal before Congress makes a deal for regulation. Is that what you're saying? Cat Zakrzewski: It's been interesting to watch in this space how other countries have been able to move much more quickly than the US Congress when it comes to regulating these technologies.
And certainly there's a feeling when you talk to lawmakers that this time has to be different, given the stakes that are at play here, and that's why they are having events like this forum. But it's still a big open question [00:25:00] whether or not Congress can actually get something significant over the finish line when it comes to AI. You're looking at a sharply divided Congress, really competing views about how to regulate the tech industry, and you're heading into an election year, which is going to disrupt all of this. Tammy Haddad: It's gonna be wild, but we have to talk a little bit about the White House before we go, because it seems to be as energized as Congress, and each agency is looking at not just its role in regulation, but how the rules should apply to it. Let's start with the White House. Who do you think is leading the fight, if you can say there's a fight there, or leading all the Biden efforts? And who do you think is really the strongest voice? Cat Zakrzewski: That's a great question. I mean, one of the things that's been interesting to watch with the White House is that President Biden himself has attended some of these meetings with the CEOs and with civil society leaders. And in a lot of ways, that's just an indication that this [00:26:00] is a broad, White House-wide initiative. We've certainly seen the OSTP be very active on these issues. One of the really interesting things when you talk about the White House and AI is that, although there's been so much attention over the past few months since ChatGPT came out, OSTP has, since the start of the Biden administration, been working on creating an AI Bill of Rights and thinking about the ways that should be applied throughout agencies to prevent AI from exacerbating discrimination in society. And so we've seen this ongoing work from OSTP.
There's been a lot of turnover there, as you know, but even as leadership has changed, that baton has kind of been passed. So I would look at OSTP as kind of the center for where a lot of this work is coming from. And we know there's also the office that was created by the 2020 legislation that does AI advisory work for the federal government. And part of that is NAIAC, which brings in a number of industry representatives. They actually have a meeting the [00:27:00] same week that Schumer is getting together with all the CEOs, so, yeah. Tammy Haddad: And Miriam Vogel, the leader of EqualAI, is chair of that, and Victoria Espinel, who's president and CEO of BSA, is part of it as well. Cat Zakrzewski: And so that group is doing a lot of really interesting work. I was looking at next week's meeting agenda, which is not yet out, but they are having a specific session on law enforcement and artificial intelligence, one of the nightmare scenarios we didn't even get to yet in this conversation: how facial recognition and different tools could be abused by various law enforcement groups. And so you see that group has taken a really wide-ranging lens across the government: how do we think about AI in defense? How do we think about it in law enforcement? They are really leading a lot of that work. And we know the Biden administration is working on an executive order related to AI, and certainly the work that council is doing, along with the work of the AI advisory office and OSTP, [00:28:00] is all informing it. Tammy Haddad: And I would also say, on the higher end of that group, you've got Jeff Zients, who's the new chief of staff, who came from business, and Gina Raimondo over at Commerce, who was also in business and maybe more sensitive to the implications for workers, and, as we've talked about, all the different issues, including law enforcement and how people live their lives. Right.
Because this is going to change the way you live your life. Cat Zakrzewski: It is, and for some people it already is. I mean, that's why it's been so fascinating to hear the debate coming out of Hollywood and looking at the ways writers and actors are seeing their jobs affected. My colleagues at The Post have had stories about copywriters who are already seeing cuts in their field as this technology becomes more ubiquitous. So it already is having intense effects in the way we live and work. And as these models become more advanced and we see companies release new products, that is only going to pick [00:29:00] up. Tammy Haddad: Well, Cat, if you sneak into the meeting, promise you'll come back and tell us everything. Cat Zakrzewski: Of course. I thank you so much for having me on the show. And yes, another shout out to the Schumer aides to let me in. Tammy Haddad: Let her in. Let her in. Thank you so much for listening to the Washington AI Network podcast with Cat Zakrzewski from the Washington Post. Be sure to subscribe and join the conversation. The Washington AI Network is a bipartisan forum, bringing together the top leaders and industry experts to discuss the biggest opportunities and the greatest challenges around AI policy. The Washington AI Network Podcast is produced and recorded by Haddad Media.