WASHINGTON AI NETWORK PODCAST TRANSCRIPT Host: Tammy Haddad Guest: Ben Buchanan, White House Special Advisor on AI June 19, 2024 WASHINGTON, DC
Tammy Haddad: [00:00:00] Welcome to the Washington AI Network podcast. I'm Tammy Haddad, founder of the Washington AI Network, where we bring together official Washington, DC insiders, and AI experts who are challenging, debating, and just trying to figure out the rules of artificial intelligence. I have a very important AI guest today.
In Washington, he is the most important guest. No, it's not Sam Altman. Satya Nadella, Mark Zuckerberg, Sundar Pichai, or Andy Jassy. My guest today is Ben Buchanan, the U.S. government's lead AI advisor. Even in this divided government, both parties agreed that Ben Buchanan, President Biden's special advisor on AI, is the most critical voice on safety, innovation, and national security.
Prior to joining the White House in ‘21, Ben was a professor at Georgetown University, focusing on the intersection of cybersecurity, artificial intelligence, and statecraft. He was named Special [00:01:00] Advisor on AI in June 2023. Now, one year into this post, we'll get his take on the state of AI innovation and regulation. Welcome, Ben.
Ben Buchanan: Thanks for having me.
Tammy Haddad: Ben, where do we begin? It's been a year. Are you where you want to be?
Ben Buchanan: A year in AI feels like a very long time. So it's like dog years. We've made a lot of progress. So in the last year, we've had the voluntary commitments from 15 leading American companies. The president's executive order. A series of international agreements. G-7, United Nations. So we've had a lot of fun. We've made a lot of progress. It is worth noting, though, that the Biden administration was into AI even before ChatGPT. Things like the blueprint for the Bill of Rights preceded ChatGPT. So this has been a priority of ours since 2021.
Tammy Haddad: I think it's interesting that one of the things you say, if it was illegal before AI, it's illegal now. So you're trying to say that rules have already existed on AI.
Ben Buchanan: That’s right. So we often say that quote in the context of [00:02:00] discrimination in healthcare and housing or anything like that. And we have had laws in this country against those things for a very long time. And we've enforced those laws for a very long time. And AI doesn't get you off the hook. Now, we might need to update, modernize regulations. Of course, we're doing all that. But AI is not a get-out-of-jail-free card for activities that are prohibited in the United States.
Tammy Haddad: As most people know, private industry is leading this technology. It's the first time. So you're really in this position of trying to not just figure out AI and how it will change people's lives, but also how to even talk about it. But let's go back to the AI companies. Can you give us specifics on how you're now working with them? It's a year since the voluntary commitments. How does that work?
Ben Buchanan: Just to set the scene here, I think you allude to the question, but one of the things that's so striking about AI is that this is a technology invented by the private sector.
And if you look at other revolutionary technologies, their roots were much more strongly in the U.S. government. Think, of course, the nuclear age or the [00:03:00] space age. And AI does have strong U.S. government roots if you go back to the 1960s. But the modern paradigm, this deep learning paradigm, really has its roots very strongly in the private sector. So that prompted the question, as this was all taking off, which is: What should the relationship be between the private sector and the U.S. government? And President Biden and Vice President Harris were very forward leaning on this. They brought the company CEOs to the White House in May of 2023 to talk about the need for voluntary commitments. We then worked with them on those commitments, and the companies came back in July and said here are meaningful voluntary commitments about how they're going to make sure AI systems are safe, secure, and trustworthy.
Concretely, what does that mean?
The companies all agreed to do independent red team testing of their systems, before they release the systems to the public. They committed to testing those systems for a range of security threats, but also for a range of societal risks, bias, and discrimination. They committed to posting transparency reports, sometimes called model cards, which is a big request, a big idea that came out of civil society, when [00:04:00] they release their systems.
Tammy Haddad: How does that work? So do they send those to you first?
Ben Buchanan: No. So the whole point here, about these voluntary commitments, is that they're commitments to the public as well as to the president.
Tammy Haddad: But no one even reads their privacy settings.
Ben Buchanan: Yeah. That is true. But we have seen — and I think these have actually gotten much more attention than the typical privacy setting — every time a company has released a major system, so Google's Gemini, GPT-4o from OpenAI, or Anthropic’s Claude, they publish pretty lengthy reports around the testing that they did of those systems. And that's out there for all to see. It's not just for us to see. Everyone can see and everyone can hold the companies accountable for living up to their word.
Tammy Haddad: So it's been a year and you're seeing these reports. You're probably one of the few reading them. But the question is, will the government now say, for frontier models — which is really what we're talking about, right — that there will be certain limits? Will there be specific regulation or guidelines that the federal government will say, you have to do it this way, or you need to limit?
Ben Buchanan: The voluntary commitments were last July. Then we built on that [00:05:00] with the executive order in October, and the executive order in October uses the Defense Production Act, this wartime authority, to hold the companies accountable to make sure they're actually doing the safety tests.
It doesn't limit what they can develop. It says you need to make sure what you're developing is safe. And I don't think we want to be in a position of unduly limiting the potential of AI. We would want to make sure systems are safe and secure and trustworthy. And that's what this authority does. And that compels the companies to turn over, separate from the voluntary commitments, much more detailed safety testing information to the U.S. government. Before they release their systems.
Tammy Haddad: And is that through the U.S. Safety Institute?
Ben Buchanan: It's through the Department of Commerce. And it's an interesting authority. The Defense Production Act authority lives with the Bureau of Industry and Security. But as we get the AI Safety Institute off the ground, the information that's required to be turned over is informed by the AI Safety Institute standards.
Tammy Haddad: Tell us more about the Safety Institute, because it's still pretty new.
Ben Buchanan: Yeah, it came out of the executive order and the key here is we need to really advance the science of what makes AI systems safe and secure and trustworthy. And [00:06:00] there's technical questions embedded in this, which are really important. And there's also what we call sociotechnical questions about the interaction between human and machine. And it's a job of the AI Safety Institute to study these questions and to put out standards.
Tammy Haddad: And you've got Elizabeth Kelly.
Ben Buchanan: Exactly. I was just going to say, the president appointed a rock star who's well known in Washington, DC for getting things done, Elizabeth Kelly — she was a key partner in the executive order — to go over and lead the AI Safety Institute. We've hired some top technical talent. We had a lot of great technical talent already at the Department of Commerce.
Tammy Haddad: We just talked to Elham Tabassi.
Ben Buchanan: Elham is great.
Tammy Haddad: So you've got this White House rock star and you have this scientific rock star, but then you've got all of these technology companies with many, many, many rock stars. So I'm trying to figure out what will change. Or maybe there won't be new rules. What's the goal?
Ben Buchanan: Right now, the AI Safety Institute is going to put out its first set of standards by the end of July. Those standards will issue guidance on testing AI systems.
Tammy Haddad: Do you want to tell us one of those now, in advance?
Ben Buchanan: There's no secret here. So the guidance will include [00:07:00] things like what kind of tests you have to run to make sure your system doesn't increase biological weapons risks and the like, and won't assist people in developing things like that, is one example. Or how do you think about the process through which you red team your systems. So that guidance will come out in July. We will then build our questions to the companies, to the Defense Production Act based on that guidance. So we think it will have an effect on how the companies proactively change their procedures. But also this is the case where the companies have come to us many times and said, we are supportive of the AI Safety Institute. We are supportive of what you're trying to do here. They've lent technical expertise. They've promised early access to their systems, so we can actually proactively test their systems, which we'll do starting this summer.
Tammy Haddad: But you know what people want to know. Where are you getting the information, or will we ever know that? And I ask this because I think the American people look at this and say, this is a black box and anything that the U.S. government can do, or companies can do to [00:08:00] sort of open that up. Will the U.S. Safety Institute give any insight into or requirements for companies to explain more where they got the material?
Ben Buchanan: So we don't have the legal authority right now to compel companies to turn that kind of information over to the public. I think that it's fair to say the Safety Institute will issue guidance on transparency reporting and the like, but we can't compel right now companies to turn that information over to the public. That is not an authority vested in the United States government.
Tammy Haddad: Do you think someone should know, do you think companies should report it? You think they should give more information?
Ben Buchanan: I think in general we, from the start here, have been very big fans of transparency from the companies. And in many cases, to their credit, they've acceded to it. I think we see a lot more about these systems, including their training data, than we did even a year ago. The exact question of what can the U.S. government compel is ultimately a congressional question. But we are using, I [00:09:00] think the executive authorities we have to the fullest.
Tammy Haddad: Mira Mirati, CTO of OpenAI, was in D.C. And she was talking about how they're helping on elections. And she told this story really explaining how they're taking again so much information and they're able to get answers faster, which is how they found Russia, China, Iran, I guess not really a surprise… all of the bad actors that have already come in. How does that relationship work on elections?
Ben Buchanan: Yeah. There's a couple of things here. I think the first piece of it is what's just called trust and safety so who are the bad actors misusing their systems? And that's what [00:10:00] they published. And some of that's related to elections and some of that is not related to elections. They put out some stuff maybe in February about bad actors using their systems to try to develop computer malware and so I think it's actually a broader frame than just elections but the whole point here is these companies in part because of their commitments in part because it's good for their business are proactively looking for bad folks misusing their systems and then trying to stop that and report that to the public and that's what they did in this case. This is not something that the White House particularly mediates. I don't think we want to be in the business of trying to determine who is using the AI system inappropriately or not. But it's more appropriate I think for CISA and the like, which has cybersecurity relationships and does in other contexts to be the interlocutor there.
Tammy Haddad: Again, it's confidence that I worry about. If OpenAI is out there talking about, we're going to make sure your election is safe, right? I wonder if you think, in their goal to really do something positive and help democracy, not just here, but around the world, does [00:11:00] it also hurt an institution saying that: Wait, you guys came up with AI, you're out there ahead of the U.S. government. Yeah, we're all working together. But this outside company now is going to tell people when they are being hacked?
Ben Buchanan: I don't think there's some people when they're being hacked, they're telling people when their platform is being abused or misused. I think there's an important transparency that when a company discovers their platform is being misused by trying to develop computer code, if I think that was one of the earlier ones that they stop the activity and then they disclose the activity. So I think we're supportive of that kind of transparency. I do think it's the case, as the president has said many times, that these companies are responsible. For making sure their systems are not misused and that we hold them to account to make sure they're doing that.
Tammy Haddad: Yeah, and speaking of that, DHS was holding Microsoft to account. There was a report about how they really are not as safe and secure as they should be pertaining to U.S. [00:12:00] government. Millions of people were hacked. Do you think that they are putting the provisions in place to protect the American people's information?
Ben Buchanan: That is a cyber issue more than an AI issue. So that report came from the Cyber Safety Review Board at DHS. I had nothing to do with it.
Tammy Haddad: It's like a confidence thing though, right?
Ben Buchanan: Yeah, but I think it's fair to say we have spoken to Microsoft a lot in general about the security of AI systems, and for that matter, all of the companies. The companies have included that in their voluntary commitments. There's some discussion about should there be additional authorities in play that we could use to require more secure AI systems and the like. We don't have those authorities right now. But I think it's fair to say that we absolutely want to make sure AI systems are secure.
And the security of AI systems is really interesting because there's two dimensions to it. The first is, AI systems can have all the traditional kinds of software vulnerabilities that you've had for decades, and buffer overflows, and all the traditional software vulnerabilities. But you can also get a new kind of software vulnerability that emerges from the neural network architecture of the AI systems themselves. Sometimes we call these adversarial examples. And [00:13:00] there's all kinds of attacks that are specific to AI. And the companies know this and have talked about this for a very long time. But it's a new paradigm for cybersecurity and AI. And it's a very interesting intersection, intellectually, but also really important from a policy perspective.
Tammy Haddad: So every day the president gets his CIA briefing. Does he also get an AI briefing from you?
Ben Buchanan: I think it's fair to say that AI is often part, given how important it is in geopolitics. It's part of the presidential daily brief.
Tammy Haddad: Are you feeling the pressure? Of all of this?
Ben Buchanan: I think the most exciting thing about AI is also the scariest, which is the pace of change. And I think this is a case where the president and his staff, Jeff Zients and the likes, said very early on: We cannot move at typical government speed on this issue, or we're gonna be left in the dust. So there's a pressure to move fast, but that brings out the best in us.
Tammy Haddad: Speaking of moving fast, you've got the EU, their AI act. They just put that into place. What's your take on that?
Ben Buchanan: We've talked to them throughout the whole drafting of that law and I think we've had positive conversations with them. And a lot of the details of that act have been kicked to [00:14:00] this new AI office they're setting up. So we're gonna keep talking to them. Elizabeth, who we talked about before, was a key interlocutor in a lot of those conversations that will continue. And I don't know that we have enough details yet to know where they're landing on a lot of key questions.
Tammy Haddad: I was at the executive order announcement, which I thought was incredibly clever because you invited a lot of important voices, including Kara Swisher and Scott Galloway, who went there. We had lunch together at the mess. Scott had never been at the White House. And they were both really thrilled to be there. We were all thrilled to be there.
Ben Buchanan: It was fun to have 200 AI people in the East Room of the White House.
Tammy Haddad: Oh, yes. And by the way, just like the rest of us, they were taking selfies. I want to point that out. But the president that day had a little aside, as he tends to do, about the Europeans — “we don't really agree with them.” Do you remember that? It was just a little aside. That's why I'm asking you.
Ben Buchanan: I actually don’t remember that one. I thought what you were going to say was he did the aside during that speech, which he had talked about previously in the Oval one time, he had seen a deep fake of [00:15:00] himself and said he couldn't tell the difference and that this was a catalyzing thing and wow, this technology is really very impressive. So that was the aside I remember from the speech. I also remember him coming back up to the lectern at the end of the speech, after he'd signed the executive order, and saying to the folks in the room, you know — this is something to the effect of — this is a technology that's very new for a lot of us and we know the private sector is inventing it, civil society is a key role, and we're going to need all of you on this. Those are the asides that I remembered.
Tammy Haddad: I remember the other one. Alright, I'm not going to get anything more out of you.
Ben Buchanan: I think on the EU, I think it's fair to say it's probably places where we don't see it the same way, and we're talking to them, but it's also fair to say the devil's in the details. And I think we're looking for more details and the devil's in the details for us too.
Tammy Haddad: What about the Paris AI safety summit? Are you working with them on that?
Ben Buchanan: Definitely. Definitely. Yeah. That's the third one. There was the British one in 2023 that we just concluded, the Seoul one a few weeks ago, and then Paris next. So we've been involved in all of them.
Tammy Haddad: And what are you suggesting?
Ben Buchanan: It's a continuation of what can we do to build on the success of the first two. And then the first one, the [00:16:00] Bletchley Declaration, I think set out a lot of key principles from 27 nations on AI. And then we built on that in more concrete ways in Seoul. I'm talking about what's always called responsible scaling policies and the like. And I think we're well positioned in 2025. We've got a ways to go before we get there, but well positioned to build on the international momentum. And it's not just the summits. It's also the G-7. It's also, it's very hard in the United Nations to get 193 countries to agree what color the sky is, but 193 countries joined unanimously the U.S. resolution on AI and the General Assembly, more than 120 co-sponsors. So we've had some international momentum here.
Tammy Haddad: And what about the G-7? You have the Pope.
Ben Buchanan: Yeah, the Pope was weighing in.
Tammy Haddad: And were you there?
Ben Buchanan: No, I was not there.
Tammy Haddad: What is the point of being the number one AI advisor? You need to start showing up at those meetings. You can't stay at the White House all the time.
Ben Buchanan: I'm sure we'll get to it. But we're working right now on the National Security Memorandum. Which is due at the end of July. We have met 100 percent of the deliverables on schedule but that means we can't miss on July 26th.
Tammy Haddad: Okay, but that's still not a good reason not to go with the President to meet with the Pope. Sorry. I'm not even [00:17:00] Catholic, and I'm saying that.
Ben Buchanan: Point taken.
Tammy Haddad: But to that point, it was very interesting, because again, part of this — Van Jones and Arun Gupta from the NobleReach Foundation, who I think you know, they are trying to get more tech scholars and more communities to take advantage of AI. Van Jones talks about, listen, this is the best thing that happened to my community. It makes everyone equal. Everyone has equal access. Jump in. Don't step aside. Having the Pope talk about it is a real equalizer. But let's go back to elections. We've had a couple world elections. The election in India just happened. Can you talk about what you've learned from the elections that have taken place from an AI point of view?
Ben Buchanan: I think we're watching carefully how AI factors into elections. I don't know that it's been decisive in many cases yet. And I think the whole point of elections is that AI shouldn't be decisive, that people should be decisive. That's probably our North Star here. But there's going to be a number of elections, including [00:18:00] two in Europe in the next four or five weeks here, so we'll get more data as things develop.
Tammy Haddad: You just met with the Chinese in Geneva.
Ben Buchanan: In Geneva, that's right.
Tammy Haddad: Can you talk about that meeting?
Ben Buchanan: Yeah, I thought it was very candid. I thought it was very constructive. I think there's always a lot of value in sitting down in the room with your counterparts and talking things through and we don't agree on everything. But it's worth knowing that that meeting came out of a conversation that President Biden and President Xi had in San Francisco in November when they talked about this technology and they talked about the importance of the United States and China engaging in a detailed way on this. And that's what we did in May. None of that changes. None of those conversations change what we have done with regards to AI in China. So the controls on advanced semiconductors and the like because of the way China's using that to modernize their military and perpetuate human rights abuses. So we're not changing our policies here, but I think it's very important for us to continue to have open lines of communication and I would say both sides were very candid.
Tammy Haddad: That was one thing I wanted to ask you about, the deal with the UAE and the companies called G42. Microsoft and G42 made that deal. Does that make you nervous? It's investment. They're close to the Chinese. Is that like a big change in how you're looking at relationships?
Ben Buchanan: So that's a private sector deal. I think it's fair to say that we've had conversations with Microsoft about the terms of it.
I think this has really been a place where Secretary Raimondo and the Department of Commerce have led the way.
Tammy Haddad: So it sounds like there's two different departments. One is the White House, National Security, and then you've also got Commerce. They're driving innovation, making sure that companies are able to make these kind of deals. This is a pretty big step, right? on the national security side too.
Ben Buchanan: I think it's a symbolic step. I don't know that we've seen a ton in public about the substance of the deals, in terms of the number of AI chips or anything like that. So I would just probably defer on commenting on how big the step is, but it certainly is symbolic to see an American company do this in the UAE. I think that's fair.
Tammy Haddad: Let's talk about chips.
Ben Buchanan: Yeah.
Tammy Haddad: Do you know where they all are and where they're all going?
Ben Buchanan: No. [00:20:00] I think there's millions of chips in the world, many of which are going to democracies. We're not in the business of spying on all the chips in the world. It is fair to say we are trying to stop them, and by them I mean the most advanced chips in the world here, from going to the Chinese military. So there are places where we've taken, through the Department of Commerce, which does have a security role, we've taken pretty significant action that the president has himself been very clear about to stop these chips from going to the PRC military.
Tammy Haddad: I put President Biden on TV for many years. Every time he says chips, this is not an age thing, because you could say it about any Senator, former Senator. Anytime he says chips, you just have to laugh because I'm sure he thinks to himself, when did I ever think I'd be talking about chips as a major national security event? Do you find yourself sometimes, not with President Biden per se, but when you're talking to people to have people understand the importance, or do you think they know now?
Ben Buchanan: He totally gets it. He was in Pennsylvania a couple weeks ago talking to Pennsylvania steelworkers and just unprompted, I think they asked about something, started talking about the chip [00:21:00] controls on China and what they mean for national security and how this had come up with a conversation with Xi Jinping, who had complained about them or something. And President Biden said, look, you're using these chips for modernizing our military and human rights abuses. We know they're important and we're not going to let you do that. And so, if he's bringing it up unprompted like that, he knows that he cares, and that's been my experience, even in private too.
Tammy Haddad: And also, aren't the Chinese, they are taking every single piece of material again ingesting it into their frontier models or whatever they call theirs. And then the U.S. is taking a different approach. Do you worry that they'll get ahead of us?
Ben Buchanan: No. I think it's pretty clear right now, our companies are leading the way.
No matter what metric you're looking at, whether it's technical benchmarks and the like, it's pretty clear to me that Anthropic, OpenAI, Google are leading the way. There's an argument, you could find someone who'd be like China has an advantage here because they can centralize things a lot more where the U.S. ecosystem is more diffuse across several companies. We're splitting our computing power in many different [00:22:00] directions. I think the response is we also have this really dynamic ecosystem where we're bringing talent from all over the world. We're the one designing the chips. We're the one importing a huge number of chips. And we're the ones doing the engineering to make this work at scale. I am all for betting on the United States.
Tammy Haddad: I know you have a talent program to get more tech scholars into government and to just make sure you're on top of everything, but the U.S. government is on top of all technical issues. How's that going?
Ben Buchanan: Honestly, better than I expected. So if you had come to me a year ago and said, could the U.S. government get tech talent? I would have said it's going to be a struggle because the companies are paying really well. We started the, we call it the tech talent surge. We've hired hundreds of people already. We'll have thousands by the end of the year or next year. And frankly, I think we're turning away a lot of good people because there's always some limit on how fast you can grow given appropriations and so forth. And we've just been flooded with really good talent. Folks know this is a really important time in AI, and many people are saying, I want to go and serve my country.
Tammy Haddad: President [00:23:00] Obama started the U.S. Digital Service.
Ben Buchanan: Yeah, so it's one of the mechanisms used to bring people in. We've used all available levers. So the U.S. Digital Service brings in techie folks to help the government use the technology. You've got the Presidential Innovation Fellowship, folks who are a little more mid to late career, senior, experienced leaders coming in to drive projects at agencies. U.S. Digital Corps, folks who are younger coming out of school and then in agencies of individual programs, DHS has a huge one. And we did, and I know you will appreciate this given how long you've seen Washington bureaucracies. We did this Herculean and bureaucratic feat of taking all the AI jobs in the government from all the different agencies and putting them in one place on AI.gov. So you can go to AI.gov and then just see everything in one spot. And it's been great and we've gotten a ton of interest through that. So AI.gov for those of you looking to serve your country in AI.
Tammy Haddad: And what about working with Congress? How have things changed? What's your take on the latest issues for legislation?
Ben Buchanan: The day after the president signed the executive order, he hosted the four main [00:24:00] senators in the Oval Office and said, I want to talk to you about this. I was there for that meeting. I was really impressed by how bipartisan, collaborative that meeting was. And those meetings are not always bipartisan and collaborative, but this one was and we had some good momentum there. I think there's a lot of interest in Capitol Hill. As Congress has had a lot to deal with. The Ukraine funding, who's going to be the speaker, funding the government, so forth. So there hasn't been a ton of movement in terms of actual bills being passed. It's an election year. We'll see. But I think it's fair to say that we are using the full extent of executive authorities. And at some point we are going to need Congress. And the president has called for that since the day he signed the executive order.
Tammy Haddad: I saw Senator Schumer this past weekend and he's still talking about AI.
Ben Buchanan: He's been a great leader on this and his group of four put out that roadmap. We will let Congress work out the details, but it's fair to say we've had good conversations with them.
Tammy Haddad: Do you think that hallucinations will ever go away? I want to talk about technology. Have you seen any progress from a year ago?
Ben Buchanan: Yeah, I think it's fair to say that if you look at the technical benchmarks hallucinations, sometimes called [00:25:00] confabulations, the notion in which an AI system basically just makes things up, that the companies have made progress on that. Obviously there's a substantial way to go. There's some domains in which that's not a problem. Okay. You just redo it and it's fine. And there's some demands with the consequences could be very severe. So part of why we're trying to put out safety standards is to help folks differentiate what is what.
Tammy Haddad: What is your take on the tie ups we have Apple and OpenAI combining? I don't mean from an antitrust perspective. How do you look at that? I'm not sure that
Ben Buchanan: I think there are tie ups, I think OpenAI is probably closer to Microsoft in terms of their integration there, which I will defer all antitrust questions to the FTC. Yeah. But I think the Apple integration of AI into their system I think shows how the technology is going mainstream and is developing, and Apple, which is a company that has not historically done these huge AI projects relative to some of the others, is now engaging in this area as well.
And I think Apple, a lot of the technical details got lost in their announcement, but Apple has done a lot that I think it's interesting and [00:26:00] probably important around improving privacy in AI systems and the like. And this was a big theme in the president's executive order. We've certainly been tracking the technical details here of what they're doing pretty carefully.
Tammy Haddad: What about the criticism of OpenAI? I think some of their former staffers said that they're running to shiny new products.
Ben Buchanan: I probably should not be commenting on criticism of American companies. I will leave that to the free and open debate in the public square.
Tammy Haddad: What do you think about their new safety and security committee? You mean it's a model for other companies?
Ben Buchanan: I think there's different ways of doing this. So I don't know that we would endorse one over the other, but all of the companies have talked to us in a fair amount of detail of how they're trying to prioritize safety and they've done it differently. So Anthropic has a little bit of a different model as does Google, but this is something that they've all done and they've all been pretty clear that they know this technology is not without its risks. And then we have to manage those risks so that we can harness the benefits.
Tammy Haddad: Speaking of risks, is it 77 million Americans who were affected by, I always call it a hack because I'm not a professional of the healthcare industry through [00:27:00] UnitedHealthcare? And for months people have not been able to find out where things stand in some cases, they're still trying to figure it out. But when you look at healthcare and AI, just hearing about that makes you even more nervous. This is not about hallucinations. This is about, wait a minute, it's not in the government's hands. It's in the company's hands. It's a perfect example of it. What's the role of the company? It's still not fixed because they just haven't been able to figure it all out yet. And then when you think about the idea that health care would be turned over to AI, it makes many of us very nervous. What about you?
Ben Buchanan: Yeah, so the UnitedHealthcare thing is a cyber incident. Unrelated to AI, but I think what it does is it shows the stakes of the American healthcare system, which I think we can all take for granted. Everyone knows how important healthcare is. This is a case where you see both the benefits and the risks of AI. So on the benefit side, I think some of the most promising applications of AI are to science and to healthcare and to drug discovery and the like.
A great [00:28:00] example of the impact of AI for the positive is something Google created called AlphaFold, which is used to solve the protein folding problem. This very fundamental biological problem that is at the core of a lot of science and medicine. And I think it's fair to say has revolutionized a lot of how we do that kind of biology and how we do that kind of medicine. So that's not a hypothetical. That is underway right now.
Google made this big breakthrough, I think it was in 2018 and 2020. So this is, again, not a hypothetical. Google. But what's also not hypothetical in healthcare is some of the risks, and the risk of discrimination in particular and we've seen healthcare algorithms discriminate against people of color and the like in terms of the treatment recommended, and so forth. And that's not acceptable. And this is a case where as we said at the top of this conversation discrimination is still discrimination, even if it's AI that does it. And the executive order has a very lengthy section on healthcare discrimination using algorithms that went to HHS they've already produced on schedule the guidance.
And I think they'll build on that further in regulations and the like. So this is the case where we see the risks and the benefits and we are not being idle about either. [00:29:00]
Tammy Haddad: And one of the points of the executive order was to have an AI officer. in each department, in each agency. How's that going?
Ben Buchanan: Done. on schedule. So I think the technical framing here, I think is going from memory here, but I think it's all the agencies that are what's called CFO Act agencies, so the very big agencies. I think it's the top 30 or thereabouts. And I believe we've done all 30 or 31.
Tammy Haddad: What about the legal issues? Intellectual property. Are you starting to focus on that?
Ben Buchanan: Yeah we've been focusing on this from, again, the very start of this. So this is an area. That is addressed in the executive order. It's actually tricky because this is a case where the executive branch doesn't have a ton of authorities.
So as you might know, the Copyright Office is a legislative body, not an executive branch authority. And of course the courts are a place where a lot of this stuff is going to get resolved. And OpenAI and the New York Times are engaged in a lawsuit around this.
Tammy Haddad: They sure are.
Ben Buchanan: Obviously not commenting on pending litigation. But I think it's fair to say that this is an area of a significant focus to us. And the [00:30:00] president's priority has been to make sure that people who are actually creating the content get rewarded for their work. And I think it's fair to say you can see that theme in the executive order. And I'm sure you'll see these conversations play out in legislative and judicial settings.
Tammy Haddad: Do you think there'll ever be a time when a regular citizen goes to, it doesn't matter, name the platform, and they can just look and say, that's synthetic. This is original. Will there ever be, I don't know if that's watermarking?
Ben Buchanan: We've been very clear that we want that to be the case and that some of the companies committed to is watermarking on their content and they've made really good progress. And the standard here is called C2PA.
Tammy Haddad: And what does that mean? Is it actually like a CNN bug in the corner?
Ben Buchanan: It depends on the platform, but they're building towards that. And we want something where you can, exactly as you say, a citizen can look at something and say, okay, this is generated by an AI, and we don't want it just embedded in the metadata. We want it to be appropriately visible. I think it depends, of course, on some of the context. [00:31:00] But, yeah. That's something that we can't mandate under our current authorities at all. We don't have the authority to mandate that. But the companies have committed to building that tech, and they've delivered, I think, in moving the ball forward substantially over the last year.
Tammy Haddad: Is there anyone that's doing it well now that we should all run to our computer and look up?
Ben Buchanan: Probably shouldn't endorse particular products. The White House lawyer on my shoulder is saying stay away from that.
Tammy Haddad: Isn't that part of it that the lawyers are just hanging right here to jump in whether it's copyright, intellectual property, trademarking, all of that.
Ben Buchanan: I like to say I'm grateful for the counsel of my lawyers who keep me out of trouble. Exactly. Okay.
Tammy Haddad: We're looking at the black box where AI lives.
Ben Buchanan: Sure.
Tammy Haddad: And will there ever be a reveal, not unlike that executive order reveal that I attended, where President Biden will open up and he'll say, here's everything you need to know about how AI works.
Ben Buchanan: You mean understanding the AI systems themselves at a sector level? This is not a case where the cynic in me would have [00:32:00] been surprised by how much progress we have made. A year or so ago, I would have said, it's basically impossible to understand the workings of these systems.
And there's a couple of interesting research results, one from Anthropic, one from OpenAI, just in the last couple of months, that have given me a lot more hope for the future. That we can understand these systems at a deeper level. Sometimes this is called mechanistic interpretability and this is an area where the companies are doing very good work and we have tried to catalyze this work in the government through things like the AI Safety Institute and the National Science Foundation and the like.
So I am more bullish than I ever have been before that we can start to understand what's actually going on in these systems.
Tammy Haddad: I look forward to that day.
Ben Buchanan: Yeah, it'd be great. And it is striking that just, again, this may be the professor in me, but it's striking that this field is. For a very long time has been different from other fields where we know the physics at a theory level that makes a suspension bridge stand up. But we really don't have the same kind of developed technical theories for how AI systems work inside the neural network. [00:33:00] And I think we have started in the last year to begin to make some progress and theory build and so forth. And that is intellectually exciting, of course, but it's also exciting from a policy perspective because it can help us understand the systems better and then therefore use them better.
Tammy Haddad: Ben, what about the doomsdayers? People that walk around every day and say AI is going to end the world. And I'm thinking specifically about Elon Musk. Let's start with him because he has such a large platform. How do you handle that?
Ben Buchanan: Has it been reported already, Elon came to the White House and just we had a good conversation about the risks and the benefits here? And he also runs an AI company, so he, I think, is very much a participant in the ecosystem. And I don't think we agree on everything, but I think it's fair to say we've talked to him the way we talked to any of the other companies and the companies talked about the risks here and they talk about the benefits and at the end of the day, they're all developing the stuff. So we need to make sure that they're developing it in a way that's safe, secure, and trustworthy.
Tammy Haddad: And you think they're telling you everything.
Ben Buchanan: I'm sure no company is telling the government everything.
Tammy Haddad: All right, I had to ask. And what do you think [00:34:00] about the impact of someone like Elon Musk, who has such a big following as an entrepreneur now with Twitter, and the power of that? How do you, again, get more people to look at it in a more positive way? Or maybe you don't think you need to.
Ben Buchanan: I don't think we're trying to put our thumb on the scale of discourse on AI. What we're trying to do is make sure that people developing it and the people deploying it are accountable. I don't think we're trying to be in a system in which we're trying to make the technology seem more or less positive in the minds of Americans. We're trying to make it earn its reputation and make it live up to the lofty claims that are sometimes given to it.
Tammy Haddad: Ben Buchanan, thank you for being with us. This has been an incredible hour.
Ben Buchanan: My pleasure. Thanks for having me. Thanks for what you're doing.
Tammy Haddad: Thank you for listening to the Washington AI Network podcast. Be sure to subscribe and join the conversation. The Washington AI Network is a bipartisan forum bringing together the top leaders and industry experts to discuss the biggest opportunities and the greatest challenges around AI. The Washington AI Network podcast is produced and recorded by Haddad Media. Thanks for listening.
We recommend upgrading to the latest Chrome, Firefox, Safari, or Edge.
Please check your internet connection and refresh the page. You might also try disabling any ad blockers.
You can visit our support center if you're having problems.