Episode 11 - YouTube Reps. Obernolte & Mullin - Harbinger v2 Tammy Haddad: [00:00:00] Welcome to the Washington AI Network podcast. I'm Tammy Haddad, host and founder of the Washington AI Network, where we bring together DC insiders, influencers and policymakers who are challenging, debating and just trying to figure out the rules of the road for artificial intelligence. In this episode, you'll hear from Republican Congressman Jay Obernolte, Vice Chair of the House AI Caucus, and Democratic Congressman Kevin Mullin on how Congress is approaching AI regulation and what's at stake, and EqualAI's Miriam Vogel, the Chair of the President's National AI Advisory Committee, who just returned from Davos. Thanks for listening. I hope you enjoy. Tammy Haddad: Welcome to the Washington AI Network. We started in July. Come on in! Come on in, there are seats. It's all right. Oh my god, Cat Zakrzewski, everyone stand up and applaud her. The great Washington Post tech reporter. You can sit right here. There you go. There you go. [00:01:00] Great. See, it's really bad to be late at these things now. Anyway, just kidding. But we were just getting started, and I'm really thrilled to start with colleagues and folks who have really worked hard to bring technology to all of us and to make sure it's safe. We're honored to welcome our featured guests this evening: Congressman Jay Obernolte, Vice Chair of the House AI Caucus, and Congressman Kevin Mullin, who serves on the House Science, Space, and Technology Committee. Two amazing members of Congress who are doing so much in this field. Tammy Haddad: Let's hear it for them. Tammy Haddad: There's so much to talk about, but I have to point out that bipartisanship is alive and well in AI. Maybe this is a lesson. Thank you, Tiffany. So, Congressman Obernolte, I mean, I know you're a computer scientist. I didn't know you were a pilot. You and Heather are pilots. That's totally cool. And I'm happy to ride with you back to California anytime. 
But you've coded millions of lines of code over a 30-year career in software. Given your technical experience, [00:02:00] I want to start with: what do you think is the greatest opportunity that AI presents for the American people? Rep. Jay Obernolte: You know, I'm a huge AI optimist, and I think that the greatest is the degree of human empowerment that it can enable. Rep. Jay Obernolte: Think about the fact that for the last 200 years, every major expansion in the GDP of the United States has been heralded by an increase in worker productivity. Every single one. You really can't have one without the other. And yet, over the last six years, we've seen a gradual decline in worker productivity in the U.S. So I think AI has the potential to be that next big catalyst that expands the productivity of our workforce, and that will lead to not just growth of GDP, but a wave of prosperity that lifts all the boats in America. So I'm an optimist about that. Tammy Haddad: Congressman Mullin? Rep. Kevin Mullin: By nature, I'm an optimist, too. Let me just say, Tammy, [00:03:00] I'm honored to share this stage with you, as well as my colleague, Jay, who I got to meet in the California Legislature, and we partnered Tammy Haddad: in that. Yeah, you guys go way back. Rep. Kevin Mullin: We go way back. And we are going to be doing a district exchange. He's coming to my blue district, and I'll be visiting his red district in California, and that's going to happen a little later this year. And Jay's one of the smartest guys I've encountered in politics, so the really tough technical questions are going to go right over here. Tammy Haddad: Yep. Rep. 
Kevin Mullin: And on that score, I have to tell you, when I got the invitation to participate in the podcast, I hesitated just a little bit, because the reality is I'm in the listening and learning phase of something as vast as artificial intelligence, and how we're going to ask ourselves the right kinds of questions as we craft regulation. But I come from an area of the country which is the hub of the innovation economy. I represent YouTube and Tammy Haddad: Yeah, let's hear it for that. Come on now. Rep. Kevin Mullin: 300-plus life sciences companies, [00:04:00] many of which I'm going to introduce Jay to. And I think there are incredible opportunities coming our way if we get this right. Tammy Haddad: And have you noticed a change in your district? Literally, like, there's an excitement here in D.C. and in Davos, like, two places of such importance. But is there a new energy? Are there new startups? Has it made a big difference in the last six months? Rep. Kevin Mullin: I think the AI boom is going to be fully underway in the San Francisco Bay Area. We have lived the boom-bust cycle. I think there's great excitement around the economic pieces of Rep. Kevin Mullin: It's the drone that comes over every time we gather people here. We'll try, Rep. Kevin Mullin: we'll try this one, what the heck. Yeah, so, great excitement about the economic opportunities. But [00:05:00] look, fear of the unknown, apprehension. I took a ride in a Waymo autonomous vehicle. Incredible. I was a little apprehensive at first, but after about two minutes, I was like, oh, this is a piece of cake. Tammy Haddad: Did you sit in the back or the front? Rep. Kevin Mullin: I sat in the back. Okay. Tammy Haddad: Did you start perspiring or anything? Yeah. Or, like, how did that go? Did you have your seatbelt on? Rep. Kevin Mullin: I mean, yes, I was perspiring, but it wasn't because I was nervous. I just happen to run hot. But it was a very interesting experience. Rep. 
Kevin Mullin: And, you know, I'm just struck by the thought, you know, 43,000 people are killed in car accidents every year in this country. When widely adopted, AVs are going to save lives. Thousands of lives. That's just one slice of the AI world and the AI potential. So, I am energized by what this can mean, yet I'm also concerned about our democracy. And 88 elections across the world in 2024, and bad actors [00:06:00] wanting to manipulate the outcomes of those elections in democracies that may not be up to speed in terms of their regulatory framework. So huge issues for our democracies, not only here, but around the world. Tammy Haddad: Congressman Obernolte, what do you think is the best way to protect these elections? Rep. Jay Obernolte: There's two answers to that question. There's the government regulation piece, and there's the public education piece. I think we need some regulatory framework built around the spread of misinformation. I think that the federal government has an important role to play in making sure that deepfakes that seek to deceive people in order to shift their voting behavior are not allowed and are not promulgated by anyone that's aware of them. Tammy Haddad: So, what's the legislation for that, or how does that happen? Rep. Jay Obernolte: Well, there's already legislation in the House on that. I think it can be done thoughtfully. I think it could be done quickly. But I think that it's starting to come into the public eye because of what recently happened with the Biden robocall in New Hampshire. You know, that's a [00:07:00] perfect example of the mischief that can and will occur. And that wasn't even a particularly creative or well-done piece of forgery. You know, even the current state of generative AI can do better than that. But then there's a bigger answer to that, which is that we are going to have to do a better job of educating ourselves as a society about the power of this technology and what it means for our shared experience and for credibility. 
So, I'm always amazed that still I have to tell my mother that everything she reads on the internet is not true. I mean, and she believes it in this visceral sense, because she grew up in an era when, if you looked at something that looked like mass media, you could believe it, right? It was Dan Rather; everything had been vetted, and you might not agree with everything that he said, but you knew he wasn't going to completely mislead you. And she struggles to deal with a world where that's no longer true, as we will struggle as a generation to [00:08:00] deal with a world where you can see a video of Congressman Obernolte robbing a 7-Eleven, because someone could type that into a prompt: make me a security camera video of Congressman Obernolte robbing a 7-Eleven. And then you could say, make me a video of the hosts of CNN discussing how Congressman Obernolte is really in big trouble this time. And in the time it takes you to type that, in 10 years, you will have it. And so, think about the implications for us as a society. You know, we have to build a distrust of everything that we see, unless we can prove its provenance, unless we can prove its authenticity. And I think we have to shift from a stance where we just accept what we see as true until proven false, to where we're skeptical of everything that we see, unless it comes from a trusted source, or unless we can verify that it's authentic. Tammy Haddad: Well, your idea of watermarking the authentic materials. [00:09:00] How would that actually take place? Is that congressional legislation, or do you pull all the companies in the room and say, this is what you have to do? Because it needs to be done now if it's going to happen. Rep. Jay Obernolte: Well, the beauty of it is, it is kind of an anti-regulatory system. 
If you think about the proposals right now for requiring an encrypted watermark on AI-generated content, I think that it's well-intentioned, but very misleading. Because if you do that, everyone will follow the law except for the malicious actors who really, really want to deceive people. I mean, it's the same reason why we can't require a stamp that says fake on currency that's not valid, right? We laugh at it, but the truth is that everyone would do that except for the people that really, really want to mislead you. Rep. Jay Obernolte: And that's what will happen with a watermark that's required. And in addition to that, it will desensitize the public to the [00:10:00] fact that this malicious content is out there, because you'll expect to see the watermark. Whereas I think it should be the opposite. We should be mistrustful of everything. We should realize that in 20 years, it's going to be so easy to generate content with AI that you'll be able to generate a whole movie with a paragraph of prompt. Tammy Haddad: Oh, don't say that, I work for HBO. That's, like, not good. That is not good for us. Rep. Jay Obernolte: No, no, it is good for you. I think it is good for you. Tammy Haddad: How is it going to be good? Rep. Jay Obernolte: Oh, well, I mean, think about 20 years from now, when I, as an HBO subscriber, am going to HBO not to see a show that's generated for everybody, but a show that's generated just for me, based on what it is I'm interested in today and what I want to watch. I mean, that's incredibly empowering, not just for me. It's great for HBO, but it's great for content creators, because there'll be an explosion of demand for all kinds of different content. And yes, it will be content that requires knowledge of the use of AI to enhance your productivity, but still [00:11:00] many, many, many more jobs for content creators than before. Tammy Haddad: I want to follow up on that with something else. 
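The "prove its authenticity" approach Obernolte describes, marking trusted content rather than stamping fakes, is essentially content provenance verification. A minimal sketch of the idea in Python's standard library (purely illustrative: the key, content, and function names are hypothetical, and real provenance standards such as C2PA use public-key signatures with signed metadata rather than a shared secret):

```python
import hashlib
import hmac

# Hypothetical publisher key for illustration only. A real system would
# use an asymmetric key pair so consumers never hold the signing secret.
PUBLISHER_KEY = b"example-publisher-key"

def sign_content(content: bytes) -> str:
    """Publisher attaches a signature attesting to the content's origin."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Consumer recomputes the signature; altered content fails the check."""
    expected = hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

video = b"authentic newsroom footage"
sig = sign_content(video)
print(verify_content(video, sig))             # True: provenance verified
print(verify_content(b"deepfake clip", sig))  # False: not from this publisher
```

The design point matches his argument: unsigned content proves nothing either way, but a valid signature from a trusted source is something a skeptical viewer can positively verify.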
I heard you talk about dangers, about the raw files that are part of generative AI. So, what you're putting in, and how do you protect that? How are you recommending to protect that? Rep. Jay Obernolte: Oh, that's a great question. And when I think about what Congress needs to do to create a regulatory framework for AI, this is one of the major pillars, I think, that we have an obligation to step in on. Because right now, this question of copyright infringement, that's a question of fair use. And there are also questions around the use of generative AI. You know, what's a derivative work? How much do you have to alter the input before it's no longer a derivative work of the input when AI creates it? The problem right now is that we're seeing these lawsuits that are being brought forward, and justifiably so, by content creators that say, that's not okay. The New York Times lawsuit is a [00:12:00] great example. But the courts do not have a framework to make these decisions in, because we, so far, as a federal government, have abdicated our responsibility to set that. I mean, the courts' job is to interpret the law, and they're not interpreting law that was intended to deal with this situation. So, this is something that I think we are going to have to create a legal framework within. It's going to require a lot of work. I think if you rank all of the things that we have to do in Congress, the different aspects of AI we have to regulate, this is one of the most complicated and will be one of the most controversial, and yet, it has to be done. Tammy Haddad: Congressman Mullin, you've talked a little bit about elections. And the thing that people are worried about is, how in this world, when, as he's just discussed, deepfakes and all the rest, and also, by the way, all the opportunities AI is giving to elections, to candidates, again, around the world, how do you keep them free and fair? 
How do you get people [00:13:00] to believe they're free and fair? Rep. Kevin Mullin: Well, there's a lot there. Just a couple of thoughts on just that. Tammy Haddad: The reason I ask you is, you're focused on civics and educating, and you've spent a lot of your career doing that and making sure that people know what their rights are and how they should live, and that they should be able to live a good life and not be unduly influenced. Rep. Kevin Mullin: I was just going to reference an experience I had in the California Legislature that Jay and I went through. I'm the author of the Social Media DISCLOSE Act, so you can see who the true funders of campaigns in California are, not the phony-name committees for cats and dogs and well-being. It's the actual funders behind elections. But it was this iterative process with the social media companies. We wanted to make sure any law we put in place was one the [00:14:00] social media companies could actually comply with, but that was a way to sort of build trust in, you know, who is behind this content that I'm consuming. But that really is just this small slice, candidly. When you're talking about AI and just the volume of video coming into just YouTube, and how they develop an algorithm, and there are people in this room that know much more about this than I, an algorithm to detect that AI is being employed on the content coming in. That's an enormous task. But YouTube has an interest in that being, you know, quality content with full disclosure. We in the democracy business want the end user to have confidence in what they're seeing. So this is a central challenge for us. What I'm struck by, though, is, just on Jay's example about personalized HBO content, we have to create a regulatory framework that is adaptive, that can be [00:15:00] adapted to kind of the new uses and the future uses of AI. We are at the inception here. 
This feels to me like the beginning of the internet, when people were just trying to figure out, what is this going to mean? Or even like the Y2K fear that never really materialized. We are at this stage where we're trying to figure out what this is. You know, this will be one of those clips that they go back to in 20 years: look at how these people were talking. Tammy Haddad: We should put this year in a time capsule, like an AI time capsule, and put it up and bring it down in 10 years. Do you think that schools, I mean, like, high schools, middle schools, should start educating? Especially when it comes to, again, voting, and how would you do that? Rep. Kevin Mullin: Well, absolutely. I think AI is going to be part and parcel of our existence. And if you want engaged citizens and voting citizens, absolutely. The question is, you know, how do you embed that? I'm a believer that we don't do enough [00:16:00] civics education. As somebody who's a STEAM and STEM supporter, we need to be doing more on civics education. My father was a 33-year civics teacher, so that's coming out, I guess, here. But I authored the bill allowing 17-year-olds to vote in primary elections if they'd be 18 by the general election. We need to be doing so much more with our high schoolers, with our young people, teaching the value of true citizenship, but we need to be educating savvy political consumers. And I'm hoping that, because these young people are so conversant with technology and with social media and mobile phones, you're sort of speaking to an audience that may be natively better equipped than folks my age to embrace some of these challenges when it comes to technology. Tammy Haddad: Congressman Obernolte, what is the status of the CREATE AI Act? Rep. Jay Obernolte: The CREATE AI Act [00:17:00] has been introduced with wide bipartisan support. 
It creates the National AI Research Resource, which I think is critically important for solving one of the biggest potential consumer harms of AI. Think about the fact that the current generation of frontier large language models require about a hundred million dollars to train. And that's estimated to grow at a rate of about ten times with every generation of large language models. So one of my fears is that we're going to quickly get to a place where the only leading research on AI is done in private companies and not in academia. And that's very disturbing, because we have a rich history in this country of publishing research that comes out of universities. Someone publishes it, they make it available. There's a system of robust peer review where we determine what's true, what's not, what worked, what didn't, and it's all transparent. So we're in danger of [00:18:00] losing that. And we're also in danger of making monopolies out of the few companies that have the resources to train these models. I think that would be a very unhealthy place for us as a country. So, what the CREATE AI Act does is establish a research resource where universities and researchers can apply to get access to this compute, which is getting more and more expensive, to train these models, to make sure that that does not occur. So, the legislation is moving through the process, but at the same time, NAIRR has been established, and we've got wide Tammy Haddad: Can you explain NAIRR and take it all the way through? Rep. Jay Obernolte: Sure. So, this is a pool of compute resource, and this is a big deal, because this resource right now is highly sought after. If you want a new chip from one of the leading chip manufacturers like NVIDIA. Tammy Haddad: We have someone from NVIDIA. You want to give us some chips? Rep. Jay Obernolte: Well, they have, I mean, NVIDIA is one of our partners on there. Tammy Haddad: Oh, there he is. Yep. Rep. 
Jay Obernolte: And Google's one of our partners on NAIRR. I mean, the industry has embraced this, which is a great thing. Tammy Haddad: So, who hasn't embraced it? Government? Rep. Jay Obernolte: Well, I mean, I think everyone so far. You [00:19:00] know, it's funny, I had a discussion in my office today with someone who was asking about NAIRR, and I said one of the fears I have is that it's gotten such widespread acceptance, people are going to be skeptical. Like, why would industry agree to this if we're trying to prevent them from having a monopoly? But, you know, the honest truth is that we all see the hazards that AI is going to bring. And this is very important, because when we seek to regulate, I think it's important to know why we are regulating. You can't just regulate for the sake of regulation. You're setting yourself up for failure. It's one of the things that the European Union got wrong, I think, in their approach to AI. And when we're regulating here in the United States, the reason for regulation is to prevent these consumer harms that are foreseeable from AI. And I think the big companies get this as well. You know, they're on board with this. That's why they're on the leading edge with talking about the qualities that we want to see, eventually, in the application of AI: how we want it human-centric, how I want it to be safe, I want it to be trustworthy, I want it to be transparent, I [00:20:00] want it to be responsible, you know, however you define that. And the leading AI companies are completely on board with us. And that's very encouraging. Tammy Haddad: That's great. By the way, we're going to take some questions from the audience, so if you want to think about your questions, Virginia has a microphone, so get that ready. So I have to ask you, Congressman Obernolte. Today, POLITICO, Steve's here, Steve Overly, where are you? There you are. Great interview with Will Hurd. 
Did everyone read it? Give him a round of applause. Incredible interview. I'm going to quote the interview, from former Congressman Hurd. Tammy Haddad: "At one point in my two years on the board of OpenAI, I experienced something that I experienced only once in my over two decades of working in national security..." Tammy Haddad: You'll recall he was in the agency. Tammy Haddad: "...I was freaked out by a briefing. I saw for the first time what was going to eventually be known as ChatGPT-4, and the power of the tool was clear. This signified just the first step in the process to achieve artificial general intelligence. If unchecked, [00:21:00] artificial general intelligence..." Tammy Haddad: I feel like I'm on Meet the Press, or hosting Meet the Press. Tammy Haddad: "...could also lead to consequences as impactful and irreversible as those of nuclear war." Tammy Haddad: What's your response? Rep. Jay Obernolte: I'm a little bit more optimistic than that. Well, let me point out that Will actually has a degree in computer science. He was one of the few computer scientists in Congress, and so Congress is lessened by his departure. So, you said something really interesting in that statement that I think is the differentiator, right? To the general public, it seems like these large language models are this revolutionary advance in AI. But if you look at the field of AI, it's been a very incremental process, and the advent of these large language models was only possible with a huge training data set, most of the public-facing data on the internet going through it. And what we've created is an incredibly useful tool, but almost [00:22:00] like a parrot, right, that understands language because it has all of these different examples of how sentences come together and how to create thoughts and narratives. But it's a long way from what you just said, artificial general intelligence. 
And there is no academic agreement that AGI is even going to be possible through a large language model, you know, because there's no reasoning there. There's no planning there. There's a lot that needs to happen between those two. Tammy Haddad: And why is that? Rep. Jay Obernolte: Well, because of the way that large language models work, they're just trained to regurgitate what they've seen, so they're generalizing ways that humans communicate. But, I mean, the leap from that to something that's a reasoning entity, a thinking entity, which is what we mean when we talk about AGI, is still quite large. And many computer scientists, in fact some of the leading computer scientists, I would venture to say the majority of them, do not believe that AGI is imminent. So, we would be having a different discussion [00:23:00] if we were talking about artificial general intelligence, because that is, you know, the nuclear weapon. But I think we've got some runway on that. So, it's not time to panic, but we need to be aware. Tammy Haddad: Okay, we're going to come to questions next. Maryam, you should ask your question that you gave us, because Maryam Mujica is so smart, because she gave us the question in advance. I'm going to let her ask it. You guys get the microphone over while she's going over there. What do you think about? Sorry, Justin. What do you think about the New York Times going after OpenAI? What's your take on that? Do they have a right to do that? Rep. Jay Obernolte: Well, sure. I mean, we live in the United States, right? Anyone has a right to sue anyone for anything. But, I mean, I'm not saying the New York Times is correct, right? But they have a point. And the point is: we generated all this content at great expense to us. Yes, we put it out on the internet. You can't just copy what we did verbatim. That's not fair use. 
And we put out [00:24:00] disclaimers that state the circumstances under which you can use what we created. And those disclaimers do not include using our content to train a commercial AI model that then, by the way, you're going to monetize and sell to people. So, I think they have a point. You know, how do we resolve this? I don't know. Tammy Haddad: Okay. Thank you for the answer. Sorry the New York Times isn't here to get that, but the Washington Post recorded it. Go ahead. Maryam Mujica: Hi. Tammy Haddad: Maryam, will you stand up, please? Maryam Mujica: Congressmen, to both of you, thank you so much for being here. To Congressman Obernolte, I know that a big part of how you've talked about how we should regulate AI is an outcomes-based approach. You can't have any discussion on AI without invoking the Europeans, I feel like, at this point, because they've been sort of ahead of the curve in terms of wanting to regulate. Whether it's a good or bad thing, we can debate that, but what do you mean by an outcomes-based approach? For the average person who's not in the tech [00:25:00] industry, could you give an example of what that means? Rep. Jay Obernolte: Sure. So, the European Union has adopted this model where they say, we want to keep AI safe, and therefore they're going to require anyone that wants to use AI in anything but a low-risk context to apply for and get a license from their government. And ahead of time, the government will check, ostensibly, to make sure that their use is safe, that they have done all the appropriate things to make sure that their AI is safe for consumers. I think that that is not the right approach. And if you want an example of that, think about the fact that the FDA has already processed over 500 applications for the use of AI in medical devices. Over 200 of those devices are already available to consumers. 
And think about the fact that we're already having to navigate this. So, if you look at what the EU did to spin up a whole new bureaucracy, ask yourself the question: is it easier to teach a [00:26:00] brand-new bureaucracy everything the FDA already knows about ensuring patient safety in medical devices? Or is it easier to teach the FDA what it might not already know about AI? And I think it's pretty clearly the latter. You know, there's nothing super special about AI that would differentiate it from other tools like software. We don't have a department of software; we don't require licensing of software in medical devices. So that's what, when you talk about an outcomes-based approach, it means that we don't care about the fact that it's specifically AI. We care about the consumer experience. What are you doing with it? What are the potentially harmful effects of it? Which is what the FDA is doing when they're approaching medical devices. And I think that that's, you know, a fork in the road that we're at right now. We're going to have to decide: are we going to follow the lead of the EU, or are we going to instead empower our sectoral regulators the way that the UK has done? And I hope we take the latter course. Tammy Haddad: [00:27:00] Okay. Steve, you have a question? Stephen Overly: Thank you. Hi, Stephen Overly from POLITICO. The Biden administration put out its AI executive order that's in the process of being implemented. I wonder, what are the big gaps you see that remain even with that order in place that Congress needs to step in and fill, and how realistic is it that that can be done anytime soon? Tammy Haddad: Either one. You go. Rep. Jay Obernolte: Well, I'll tell you, I think there was a lot to like in the executive order. It clearly was written by some folks who had done deep thinking about the potential hazards of AI and what the government response needed to be about that. 
It's important to answer your question. I'll tell you one thing that, well, let me back up. You asked, what doesn't it do? Rep. Jay Obernolte: Well, the executive order does what the executive branch can do, which is to regulate the use of AI by the bureaucracy, by the federal government. What the executive branch cannot do, and what only Congress can do, is to regulate the use of AI by [00:28:00] industry. And that's the role of Congress. And if I had to criticize part of the EO, what I would say is, it attempted to step into the role of Congress and to invoke the Defense Production Act to require some reporting by private industry on the use of AI. And I'm actually not even saying that that's a bad idea. I'm just saying that the executive branch does not have the authority to do that under the Constitution. And so I think if they were to try to do that, they would be subject to some legal challenges, and they would probably lose. But that's something that, you know, we are behind on in Congress. We need to take up the mantle to craft this regulatory framework and get it passed, which will be good for everyone, because everyone will know what the rules are. Rep. Kevin Mullin: If I could just add to that. I generally agree with what Jay said. I'm encouraged that the administration is engaged in this. I've looked at the laundry list of actions, department by department, within the last 90 days. And I counted almost 20 different federal entities that are engaged [00:29:00] here. So, just on Jay's earlier comment, I would tend to agree that you need to harness the expertise department by department, folks that are already engaged in their particular arenas. NHTSA, for example, when it comes to AVs, as opposed to creating some separate, brand-new, big bureaucratic entity at the federal level. 
FTC and Commerce are taking the lead here, but, to Jay's point, I mean, Congress needs to engage, create a regulatory framework that's flexible, so people know what those rules are, but something that's flexible and adaptable, so we can adapt when we see the direction that AI is actually going. But in partnership, really, with private entities who are driving the innovation. There really is, trying to be optimistic here, an incredible opportunity for partnership between the private sector and government if we balance this properly. Tammy Haddad: Thank you. Paul Rennie from the British Embassy. Paul Rennie: Hi, thanks. Thanks again for hosting [00:30:00] us tonight. Always a pleasure. Congressmen, Paul Rennie from the British Embassy. Very much support this idea of outcomes-based AI, like you described, Congressman. Certainly our idea is to upskill existing departments and see where we get to. And a question for both of you, really: we had our AI Safety Summit last year, which the Prime Minister hosted and is very passionate about. It's tough work getting through international activity, but it's completely necessary. Companies like YouTube are international companies, but a lot of the companies in the US will always look to the US federal government, US lawmakers, in terms of how they're setting the rules of the road. But how do you think we can work best with the US administration, the government and so on, about getting good international regulation, so that we're not simply looking at US-centric policies for US companies? Thank you. Rep. Jay Obernolte: Sure. Well, I'm so glad that I said earlier that we should follow the lead of the UK. I didn't see you sitting in the front row. You know, you're absolutely right. It's going to be critical to work together, and in a number of different ways. I know a lot of people think we should harmonize the regulatory framework across [00:31:00] countries. Not sure how realistic that is. 
I think there's going to continue to be a variety of approaches. But we are going to have to work together to limit the spread of malicious AI. And I think it's going to take the same framework as the international cooperation to limit nuclear proliferation, where you admit that as technology advances, it becomes easier and easier to maliciously employ the technology, but we keep very careful track as an international community of who's enriching uranium, who's mining it, where it's going, and where those stockpiles are. Similarly, we are going to have to keep track of where the big accumulations of compute are. And we might have to impose international know-your-customer laws on those purveyors of compute, the same way we require banks to know their customers to limit money laundering. So we're going to have to work together. And all of that is going to require overcoming some differences in our government. I was nice earlier, so I'll say something a little spicy. Uh, [00:32:00] you know, we were uninvited to the UK AI Safety Summit, because it was decided at the highest level that this was an executive branch event, and so Vice President Harris went and represented us very well. But I had to explain to my counterparts in the UK: our system is not like your system. Rep. Jay Obernolte: In your system, when you form a government, it's formed within the ranks of Parliament, so it's kind of unified. In our system, the executive branch and the legislative branch are very different, and the regulatory structure for AI is going to come from the legislative branch here, not the executive branch. So if you are interested in harmonizing our approach to regulation, you need to talk to us. But, you know, that's fine. All is forgiven. And yes, you're absolutely right: we need to work together, and getting together and talking is the first step. Tammy Haddad: Can you talk to the prime minister about that?
Ha ha ha ha. Well, thank you so much, Congressman [00:33:00] Obernolte, Congressman Mullin. Thank you. Let's hear it for them. But stay with us. We're now going to talk to Miriam Vogel. I think some of you know her. I know you know her work. She is the president and CEO of Equal AI, and also the chair of the President's National AI Advisory Committee. I don't know how you do it all, but you'll tell me that later. But right now: you just got back from Davos. It's all AI, all the time. Someone said that AI was the Taylor Swift of Davos, which I really kind of like. I'm not sure Taylor Swift would like that, especially after what happened. But first of all, tell us what you heard there, what your message was to these world leaders and business leaders. Miriam Vogel: It was all AI, all the time. The one thing I noticed is — I'm sure for all of you, when you have conversations about AI, the predominant feeling, the sense, whether it's verbalized or not, is fear. Am I being left behind? What will happen to my job? Are my kids going to be ready? Is the government going to be able to handle this? [00:34:00] Just any flavor of that sentiment. That was not the predominant, or even the minority, feeling there. It ranged from AI optimism up through AI omniscience: AI can and will solve everything. My message was: I agree that AI is exciting, that it creates opportunities. I myself am AI net positive. But it can only create those opportunities and lift everyone up if we're very intentional at this pivotal juncture about making sure that we are including more underrepresented communities, more women, more people of color in all of the AI ecosystem. Tammy Haddad: Well, you have used your platforms to make sure people are looking at all the biases. And there you are with all of those leaders who are looking at success, the elections, deep fakes.
Do you think there's a sense that they all have to be as responsible as you're trying to [00:35:00] get them to be, to make sure that AI is successful and helping people around the world? Miriam Vogel: I do think, on the leadership front, fortunately, people are understanding the gravity of this situation. I feel it personally. It's affecting all of their elections. It's affecting their constituencies. So I think it's hit home enough that people understand the power for good and the power for harm. I think we see a lot more savvy amongst the leadership, as we'll see tonight here in D.C. The one problem with our leadership here tonight is it gives a false sense of quite how sophisticated DC is. But you can see, within the past few years, there's a much deeper sense of the gravity, the responsibility, the guardrails that need to be put in place. Tammy Haddad: Is transparency key? Is trust key? Miriam Vogel: Trust is key. And what are the key ingredients of trust? Transparency is a part of that, but we also have to really unpack what that means. So if you say, what's an algorithm — are most people in this room going to [00:36:00] understand what those data points are? Is it going to be meaningful? We talk about the nutrition label for AI, which I think is fundamental. We have to know what is in these systems, particularly as they're being purchased and licensed and used across borders around the world. We have to have consistency in explaining what it is that underlies these AI systems. The challenge is, it also has to be stated in such a way that the user understands what it means and knows what to do with it. And the user is going to be different in each different context. Tammy Haddad: Congressman Obernolte has this idea: instead of all the conversations about watermarking AI-generated materials, watermark the authentic materials, the original material. What do you think about that? Miriam Vogel: It's a neat idea.
It's like GMOs: we know what's genetically modified by what has self-identified as earning the label of not being genetically modified, and that also has gone through some layers of nuance and complication. So, you know, [00:37:00] I think it's a great idea to start somewhere. I think it's a great idea to set standards across industry, and that might be a very thoughtful and reasonable place to start. Tammy Haddad: So last Friday, for those of you who missed it, we had a little happy hour — H-A-I-P-P-Y hour. But earlier in the day, Miriam and Miriam's team, who work with Congress and with staff, gave briefings and had great conversations, really moving the conversation forward. So you were up on the Hill all day Friday. There were other meetings that happened. There were a lot of CEOs in town again that day because of the Alfalfa dinner. You walked away Friday night — I know you were exhausted, still recovering from Davos — but what was your sense from talking to the staff? Because, you know, Miriam's been doing this for five and a half years. Let's hear it for her. Five and a half years. Okay. What was your sense coming out of those conversations? Miriam Vogel: Coming from the Hill, it's really [00:38:00] energizing on AI these days. Particularly compared to five years ago, when it was harder to have a conversation. We have the privilege at Equal AI of working with members, and staff supporting members, who are savvy on AI, who want to move forward on AI. And so the program you're mentioning was, we were asked: can you give us a deep dive on AI? There are wonderful programs out there where they've gotten context. They've been steeped in it through the AI forum. They had a few specific questions they wanted to dive deeper on. They wanted to have intra-government conversations.
There are too many silos where people are not understanding what's happening in different agencies, the executive branch, the Hill. So the session that day was to try to bridge those silos and give people an opportunity to see what's happening across agencies. There are so many commonalities in what they're grappling with, and they need to have that conversation, which our happy hour hopefully helped facilitate. Tammy Haddad: We're very helpful, I think. Miriam, thank you so much. I have one more [00:39:00] question for you. There's a NAIAC meeting coming up. You have a report coming up. Can you give us a little sense of what that's going to be? Miriam Vogel: Thank you for that. Yeah, so our National AI Advisory Committee has a public meeting coming up February 22nd. We meet publicly, hybrid, at least three times a year. We've totally changed our cadence: instead of just those in-person meetings three times a year, we actually have public meetings almost monthly now. As opposed to the first year, when we followed the traditional government committee practice of an annual report, now we get things out almost monthly, because we needed to respond to the times — AI cannot wait for an annual report. And so at this meeting we'll have a few pieces. As a federal advisory committee, all of our documents, our materials, our statements need to be deliberated in a public setting, and so this will be one of those opportunities where we have the setting to have those deliberations. The other thing we're going to be doing there is one of my favorite things, which is our public stakeholder [00:40:00] sessions, where we invite other voices to educate us and the general public at the same time. Tammy Haddad: Is there anyone here that wants to volunteer for that? Ben Kobren. Oh, wait. Miriam will do it. Miriam Vogel: Come and join us. Yes. Tammy Haddad: Thank you for listening to the Washington AI Network podcast.
Be sure to subscribe and join the conversation. The Washington AI Network is a bipartisan forum bringing together the top leaders and industry experts to discuss the biggest opportunities and the greatest challenges around AI. Tammy Haddad: The Washington AI Network podcast is produced and recorded by Haddad Media. This episode is sponsored by YouTube. Thank you for listening.