Washington AI Network WASHINGTON AI NETWORK TRANSCRIPT EPISODE 1: AI SUMMER DOWNLOAD FEATURING VICTORIA ESPINEL AND KELLEE WICKER JULY 20, 2023
OPENING Tammy Haddad: Welcome to the AI Network Podcast. I'm Tammy Haddad, host and founder of the Washington AI Network, where we highlight the conversations of DC insiders and influencers who are challenging, debating, and just trying to figure out the rules of the road for artificial intelligence. From national security to digital privacy, education, healthcare, the future of jobs, and building confidence and trust in AI systems. Come with us and explore the most pressing issues and the biggest players in AI.
If my voice sounds familiar, you may have heard My Cone of Silence podcast, or Bloomberg's Masters in Politics, or maybe the Washington Insider. Welcome to the first episode and the first meeting of the Washington AI Network. We gathered at the House at 1229 and kicked off the discussion on the latest AI developments.
We began with two leading AI policy voices. Victoria Espinel, a key member of the National Artificial Intelligence Advisory Committee, advising the president and working with countries. She's also the president and [00:01:00] CEO of BSA, the Software Alliance. And Kellee Wicker, the Director of the Science and Technology Innovation Program at the Wilson Center, providing analysis, advising Congress, and translating how AI technologies will impact the world.
Tammy Haddad: This is our first conversation for the Washington AI Network. And we at Haddad Media put it together because there's nothing more important than AI now. It's bigger than the internet, it's bigger than everything. It's gonna change how we live, how we love, how we govern, everything about our lives. So we thought, well, let's just create a place where people can come and talk about things and, really, what they're doing. In the future we'll focus on elections and specific kinds of things. And I'm really thrilled to have these two folks here to get things started for us.
First of all, Kellee Wicker, she's at the Wilson Center. She's director of the Science Technology Innovation Program. She's been working with Congress. She's been working actually with all of us [00:02:00] to educate people on AI. Everyone, welcome Kelly. And then Victoria Espinel, you guys know her. She's the president and CEO of BSA. And through this whole experience, she's always the person I turn to on the internet, the National Artificial Intelligence Advisory Committee, which also we're gonna talk to Miriam about that too. She's the chairman today. But we gotta start these conversations.
We've got to make sure that we're doing it the right way. Everyone here has something to say. And by the way, we're gonna want you to say it. But I wanna begin with you guys and Victoria, I'm gonna start with you and really talk, talk about this committee 'cause you've been working on this with people around the world for the past year.
So why don't you get us started.
Victoria Espinel: So we are about 14 months. We just started year two. We issued a report, with recommendations for the Biden administration in May. So I would encourage anyone who hasn't read it to read it, there are lots of different recommendations. I won't attempt to go through them all, but I [00:03:00] will mention one in particular because it's relevant to the international cooperation working group that I chair, which is to take the great work that our Department of Commerce has done through something called the NIST framework. But essentially it's a, it's a set of practices that companies can put in place to try to mitigate risk in artificial intelligence and then, internationalize that, or have as much cooperation with other governments as they, as, as they can. I can also tell you that just yesterday, the international working group that I chair, within the committee as a whole issued, finalized a recommendation. Two governments that as you are putting together global summits and global gatherings, that you think about reaching out to countries, emerging economies and make part of that conversation. There are lots of great conversations happening now around the world in the G7 and the OECD between the United States and the European Commission, but having the voice of emerging economies in those conversations, I think is something that the NIAIC committee as a whole thinks is very important.
Tammy Haddad: How's that going so far though?
Victoria Espinel: I think in [00:04:00] terms of pulling those summits together, and there's a couple people in the room that could probably tell us a little bit more. Oh, can you tell me about pulling summits together? I think there's a real appetite for that. Mm-hmm.
Tammy Haddad: Great. Thanks for getting us started.
Kelly, so you've been working with Congress? Educating Congress, bringing all the important players together, and also the rest of us, and just really trying to put the right things in front of the right people. And I know you had this pilot program at Howard University. Maybe start there because some of us were on an off the record call today.
And there was a lot of red teaming going on there, which is something I never heard of. They don't have that in politics. Maybe they should, but why don't you talk about that pilot. I think that sort of takes people into what you've been doing and how you are really working through what Victoria's talking about.
Kellee Wicker: Yeah, so the Wilson and Center is really thrilled to be a part of a coalition of a lot of partners that are putting on the largest and probably the first public generative AI red teaming event. For those of you who are not techies,[00:05:00] red teaming is when essentially you've built the tech product and before you release it to the public, you wanna make sure that it's safe and secure. So you hire people to attack it, to use it as if they were trying to get it to be used for malicious purposes. And when those people discover that they forced it to do something malicious, they report it to you. You fix it before it goes public. This is a really long standing practice in cybersecurity.
Red teaming is very established. Red team versus blue team is also a thing that we do. But in ai, it's kind of a new concept, partially because when you report a vulnerability that you've found with ai, it's very difficult to do anything about that. It's more of like a now, you know, so live around it kind of concept.
So early on in the field, there wasn't a lot of talk about public rights, attending especially. There was kind of just like a, you know, do it for your own good. Internally, we are doing this large pile, this large exercise at DEFCON next month, and the goal here is to talk about the importance of not just red teaming in public, but also diverse [00:06:00] red teaming. We are contending that if the people who test the systems are the same kind of person as the people who made them, they are not going to find any mistakes that the original team wasn't gonna find already. But, and that's why you see things released.
Chat GPT exploded on the scene and immediately started blundering everywhere. Not because of anyone's fault, but just because these systems are very complex. It's hard to find the mistakes. Especially when the people who tested it, the same people conveyed it. What we've seen, so yesterday we did a pilot of this red team exercise at Howard.
So at Defcon we're talking about having 3,500 people. Red teaming. We have all of the eight major companies on board contributing their models, which is really exciting. Yesterday we had 25 students, a little smaller red teaming, and an open source model. So much less powerful, but just kind of testing the concept.
Did we actually see new stuff happen when you had non-traditional people, red team? The answer was yes. So we had about 25 [00:07:00] students who in one hour generated 1000 attacks on the system. And within those, there was some stuff we've never seen people be able to make a model do. The one that won was they managed to get it to tell them a credit card number.
Now this could have been a hallucination, but obviously if this is a system that actually had access to credit card numbers, that would be a big problem. And so it kind of showcased that the core concept that we arrived with, this concept, that diversity of opinion, and the background of a person matters.
Hasn't really been tested, so we weren't sure, does it actually matter? And we kind of feel like, okay, we were right, like this is, we're onto something. Why this is all important is because Congress has been talking about red teaming. Increasingly, when we talk about AI security, should we be mandating red teaming?
Should we be mandating public red teaming? That helps the public have a better understanding of what's happening? And the White House is about to say, yes, we should mandate both public and private, red teaming. And it should be a big part of how we develop systems. And so we are really excited. The White House, OSTP [00:08:00] is a partner on our event in Defcon.
So we're looking forward to them being able to package what we learn and keep moving with it. And we're gonna, we're not gonna stop at Defcon. After we finish that, we're planning on continuing to work with community colleges and historically black universities to teach people how to red team so that they can continue to work.
We're just really stinking excited about it.
Tammy Haddad: That's fantastic. Okay. But what everyone wants to know is what is Congress gonna do for the first time? You've got political parties, right? Both parties. You've got the White House saying, we're gonna, we're gonna, we're in this, we're gonna regulate, you've got Congress, so Victoria, what's gonna happen? Or what do you think the priorities are?
Victoria Espinel: Yeah, so I'll do it. So I think, I think, you know, in the US you have Congress, you have the administration, and you have the state. And I think regulation is gonna happen in one of those areas. But in terms of what I think should happen, and it's not just the U.S., there are governments around the world that are looking at this.
I would say, I would say three things. One is I think, I think [00:09:00] focusing on the biggest addressable risks and the place where we can have the most beneficial impact is, is what the government should do. There's so many different issues. But there are some like bias and discrimination, which is one that's very important to me personally, that I think can be addressed.
And so I think governments are focusing on what the biggest risks are that can be addressed. And then I would really encourage governments to also use their, you know, considerable powers to encourage affirmative use of artificial intelligence to try to address societal challenges. So I think that is number one.
Tammy Haddad: Do you mean you need a statement that says that?
Victoria Espinel: Because I think that is, there's, there's a lot of conversation as there should be about the risks and what they are to address them. But I think there also needs to be, and I've said this for years, there needs to be at least an equal conversation about, are there positive uses to which AI could affirm to be put and have government is only one of the voices, but have that government voice in promotion of that.
The second thing I would say is, you know, think about the long haul. Make it work for the long haul. This technology changes very quickly. Like many technologies we've seen that even [00:10:00] in the last few months. So having something that's going to still be applicable two or 3, 5, 10 years from now, I think is important.
And then the last thing that I would say is, I feel very strongly about this, I think there is a, there's someone's government interest right now because of chat, G B T and because of sort of the, the impact that that's had. And I think that creates an opportunity that will not last that long for governments to come together and cooperate.
And so I would say I think we have maybe six to 12 months, but I think there is, and I think this, based on conversations I'm having with governments around the world, I think there is a real sincere desire for a number of governments to come together on a harmonized approach. Maybe not identical.
But, and I, and I just, that won't last forever. So I think it's now. So I would, I would encourage any government that I talk to, to seize that opportunity and move as quickly as possible.
Tammy Haddad: Well, I have to turn, by the way, first that, but I wanna follow up on that to Miriam Vogel, who, if you guys have not met her, has been running [00:11:00] Equal AI done such a remarkable job.
Do you mind standing up and telling people what you're doing because you're on the front lines of this and have made so much progress, but so much to do?
Miriam Vogel: Certainly. So Equal AI, uh, I'll tell you briefly, you, and instead of light myself in the process. We've been around almost five years. We were created with the express purpose of reducing bias and harms and responsible and, and, and artificial intelligence. Because I'm an optimist and don't like to just focus on the negatives.
We also spend a lot of our time focusing on the upside, and that's responsible AI governance, and luckily that's gone from a topic that people didn't know about five years ago. It's something people are really interested in, and this is very room, We had one of our first AI summits for our batch participants, and we run a match program for senior executives.
To learn what is responsible AI governance. And it is a unique bubble of companies and executives who [00:12:00] care about this that are already invested. And they just want to know until we get this legislation, until we get international consolidation on best practices, we align with AI leaders together.
Victoria is a key part of that. She's one of the judges in that summit, and her staff helps to teach it. And, so one of our main focuses is first of all, helping companies understand you're now an AI company because you're using AI in pivotal ways. So you thought you did products and services.
Wake up, now you're an AI company, and then we'll help you figure out what to do from there. Our second main constituency is policy makers, which thanks to the work of the Wilson Center is now a lot easier than it was a few years ago. There's a lot more savvy. I think many of you know, from personal experience.
There's a lot of interest. There's a lot of, I'm optimistic. We had a lunch earlier this week with this bipartisan huge coalition, and they are focused on what is tangible, but what not just, not just the sexy topics, but what can we do today as well as the bigger picture.
So, between that and what we're seeing [00:13:00] across the executive branch, we've seen almost all the alphabet supa, federal agencies speak on this and talk about how they're going to be mindful of how they're using that internally as well as the regulatory front. So that's our second constituency, these lawyers.
My bias as a lawyer is we need to be doing more to help our clients and our organizations reduce bias with frameworks and other ways that we mitigate risks and biases. That says we have a podcast. You can often hear Victoria co-host, it's called n ai We trust question mark. We try to bring on leaders and responsibility and governance.
I also have the privilege of serving with Victoria on NIAIC. Uh, we're lucky to be hearing from,
Victoria Espinel: to be very clear, the chairman, the. She's the chair.
Tammy Haddad: She's the chair of the whole thing. I'm
Miriam Vogel: the chair. So what, there's 25 of us, across the industry who were appointed by the president to advise the president in the White House on AI policy.
We are arranged into working groups to get the work done because you have worked with 25 experts. You know that, that that is scheduling alone.[00:14:00]
So we are very smart and have Victoria lead as many of our working groups as possible. No joke, people have asked us to switch their working groups so they could be on Victoria's and get stuff done. So I'm very glad to be listening to you today.
Tammy Haddad: Thank you, Miriam. Thanks for everything.
So open. AI today talked about government licenses for these AI systems.
What do you guys think about that?
Victoria Espinel: I think it's, it is potentially interesting to see what that would look like, but I will also say, I think there are a lot of different ways that I think you can put guardrails or regulation or whatever words you want to use in artificial intelligence and, and I, and I think it makes sense to do that.
I think licensing schemes aren't necessarily the most efficient way to do that. So that, you know, I think that was something that would need to be worked through.
Kellee Wicker: Yeah, I, very similar opinion actually, but, I think the concept of licensing the foundation models, the most powerful ones, like Chat GPT four, that makes sense to me [00:15:00] in that, you know, there's sometimes there's concerns that through legislation we're, we're gonna create a moat where only the established players can, can actually conform to this regulation and we're gonna keep all the smaller players out. I think given that the future of AI is most likely going to be largely driven by hyper specialized, small models that are tuned to a very specific task, creating a moat on the most powerful models and not necessarily an issue because the amount of compute required, amount of data required and money required to create these models is such that there already was a moat.
So in terms of licensing foundation models that are generally very flexible and adaptable and can be used for good or for bad in a very powerful way, that tends to make sense to me. It is, I would agree. Not necessarily the most effective way to do it, but it is a way that the government knows how it works.
And I think right now when we're trying to prioritize things that are, Relatively quickly moving, but still a smart decision. We wanna use tracks that the government already understands instead of [00:16:00] creating an entirely new thing that we're trying to grapple with. So I think, yeah, generally it makes sense I think.
Tammy Haddad: So with the company's line, like what you're talking about, trying to figure all of this out, that companies are aligned in some way. I mean, literally like how do you get control? Of all of it. I mean, Congress always gets caught up in all the details and rightly so, you know? But how do you think, because, because you're talking about today, people talk about it all the time, we've got the six months, we've got 12 months.
How do you cut through all the noise and get really down to it: “Here's three things we can do?” I mean, is that gonna be on you guys or is that gonna be the Schumer guys are here, right? Are they still here? Uhhuh and who's, who's gonna do that?
Victoria Espinel: So, I think so, so I represent the enterprise facing part of the software industry and I think that makes it easier for me in some ways because those are companies that have a business model that is largely aligned.
And so when you say the industry like [00:17:00] that becomes much harder when you have business models that are, you know, companies that are, that have different ways of making money. I think certainly in the enterprise space, you know, I think the companies have been pretty forward leaning on regulation and we have a very detailed.
Set of ideas about what regulations should look like, and we've kind of leaned into that early and often. But I think, you know, in terms of, if I were a policymaker, like in terms of, you know, stepping back from that, I, I, it goes to what I was saying before. I think one of the things that I think what governments need to do is think about, okay, there are, what, 25, a hundred. There is such a long list of risks that you can come up with. So I think like filtering that by like our, here are ones that are really consequential and ones that we actually think we can address and we're gonna focus on those. And so to the extent there's gonna be a regulatory focus, I think that is the universe that policy makers need to focus on.
And I think once you. I think that's the big task at hand. I have my own views on what that universe [00:18:00] looks like, but I think that's what policymakers need to do because trying to, trying to move in a hundred different directions at once is going to make it impossible to get anything done.
Tammy Haddad: Right. And then the other piece of it is, Kelly, that people are afraid of the technology, the robots. What's, I love the name of this act today, Senator Casey, No Robot Bosses Act. We don't, someone in my office said, we don't have at Haddad Media. Someone else said, I wish we did. And then, well, we'll talk about the workplace in a minute, but there's this fear of this technology, which you can't really define even as we're talking about it.
It's. You know, it's long, it's pieced. It's this, it's that. It's how are you going to get, um, why are you gonna get people to not be afraid to embrace it when you come up with this, these trust areas of trust and safety, whatever you wanna call it, whatever way you do it, how are you gonna get people to trust and believe that, [00:19:00] you know, this is all in the up and up.
Kellee Wicker: I, I have a three part answer. So first part is there are, to a certain degree, I don't ever want you to fully trust it. Mm-hmm. I want you to remain skeptical, especially as we start looking at potentially the enabling of a wave of disinformation. We, of course, always have Photoshop. We could always tamper with photos, but now the skill level required to do so is significantly reduced.
So, Don't fully trust things. That's a smart approach. Second thing is, I think there's like an old poster that's been up in MIT's computer science department since like the seventies that says "A computer should never make a decision." And while there are many tiny little decisions that we should and can and are in the process of automating with AI, a decision that has sweeping consequences like pressing the nuclear bomb button or wholesale laying off 20% of your workforce, those are decisions that should never be made by a computer.
And so determining [00:20:00] when to use ai. Let's not do what we did when apps came out and everything was an app suddenly and we downloaded 1700 apps and none of them worked. Let's not go to that. Let's think very critically about what we should automate? Where, where is AI actually tuned to do a good job, and where should be kept out of the loop?
And then last thing I think is we bring things into the light. Um, we bring more transparency. And I'm gonna go back to my red teaming hobby horse. Part of, one of the reasons we think public Red teaming is really important is because understanding intimately the problems that these systems face and publicly dealing with it is part of building trust.
You don't tell people they're perfect, it's fine. Don't worry about it. You tell people these are flawed systems and you're a part of fixing them and now you know how we're fixing them. And now you know where to report something that happens and now you know what happens. If something happens, bringing transparency.
There are some things we can never be transparent about. We're not gonna share the training data sets that are building these models. That would be a huge industry secret. We're not gonna share the [00:21:00] model waste to make the model work, but we can tell you how it works when it's being used. When are you talking to a chat bot?
When is your health insurance using a machine learning model to make decisions about your care? You should know those things. Transparency is essential for building trust, and building trust is one of the reasons that Americans trust technology far more than you see in other countries. We tend to believe that someone has thought about this and someone has put a structure in place that can let us trust it.
That's what we're grappling with right now. Are we putting that structure in place?
Tammy Haddad: Right. Thank you.
Teresa, do you want to jump in here? You've been dealing with all this for a long time?
Teresa Carlson: Yes. Hi everybody. I'm Teresa Carlson. I manage a supply chain and logistics company, but I worked for 25 years in tech, Microsoft, and Amazon, and I was around when Amazon actually created our first AI tool. Cloud computing. Nobody knew that In 2010. In 2010 when I started doing this with AWS, we had to go to Capitol [00:22:00] Hill.
And when I would talk about cloud, they would say, what? I would say, I'm here with Amazon. And they would say Here to talk about books or taxes. And I would say no I'm here to talk about cloud computing and they were saying, what's cloud computing? And people, it's kind of the same thing with AI. So one of my, we, we had a great discussion today and Victoria, they were gonna do an amazing job.
Love your comments. I like the red teaming design, but I, but what I would say is education, education, and more education. We've gotta be realistic that this has to be early and often for everyone because you can start regulating on one, but you're not gonna stop technology evolution. And just to give you one example of how quickly this moved.
When I was at Microsoft in the 10 years I was there, we would release one service every three years, and we thought that was innovation. When I got to a w s and you got to cloud, the first year I got there, we released 25 new services in [00:23:00] 10 years. When I left there, we were releasing over 2,700 new services and features a year.
It's gone way beyond that. So when you, when you have AI involved, you're going to even go way beyond that. So you have to look at it from teaching, training, educating every level. You have to look at what are the risks. I totally agree. They have risks. Government has to be involved early and often. They can't be behind the ball. They have to be in front of the ball.
If that makes sense. Where's the pet going? And I respectfully say we are oftentimes trying to figure out where it's going. So it's just, we've gotta be more, more thoughtful and bring in experts. I'm glad that you're doing this. The committees have, I mean, we were so far behind in cloud with government doing that, honestly, and took way too long.
So I will say on the positive, people are paying attention, they are bringing experts. And my last comment was figuring out, you know, like when we went from horses [00:24:00] to wheels and cars, we have to figure out again, what the good things are and respectfully to the British. I'm married to a Brit, but in America, innovation is off the charts.
And I was always really proud of my work with the Brits. We did amazing things together. But in America, we need to determine, let's embrace our innovation and celebrate it and make sure that we are celebrating how we take these companies to the next level. And I loved your comment about let's not forget about the small men and women that create these companies.
We have to take care of them. And the big mammoth companies will do all this heavy lifting, let 'em, let 'em go do it, and they create a model that allows the smaller companies to move much faster. And in fact, Tammy, thank you for doing this.
Tammy Haddad: Yeah.
Teresa Carlson: I think it's such an important topic.
Tammy Haddad: That's great. Thanks so much. Karen. I know you, you've worked with all these companies. You wanna jump in here? You've had to deal with a lot of these legal issues for a long time. I mean, I don't know how you figured this out. That's a, that's a long contract [00:25:00] coming.
Karyn Smith: I mean, the question that I had, that I asked last week is the event we're at, and Victoria and I worked on this together for a number of years, is, you know, we still aren't here on the privacy front with the elements.
So I'm curious, what makes it, what's different about this? Like, how are we gonna get there with regulating AI when we still don't have a federal privacy law all these many years after GPR.
Victoria Espinel: So I do think it goes back to this window. The, the, the congress, the states, the administration, governments around the world, and I say the UK has been really thoughtful in AI policy for five or six years.
I think you were the first government that I know of to set up an internal AI advisory structure six or seven years ago when governments weren't thinking about it. The chance to meet with the gentleman. But I do think there's this moment right now where people are seized with it in a way that they never were, where the privacy legislation never had that momentum behind it.
And I think there's a real opportunity there. And I, I, but I will go back to like, I don't think it'll last forever. I think 12 months from now it will have dissipated. So I [00:26:00] think this is really the moment to seize, to try to move together. And if we can move like a number, if we can sort of move collectively, globally together. I agree, you don't, 193 countries together is not gonna be feasible. But I think there's a number of countries that would be very excited to work together now.
Kellee Wicker: I can jump in just really quick. I totally agree with this. There is a window now that's closing. I'm not sure we're gonna hit it. Um you know, the AI is one of the few issues that's not partisan on a hill.
I don't know how long that will last, but I do know that even so, there's a lot of these questions that the EU, that the UK, that Canada are grappling with in terms of things like algorithmic justice, that we are just like, whoa, whoa. And Congress is not addressing those things. The questions of bias, the questions that really I think at the core are gonna affect people's lives.
We're not touching that because that's when it's gonna start getting political. So if we can't touch the things that matter, if we can't actually create these [00:27:00] guardrails that are gonna do anything, Like my take, and this is maybe not Wilson Center, is that rather than do what Congress does sometimes and create a new agency and then say we did something, alright, move on.
I would much rather see the empowerment of the existing regulatory agencies to just go ahead and extradite exercise our jurisdiction on AI systems. And a lot of them are already taking steps to do so, but they need clarity from Congress to be able to actually act. That is something Congress can do because they're not gonna be the ones holding the bag.
If it goes poorly, it makes a lot of people on the Hill are concerned. They don't wanna get it wrong and then be the ones who got it wrong. So, you know, I think Congress, we heard this last week that PE there's a lot of interest in legislating specific use cases, which is great. But I think we should empower regulatory agencies while we can still agree on doing something.
I think the NIST framework is great. The blueprint for the Bill of Rights is all great. We're talking about things in the right way. I just am not sure we can ever make that wall so,
Victoria Espinel: Right. But I would say, I mean, but regulation is gonna [00:28:00] happen. Like it's the European Commission is going to move. I think the UK is going to come up with a policy approach, like it's going to happen.
It's just, I think the question is like, is the United States kind of an active participant in that conversation? Yeah. While it's happening. And, and I would argue that they should be, and not just the United States. I think there are a number of other countries that should be too. So it's, I think it's going to happen.
Just the question is, I think whether or not, you know, how, how is the U.S. They're kind of at the table early and often to use a phrase that someone else used.
Tammy Haddad: We have to talk about election security, and I don't know if anyone here too, if, if someone wants to jump in along the way, please do. Okay, what are you gonna do about the deep fakes?
What are you gonna do? I mean, I just don't understand what regulatory body, I don't understand what group. As someone who's worked in TV news for a long time in news, I don't know what mechanism could possibly exist right away, even though we're talking about a lot of cooperation all the way around, right?
But how can something [00:29:00] some deep fake that just looks, you know, amazing can be stopped and. Who was doing anything about it? And I'm gonna make you talk to Alexandra.
Okay, go ahead.
Victoria Espinel: I'll say two things quickly then. So I think, I think, I don't think it can be stopped, right? I don't, I think that's, That's not really achievable.
So I think the question is, you're kind of going to what you said, like how do you educate people enough to be aware of what's happening and how do you give them more information? And there is an initiative called the Content Authenticity Initiative, which I also was started by one of the companies that represent, Adobe, that now has over a thousand companies that are involved in it.
And the idea is that people have a very easy way of telling when they look at something, whether or not it has been manipulated. And then they can decide what they're going to do about it. So it doesn't stop it from existing, but it does, the idea is to get people very easily accessible, understanding, um, about what has happened to that image.
Tammy Haddad: Right. And the watermarking too. Right. This is another thing. Mm-hmm. How big is that?
Victoria Espinel: Yeah. Mean watermarking in part [00:30:00] goes to try, you know, you have AI that's being trained on large data sets and are, I've heard it mostly in the context of, are those. Is what is being trained on protected by copyright? Is it creative content?
And I think there are gonna be a lot of really interesting issues around the creative community and how AI is being trained, and then what AI generates and what the copyright rules are around that.
Kellee Wicker: Yeah, I think, you know, we're seeing a lot of companies tackle the question of labeling generated content or making it distinguishable in some way.
We already have a problem of disinformation. We already have this problem. You know, the 2016 and 2020 elections were, we didn't have all these generative systems, but we still had these problems. So I think the existing effort that we started for digital literacy, or just like information literacy. Does what you're reading sound too good to be true or sound too crazy to be true?
Or is it from some weird website you've never heard of before? These things remain the most important things to talk about. You know, we can try our best to create a tool that will mark things, and [00:31:00] the people who we're gonna abide by the law anyways are gonna use them. And the people who weren't aren't.
So I think. It's all back to education.
Alexandra Veitch: If I could just introduce maybe two new topics, and, Victoria, you started to talk about one, which is a copyright issue, which is something we are thinking a lot about at Google. I'm Alexandra Veitch from Google. I lead public policy at YouTube. I would just love others' thoughts about whether the existing legal regime is sufficient, both on the, what is ingested by the systems, as you called out, but then also what the outputs are.
So we'd love thoughts there. And then I think one concept we really haven't talked at all about here is national security. You know, keeping front of mind that we are in a global race on every front against China. And so, how do we make sure, uh, AI remains a, a tool for the liberal democracies of the world and to the U.S.
Tammy Haddad: Yeah, she asked it. You go.
Victoria Espinel: The national security implications are, [00:32:00] are significant. And I, you know, I think if the administration's been focused on one aspect of AI for, you know, for the last few years that's, that's been it. I do think it, you know, I think. I keep talking about this sort of group of countries, but I think one of the reasons that's important is to create a counterweight. China has already sort of been out there with what its approach is going to be in terms of artificial intelligence, and there are a number of aspects of that that make me uncomfortable.
Tammy Haddad: Kelly? National security.
Kellee Wicker: I'm not gonna touch the copyright one because I don't know the answer but on national security, I think, one thing that I think sometimes we lose track of is that our intellectual economy and our innovation economy is very deeply entangled with Chinese academics. And especially when you talk about AI. I think some incredible number of papers are co-authored by Chinese and American researchers, and I know there's a lot of concern about that information being funneled back to China, but we have to also remember that. If we discourage those academics from coming here. [00:33:00] Because generally what happens is you do your undergrad at Xan and you come here and you go to Stanford, you go to MIT, you go to Carnegie Mellon, and then you stay for the most part.
And yes, maybe you are funneling things back, but you're here, you're creating your work here. If we shut off all these pipes and you close all these doors, you're gonna be in China. Creating the same work and our collaboration will be cut off. So these innovative things that are happening here are gonna slow down.
We're gonna lose a lot of the talent that drives a lot of what we do. Some of the people who we're gonna become amazing Americans and love democracy are gonna stay in China and they're going to be part of this iron curtain of information. And then, you know, we can talk for days about the actual, like literal military applications of this, but for me, this question of national security in China always comes down to this symbiosis between our two nations.
The importance of the interchange, the fact that there are normally 14 to 15,000 American students studying abroad in China. Right now, there's [00:34:00] 350. They have closed their doors. They are turning people away that have valid visas. We should all be very concerned about this because when China controls everything that people know about America and there's no Americans going there to learn anything about it, and no Chinese coming here to learn, that's when things get really bad.
There is no humanity between the two nations anymore. Like I'm not here about AI anymore. Yeah, but this is, I think AI is just like this, this. Collaboration ground and we gain a lot from China. We gain a lot from open doors, from integration policies that allow them to come here, and we shouldn't decide that.
Tammy Haddad: Okay. Anyone else wanna jump in on that? Go ahead, Liz. Liz Johnson. You have to stand though. Sorry. We made everyone else stand.
Liz Johnson: Liz Johnson. I'm Senator Mitt Romney's, chief of staff. Thank you for doing this. I was just curious if you could speak to what obviously legislation in our democracy is slow and it's hard and it's, I think at a minimum months away. Could you speak to what tech [00:35:00] companies are doing currently or should be doing any interim to sort of self beliefs, if you will, or self-regulate?
Victoria Espinel: Yeah. Sure, I'd be happy to, but I'll try to keep it brief. So I think, so you said tech companies, and this actually goes to the point that Paul made too.
I think there's, there's, there are companies that are creating or training or developing AI and then there are a lot of companies that are using ai. And I think to focus on either, I think we need to focus on both, right? So it's, I think there needs to be obligations that are for the developers or for the deployers to use kind of more technical word.
And then of course there are a lot of companies that are gonna be in both of those camps, depending upon what they're doing at that moment. I think when you are. When the AI is going to be used in a way that's going to be consequential or high risk, or you know, these various terms that are out there, then I think companies, whether they are using it or deploying it, need to put a, a number of stuffs in place, some of them as they're going through the development process.
But then a lot also is being used to be retesting, reevaluating, [00:36:00] have internal evaluation processes that will move quickly if they're seeing outcomes that are inappropriate or inaccurate. And so I think those are things that companies can and should be doing now.
We at BSA put out sort of a 50 page framework of very practical operational things that we think companies can and should be doing right now in order to be developing or deploying responsible ai. But a lot of that goes to testing at every point of the system and then having a very quick internal process to stop or correct where you are seeing outcomes that are inappropriate, Inaccurate, skewed in some manner.
Kellee Wicker: I have a five word answer, the NIST risk management framework. I think somebody like today, a couple of senators put out a thing that said that they wanna mandate that government vendors actually apply that retrain framework.
It's voluntary, but it's very thorough. They've done a lot of work to make sure that it's usable. There's case studies that help you apply it, you know. It's, [00:37:00] it's a very solid framework and so I think companies.
Victoria Espinel: A hundred percent echo.
Kellee Wicker: Yeah. If, if you're concerned, if you don't wanna develop your own thing or if you wanna keep building public trust, like using the established one that the government already put together, don't reinvent the wheel.
That would be my thing.
Teresa Carlson: I'll just say, I'm really interested. When this came out. This goes back to when we did something called the Federal Desktop Core configuration to lock down the microsoft operating system with them years ago, and it worked really well. They did the same thing and then they did the NIST definition of cloud, which got used around the world. We would get RFPs and NIST would be quoted. So people really look at it. So I do love your idea.
Then if you could do compliance and security controls through GSA or you know, however you're gonna do the updates, you can start quickly on making recommendations on controls. I think that would be super impactful too. Because people will use them, other enterprises use them a lot. You just, you know, especially if you like, you can bring those folks in and guide 'em toward those [00:38:00] security controls.
Victoria Espinel: And Secretary Raimondo. The head of the Department of Commerce has been a real leader on AI inside the administration, and I think what NIST has done, which is part of the Department of Commerce, is very powerful.
Tammy Haddad: Well, thank you ladies. I think they did a great job. Much appreciated. We're gonna continue to meet. If you have ideas, tell us. If you wanna co-host with us, let us know. Thank you so much. Thank you ladies. Much appreciated.
CLOSING Thank you for listening to the Washington AI Network Podcast. Be sure to subscribe and join the conversation.
The Washington AI Network is a bipartisan forum, bringing together the top leaders and industry experts to discuss the biggest opportunities and the greatest challenges around AI. The Washington AI Network Podcast is produced and recorded by Haddad Media.
# # #
We recommend upgrading to the latest Chrome, Firefox, Safari, or Edge.
Please check your internet connection and refresh the page. You might also try disabling any ad blockers.
You can visit our support center if you're having problems.