WASHINGTON AI NETWORK INTERVIEW TRANSCRIPT
Host: Tammy Haddad, Founder, Washington AI Network
Guest: Elizabeth Kelly, Director, US AI Safety Institute
Johns Hopkins University Bloomberg Center
August 12, 2024
WASHINGTON, DC
Cybele Bjorklund (00:27:00): Good evening. My name is Cybele Bjorklund. I'm the executive director of the Hopkins Bloomberg Center and the Vice President for Johns Hopkins University and Medicine. We're thrilled to welcome you tonight to this Washington AI Network event hosting an important and timely conversation between the inimitable Tammy Haddad and Elizabeth Kelly, [00:27:30] the director of the US Artificial Intelligence Safety Institute. At Johns Hopkins, our research faculty and subject matter experts have led the way on AI, machine learning, and related topics for decades. As Ed Schlesinger, our dean at the engineering school, recently reminded me, AI is just math. Right here at the Hopkins Bloomberg Center, we convene people across disciplines and differences to solve complex problems, and our AI programming runs broad and deep. Earlier this summer, we [00:28:00] launched a year-long partnership with noted tech journalist Kara Swisher and Vox Media to convene top minds on AI, starting with a conversation between Kara and OpenAI CTO Mira Murati. Earlier this year, our bipartisan bridge series kicked off with a critical AI-focused conversation between Senators Mark Warner of Virginia and Todd Young of Indiana. And just over the past few weeks, we've offered short courses on AI to congressional staff and journalists. So whether you're looking for an expert to [00:28:30] guide policy discussions or just wanting to come together with colleagues to hear about interesting AI topics, there's a hub of activity right here at the Bloomberg Center. We hope to see you again soon. Let's get to it. Please join me in welcoming Tammy and Elizabeth.
Tammy Haddad (00:28:53): Welcome everyone. This is the Washington AI Network. Our first time here. How about Johns Hopkins University Bloomberg Center? Can you believe this place? Is this the first time for how many people? That's what I like to see. Welcome. It's so important that you're here.
Those that have been following politics or covering politics, everyone's talking about unprecedented times. Well, if you ask me, the unprecedented times is because of AI and the reason this night and this event is so important is because we have the person who is actually driving the ship that's going to bring us into shore — our special guest here, Elizabeth Kelly. Elizabeth, thank you so much. (applause)
Okay, so you are basically working between man and machine, trying to figure all this out for us. We are at Johns Hopkins University, one of the great universities of all time, and we give out grades. At 270 days, you guys have met every single deadline, every single guideline, right? So let's give her an A plus. What do you guys think? (Applause) You can take that back to the White House and say you got an A plus from Johns Hopkins University. And you've just released all these new rules. Can you tell us how you came to these decisions?
Elizabeth Kelly (00:30:22): Absolutely. So Tammy is talking about my old job, where I was at the White House helping lead drafting of [00:30:30] the AI executive order, and we just hit the 270-day mark where, on every single deliverable under the executive order, no one in government has missed a deadline. So I think the entirety of the administration gets an A plus here.
Tammy Haddad (00:30:45): What do you guys think? (Applause)
Elizabeth Kelly (00:30:49): But we as the US AI Safety Institute also completed our homework, or at least the first tranche of it. Just to take a little bit of a step back: we don't explicitly [00:31:00] appear in the executive order, but we are very much a creature of it. We were announced by Vice President Harris in London in November of last year. We are tasked with really advancing the science of AI safety, with identifying and mitigating the risks of AI so that we're fully able to realize all of the tremendous potential and innovation. And we've got three primary tracks of work. We're focused on doing testing and evaluation of [00:31:30] highly capable AI systems, putting out guidance that will help shape the ecosystem, and really building a global ecosystem of AI safety. A key part of that is what we just did, which is issuing an initial public draft of guidance on how to mitigate the risk of misuse of dual-use foundation models, or to put it in English, highly capable foundation models.
(00:31:56): My team has been heads down on putting out this guidance, which really provides guidance [00:32:00] for developers from the start through to the end of the process, from when you're training a model to when it's in deployment and you're looking at incidents: what are the best practices and steps that you can take to assess the risks that could arise and really put measurements around those risks? How do you create frameworks around incident reporting? How do you actually put out the transparency that's necessary so that we are able to create an ecosystem of safety, where you're seeing in-house evaluators, third-party [00:32:30] researchers, everyone really contributing to driving this forward? Now, this isn't final. We've got public comment open until September 9th and are really excited for all the feedback we have been getting and are hoping to receive from the community as we work to finalize it towards the end of the year.
Tammy Haddad (00:32:47): Well, that's not very much work. What do you guys think? (Laughter) Right. Let's talk about testing. I don't know how you test while all of this technology [00:33:00] is being developed, how do you look at that?
Elizabeth Kelly (00:33:04): Well, let me first say, I have an amazing team who is doing this. So over the last six months, I've really been focused on getting complete rock stars in place. I may be biased, because I hired them, but we've got leading computer scientists. One person on my team actually invented reinforcement learning from human feedback, which is a key innovation in this space. I've got ethicists and anthropologists who are helping take an interdisciplinary approach to how [00:33:30] we look at this technology, because we know that it really requires an all-hands-on-deck view. And as of today, I have Conrad Stosz, who joins us as our head of policy.
Tammy Haddad (00:33:40): Conrad, Conrad stand up. And Conrad, didn't you help write originally the executive order? Were you involved with that?
Conrad: Yeah, and a couple others.
Elizabeth Kelly (00:33:57): I first got to know Conrad when we were drafting the EO together. [00:34:00] Conrad was the mastermind behind all of the sections about how does government use AI to better serve the American people. And for the real wonks among you, he also was the primary drafter of the OMB memo, or the AI management memo, which talks about what protections we need to put in place as the government is looking at higher-risk use cases for AI. And now we're thrilled to have him on board as our head of policy, and he [00:34:30] has been working very closely with the rest of the team to really create the foundation for us to start doing testing prior to and after deployment of highly capable models. This isn't something that is new. We've seen a lot of testing happen inside the companies. We've seen third-party researchers and companies do testing of these models across a range of risks. We've seen our colleagues in the UK AI Safety Institute, who I see in the crowd, doing this as well. But really, [00:35:00] at the US AI Safety Institute, we feel like we have a unique role because these are American companies. They're excited to work with us, and we are deep in negotiations to have agreements in place soon that will enable us, starting this fall, to do that important testing.
Tammy Haddad (00:35:17): When in the fall?
Elizabeth Kelly (00:35:22): The fall is only a couple of months, so you've got a limited window there.
Tammy Haddad (00:35:27): Okay, that's great. And then you've got Apple to be part of the voluntary commitments.
Elizabeth Kelly (00:35:32): I can't take any credit for that. That was my colleagues at the White House, who have received voluntary commitments from, I think, 16 now. Is that what we're up to? Yes. I leave the building, I lose track. Companies who have made really important commitments to doing testing of their systems, putting in place risk mitigations, making sure that they're mitigating potential impacts around bias and discrimination. We got most of those commitments last [00:36:00] July, a couple more last September. And now Apple is really pushing forward; they joined those commitments just in July as well.
Tammy Haddad (00:36:07): So OpenAI announced SearchGPT. So I just want to ask this as an example. Does Sam Altman call you and say, hey, we have a new thing, it's called SearchGPT, and we're going to start testing, or what do we need to do to test? How do those relationships work?
Elizabeth Kelly (00:36:26): So we are in the process of putting in place agreements with [00:36:30] all of the companies, OpenAI, Anthropic, Meta, all the leading labs that you might think of to enable us to start doing testing of their systems prior to deployment. It's still early days. There's not much that I can say, but I think this is a really important part of the approach that we're taking at the AI Safety Institute where we have testing, guidance, research, all informing each other in a really virtuous cycle that helps advance the science of AI safety so we can start to answer the really knotty problems and help create trust in the American people that these systems are safe, secure, and trustworthy.
Tammy Haddad (00:37:05): I'm really focused on this trust issue and that people feel like AI is for everyone. It's not for the elite or those who have a particular product. And I did notice that that's part of your responsibility. So how do you do that? Are you going to go on tour to all the universities? Don't you think she should go on tour, the college tour (applause), to explain [00:37:30] how AI is for everyone and how you can use it? I don't know about that, but I mean, how are you going to make people think that it's for everyone?
Elizabeth Kelly (00:37:38): So I think you hit the nail on the head, which is that we really view our job as enabling innovation, right? Safety enables trust, which enables adoption, which enables innovation. And driving that innovation is really our north star at the AI Safety Institute. Under Secretary Raimondo's leadership and under the president's leadership, I think the sky is the limit in terms of all the beneficial use cases that we [00:38:00] really want to see in terms of drug discovery and development, carbon capture and storage, individualized education. The list goes on. But part of that is making sure, and I think the executive order did a really good job of emphasizing this, that this is part of an open ecosystem where academics and entrepreneurs are all able to explore and develop, that we're getting the education that we need in our classrooms, that we're training up workers. Those were all key pillars of the EO (executive order).
Tammy Haddad (00:38:27): Oh my God, that's so much to cover. Let me ask you about dual use [00:38:30] foundation models. The guidelines recommend that developers disclose their risk management practices and any misuse cases for transparency and accountability. So is Mark Zuckerberg calling you and saying, Oopsie, this one didn't work. Who are you getting calls from? I'm trying here guys. (Laughter)
Elizabeth Kelly (00:38:49): So I think what's important to note is that this guidance is about transparency for the public, right? We really view ourselves as cultivating an ecosystem of safety, and we are a part of that, but [00:39:00] it really is incumbent on us to bring in additional folks, like third parties, like researchers, like the in-house evaluators, who are all helping hold companies to account and ensure that these systems are safe, secure, and trustworthy. And that's why the transparency provisions in the guidance are so important. This is somewhere I think we actually went beyond, in the guidance, what you've seen from most of the major companies. And I think it's because that is the best way to really inspire [00:39:30] trust.
Tammy Haddad (00:39:31): So is it, say, on the company websites, or are you talking about transparency in the sense that they're telling the government?
Elizabeth Kelly (00:39:40): So I would distinguish between two different things. So I'm going to wonk out here for a second. For those of you who have made it through the EO, you might've seen this.
Tammy Haddad (00:39:50): How many have made it through the EO? Look at that. Look at this crowd. Such smart people.
Elizabeth Kelly (00:39:54): This is a delightfully wonky crowd. I love it.
Tammy Haddad (00:39:57): The rest of you, you guys need to do some homework tonight.
Elizabeth Kelly (00:40:00): [00:40:00] You're really making Conrad and me feel good, so thank you for that. For those of you who perhaps have better things to do with your weekends, there's an important provision in the executive order which requires companies that are developing the most advanced models to tell the government about the existence of those models and to report the results of any testing they do. Now, this doesn't come to the U.S. AI Safety Institute; this comes to the Bureau [00:40:30] of Industry and Security, our colleagues at the Department of Commerce. But really it's supposed to enable the government to keep abreast of what's coming and what the risks are, so we can all work together. That's the reporting to the government. What we're talking about in the guidance is how do we ensure that there is, I don't want to say reporting, but transparency with the public. So it's not just the US government that has this knowledge; the companies are helping foster this broader ecosystem of trust. And that's why we think it's such [00:41:00] a key component of the guidance.
Tammy Haddad (00:41:01): I want to turn to synthetic content, and NIST also submitted a report to the White House outlining ways to reduce the risks of synthetic content generated by AI. The White House has said “image-based sexual abuse has emerged as one of the fastest growing harmful uses of AI to date.” Obviously parents, and I know you are a parent, are really concerned about this. What are the recommendations, if you can talk about that, for [00:41:30] combating image-based sexual abuse generated by AI?
Elizabeth Kelly (00:41:34): Yeah, so just to take a quick step back, there were a number of deliverables that NIST issued at the end of July, as called for by the executive order. One was a generative AI profile of the AI Risk Management Framework. Another was a secure software development framework, a new international engagement plan, lots of things. But one of the key ones was really this landscape overview of the tools [00:42:00] and techniques to help detect synthetic content and to authenticate content. And there are many reasons that this is so important, but part of it is because these are going to be really key tools to combat the risks of CSAM and NCII, which to be fair are longstanding harms, but have really been exacerbated because of the greater ease in creating and disseminating this content that AI creates. So we'll be finalizing that report towards the end of the year and [00:42:30] then putting out additional guidance, as called for by the EO, on what the best practices are to help mitigate that.
(00:42:35): So stay tuned. This is an area where we know that the technology is evolving so quickly. What is the best practice for watermarking or for detecting synthetic content? What's best practice now might not be best practice in six months. And that's why we're so eager to get out the final version of this landscape report, to issue follow-on guidance on what best practices are, and why it's something that [00:43:00] we called out in our guidance on dual-use foundation models. I think it's really important to note that that guidance is not only focused on emerging risks, for example, the possibility the model could be misused to perpetrate a more dangerous offensive cyberattack or to help with the development of a biological weapon, but it's also focused on the here-and-now risks, the things that are existential for people on a daily basis. And I think one of those key risks is [00:43:30] the risk of increased proliferation of child sexual abuse material and non-consensual intimate imagery. And that's part of why we called it out there as well.
Tammy Haddad (00:43:38): What about, let's go to the biological risks and that side of all of this, that many people are worried about every day and cities and states. We've got a lot of states with legislation being passed, bragging about this or that or worried about this or that. How do you cope with that whole piece of it? Because we're really talking about infrastructure [00:44:00] and how it affects people at their home.
Elizabeth Kelly (00:44:03): So we really view our job as taking an approach that lets us look at the whole picture, monitoring what are the risks now? What might the risks be in the future, and being proactive and identifying risks that may not be here today so that we're able to mitigate them as they come down the pipe.
Tammy Haddad (00:44:21): So wait, so you have to get rid of the old risks, the current risks, and then predict the new risks. This is not a good job, Elizabeth. Honest to God, you and Conrad, [00:44:30] we need to chat. That's a lot.
Elizabeth Kelly (00:44:31): More of my team is here too. Please tell 'em.
Tammy Haddad (00:44:33): How to quit? Where's the rest of the team? You can... In the back? Okay.
Elizabeth Kelly (00:44:37): A Johns Hopkins student, clearly I'm recruiting the best.
Tammy Haddad (00:44:39): Oh yes, some Johns Hopkins students. Yes.
Elizabeth Kelly (00:44:44): But I think this really comes back to why it's so important to take a whole-of-government approach here and why we are just one part of this broader ecosystem. We are working incredibly closely with our colleagues across the government to better understand [00:45:00] the risks, to produce threat models, to take advantage of their tremendous expertise. And I think this is an area that really speaks to why it is so important for the US AI Safety Institute and other safety institutes across the globe to have this capacity, because we bring to bear not only the technical expertise from the folks that we've hired out of top PhD programs, the labs, the incredible scientists at NIST, but also all of that expertise across the defense community, the health community, and are able [00:45:30] to really bring that to bear in a way that I think is incredibly important.
Tammy Haddad (00:45:34): How do you deal with, say, Microsoft being a partner of OpenAI but also a competitor with OpenAI, right? That's now Microsoft AI. You could really talk about most of the companies, but to me that's the most glaring example. So I mean, you're dealing with the same company, but sort of two sets of people. How do you do that? Who do you talk to first is really [00:46:00] what I want to know. Whose call do you return first? We really are talking about all these companies, or many people from these companies working on the frontier models, being interconnected, intertwined in so many ways. So how does the AI Safety Institute deal with that?
Elizabeth Kelly (00:46:18): I think this is a classic example of just how everyone recognizes the incredible potential of this technology and why companies are approaching this from so many different angles through investments in different companies, [00:46:30] through their own innovations. And we are working with all of them. That's why we have the AI Safety Institute Consortium, which brings together more than 290 members from industry, civil society, academia in order to really leverage that incredible diversity of talent to inform the efforts that we have ongoing at the AI Safety Institute at NIST and across the broader government.
Tammy Haddad (00:46:53): Well, in many ways you're the partner of the industry, but you're also the watchdog, right? So you have different hats for different days?
Elizabeth Kelly (00:47:00): [00:47:00] I wouldn't quite put it that way. I'd say that there is a shared interest amongst all of us in making sure that these systems are safe, secure, and trustworthy. We all know that were something to go wrong, it would really impede innovation, and that's why it's so important that we get this right and why we all have a role to play here in making sure that the systems are safe and we're ultimately enabling that innovation.
Tammy Haddad (00:47:26): Alright, let's talk about global cooperation. By the way, we're going to take a few questions. [00:47:30] We have a little time left if you want to think about your questions. In April, Secretary Raimondo and our UK counterpart announced an agreement to work together on testing. Where does that stand?
Elizabeth Kelly (00:47:41): Well, I believe some of my colleagues from the British Embassy are here today. So I can tell you it's going very well indeed. I think that we view it as incredibly important to work with other countries to make sure that we're learning from each other's efforts, standing on each other's shoulders, and not duplicating [00:48:00] or having conflicting efforts.
Tammy Haddad (00:48:03): The other piece was there's supposed to be a joint testing exercise. Has that happened?
Elizabeth Kelly (00:48:07): So we are working very closely with our UK counterparts to learn from the testing that they have done. I believe it was publicly reported that there were some tests the UK did on Anthropic's model, which were then shared with the US, and we're in close collaboration, and I think we'll have more progress to show in the coming months.
Tammy Haddad (00:48:27): Okay, we'll get that later, folks. Alright. [00:48:30] What about the Safety Institutes? Because you're not alone, of course. We just talked about the UK. How are you guys all working together? Are you at a spa, the White Lotus spa somewhere in Italy, later this month, just like the Fed meets? Well, they meet in Jackson Hole every year. They do, yes. You guys need something like that. I would go. Who would go? We'll come with you.
Elizabeth Kelly (00:48:53): Considering how hard my team works, they certainly deserve that, but I think they're going to have to settle for San Francisco. [00:49:00] So one of the things that we announced, or that the secretary announced, on the heels of the Seoul Summit is that we are launching a network of AI safety institutes. We're seeing AI safety institutes pop up across the globe, and we think it's incredibly important to build on the leader- and minister-level commitments that have been made, first at the Bletchley Park Summit, then at the Seoul Summit, and really begin to work together at a technical level so that we're learning from the research [00:49:30] and best practices of different countries, the work the Singaporeans are doing on content authentication, the work the Canadians are doing on risk mitigation, all the evaluations the UK has built up, and so that we're not only able to learn from each other's efforts, but also move towards more aligned testing and evaluation and common benchmarks, because that's really how we're going to enable the innovation that we all want to see.
Tammy Haddad (00:49:53): And when's that going to happen?
Elizabeth Kelly (00:49:54): Well, the conversations are ongoing now, but I think that part of the formal launch [00:50:00] is going to be a convening later this year in San Francisco that brings together all of those safety institutes and really their technical experts.
Tammy Haddad (00:50:08): Well, that's what I was going to ask you. Are these technical folks, political folks? Who are you bringing together?
Elizabeth Kelly (00:50:13): We're really focused on bringing the technical folks together. We think there have been really great fora to bring together the political folks, and we've seen incredibly important commitments, first at Bletchley and then at Seoul, through the G7, the OECD, the list goes on. But we really want to enable the technical experts to [00:50:30] sit together in rooms and have the detailed conversations that'll drive the action, drive the deliverables we all need to see.
Tammy Haddad (00:50:36): And what about the State Department? Are they involved?
Elizabeth Kelly (00:50:38): The State Department is very much involved. They've been an incredibly close partner on all of our efforts here because we really view it as essential that America leads not only on AI innovation, but also leads on AI safety and the State Department agrees with that.
Tammy Haddad (00:50:52): And what other countries, if you'll call them out, what are they committing to? Anything you can tell us?
Elizabeth Kelly (00:50:57): I don't want to get ahead of our November news release, but [00:51:00] I do think that you saw a number of countries in Seoul commit to really working together at a deep level. The Seoul 11, I think, is a good instance of that. And we've had a number of partnerships already that we've spoken about publicly. Obviously there's the MOU with the UK, we have a dialogue with the European Union that was announced through the Trade and Technology Council and that we kickstarted this summer, exchanges with Singapore, with Kenya, with other countries.
Tammy Haddad (00:51:27): What about China?
Elizabeth Kelly (00:51:30): [00:51:30] Just
Tammy Haddad (00:51:31): I slipped that in…(Laughter)
Elizabeth Kelly (00:51:32): Yeah, I think we are focused on working with countries that have safety institutes that are really trying to advance this work. But I think we had an incredibly productive dialogue with China in Geneva. A number of us were there to exchange learnings about AI safety…
Tammy Haddad (00:51:54): Are they focused on it, talking to other countries about what they're doing?
Elizabeth Kelly (00:51:59): I would defer to my State Department [00:52:00] colleagues, but what I would say is that we want to engage with a number of countries on AI safety.
Tammy Haddad (00:52:06): Such a diplomat. (laughter) What about the Paris Action Summit? So Anne Bouverot, who I hope others have met, I was honored to interview her when she was here in town. She is President Macron's new envoy for AI safety. And I know you met and they're putting together the summit. What can you [00:52:30] tell us about the US role and the role of the Safety Institute?
Elizabeth Kelly (00:52:33): Sure. So I got to listen to your interview with Anne Bouverot, which I loved. Yes. And there were two things that she said that really stood out to me, and also things that she said in private conversations as well. One is the role of safety in enabling innovation, and that's very much sort of the spirit of the conference, but there is a whole track around safety because these things are really part and parcel. The other thing she [00:53:00] said was that they intentionally are calling this the AI Action Summit because there have been all of these important political commitments to advancing AI safety, to advancing AI innovation, but we really need to sort of get down to brass tacks, do the work, take action. And I think that the work that we're doing in the AI Safety Institute Network, the convening that we'll be having at a technical level in November, really feeds into this drive to action. We're excited to help produce deliverables ahead of Paris.
Tammy Haddad (00:53:27): It's interesting, because [00:53:30] I spoke to Ben Brake from Germany, the digital director, before Anne. Thank you, Julian, who I think is here. And in both those conversations with Anne and Ben, they both talked about innovation. When they look to the US, when they talk about AI, they so desperately want to have the kind of innovation that we have in this country. So we've talked a lot about safety, but are you also talking about innovation with those safety protocols?
Elizabeth Kelly (00:54:00): [00:54:00] Absolutely. As Secretary Raimondo has said many times, we really view our role at the Commerce Department as driving and leading US innovation, and safety is a key part of that. Safety enables trust, enables adoption, enables innovation. And that's why we all get up in the morning and why we think our work is so important. And we are incredibly lucky to have so many outstanding researchers and companies who are really on the forefront of this incredible technology here in the US, [00:54:30] and we think it's imperative that we continue to maintain that lead.
Tammy Haddad (00:54:34): And you have an office in San Francisco?
Elizabeth Kelly (00:54:36): I do, yes.
Tammy Haddad (00:54:39): Conrad, you're not moving there.
Elizabeth Kelly (00:54:40): He's not.
Tammy Haddad (00:54:42): You're still working on that EO.
Elizabeth Kelly (00:54:43): About half of my team is actually out of that San Francisco office, and it was, I remember my first conversation...
Tammy Haddad (00:54:55): Is that surprising to the industry?
Elizabeth Kelly (00:54:57): A little bit. I think they think that we're all sort of stuck [00:55:00] in our government offices and
Tammy Haddad (00:55:03): In Maryland
Tammy Haddad (00:55:05): After the Washington Post story about the mold, they think you guys are, you got the mold there. Sorry. I think they got rid of that.
Elizabeth Kelly (00:55:10): We've got some folks in Gaithersburg, we've got some folks at the main Commerce headquarters, and we've got a lot of folks in San Francisco, which I think is really important for a number of reasons. One, it enables us to actually work closely with the companies who are headquartered there, but also the incredible ecosystem of academics, of civil society, [00:55:30] all the technical experts who are out there, and make sure that we are in touch with how quickly the technology is moving and able to keep pace. Two, it's been really incredible in terms of talent attraction and retention. So much of the talent is out there and wants to stay out there. And so it's been a real differentiator for us.
Tammy Haddad (00:55:51): They don't want to be with us in DC?
Elizabeth Kelly (00:55:53): Well, we do make them fly out here on a relatively regular basis. We're a traditional startup in that we like to bring our team together [00:56:00] for all-team retreats at a pretty good cadence. But I think it's been a real differentiator. I think it's something that we think is really important. It's actually one of the first things I told the Secretary I wanted to do before I even had the job.
Tammy Haddad (00:56:12): Because you were out there, right? You should give a little of your background. For those that don't know, this is not your first rodeo.
Elizabeth Kelly (00:56:19): No, I am a startup junkie in some ways, I guess you could say. So before I joined the Biden administration, I helped start, then scale, [00:56:30] and ultimately sell a startup. We were not actually based in SF, which was unusual. But this job really brings together a lot of that sort of startup ethos: how do you create something new, and how do you navigate the incredible complexities of government and the incredible opportunities of government? How do you build big-tent coalitions? How do you nurture the ecosystem? How do you get things done? And it's been really fun to see, I think, the incredible sort of energy of our team, [00:57:00] because we are creating something entirely new inside the federal government. And we have people who have never worked in government before. So little things like being able to offer them Mac computers are incredibly important for retention and talent attraction, I'll say.
Tammy Haddad (00:57:15): I spoke to someone today actually in the government who said, I'm not on a Mac anymore and it just breaks my heart, not someone from you guys, just to be clear. So you're bringing people in, you're giving them this opportunity, you're building this whole [00:57:30] system. What's your goal for staffing? And I mean, are you going to have offices around the world? What do you think you need? I mean, you're talking about a lot of different things to get done. Do you have a dream amount of people working for you or systems?
Elizabeth Kelly (00:57:46): I'm really heads down and focused on the next six to 12 months and what it is that we can accomplish and deliver for the American people. I think we've already gotten a lot done in the last six months, as we think about the team we brought on board, the opening of the San Francisco [00:58:00] office, the announcement and cultivation of this network, the dual-use foundation model guidance, the progress we've made with the companies, but we've got so much more to do over the next six to 12 months.
Tammy Haddad (00:58:11): Well, I wanted to ask you about human rights, because that's one of the issues, not just in the EO, but in everything you do. I mean, how are you looking at all of these issues? I know you're talking to everyone, but how are you making sure that people are protected in the development [00:58:30] and deployment of AI?
Elizabeth Kelly (00:58:32): I think this really comes back to why it's so important to have a big-tent approach and make sure that there are diverse voices informing all the work that we do. And we're doing that through the AI Safety Institute Consortium, the AISIC, and the incredible diversity of folks from civil society and labor unions who are part of that. We're doing that through regular engagement and outreach; it was something that we were able to do in the executive order and in the OMB memo, and which we continue to prioritize. [00:59:00] We're doing that in the people that we're bringing on board: my head of international engagement, Mark Latonero, has really been focused on human rights throughout his career and brings that lens to every single conversation we have. And I think we're also doing that in how we think about this network. One of the things that we've said is very important to us is making sure that there are voices from developing countries that are part of the network or at the convening, and that we're really helping to [00:59:30] build capacity so that everyone can enjoy the benefits of AI.
Tammy Haddad (00:59:34): Okay, so if you have a question, we're going to have time for a few questions. You want to step over to Virginia? Everyone knows the great Virginia Coyne. If you would say your name, please, and your affiliation.
Sumi Somaskanda (00:59:49): Sumi Somaskanda with BBC News. Elizabeth, Tammy, thank you so much for this terrific discussion. I have two questions, if I may. The first one is, when you were talking about guidance for creating an ecosystem [01:00:00] of trust, we in the news are dealing with deepfake synthetic content. And of course today the question came up of these images of a rally in Detroit: are they real, are they not? And what it does is sow a lot of confusion at a time where you obviously need facts and to be able to talk to people about what is actually real, and people are themselves looking for what is real. So what guidance have you released, or will you release, in the political sphere? And then secondly, when you talk about transparency with the companies reporting [01:00:30] to you, all the big players in the AI field, does that include where they get their data from?
Elizabeth Kelly (01:00:37): So on your first point, I should give the usual caveat that as a federal employee, I cannot talk about candidates or campaigns, but what I can say is that our job at the AI Safety Institute is really to help advance the science of AI safety and answer a lot of the open questions. And that's why we've really prioritized a lot of the work around best practices for detecting [01:01:00] synthetic content and authenticating content. I referenced the guidance that we released at the end of April, and it's really more of a draft, sort of a landscape summary of the best practices in this space, and we are finalizing that in the coming months. But we'll also be issuing, towards the end of the year, as called for by the executive order, guidance on what the best practices are here, which we hope will provide some clarity for the community in this very quickly evolving space. And of course, it's [01:01:30] part of what we're thinking about as we think about our dual-use foundation model guidance, which we'll be finalizing towards the end of the year as well.
Tammy Haddad (01:01:36): What about Congress? What's your relationship? We have some staff here today, but you're navigating, I don't mean politics, I'm talking about on the policy side, how do you work with Congress?
Elizabeth Kelly (01:01:49): So we've been able to accomplish a lot in our six months at the AI Safety Institute. But there are sort of two big things that'll really enable us to, [01:02:00] I guess, go to the moon and back, get all of our ambitions fulfilled. And one of those is formally authorizing the US AI Safety Institute so that we are a lasting, enduring institution. And there we saw an exciting...
Tammy Haddad (01:02:13): That was my last question. There might be a change of administration. Alright,
Elizabeth Kelly (01:02:16): Well, what I'll say there is we saw an exciting development, I guess just last week, all the weeks run together, with legislation by Senators Young and Cantwell being passed out of committee, which would formally authorize the [01:02:30] US AI Safety Institute, which we think is a really meaningful step forward to really enable us to continue this work in a longstanding way. And of course, this is resource intensive. We have been able to achieve a lot with the money we've received in the FY24 appropriation, and we'll be receiving additional money through the Technology Modernization Fund, which recognizes the important role we have to play in facilitating the government's use of AI. But obviously the president's budget requested far, far more for the AI Safety Institute, [01:03:00] and there's a lot more we can do to take on a broader range of harms and to do more of the scientific discovery if we have those resources.
Tammy Haddad (01:03:10): That's great. Go ahead.
Chuck Ng (01:03:12): Hi, my name is Chuck, and actually I also want to thank Sally O'Brien, of course part of the hosts at Johns Hopkins, and that's the reason why I'm here, by the way, thanks to her, who said, hey, we have some exciting people. And thank you actually, Elizabeth, I love what you said about [01:03:30] getting things done. That's really important as an entrepreneur, as an investor myself. And thank you, Tammy, you asked some really awesome questions and put them on the spot. Anyways, my question is this, actually, my question is really about the short term and the long term. In the short term, how do we ensure that AI systems remain aligned with human values, even as they become more autonomous and capable? That's the short-term aspect. The long-term question is, [01:04:00] what are the potential long-term risks of AI? How should we prepare for scenarios where one day AI systems potentially surpass human intelligence? Right, thank you.
Elizabeth Kelly (01:04:13): So at the AI Safety Institute, we are really taking a broad view of safety. We are very much focused on the here-and-now risks, the risks of privacy, bias, discrimination, the creation of synthetic content and CSAM and NCII, but also monitoring potential and emerging risks. I talked about risks to public safety and national security, like enabling cyberattacks or the development of biological weapons. But really we view our role as working with the entire community to monitor, to keep pace, to understand what's happening. This is moving so quickly that I think it's really hard to say this is going to go one way or the other. It's our job as the US government to just keep up with what's happening so that we can respond appropriately.
Tammy Haddad (01:04:59): How do you keep up with all of it?
Elizabeth Kelly (01:05:00): Well, I hire really, really smart people. No, I think this goes back to why the ecosystem's so important: why it's so important to work with all of our colleagues across the federal government (I know that there's folks from DOD and NSD and lots of other agencies here), why it's important to work with our allies and partners across the globe, and why it's so important to leverage all the incredible members we have in the consortium and to really be embedded in what's happening. I think the San Francisco office is really helpful here.
Tammy Haddad (01:05:29): Okay.
Mohar Chatterjee (01:05:30): [01:05:30] Hey, it's Mohar Chatterjee, Politico, AI special projects reporter. Tammy, first I wanted to say it's probably a credit to the event that so many people from the press want to get their questions out. It's a very good, candid conversation, and I'm really glad, Elizabeth, that we get to have you. My question is basically: you brought up the notion of focusing on the next six to 12 months, and Tammy, you brought up the notion of there being a presidential turnover, which is happening in the same timeframe. [01:06:00] And then also there is the notion of there being an AI safety network, a network of AI safety institutes around the world, and the Bureau of Industry and Security collecting information from industry. So my question to you essentially is, how are you setting up the information transfer infrastructure within the Commerce Department to withstand a change of presidential administration? Because it sounds like, oh, and the last piece of course is the fact that the Trump administration really wants to repeal the EO, right? [01:06:30] So I'd love to know how you are building this information reporting apparatus within the US government, within the Commerce Department, to understand how the most powerful AI systems in the world are developing.
Elizabeth Kelly (01:06:47): So usual caveat that I can't comment on candidates or campaigns, but I do think that there are parts of AI safety that are truly bipartisan. I think that all of us want to make sure that we are not seeing AI [01:07:00] pose risks to public safety or to national security, and that's why you've seen bipartisan legislation co-sponsored by Senators Young and Cantwell that would formally authorize the US AI Safety Institute, and why we're seeing these in countries across the globe with many different political leaders. I also think this is part of why it's so important that it's not just reporting to the government. It's about having a [01:07:30] transparent ecosystem. And that's part of why our guidance calls on companies to be releasing things so that in-house evaluators, third-party researchers, everyone else is really able to help create that ecosystem of safety.
Tammy Haddad (01:07:42): Okay, go ahead.
Naomi Moskowitz (01:07:43): Hi, I'm Naomi. I'm here with the Noble Reach scholars, so thank you so much for having us.
Tammy Haddad (01:07:52): I should explain the Noble Reach scholars. These are brand new US government employees. They're here [01:08:00] for a month to get to know the city. The Noble Reach Foundation is like Teach for America for tech scholars, so let's hear it for them, and wisdom and good luck. Please go ahead.
Naomi Moskowitz (01:08:14): My question is, in terms of actions and regulations for AI safety reaching the public and average engineers, do you foresee there ever being something like approvals or ratings? For example, a technologist sees on an AI package, this model is certified unbiased [01:08:30] by the US government, so now I know I can use it in the app I'm making, or watermarks on AI-generated content, kind of like the FDA. If not, how do you see these regulations turning into concrete policy?
Elizabeth Kelly (01:08:45): First off, let me say how cool it is that all of you guys are here. I think one of the best parts, (01:08:54): one of the coolest things to come out of the executive order, and I think Conrad would agree with me, is really the AI talent [01:09:00] surge and getting people from so many walks of life to come into government, to offer their expertise, to help make sure we're able to lead on this. I think the Noble Reach scholars are a fantastic example of that, and it's very cool to be with all of you. We are not a regulator; we are a voluntary organization. We work with companies, with civil society, on this research and testing as well as on creating this ecosystem of trust. It's hard to know where things are going to go in 6, 12, 18 months, [01:09:30] but I think there is a lot that we can do from that sort of voluntary posture. And you're seeing a lot of different initiatives pop up, for example, the work being done at Stanford on a transparency index that I think is going to shape this field going forward.
Tammy Haddad (01:09:44): Can you tell us more about that?
Elizabeth Kelly (01:09:45): Well, I would defer to some of the Stanford folks in the room, but a number of the folks at HAI, human-centered artificial intelligence, I believe, have been working to get companies to release more data about their training [01:10:00] of the models, their testing, and then ranking them based off both their transparency and what they're finding. And I think that's an example of how sunshine is a very important component here and I'm excited to see more of it.
Tammy Haddad (01:10:15): When did you first hear about ChatGPT?
Elizabeth Kelly (01:10:23): I think all of us were a little bit surprised by the huge evolution that we saw [01:10:30] less than two years ago now. I don't think anyone would've predicted it would be the fastest-growing consumer product of all time. I will say that I think the Biden administration was really focused on AI even before ChatGPT appeared on the screen.
Tammy Haddad (01:10:43): And why was that?
Elizabeth Kelly (01:10:44): Because we've been seeing AI in the more traditional sense for many, many years. It's been used in lending decisions, employment decisions, a number of things. And my colleagues at the Office of Science and Technology Policy really took a leadership role, along with a lot of the vice president's office, [01:11:00] in helping draft the AI Bill of Rights, which outlines the foundational protections that need to be in place as we're thinking about AI systems. And my colleagues at NIST, Elham Tabassi, for example, led on the AI Risk Management Framework, (applause) yeah, and that predates a lot of this. I think it's moved incredibly quickly, but there have been really smart people in the administration who have been working on this for a long time.
Tammy Haddad (01:11:28): But you were surprised, right?
Elizabeth Kelly (01:11:30): [01:11:30] I think everyone was surprised by just the huge leaps and bounds we've seen over the last two years, and I think it's humbling to think about the evolutions we can see over the next two.
Tammy Haddad (01:11:41): It's funny, Mira Murati was actually on this stage with Kara Swisher, and she told this funny story that people at OpenAI didn't even know about it. Exactly. So what's the next big thing as we leave here? What's the next big thing that we should all be looking towards? You don't have to name a company, or anything you've seen [01:12:00] that completely surprised you, or it surprises you every day.
Elizabeth Kelly (01:12:04): There are new things every day. I mean, I think we're seeing huge improvements in a lot of the multimodal systems as we're thinking about voice, image, video. That's a very big deal. I think we're also seeing a lot of improvements in the explainability of these systems, mechanistic interpretability, why a model actually produces what it does, which I think is really exciting as we think about the safety landscape in the coming years. And I expect just to continue to be surprised, [01:12:30] as is everyone.
Tammy Haddad (01:12:31): I've got to say, there's a story today on the Drudge Report, I don't know how many people read the Drudge Report, about AI Olympics. And I have to say I'm kind of excited about that. It's going to be in Los Angeles in four years. I don't know what that means for Tom Cruise. Maybe it's like an army of Tom Cruises coming down on the Hollywood sign… something like that.
Elizabeth Kelly (01:12:50): As long as I get to go with you, Tammy, it'll be fine. Let's go.
Tammy Haddad (01:12:52): Elizabeth, I can't thank you enough for today. I've got to say the work you've been doing is remarkable. [01:13:00] It's an honor to sit here with you. We're thrilled to have you here at the Johns Hopkins University Bloomberg Center. That's what this place is all about. Let's hear it for Elizabeth.
Tammy Haddad (01:13:12): And thank you all for coming. It's a remarkable place. We have to thank Mike Bloomberg too, who is behind all of this, the Bloomberg Center. And I don't say that I work there, but it's really remarkable, and I hope you guys all come back.