Washington AI Network Podcast
Host: Tammy Haddad
Guest: Mary Ellen Callahan, Assistant Secretary, Countering Weapons of Mass Destruction Office, U.S. Department of Homeland Security
July 1, 2024 | Episode 22
Tammy Haddad: [00:00:00] Welcome to the Washington AI Network podcast. I'm Tammy Haddad, the founder of the Washington AI Network, where we bring together official Washington D.C. insiders and AI experts who are challenging, debating, and just trying to figure out the rules of the road for artificial intelligence. This AI revolution has been led by industry, and governments are running to catch up. This summer, we are introducing you to some of the important public servants who are executing the mandates from the President's AI Executive Order and are working to make sure all Americans are safe. Mary Ellen Callahan is the Assistant Secretary of the Department of Homeland Security for Countering Weapons of Mass Destruction. That includes chemical, biological, radiological, and nuclear threats. We will ask her how this Herculean task has gotten easier or harder with artificial intelligence. Mary Ellen used to [00:01:00] work in industry, protecting Mickey Mouse in the Magic Kingdom as Disney's Assistant General Counsel for Privacy and Cybersecurity. Wow, Mary Ellen, that's quite a pivot, but actually not, is it? Mary Ellen Callahan: Good to be here, Tammy. No, it's not a pivot, actually. If you think about it, both Disney and the Department of Homeland Security are large, complex organizations with lots of different equities and entities. And when I was at Disney, I was part of the headquarters to support and provide consistency to protect our assets through cybersecurity, privacy, and overall information governance. Here at the Department of Homeland Security, I'm the Assistant Secretary of the Countering Weapons of Mass Destruction Office, which again provides centralized subject matter expertise on chemical, biological, radiological, and nuclear threats to the homeland. Tammy Haddad: That's a lot. I don't even know how you do that regularly without AI. So let's talk about that first.
Is that a giant federal agency with thousands and thousands of [00:02:00] people? Do you work with states, local governments? How do you actually operate? Mary Ellen Callahan: The DHS CWMD office supports our federal, state, and local partners through a variety of training, technical assistance, and equipment. CWMD was created about five years ago to be the subject matter expertise for chemical, biological, radiological, and nuclear threats for the department. And so we support five operational components: that includes the Coast Guard with their maritime mission, the Secret Service with their protective mission, TSA when they're in the airports, Customs and Border Protection at the ports, and HSI when they're investigating border threats. Together, we provide them with the equipment they need, the training they need, and how to exercise it, because it can be very complex how to prevent, detect, and deter chemical, biological, radiological, and nuclear threats, shorthand CBRN. We also provide an extraordinary amount of support to our state and local partners. We [00:03:00] provide radiological and nuclear technology to do detection for rad-nuke threats, and that's in 15 of the major cities where we do all the prevention, detection, and training. The state and locals have personal radiation detectors, they have equipment, they have the ability to detect chemical threats. We work with them to have that established as a steady state to go and look for those threats. We all know that the state and locals are going to be the first ones to identify a CBRN threat because they're the ones who are going to be interacting with the public. Tammy Haddad: They're cops, right? Mary Ellen Callahan: They're cops. They're on the beat. They're out at special events. They are working with people; maybe there is an incident or an accident which has chemical dispersions. They're going to be the first ones on the scene.
So for a force multiplier, we're working with state and locals, both in the cities but also in the states, to try to get the CBRN knowledge of how to respond, how to detect in the first place. One [00:04:00] thing I'd like to point out is CWMD does everything that is what we call left of boom: all the prevention, detection, training. That's all CWMD supporting our federal, state, and local partners. If there's an incident, that's right of boom. That's more of the Federal Bureau of Investigation. If it's a rad-nuke attack, it's the Department of Energy. And we work closely with them, and we're trying to work even more closely. We are also leveraging our DHS colleagues at FEMA and at CISA to make sure that we have the whole life cycle of incident management, to make sure that we know what the handoff is from detection to response. Tammy Haddad: OK, so then the president comes up with this executive order mandating agencies all across government to utilize the resources of the U.S. government to protect Americans. Now that's your job to start with. Now you've got AI, where everything's faster. The technology is being updated every moment while we're sitting here. ChatGPT [00:05:00] may be like 5.8 or something. Who the heck knows? How do you do that? Mary Ellen Callahan: There's a real opportunity to leverage the promise that artificial intelligence provides for CBRN threats. It can help identify novel countermeasures. It can help identify ways to respond. It can look for anomalies that you and I won't be able to see as humans, as individuals, and it can go and find where there might be something that's happening, where there is some sort of threat or burgeoning situation. We're very excited about it. The Department of Homeland Security overall is leaning forward with artificial intelligence. It's got three very vibrant pilots. And of course the chief AI officer, Eric Hysen, is really leaning in on all this stuff.
At DHS CWMD, we're using AI already for some of our detection capabilities. We talked about rad-nuke detection. I don't want to say much more about that, but we are using artificial intelligence to really be as efficient as possible. And that's really the promise of AI. This report talks about [00:06:00] how great it can be, both to find new experiments or new opportunities, but also to go and find the bad guys sooner. With that said, the report also talks about the perils of AI and what the potential threats could be. Tammy Haddad: So you have to put this report together so that there's this true analysis. And again, the technology is unfolding while we're here. How do you do that? Mary Ellen Callahan: It was a big group effort. As you said, the president asked the Department of Homeland Security to look at the CBRN threats and how to ameliorate them. The Secretary of Homeland Security came to us because we are the subject matter experts at CWMD. And what we did is we talked to every one of the major large language model developers. We talked to international allies. We talked to trade associations. We also talked to think tanks and academia to go and look particularly at the CBRN threats and how to address them, how to ameliorate them. [00:07:00] And it really was a collective effort. We then socialized our findings throughout the interagency pretty extensively and repeatedly to make sure that we had the right lens on it. Tammy Haddad: And what did you find? Mary Ellen Callahan: What we found out, let's start with the positive, right? AI has an extraordinary amount of promise and can really help us in the CBRN space. With that said, AI overall, as you mentioned, Tammy, is going to lower barriers to entry for everybody, whether it be good guys or malign actors. Right now, we have found AI has not yet made creating a chemical or biological threat easier.
But we wanted to put in guardrails, standards, and guidance now, instead of in five years when there is more of an opportunity or risk. Really wanting to foreshadow this and address it before it goes into action. Tammy Haddad: In dealing with the companies that are controlling so much, do you give them a list? Here's the five things you should do. Are there more voluntary commitments to come? Mary Ellen Callahan: So you mentioned the voluntary [00:08:00] commitments. About a year ago, the president and these large language model developers all signed the voluntary commitments. We're actually using those voluntary commitments as a springboard for this conversation. In the voluntary commitments, they all said we want to provide safety and security, we want to specifically look at the high-risk issues like chemical and biological threats. Tammy Haddad: And they're the highest risk, right? Mary Ellen Callahan: Chemical and biological risks may not be higher, but they are probably more advanced in the development of AI for a couple of different reasons. One, there's more information available on the public web regarding biology, biological research, and compounds and recipes, so to speak. So that's already been ingested into the large language models. Similarly with chemical: if I were to rate them on the maturity of how AI has fostered or developed them, chem and bio are more mature than radiological and nuclear. Furthermore, the Department of Energy is doing a classified report on the rad-nuke threat. So I'm [00:09:00] ceding that specific information to the Department of Energy, where they can look at this. They obviously have extraordinary expertise in this area. And we're focused on the more near-term chem and bio risks. Tammy Haddad: So as we get ready for these elections, the conventions are coming up. Then you guys have a role in all of that too. So this is not just pie in the sky, "we're going to help you guys, this is what we think." You're hands-on.
Mary Ellen Callahan: We are very hands-on. So as I said, we support the five operational components, including the Secret Service. We also deploy, as part of our mobile detection deployment program, about 200 times a year. Our two biggest deployments this year will be the convention in Milwaukee and the convention in Chicago. And that capability, my team is supporting the law enforcement elements. So Chicago PD, I think, has the point for Chicago, obviously with the Secret Service, because it's a National Special Security Event. And then in Milwaukee, I can't remember what [00:10:00] the law enforcement operation is, but we work with them together in concert with the Secret Service, because it also is a National Special Security Event. But we're deploying chemical, biological, radiological, and nuclear detection in addition to the steady-state detection that, for example, the city of Chicago has, and probably I'll stop there talking about what else Chicago has. Tammy Haddad: Then we also have the Olympics. Do you support the Olympics? Mary Ellen Callahan: CWMD only focuses on the homeland. CWMD as an office isn't supporting the Los Angeles Olympics. We are looking at lessons learned, and we're going to try to talk to them about that. We are very much supporting Los Angeles. Obviously, Los Angeles is one of our major stakeholders, and in fact, I am going out to Los Angeles in August to participate in an exercise to start the Olympics planning. So four years out, we're already looking at planning, because we want to make sure that we are prepared, trained, exercised, understand the risk. And as I said, this is a complex area, but we want to make sure [00:11:00] all of the participants are prepared to respond if indeed there is an event. Tammy Haddad: So, OpenAI is the one company that has come forward for this election in the U.S. and said, we're going to go after anyone who's using our AI illegally or inappropriately, not following guidelines, right? So will they do the same thing with you now?
Will they alert you? Who alerts whom? How does it work? Because it feels like it's all new. Mary Ellen Callahan: That's part of the problem: it is all new. But that's why we want to get in now. The safety and security commitments and the voluntary commitments are pretty broad. They're pretty generic. In our conversations with the developers, everyone very much wants to do the right thing, but they all have different approaches on how to do it. Some people have hired chemists and biologists. Some people are using their typical safety and security operations to analyze it. Some people are asking us, hey, can you go and look at our development? What we recommend in the report is not that the U.S. government [00:12:00] red-team private models, but instead to give them the tools to go and develop guidelines and baselines, to have codes of conduct so that they can understand how to actually implement this, to give benchmarks and things to look for as they do their own red teaming and their own analysis of what their models are capable of. Tammy Haddad: Again, it's so new. So say Colorado, I want to talk about them because of their AI regulation. You work with Colorado and they put together this plan, right? And they're pretty advanced. Do you then take that to other states? How is this all being socialized so it's all handled the same way? So if you're in one state versus another state, is it going to be handled differently? Mary Ellen Callahan: DHS CWMD isn't working with the states per se; that's legislation, and they can work on that, whether it be Colorado or California. What we're trying to do is give the model developers, and those who are supporting the model developers, the tools [00:13:00] to be able to address and identify: this is a chemical or biological threat, this is a novel threat agent, this is a novel issue. We should remove it. We should not have it be part of the process. We should not index it in the first place.
We should not ingest it into the model itself. So the way that this is going to be socialized, probably primarily, is through the recently stood-up AI Safety Institute that the Department of Commerce has developed. They've just hired some folks to focus on the CBRN threat, and the chem and bio piece in particular. And they were part of the discussion and development of this. So what I would recommend is that the federal government take their subject matter experts and work with the AI Safety Institute to translate it, to package it, and to have it in an unclassified way so that people can implement the tools without it being a specific recipe or a laundry list of items. And to mature them: just like we're developing the state and locals with [00:14:00] regard to detection of CBRN threats, here we're trying to give the model developers tools to identify the CBRN threats in artificial intelligence. But it's still early days. We're working on how to develop that guidance, and we are working on an implementation plan itself. Tammy Haddad: It really feels like I'm in the show Homeland now, and like you're Claire Danes trying to warn people to wake up. Mary Ellen Callahan: Thank you, kind of. Tammy Haddad: But it's a whole different way of looking at threats. Mary Ellen Callahan: I would actually argue it is not necessarily a new way to look at threats. What we're trying to do is to preempt the threats. And something I want to highlight, Tammy, is that in the early 2000s, I worked with a lot of companies on advertising, right? On interest-based advertising and the self-regulatory regime and voluntary commitments that they had. Well intended, wanted to do the right thing. There weren't these guidelines or structures or baselines [00:15:00] that anyone talked to them about, and so it was bespoke. Everyone implemented it in a slightly different way. And so you ended up having a race to the bottom in terms of, oh yeah, I'm in compliance, but I'm not really in compliance with these voluntary guidelines.
What I want to do is take those lessons learned from the internet in the early 2000s and say, we've got an opportunity here. There are not yet major chemical and biological threats created by AI. Let's put in the governance, the structure, and the baseline so that those model developers, well intended as they are, can apply the right standards, and a consistent standard, to keep as many of the novel threats off of AI as possible. So I think it's an opportunity. Tammy Haddad: And have they been responsive? Mary Ellen Callahan: They've been very supportive. The AI model developers want to do the right thing. They want to make sure that they are doing the right thing. In some ways they don't quite know how, and we have to help them get there. And that's what the guidance and frameworks will [00:16:00] do. And that's also how we'll help the AI Safety Institute promote this and distribute it more as well. Tammy Haddad: But just to your point, you're way ahead, and the Safety Institute is just getting up and running. And then you're relying on the cooperation and these, again, I keep going back to it, but the voluntary commitments, which no one's really judging, right? Or there's no real analysis. Mary Ellen Callahan: So I'm going to draw the analogy to ad serving in the early 2000s. Tammy Haddad: You have to explain what ad serving is. Mary Ellen Callahan: Interest-based advertising. I think everyone knows what interest-based advertising is, because we all get those ads on the internet. And there was a self-regulatory scheme to avoid legislation: you've got to provide an opt-out, you've got to disclose what's being advertised, what the process is, and so on, primarily in browsers back in the early 2000s. And everyone agreed to the self-regulatory regime, to these voluntary commitments. But there was really not a real enforcement mechanism [00:17:00] to it, and there weren't standards that were promulgated.
There were standards, per se, but they were wide. And so everyone applied them in a slightly different way. What I want to help model developers do is understand what the risks are. I also want to create a culture of responsibility. And I think that this is the opportunity to do it. We're in the early stages of artificial intelligence and its impact on everything we do. So let's make sure that not just the model developers, but the people who are providing the inputs, the people who are allowing people to do research as a result of the AI, that everyone's thinking about what the risk is, that culture of responsibility, at this point, early on. Voluntary commitments could lead to some legal impacts. I don't want to speak as a lawyer, because I'm not one in this job, even though I am one in my spare time. So there could be enforcement mechanisms for that. But here we're just trying to help [00:18:00] train and develop people so that they can have the right standards with which to assess the risks and rewards. Tammy Haddad: And what about new regulations? Are there regulations you're looking for? Mary Ellen Callahan: We looked at this. There are a lot of different laws that could impact different elements of this. If you think about it: export control, trade, CFIUS, technology transfer, privacy, cybersecurity. All of these could be implicated with regard to some of the artificial intelligence and the chemical and biological threats, and those are governed by a bunch of different agencies. That, I think, is a strength and a weakness. It's a weakness because we don't have a unified front in terms of how to address this point. It's a strength because we can come at the issue in a couple of different ways, depending on what the risk is. The DHS Secretary, Mayorkas, has said that regulation is really a reactive model, and it isn't nimble. It doesn't respond to novel [00:19:00] threats.
And so I think the priority here is first to go and get the standards and subject matter expertise to go and work with the model developers in the voluntary commitment vein; to look at existing regulation and legislation that we can leverage if there's an enforcement action, whether it be civil, criminal, whatever it would be. And then, only after the gaps are identified, we could look to Congress or look to modify regulations to help support it. But right now, we're trying to address the present risk and danger, and trying to address it with what we have in our hands. Tammy Haddad: Congress is really supportive of your department. And I remember last year it was a little iffy there for a while, but there's a huge endorsement. Does that mean there's a real understanding that this could go really bad, and we have to have the protections, and to use AI to do it? Mary Ellen Callahan: The Congress is very supportive of the department and of the Office of Countering Weapons of Mass Destruction. There was a risk that we would get terminated last year. [00:20:00] Our termination has been suspended and we've been extended. There was a strong vote of support from the House, and we continue to thank Congress for their support. We were working on this AI CBRN report when we had the threat of termination. And I do think that this report itself, and the way that we were able to integrate the developers, plus the interagency, plus the international community, demonstrates CWMD's value proposition. So I'm very proud of the report, particularly given when it came out, when we had that threat. I think everyone will understand that chemical, biological, radiological, and nuclear threats are a low probability but a high impact, and we need to be prepared for this, and we need to train and develop and try to address any novel threats coming out of artificial intelligence. Tammy Haddad: Well, that's what was interesting.
I'm sure it's painful for you to go back through all of that, but I have to say, just as someone in news, it was so interesting to see that members actually got this. This is [00:21:00] about how our constituents live and how they want to live: the way they want to live in a free society where they know they're protected. They could wake up every morning and be less fearful. Do you worry about the doomsayers about AI? Does that worry you? Mary Ellen Callahan: I will say, having worked on this report, I am less worried now than I was before. This report is intended to be clear-eyed and not alarmist. I've seen a lot of reports that have fairly breathless language in the headings. And what we're trying to say is, AI is here. AI is going to lower barriers to entry. It's going to lower barriers to entry both for ways that we can improve our lives and identify chemical and biological threats, and it's going to lower barriers to entry for bad guys. And what we need to do is deter and disrupt the bad guys while trying to help the good guys. And the way we do this is a culture of responsibility, a whole-of-community effort, where everyone goes and [00:22:00] agrees that these are the models on which we want to live. That's why I'm saying the voluntary commitments are a very strong foundation for that. They don't give specifics with regard to how to address a chemical risk versus how to address a biological risk. But they do say: this is important, and we've got to provide this framework and have everyone agree to this, our allies, the developers, academia, for them to also identify if they're seeing something as people are developing, to have everyone work together to address this holistically. And I'm not going to say I'm optimistic, because I'm never optimistic, but I think this is the right time. Tammy Haddad: That's how you got that job. Weapons of mass destruction. I bet you're a lot of fun at a party. Do people walk away when you tell them what you do?
Mary Ellen Callahan: They usually laugh. They usually laugh at the title, and then they say, oh, okay. And then they leave. Tammy Haddad: No, they feel safer because they're with you, because you're there. Mary Ellen Callahan: That's it. That's usually the conclusion. If you're here, it's okay. Tammy Haddad: You can't get up and leave in the middle of a party or a black-tie dinner, right? [00:23:00] There's nothing worse in Washington than when someone from the agency or the Secret Service takes a phone call and the whole room turns around. But for you, people would be running from the room. Mary Ellen Callahan: The good news is I don't think I'm as famous as any of the other people that you mentioned, so it would just be the people at my table going, oh, she's got to take a call. And again, CWMD, we do a lot of great stuff. We're doing a lot of the deterrence stuff. If something happens, I'll turn to my colleagues at the FBI, DOE, and FEMA. Tammy Haddad: Exactly. All right. So I have to ask you: the first time you heard about ChatGPT, the first large language model that we all heard about, what did you think? Mary Ellen Callahan: Having worked in technology for 25 years, I initially wasn't sure whether it was really this watershed moment or not, but soon after it came out, I realized it was. I realized that it was indeed a pivot that we needed to be aware of: that it was a search capability, but one that really was able to create new content. The new content piece, I think, is really [00:24:00] the novel creation here, and that's really the sea change. And that's why I want to address the new content for chemical and biological threats now. Tammy Haddad: And also for government, right? It's a game changer for anyone that has to process tons of paper and even write a report. Did you use AI in writing your report? Mary Ellen Callahan: We did not use AI in writing the report. I know that some of the people who worked on the report did use it to test it out.
None of that content got into our actual report. It was all my team, who did extraordinary work on this big lift. Tammy Haddad: Well, we thank you and we thank them. And I will look for you at parties all the time now, or any sort of event. Thank you so much, Mary Ellen Callahan, countering weapons of mass destruction on behalf of the United States government and the American people. Thanks for being here at the Washington AI Network podcast. Mary Ellen Callahan: Thanks, Tammy. Tammy Haddad: Thank you for listening to the Washington AI Network podcast. Be sure to subscribe and join the conversation. The [00:25:00] Washington AI Network is a bipartisan forum bringing together the top leaders and industry experts to discuss the biggest opportunities and the greatest challenges around AI. The Washington AI Network podcast is produced and recorded by Haddad Media. Thanks for listening.