WAIN <> FBI TRANSCRIPT REVISED AS OF 8.30.24 12:02 pm ET
Tammy Haddad (00:02): Welcome to this special edition of the Washington AI Network podcast. We're here at FBI headquarters. That's right, FBI headquarters, for this important conversation about the FBI's approach to AI. Our guests are Jonathan Lenzner, he's the FBI's chief of staff, and Bryan Vorndran, assistant director of the Cyber Division. Jon and Bryan, thank you for hosting us here at FBI headquarters. Thank you. This is a real treat, and we want to jump right in. So the big question is, when you think about FBI and you think about your role with AI, how do you look at it? Bryan, why don't we start with you? Bryan Vorndran (00:39): Sure. Thanks for joining us tonight everyone, and thanks for coming to FBI headquarters. Maybe not the fanciest building in town, but something that's very, very near and dear to many of our hearts who've given decades of service. When we think about FBI, we break it into three buckets. Overarching those buckets is thoughtfulness as we look across the emerging space. So obviously, we're interested in defending against threats that are leveraging artificial intelligence and machine learning, and that's a clear bucket number one. (01:07): Bucket number two is how do we protect American innovation? We think we have a very specific role helping organizations, whether for-profit or not-for-profit, protecting their artificial intelligence, ML, intellectual property. And then a distant third is how do we within the FBI apply emerging tech, AI, and ML in a thoughtful way? Jon and I were talking before we got here, and not to steal some of his thunder, but industry likes to fail fast to learn. We can't fail in the AI application space because of the impact on the American public. So we're very, very thoughtful about how we approach it. So three distinct buckets is how we think about it. Tammy Haddad (01:47): Jon? 
Jonathan Lenzner (01:48): Just to follow on what Bryan said, the threats that we are facing today, as we all know, are evolving so quickly and we need to evolve with them. At an enterprise wide level, we're talking about understanding the technology across all of our divisions, being proactive and creative, while also adhering to strict governance standards, and then also making sure that we have close and frequent collaboration with all of our partners, private and public sector. We are all familiar with the threats that we're seeing from AI and how it's an amplifier of so much of the misconduct that we see across the spectrum of threats that we at the FBI see. And as Director Wray often says, what we see is artificial intelligence taking those junior level bad actors and turning them into the varsity level bad actors. And what we're all bracing for is when it takes the varsity level bad actors and promotes them up to another level of dangerousness. (02:44): We also see the great benefits at the enterprise level to artificial intelligence. We were briefed a few weeks ago on how AI helped the FBI identify and rescue a victim, a child victim, who was being held overseas. And so when I hear about AI, from my perspective, I hear about it in terms of how it's used to triage information, how it's used for biometrics and things like translation. And just the three categories that Bryan outlined, that's exactly how we all think about it at the Bureau, from the director, throughout the agency. (03:14): The last thing I would maybe add to that was the second category in terms of protecting American AI innovation. That obviously implicates economic security concerns, but it also very much from our perspective implicates national security concerns. And Bryan can talk about this better than I can, but in terms of China, for example, stealing AI technology, they can use that to mine all of the troves of data that they've already stolen from American government and companies. 
Tammy Haddad (03:40): Well, let's go right to the headlines. You mentioned China. And today there's the story about Chinese hackers penetrating U.S. networks. That scares all of us in this room and all around the world. How do you handle that? Bryan Vorndran (03:53): Well, I think it should scare people. We're not going to comment on the article specifically, but we know, if anybody's been monitoring what we call Volt Typhoon activities since the fall of '23, that the Chinese have gained persistent access to US critical infrastructure. That has been very, very well publicized as a topic and remains a very viable threat. And I think the article is just more of the same. Tammy Haddad (04:16): But let's go back to the basic level. What are the advantages that these actors have now with AI? Bryan Vorndran (04:22): I want to take a step back. When we talk about emerging tech, whether it's artificial intelligence, ML, quantum, post-quantum, I think it's important to remember that there have been amazing advances, whether that's robotics, different types of engineering, different types of pharma. When we look at AI, it does allow the adversary to save time by automating tasks. It allows industry to save time by automating tasks. We know that the bad actors can scale more quickly, be more impactful, and think more broadly about what they're doing. (04:51): But I think it's important to take a step back and say this: the FBI specifically has faced these emerging threats for 50, 75 years, and Director Wray mentioned in a speech that when you look back, the year the FBI was founded was the first year that Henry Ford mass-produced cars. And that was going to change everything. (05:08): In the nineties, it was the internet, and that was going to change everything. In the 2000s it was the smartphone that was going to change everything. These technologies emerge over time. AI and ML is the next one.
Right after that it'll be quantum and post-quantum most likely. But I think from an FBI perspective, our position is we've always been there and we're always going to be there. We're going to try to learn about the new tech, we're going to think about how we partner with industry to understand it and to do the right governance behind it, and then to be part of the solution. And we'll talk a little bit more about being part of the solution here tonight, because it's not just the bureau, it's many in industry. (05:43): But in terms of risks, I would boil it down to two things. The first one is non-rule-of-law nations like the PRC stealing or manipulating US data to gain a strategic advantage over the United States. And the second would be already dangerous people becoming more dangerous because they can short-circuit the process. When we look at the benefits from AI for the adversary, there are really three. Number one, it is labor saving. We have proven that. We know that to be true. Number two, it enhances targeting by making the fraud appear more real. So the days when somebody who doesn't understand the English language couldn't craft a convincing spear phishing email are gone, because you can now use artificial intelligence and machine learning models to do that for you. So the fraud becomes more real. And lastly, it enhances obfuscation. It allows the actors to lay dormant on our networks for longer. (06:32): And so when we scope the problem, three important takeaways, right? The FBI is here, we're always going to be here. We're always going to try to do this work. Second is the two specific risks: non-rule-of-law nations, and dangerous people becoming more dangerous. And then the third is the benefits that the adversary is enjoying. Tammy Haddad (06:49): All right. You mentioned it a little bit, but Jon, I want to ask you about how does the FBI work with the private sector? Jonathan Lenzner (06:54): We have a broad mandate at the Bureau.
We cover everything from counterterrorism to counterintelligence, public corruption, civil rights, catching spies. And to cover all that territory we do have 38,000 full-time employees worldwide. But when you consider the breadth and the seriousness of the FBI's portfolio, we actually have limited resources. And so we rely heavily on partnerships with local and state law enforcement, with our intelligence community partners, foreign governments, but also the private sector. And it has been a significant priority of Director Wray's to strengthen relationships with private industry. (07:30): We see technology moving so fast and how it impacts our ability to do things like collect and analyze evidence and information on a timely basis so that we have to have productive relationships with private industry to make sure that not only are we staying ahead of the threats, but we're also able to share information and help companies stay ahead of the threats. We look for ways that we can leverage our resources to have an impact on communities. Critical infrastructure is an obvious area. Everything from ports and aviation and healthcare and finance. Ways that we can leverage our resources to have the biggest impact. (08:08): Same thing with companies that work with emerging technologies. We're trying to make sure that we're having as much of an impact in our engagements with the private sector. And the ways that we try to develop these relationships, one way is to actually join forces and protect America together. So we're trying to share timely information with companies. We're trying to make sure that we're being responsive even when we can't be transparent. And then also building meaningful partnerships when we can have those success stories together. And so we develop relationships with venture capital, with tech companies, with founders. 
Tammy Haddad (08:42): So do people just call you up and say, "We've got this interesting technology, can we come over and tell you about it?" Jonathan Lenzner (08:48): Absolutely. In fact, I'll just give a plug for Bryan and his team: they have been one of the more forward-leaning operational divisions, and they are incredibly victim-centric, working with companies. And so Bryan always says, "We'll talk to any company." Bryan Vorndran (09:02): Our experience has been that those companies, and I should be more broad, organizations, because it's not just companies, that lean into those relationships early on, whether that's before a cyber intrusion or at a time when you have sensitive IP, they all fare better when the clouds get murky. It's just an important takeaway. Establishing that relationship early is just the right thing to do. Tammy Haddad (09:24): How many people in the room work with the FBI now? How many want to? Oh, you got a lot of potential help there. Jonathan Lenzner (09:33): We got some work to do at the Fogo de Chao. Tammy Haddad (09:35): Yeah, exactly. Well said. All right. But Jon, what should a CEO know about the FBI and what you're all trying to do? Jonathan Lenzner (09:45): Maybe I'll just take you to a scene that plays out every morning in this building, which is, we have a SCIF upstairs where every morning the heads of all the divisions, like Bryan, brief on the relevant things happening that day. And so you have the counterterrorism division, counterintelligence, cyber, et cetera. (10:03): And sometimes by the end of those briefings, you feel like you just binge-watched a Netflix series because it's all the relevant things happening around the world that day. But the cyber division, Bryan's division, always has some of the most interesting and fulsome briefings each morning because there's so much happening in cyber and his division is doing so much to address it.
And Bryan can correct me if I'm wrong, but when they brief the success stories, when the FBI was able to help prevent an attack or mitigate the damage, more often than not, I believe those are companies or organizations where there was already a preexisting working relationship with the FBI. And then on the other side of the coin, when the stories are not so successful and there has been damage inflicted in a way that's impactful and lasting, it's been based on relationships that were not preexisting. (10:49): And so I would tell a CEO, number one, nation states and bad actors are coming for your IP, they're coming for your technology. But number two, I would recommend having that relationship with the FBI now, before the attack happens. Know your field office, know the headquarters divisions who cover your industry, and (plug to the cyber division) they are incredibly victim-centric. Their goal is to help the company, help the organization, and that's their singular focus. We've seen many examples, and when Director Wray meets with CEOs, we have these conversations all the time, where companies have been very pleased to see how well the FBI has worked with them when they've been under attack. Bryan Vorndran (11:29): Can I give you a practical scenario real quickly? I don't know what stage the companies of the representatives in this room are in, but let's just say you're in a startup build phase pre-IPO, and you have sensitive AI technology that you learn is being used by the Russians to create spear phishing emails to target you-name-who; it doesn't matter who the target is. Would you rather know who to call when you see that activity and have trust built with the FBI, or would you rather start that conversation then? We have examples in our world that are both. Some are very mature: phone calls come in, we know exactly how to work with that organization, they know how to work with us. Others, we're starting from square one and it's a challenge.
Just a real-life practical example. Tammy Haddad (12:11): So, Bryan, there's a lot of conversation about AI touching people, families, the physical world. What is the FBI seeing? Bryan Vorndran (12:18): This is probably the most difficult part of tonight's conversation. AI unfortunately is touching the physical world and certainly is touching industry and critical infrastructure, but my comments here tonight are going to be about the disproportionate effect on children. There's a story in North Carolina of a child psychiatrist who essentially took teenage photos right from a dance and superimposed nude photos that were AI-generated. The takeaway is that AI-generated child pornography is still child pornography, and it's going to be investigated and prosecuted as such. But here you have innocent kids who went to a school dance, who were at a school function, they don't even know that their photos have been lifted from a school site or a different site, and then their images are manipulated that way. And so obviously there are AI platforms out there that can create those images based on existing images or other inputs, with controls being overridden. The pain and the trauma that brings to children who really weren't given a choice is quite real and quite dramatic. (13:17): The other example I would use is we do a lot of sextortion investigations, where criminals will essentially groom victims to convince them to send compromising photos of themselves and then they'll extort those children. The FBI is tracking at least 20 suicides as a result of that over the past couple of years. But with artificial intelligence, now the criminals don't even have to go through the grooming phase. They can stand up fake accounts on the criminal side, they can grab just a head photo of the person that they're talking to and superimpose compromising photos from AI-generated content. And so it's really, really significant in terms of how it's touching the physical world.
When we talk a little bit about synthetic content here tonight, we'll talk about the disproportionate effect on the elderly as well. But just a PSA, while we're talking about this. Three things. Tell your kids to be cautious about anyone they're communicating with online. Make sure that they know that they're not the one in trouble. And number three is, just talk to your kids about what is out there and make them aware. So just some food for thought for all of you. Tammy Haddad (14:19): Well, let's go to synthetic content now. Can you give us some specific examples of what the FBI is doing and how you're trying to counter it? Bryan Vorndran (14:26): I'm going to pull up something here on a slide deck. We get this reputation, the FBI, of being this buttoned-up, stuffy, stiff group of people, and that's partially true, but the reality is we're also quite creative and quite witty. In cyber, we're also a bunch of nerds, and so we're trying to come up with this trading card game. What you see in front of you is going to be a giveaway, an initial 10-card set for when we go to RSA or something, just something unique, and on the backside is the FBI Cyber Division seal. But these pictures up here, every one of the pictures on there was generated by ChatGPT-4o. So they have no intellectual property ties to them; anybody can build them based on inputs. But just to show: when we build something like Cozy Bear there on the left for the Russians, that's how precise an output we can get from image inputs. (15:13): When we talk about synthetic content, it covers a wide variety of AI frauds and scams. As I said before, the elderly are traditionally disproportionately impacted by synthetic content. Whether we're talking about voice cloning, which can be done with just a few seconds of audio or video, or purely synthetic content in the form of images or videos, these things are becoming a reality of our present life.
They're becoming increasingly hard to spot because of the sophistication of the software, and I would say as a country, we have not clued ourselves in yet to precise ways to identify what is real and what is not real. So just to bring this face to face with reality: virtual kidnappings. With essentially four to five seconds of my voice, it can be cloned with a very robust vocabulary. So if I was a child and you had a few seconds of my voice, you could turn it into a virtual kidnapping, where you would call a child's parents and say, I have your son, your daughter, your niece, or your nephew, I want $50,000 or I'm going to injure them. And they would put a cloned voice on that call as kind of a proof of life that the child was actually there with them. This is very real, right? This is not made up; it's in the wild. This is very real. (16:24): So that's just one example. And by the way, another PSA: if that ever happens to any of you, they're going to want to keep you on the phone. The number one thing you should do is hang up and call the person they tell you they have. Get off the phone, call that person. The second example is a bank spoof. We have scenarios where a CEO of a bank essentially does an MS Teams or a Zoom call with a financial manager and says, "I'm so-and-so." It looks like the person, sounds like the person. "You need to transfer $2 million to this account right now." These are proven applications by criminals utilizing synthetic content. When it comes to synthetic content, though, and you'll hear me say something similar later tonight, it's not our responsibility alone. There are providers out there that need to be part of this. And getting back to Jon's point, this is one of the primary reasons we partner with the private sector and with companies going through different startup phases and different maturation phases: because we need to know each other, and they need to be part of the solution as well.
Tammy Haddad (17:18): What do you want them to do? Bryan Vorndran (17:19): Jon? Jonathan Lenzner (17:22): We work closely with companies. I used to run a company; I was the CEO of a for-profit company that reported to a board and shareholders. We recognize that companies are profit-driven, but also a couple of things. One is, we work together to guard against threats, and it is our expectation and hope that companies will realize that they don't want to be a platform that is known for housing bad actors. It's not good for their bottom line, among other things. And two, we want to make sure that there is a reasonable way for us to get information with lawful process for legitimate investigative law enforcement and national security activities. Tammy Haddad (18:04): Okay. I want to turn to privacy. Jon, how is privacy managed at the FBI? How do you balance privacy with taking advantage of these emerging technologies that can help you achieve your mission on a greater scale? Jonathan Lenzner (18:16): Well, I think when it comes to privacy- Tammy Haddad (18:18): And data. We're really talking privacy and data. Jonathan Lenzner (18:20): Sure. When we look at privacy from the FBI's perspective, the way it's managed, in some ways it's managed by the FBI director, who as the head of the agency thinks about privacy in everything that he does. And as his chief of staff, I can attest to the fact that we are constantly talking about privacy issues, with regard to data as well, and how to balance that with the need to execute our mission. We also have, because it's required by statute, a privacy officer, one of our deputy general counsels, Erin Prest, and she oversees our whole privacy program. But I think the important takeaway for tonight is privacy is really managed by everybody at the FBI. Everybody throughout headquarters, everybody in the field offices, understands the need to respect the privacy and civil liberties of Americans.
(19:09): We work with very powerful authorities and technology and tools every day to execute our mission. But with those tools and authorities comes responsibility to balance privacy interests with how we execute our mission. And so when we're looking at things like emerging technologies, we have a framework through which we will analyze whether to use a technology and how we're going to use it, and ultimately we're accountable to American citizens, to taxpayers, and to Congress. And so we always think about how we're going to explain what we're doing, how we're doing it, and why we're doing it to all of those audiences, and how that balances with the mission that we're trying to execute. And so, that is one of the important things about the FBI and the culture of compliance that starts with the director and that everybody in the bureau, I think, embraces: making sure, as Chris Wray talks about all the time, that we're not only doing the right things, but doing the right things in the right way. And that is a culture that is embraced when we're talking about privacy and data. Tammy Haddad (20:10): And how does the FBI work with facial recognition technology? Jonathan Lenzner (20:13): So we're talking, from my perspective, one-to-one matches and then one-to-many scenarios where we're running an image against a database full of images. Facial recognition is a very powerful technology that provides us with really important tools for investigative leads. And we have people in the bureau who are trained to use that technology, whether an agency has it homegrown or works with third-party commercial vendors. But either way, the people in the FBI who work with facial recognition have to follow very strict guidance and governance standards. They have to be trained on it. We take inventory twice a year of our use of it, and it's used very strictly throughout the agency.
We work under guidelines from DOJ, from executive orders, from the White House. (21:06): But the benefit of it, quite frankly, is that we can do our work faster and at more scale when we're doing things like trying to identify victims or find fugitives from justice. We've had cases where we are scanning the dark web, where we can identify a victim who's in another country. And so it is a very powerful technology, and we're trying to implement it in a way that's deliberate and careful across our agency, but also within the strict framework and guidance that we have from the White House to DOJ and our own policies and procedures. Tammy Haddad (21:38): I don't know how many attended the Democratic Convention, but when you walked up to check in, they said your name, so it was facial recognition technology or whatever the Secret Service used, and at first the media people were horrified, and then they're like, "Great, I can get through faster." It was an amazing change in just two days, but that had not been used before. It was speedy, let's put it that way. Bryan, let's go back to data, and these nation state adversaries, everything's about data now, these vast troves. How do you prevent them, or maybe you don't prevent them, from stealing our data? How do you work with that? Bryan Vorndran (22:17): It falls into a broader conversation about China, primarily. Can I take a step back? Because I don't want to miss an important point. When it comes to AI acceptance and adoption, we are seeing it across the cyber ecosystem, in the cyber criminal space and the cyber nation state space, but also with international and domestic terrorists using it to essentially draft, translate, and distribute propaganda. So it's not just a cyber adversary piece. We're seeing the adoption in, as I said, child pornography, elder fraud, and even terrorism. China remains a problem. That's just the reality of it. We can go back to the OPM hack from many years ago.
Even when you look at 2022, there's APT41 activity involving theft of PII and fraudulent unemployment claims through many state government accounts that pilfered millions of dollars back to China. So it's more of the same. (23:08): A couple important things on China, and then I'll answer the data exfiltration piece. They have a bigger hacking program than all other nations combined. They are very, very well resourced. They have targeted every emerging tech industry in the United States. You name it, they've targeted it. Just go read their next five-year plan. It'll tell you whether your industry is a target or not. They do that so that their collectors around the world can collect on anything that's important to them. They will always be looking to steal intellectual property or data. The data will be used to try to gain access to steal the intellectual property, and I'll walk you through that. And they're doing it so that they can cripple us in as many ways as possible and have strategic advantages. Just one strategic advantage: they're trying to pre-position themselves for a potential invasion of Taiwan, maybe as early as 2027, and having access or data to use back against us becomes important in that environment. (24:01): So you have this vicious cycle of theft of IP and theft of data feeding more targeting from a very, very well resourced group, and then theft of more IP and theft of more data. One thing Director Wray talks about is how we convince the average American of what China, Russia, and Iran are doing and why it matters to them. And that remains an ongoing dialogue. But I think Director Wray says it best: they undoubtedly care about what all of us are doing, they undoubtedly understand what all of us are doing and what we're all tied to organizationally, and they understand the value of those things in the organizations. And it's an important takeaway. (24:36): But in terms of the data, it's all about theft of the data to enable onward attacks.
So if they steal data about me, they can make a spear phishing email to the FBI more accurate. If they steal data from a financial manager at a Fortune 500 company, they can make a spear phishing email much more precise and targeted against that company. So I just would look at it as this vicious cycle: they are stealing the IP for economic advantage, they are stealing the data to further their ongoing hacking activity, and it remains a concern. It will remain a concern for decades. Tammy Haddad (25:11): I'm wondering if you'd comment on the arrest of Pavel Durov, the Telegram CEO, either one of you. The French arrested him. Bryan Vorndran (25:20): I don't think we're going to comment on the arrest, but what I would say is that encrypted messaging apps remain a challenge for law enforcement and for the intelligence community to be as effective as we can. If I asked those of you in this room to raise your hand if you use Signal, I'm sure I would get some hands raised, and the reason for that is that it's an encrypted messaging app, and those things become very, very difficult for us to work with to understand what is in somebody's content versus what is not. We have an entire effort underway that we term lawful access, because there's obviously important intelligence, important evidence, especially from an international perspective. So take it out of the United States context, take it out of the First Amendment protected speech context, and ask: should we know that a terrorist in Syria is talking to a terrorist in England on a secure messaging app, and what that content is? I think there are many of us that would argue yes. Telegram is a problem and a challenge when it comes to that. Jon, I don't know if you have anything to add? Jonathan Lenzner (26:22): No, that's exactly right. Tammy Haddad (26:23): Jon, I'm going to change the conversation a little because you're a former US Attorney, longtime federal prosecutor.
What is your perspective on the intersection of AI and criminal law? Jonathan Lenzner (26:33): My goal when I was a prosecutor was always to have an impact, especially in the communities where I lived and worked. But one of the challenges was always that you can't prosecute every case. You don't have the resources with investigators or in court to bring every case, and so when I was the US Attorney, I had to make tough decisions about how to prioritize resources. There were times when I would actually shift resources away from one threat to another that I thought was more pressing and was impacting families in Maryland in a more immediate sense. Those are always hard decisions to make, and I'd make those decisions based on data. And you always worry about the data that you have, but boy, I wish I were US Attorney today, because we didn't have some of the AI tools that we have now, and the ability to make better-informed decisions about how you prioritize your resources I think is going to be one of the great developments that we see for us as prosecutors. (27:29): But when I think about AI as a prosecutor, you think about having to hold accountable those who are using it to commit the crimes, and then also how we can use it to advance our mission. And when it comes to new technology enabling crime, there's always a question about do we have the statutes on the books? Do we have the laws to handle this new technology? And as our Deputy Attorney General, Lisa Monaco, often says, the laws today are still relevant and sufficient for what we're seeing. Fraud committed with AI is still fraud. Discrimination committed with AI is still discrimination. So we can hopefully rely on the same legal frameworks that we have been, but we just need to find ways to seek stiffer penalties. And I think the Deputy Attorney General has talked a lot about this.
Making sure that when people are using things like technology to further a crime in a significant way, we are looking to bring stiffer penalties to punish things like misusing AI. (28:27): And at some point, if the federal criminal statutes need updating or if we need a new statute, I'm sure DOJ will work with Congress on that. But Bryan mentioned sextortion earlier. I talk to my former colleagues all the time, the ones who are still US Attorneys, and they've talked about a huge spike in sextortion in recent months, and how do we allocate resources for that? And so making sure that as prosecutors, not only do you understand how the technology works so you can investigate, prosecute, and explain to a judge and jury how it works, but also that as a prosecutor you are properly allocating your resources. We have 94 US Attorney's offices, and as a US Attorney, you're bringing your cases in your district. And so making sure that we're all properly aligned I think is a challenge, but also an important mission for us as prosecutors in DOJ. Tammy Haddad (29:19): Bryan, let's go back to the world. Have you seen AI used for foreign malign influence purposes? Bryan Vorndran (29:26): The answer to that question is yes, and I'm going to go right to discussing what we refer to as Meliorator, which was spoken about publicly in the last 45 days. Essentially, AI-backed fictitious accounts on the X platform to support foreign malign influence. And this is the second slide I have, but these are just some of the images that were used for the stand-up of the fictitious accounts on X. In this case, Russia used artificial intelligence to build a bot farm to stand up these fictitious accounts, which were then used to disseminate foreign malign influence, write messages to sow discord, and produce narratives favorable to Russia. So the answer to that question is 100% yes.
Tammy Haddad (30:08): We're hearing this term “adversarial machine learning” from our friends at NIST. What is it? Bryan Vorndran (30:14): One of the questions, again, when we talk about adversarial machine learning, is why the FBI cares. The FBI cares because it's predictive analysis by NIST, defining threats that may not have manifested in the wild yet, but that we expect to manifest in the wild. There are three of them. The first one is what we call an evasion attack. You'll hear the slang term jailbreak, but it's when the threat actor systematically attempts to circumvent the controls on an AI/ML model. That's the first one. (30:40): The second one is called data poisoning, which of all of these I think is actually the most straightforward in terms of definition. There are actually two subcategories. One is an image scaling attack. Autonomous vehicles, for example, need to adjust the aspect ratio of the imagery feeding into and out of the vehicle to get a real sight picture. If that aspect ratio is distorted through that type of attack, it could disturb how the vehicle scales those images and cause a fault in the car. The second type of poisoning attack is what we call a sponge attack. Think of it as a DDoS attack on the data supporting the model. And the third is what we call a privacy attack, where essentially it's the theft of the algorithm, of the intellectual property: the data that supports the model. So that's what NIST has defined as adversarial machine learning. Again, having defined those here tonight, it's pretty straightforward, I think, why the FBI cares: it's going to be predictive for our work. Jonathan Lenzner (31:36): Hey Tammy, can I make just one plug for NIST? I don't know how many people work with NIST. They are a terrific agency. So impressive.
They actually help look at all of our algorithms for facial recognition technology, and they verify the accuracy of our algorithms, and they are terrific to work with. I know you had somebody from NIST on a few weeks ago. So, big fans. Tammy Haddad (31:56): Yeah. They're incredible. Okay, so I have some fun FBI questions. How many tips do you guys receive a year? I was looking at your tip form. I wish I had something to tip you about, but I didn't know of any crimes. Bryan Vorndran (32:11): I just don't know that number. The closest I can get, from my part of the conversation, and this is not FBI-wide, is that I would have to go read the IC3 annual report, but it's measured in the hundreds of thousands. That's just cyber. Tammy Haddad (32:25): What if it's AI? Bryan Vorndran (32:27): You mean how do we differentiate between a real person and AI? I would point us back to Jon's great line about how the laws we have, independent of AI, still make something a fraud or some other underlying crime. We're not going to take a stab at saying whether something is or is not AI-generated as an initial referral. We do this for a living. We're going to do due diligence, and if we believe a referral is fictitious and AI-generated, then we have a way to process those, but that's not the starting point. The starting point is to evaluate the referral, vet it, and do investigative activity to prove or disprove whether there's a crime. Tammy Haddad (33:02): Do you know if... We have some people from OpenAI here, we could ask them, if they used FBI public records in large language models? Felipe's like, oh my God, I can't believe she said that. I guess you can't protect your public records because they're public records. Bryan Vorndran (33:15): Yeah, I don't really have a strong comment on that. Tammy Haddad (33:19): Okay. I tried. Bryan first. When did you first hear of ChatGPT and were you surprised? Bryan Vorndran (33:28): All the years run together, much less the months.
I mean, probably about two and a half years ago. Was I surprised? No. We do this for a living. Part of our responsibility post-9/11 is to be the lead domestic intelligence agency, so there was a lot of production on this from an intelligence perspective, at the strategic and tactical level, about what was coming. So I don't think I was caught off guard. I think the ease of adoption and the ease of use is somewhat surprising. For example, the images I showed you were built in five seconds. I think the ease of application and the precision of some of the outputs is somewhat surprising, but the underlying technology and its evolution, not so much. Tammy Haddad (34:08): Is it strange to work in this technology that's being led by private industry and not government? Does it change how you operate? How you worry about it? Bryan Vorndran (34:20): No, I don't think so. I would just point back to some of my earlier remarks. The automobile was supposed to change how we looked at private industry. It didn't. Then the Internet, then the smartphone. This is the next thing. We know who we are, right? We know how to work with organizations. We're proud of that history, and we just take it as the next thing. Tammy Haddad (34:40): Okay. Jon, when did you first hear of ChatGPT and were you surprised? Also, do either of you allow your kids to use it? Jonathan Lenzner (34:48): The other day, a package showed up from Nigeria, and, thank God, it just had a baseball card in it, but my 12-year-old has access to all kinds of things that I didn't realize she has access to. But we do a lot of engagement with private industry. We go out to Silicon Valley and meet with venture capital firms and companies. We had a week last fall where we went out and met with tech companies, and we invited our Five Eyes partners in, and we're constantly trying to have those engagements. We do some things publicly. We do some things privately. We go down to Austin, Texas, and meet with defense startups.
We have a new founders initiative. I remember hearing about ChatGPT in that context. I'll tell you that when I first saw it in practice, it was obviously something I was pretty surprised by, but… Tammy Haddad (35:28): Bryan wasn't surprised at all. This is so good. By the way, you're the first person I've met who wasn't surprised, and it makes me feel safer, I just want to say. Bryan Vorndran (35:35): If I can go back, I teach every spring semester, and the use of ChatGPT in the classroom is deeply concerning because of the ease of finishing assignments. I haven't figured out how to counter that. Tammy Haddad (35:48): That's the same thing in the newsroom. We worry about that. Jon, what's the founders? You mentioned a founders initiative? Jonathan Lenzner (35:55): Yes. Well, we are working with some venture capital firms to identify portfolio companies and founders who we are looking to develop strong, long-term relationships with. We want companies to think of the FBI as an ally, as a partner, and we want them to think of that in their early stages, not when there's a problem. And some of the founders who we're meeting with and talking to are serial entrepreneurs, and when they launch their next company or their next technology, we want them to already think of us as a partner, so we can talk to them about things like protecting their innovation and intellectual property, looking out for the threats, and getting into what we call that virtuous cycle of information sharing, where companies share information with us and we share with them. We get some data points about threats, but we don't have the whole picture. The company might get some data points but not have the whole picture. But we take our two and put it with their two, and, as Director Wray says all the time, put it together and you get more than four, you get five and six, because the partnership leverages so much in terms of the results.
(36:58): And so we want these founders and their companies to be thinking about us as a way that we can partner and leverage together to identify those threats. Because the reality is today, no one by themselves will see the bad actors, what they're doing, how they're doing it, when they're doing it, but together we can. Tammy Haddad (37:16): I think that's a great ending. Thank you, Jon. Thank you, Bryan. Not just for tonight, but all that you do for all of us, and thank you all for joining us. We're here at FBI headquarters. On behalf of the Washington AI Network, thank you so much for being here. (37:42): Thank you for listening to the Washington AI Network podcast. Be sure to subscribe and join the conversation. The Washington AI Network is a bipartisan forum, bringing together the top leaders and industry experts to discuss the biggest opportunities and the greatest challenges around AI. The Washington AI Network podcast is produced and recorded by Haddad Media. Thanks for listening. END [00:38:12]
WAIN_FBI (Completed 08/30/24) Transcript by Rev.com