WASHINGTON AI NETWORK TRANSCRIPT
EPISODE 2
Host: Tammy Haddad, CEO, Washington AI Network
Guest: Miriam Vogel, Chair of NAIAC (the National AI Advisory Committee) and President and CEO of EqualAI
AUGUST 18, 2023
Episode 2: Miriam Vogel

Tammy Haddad: Welcome to the Washington AI Network podcast. I'm Tammy Haddad, the founder of the Washington AI Network, where we bring together official Washington, DC insiders and AI experts who are challenging, debating, and just trying to figure out the rules of the road for artificial intelligence. This AI revolution has been led by industry, and governments are running to catch up. Both Congress and the White House have committed to some kind of regulation, and that's why I call my guest today the bravest woman in Washington. Miriam Vogel is at the center of AI policy conversations.
She's the chair of NAIAC, the National AI Advisory Committee, the independent outside experts who advise on AI policy and report to the president. She's working with Congress and the White House and guiding industry to develop guardrails and protections. She is also the president and CEO of EqualAI, which has been pushing the industry to lessen the harms that are popping up with the new AI systems, especially with machine learning. She's working hard to demystify AI and make it trustworthy. Miriam, welcome.
Miriam Vogel: Tammy. Thank you. It's great to be with you.
Tammy Haddad: Well, it's great to be with you here at the house at 1229, where I met you, where women leaders gather. And I have to start out with what's happened so far. Seven tech giants made a “voluntary commitment to the Biden administration that they'll work to reduce the risks and harms involved in artificial intelligence.” You've been working on this a long time. What are they doing?

Miriam Vogel: The voluntary commitments are, in short, a start. It is some of the tech leaders across the globe, mostly those that originate from the U.S., coming together to say: we understand there are significant risks that could come from AI. I think there's broad enthusiasm for the opportunity that will come, but a realism that there are risks inherent in both what AI does today and in future iterations. And so they worked with leadership at the White House to come together and say, this is what we will do as a starting position: to make sure that it is safe, to make sure that the technology in the hands of everyone who touches it is meant for them and is safe for them. But to be clear, it's just a start. There's a lot more that needs to happen.

Tammy Haddad: And how are they going to do it? How does it roll out within these companies?

Miriam Vogel: Well, I think we have to unpack that a little bit, because the voluntary commitments have a long-term vantage point, and there's a shorter-term vantage point as well. A lot of the discussion, enthusiasm, and fear today is about large language models and future AI iterations as it becomes more and more powerful. But I think we can't forget that there is AI in all of our daily lives today. Very few of us go through a day where we're not in some way touched by artificial intelligence, and that can create real harms today. It can create discrimination in who is afforded an opportunity to have a job, to have an appropriate medical diagnosis, to be offered a loan. And so the voluntary commitments really look at future iterations and making sure we have safeguards in place, so that as AI becomes more powerful, they have committed to putting safety checks in place and to be thinking about national security, personal safety, et cetera. But in terms of what more needs to happen, and to answer your question about what's happening now: we work with a lot of tech companies, both well known and those you don't think of as AI companies, to make sure that...

Tammy Haddad: For example.

Miriam Vogel: For example, I would say almost every company is an AI company now. Almost every company is using AI in a pivotal way. I would argue hiring is pivotal, because you're deciding who's a member of your team, who is going to help safeguard all of your operations, including your AI use.

Tammy Haddad: And that's where people are most afraid. Let's be honest. When you talk about a regular person, not someone in national security or working on an election, what they're worried about is that some machine off somewhere is making a decision about whether they're even considered for a job. So how do these companies, and how do you, who sit at the center of all this, handle that? What are the guardrails? What can you do today? How can you make people less afraid?

Miriam Vogel: The good news is there's a lot you can do today. The good news is we have the privilege of working with companies who are committed to doing that work, and what we call it at EqualAI, and across much of industry, is responsible AI governance.
It was a foreign term a few years ago, but fortunately we've come a long way; more and more companies and key players know what it means. Top line, we're still in the process of figuring out the specific standards, and that's what EqualAI does. We work with AI leaders and AI companies, again, both those that are known and those who are just coming to learn that they're AI companies, to establish best practices in this time of some unknowns. There are laws on the books that apply to AI. There are new laws coming over the horizon. We don't have time to wait. So responsible companies are looking at frameworks to put in place, making sure they have safeguards, making sure that anything that will touch their employees or their consumers has been tested to ensure that it matches the use case it was intended for, and that the population it's deployed on is the population it's been tested on, to ensure that it's safe and effective.

Tammy Haddad: Okay, but how does the person who is applying for the job know that? I think about the Good Housekeeping seal; I think about LEED certification for housing. Is there going to be a little sticker somewhere so that people know to trust it, or know that these companies are following these rules?

Miriam Vogel: Eventually it will get there. Eventually there will be seals. There are different companies, governments, and intergovernmental organizations working toward just that. But we don't have time to wait. Obviously there is AI, as we've talked about, in hiring and HR systems, for sure, but think about it in the healthcare space. Think about the fact that we have increasingly relied on AI to help us diagnose cancer and do early identification, and it's really exciting to see these 88% effectiveness rates and higher when you're talking about early cancer screening saving lives. But what is critical today is that we note who it has been tested for, who that success rate is for, so that you don't have the worst-case scenario where a physician thinks they have this very powerful tool helping them do better than their own naked eye would, but instead they're telling their patient a false negative on a cancer diagnosis, because they don't know that the screening success rate is for majority Caucasian males, as is often the case in many healthcare scenarios, given the data that is training and feeding the AI system.

Tammy Haddad: Let's go into that next, because this week a study came out in the UK that says that generative AI is lefty, that there's a liberal bias, to say it in a better way, and that these biases are right there. We've got a polarized nation. We've got a polarized world. These studies are coming out, and you're trying to fix this while the AI plane is taking off. How can you stop things and add these kinds of rules in? Are you calling data scientists? You're running EqualAI, right? So do you have, like, 25 people to call in the industry and say, please do these five things, these are things that'll help you? How are you handling that?

Miriam Vogel: Yeah, so I would just say, I think we are not taking off. We are midair. We are in our flight stream, for sure. As for the steps that need to be taken, one way to look at this, we like to call it good AI hygiene. If you are using AI in a pivotal way, there are five things you need to do. First, you need to make sure that you have a framework in place. What are your guardrails? What are your go/no-go decisions? And who's responsible?
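To make "good AI hygiene" concrete, here is a minimal, purely hypothetical sketch of how a team might encode a go/no-go record like the one she describes. Every field name, value, and rule below is illustrative; it is not a standard from EqualAI, NIST, or anyone else.

```python
# Hypothetical pre-deployment "good AI hygiene" record.
# All field names and the go/no-go rule are illustrative only.
from dataclasses import dataclass

@dataclass
class AIGovernanceRecord:
    use_case: str                  # what the system is intended to do
    accountable_owner: str         # who in leadership owns AI decisions
    tested_populations: list[str]  # who the system was validated on
    documentation_complete: bool   # testing and lineage are written down

    def go_no_go(self) -> str:
        """Deploy only if someone is accountable, testing covered
        the intended population, and the results are documented."""
        ok = (self.accountable_owner
              and self.tested_populations
              and self.documentation_complete)
        return "GO" if ok else "NO-GO"

record = AIGovernanceRecord(
    use_case="resume screening",
    accountable_owner="Chief Risk Officer",
    tested_populations=["applicants across gender, age, and dialect groups"],
    documentation_complete=True,
)
print(record.go_no_go())  # prints: GO
```

The point of a record like this is not the code; it is that the framework, accountability, and documentation questions she lists get answered, in writing, before anything ships.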
Luckily, we have some great examples of frameworks out there. A lot of the tech companies have publicly shared theirs, and there was a Congressionally mandated framework presented by NIST in January, which is really impactful. To help break it down even further, they've created a user playbook so that you can understand how to use the framework. And at EqualAI, on our website, we have an algorithmic impact assessment tool that is based on the NIST framework, to help organizations understand how to ask these questions. So, a framework. Second, and this is an absolute second step, you need to make sure there's accountability: you have to know who in your C-suite, who in the leadership of your organization, is responsible for AI-related decisions. When there's a problem, who are you going to contact? When you have a general policy to supervise and authenticate, it's that person. Third, you need to be documenting, and you need to be clear on what you're documenting. Someone down the line is going to be using your AI, whether it's inside your organization or outside, when there's an acquisition or you're licensing, et cetera, and they need to know what was tested for and what that means. Clearly, we don't right now have a standard definition of artificial intelligence.

Tammy Haddad: Well, that's what I was going to ask you too. How do you define it? I want to get into NAIAC. You're the chair. Congratulations. Maybe congratulations, maybe not, what a heavy lift. But you're coming up with these definitions with input from others. How do you do that?

Miriam Vogel: Well, I'm very fortunate that we have 25 experts across the field.

Tammy Haddad: And where did they come from? Who picked them?

Miriam Vogel: They were presented by the Department of Commerce and authorized by the president. It's a presidential committee that is Congressionally mandated. In the NDAA of 2020, Congress mandated a few really important AI deliverables. One was the national AI office at the White House; one was this NIST framework. They also, fortunately, created this committee to report to the president, providing the president and the White House with AI policy recommendations. So it's a really broad cross section. We've got civil society, industry, and academia coming together on this committee and working really hard to answer the call: what can the White House, what can the president be doing to make sure that AI is safe and that it provides all the benefits we hope it will?

Tammy Haddad: And it's totally nonpartisan too. Just so people know, just because it's this president, it's not all Democrats.

Miriam Vogel: It is absolutely bipartisan. It was created in 2020, so not even in this administration, and hopefully it'll carry on for many administrations. Our terms are three years. But no, this is absolutely bipartisan, created by bipartisan mandate, and while our audience, our Congressional mandate, is to report to the president and the White House, we make sure that whenever we have a Hill presentation, it is bipartisan. And I will say this is, fortunately, squarely an issue where there is bipartisan support and interest.

Tammy Haddad: I was at an event with Congressman Jay Obernolte, and you could've heard a pin drop. Everyone wanted to hear everything he had to say; he's a computer scientist and Vice Chair of the AI Caucus. I've never seen more cooperation and interest from Congress, as well as the White House. But let's talk about Congress, right?
Because that's where this regulation is really going to start, or come from first. I know we're waiting for an EO from the president, but you're talking to Congress now. Where are they on these regulatory issues of AI?

Miriam Vogel: I think you're right, and you certainly have the knowledge to say, without any question, that what is happening here is unusual. But it really underscores my perspective that we are seeing Congress rise to the occasion. I think the level of depth and understanding has really deepened in the past few years, and I think most members are clear that this is a moment where they cannot shy away. They recognize that they've missed opportunities in the past, particularly with tech, to weigh in and serve their role, and I think we're seeing a deep sophistication. Representative Obernolte is one of the key leaders in this space, given that he's got a computer science degree; he was a gamer, and he worked for a game development company and owned one. So he's deeply sophisticated and committed to bipartisan leadership. You know, they recently created a task force. It's a bipartisan task force: the Republican Governance Group has partnered with the New Dems. I believe they're about 90 members in total, and some subset of them are on an AI task force. I've had the privilege to present to them and learn from them. They're asking great questions. This is not the DC of a few years ago, where, you know, the hearings with Mark Zuckerberg made DC look like a joke. I think we're going to see members who are really sophisticated, understanding both the broad, sweeping frameworks that we need to have in place, like we've seen Senator Schumer talk about, as well as the more particular, specific legislation on national security, hiring, and other areas, which we need to ensure safety and inclusivity.

Tammy Haddad: Let's stay on Congress. You're talking to them, you're helping them, you're briefing them. They're talking to you, they're asking questions. Everyone's asking questions. How do you get all of Congress, or enough members of Congress, to agree to actual legislation?

Miriam Vogel: That is a great question, and as you know, that is no small feat. But I do think there's more interest, more bipartisan support, and more awareness that they cannot wait than we've seen in years past.

Tammy Haddad: What have you seen so far? Don't tell us what's going on in those halls of Congress, but from your perspective, where do you see the intersection of both parties, or even, let's add in the White House and the AI experts? What do you think is the first thing that's going to happen, if you can predict?

Miriam Vogel: If I could predict the first thing that's going to happen... well, it's also my bias. We all have biases, as I mentioned earlier; they certainly get baked into AI and into all of our decisions. My bias is that we need to look at what has already been produced. What do we have to work with? We have this very impressive NIST AI Risk Management Framework. It is one of the most applauded government documents, by the broadest cross section, that I have seen in my several decades having worked in the U.S. government. So I think smart, savvy politicians and policymakers are realizing we have this really important contribution that was delivered to us in January. NIST is continuing to iterate. It was a 1.0, and they're doing various additional work to make sure that it stays current and that it extends into different avenues.
But I think a lot of policymakers, and we've seen some letters to OMB and other potential bills, are figuring out how to take the best practices that have already been discussed, aligned on, and given broad support within this framework and make them best practice, whether within the U.S. government or in our international agreements.

Tammy Haddad: Congress is coming back. The White House is going to announce this executive order. How is the process going to work? Is it going to start with legislation? What are the next steps for Congress?

Miriam Vogel: It's hard to predict what will happen. We can imagine what we would like to happen.

Tammy Haddad: What would you like to happen?

Miriam Vogel: You know, I think we're starting to see what I would like, and that is sophisticated, deep thinkers emerging across the population of government policymakers, and not just in the U.S. but across the globe. But to keep our lens on the U.S. for the moment: we have, as we mentioned, leaders on the Hill, in the Senate and the House, who are thinking about what their role is. We've also seen leadership across the federal agencies come out and describe ways that they are looking at AI, how they're going to regulate it, and how they're going to use it themselves.

Tammy Haddad: Name some of the agencies and what you think they're going to do, or what they've talked about doing.

Miriam Vogel: I think one of the really powerful developments has been the EEOC coming out, over three years ago, first of all, saying they're going to have an AI initiative. I mean, this small agency, which has far fewer resources than some of the others, understood the importance of AI to their work and how it could impact the civil rights laws they're in charge of. They came out first of all and said, you're on notice, we're looking at this, and they had hearings: come talk to us, tell us your experience, tell us what you're seeing in industry or in your daily lives. And then they took another really important step. They issued a historic joint statement with the Department of Justice. They said, we want you to be on notice. We would rather you look at this now than create harms or have liability. We want you to know that we are deploying all of our civil rights authorities in the AI space, and they're applicable in the AI space. And they specifically used the example of the Americans with Disabilities Act. We've seen with facial recognition that it's rife with bias and potential discrimination. It cannot see dark skin tones as easily as light skin tones. We know that. What's interesting is that when you're talking about auditory AI, it has even more biases. It is harder for it to hear tones it hasn't been trained on, and it has mostly been trained on deeper tones that tend to be associated with male voices, for a variety of reasons. You might notice Siri cannot hear you as a female as well as it hears the males in your house. Well, what about people who have a different dialect? These AI systems are often trained on English, in a certain dialect. Now, what happens if you have a speech impediment? What's the likelihood that this auditory AI system is going to hear and understand you? There you might have an ADA violation, and the EEOC has put us on notice: we're looking for that. DOJ has said, we are going to enforce these laws whether or not you intended or knew that there was a discriminatory outcome.
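Her point about auditory AI can be made concrete with a toy evaluation. The numbers below are invented for illustration (they are not from the interview or any real system); they show how a respectable aggregate accuracy can hide exactly the per-group disparity she describes.

```python
# Toy per-group evaluation of a hypothetical speech recognizer.
# All counts are made up to illustrate how aggregates hide disparities.
results = {
    # group: (correctly transcribed utterances, total utterances)
    "lower-pitched voices":  (930, 1000),
    "higher-pitched voices": (780, 1000),
    "non-dominant dialect":  (650, 1000),
}

correct = sum(c for c, _ in results.values())
total = sum(n for _, n in results.values())
print(f"aggregate accuracy: {correct / total:.1%}")  # 78.7%: looks tolerable

for group, (c, n) in results.items():
    print(f"  {group}: {c / n:.1%}")  # 93.0% down to 65.0%: the hidden gap
```

This is why the EEOC/DOJ framing matters: the disparate outcome exists whether or not anyone intended it, and it only shows up if you disaggregate the evaluation by group.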
Tammy Haddad: Let's get into the Justice Department, because we just talked about employment, and it is remarkable that the EEOC is so far ahead. But let's talk about justice, the administration of justice, and the courts. Where is AI in the justice system? First of all, is the Justice Department focused on it?

Miriam Vogel: The Justice Department does have initiatives underway on this. I think they are certainly ramping up, and I can see from their budgets that they're looking for additional resources; in our NAIAC report, we talk about ways that we hope they get additional resources so that they can do this work. I think it's another area where we also need to make sure, as we do across the government, that they have significant expertise. It's hard, as we know, for government to compete with industry salaries and the pull to work in different areas. So we need to make sure that they have the expertise on hand, as well as the resources, to prosecute these challenging cases. But we've already seen some outcomes. We saw a historic settlement with Meta last year from DOJ, where they looked at advertising practices in the housing market. They found violations, they made sure those were prosecuted, and there was a settlement. I think that sends a message. You ask where this is happening in the federal government: you've seen the FTC for years talk about their attention on AI and the ways that they plan to use the laws in effect today to regulate and oversee AI use, making sure there are no false claims (you can't claim your AI is doing more than it is) and making sure that the basic protections are in place: if someone is denied a financial opportunity, they should know why they were denied that financial opportunity. The FTC is on the case. At the CFPB, Rohit Chopra has also said, I'm looking at the ways that AI impacts your financial opportunities. So the good news is, what I would hope to see, and what we are seeing, is leadership across the federal agencies. Secretary Raimondo and Secretary Blinken have been very clear that they are on notice. They're looking at the national security implications, they're looking at the international implications, and they're looking at the industry applications. President Biden, likewise: there was an R&D mandate yesterday where he talked about that. He is laser focused, and the agencies will be, on the need to have more R&D opportunities in the AI space.

Tammy Haddad: What about the Vice President? Because she's leading an AI group as well. She's had some meetings at the White House, and I hear there are meetings in other places. I wasn't invited. Maybe you can tell us a little bit about her role.

Miriam Vogel: I can tell you that her involvement is a very good sign. We need senior leadership. With the NAIAC, one of our proposals to the White House was that they have a task force on technology, with AI as part of that, and we suggested that the Vice President lead it. This issue of responsible AI sits exactly at the forefront of the two issues that she cares about so much: civil rights and human rights, as well as technology, inclusivity, and the opportunity that stems from technology, as the former senator from, and in effect ambassador for, Silicon Valley. I really think that her involvement is a positive development, and I hope we'll see more come from that.

Tammy Haddad: Well, she's a former California Attorney General, and I wanted to ask you about the courts. How are they interpreting cases about AI?
Miriam Vogel: You know, Tammy, I'm so glad you asked about that, because while we're looking for new laws on the books, what we really need to be thinking about is all the laws currently on the books that are going to be applied by judges who don't necessarily have training in AI and technology. What we've seen over the past few years is really a divergence of holdings and outcomes, where they can be in conflict, and I think it's really problematic for those who are using AI and building AI to know what the right steps are without the certainty that the litigation process would normally bear out. We've seen a huge uptick, I think a hundredfold increase in the last year over years before, in litigation in the AI space. And some of the proposals on the Hill would allow more civil litigation. It's a key area we need to keep our eye on, and to the extent that we can make sure judges have support for understanding how to interpret these laws when it comes to AI, these are really thoughtful, interesting, deep questions that we can help them navigate.

Tammy Haddad: I think one of the things that helped is the story of the lawyer who used ChatGPT, which cited a case that didn't exist, and the judge chastised him. That made it more real to regular people. But did it have any more impact? Are there any other cases like that that have really brought it to the public mind? Because you're talking about changing hearts and minds to "don't be afraid of this technology, it can help you." But, you know, people need to be wary of it too. You need to be smart about it, just like any other technology.

Miriam Vogel: Absolutely. I think what we need to realize is that AI is a powerful tool, and it can be a weapon whether we know it or not, while also recognizing the opportunity that can come from it. So yes, we need to build trust in AI systems so that we can all benefit, so that our economy can benefit, so that as a global leader we can benefit from what AI can produce. And I love the opportunities for underserved populations to participate in the AI economy, because study after study shows our AI is better when there's more diversity in the building of the AI. If nothing else, you can sell to more people, who can safely benefit from and use your AI, if you've tested across more populations. So, you know, there was a story of Nikon creating technology to help you take better pictures. Well, apparently they had no one on their team who was Asian, or didn't think to test, because it misidentified Asian eyes as closed eyes. There you've lost an entire continent that cannot use your product, let alone any other country where there are people with Asian features, let alone anybody who understands there's this flaw and isn't going to trust your technology. So there's a two-part process. We need to be able to trust the technology, but it also needs to deserve our trust, and companies need to be talking more about the safeguards they're putting into place to deserve that trust.

Tammy Haddad: And will they talk about it? You can't really get a company to... even when Meta started Threads, that was a really fast turnaround, but there wasn't much conversation about how they're doing what they're doing. Can you get companies to start saying how they're doing AI?

Miriam Vogel: I think we are seeing a turn of the tide there. I know, with EqualAI,
for instance, in the next week or two we'll have a white paper coming out where we have different companies from a variety of industries coming together to say: here are our responsible AI best practices, here is what we determined to be responsible AI governance at the top level. There are many more white papers we'll need that go further into the details. But I think we're at a point now where companies are realizing that if they are going to earn our trust, it is imperative that they talk about what they're doing to deserve that trust: at a minimum, saying what the use case is for the AI, who can be seen and who can be heard by this AI, and who cannot.

Tammy Haddad: I love your podcast, In AI We Trust?, because you go so deep with people who are on the front lines. Do you think your message at EqualAI has gotten through to the data scientists and other experts and people leading companies, that they've got to figure this out now and not wait?

Miriam Vogel: Well, thank you. That's a high compliment, and it's really a labor of love. We just love talking to people who are leading in this space. I love to understand why. You know, five or eight years ago, when some of the responsible AI leaders started this work, it was not a known thing. No one understood. When we talked about discrimination from AI, people would look at us cross-eyed like we were really crazy, and it's come a long way. So understanding who saw this issue ahead of time, why they cared about it, and why they've dedicated their time and passion to it is really a fun project. Have I seen a change? I think the average consumer is much more savvy now, first of all, about the fact that they're using AI: knowing that you're using it when you use Waze for GPS, or when you use facial recognition to open your iPhone. When we do a workshop or a talk, I'll often start by asking people what AI they are using or have used, and the answers are much more robust now. There are many more answers, and many more people who can give thoughtful answers, than even a few years ago.

Tammy Haddad: I have to ask you about national security. I was over at the Pentagon yesterday, and I walked around and looked at all of these folks who are doing so much to keep America free, working very hard, putting their bodies, their minds, their families right out there. And I think about the impact of AI on the work that they do. Can you talk a little bit about that?

Miriam Vogel: When we think about the most consequential, highest-risk, highest-opportunity areas for AI, I think the defense space is the first that comes to mind, other than healthcare. If you're talking about automated systems in the defense space: if you're committed to human-in-the-loop, where does that human fit in? How do you make sure that when it's all systems go, the right people have the right opportunity to weigh in? Who is building it? Who's testing it? What do the safeguards look like? When we talk about the federal government, we're fortunate that the Defense Department has been upfront and well ahead of all other agencies in understanding and talking about their AI use. They were the first, I think, to commit to responsible AI principles, and they stood up an office committed to that work several years ago. So the good news is, as you saw from your meetings, they're aware and focused on this. And thank goodness, because it certainly is high risk and high opportunity.
Tammy Haddad: But it's fascinating, because it reminded me that the internet started there, and all these other technologies, Bluetooth, et cetera, that we use regularly. But this technology started with private industry. So if my job is to keep America safe, I'm a little nervous. Is that fair?

Miriam Vogel: I would say the government certainly had a role in getting the technology started, through DARPA and other mechanisms, but you're right that right now industry is in the lead, and it's unprecedented territory to not have government support helping to pave the path to a new technology, as we've had in every other iteration of these important innovations in our past. Some really important players have thought about this, and we've seen some steps forward, like the proposal for NAIRR. We had a NAIRR task force for a year and a half that was looking at how we would invest in research: how do we make sure that academia maintains the role that it needs to, and that the government is providing significant support? We had a bill in the last month from Congressman Obernolte, Congressman Beyer, and several others who understood that our commitment to research at the federal level needs to be not only maintained but increased significantly, in order to ensure that this plays out in the way that we want it to: that it is inclusive, that there are testbeds, that there are opportunities for those outside the leading companies to play a role. And it's interesting to see, if you look at who's supporting NAIRR, that most if not all of the technology companies are in broad support.

Tammy Haddad: Okay. Tell us about NAIRR.

Miriam Vogel: NAIRR is a piece of legislation and a concept that's been around for a few years. It's the National AI Research Resource: the idea is to think about how the federal government plays a role in supporting technology research at the highest levels, and how academia plays a role. There was a task force that spent 18 months thinking about how a NAIRR program should be supported, what it would look like, what the safeguards should be, what the testbeds should look like, who plays a role, and what levels of support it should have. Their report came out last January, and we've seen bills that have been supportive of putting it forward. I'm very hopeful that it moves forward. You've seen most of the academic institutions that have a role in AI, and most of the tech companies, support this. We were in a NAIAC meeting when Jack Clark, who had recently testified on the Hill, pointed out a really interesting discrepancy. The NAIRR proposal asks for a few billion dollars, but over half a decade. It's a lot of money, and it's hard in Washington to get that kind of support, also because it's something forward-thinking, a longer-term resource. It is absolutely imperative; our future depends on this investment. But it is longer term, which, as you know, in DC is a harder conversation to anchor. He also, in his conversation with NAIAC, made a really interesting point: look at what other countries are putting into their AI research. At that point he was talking about the UK announcement that had just come out, which proposed £100 million for sovereign AI and £900 million for AI compute. That's about $1.24 billion, as opposed to the $2.6 billion over half a decade that the U.S. has proposed with NAIRR. And if you scale the UK proposal by the relative size of the U.S. economy, you would get about $9.9 billion.
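For readers following the arithmetic, the comparison works out roughly as follows. The exchange rate (£1 ≈ $1.24) and the GDP ratio (the U.S. economy being roughly eight times the size of the UK's) are approximations added here, not figures from the interview:

\[
\underbrace{£100\mathrm{M} + £900\mathrm{M}}_{=\,£1\mathrm{B}\,\approx\,\$1.24\mathrm{B}}
\;\times\;
\underbrace{\frac{\mathrm{GDP}_{\mathrm{US}}}{\mathrm{GDP}_{\mathrm{UK}}}}_{\approx\,8}
\;\approx\;\$9.9\mathrm{B}
\]

That $9.9 billion is the figure she compares against the $2.6 billion NAIRR request.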
So if the UK proposal were translated to the U.S., we would be asking for $9.9 billion, a significant difference from the $2.6 billion over half a decade that the U.S. is asking for. To put it in perspective, it's not a lot considering what other countries are doing to try to maintain or gain access and leadership in this space. And yet it has not yet gotten full support. So I'm keeping my eye on that and hopeful that it'll get more support.

Tammy Haddad: Okay. Before we go, we have to talk about elections. What are you worried about with this election and the use of AI?

Miriam Vogel: Well, Tammy, again, another great question, and certainly one that needs to be at the forefront of our minds. The good news is, I think, as we've seen across the federal government in the U.S. and abroad, it is not a problem lost on anyone who is in a position to be talking and thinking about this. It's not a new problem. While we're talking about generative AI in today's public conversation, AI is not new. AI has been around...

Tammy Haddad: Deep fakes!

Miriam Vogel: It's been around for a while, you're right. AI has been around at least since 1956, some would say earlier, but it was first named at a conference at Dartmouth in 1956. So it is not a new capability, but its potential, the access we have, the compute power we have, and the number of people who can now touch and use it have obviously scaled at unprecedented, unfathomable levels. So I don't think it is a different problem than we've had in our past few elections, where deep fakes were already available and problematic. But it's available to so many more people now, whether they have nefarious intentions or just don't realize that something they're playing around with can become viral. We've seen it used against former presidents: with President Trump, there was a fake image of him being arrested in New York, which did not actually happen. Obviously, something like that has huge implications. In Ukraine, we saw Zelensky become the victim of a deep fake last March, appearing to surrender. Imagine if that had gotten to his troops before other people took it down, recognized, first of all, that it was circulating and people were seeing it, and second, that it was false. They were able to identify that the voice was not accurate and that certain movements were not accurate. But if something like that got out without people recognizing it ahead of time, stopping it, and realizing it was fake? Huge, huge consequences. So it is something absolutely top of mind that we need to be on top of.

Tammy Haddad: So let's end on a more positive note. Thank you for that. Looking ahead, what are the most promising developments in AI regulation that you think will have a positive impact on society?

Miriam Vogel: That's a great question. And thank you for ending on a positive note, because I will say…
Tammy Haddad: In AI We Trust! That's it. I mean, that's not the name of this podcast, but come on.

Miriam Vogel: Well, our podcast does have a question mark at the end of it. I think that we are at a pivotal moment where we get to decide if AI creates more opportunity, or if it creates the negative outcome where it becomes more discriminatory, where fewer people have access, where fewer people can benefit from AI and its economic advantages. So I think we need to be spending a lot of time, and I hope this is an area where we'll see the federal government play more of a role, on how the AI systems we're building can create more opportunity. How can we make sure that underrepresented communities who have not sufficiently benefited from past economic booms benefit from and participate in the AI economy? How can we make sure that there are even computer science classes, so that kids can understand, at a basic level, all of this technology that's fueling their lives? Not to say everyone needs to be a computer scientist. And I think another emerging concept that we've seen in the past few years, which is really important, is to understand that this should not be just a computer science or engineering issue. This is an issue where we all have a role to play, and we all need to bring our different perspectives to bear when we're building and deploying AI systems.

Tammy Haddad: In Miriam Vogel we trust, no question mark! Miriam, thanks so much for being with us, president and CEO of EqualAI. Thank you for all the work you've done and all the work you'll do. It's good to know you're there, right in the middle, and such a brave person in Washington, DC. I'm Tammy Haddad, and this is the Washington AI Network podcast. Thanks again, Miriam.

Miriam Vogel: Thank you, Tammy. It's been a pleasure.