WASHINGTON AI NETWORK PODCAST
Host: Tammy Haddad
Guest: Paul Rennie, Head of the Global Economy Group at the British Embassy in Washington, DC
October 27, 2023
Tammy Haddad: Welcome to the Washington AI Network podcast. I'm Tammy Haddad, the founder of the Washington AI Network, where we discuss innovation, ideas, and issues in artificial intelligence policy. It's quite a time in AI policy. The Senate just had another forum featuring Marc Andreessen and others, and we're counting down to the executive order from President Biden on October 30th. But the first Global AI Safety Summit is coming, hosted by the UK Prime Minister Rishi Sunak. It's November 1 and 2, mark your calendars. They've got world leaders, they've got China, they've got all the AI experts from around the world. And they're starting to answer the question of how to make AI successful and safe for the world. No small agenda there, which is why I'm so excited to welcome Paul Rennie from the UK Embassy here in Washington. Paul is the head of the Global Economy Group, leading the UK's climate and energy, economic and trade, and science and technology networks across the United States. He's been a leader in AI in Washington, and he'll be there on November 1st with the Prime Minister as these world leaders, industry experts, and global stakeholders come together for the first Global AI Safety Summit. Thank you, Paul, for joining us. Wow, that's a hell of an agenda with a large group there. Okay, I have to ask the question: why should the UK take the lead on AI? Paul Rennie: Well, thank you, Tammy. It's great to be here and great to be talking about AI, because as you said in the introduction, it's such an important issue at the moment. Now, why the UK? Well, the UK is not a bystander to this. The UK has been at the heart of AI development for many, many years. If you remember, DeepMind, now owned by Google, was one of the founding companies that came through the AI journey, developing some really interesting applications and collaborations. The UK technology sector more generally is vibrant.
We were the third country in the world to reach a trillion dollars of technology investment and capitalization, which is a huge number when you think about the size of the UK and what we're doing. And as the Prime Minister is saying himself today, in a speech in the build-up to the conference, the UK has become the go-to destination for all the major AI companies looking at their European headquarters. So the UK taking on this role, thinking about AI safety, is, I think, a reflection not only of our technology capability, but also of our very real interest in how we make AI work well for us. How do we make it work safely? And again, how does the UK use its natural international connections to make this an international opportunity too? Tammy Haddad: I want to talk about the economic side, but you're saying this is a safety summit. So have you guys figured out what you want to do, and this is just for show? Or are you talking to other countries, and something's going to take place there that we all need to know about and plan for? Paul Rennie: So I think the answer is nobody has quite yet figured out AI. This is the joy of companies and technologies. They've really raced ahead with these amazing models. I think everybody's using ChatGPT, at least if not publicly admitting it. Somebody's using it for their work and their speeches. But how do we help make that a safe, usable technology for everybody? And the Prime Minister said himself, we're not racing to regulate; that is not his ambition. But we do have to recognize there are real issues around AI. And again, it's important for people listening to this that when we think about AI, there are different kinds of it. We're really looking at the generative AI, the frontier models. Anybody who's used Microsoft Teams and has the closed captions on, that's machine learning. Machine learning is in our Spotify recommender. Machine learning is in any number of things we do in our day-to-day life. It's not about regulating that.
It's not about looking at that. It's the frontier stuff. It's the stuff that is really at the cutting edge of technology. And that's where we're trying to find a way to bring together international partners in this, bring together international experts, academia, and civil society, to help us understand what the risks are. Paul Rennie: We have a few ideas in the UK. We published a few papers today on the risks. Tammy Haddad: Yeah, you should talk about that, because you've defined some things coming into this. So you've studied it. I mean, you're an economist, you're all about data. So please talk about that. Paul Rennie: So yeah, the risks are of a different calibre to perhaps what we've seen in the past. So the things we're looking at are, how might this affect, you know, bioterrorism? How might malign actors use AI to support the development of cyber tools or hacking tools and so on? Now, why is this different to what we've had before? Because a lot of people might say, well, I could find the recipe for a bioweapon on the internet. Everything's on the internet. But one example that is very different: if you go onto the internet to develop something and it says, oh, mix the red thing with the green thing with the blue thing, and it doesn't work, well, that's about as far as the internet will take you. With generative AI, it could explain to you what went wrong. You could say to it, well, I mixed the things together and it didn't work. Why is that? Can you help me with that? Can you help me explore what's going to happen? Or developing very targeted types of cyber weapons, where you're looking at very specific opportunities, and giving people who otherwise have very low skill sets suddenly some of the best skill sets known to humanity, via the learning of the AI.
So that's why the risk element here is not about being prescriptive. You know, for obvious reasons, we don't want to list out all the risks and how people might use AI in a bad way, but the scale of that risk has changed dramatically when it comes to frontier models and generative AI. And critically, this is the beginning. You know, you see ChatGPT, you see what it's achieved. You see the- Tammy Haddad: In, in like eight months, right? In eight months. Paul Rennie: Well, it's a culmination of decades of research. But the speed at which it's gone up the curve is amazing. And so we have to think not just about today's challenges, but about what kind of things we are setting out for the future, for the next 12 months, for the next 12 years, to make sure both that we are harnessing opportunities, and AI for good is a key part of that, but also that we're looking to manage the risks that might emerge. Tammy Haddad: I like the fact that you're doing it, because these companies are way far ahead. This is a rare time that the technology has come from the private sector right to government. So that puts you as a government in a completely different situation. Paul Rennie: It does, and yet I think there are huge benefits in how we work together with companies. And again, the Prime Minister has been clear about that. This is not a case of governments going away in a box and coming up with their plans. It has to be in collaboration with the companies themselves. It has to be in collaboration with the academics. We need to understand how these models work. The companies themselves don't fully understand how the models work. Tammy Haddad: Well, doesn't it bother you? I mean, you're used to data, right? Doesn't it bother you that you can't explain how it all works? Paul Rennie: I've got a 13-year-old daughter. I've got no idea how she works. She's been with me her entire life. Tammy Haddad: Okay, well said. Alright, you're right.
Paul Rennie: But, but I think it is a point of question that the public will expect us to do things. And I think a lot about, for example, the car industry, automobiles. You know, you could argue automobiles are one of the most transformative technologies we've had in the last hundred-plus years, and AI could be the same. When the first automobiles came out, we had somebody with a red flag standing in front of the cars and waving people out of the way. But today they're a transformative part of our lives. And yet, we want to have brakes that work. We want to have headlights that come on. We respect the rules of traffic signals, even if we have different speed limits and go on different sides of the road and go on different journeys in our cars. And there is something about understanding the risks to find common solutions that helps everybody drive their cars together. The public doesn't want to say, "I want to buy a car and I've no idea if it's safe." Nor do they want to buy a car with the car company being the only people telling you, no, no, no, it's safe. The government's ability is to say, well, we set certain standards in the industry, and after that there's a lot of choice about fittings and style and design, but you'll always know the car is safe. Tammy Haddad: Do you feel the pressure of people who feel like it is the government's job to protect them? So you're putting together plans, working with other countries, and the idea is to protect the population. Does that feel personal? Paul Rennie: It does. I mean, I always feel one of the key elements of government is to protect the population. That is kind of why governments exist, to my mind. Others might disagree, and I'm sure there's any number of philosophical texts that talk about what the government's real job is. But it does matter. Because why does the government have a role in this? It's not because we have all the answers, but I think we have the convening power.
So the Prime Minister has committed a hundred million pounds to our frontier model task force, which is bringing together some of the best minds in the world to look at this problem. Paul Rennie: Now, the British government has financial capability, it has the convening power to attract some of the brightest and best talent, and it has the ability to reach out internationally to our partners to bring them together on this journey. Now, it's not to say the government will necessarily generate its own view on everything, but this forum, and the work the Prime Minister is doing with the AI Safety Institute, which is another key thing he wants to launch through the summit, is to bring together a centre of expertise that can reassure the public that we are looking at these challenges, and doing so, critically, in a way that is agnostic about companies' interests and government interests, really focusing on the people's interests: what makes AI safe, and how do we find the right path between helping support the companies' development and reassuring the public that the product they're getting is going to be right for them. Tammy Haddad: Does that institute cover, say, the military, in addition to the workplace and to citizens? I mean, those are a lot of different areas. So I think the main focus is not going to be on, I mean, the military applications of AI are much more complicated, and as you can imagine, there's a whole different realm of security conversations. That's another conference. Paul Rennie: Exactly, you can only do so much at one time. But the challenge, of course, as you rightly pointed out, is that the companies are ahead of the curve. You know, the people with access to some of the most sophisticated models are within the companies themselves, and some of the safety things we want to look at are applicable in a range of fields. So how are you keeping the models safe?
You know, these models and the model weights that underpin them are highly coveted assets. So are the companies doing enough to look after them? Because that's something maybe these tech companies haven't always done: think hard enough about that kind of security, particularly against, you know, very serious malign actors that may wish to access them, as opposed to the kind of hackers in their bedrooms. The second thing is, is the model being kept with integrity? So we hear a lot about red teaming. How are we testing the model to make sure it doesn't have unexpected outcomes? Paul Rennie: So if the model is running critical national safety systems, energy systems, things like that, is there a way to make sure the model can't be fooled into switching off the power? Or that the model doesn't decide to do things that were unintended when we set it up in the first place. Now, getting safety in those kinds of domains, safety that keeps the model secure and keeps it safe internally, applies to any application of the AI. Any more than you'd want it to switch off the power grid, you wouldn't want it to switch off your bank account either. Nor would you want someone to ring up your bank and say, "Oh, I found a way to fool the AI into giving me another customer's details," any more than you'd want them fooling a real person at the bank. So these safety features should exist in every application. And the AI, being generative, could be applied to any application. Tammy Haddad: Do you think it's going to be a battle, though? Say you make four announcements, or even just one. Are the companies going to go along with it? I mean, don't you need their cooperation? Paul Rennie: We do, and again, I think this is where there's a shared self-interest in this. First of all, we want this to work in conjunction with all the other things that are happening. The UK is not alone in this journey.
We're working with our G7 partners, with G20 partners, with the OECD, with the United Nations. There's a large number of international groups that are looking at this very seriously. And that matters, because if you're an international corporation, this isn't just the UK alone; this is a global interest that is taking place. Paul Rennie: Secondly, if you're a company, you want to find good standards. It's not the case that a company working in America wants to develop a different standard of AI for the US, and then one for Europe, and then one for India, then one for Japan. It wants to find commonality of approach, the same way that the tire on your car could probably be shipped anywhere in the world and be legal, because it matches international standards. And that helps the tire companies, because they make one tire to the world standards. But also think about pharmaceuticals. Back in the old days, when pharmaceuticals first arrived, anyone could stand on a corner selling you a potion or a lotion and promising anything they wanted. How did you have confidence as a customer? And how did the good companies earn your trust versus all the snake oil people out there selling you bad products? Companies wanted standards. So now, when a drug comes on the market, not only does the company get a chance to market it against the competition, but you know as a member of the public it's safe. And the companies that have met the safety standards are rewarded for that by being given government approval, and that increases their market share. So from that perspective, I think companies will want to be part of this journey, because it will help them sell their product. Tammy Haddad: Yeah, that makes sense. Except, you know, here there's a lot of controversy, and we have some of the primary players in AI saying, "Oh, wait six months." So it's a mixed bag. So when you take it to the public, and you say this to us, look, here in the U.S., it's hard to get anyone to agree on anything. So now you're saying we're going to put some standards together. So I do think echoing that out to the public is a hard thing. The Prime Minister today came out and spoke. You're out here talking about it. But are you worried that you're not going to be able to break through all the noise of everything going on in the world to get this message out? Paul Rennie: I think not, because I have great faith in British diplomacy. Okay. I mean, our Prime Minister. Tammy Haddad: And that's why you're the head of the whole group and not just one piece. Okay. Paul Rennie: No, and again, if you think about the Prime Minister himself, I mean, he's a man who has a strong background in America. He's a Stanford graduate. He spent a lot of time around technology, a lot of time around the international environment. He's a great friend of America. He was here only very recently, meeting with the President. And I think his personal understanding of the journey makes it very powerful. It means that he can talk the language of companies and technology companies as well as the language of politics and international politics at the same time. And when you think about America's journey, or Europe's, or Britain's journey, every country will have to find its path. Tammy Haddad: So you're not going to try to tell anyone what to do? Paul Rennie: No, I think this is the point about creating international standards. It's not international rules. These are not mandatory requirements. It's a case of saying, you know, if you think about what the Prime Minister is trying to do, the Safety Institute is going to share best practice. It is there to learn and understand, as best it can, how these things work. The second thing the Prime Minister wants to do is talk about a shared understanding of risks. I mean, at the moment there is no internationally agreed view of what the risks are.
We talked a bit about that at the start of this conversation. And that matters, because agreeing on the risks and agreeing on sharing research doesn't prescribe what responses you have to make. Now, the UK's approach is not to be overly prescriptive. As I said, the Prime Minister doesn't want to rush to regulation, particularly, as he says, when you don't fully understand what you're regulating quite yet. But at the same time, we do know we have to move forward in how we manage this technology for the public and for public safety, and I think America will find its path. I mean, the separation of powers, executive orders, congressional legislative approaches will be different, perhaps, to the EU's approaches. And the UK potentially is a third way. And amongst this, we will take the best of each system but look to find the thing that works best for the UK market. Tammy Haddad: This morning at Washington Post Live, they had an AI forum, and Senator Schumer was there, and there was a demonstration with Senator Schumer's face. They put him in space, they hooked him up with cats, these visual representations, all of which really told the story. But when he started talking, he specifically said that the EU is pulling back from their regulations. And I can tell you, in the break room where they were serving pumpkin lattes, there was disagreement from some members of the European Union who happened to be there. What is your take on what Europe has done so far? As much as you can talk about it, what have you learned, and what are you going to change? Paul Rennie: So I think the EU's got a much more prescriptive approach to how it wants to work with AI, in terms of signing off or approving how the AI will work or not work. The UK's approach has been much more about the outcome of AI rather than the AI itself.
So if you look at the UK's white paper, now, for those who don't know, for my American friends, a white paper is a government document that precedes legislation. We publish our white papers with the latest government thinking on what we want to achieve before we move to legislation. So we have a white paper on AI. And the white paper on AI very much talks about not wanting to prejudge the AI itself, but to think about the outcomes. So for example, we've always regulated equality in the workplace. We care about equality in the workplace and how it manifests itself. And you could not, for example, turn up in front of one of our equality inspectors and say, "Well, I'm really sorry, I have huge discrimination in my workplace. It's all the fault of my Excel spreadsheet." Because they'd say, well, that was your choice to use an Excel spreadsheet. You could have used parchment and quill, or anything you would have liked, to manage your workplace. You chose Excel, but you have discriminatory outcomes, and that is what I am concerned about. It would be the same with AI. We're not trying to set up some kind of AI regulator. What we want to do is get all of the people who are involved in supporting good outcomes in the British economy and British society to better understand AI. So the same way that a company couldn't stand by and say, "Oh, the Excel spreadsheet got it wrong," you also couldn't say, "Oh, my AI HR management system produced all these discriminatory outcomes. That's not really my fault, is it?" And that's a difference of approach, I think, because it means we're looking at the outcome of what AI is doing, and whether that is good or whether that is robust, rather than saying we approve or disapprove of this AI model and therefore it's registered as safe. Tammy Haddad: But doesn't that mean there has to be some sort of intervention? Because so much of the AI that we're all talking about now is already in the can, as we would say in television.
So, you're talking about coming out, but then do you go to companies and say, "Please be aware of these biases, and you need to tell us"? Not that it's just the UK government; you're talking about the whole globe. You need to tell us how you're going to fix that. Because I do think that's one of the concerns. It's clearly an issue in the US. It's an issue for this White House. It's an issue for both Democrats and Republicans. I'm going to lose my job because some model in some room in 2021 picked up these five different inputs. What do you do? Paul Rennie: Yeah. So that's, and it's a good challenge. I think the way I would imagine the space is, one: how do we support companies coming up with good answers themselves? You know, if you have a company buying a piece of software to manage its databases, and the software doesn't work, the company speaks to the vendor and says, your database software is rubbish, you've got to improve it. So the same can be true with AI. How do we support the developers of AI to come up with better products? We are saying to the end users, you should also feel free as an end user to challenge the data that's coming, to challenge the AI development, and say, I want something that's better. Self-evidently, if you were right now on your laptop and the program kept on crashing, you'd stop using that program and get a different one. So equally, if you were using an AI tool that kept on generating bad outcomes for your company, you'd stop using that AI tool and find a different one. And this is the point: AI is not one company with one product. There are a great many companies with a great many products, and a great many people using core AI systems and overlaying them with different solutions for data sets, for human resources, for finance, for accountancy, and so on. And how do we harness the benefits?
Because the task is not just managing the negative, but how do we encourage the good? And this is the point about AI for good. You don't want to pull up the shutters and say we have created a product that must now live in a box, which is very heavily regulated and cannot step outside. You want to find a product that can keep adjusting and adapting and improving to meet the needs that we have for our workplace. And again, the Prime Minister's journey on AI for good is as much a part of the summit as AI and safety. Tammy Haddad: And what about elections? Because it's not just a U.S. election; there are elections all around the world. Are you going to address the safety side, with these elections coming up? Paul Rennie: Well, this again is part of what we want to look at through AI safety: understanding how AI can have a malign influence on populations. And that's not just the risk of it being through elections; it's also the risk of misinformation. We've gone through the pandemic now; it's the risk of public upset, any number of ways that a kind of malign actor could be using AI to turbocharge what they're doing. And again, let's not assume it's not been the case that people have tried to influence elections through existing digital tools. People have been trying to influence elections since the first time, I'm sure, we had elections. You know, there are all kinds of very interesting ways to support and corrupt and so on. I think the difference now with generative AI, or the risk with generative AI, is that one person's potential impact is now so much greater. You know, go back 150 years. If you wanted to send a whole pile of poison pen letters to prospective voters, you had to physically write those by yourself. Then came the printing press, and you could print out an awful lot more of these harmful letters a lot faster, but you still had to do them one at a time and post them. Then comes email.
You still have to self-generate and find a list of people to email it to. The risk with generative AI is how much easier it is to get bespoke products to individuals in ways you couldn't have done previously. So I think that the risk of people wanting to do harmful things has not changed in millennia. It is the capability that is now changing with them. But so too is the capability for good in this. You know, I mentioned DeepMind at the start. One of the things they worked through was AlphaFold, which looked at proteins to support medical advances in drugs, able to do in months work that would have taken thousands of scientists hundreds of years. And that's the opportunity, too. Tammy Haddad: Yeah, I was with Jacklyn Dallas, who's a YouTube creator, NothingButTech, and she told this story about how she had an MRI and she couldn't talk to the doctor for three days. But she went into, I think it was ChatGPT, one of the generative AI tools, and she got the answer. The prompt was, as you know, so important. And she got an explanation of what was actually described, and she didn't have to wait through the weekend, because she had no understanding of it. So if you just think of the impact on health. Paul Rennie: Yes. And in public services. I mean, if you think about how much time people might spend writing contracts to do things that could now be done more rapidly. If you think about how much time it can take to process files for individuals. If you think about the opportunity for those who are seeking work to have a highly personalized approach to their job-seeking opportunities, because AI is able to assess not only all the jobs coming in, but also their skills, the opportunities, how we want to support them. We see already in call centers that the kind of AI co-pilot makes a massive difference to new trainees.
So people who are arriving on day one in a call center with AI support can perform at the level of an experienced handler almost on day one. And so again, all of that is a way to support efficiency in our jobs, giving us more time to do the things that we perhaps find personally fulfilling, or the things that add the highest value to the workplace. All of that's an option with AI. There was an interesting report in a newspaper saying that when the first digital spreadsheets emerged in the 80s, and I will show my age by talking about Lotus 1-2-3, these spreadsheets destroyed 40,000 jobs in accounting, the article said, but created 60,000 new jobs in accountancy, because it opened up the market to more people being involved in different ways. So there's no question AI will create opportunities; the key challenge, to some extent, will be how we harness those advantages for the most equal use by the public. Tammy Haddad: You talked about the Prime Minister, but can you give us a little more insight? Because he seems to be excited, motivated. He, what does an American know, but he seems like he really wants to be a leader in this area, not just for this conference but overall. It seems personal. Paul Rennie: Yes. And I think he is being a leader. I mean, the AI Safety Institute he's proposing to put together will be a global first. The risk papers that we have published today are also a global first, the first time a country is trying to set out these global risks. He is motivated by this, and he is always deeply energetic as an individual. But particularly on this topic, as I said, his background, his awareness, his technology connections mean he's very alert to what is going on. And critically, this is not the Prime Minister working in a vacuum. There are tech companies coming to us, as the British government, saying they want to work with us on these topics.
They want to find better approaches. Companies themselves are finding new ways to reassure their customers. The job we are doing here is not to set some kind of ceiling. It is to try and set a common understanding of a floor. In just the same way that a car company might pass the crash tests but then tell the customer, "I'm doing even more to keep you safe," we're not saying to companies, by any stretch, once you've done this, that's enough. I'm sure they'll be going out and saying, we want to show our customers and the public we're going above and beyond to make these products safe and to make them work for you. And that's what the Prime Minister, I think, is really harnessing. He sees the interest within the commercial world. He sees the opportunities for the UK to take advantage of this and to be a leader. And the summit, and his creation of the summit, is an example of him actually making that into practical reality. Tammy Haddad: All right. My last question. It's the end of the summit. All the world leaders were there. What would you like to see in that joint statement? If there is a joint statement, so we're in a perfect world and you're writing the joint statement on AI safety on behalf of the UK government and the world, what would you like it to say? Paul Rennie: So I think the first thing is I want it to talk about a shared sense of risks, that people come together and talk about the sorts of risks we all have an interest in. I want them to come together and talk about supporting the AI Safety Institute, because again, supporting the collaboration, supporting genuinely open access, I think, is a perfect way for humanity as a whole to demonstrate there's a way to use this technology to benefit us all. And I very much hope, as part of this, people will think about what comes next. You know, the Prime Minister is also clear.
This summit is both part of the wider international picture and as much a waypoint to the future conversations we will need to have about AI. Right now we have focused on AI safety, because that is the first block in the puzzle, but after this, talking about AI for good and how we expand it will become so much more important. And I think, like so many of the international summits I've worked on, just the bringing together of people, the chance to see that we collectively can at least agree to this much, sets the perfect foundations for everything that comes next, when people want to keep moving that dial forward and keep building on the successes they've had, hopefully all coming from this first summit the Prime Minister hosts. Tammy Haddad: That sounds great. One final, final... Paul Rennie: ...final, final, final question. Tammy Haddad: Well, what's the role of the UN, if anything? Paul Rennie: So I think, like everything when it comes to international activity, you have to deepen and broaden. And it is not always possible to do both at the same time, or at least it can be very time-consuming. And I think what you have seen, through the US approach with voluntary commitments, through the Prime Minister's summit, the work of the Japanese through the G7, and the work that India is coming up to do as well, is that with each step we are both trying to deepen the amount of opportunity, or safety, we are working towards, and then broaden. So how do you take an initial initiative amongst one or two or three countries and make it an initiative for seven or eight? Now how do those seven or eight take it a bit deeper and make it one for nine or ten or twelve or thirteen? And ultimately, if we think about AI as a potential risk for all of us, and a potential benefit for all of us, self-evidently we have to ultimately talk about that with the widest group of countries.
And the Prime Minister himself referenced the IPCC, the Intergovernmental Panel on Climate Change, as part of his inspiration. The IPCC and the COP journeys bring together 190-plus countries from around the world to talk about climate change, to talk about the opportunities and the risks and the threats, at the largest possible gathering we can get. And the fact that the Prime Minister himself referenced that, I think, shows his own vision for this. This is not designed to be stuck in a box. It is designed to be lived and used by every country in the world. And AI for good is not just about benefit to the UK; it is about benefit to every citizen on the planet. And that is as much about developmental outcomes for the poorest countries as it is about supporting the jobs in the rich countries. And I think that is where, for me, the UN model is going to be one that has to be used, and one we're already working with to make it a success. Tammy Haddad: Well, we look forward to watching this summit and everything you do. Paul Rennie, thank you for being with us. Paul Rennie: Thank you. Tammy Haddad: Thank you so much. This is the Washington AI Network podcast. We're here in Washington, DC, at the house at 1229. Thanks again, Paul. We'll see you next week. Paul Rennie: Thank you, Tammy. Tammy Haddad: Looking for the perfect gift this holiday season, or just a sweet way to treat yourself or someone special? The Washington AI Network is proud to support the mission of Dog Tag Bakery. Dog Tag is a living business school for transitioning veterans with service-connected disabilities, military spouses, and caregivers. The program helps them develop critical skills to find renewed purpose and community, and to find success for themselves with greater confidence and direction. You can visit Dog Tag Bakery on Grace Street in Georgetown, in Washington, DC, or place an order online right now at dogtaginc.org. Support the mission; their signature brownies and cookies will keep you coming back for more. Check out their holiday baskets, perfect for clients or family and friends. Thank you for listening to the Washington AI Network podcast. Be sure to subscribe and join the conversation. The Washington AI Network is a bipartisan forum bringing together top leaders and industry experts to discuss the biggest opportunities and the greatest challenges around AI. The Washington AI Network podcast is produced and recorded by Haddad Media. This episode is sponsored by Booking Holdings. Thanks for listening.