WASHINGTON AI NETWORK PODCAST TRANSCRIPT
Host: Tammy Haddad
Guest: Ylli Bajraktari, Special Competitive Studies Project
March 22, 2024

Tammy Haddad: [00:00:00] Welcome to the Washington AI Network podcast. I'm Tammy Haddad, the founder of the Washington AI Network. Today's guest is Ylli Bajraktari, the president and CEO of the Special Competitive Studies Project and one of the world's leading experts on artificial intelligence. His experience in the White House and heading the National Security Commission on AI has made him central to the conversations at the intersection of national security, intelligence, and government. At a time when AI is experiencing exponential growth, Bajraktari praises the U.S. for leading on innovation, but he also issues a dire warning that the U.S. must take big steps to stay ahead of China and maintain its information dominance. He says that the first nation to master AI supremacy will hold a significant strategic advantage. He calls for the DoD to be combat ready by 2025, noting that on the next battlefield, militaries will have great difficulty hiding from and surprising one another. Ylli, thanks for joining us.

Ylli Bajraktari: First, thank you so much for having me on your podcast. We at SCSP are big fans of your podcast and everything you're doing to, you know, expand the knowledge and network of AI in DC. I think we are living at a time when DC needs more education and more effort in bringing decision makers and policymakers up to speed on the conversation around AI. The Special Competitive Studies Project grew out of the National Security Commission on Artificial Intelligence. And the background story to this is that when we were getting to the close of the National Security Commission on AI, NSCAI, which–

Tammy Haddad: And who called for that, by the way? What's the history?

Ylli Bajraktari: Yeah, yeah, yeah. So that's a really good question, Tammy.
When you look in hindsight, everything makes more sense sometimes. But in 2018, I think our Congress was really concerned with two factors. Number one was everything they were hearing from Silicon Valley. They thought there was a powerful, transformational technology coming our way. And I think you could see the early signs of this among Silicon Valley founders and big companies that were testing and playing with this new technology called AI. And then the second factor was, when you look back at 2017, '18, and '19, everything China was doing in this space. You could get a sense of why this is such an important technology. You could see a clear sense of the strategy they were putting out publicly. They said it: they wanted to be a global AI leader by 2030. They were putting enormous resources behind their companies. They identified national AI champions among their companies and said, "Hey, you are my AI champion. You go and run, run as fast as possible." And so I think our Congress was really concerned with these two factors. What are we doing with AI for the purpose of national security? And I'd like to highlight that the NSCAI was not looking at AI for the benefit of humanity or the economy or education or society. The focus was very narrow because it came from the Armed Services Committee on the Hill. [00:03:00] So the focus and the mission were clear. What are our national security agencies doing with AI? And what can they do more of? And so that's how we structured our work. They picked 15 individuals from the private sector, academia, and government to be the commissioners. I admit that I didn't know much about commissions before that. The most famous one is the 9/11 Commission, which came after the tragic events of 9/11 and looked at what went wrong and recommended ways to fix those deficiencies. But Congress routinely appoints one or two commissions a year. You don't hear about most of them.
Ours, I think, came at a critical point, because when you look back at 2017, you know, we came up with a National Security Strategy in the Trump White House, and I was fortunate to work for Gen. McMaster, who was the primary author of that document, which called for turning around our policies on China. Up to 2017, our policy toward China was to support their growth and development so they could benefit the overall global commons, global goods, and everything else. [00:04:00] But I think in 2017, there was a clear realization that that is not the China we wished for. We got a different China, one that was a main competitor to the United States and our system of values and beliefs. And so I think in 2017, you have a massive shift in how we view China, from a political and geopolitical competition standpoint. But I think the one piece that government documents, which I've witnessed many, many times, don't have is the technology piece, because the government is no longer in charge of technology; that has moved to the private sector. That's where the innovation has been happening since the end of the Cold War. And so I think with NSCAI, we moved in and filled that void, and we produced what I call a national security document on artificial intelligence. But I mean, you can easily replace the word AI with any other technology that is emerging right now, because there are some clear deficiencies we have as a society that we have to address: the people piece, the hardware piece, how we're organized, where the money has to go in terms [00:05:00] of basic R&D. And so I think NSCAI played a vital role in those two-plus years in providing Congress with the overarching themes of where we have to move. I think the CHIPS Act is probably the most visible result of that effort, because for three years we advocated for a massive infusion of resources toward the hardware we lost over the years to China and other countries. Because a key component, as you know, in developing these AI models is the hardware.
And so we didn't have access to that hardware anymore, nor were we producing that hardware. So we recommended, you know, a massive infusion of resources toward that hardware buildup in the United States, which we now see with the execution of the CHIPS Act. So NSCAI was really good in terms of identifying all those deficiencies, I think. We have massive wins on some of them. I think we count more than 50 pieces of legislation that were passed based on that document. And then, the long answer to your question of how we got to the [00:06:00] SCSP. This was the time, in the summer of 2021, when Eric was writing his book—

Tammy Haddad: Eric Schmidt.

Ylli Bajraktari: Eric Schmidt was writing a book with Dr. Kissinger, "The Age of AI." And Dr. Kissinger spoke fondly about this initiative that he led in the '50s called the Special Studies Project, which basically—

Tammy Haddad: And that was for President Eisenhower, right?

Ylli Bajraktari: So it started first under Nelson Rockefeller's mentorship, but ultimately it informed President Eisenhower's buildup, and not just the military buildup, because people forget that, you know, Eisenhower also created the National Science Foundation, NASA, all these institutions that actually helped us lead in innovation, whether it's space or science. But Nelson Rockefeller tasked Dr. Kissinger in the '50s to look at how we could get ourselves organized against the Soviet Union threat. Now, when you look at these documents from the '50s, and I was fortunate to look at the entire archives of the minutes and everything they were doing, you know, back in the '50s, we were talking about mutual [00:07:00] destruction, because the Soviet Union had reached nuclear parity with the United States, so it was a complete change of dynamics between us and them. They were going and pressing our alliances. They were investing in countries. They were building their military capabilities.
And so it was a really difficult time for our country in the '50s. We were still living through the World War II victory sentiment, how we got past, you know, Nazi Germany and their allies, but we were not looking at the new threat in front of us. And I think the Special Studies Project from the '50s really captured that sentiment. And so Eric came and said, "Hey, Dr. Kissinger would really like us to restart that initiative." I admit that I didn't know anything about the Special Studies Project from the '50s. There's not much you can find online, because it was from the '50s. And then they published a book that became a bestseller, though not widely known now, called "Prospect for America." 400,000 copies sold. It cost [00:08:00] $1.25 in the '50s. I think I paid $99 on Amazon on a good day. I think ever since we launched SCSP, the price has gone up.

Tammy Haddad: Yeah.

Ylli Bajraktari: Yeah. But the book is really remarkable, because when you read it from today's perspective, it provides that thinking from the '50s from our leading policymakers and military officers: what are some of the things we have to do, not just in the military and national security space, but how do we have to organize our educational system? How do we have to organize our economy to counter the vision that the Soviet Union was projecting globally? And so I think that document, as you said, informed the Eisenhower buildup. And I also think that document really set the foundations for how we built everything during the Cold War to compete against the Soviet Union, a competition we ultimately won. And so, Eric was like, should we do this? I drafted a one-page placemat for Eric, and then I called Bob Work, who's my old boss from the Pentagon, and I consider him my closest mentor. And I said, "Hey, sir, we're about to launch this. I would [00:09:00] love it if you could be part of this." And he said, "Really, we've got to add competition to the title." I was like, how's that?
Why is that? And he said, because in competition you either win or lose. And we are in a generational competition with China. So we have to win this. So that's why we added competition to our title...

Tammy Haddad: Well, it says action.

Ylli Bajraktari: It is, it is action, and it also differentiates us from the SSP of the '50s.

Tammy Haddad: Right.

Ylli Bajraktari: And so SCSP came to fruition at the end of NSCAI. On October 1st, we launched SCSP. And it's been two-plus years now. And our goal, as stated in the mission statement available on our website, is really: how do we get ourselves organized for a competition that will be so transformative for our society, writ large, at a time when you face competitors, not just China? You now see a set of alliances building against the United States and our allies: China, Russia, North Korea, and Iran are getting much, much closer across all domains to compete and to present a different, alternative [00:10:00] view to what we present globally. And I think we help in that space, because we bring the power of our network and the power of the work we have done so far in putting AI and emerging tech at the center of this competition. And I think there's a bipartisan sentiment behind this.

Tammy Haddad: And don't you think it's stronger today than maybe even a year ago, or no?

Ylli Bajraktari: One argument I make is that there's a lot of political noise in Washington, or when you turn on the TV, around our political system. But when it comes down to a technology competition with China, there's a clear sense of bipartisanship. I mean, look at the actions over the last two weeks from the China Select Committee against TikTok.

Tammy Haddad: Well, that was going to be my question. What's your take on that?
Ylli Bajraktari: We've been pretty clear from day one of SCSP about TikTok and all the platforms coming from what we call countries of concern. TikTok is, I think, just the first one we are dealing with. But I think there's a set of [00:11:00] applications waiting in line to become the next TikTok, whether it's in retail, online shopping, FinTech, you name it. China has deployed a set of platforms globally to dominate our digital life. And for the last two years we've been pretty clear: we should ban TikTok on two grounds. Number one, TikTok is a platform coming from the country that is our main competitor. And it's not a competitor in, sort of, the democratic space. It's a competitor that presents a threat to our political system, our economic system, and in the military competition. How are we even allowing a platform coming from a country like that to operate in our society? So our recommendation has been to ban it on national security grounds. And the second thing is, how is it possible that we allow any platform coming out of China to operate freely in the United States when they don't let any [00:12:00] of our platforms operate in China, having banned them?

Tammy Haddad: Right.

Ylli Bajraktari: And as you know, the TikTok that's allowed in the United States is not the same TikTok that they allow their teenagers to use in China. So reciprocity is big for SCSP. And on those two grounds, we have argued that we should find a modality to either ban TikTok or, as the China Select Committee has said, have TikTok divested from ByteDance, which is closely tied to the CCP.

Tammy Haddad: Okay. So Secretary Mnuchin is putting together a group, and I guess there are other groups forming, to buy TikTok. Is that going to help?

Ylli Bajraktari: So look, the China Select Committee was pretty specific.
If they divest, if they cut ties with their parent company, which is also tied to the CCP, if they build data centers in the United States, if they follow United States laws, if there's American leadership of the company with an American [00:13:00] board, that is a completely different TikTok. Now, the challenge you're going to have there is, will China ever allow that? I personally don't think so. Will China ever allow the algorithm to be in the hands of United States leadership and ownership? I also don't think so. Because if you look back two years ago, the one thing they placed an export control on was the TikTok algorithm. They just didn't want that algorithm to be in the hands of non-Chinese investors or owners.

Tammy Haddad: So why does Secretary Mnuchin think he can do it?

Ylli Bajraktari: Because I think the China Select Committee has given TikTok a path to do the right thing.

Tammy Haddad: Right?

Ylli Bajraktari: If they want to do the right thing, I think American investors should invest there, because they can build a company around U.S. values and U.S. norms. And so I think that's why you have indications from, you know, Secretary Mnuchin and others that they might invest in this next TikTok. But I think we have to see whether TikTok will make that cut from [00:14:00] ByteDance and, ultimately, the CCP.

Tammy Haddad: I know you're not a technologist, but if TikTok is on somebody's phone right now, what guarantee do they have if someone else buys it? It doesn't seem like it could be a clean cut. I mean, do you think there is a clean cut there?

Ylli Bajraktari: The technologists you speak to will tell you that the separation is possible. I mean, if the data resides in the U.S. and U.S. territories, if the data centers are here, where you have access to the servers, and they're controlled by our technicians, probably with some kind of government oversight over what goes in and out, that is probably possible.
But I think first you have to see that separation of TikTok from its parent company, ByteDance. And then the next step would be how that gets executed. When I was in Australia, some of the conversations we had were much more open to these things, in terms of, look, we're not going to get rid of TikTok, but we'll have to find ways to mitigate the risk and [00:15:00] have enough control over what happens with Australian citizens' data, so it doesn't go over to the CCP for malign reasons. So I think that would be the next step: how do we create that arrangement for TikTok to exist here? I would also note that, you know, in India, a lot of people complained that these bans would lead to massive pushback or economic repercussions. But India banned almost all the Chinese apps. And I think one of the biggest ones was WeChat, if I'm not mistaken, which had 400 million users in India. And after two days, life went on in India without any challenges. They were able to pull the Band-Aid off right away and protect themselves. I worry because the next five years will be so difficult in our relationship with China, given their goals and ambitions and everything around their technological and military objectives, that we should not give China any opportunity to mess with our minds and our information space.

Tammy Haddad: And the election?

Ylli Bajraktari: And the election, for that [00:16:00] matter, inside the United States.

Tammy Haddad: And do you think that sale, or proposed sale, or whatever is going to happen, will be resolved before it has an impact on the election?

Ylli Bajraktari: So as you know, the House passed the bill. Now it's in the Senate, and we'll see what happens. But look, we're a democracy, Tammy. I'm also a realist in that these things take time. We'll probably go through court cases. We'll see, you know, a first action followed by a counteraction.
So I would be hopeful to see this before the elections, but realistically, you know, things move in a democracy in a pretty slow way. I think it's much better to do this the right way and achieve that separation properly than to rush and have the legal battles continue for many, many years with an uncertain outcome.

Tammy Haddad: Are any other countries contemplating the same kind of ban?

Ylli Bajraktari: I think the UK is following us, in that they'll have to relook at their approach based on ours. But I think there's enough bipartisan consensus right now on both sides to come up with solutions to address these issues, [00:17:00] not just TikTok specifically: export controls and closing any gaps that the China Select Committee has identified, the deficiencies we have in our organizing principles against China. You've got to remember, Tammy, our organizations and institutions were developed for a Cold War competition against the Soviet Union, which was primarily a military competition. Our Department of Commerce didn't play a huge role in the competition against the Soviet Union. But now I think the Department of Commerce is the centerpiece of the competition against China. Look at most of the policies that came out in the last two years. They came from the Department of Commerce, both promoting our domestic capabilities through the CHIPS Act and protecting our capabilities through export controls and everything else we do. So we have to go through a massive revamp of our institutions, because we're facing a full-spectrum competitor. They compete with us on FinTech, economic policies, military policies, technological [00:18:00] outreach. If you look at the global 5G map, it's mostly red.
Because we closed our eyes for several years, and we allowed China to subsidize Huawei and deploy it globally to countries that either didn't have a choice or were heavily subsidized to take that infrastructure. So now, all of a sudden, we woke up in a landscape in which most of the world is using 5G built and deployed by China. And we should never allow ourselves to be in a position like that again, where another technology "gets 5G'd," as we call it, in front of us. Because we now live in a period in which these technologies are setting the foundation of our country, of the future of our economy, and of the future of our national security.

Tammy Haddad: What do you think is the implication of the closer relationship between Russia and China on AI and these issues?

Ylli Bajraktari: They're demonstrating it by action. Look at what's happening in Ukraine. They're putting into practice the limitless friendship that they announced right before the Ukraine invasion. This is the [00:19:00] statement, a long statement by President Putin and President Xi on the nature of the relationship. I think they're demonstrating that through joint actions in Ukraine, whether it's providing each other with capabilities and spare parts, or easing export controls and trade barriers. North Korea is chiming in there. We know that Iran is providing Shahed drones there. So they are demonstrating their relationship on the ground in Ukraine right now, regardless of their public rhetoric about not helping each other. But I think we all know by now how they are achieving this kind of jointness on the ground against Ukrainian forces. And so I think, you know, we should judge them by their actions. As I always say, "This is the battleground we face, not the battleground we wish for."
And I think the battleground we face is really these four countries, what we call the axis of disruptors, coming together on trade, economic policies, and technological exchanges, because China is really strong on the technology side... build, manufacture, and produce... while [00:20:00] Russia is really strong in terms of human capacity. So I think you can see the early results of this cooperation on the ground in Ukraine, but probably elsewhere as well.

Tammy Haddad: I've heard you talk about AI supremacy: whoever has the best AI, the most AI, is going to win. And you still think the U.S. is ahead?

Ylli Bajraktari: Absolutely.

Tammy Haddad: And how do they get there? I know you're calling for 2025. Let's talk about intelligence and the military. Do you see progress in the U.S. in facing not just China, but all the other countries you mentioned and all the problems, or are they just moving at the usual pace?

Ylli Bajraktari: So let me first answer the question about AI supremacy. If you and I had lived when electricity was invented, would we say we need to be ahead in electricity? Because if we're not ahead, then others will catch up and surpass us and build their economy and their future on this new technology called electricity. The [00:21:00] story I usually use is that at the beginning of the last century, there were lamplighters in New York City: people who would go out every night and light the streets of New York City. Within five years of the arrival of electric lighting, that profession disappeared, and I think within seven years the lamplighters' union disappeared. These are massive technological tsunamis that happen once in a generation. You had it with electricity. I think nuclear weapons were probably the second biggest one, because our main competitor was developing nuclear weapons. We were developing nuclear weapons for mutual destruction. And I think we went all in.
I mean, the whole "Oppenheimer" movie that's now popular shows that critical mass of our scientists and our military people getting together to come up with the next-generation weapon systems, because our adversary was rushing ahead. AI, I think, falls into that category. If we don't master AI, we will be behind, because our main competitor is [00:22:00] rushing ahead, and not just our main competitor. This is now a race for the AI of the future, because the economy is going to be built on this AI. Our future education will be built on this AI. Our way of life will evolve around AI, and ultimately our military will use this AI for the purposes of military operations. Now, to your second question about whether our intelligence community is using it: I believe we are at two steps forward, one step backward at the moment, because every new technology presents a new opportunity for all our government agencies, including the intelligence community. I think AI will help our IC enormously, because if you look at the massive amounts of data, the massive amounts of text that our IC deals with, and ultimately the products that they release for policymakers, these foundation models will provide an enormous benefit for our intelligence analysts.

Tammy Haddad: Can you talk about the next battlefield? What does that look [00:23:00] like?

Ylli Bajraktari: Yeah. Well, I think we're seeing glimpses of that in Ukraine right now. We call it Offset-X in one of our documents.

Tammy Haddad: Offset-X?

Ylli Bajraktari: Offset-X.

Tammy Haddad: Okay.

Ylli Bajraktari: Which basically says you need three elements for the future battlefield. You need to dominate software, and our argument is that America leads in software; therefore, our men and women in uniform should lead in software. What we mean by software is AI, data, algorithms; they all have to be set up in a way that works. So when you ask me what "DoD ready by 2025" means, it's that all these systems work.
Eventually you'll get to a point with AI, as you know, Tammy, where it will be in the background, where everything we do is somehow with AI, but you don't talk about it. You don't say, see, I'm using AI; it's there in the background, like electricity. You walk into a room, and there is electricity. That's how the future will be. So our argument about being AI ready by [00:24:00] 2025 is that the Department of Defense will be that AI ready, that we don't struggle anymore with saying, "Hey, we need to buy more software, we need to bring in more people, we need to change the procurement systems." It's there, and it works. So software supremacy is one thing. The second piece, as you see in Ukraine, is that the future battlefield is a lot of unmanned, small drones, deployed, distributed, talking to each other. I think Ukraine has shown us how you can achieve battlefield successes by deploying massive amounts of drones. Now, unfortunately, Russia caught up fast, because they saw the future and geared up their domestic industrial base to produce these drones. But our system has for many, many years been built on building massive platforms that are now easy targets for these small, cheap platforms. So I think we have to flip the switch and start building, at scale, these networked, distributed [00:25:00] platforms that can talk to each other, that can communicate in non-permissive environments where there's no GPS, and that can guide our soldiers on the battlefield, because our soldiers don't have to be first in line. You can deploy the system to do reconnaissance, to give you a visual of what the battleground looks like. So I think that's the second piece of our Offset-X. And the third piece, which we argue for strongly, is the human-machine team. We're moving into a space, and I say this personally: for the next three years, we're going to live in this co-piloting stage. Everything we do will be with these systems.
Whether you're a soldier, an intelligence analyst, a podcaster, or a writer, over the next three years your life will become much easier, because you will be working with these models. And so I think we just have to train people to do everything, including military operations, in this human-machine collaboration and combat teaming: humans will guide these [00:26:00] systems. They'll know more because of these systems. They'll be able to target more because of these systems. But this requires a new way of thinking, new educational opportunities, new training for our men and women in uniform. The co-piloting stage will really allow us to be better, to be more productive. And then I think we will enter a different stage, after maybe this co-piloting stage, in which these systems will probably be in the lead, because they'll be better, they'll be faster, they'll communicate with each other, and we will be there in the background, organizing and prioritizing what they do.

Tammy Haddad: And will the military be safer because it starts with AI and not individuals?

Ylli Bajraktari: 100 percent, because if our goal is to protect our men and women in uniform, then you will not put them in the first line of the battlefield. You will put them somewhere much, much more at a distance, where, with the help of these systems, they can see forward. They'll be able to strike [00:27:00] forward. I think Bob Work calls it "see deep, strike deep" platforms. And these are cheap platforms. So our men and women in uniform will hopefully be able, with these integrated platforms, to use many of these distributed platforms forward to give them the battlefield advantage.

Tammy Haddad: All right, so let's stay on the battlefield for a minute, because on this side we see Starlink and Elon Musk coming in. What are the implications of this private company, and private companies like this, being right in the middle of the battlefield?
Ylli Bajraktari: No, I mean, I think the Starlink episode really reflects what has been happening since the end of the Cold War, where innovation moved to the private sector. Now, as a reminder, because at SCSP I always have to go back to the Cold War: during the Cold War, innovation was happening in government labs. That's where, you know, the internet was invented, through DARPA studies and investments. That's where, I think, the early model for Siri was invented. [00:28:00] But what we did as a country when the Cold War ended is we decided we didn't have to make any more government investments in basic R&D. So Congress cut the funding. We went from investing 2 percent of our GDP in basic R&D through universities and startups to less than 1 percent. And I think the innovation then shifted to the private sector. And you see the explosion, since the early 2000s, of these tech companies that invested so much. And I think Starlink is a result of that: finally, you know, an individual with enormous capacity and enormous resources went and invested in an area that the government had abandoned, probably for years, because we went to invest in large satellite platforms and abandoned satellite launch completely. He found a gap that we had abandoned, invested a lot, and, luckily for us, because it's a U.S. company, it proved to be successful. Now we are in a dilemma [00:29:00] in which, you know, war decisions are being put in the hands of a private individual in the United States who has enormous capability. And I think, as we have said at SCSP, we need a new public-private model for how we operate going forward. I also like to remind people that we're facing a competitor that has civil-military fusion. What does that mean? It means that everything the private sector does in China can automatically end up in the hands of the government. There's no question there.
If the CCP wants to get hold of any algorithm, any satellite, any cyber capability, they'll get it within a minute. In our system, that doesn't work. And I'm not arguing that we should ever be like China. This is America. We're different. But we need a different model for how we build a public-private partnership. And I really think that's what SCSP wants to build, because I think you cannot have the government coming up with AI policies absent some kind of conversation with [00:30:00] technology people. Otherwise, we will be developing policies that are not going to stand the test of time, because this technology is moving fast. But also, I think we need a whole-of-nation approach to public-private partnership, because the private sector in the United States is what's leading innovation. I'd like to remind everybody that in the last three years, two critical innovations came from the private sector in the United States: the COVID vaccine, with the speed and scale of its global deployment, and ChatGPT. These are two inventions that came from our companies, our labs. So we need a different model for how we bring this community together. And that's what we do at SCSP, to be honest, Tammy, every day.

Tammy Haddad: So, let's talk about ChatGPT. It comes out and the whole world changes. You guys are aware of it. The Chinese are aware of it, right? These models, these companies that we're seeing now, how do you see, if you can, their future? [00:31:00] In terms of how it's used.

Ylli Bajraktari: Our companies or all the companies?

Tammy Haddad: No, I mean, AI companies now. Like NVIDIA, we know where they are, but like OpenAI, Cohere, all these companies, are they going to exist?

Ylli Bajraktari: Yeah, I mean, look, right now most of these frontier models are concentrated in a handful of companies, as you know, because these models are expensive to build. You need enormous data, and you need enormous resources to build the hardware.
And so I think you have a concentration among a handful of companies for these frontier models. There are also enormous resources going into AI right now, as you know, a lot of new companies being built, a lot of new capabilities being built. So, you know, I remember somebody said we will probably see the next big companies being developed in this era. I don't think we have seen the last of these companies. Look at Anthropic, for example. They came out two years ago; [00:32:00] look at where they are now. They have tremendous worth, but also some of the best models we have. Claude 3 was released last week, and some of the early indications are that it's probably as good as, but maybe even better than, GPT-4. Tammy Haddad: What's your take on open source versus not open source? Ylli Bajraktari: So I think right now you have two pathways in terms of these large language models. You have the proprietary models that I think we are all familiar with, and then you have the open-source model that was released by Meta. Llama 2 is, I think, the latest model, and there are predictions that Llama 3 will be released this summer. I think there will be space for both camps, and both of them will provide some kind of added value in the market. Now, one challenge we have is that our ecosystem generally works because we have the triangle of innovation: the private sector, [00:33:00] academia, and the government working hand in hand. Right now the problem we have is that academia doesn't have enough resources to have access to these models. Now, as you know, Congress has worked on establishing the National AI Research Resource (NAIRR).
I think OSTP with the National Science Foundation are working on the test case for these things, but I think we have to come up with a model in which academia has access to these models, because I think— Tammy Haddad: What do you think it should be? Ylli Bajraktari: So I think it has to be two things. Number one is I think we have to really scale up the NAIRR. I think the initial resources dedicated for this are not sufficient. Tammy Haddad: So that's government? Ylli Bajraktari: That's government. Because I think that will allow academia and small entrepreneurs access to these cloud systems to play and build the next generation of innovation systems. But the second thing is, one of the things we've argued for is, during the Cold War, for example, we gave government labs and university labs [00:34:00] the capabilities to build protection, safety, and trust around nuclear weapons. The government gave them those capabilities. Those capabilities now come in the form of hardware. Only if you have the latest and greatest from NVIDIA will you be able to build these models and test them in terms of safety, in terms of risk. I really think that government should step in and buy hardware for universities. Then they can test these models, evaluate them, and give us the right and left limits on what these models can do. Because otherwise, academia will be left so far behind. Tammy Haddad: There's a lot of worry about that. Ylli Bajraktari: 100%. 100%. Because, as I said, these models are really expensive. Hardware is not easy to get, especially the high-end hardware. But another angle to this that we have advocated for is: if these export control rules will hurt our companies and their ability to export to China, then the government should [00:35:00] step in, buy this excess hardware, and give it to universities.
So our companies are not hurt, but you have also distributed this hardware to universities that have the talent. They have the people, and they're willing to do these things, so we can balance the triangle of innovation in our country. Tammy Haddad: And are you working on that? Ylli Bajraktari: Yes. I have a piece coming up, hopefully. Tammy Haddad: Well, I mean, it does balance things out, right? Ylli Bajraktari: It does. I mean— Tammy Haddad: Because then, to your point, you're going to have small companies with innovation, but there's no way to really scale it unless there's private investment. And then it's all individual, as opposed to universities elevating everything. Ylli Bajraktari: And look, obviously it requires a new set of resources from government. And as you know, getting new resources is always difficult. But even Secretary Raimondo argued a couple of weeks ago that she also foresees a CHIPS Act 2. Because I think CHIPS Act 1 was only to jumpstart these capabilities. Tammy Haddad: Yeah. Ylli Bajraktari: It [00:36:00] will only get you a lead for a short period of time. It doesn't guarantee you will have that lead forever. Right? Because China is investing enormous resources here, and they're building their domestic capability to really block our companies. Tammy Haddad: So we've got to make a complete turn here and talk about elections. Okay, around the world as well as in the US, because the power of AI has already been demonstrated. We're not going to go through the details. What do you see as possible fixes? Ylli Bajraktari: Before the elections? Tammy Haddad: Before the elections. Ylli Bajraktari: This is a difficult one, because you're racing with a technology that is changing by the day. I mean, you mentioned the release of ChatGPT, but how much more advanced these models have become since then, right? November of 2022. So that's one thing.
So you're racing with that. The technology is open source, which means it [00:37:00] can be in the hands of anybody: state actors, bad actors, non-state actors, right? Most of the democratic world is going through elections. As you know, I think more than a billion people are voting this year in elections. Here, in the EU, India, Indonesia, you name it. Yeah. We have argued strongly that if you are our adversary—China, Russia, North Korea, Iran—this is your once-in-a-lifetime opportunity to mess with our social cohesion, to mess with the democratic system and the outcomes. So what can you do? Number one is you make sure that the institutions we have in place have robust authorities and policies in place to really protect our system as best as possible. And I think this is more from the outside-adversary perspective, because we will not close our democratic platforms. You will never be able to close off our platforms, like Meta, and X, and Insta, from our adversaries, because [00:38:00] we are a democracy. And we have seen time and again how they try to circumvent the system, how they build troll farms, etc. So we just have to be better prepared for these elections. I don't think we'll be bulletproof. I think we have to educate our citizens about the risk. Tammy Haddad: By the way, how do you see citizens being educated, not just about that, but about AI? Have you seen anything that works? Ylli Bajraktari: Taiwan. Taiwan's elections were a great model. If you look at Taiwan, the number of cyberattacks they go through daily from China is mind-boggling. I don't have the statistics here with me, but we can send you those afterwards. But they were able to build a solid election process with a legitimate outcome, and they were able to tell their citizens, “Hey, this is a complete deepfake or platform speculation, and this is real.” So they worked really hard in educating their [00:39:00] citizens.
And not to mention their position is a really sensitive one. So they got ahead of this and produced a really strong election outcome. So you can learn from some of these models. You can, you know, learn from the experiences. I always like to believe that we are better off, because I think we have now seen in the past what has happened. Tammy Haddad: But it feels like, if you look at all the polls today, people have forgotten all of that. The New York Times had a front-page story that all of this disinformation works better now than it did in the last election. So what about what OpenAI is doing? They're the only ones that I'm aware of now that have come out and said, if AI is used in any of these campaigns, they'll take it down. Ylli Bajraktari: In Munich, most of our tech companies announced a coalition for election integrity. There's an organization called Truepic that does fantastic work in this space. They're part of the Coalition [00:40:00] for Content Provenance and Authenticity, C2PA it's called. What they do is work really hard on identifying images and videos that are generated by the models. They might have a watermark behind them that says, “Hey, this is generated by an AI system,” or “This is generated by a human.” It's a tremendous coalition and organization that works in this space. We are a big supporter of Truepic. They're going to be at the expo you mentioned earlier. But there are some efforts in this space in terms of, you know, if you see tomorrow a fake image or a deepfake, how we can identify them. I think the technology has also caught up in terms of identifying these models and deepfakes, and it can show you, “Hey, this is not a human-generated image or video. It was generated by a model used by who knows who in this space.”
Tammy Haddad: Congressman Obernolte suggests that while we're trying to trademark, watermark, [00:41:00] and identify the AI-generated information, we really should be identifying the authentic and original information, that we're looking at it wrong. Do you agree? Ylli Bajraktari: Could be, could be. But look, the power of these models is so strong that I think it will be difficult for humans to try to identify all these things. So you need to come up with a technical solution and really filter out the junk that exists. Because otherwise, the speed and scale of these models is so enormous that I think as a human, it's going to be impossible to do these things. So you have to come up with some kind of a technical solution. Now, as you know, it will be up to tech companies to adopt these mitigating actions. But I think they also acknowledge that a lot of these things will come at their cost. If a deepfake goes wrong, or an image generator goes wrong, who do you hold accountable? Tammy Haddad: Well, that's the issue. Who do you [00:42:00] hold accountable? Alright, let's turn to the AI Expo. May 7th and 8th, Washington, D.C. You're taking over the whole convention center. Tell us about it. Ylli Bajraktari: Our goal as a project from the beginning has been to really bring the community of private sector and government into one place. Because, as I mentioned from the beginning, we lead as a country when these two communities are working hand in hand for, you know, our country's innovation and building the next generation of capabilities and technologies. And so we were always working on how do you bring these two communities together at scale, and I thought nobody has ever organized an AI expo in Washington. There are many departments and agencies in Washington that would benefit by seeing, experiencing, and maybe eventually adopting these kinds of technologies for their purposes and their mission.
And so we have reserved one floor of the convention center. It's the size of four football fields. And there are several elements to our AI Expo. Number [00:43:00] one is we're going to bring 100-plus exhibitors. This includes small, medium-sized, and large tech companies. It includes government labs, it includes universities, to showcase what they can do, not just with AI, but all the emerging tech. So that's one area. The second area of the Expo is we have three demo stages where these companies will really showcase what they can do with AI, quantum, bio, you name it, to the visitors at the Expo. We have a center stage there, where we will bring, you know, CEOs from tech companies, cabinet members, decision makers, and members of Congress to talk about these things. On the side of this, we're also organizing the Carter Exchange to honor the late Secretary Carter, where the conversation will be focused primarily on the national security aspect of innovation and technology, which was really something that he advocated for. And then, lastly, we have six side rooms, and we're partnering [00:44:00] with private sector companies, government, and non-government organizations to organize a set of conversations, fireside chats. In total, maybe about 40 of those in these side rooms, where people can enjoy some really interesting conversations about AI safety, responsible use, the future of supercomputing, all conversations that aim to bring this community together. So we expect thousands of people over the course of two days. Tammy Haddad: Can anyone attend? Ylli Bajraktari: Anyone can attend. You just register at SCSP.ai, and you can attend, and hopefully— Tammy Haddad: You can podcast from there! Ylli Bajraktari: You can podcast from there, which, hopefully, we can get you to do. Tammy Haddad: Yes. Ylli Bajraktari: I also encourage students, because Washington has 10 universities.
So if we can get students to come out there, they can meet potential recruiters. We will have some of the major companies there recruiting people on site, and universities, and government. Right. [00:45:00] And so this is a tremendous opportunity. It's in May. You're about to graduate. If you don't have a job, come to the AI Expo to meet recruiters. We will have a pavilion for future recruitment. You will have so many opportunities to engage, listen to conversations, and maybe land a job. Tammy Haddad: Oh, interesting. Also, learn what AI is doing. How about that? I can't tell you how many people have asked me: How do I learn about AI? What do I do? Is there a class I can attend? Is there a special course? Because in every conversation, I don't need to tell you, it's different for everyone, described in a different way. The experience is different. Ylli Bajraktari: Definitely. Tammy Haddad: And it keeps changing, to your point, right? So are you saying that, let's just use ChatGPT because we started there. ChatGPT came out two years ago. Would you say it's advanced? Maybe you can't predict like three years, five years, one year? 'Cause there's always these conversations about how far forward things are going. I find it remarkable [00:46:00] from using it a year ago to today. Ylli Bajraktari: No, I think you're absolutely right. All these models have gotten better and better. Remember when they first started, they were not generating images. And then we had the next wave, when they started generating images based on the prompts. I use them all the time. I think the best way to think about this is: for what purpose do you want to use AI? If you want to use AI to write performance evals, it's a tremendous aid. A lot of us have worked in government. You spend hours writing performance evals for yourself, for your colleagues, for the people you supervise or who supervise you.
You can generate these things so much more easily and so much faster with these models. So it really depends. If you have little kids and you want to help them with math, Khan Academy has an amazing app called Khanmigo, which acts as a personalized tutor for your kid, whether it's in math or chemistry or biology. It will [00:47:00] help your kid understand what two plus two is. It will not give you the answer, but it will walk you through the logic of how you get to four. And so it just depends. I think we'll see a lot of personalized health care in this space. You'll see personalized education. You'll now probably have an explosion of agents that will do a specific task for you in this space. So it really just depends on what you will use it for. In terms of where you can learn a lot of these things, I think Coursera offers a lot of free courses online. Even on YouTube you have a lot of courses that companies have produced to bring you up to speed on what large language models are, a 101. We went through a cyber education wave, both in government and outside, where you have to have a certificate every year in government to be eligible to use these cyber technologies. I think we'll go through a wave of being educated in AI: what it means, what the limitations are, [00:48:00] how it can help you do your job better. But I really recommend people just work with them. Right? I think the one way you build trust in these systems, even knowing they're not right 100 percent of the time, is by trying them. Eventually you will notice the errors they're making, but I use them all the time, as I said, and they help me a lot in so many tasks. Tammy Haddad: Well, I can't thank you enough for being here and for all you're doing. I look forward to seeing you at the Expo. And for all of you listening, you've got to join us there. The AI Expo, May 7 and 8, right here in beautiful Washington, D.C. at the Convention Center.
Thanks so much for being with us. Ylli Bajraktari: Thank you for having me, Tammy. Tammy Haddad: Thank you for listening to the Washington AI Network podcast. Be sure to subscribe and join the conversation. The Washington AI Network is a bipartisan forum bringing together the top leaders and industry experts to discuss the biggest [00:49:00] opportunities and the greatest challenges around AI. The Washington AI Network podcast is produced and recorded by Haddad Media. Thanks for listening.