WASHINGTON AI NETWORK PODCAST TRANSCRIPT Host: Tammy Haddad April 26, 2024 Guest: Joelle Pineau, V.P. of AI Research, Meta
Ep. 16 - Joelle Pineau

Tammy Haddad: [00:00:00] Does everyone here have these fantastic Meta glasses? Do you have your glasses? Okay, everyone has their glasses. Oh, look, look at Lynda with the glasses. I've never seen people so happy with a giveaway, Joelle.

Joelle Pineau: It's exciting.

Tammy Haddad: Have you ever seen people so happy? It's so good. Yes, we have one more seat. Josh Ginsburg, back from the West Coast. Here he is. It's a reunion. Thank you so much. All right, we're going to get started. Everyone good?

Tammy Haddad: Welcome to the Washington AI Network podcast. I'm Tammy Haddad, the founder of the Washington AI Network, where we bring together official Washington, D.C. insiders and AI experts who are challenging, debating, and just trying to figure out the rules of the road for artificial intelligence. The conversation you're about to hear took place at the inaugural T.G.A.I. Friday lunch on Friday, April 26th, to kick off the 2024 White House Correspondents' weekend. The [00:01:00] Washington AI Network took over the House at 1229 to showcase Meta's latest AI technologies, including the Quest 3 mixed-reality VR headset; Spotty, the AI robot dog; and my personal favorite, the Ray-Ban Meta smart glasses. In this episode, you'll hear from one of the world's leading AI researchers. My guest is Joelle Pineau, Vice President of AI Research at Meta. Join us as we discuss the future of artificial intelligence.

Tammy Haddad: But before we get started, I hope to see you at the AI Expo for National Competitiveness. It's May 7th and 8th at the Walter E. Washington Convention Center in Washington, D.C. No matter what you're doing in AI, technology, and government, you don't want to miss it. We'll also be recording podcasts for the Washington AI Network from the heart of the Expo, including Washington Post columnist David Ignatius and Elham Tabassi, CTO of NIST. See you there, and thanks for listening.
Tammy Haddad: Welcome [00:02:00] everyone to the Washington AI Network. I'm Tammy Haddad, your host, the founder of the Washington AI Network. We're here at the House at 1229. Welcome, everyone. And it's so good you're cheering, because for the last six, seven months we've been bringing people together to talk about AI. But now, today, we have one of the great pioneers, Joelle Pineau, who's Vice President of AI Research at Meta. Let's hear it for Joelle.

Joelle Pineau: Thank you. Thank you.

Tammy Haddad: Thank you so much for coming to Washington. You see the interest you have in the room for what you've done and how you've done it. They have stepped through our demos. The Quest 3. Who did the Quest 3? Right. What song did you perform on the Quest 3? "House of the Rising Sun," that's what I did. I sang too. Joelle, I cleared the room. Sorry about that. And we've got the Meta Ray-Bans, right? And we've got Spotty, the [00:03:00] robot, and the fantastic AI team.

Tammy Haddad: There was Lynda Carter herself, getting a ball from Spotty. What an incredible day. We're so honored to have you here with us. But we have to start: how did you get into AI and research?

Joelle Pineau: I've been in AI research for a couple of decades now, and it was really hard to imagine 20 years ago working in AI. You know, I did my graduate studies, master's and PhD, at Carnegie Mellon University in Pittsburgh, where they have an amazing, yes, some Pittsburgh folks here, they have an amazing robotics program. And we were toiling away. I think for me, really, the driver has always been just curiosity. Just the ability to use technology to solve problems and to understand from the bottom up: "How do we build these systems, but then how do we take them out into the world?" Put them in people's hands to solve real problems. So that's been my driver on this journey.
I had no idea I would be in this wave of [00:04:00] excitement, interest, concern, curiosity about AI when I started. But it's been really great to see how the field has changed.

Tammy Haddad: Well, everything did change when ChatGPT rolled out in November 2022, right? Tell me about that day for you. I know it wasn't one of yours, but you're in AI, and I feel like it changed everything. How about for you?

Joelle Pineau: That day was so interesting. And, you know, if you go back to my beginning as a researcher, actually, you can go back to my thesis: we were building dialogue systems with the technology from the late '90s, far, far away. So I'd been watching for many years what we could do in dialogue systems and conversational agents. I built many of them.

Tammy Haddad: Is that text, or is that conversation, or both?

Joelle Pineau: It's text, but it's the ability to use text for engaging conversation, communication with people, which is different from just generic text, right? There is a science, but also an art, to building a [00:05:00] system that yes, generates text, but does it in a way that's informative, engaging, and that really captures people's interest. And so I've been watching this field for many years. ChatGPT comes out, and I have to say, it was interesting to see it. There's nothing there that it did that I thought was impossible. Everything it could do, I thought, was within the realm of possible; I just didn't think we'd get there this fast. And then, once we learned a little bit more about how they had built it, there were techniques in there that we had tried, including, you know, in my own labs, that didn't work, and suddenly it worked. And there's a magic there, both in terms of the mathematical ideas, the algorithms, but there's also a magic of: you put great people together, you put solid engineering together, you scale it up, build a really big model, and suddenly it opens up new possibilities. So that's really what ChatGPT was.
It wasn't like [00:06:00] suddenly it did some magic I had never anticipated, but it just did it so well, faster than I thought.

Tammy Haddad: And you have teams and teams of researchers working on AI. How do you figure out where to put resources? How do you decide, with all of this potential technology?

Joelle Pineau: Yeah. I mean, the team that I support, the Fundamental AI Research team at Meta, has...

Tammy Haddad: You should really explain that, right? Because that takes it back to generative AI.

Joelle Pineau: And so this is a team that has existed for 10 years, started by Yann LeCun, Chief AI Scientist, back at the end of 2013, beginning of 2014. A lot of what we do is just hire amazing people, amazing researchers, who have a track record of solving really hard problems and doing it really well, and then give them a lot of freedom to pick what problems to work on. If I go and tell all of my researchers what to work on, it will not be nearly as good as if I just give them the space and the tools to ask the questions. [00:07:00] Over time, we are pretty strategic about covering essentially all the areas of AI. So we have some teams working on perception, building models of vision and video, and some folks got to maybe see the Segment Anything work and so on. We have people who do robotics. Meta isn't in the business of building robots, and yet, when you build robots, you build AI systems that understand the physical world, and some of the models we built in our robotics team are now the models that are powering the Meta AI glasses, because you need to understand the physical world when you go around outside. You're not on your phone. You're not on your laptop. You're out in the world. It's a different type of information. So, we build up teams that cover all the areas of AI. We often have the responsibility to solve the problems that Meta as a company doesn't know it needs solved yet. We look five years, ten years into the future.
And then, we give them access to a lot of compute power to build very, very large models, and let them run [00:08:00] with it.

Tammy Haddad: So were you the first woman in AI? I mean, I see you back at Carnegie Mellon University. By the way, I went to Pitt. I couldn't get into Carnegie Mellon. Now I couldn't get into Pitt either. But anyway, so were you the only woman in the room?

Joelle Pineau: I would say I was one of a few. You know, in my cohort of graduates in robotics, there were about 20 of us, and actually six women, many of whom are doing amazing things. Two of them are at NASA driving rovers on Mars. Just so inspiring. So, I would say one of the reasons that for me this was an amazing journey is because I always found that I had at least a little bit of a community. And beyond the women that formed that community, I was also always in places that had an incredible commitment to building diverse teams and helping a diverse set of people be successful, whether it's at Carnegie Mellon, whether it's some of my academic journey, or coming to Meta. And that was huge in helping me.

Tammy Haddad: Okay. I'm going to ask the [00:09:00] question everyone else is too embarrassed to ask. What is AI?

Joelle Pineau: Yes. AI is essentially the ability for a machine to have cognitive powers, to display cognitive abilities that are common in human intelligence, but built with a machine. The term AI actually dates back to 1956; really, it's been around for a while. And for many years we built AI by solving different pieces of the problem: understanding images, understanding language, reasoning, forming memories, planning. These were all separate subfields. Out of all of these, there's one capability that is actually key to unlocking all the others, and that is learning, what we call machine learning, the ability for machines to learn from data. So when we say AI today, we mostly mean machine learning.
We mean the ability to ingest large amounts of data, use that data [00:10:00] to understand information, and then make predictions. And these predictions can be pretty narrow: predicting, you know, what's the object in an image. Or, more and more, the prediction can be very complex. And so this year we've heard a lot about generative AI, the ability for machines to generate new images, generate language, generate music. That's still just predicting information; it's just predicting incredibly rich information. And so that's been the big pivot of the last year and a half: the ability to move from predicting narrow things to predicting incredibly rich things.

Tammy Haddad: I was going to say, we should go right to Llama, right?

Joelle Pineau: Yes.

Tammy Haddad: ChatGPT comes out, and then Meta comes out with Llama, and the whole world changes.

Joelle Pineau: Yeah, I mean, Llama is our large language model, which we first released in February 2023 for a research audience, as a research model. Since then, we've released Llama 2 last summer and Llama 3 [00:11:00] just last week, our largest model yet, which people can now have access to through the Meta AI assistant on most of our platforms: Facebook, Instagram, Messenger, WhatsApp, and so on. Llama is a really different value proposition than GPT, in the sense that we've really focused on having the highest-quality model, but making it open. That means that we share the weights of the models. Other teams, research teams and universities, startups, large companies, can actually take the model. They can build on it. They can shape it to their needs, and that has really opened up tremendous energy across the ecosystem. I have a tree of the Llama derivatives; think of it as a little bit of a family tree, starting with Llama and all the different derivative models that have been built by bringing in different data, by bringing in new ideas. And it's really amazing to see.
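Editor's note: Pineau's definition of machine learning above, a system that ingests data and makes predictions, can be sketched concretely in a few lines of PyTorch (the library her team is credited with building later in the conversation). This is a hypothetical, minimal illustration, not Meta code: a one-parameter linear model learns to predict y from x by gradient descent.

```python
import torch

# Tiny "learning from data" example: the data follow y = 3x + 1,
# and a single linear layer learns to predict y from x.
torch.manual_seed(0)
x = torch.linspace(-1.0, 1.0, 100).unsqueeze(1)  # inputs
y = 3.0 * x + 1.0                                # targets to predict

model = torch.nn.Linear(1, 1)                    # one weight, one bias
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(300):                             # gradient-descent training loop
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

w, b = model.weight.item(), model.bias.item()    # learned values approach 3 and 1
```

The same loop, scaled up to billions of parameters and trained on text rather than a toy line, is conceptually what produces a model that "predicts incredibly rich information."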
Llama 2, from the summer: we [00:12:00] had essentially 19,000 derivative models built on top of that, from teams around the world, from a model that we shared freely and openly with the community. So I think that's pretty, pretty exciting.

Tammy Haddad: I was going to ask: Vineet Khosla is here, he's the CTO of the Washington Post. So, explain open source for those of us who don't code. He could take Llama, it's all completely available through open source, and he could build something?

Tammy Haddad: Oh, that's great.

Joelle Pineau: People often think of the companies and the startups, but we've also had tremendous feedback from people in government who don't necessarily have access to the large compute infrastructure to train this. There's incredible ability to build additional value when you have the weights and you can really change them and customize them to the needs that you have.

Tammy Haddad: [00:13:00] I guess the hardest part for me is how you have all these people who are aware of AI, and they know it's going to change their lives. And one of the great things about today, with the demos that we're doing, I think people actually are seeing how it works. But you're a teacher. So how do you talk to people about it and explain that? Even for your researchers, how do you explain to them, here are all the possibilities, you come up with your own idea, and you use this to change the world? I guess what I'm asking, in a really long, ridiculous way, is: is it harder because of Llama, now that everyone is aware of it, or is it easier? And how much time do you have to teach people what you're talking about?

Joelle Pineau: Hmm. I mean, I genuinely enjoy teaching people. I'm genuinely excited about this technology. So it's always a gift to have people who want to talk about AI with me. I don't resent it in any way. I think people forget sometimes how early in the journey we [00:14:00] are.
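Editor's note: the open-weights workflow Pineau describes, a base model trained once, its weights released, and downstream teams adapting them, can be illustrated with a toy PyTorch sketch. The two-layer network below is a hypothetical stand-in, not the actual Llama architecture or release mechanics.

```python
import torch

torch.manual_seed(0)

def make_model():
    # Toy stand-in for a released model architecture.
    return torch.nn.Sequential(
        torch.nn.Linear(4, 16),
        torch.nn.ReLU(),
        torch.nn.Linear(16, 1),
    )

# "Releasing the weights" amounts to publishing the trained parameters.
base = make_model()
released_weights = base.state_dict()

# A downstream team loads the released weights into the same architecture...
derivative = make_model()
derivative.load_state_dict(released_weights)

# ...freezes the early layers, and fine-tunes the rest on its own data.
for p in derivative[0].parameters():
    p.requires_grad = False

x = torch.randn(64, 4)                     # the downstream team's own data
y = x.sum(dim=1, keepdim=True)
trainable = [p for p in derivative.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.05)
for _ in range(100):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(derivative(x), y)
    loss.backward()
    optimizer.step()

# The frozen layer still matches the released base weights.
frozen_unchanged = torch.equal(derivative[0].weight, base[0].weight)
```

Each of the 19,000 derivatives mentioned is, conceptually, a different continuation of this loop: the same released weights, customized with different data and objectives.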
People feel like AI has just bulldozed its way into our lives. We are just at the beginning of this journey. The models we are building today are nothing compared to the models we're going to have in five years. And so really, I think people have to embrace this phase of exploration. There's a lot of exploration going on in research, but we also need a lot of exploration in terms of: What are the products we're going to build with that? How are we going to figure out the experiences that people genuinely want to have with AI systems in their lives? And even, what are the guardrails we put into that, and so on. So, that for me is the whole reason to have an open approach right now. It's because we get this in the hands of lots of different people, including people who have different points of view. And from that, we can have a global conversation about what should be the future of AI. If we just keep this inside, yes, we can develop it, we can build it into products that billions of people will use and love. That's wonderful; I'm excited about that too. But [00:15:00] honestly, all of that's going to get a lot better when we have many more people participating in the development, in the innovation, and in the conversation. So, as an educator, I feel my role is to, yes, build the research, but also to demystify, to help people understand. And that's just like opening up a door into the conversation for many people.

Tammy Haddad: So, Vineet, he comes to you, he makes the deal with you guys on open source. Do they just take the product and go with it? Is there some sort of feedback loop, which helps you develop more and more AI ideas, to decide where to go with all those GPUs? No one asked her, I hope someone asked her today, how many GPUs she has, because to me, that's the key question.

Joelle Pineau: I mean, that is certainly the intent. And to be honest, we don't ask for a lot in return when we share the model openly.
We do share a lot of tools and information with it, but with 19,000 people building derivative models, you can imagine we don't necessarily have the bandwidth to follow up with [00:16:00] each of them and form fully fledged partnerships. So that's a little bit the challenge, in fact. There's so much exciting innovation, and even just keeping track of it, and figuring out which pieces of it to bring back into the next generation of our model, is right now a little bit ad hoc. Ideally, we'd be very intentional, and I'm always delighted to meet people who are using the model, but right now it's still a little bit ad hoc, just because the wave of excitement has been so big compared to the capacity, and at the same time our research teams are actively building the next generation. We released Llama 3, but only the smaller models; the larger model is still training. We will build a Llama 4, and so on. So that's something that I think will get better over time. People may not be aware, but our team also built PyTorch. For those who are not deeply in the field, PyTorch is essentially a machine learning library. So if you're building machine learning models today, chances are very high, unless you work at Google, that you're using PyTorch, across the industry, across the academic community. [00:17:00] We built PyTorch multiple years ago now, released around 2016. With PyTorch, we've built, in a very intentional way, a community, a way for people to share back and get involved in the development. We even transferred PyTorch to an external foundation in the last year. So, over time with Llama, we will get more organized about how to do it. We already have an amazing community, and we really hope to share back, but also bring in all the work that these people are doing.

Tammy Haddad: So much is going on in AI, it's sort of hard to keep up. She actually told me earlier that when ChatGPT came out, most of the OpenAI people didn't even know about it in the company.
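Editor's note: the core feature that makes PyTorch "essentially a machine learning library" is automatic differentiation; it computes the gradients that gradient-based training depends on. A minimal illustration:

```python
import torch

# Autograd: PyTorch records the operations applied to a tensor and can
# differentiate through them automatically.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2          # y = x^2
y.backward()        # compute dy/dx

grad = x.grad.item()  # dy/dx = 2x = 6.0 at x = 3
```

Every model built on PyTorch, from a toy regression to a large language model, relies on this mechanism to update its weights during training.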
How do you keep up with all the information so you can make the best decision?

Joelle Pineau: Mm hmm. It's hard.

Tammy Haddad: Symone thinks that's a good question.

Joelle Pineau: It's hard to keep up.

Tammy Haddad: Symone Sanders said, "I would have asked that question."

Joelle Pineau: Yes. Two things. One is, I do consume broadly a lot of information, whether it's online [00:18:00] information; I read research papers, listen to podcasts, and so on. I work for a social network company; I lean deeply on my network to share good information. My research teams know, if they read something exciting, they send it to me and say, "Have you seen that?" and so on and so forth. There's a lot of exchange of information that goes that way. The other thing, honestly, is just to be cognizant of the fact that I don't have the ability to absorb information at the same speed as Meta AI, and so there's a lot I'm going to miss. That's okay. If I really need to know, I'll find out in time, and I don't sweat it. But if I spend all of my time online trying to read everything and follow everything, I won't have any time to do the deep work. So it's also deciding: how much energy do you put into following information, and how much time do you save for yourself to do the deep thinking work, which is important also.

Tammy Haddad: When you're in conversations with Mark Zuckerberg, another technologist, it's not like you're talking, you know, with all due respect to the communications people in the room, [00:19:00] to the communications team, or the advertising team. When you're talking about these issues, how do you... I don't know. I just feel like the company is so big, so successful, it's so exciting. Every day I find different things. We're about to talk about the demos and the glasses and all the different things. When you're talking to each other about it, first of all, do you focus on the competition? How do you pick it?

Joelle Pineau: Mm hmm.
It's been very interesting to see Mark's interest in AI evolving over time. Maybe dial back 10 years ago, when we started the Fundamental AI Research team, FAIR. Mark was actually really involved at that time. He flew down to the NeurIPS conference, which is the place where all the AI researchers meet up. He shared his intent; he's always been very transparent about his intent. He shared his intent to build a world-class AI research lab. They interviewed a bunch of people on the spot, and those are many of the employees that started the lab, some of whom are still with us today. There are a lot of things on his plate, but [00:20:00] I would say in the last year, AI is definitely one of the topics where he's spending the most time. He's incredibly knowledgeable. He's incredibly curious. He asks a lot of questions. He wants information. He reads and consumes information. And as a result, I think he has a really deep understanding of the field. A lot of our conversations are about how fast we can move in terms of the research, what are the key problems to solve. But he's also very product focused, and so figuring out how you bring this into products in a way that people are going to be engaged with. And, again, as I was mentioning earlier, we're just at the beginning of that journey, so it requires a lot of experimentation. It requires getting comfortable with the fact that we don't know exactly what are going to be the very, very compelling AI products. He has an appetite for discovery and for learning. And I do work with a lot of the product leads also, who are very curious and want to figure out how we can leverage that technology.

Tammy Haddad: And what do you [00:21:00] think about the photo of Mark with the beard? Did everyone see that? Super cute, Mark. Yeah. But you told me earlier he's cuter in person, right?

Joelle Pineau: I will not comment on the appearance of my co-worker.

Tammy Haddad: Yes, you're right, you can't talk about the appearance of anyone.
Joelle Pineau: Certainly the meme with the beard has been hugely popular. And I do feel a responsibility that we should be transparent when some of our images are enhanced with AI, transparent about the fact that the image has been manipulated. I can attest he does not wear a beard in real life right now.

Tammy Haddad: Well, thank you for that confirmation. The Washington Post is in the house. Cat's going off to write that right now. But you also pointed out to me that you can really tell the fake material. You can see where it originates. I was talking to someone at HBO films. I said, "I don't understand. You guys should go watermark all your stuff." They're like, "You're crazy." But your point is, you know what it is.

Joelle Pineau: We often do. [00:22:00] I think there have been a lot of efforts in the last few years on this question of authentication of information. In many ways, this is a problem we've tackled for many years, even before generative AI was there. There is a sort of spread of misinformation, and we've developed some of the most sophisticated tools to detect it. And that's what's interesting with generative AI. People feel like, just looking at the image, you should be able to tell or not, and we do have AI algorithms. You know, AI is the antidote as well: AI algorithms that can make that determination, but often they don't use just the image to determine it. They also use the distribution network. And so there's a lot of information behind the scenes that may not be visible to the naked eye that is actually really, really helpful in doing that. And so I would say we do invest quite a bit in trying to do that, but we don't invest just in the technology. A lot of it is: as a society, what are our expectations in terms of how to build up that information?
We work with groups like the [00:23:00] Partnership on AI, which is a multi-stakeholder nonprofit, an incredible group for building up consensus around best practices and standards. They've developed some best practices in terms of how to indicate whether media has been manipulated with AI or not, which some of the news outlets as well as digital media companies have adopted. And so I think we're also going to continue to partner with others, because it doesn't really make sense to develop a standard that is only Meta's own. We need standards that are industry-wide, that people understand, so that they understand the labeling on the information they consume and it can really inform them.

Tammy Haddad: Okay, let's turn to Spotty. Spotty, the robot, is upstairs. CNN's own Pamela Brown was right there and accepted the dog. And the SBA administrator, Isabel Guzman, she did it too. Everyone's a little afraid. It's so human. You're a robotics person. Can you explain how the heck that could possibly work?

Joelle Pineau: Mmm. We work in partnership with many other groups. We don't build the physical [00:24:00] robot. Boston Dynamics builds these amazing machines, but what we do is pour into it all of the layer of AI: the ability to understand information, similar to the glasses, right? Spotty goes around; it has a bunch of sensors, not just cameras. It has all sorts of other sensors that essentially map out the environment, the room. It has the ability to understand commands. And it also has the ability, and it's hard to know unless you've worked in robotics, but the ability to have that very fluid motion is actually quite, quite challenging.

Tammy Haddad: It's incredible. It's really incredible.

Joelle Pineau: There's been huge progress in terms of the control and in particular what we call locomotion, the ability to move with legs, not just with wheels. You're also seeing a ton of humanoid robot startups come out this year.
And the reason those are suddenly becoming viable is the improvement in the technology.

Tammy Haddad: So the Quest 3 was in the other room, and I went [00:25:00] in and I performed "The House of the Rising Sun." And the AI is remarkable. You feel like you're right there. How do you make that?

Joelle Pineau: There are a ton of different components that go into that, and especially with Quest 3, I have to say, we're just scratching the surface in terms of AI. A lot of what makes this such a good experience is the work that's done by our colleagues in Reality Labs. They've invested a lot in understanding the ways in which we create this immersive environment, so that people actually feel like they're in another space and yet don't feel like it's a completely jarring experience. I think the passthrough on the Quest 3 is particularly...

Tammy Haddad: It's amazing.

Joelle Pineau: ...well done. That's the ability to have a tunnel from virtual reality to the real world. A lot of that is just basic technology, and it's just going to keep on improving. There's a lot more work happening in our labs that we're not seeing yet on the product side.

Tammy Haddad: Wow. You're really teasing it out there, Joelle, because I think it's [00:26:00] fantastic just as it is. And my big obsession, okay: the Ray-Ban Meta glasses are freakishly amazing. You put them on. You can take photos. The audio quality is beyond belief.

Joelle Pineau: Yes.

Tammy Haddad: Right. And it translates. It could translate my notes. It would probably be better if I spoke in French or something like that.

Joelle Pineau: You don't need to. I'll hear it in French if you speak it in English.

Tammy Haddad: This is just remarkable. Do you feel that way about it? I'm so shocked by it. And we're going to, by the way, take a couple of very quick questions here, because I've talked too long.

Joelle Pineau: I have to say I'm super excited about the work that's going on in the Ray-Ban Meta glasses.
That's been one for us... you know, we talked about the ChatGPT team, where OpenAI didn't quite know just how good the model was before they put it out. I think for us, the glasses were a little bit of that moment, in that we expected AI to be a transformative experience for the glasses a few generations later. It was always, I think, the [00:27:00] intersection between the work going on in mixed reality and AI. We knew there was going to be a point of convergence, but we expected that to be a little bit later. But the progress on AI has been so good in the last year. And honestly, earlier in 2023, we just decided to sprint for it and try to bring a model into the glasses, not just into phones. This is some of the work that some of the researchers on my team did, some of it based on our robotics work, and there's something different about experiencing AI as you're going out in the world. So yeah, I'm super excited. I'm based in Canada, and honestly, I didn't have the AI features on my glasses till this week. My teams are building it, but I couldn't use it day to day in my life in Montreal. So I'm just thrilled that we also released in other countries, and that we can use this not just in the U.S. but actually in several other countries, with many more to come. So I'm going to start using them now, and I'll report back.

Tammy Haddad: You may need it for Correspondents' weekend. I might just say: does someone have a question? If [00:28:00] you can go over to the microphone, that would be great. The thing that I keep thinking about for this is, when you think about health issues globally, right, and communicating: you put the glasses on, you can learn about things. It makes all kinds of information available to anyone, any time. That's remarkable.

Joelle Pineau: Yeah. We've invested for many years in building language technology, in particular translation and multiple languages.
At Meta, we operate across the globe, and so over time, I would say, some of our machine translation work has become the best in the business. And it means investing quite a bit in understanding the particularities, building the right data sets, and so on. We initially didn't do it for the glasses. We did it to understand content on the platform, to make sure that we could pull down content that doesn't belong on our platform, and do it in several languages. But now, to see that experience being available on the glasses... I do a lot of travel, sometimes to countries where I don't [00:29:00] speak the local language, so I'm super thrilled to have that ready.

Tammy Haddad: You know, you're going to be a big rock star this weekend when you go to the Correspondents' dinner, because people here are trying to figure out AI. I mean, I hope no one in the room is going to walk up to her with their Ray-Bans and ask, "How do I connect this?" Because she's an AI pioneer; she's not the technician. Okay, we've got Kathy O'Hearn.

Joelle Pineau: I can explain how they work under the hood.

Tammy Haddad: Yes, exactly. Great.

Kathy O'Hearn: I've connected. We in row two are on. All right, row two. We've already downloaded the app. Anyway, but are these available on the market?

Joelle Pineau: Absolutely. You can buy them for friends and family at a ridiculously low price, and this week we're coming out with new shapes that suit women's faces. I could go on. I'm genuinely excited about them.

Tammy Haddad: The cat look?

Joelle Pineau: Yes.

Tammy Haddad: I like mine. Do you guys think I look good in these? I think they're very flattering. All right. Anyone else have a question? Yes, Pamela Brown, right up here. Go ahead. Yeah, if you don't mind. Yes, [00:30:00] TV anchor, please go up to the microphone. Thank you, Pam.

Pamela Brown: So it really struck me when you said we're at the beginning of this journey with AI.
And I think so many of us who are not in that space day to day think we're so far ahead. So I wonder, it made me think: as you look ahead 5, 10 years, where do you see AI in our day-to-day lives? How is it going to be implemented, from what you can project now?

Joelle Pineau: Yeah, it's a great question, and it's a very hard one. I spend a lot of my time thinking of where we're going in the future, but I don't pretend to have the answers. What I will say is, I think in many ways our use of AI so far has been very much in narrow modalities. We have AI systems that understand language, we have other systems that understand images, we have other systems that understand sound. The human brain doesn't work that way. We have the ability to perceive very rich information [00:31:00] and to integrate it very quickly. So one of the big things we need to build is models that are much more multimodal. The other thing is, in many ways, AI is at its best when you don't see it, when you don't know that you're experiencing it. People think AI has suddenly appeared. But in fact, for any of you who have been in the digital world for the last decade, I promise you almost all of your interactions on the web are mediated by AI. AI mediates the content you see, the content you don't see, in what order you see it, and in what shape and form you see it. So, in that case, you know, you may not be aware of it, but it's a huge factor in your experience, and for the most part a very positive factor, because otherwise the wealth of information is just completely indigestible. And so, going forward, the ability to have AI systems that are embedded in your devices, whether it's your glasses, your watch, your kitchen, your car, and that understand information and are able to enhance [00:32:00] your experience, is going to be the big game changer. But you're not always going to see it.
One of the big things we're working on is to move from what we call generative AI, AI that generates images, sound, text, to agentic AI. And so this is AI that doesn't just predict information. This is AI that takes actions, and these actions are taken in a way to enhance your ability, whether it's mediating your digital life, booking trips for you, scheduling for you, figuring out the right things that you want, you know, getting Ray-Ban Meta glasses ordered and sent to your family and friends. This notion of helping you by taking actions, that's a really complex space and a really open-ended one. What are the types of actions that we actually want agents to do in a way that's responsible, in a way that's coherent with my values, with my preferences? Huge amounts of challenges. Some of them are technical challenges, some of them are in [00:33:00] terms of building this in a way that's responsible. And so that's a lot of what we're doing: the multimodal work, the agentic work, understanding the world in a much richer way. Tammy Haddad: Wow. Pamela Brown: I have one follow-up, if that's okay. Tammy Haddad: Yes. Pamela Brown: I think a lot of us, including myself, you know, I'm a news anchor, but I wonder about where my job will be in a few years and how AI is going to impact that, and how much AI will be replacing people's jobs in the future. What do you think about that? Joelle Pineau: I'll be honest with you, right? I do think we are in for a major transformation. You don't introduce this type of technology and expect the world to be the same. I don't know how many of you remember The Jetsons? There was a view of, "oh, here's all this technology, but everything else stays the same." That is not how deep transformation happens, right?
I think the work of many people will be transformed by AI, including my own, including that of my research teams, who are already using a [00:34:00] coding assistant when they program AI models to help them code faster, pick out bugs, and run unit tests. And so for each of us, I think the tasks we do will change in many ways. What we're seeing when AI comes in is that it raises the level of abstraction at which we do the work. So if you think back 500 years, the ability to write was essentially about very beautiful penmanship. People would copy the text. Over the last hundred years, the ability to write was dictated by good language, good grammar, and so on. Now we have tools that help us fix the grammar. Especially if you write in French, there's a lot of rules there to remember. Tammy Haddad: Alexander. Joelle Pineau: Um, and so you can let go of a lot of these grammatical rules, which means that then you can operate at the level of ideas and concepts. So I think of people in press and media. Yes, you will use these tools. You will write your text much faster, and you will be able to pull on information much faster. But the point of view of what is worth talking about, what is [00:35:00] not worth talking about, the editorial view, asking the questions, what we today call prompt engineering, asking the right question in the right way, will still be something that's incredibly important and incredibly valuable. So my advice is lean into the technology, because if you don't, it is coming anyway, and the people who lean into it will certainly have an advantage. But don't feel like your seat at the table is going to be replaced. If we do this right, and I hope we do, it will be a partnership, just a deeper partnership, between AI and humans. Tammy Haddad: That's great. Okay, so right now I want everyone to take out their glasses. Joelle, I can't thank you enough for your time today.
It's been incredibly enlightening and inspirational. But before we go, I want everyone to put their glasses on, and we're going to take a photo together. You got your glasses? Get your Meta glasses out, everyone. You're with a pioneer. And by the way, is she here from [00:36:00] the Smithsonian? Who agrees with me that the Smithsonian should get Joelle's computer? Okay? It should go into the Smithsonian. There she is right now! Julissa Marenco! Don't you want Joelle's computer? Like magic, she appeared! How about that? Okay. Everyone put your glasses on. We're going to get a picture of you there. And then we're all going to get our picture together. Okay, there we go. Tammy Haddad: Thank you for listening to the Washington AI Network podcast. Be sure to subscribe and join the conversation. The Washington AI Network is a bipartisan forum bringing together top leaders and industry experts to discuss the biggest opportunities and the greatest challenges around AI. The Washington AI Network podcast is produced and recorded by Haddad Media. Thanks for listening. [00:37:00]