Simon Brown (00:03.028) Hello and welcome to this episode of the Curious Advantage podcast. My name is Simon Brown. I'm one of the co-authors of the book, The Curious Advantage. And today I'm here with my co-authors, Paul Ashcroft and Garrick Jones. And we're delighted to be joined today by Alexia Cambon. Hi Alexia.
Paul (00:14.445) Hi there.
Garrick (00:16.972) Hi there.
Alexia (00:23.45) Thanks for having me.
Simon Brown (00:24.906) Wow, big welcome to the Curious Advantage podcast. We're thrilled to have you with us. So you're a seasoned leader at Microsoft. You've got over 30 years of experience in business transformation and advising Fortune 500 execs globally. And as Senior Director of Research, you've led pivotal initiatives on Copilot, AI and productivity. I'm really keen to hear more on these and what you're up to. You've been shaping Microsoft's strategies globally.
So I'd love to know your journey, Alexia, on how did you end up studying the future of work, AI, and at Microsoft?
Alexia (01:25.603) a great many things during that time. So yeah, happy to share how I got into it. I actually started in law. So my background at university, I have two degrees in law, a degree in English law and a degree in French law. And then at the end of that long, arduous academic career, I decided I wanted nothing to do with law. And so I just sat down and thought about what did I really enjoy about law, and it was all problem solving. And so that
Simon Brown (01:45.78) Ha ha ha.
Alexia (01:54.551) kind of got me into the world of consultancy, which is where I joined what was called CEB at the time, which then got acquired by Gartner, and found myself in the world of corporate research, where essentially the entire idea was to try to understand what are some of the biggest challenges that organizations are facing when it comes to the workforce, when it comes to productivity, performance, engagement, and what are the data-driven ways of solving for them. So
A lot of my background really was in that area of work design, performance. I wrote a major study that kind of took off during the pandemic that was called Redesigning Work for the Hybrid World. And that was all about how do we rethink work design and work models in a new world where we're disrupting the dimensions of time and space.
And that's kind of what got me into the Microsoft world. I was talking to a lot of leaders at Microsoft about this research and they were doing some amazing things within the future of work. At that point, I was leading Gartner's Future of Work Key Initiative and I thought there was no better place to study it than the place where the future of work is literally being shaped, which was Microsoft. Yeah, so that was what took me to Microsoft and it's been a journey ever since.
Simon Brown (03:17.642) We're gonna dive into many of those things that you're covering with the future of work and AI over our time ahead. So one of the big things that you've been involved with at Microsoft is the Work Trends Index Annual Report, which I know has some fantastic data in there and insights. So can you tell us maybe some of the things that you've discovered as part of your research for that Work Trends Index?
Alexia (03:43.973) Yeah, well, the Work Trend Index was some of what really drew me to working at Microsoft. One of the really exciting things about being a part of Microsoft research is that we have access to some of the most amazing data. Obviously our telemetry data, which is all the different signals that we collect across all of our productivity tools, things like how many meetings are we attending, how many emails are we reading. But then we also deploy our
Global Work Trend Index survey that goes out once a year to 31,000 people across 31 countries. And it's one of the biggest, most important reads of sentiment on the workplace in research. And so to be able to reign over those two types of data is an immensely satisfying job. And so every year we put out one of these Global Work Trend Index reports that's really meant to help take a pulse on how people are feeling about work. And this last one that we delivered was all about AI in the workplace: how are employees feeling about AI, what is the state of AI, is it being used a lot? And we found some really interesting things that I'm excited to dive into.
Simon Brown (04:53.854) Yeah, I'd love to hear more on that. First off, should I be really disappointed that the data tells me people aren't reading my emails? Was that one of the conclusions?
Alexia (05:00.635) Well, yes, I mean, one of the key things that we discovered over the last few years is this notion of digital debt, which is that we're all being submerged by emails. And we are essentially receiving a lot more emails than we have the time to read. So most of us really just glance at our emails very quickly. And obviously, it depends who the email is from and what the topic is about. But some of our heaviest
email users were just receiving up to 200 emails a day, which is beyond human ability to be able to process and consume that. So while yes, you probably should feel disappointed that people aren't reading your emails, you should also know that that is the norm and it's nothing personal.
Simon Brown (05:45.386) I guess one of the ways that we get helped in reading our emails is the whole AI piece. So what's the latest research telling us around how people are using AI to help with things like reading their emails and other pieces? What sort of adoption are you seeing? What's some of the very latest, I guess, info that you're seeing around the AI uptake?
Alexia (06:06.425) Yeah, well, great question. One of the big research questions for the latest Work Trend Index was to what extent are people using AI at work? So that was the first simple question we asked in the survey. And what we found was that 75% of information workers are using AI at work. So the large majority of people. But one of the things we were really curious to know was how much of that use is sanctioned by organizations. So the follow-up question we asked was: to what extent are the tools that you're using provided to you by the organization? And 78% of the population who are using AI admitted to bringing their own AI tools to work. Now that doesn't mean that all of the tools that they're using are non-sanctioned tools, but certainly a percentage of them are. So we're seeing this interesting trend of bottom-up usage where if organizations aren't providing the AI tools, then employees are going out and bringing them to work, which we like to sort of think of as BYO AI, bring your own AI.
Simon Brown (07:06.398) Maybe just a follow-up on that one: were the people surveyed also your sort of corporate customers as well? Was that mainly in a corporate context?
Alexia (07:17.775) Well, interesting question. The survey is anonymous, so we don't know who is answering it, but given the reach of Microsoft across the whole world, it's safe to assume that many of the people who answered were users of Microsoft tools for sure.
Simon Brown (07:25.418) Yeah.
Paul (07:34.022) Alexia, can we dive into this number one on the findings? I love the way it's described: people won't wait, basically, is what it says. If you're not going to provide it for me, then I'm going to use it because it's super useful. But what are some of the questions then that you think this raises for organizations? And how should they start to think about how to incorporate some of these AI tools in people's workflows and their training and what they're doing? Because I think these are big, big questions for organizations at the moment.
Alexia (08:09.285) Very big questions, and certainly the pressure on a lot of these senior leaders to demonstrate the business value and the ROI of investing in these tools now is high. And so the way I like to think about AI is: right now, AI is very early in its journey, right? We are privileged enough to see it literally be brought to the world. Imagine what it was like, you
Well, maybe you don't need to imagine because there probably is some experience of this in the room not to offend you. But when the internet came out, right, and we had to move our entire businesses online. At least I didn't say the printing press. That would have been a lot worse, wouldn't it? I mean, I think the privilege of seeing this come to the world right now is huge and to be researching it and studying it.
Paul (08:37.262) We thought you were going to say when computers came out Alexia for sure.
Garrick (08:38.69) I have had the 13 years of experience.
That was last time.
Alexia (08:57.509) But where I see it right now is right now it's serving most organizations in the form of a personal assistant. And in the form of a personal assistant, it will help you with key productivity barriers that are taking up a lot of cognitive load and a lot of time from the everyday worker. And we have actual research to back this up. We know some of the biggest productivity barriers for workers are things like too many meetings, things like too many emails, things like...
not being able to find information, and AI will help with those just at the first layer. But AI will look very different in two, three, four, five, ten years. And if its very first form is just as a personal assistant, and in that form it's providing these huge refunds of time and energy immediately, that I think is hugely exciting for organizations that are really struggling to ensure their employees can be productive, healthy and happy.
Garrick (09:58.114) I'm fascinated by what you were saying about leaders and leadership. And it's having a huge impact, I can imagine, on the democratization of decision making and assistance and so on. But do you have any data on leadership and their reluctance or their fears, perhaps, or anything? What do we know about what leaders are thinking about AI and its adoption?
Simon Brown (09:58.996) Ha.
Alexia (10:27.375) Yeah, great question. We saw in the data kind of an interesting gap between employees who are racing ahead, roaring ahead, wanting to use AI, wanting to bring it into their everyday workflows, and then leaders who are somewhat nervous and really kind of reluctant to go too fast, with obviously very understandable hesitation, in the sense that
AI brings up a lot of questions about data management, about data security. But we saw things like, you know, 60% of leaders worry about their organization lacking a plan and vision to implement AI. And that lack of leadership vision for what AI should do for you and do for your company is going to trickle down to your employees and how they're using it and what is appropriate for them to use it on. So we're definitely seeing this interesting gap
and this urgency for leaders to catch up to their employees.
Simon Brown (11:29.096) And I guess the data is very compelling.
Paul (11:29.42) And so, sorry, I was going to say: your advice, are you advocating sort of baby steps as it were? So allow, you know, get people using it, try it, use it as a personal assistant, but keep the guardrails in place and keep revisiting where the guardrails are. Because there's that balance of, you can't ban something. You know, we talk about organizations, but schools have the same issue. You can't ban it, because people will use it anyway. So using it and adopting it in a safe way, but getting people to want to use it. Because the flip side of this, and I think this relates to the leaders and managers point, there's quite a lot of change in what this means for people's roles in terms of managing teams that are using AI, right? So...
Alexia (12:12.667) Totally. And so I think there are two questions there: one is pace and the other is scale. And so when it comes to pace, I think your exact point is really important, which is: if you don't bring it into your organization, your employees will. And that is probably a risk for your organization long-term, because they might be feeding documents into systems that aren't secure. And that is the last thing you want. So you need to bring AI in pretty much right away. Then the question becomes about scale. Well,
Do we roll it out to everyone? Do we roll it out to just a subset of people? And that really feeds into the other point you made, which is getting people to use it and behavioral change. And there is a very good school of thought that suggests that if you just give it to certain groups or certain people, you are going to miss out on the entire peer network effect that enables and accelerates that adoption effect. So just targeting it to maybe one person on a team.
or to one team rather than everyone is probably going to reduce your ability to roll it out effectively. And this is probably why leaders are feeling anxiety: the advice to roll it out now, and to roll it out at scale, is a very big step and a very big plunge.
Simon Brown (13:28.17) I was intrigued, Alexia, that the data is showing that 78% of people are bringing their own AI tools, 75% of people using AI. And that fits with, I think, what Ethan Mollick talked about, the secret cyborgs, where people are empowering themselves by using these AI tools. But then on the other side of things, we also hear of a reluctance to use it because of the fear element. Either the fear element that you were just referencing there from a safety perspective of data and doing something one shouldn't, but also the fear of being replaced by AI, and that if I use it, it will show that my job can be done by AI and therefore there's a threat. So are you seeing that same piece around that fear of being replaced by AI, or my job changing as a result of AI? What's the data telling us there?
Alexia (14:22.351) Yeah, it's interesting. Some of the analysis we did was to look at familiarity with AI and its relationship with other outcomes. And we saw the more familiar people are with AI, the less they worry about it taking their jobs or them being replaced. And one could read that in the sense of AI is not very capable yet, or one could read that in the sense of AI requires a human in the loop and it's actually augmenting your skill set. It is not replacing you. As a human, you will always be
essential to the workflow and the process that you're in. And so I found that super interesting: potentially the fear of being replaced is connected to a lack of education around what AI is and why it can fundamentally help you. I think Dr. Karim Lakhani, who works closely with Ethan Mollick, said it best, which is: you shouldn't worry about AI replacing you, you should worry about someone who knows how to use AI replacing you. Because that combo of
Garrick (15:17.442) Yeah, brilliant.
Alexia (15:20.441) human intelligence plus artificial intelligence is for sure much more powerful than either one on their
Simon Brown (15:27.498) There's some interesting research that I think was shared in the last few days, some of which seemed to go counter to that. That sort of "you'll be replaced by someone using AI" seems to be the direction most people are thinking. But some of this research seemed to show, I think it was based on doctors, that doctors not using AI were least effective at diagnosing particular ailments, doctors using AI were next, and then actually AI on its own was the most effective. And I guess the hypothesis there is maybe that people are sort of fighting with the AI and not necessarily leveraging its full benefits, which would actually take them to that sort of winning position, if you like. But have you seen data along that, or research along those lines as well?
Alexia (16:13.979) Yeah, I mean, that brings into play, I think, a whole other interesting conversation around decision making, because essentially that is what that study is looking at. It's doctors' diagnoses, which is all decision making, and whether or not humans are capable of making unbiased decisions, right? And there's a whole academic research school of thought around this, where ultimately what we're asking is: to what extent is AI a neutral decision maker that is not informed by a lot of different potential biases that humans are exposed to? And I seem to remember a study I read years ago that was looking at judges, like criminal judges; they were much more likely, and it was a statistically significant finding, to pass harsher judgments before lunch, basically because they were hungry.
Alexia (17:08.899) And so this is another really big important question for us moving forward is what decisions are going to be majorly impacted by AI? Is that appropriate? Is that desirable? In some cases, is that better? And there are whole processes right now that we already source out to tech judgment because in some places that's the safer call. So there needs to be a whole lot of regulation and a lot of research to understand that area.
Simon Brown (17:37.226) So we're talking with Alexia Cambon, who's the Senior Director of Research at Microsoft. Alexia leads the research into the future of work and the M365 team as Senior Director there, working to identify emerging research opportunities and delving into customers' most pressing workforce challenges. She also co-leads Microsoft's cross-company research initiative examining AI's impact on productivity and performance. She's a seasoned presenter and a speaker with a passion for storytelling and creative thinking. And her areas of focus include AI, hybrid work,
work design, organisational culture, and previously she spent 10 years at Gartner, as we heard, as a leader of their cross-functional Future of Work Reinvented initiative. She's written for the Harvard Business Review and The Guardian and been featured on NPR and in Forbes and The Times in the UK. Alexia, I know when we last spoke you talked around a model of how work has changed on different dimensions, the impact of the move to hybrid work, and now the impact of AI. Could you maybe tell us a little bit about your thinking on that?
Alexia (18:37.849) Yeah. I mean, it's one of my favorite frameworks to use to try to contextualize AI within the history of work, because I think the big danger that we're facing when it comes to implementing it in work is underestimating just how big and disruptive this milestone is. And so the way I typically talk to organizations about it is to think about the work environment as made up of two key dimensions, which is the dimension of space. Where do we work? And the dimension of time.
which is when do we work? And if we think about the origins of modern information work, the 1950s, post-Second World War, post-Industrial Revolution, most work really happened in an office and in meetings, with good reason, because back then asynchronous work was not particularly effective, right? Asynchronous work was a typewriter. But that changes when the internet comes along. The internet essentially makes asynchronous work really productive,
because now we have fax and then we have email and then we have IM and we can work just as productively asynchronously as synchronously. And that's how we worked for a very long time until of course the pandemic happened and the pandemic disrupted this dimension of space just the way the internet disrupted this dimension of time. And the pandemic essentially forced us to prove that we could be productive working from home. We had no choice. We were all sent home and we had to get on with it.
And that's what hybrid work essentially is. It's the ability to work across those dimensions of time and space. And then if we think about how AI enters this narrative, and we're trying to understand where does AI fit within the dimension of time and the dimension of space? Does it fit within synchronous time, within asynchronous time, within remote work, within on-site work? It actually doesn't fit into any of those dimensions because it is its own dimension.
And that's how I typically think about AI: it's this new dimension of intelligence. Where up until this point we've only really ever worked with human intelligence, now artificial intelligence is entering the scene and is joining the workforce. And just like we have to ask ourselves, should we work from an office? Should we work from home? Should we be in a meeting? Should we be outside of a meeting? We need to now ask ourselves every day: should we do this with AI, or should we do this with other humans?
Garrick (20:58.882) Mm.
Alexia (20:59.297) And that is a really big milestone in the history of work. It's essentially disrupting the when, the where, and now the who.
Garrick (21:07.4) I was talking to a friend of mine who's a barrister the other day, talking about AI and its impact. And if you think about the law, I mean Butterworths for example, which has all the precedent in it and so on, the best lawyers, the best barristers were those who had a comprehensive understanding or could find minutiae in the law that would enable them to make a case, which would enable them to win the case. And that's completely changed, because it's no longer about the individual expertise of the lawyer and all their memory or their ability to research and the time they have; AI can go through the entire catalogue and canon of precedent very, very quickly and pull out specific cases that are relevant to what's going on. And that's having a huge impact on how, you know, legality and opinion is being formed and so on. And he was talking about how some lawyers want to ban it
completely and others want to embrace it. Same thing, but this point you make about when are we going to use AI, how are we going to use it, and in what places is a really big one to answer, especially for the professions. Do you think that we're in danger of outsourcing our thinking or handing over our decision-making to AI tools?
Alexia (22:32.911) Yeah, well, I think we always have this anxiety when new technologies at this scale, at this level get rolled out that we are going to sacrifice cognitive ability. So Plato argued that writing was dangerous for human memory. He believed that when we write things down, we compromise our ability to memorize things. Pope Alexander III, I think it was,
Garrick (22:52.022) That's right.
Alexia (23:01.945) banned printing. He believed that if we printed and distributed information, it would be dangerous because everyone would get access to an education and that might not be the type of education that was wanted. So we always have fears, I think, about any type of democratization of information and now of expertise. I think on the flip side of that, we do have to think very carefully about what skills we need to protect and what skills
Garrick (23:11.638) He's right.
Alexia (23:29.963) might expire naturally as a result of technology. And a really good example is probably very few of us in this room, in this virtual room, know how to read a map, right? We're all hyper reliant on Google Maps or... Well, so there you go. And that's a real, that's a real big area of focus for many researchers is what are the specific skills we need to protect?
Simon Brown (23:42.514) I was a scout once upon a time, I'm proud of my map reading! Although don't ask me to do it now, probably.
Alexia (23:57.275) and ensure they don't expire so that if technology ever did shut down, we would be able to survive. And reading a map is probably high on that list, right? Even though most of us, if we were to go into a car and not have GPS, we would probably be a little bit panicked. But the early research suggests this isn't something we should worry about when it comes to AI and cognitive thinking, cognitive abilities. Because what is happening when you're engaging with AI is that you're actually using your brain quite a lot.
And there's a really interesting study that some of my colleagues at Microsoft wrote up about metacognitive load and the effort it takes the brain to use AI. And the really interesting thing I think is oftentimes when you're using AI to write or to consume information or to catch up on meetings, you're actually able to retain a lot more information because it's an interactive experience. And AI is teaching you just like you were prompting it.
Garrick (24:51.06) Exactly. The thing about prompting and prompt engineering is about learning how to question. And the better the question, the better the outcome. And there's something about learning how to question which is inherent in how we learn as humans and which allows us to focus. And if you look at kids and what kids are doing with AI for their homework, just the idea of questioning and asking the right questions is having the learning impact.
I think that's hugely beneficial. The concern I have is about critical thinking, for example, and bias. Now, we don't know how much bias is built into some of the large language models. It may not be deliberate, I'm sure it isn't, but is there a danger of, the language models are learning from all the prompts that are being put into it, and there is a...
bias across the prompts, does that impact on a very large scale the learning of the AI tool in some way? That's a question I have, which is sort of very high level, and I don't think we can answer it here, but we can have a go. That question of the bias, or the inherent bias, or: is it a level playing field? Do we know that it is?
Alexia (26:16.072) Well, one point I did want to quickly make about your point on kids in schools that I found really affecting. My husband's a teacher and he's obviously seeing all of this play out with his students at school. And he tells me of some students who describe, you know, AI, ChatGPT, Copilot, whatever tool they use, as game-changing because, you know, for whatever reason, they never felt comfortable asking questions in a classroom setting.
And so to be able to go to a tool where it's a safe space to ask questions they might worry would sound stupid is actually a really, really important advancement in their critical thinking skills. Because it's obviously so much better to your point to be able to ask the question than to hold it back. And so to provide another option, another format, another space to ask those questions is, I think, a huge leap for many people.
Your question on biases. I mean, I suppose it all comes back to how are these models being built? What data are they being trained on? What data are we feeding them? And good quality data is the most important aspect to all of this. Someone described it to me as a really, I thought, helpful analogy, which is if you think about AI as the tree, the water is what feeds it. The water is the data that feeds it. And so we need to be feeding it.
you know, high quality, good data for it to be effective. The prompting piece, I think we're already starting to see evolutions in how we prompt and how AI is responding to prompting. If you actually go in and you prompt Copilot, it will not just... I was in there today. I'm terrible at maths, I'm absolutely awful. I was having to do calculations about different percentage differentials, and I was asking Copilot to help me figure out what the difference was between two groups and two different results on an experiment we ran. And not only did Copilot give me the answer, it also broke down how it got to that conclusion. And that's a difference from the very first models, where it might just have given you an answer. Now it's actually telling you how to do it. And there are also iterations where it's then asking you follow-up questions and then suggesting prompts that you might want to feed in.
Alexia (28:34.125) So I think we're seeing even the way we prompt evolve massively.
Paul (28:39.822) Alexia, I want to click into that, and go a little bit back to the report, where you talk about the rise of the power user, and specifically the experience of work. Because, I don't know if it's for everyone everywhere, but there's been a lot going on, let's say. So generally in the workplace there's quite a lot of disengagement, fear, being overwhelmed, demands, cost cutting and so on.
Alexia (28:48.485) Mmm.
Paul (29:09.25) And you just mentioned this in schools and I've seen the same where an AI tool is basically providing some of that scaffolding to help improve your experience, whether it's your experience of learning or just giving you the thing that you weren't very good at. I need to write some marketing messages. I need to find out how to do this. I can't crack that bit of code, you know, writing. Can you just help me? And it can. So what are you seeing? And maybe you could tell us a little bit about what you found in the research around
as people get more into it, did they have a more enjoyable, a more interesting or enriching experience of work as well?
Alexia (29:47.395) Yeah. The power user piece is incredibly important, because we're finding that power users naturally experience much better outcomes using AI, both in terms of their productivity, but also in terms of their experience at work. But they're a very particular beast, because what they're doing is showing an enormous amount of, one, curiosity, which is very convenient given that we are on the Curious Advantage podcast,
Simon Brown (30:11.815) We made that.
Alexia (30:13.447) but then also persistence. And I'll share a story from an experiment I just ran internally within a marketing team at Microsoft, where we were essentially trying to convert a larger share of users to power users over the course of six weeks. And we did this both by looking at usage data, but also by sending out a number of surveys. And we look at power users within surveys in two ways. We looked at them objectively. So we asked them questions like,
how many times a day do you use Copilot? To what extent would you agree you're an experienced user? And that helps us assess objectively whether or not they could be classified as a power user. But then also subjectively, we asked them, would you consider yourself a power user? And what we found was there was a whole percentage of people who objectively could be considered power users. But when asked if they felt they were power users, said no.
And we kind of called these people our unicorns, because here they are doing incredible things with AI and they aren't being advocates for it, right? They aren't shouting from the rooftops about what they're doing. And so we did focus groups with them to try to understand why they wouldn't call themselves power users. And essentially it boils down to their belief that a power user can only be someone who gets it right on the first try.
And we know with AI, it's not a search engine. It's not, put one prompt in and you get a load of information back. It's an iterative collaborative process where often you have to try many different prompts until you get to the outcome that you're happy with. And that was a whole mindset shift we had to create with this population to get them to see themselves as power users. And that's ultimately to your question about, you know, the types of employee experiences that they're having.
power users are having much better outcomes. And so we need to convince them that they are power users so they can share those learnings with everyone.
Simon Brown (32:12.052) So I couldn't let it go without digging into the curiosity and persistence as what makes people successful. Tell us more on why curiosity is coming through as one of the traits that means people are getting more out of AI.
Alexia (32:27.451) Yeah, one of the number one things we found in the data as to what differentiates a power user from everyone else, in the sense of the types of behaviors and characteristics they exhibit, is a willingness to experiment, an experimentation mindset. In experimenting, essentially, you have to be very comfortable with failing. A lot of the time, probably half of experimentation, if not 90% of it,
is going to go down the failure road at some point. And so the ability of AI users to confront that and to try different things is what makes them really unique. Because oftentimes when we use a tool or a technology, we get very frustrated when we don't get the outcome we want right away. And the way I typically describe it
to my friends and family when they ask me why they should try again when they're using AI is the difference essentially between using a coffee maker and a barista. And so, you know, if you're using a coffee maker to get your coffee, you go down, you press the button which has the pre-configured setting for your coffee. And if it doesn't deliver you the coffee you want, you're gonna throw the coffee maker out because it had one job and it was pre-configured to give you that coffee. But if you go to a barista and you order
Paul (33:45.259) Yeah.
Alexia (33:49.639) you know, I don't know, like an oat latte, and they give you a soy flat white, you're going to start to ask yourself, was it me that gave them the wrong order? Was it them that interpreted the wrong order? Like what happened for the information to not be exchanged in the right way? And that's the model we need to apply with AI. It's not a command we're giving it. It's a conversation we're having with it. And that requires curiosity, that requires experimentation to apply that mindset.
Garrick (34:00.557) Yeah.
Paul (34:17.73) And as you say, what I've loved about it is this ability to fail fast, and it doesn't care, right? You ask it a question, it gives you totally the wrong answer, and you go, no, now try this. It doesn't have a fit or a tantrum and storm out of the room. It just gives you another go at it, right? So I personally really love the ability to sort of fail fast and try things, but I also really love what you said. And it's very much baked into, you know, what we've been learning about curiosity:
Alexia (34:35.215) Yeah, totally.
Paul (34:46.198) It's not a search engine. It's a collaborative process to figure something out. If it's got a thought process, it's trying to figure out what you want and you're trying to figure out how to ask it the right questions to get a better answer, right? Which is a very nice way of thinking about.
Alexia (35:03.225) And so think about what that means in a world in which everyone has a Copilot, and you are using Copilot to ask every transactional question, or every potential question that might really demand a lot of time and energy from your human partners. What then does the human-to-human relationship become? Right? Like, if as a manager, my direct report is going to Copilot for all of their transactional questions, that means when my
direct report comes to me for a conversation, it is really about the things that matter. And similarly, I have huge hope that the transactional elements, the transactional nature of many of our relationships, will go to AI. But then also the really exciting thing, I think, is, you know, what happens when we're collaborating with a tool that, like you say, has limitless amounts of time and energy,
so that we can literally go again and again and again and build bigger and better things. And then when you bring humans into that equation and you start having a team that is made up of AI as well as humans, the potential is just enormous.
Simon Brown (36:11.902) So building on that, sort of, what does that human-to-human relationship look like, leads me down another question which we could go deep into, and which I think sits within your role around the future of work. I would love to hear your latest thinking on where we're going to be in a year's time,
three years' time, five years' time, and I recognise even probably two years' time is going to be hard. But what are you seeing as, like, the sort of immediate things in terms of how it's going to impact work? And then what are you seeing as maybe the weak signals, or your hopes, on where we end up on a slightly longer-term view?
Alexia (36:50.521) Yeah, great question. Well, I spend a lot of time on it. So I'll give you some of my theories, bearing in mind that 90% of the time, people who try to predict the future of work are wrong. So we'll probably revisit this podcast in five years and laugh at the things I said. But what we know in terms of how AI will evolve is: right now, I kind of mentioned, it's in its personal assistant era, right? Where it's helping you do the things you do now, but better and faster. It will evolve to
Simon Brown (36:54.452) That's
Simon Brown (37:02.494) Yeah.
Simon Brown (37:06.538) Thank
Alexia (37:19.451) become a team member, where it will be joining the team. And in this era, it's not necessarily helping you do the things you do now, but better and faster. It is essentially helping you do net new things. So if you, I don't know, are a team that does a lot of marketing videos, and you have to, you know, buy a lot of music to go with your marketing videos, maybe there's an AI that helps you create music, right? And you now all of a sudden have a composer on your team that can create music. That's a net new skill you've added to your team.
That's incredibly exciting. When you think about AI joining the team and providing you those net new skills, it increases your capability as a team to do a lot more interesting things. But that also requires management, right? When AI joins the team, you are no longer just managing humans, you're also managing AI. And so that will require whole new skill sets for people to develop. Once we kind of move past the AI-as-a-team-member era, we're going to go into an era which is already happening and bubbling up in certain places, which is AI as the agent. And in this era, AI is not necessarily helping you do the things you do, but better and faster, or helping you do net new things. It's taking things off your plate altogether. It is doing whole workflows autonomously, obviously with human supervision, but it is essentially doing end-to-end processes. And again, that will require whole new skill sets, because now you have a workforce that is essentially AI agents.
And that raises a huge question, which I speak to a lot of organizations about, in the talent space in particular: what are the human talents and skills that will be required by organizations? How will it change hiring and labor? And my own theory here is that if AI essentially provides you all the technical skills that you need, right? AI will provide you the writing skills, the coding skills,
Garrick (38:58.677) Yeah.
Alexia (39:14.115) you know, the learned experiences that you've spent years developing, what does it rarefy? What does it make much rarer? And in that era, it's not making learned experience rare, it's democratizing learned experience, but it's making lived experience rare. And by lived experience, I mean all of the things that you have experienced in your personal life that make you you. So, whether or not you coached a football team and learned how to deal with impetuous, impatient kids, right? Or whether or not you were raised with a neurodivergent sister and have learned the importance of routine and structure. It's all those things that you have built over time in your personal life, that AI cannot give you, that will make you an important and rare skillset. And those are the things I think we will start to hire humans for, as opposed to more of the technical learned experiences.
Simon Brown (40:08.506) So yeah, the whole agent one I've been playing with a little bit over the last week or two, and that's already blowing my mind in terms of having different personas to play around with. If you get it to write a book, you have one as a writer, one as an editor, one as a storyline person, one as a character developer, and they all start to interact together. And yeah, a little glimpse into where the future is. And you can very quickly move from that into managing these different roles of a composer or whatever it may be
Alexia (40:22.617) Mm-hmm.
Simon Brown (40:38.101) that creates these new roles in the team.
Alexia (40:41.573) I think it was the founder of Bumble, which is a dating app, who sort of talked about her theory for how dating will look in five years. I found it so fascinating, not because I'm single, I'm a happily married woman, but just thinking about the next generation and how they'll experience dating. And she talks about how she sees every user on her app having their own agent, their own personal agent. So let's say, Simon, you were single and I was single and we were both on this app. You would have an agent,
I would have an agent, and our agents would talk with each other and discuss whether or not they felt we were a suitable match. So they might tell you, you know, Alexia is a runner and she's a future-of-work specialist and she's a control freak, and your agent might say, that's not for me, like, I'm a triathlete and I like to keep things nice and agile. And they would then decide we wouldn't be a match. And that's kind of mind-blowing, right? To think about a world in which we have agents autonomously operating in that way.
Paul (41:39.618) And there goes all the randomness and diversity of the human race. We all just get together with people like us. Maybe that's better.
Simon Brown (41:39.913) And I.
Alexia (41:43.065) Hahaha
Alexia (41:48.429) Well, maybe, or maybe AI will have figured out, you know, if you were to match up my husband and me, there would be things in common, but there would certainly be, like you say, things that wouldn't be obvious, but AI would have access probably to all of the information related to. So I don't know. We'll see, I guess.
Paul (42:02.776) It probably knows more about us than we know about ourselves.
Garrick (42:07.104) Back to arranged marriage.
Paul (42:07.438) I've got one more question for you, Alexia, about curiosity, and it's a personal question, if you will. So what are you personally curious about right now? Maybe one professional, although you've given us thoughts, but also a personal one if you could.
Alexia (42:22.509) About AI, or in general, in life? I think I'm always very curious about human relationships. Thinking about the wild swings that humans have experienced throughout evolution, it feels like we're in one of those moments in time right now where the pace is very, very quick.
Paul (42:24.662) In life, what are you curious about?
Alexia (42:52.035) And I think whenever the pace evolves past human's ability to keep up with it, interesting things happen. And human ingenuity is such that we often solve for it. And that I think is going to be very interesting to see how human relationships adapt to a world where technology is evolving at this pace. And every single time I sit down to talk to someone, I try to remind myself to not make it a transactional conversation. I am just trying to
give them something of myself or trying to get something out of them, but to really just listen and interpret and try to understand where they're coming from. And that, I think, is something we all need in an era where technology is infiltrating all of our lives. And my hope is AI will take us more in the direction of that being made more possible, rather than the opposite direction of it being made more transactional.
Simon Brown (43:47.754) Love that. I think that's a nice way, as we're nearing the end, to bring things together. So we've covered a huge amount, Alexia, so a quick summary. We went through your career path and your love of problem solving that took you through the Corporate Executive Board, Gartner, and into Microsoft. How you got involved in redesigning work, looking at hybrid work and how you disrupt time and space. And then your involvement now in the Work Trend Index and some of the
fantastic data coming out of that, around things like 75% of information workers using AI and 78% bringing their own tools, and what that means in terms of some of the implications that we discussed. Talked about democratisation, some of the challenges around leaders maybe lacking a plan for AI while employees are raring to go. Talked about how we need to take baby steps to get there, or some of the risks also in taking baby steps, of not getting that network effect if we don't
have enough people using it and learning from each other. Interesting, your point around once people get more familiar with AI, they actually become less scared about it replacing them, so I think that's definitely one to ponder on. And then your ideas around the different dimensions of work, where we had space and time before, and how email and
instant messaging changed the time element, how then the pandemic changed the space element, but now we need to think of a completely new dimension in terms of AI and adding intelligence behind all of that. We discussed some of the challenges. Are we handing over our thinking, and how Plato was worried that writing things down would be dangerous? And so maybe we just need to think of what comes next, that maybe this isn't so dangerous for us; we actually need to use our brains to use AI. There's that whole cognitive load piece that you raised.
Garrick (45:32.054) love that.
Simon Brown (45:41.934) And actually the point that maybe you retain more because it's an interactive experience. We dived into kids and using it at school and how it can be game changing and create that safe space for people to question. And then super interested by some of the two things that power users showing of curiosity and persistence and that curiosity coming through because of the willingness to experiment, people being willing to fail and having the tenacity to carry on. And we need to think of AI as our barrister, not as a barrister.
not barrister, barrister rather than our coffee maker. And yeah, if we don't get the coffee that we need, we need to keep asking and yeah, we'll get there eventually. And then fascinating to hear on, you where does it go from here? So we go through that sequence of AI starting to become a team member, then we see AI as an agent being able to take things off our plate altogether. And then getting to the point where that lived experience actually becomes rare and valuable.
Simon Brown (46:41.558) and then lovely place to end in terms of, know, then what does that mean for our human relationships? So a lot we covered there. If there was one takeaway from all of that for our listeners, what would you leave everyone with?
Alexia (46:54.789) I would say the fears around AI are completely backed up by centuries of fear of technology. But if you don't hop on it now and understand what it can do for you and how it can change your job and how it can improve the work that you do and the value that you provide to your organization, but also to yourself and the people around you, you will fall behind.
So I would just encourage people to jump in and to experiment and to be curious.
Simon Brown (47:28.554) So yeah, take away, be curious, don't be afraid of failing and yeah, just start experimenting in a safe way. Alexia, huge thank you for taking the time to talk to us today. It's been a fascinating conversation.
Alexia (47:43.481) Likewise, thanks so much for having me guys and for all the great questions.
Paul (47:44.961) Yeah, thank you. Thank you.
Garrick (47:46.357) Thanks, guys.
Simon Brown (47:48.294) So you've been listening to a Curious Advantage podcast. We're always curious to hear from you. So if there's something today that you found useful or valuable from the conversation, then do please write a review for the podcast on your preferred channel and share what it was that was your main takeaways and what you learned from the discussion. Always appreciate hearing our listeners' thoughts and have a curious conversation using the hashtag Curious Advantage. Curious Advantage book is available on Amazon worldwide. So order your physical, digital or audio book copy now to further explore the seven
C's model for being more curious, and subscribe to the podcast today and follow The Curious Advantage on LinkedIn and Instagram to keep exploring curiously. And see you next time.