- Hello and welcome to the first episode of the Capital Markets Exchange Podcast. I'm your host, David McClelland, and over the next few episodes of the podcast, ahead of the inaugural in-person Capital Markets Exchange event in London on June 27th, 2024, we're gonna be taking a closer look at some of the topics that matter most to the Capital Markets and Finance Hive communities. Topics such as how artificial intelligence can give traders the edge on the trading desk. That's what we're gonna be discussing today, but we'll also be looking at the transition to T+1 and understanding the best skillset, culture and leadership strategies to build a winning desk. But first, today's episode is all about AI, and really, where else could we begin? Artificial intelligence certainly isn't a new term or a new concept: from automata several hundred years BC and Leonardo da Vinci's inventions in the 15th century, to the birth of modern AI research in the middle of the 20th century from the likes of Samuel, McCarthy and Turing. Skip forward a few decades to the 2020s, and the current innovation wave is fueled by a subset of artificial intelligence known as generative AI. Since late 2022, when OpenAI's ChatGPT burst onto the world stage alongside players like DALL·E, Midjourney, Copilot and now Sora, machines that can turn text prompts into news articles, Shakespearean-style sonnets, computer code, fake photographs and even cinematic movies have become accessible to everybody with a smartphone or a web browser, essentially for free. It's a lot to take in, and this AI innovation wave promises to be disruptive across a significant number of industries. But what about the impact on the trading desk? What impact will generative AI, and indeed other rapidly evolving forms of artificial intelligence, have in making the trading function more efficient?
Well, today, with help from some expert guests, we aim to demystify AI for trading and share insights on how it can be, is being and will be used to analyze past trading patterns, forecast future liquidity positions, improve real-time analysis and much, much more. Joining me today is the co-founder at Applied Data Science Partners and author of Generative Deep Learning: Teaching Machines to Paint, Write, Compose and Play, David Foster. Hello, David. - Hi there, great to be here and looking forward to the conversation. - Me too. - Also joining us is the head of investments data science at T. Rowe Price, former astrophysicist and director of AI Labs at BlackRock, Dr. Iro Tasitsiomi. Dr. Iro, Iro, if I may, nice to meet you. - Nice to meet you too, thanks for having me over, David. - And finally, head of systematic trading at Allspring Global Investments, Andrew Klais. Hello, Andrew, good to meet you too. - Hello, thank you for having me. - Well, thank you very much for joining us, everyone. And just to set the tone for our discussion proper in a moment: David, you've written a book specifically about generative AI. Give us a quick intro into your role, maybe outside of being an author, and tell me whether you think, in the future, books about generative AI will be, well, frankly, better written by generative AIs. - Yeah, thanks, David. So it's been a really interesting journey over the last six, seven years, for myself personally but also for our company, because when I wrote the first edition of the book, it was back in 2018, and generative AI wasn't as hot as it is today, I think that's fair to say. Companies across every sector were focused mostly on predictive AI; they wanted to know things about the future or predict things about their customers. And what we're seeing today is a complete reversal of that trend. We're actually seeing this generative AI movement now, which is all about creating data.
And it's really an interesting paradigm shift because now we're asking what can this AI do that can actually produce data for me whether that's in the form of summaries of documents that I have or in the form of responses to customer service inquiries.
So it's about the construction of more data rather than the collapsing down of data that you have. And I think, as we've seen over the last few years, this is just accelerating faster and faster, and companies are more and more wondering where they should start with this technology: what should they do, what shouldn't they do? And especially for areas such as finance, understanding how it can help them when a lot of that data is in a structured format, and it's not necessarily immediately obvious where generative AI can be applied. So certainly within this conversation, I think we can get into some of those areas and start to identify some of the potential use cases. - Yes, goodness me, that summarizing of data is important. It feels as though we're struggling with the volume of data that's already being hurled at us, so anything that can help us to filter that data and make it more concise and more manageable can only be a good thing in my book. More on that in a moment, David, but thank you again for joining us. Dr. Iro, you've been tracking deep science and deep tech and applying that to finance. Tell us about your role now, and whether you've seen enough to convince you that AI, after several winters, is finally moving into its spring and is ready to deliver some real value. - That's a wonderful question. I think the answer is that of a consultant: it depends.
(laughs) - Oh, there we go. - I started my career on the sell side, so I would argue that the sell side has always been more eager to embrace AI, machine learning, et cetera. I think things are now starting to happen more and more on the asset management side.
When it comes to AI and generative AI, there is a challenge around managing expectations. I don't think we are where we think we are so far; I don't think the field is mature enough. For example, I keep advocating for not yet having a fixed strategy for something that is so constantly evolving.
I do think we're probably four to five years away from where we think we are. Don't get me wrong, the potential of the tools is well understood and acknowledged, generative AI included; AI we have been using for, again, longer. But there is a maturity level that I think we have not reached yet. That would be my take. - Really interesting. And with all of the momentum and indeed excitement around AI, and the party tricks, a phrase I often use, particularly around generative AI at the moment, it's interesting to hear some comments about how AI maybe isn't as mature as we might like to think it is. Maybe we can peel away at what some of those barriers are, and how much further you think we've got to go to reach that level of maturity, later on. That'd be really interesting. But for now, Iro, good start, thank you. And Andrew, finally, you are already some way along your road in terms of implementing AI-type solutions at Allspring. Tell us about that, and what it is we really mean when we talk about AI in trading. Because AI, as far as I've always understood it, is actually quite a big umbrella term. You speak to some analysts and they'll say AI is a code name for things that don't actually work yet. (laughing) - That's probably too often true.
No, I would say we're definitely early in the journey and I describe our progress as phase one. We're looking at the low hanging fruit, we're looking at AI as a labor saving device.
Definitely more on the machine learning side and looking to transition eagerly into more of the generative AI and more advanced aspects.
But right now there's just a tremendous amount of work on buy-side trading desks that doesn't really add any value. So I think that's the first area that we're looking at: how do we make good traders better, and bring in some of these technologies that can do tasks that we all know are better done by machines? I mean, I think with generative AI, everyone's jumping into the very deep end of the pool, to very complex problems, when in the real world we actually have much simpler problems that we need help solving. So to the four-to-five-year answer that Iro mentioned, I think that's probably true. But in the meanwhile, there's a tremendous amount of mileage in these technologies taking on things that are probably below their ultimate potential.
- So low hanging fruit might be one way of terming that. And I think that that's probably where we kick off our discussion proper. And Andrew, I may as well throw this ball to you. Where is AI most applicable on the trading desk at the moment in terms of the low hanging fruit, in terms of scaling processes, helping people to do the thing that humans are able to do with more efficiency? - Sure, I think counterparty selection is one,
quickly analyzing recent data and kind of calibrating things for traders to analyze.
For a long time, we've used one-click trading and algorithmic wheels, so to speak, and AI fits in that nicely. You're just building something better than what we've been using for many years. So I think there's a tremendous amount of runway on that front. - Okay, and probably the same question to you as well, Iro. Given your experience and your longer-term view of AI, and maybe where we are on that maturity curve, depending on which curve you look at, what do you think traders and trading desks can be taking advantage of right now? Where is the ripe fruit for you? - Frankly, I think throughout the whole lifecycle of a trade you can find applications, right? From pre-trade research to order origination, to confirmation, settlement, regulatory reporting. All of these, especially the after-execution stages, can be quite tedious and not necessarily easy to follow, especially with regulations, et cetera. So that's definitely a place where I've seen AI being applicable. Then there are the more intelligent things, right? Like anticipating how you can execute optimally from a transaction cost perspective, from finding liquidity, all the predictive models we're seeing. And last but not least, to talk about generative AI, which everybody is excited about: imagine that you have traders inundated with information, let's say broker commentaries and market color, et cetera, that they are supposed to be able to synthesize as fast as possible, to focus on what matters. So, going back to what David said, summarization, for example, with generative AI can actually give you a big advantage.
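As a concrete illustration of the "executing optimally from a transaction cost perspective" point, the classic square-root market-impact rule of thumb can be sketched in a few lines. Everything below, the function name, the coefficient `k` and the example numbers, is an illustrative assumption for this episode's notes, not anything the guests describe using:

```python
import math

# A textbook pre-trade cost estimate:
#   cost_bps ≈ half-spread + k * daily_vol * sqrt(order_size / ADV)
# The coefficient k and all inputs are made-up example values,
# not calibrated parameters from any desk.
def estimated_cost_bps(order_size, adv, spread_bps, daily_vol_bps, k=1.0):
    """Rough expected execution cost, in basis points, of trading
    `order_size` shares against average daily volume `adv`."""
    impact_bps = k * daily_vol_bps * math.sqrt(order_size / adv)
    return spread_bps / 2 + impact_bps

# An order of 5% of ADV, 4 bps quoted spread, 100 bps daily volatility:
cost = estimated_cost_bps(order_size=50_000, adv=1_000_000,
                          spread_bps=4.0, daily_vol_bps=100.0)
print(round(cost, 1))  # about 24.4 bps
```

The point of the sketch is only that "anticipating cost" is a small, checkable model, which is exactly the kind of classical, pre-generative AI the guests distinguish later.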
- And David, then same question to you. Summarization of all of this data, which in its various structured or unstructured forms we're trying to make sense of in our various jobs, is one thing. Are there any other ways in which you can see generative AI, or, if you can step outside of the generative circle, any other types of AI, that maybe right now we can start to see some value in? - Yeah, I think you can break down the applications basically into three buckets.
Enrichment of information is certainly one. So this is where you bring a small amount of information to the AI model and it's able to enrich that information for you in some regard, whether that's in the simple case, writing some bullet points down about an idea and then converting that into a full email, for example, we're seeing that, for example, in customer services being incredibly powerful.
And that first one is the one I think most people sort of think of when they think about generative AI, because they think, oh, it's producing a lot more information.
The second one is condensation of information. And I think this is to Iro's point there about summarization. Can you take this PDF financial report of a company and condense it down into something that I can digest within a single reading session, or indeed down into something which I can then import into my database as structured information? And then lastly, transformation, I would say, is the last bucket. So this is the idea that we might wanna take data in one format and move it to another format. Generative AI models are very, very good at writing code.
And therefore you can do things such as write a prompt to get generative AI to convert one input format of data into another format of data, without you having to write that process yourself as a programmer.
And so I think this is opening up a lot of potential for people who aren't necessarily coders themselves to be able to write something that is effectively like code. Natural language is becoming the new coding language of the 21st century. So I think whatever applications are out there can fit neatly within one of those three buckets. - David, if I may- - Please do. - David, yes, Foster.
I'm curious, and we can discuss offline as well, but I'm thinking especially of large language model gen AI, because we might wanna touch on the non-language-related side, which is usually not discussed as much and is underappreciated in some sense. But I think of it as the three Cs. The first is content consumption: it can be Q&A, summarization, just information retrieval or extraction. Then it's creation: all the things you said, write an email, write code, et cetera. And the third, which is usually more difficult, is characterization, which is around attributes. So for example, in the trading case, recording what the sentiment in the market is through news. Characterization has to do with whether the language is positive, negative, uncertain, all of this. So I'm curious, I'd love to think about how our categorizations align, because they're closely related. - Yeah, that's really insightful. I love that categorization; I think that's even clearer, actually, than the three I mentioned, and there's an overlap, obviously, between them. - And we're already starting to see, I think, some case studies, some use cases for generative AI across various industries, including in finance. I'm thinking right now of a buy now, pay later provider who implemented generative AI into its customer services and, very recently at the time that we're recording this, published some figures in a press release about the savings. One thing they didn't say is how much they'd spent on this, and there is a real cost, on many different metrics and from many different angles, when it comes to generative AI and AI as a whole. But certainly in terms of how it was able to scale customer services operations, I'm sure we'll see many more of those. Andrew, I'll come back to you. We've spoken about the three Cs, maybe four Cs here, around what generative AI can bring.
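The "characterization" C that Iro describes, scoring whether market language reads positive, negative or uncertain, can be illustrated with a toy lexicon scorer. The word lists, the `characterize` function and the sample note below are all hypothetical; a real desk would use a trained model rather than hand-built word lists:

```python
# Toy illustration of characterization: profiling the tone of a piece
# of market commentary. Purely a sketch; the lexicons are made up.
POSITIVE = {"rally", "upgrade", "strong", "beat", "improve"}
NEGATIVE = {"selloff", "downgrade", "weak", "miss", "default"}
UNCERTAIN = {"may", "might", "could", "unclear", "volatile"}

def characterize(text: str) -> dict:
    """Count lexicon hits and return a crude tone profile."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = {
        "positive": sum(w in POSITIVE for w in words),
        "negative": sum(w in NEGATIVE for w in words),
        "uncertain": sum(w in UNCERTAIN for w in words),
    }
    counts["tone"] = max(("positive", "negative", "uncertain"),
                         key=lambda k: counts[k])
    return counts

note = "Analysts expect a strong rally, though earnings could miss."
profile = characterize(note)
print(profile["tone"])  # prints: positive
```

The design point is that characterization produces attributes (a label, a score) rather than more text, which is why it sits in a different bucket from creation or consumption.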
With regards to maturity, and coming back to something that Iro said earlier about her view on where we are in terms of maturity, or the tooling of capability right now: have you found maturity to be a barrier? Are there some things that you would like to be able to do, that maybe you feel as though you should be able to do, but haven't been able to do quite yet, because the capability, or the tooling to provide that capability, isn't quite there for you yet?
- No, no, I'd say for us,
the challenge is kind of in the implementation phase. I think there are a lot of very interesting technologies around, and it's really: okay, how do we bring this in, make sure we're overseeing it, everyone's comfortable, and put it into the implementation phase? So, in our development environment, a sandbox, a walled-off, non-production environment where we try and look at new technologies, there's a tremendous amount of potential. But to productize that and roll it out to an organization with 1,500 employees, that's the challenge. But I'd say we're close on a lot of fronts. Things that are being used, I think, are some automated news categorization and feeds, through some vendor programs that we've partnered with.
So, I think there are bits and pieces being used across the board, and I think it's only a matter of time before it really gets productized, used a little bit more end to end in a cohesive manner.
- Yeah, we're certainly starting to see big vendors, I'm thinking of the Microsofts of this world and so on, really doubling down on their generative AI implementation, for everything from how it integrates with your office suite, all the way back through the back office as well. Iro, you were the one that raised the point that maybe we're not quite as mature as some vendors might like us to believe we are yet. What do you see as the barriers? Where are we lacking in capability at the moment? - Well, let me start with the fundamental aspect of making the distinction between what gen AI is and what AI is. Okay, so when it comes to AI, anything from smart order routing to liquidity forecasting to whatever we've done, are doing, have been doing.
That's, let's say, the classical AI, dare I say. But I want to contrast that with generative AI. So generative AI is the tool that can give you a summary, and a summary means something that is simply shorter than the initial piece of text. It's not guaranteed that there's no material information omission. So it's not as simple as: I give you 10 pages, you give me back 10 sentences, good to go. Especially in such a highly regulated industry as finance, it's one thing to do that when the stakes are low. When, for example, somebody wouldn't have had the time to read a hundred documents anyway, who cares if the summary isn't exactly accurate? But you have other situations and use cases where you just cannot afford giving somebody a summary without ensuring that all the necessary information is in there, right? So that's what I mean when I say we're not there yet. People see a summary and they say, oh my God, you can do it in two minutes. Yes, but what is it that it did? It just shortened the text; it didn't necessarily give you a summary including all the important points. So having now to prompt-engineer around that, to tune around that, these big models that others have developed, that's where the challenge is. It's not necessarily an out-of-the-box solution.
It's not ready for consumption.
These models are generally trained to be conversational and very knowledgeable; as David said, basically they know everything that exists on the internet. But how do you make one give you the right summary, not just a summary? That is the challenge.
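Iro's warning that a shorter text is not automatically a faithful summary suggests one mechanical guardrail: extract the material figures from the source and flag any the summary dropped. The regex and the `missing_from_summary` helper below are an illustrative sketch, not a completeness guarantee:

```python
import re

def material_figures(text: str) -> set:
    """Pull out numeric figures (e.g. '4.5%', '$2.3bn') that a faithful
    summary of a financial document should probably preserve."""
    return set(re.findall(r"[$€£]?\d+(?:\.\d+)?(?:%|bn|m)?", text))

def missing_from_summary(source: str, summary: str) -> set:
    """Figures present in the source but absent from the summary:
    a crude omission alarm, not a proof of faithfulness."""
    return material_figures(source) - material_figures(summary)

source = "Revenue rose 4.5% to $2.3bn while margins fell 1.2%."
summary = "Revenue rose 4.5% to $2.3bn."
print(sorted(missing_from_summary(source, summary)))  # prints: ['1.2%']
```

A check like this only catches dropped numbers, not distorted meaning, which is precisely why, as Iro says, the user remains the best risk management tool.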
- And just to pull out the R word there as well. Maybe, Andrew, I'll come to you first of all with this, and David, if you have anything, then feel free to chip in too. In terms of regulation, working in regulated industries, when you are creating potentially synthetic data with generative AI in particular, or when you are using other AI systems to make automated decisions on your behalf, what are the regulatory challenges and hurdles that you need to understand, and be able to jump through, to make sure that a tool that is in theory meant to make you more efficient, more productive, more profitable, doesn't actually end up costing you on those levels?
- That's a great question. That's something we faced before AI really took off, when we first started implementing automation, and it really comes down to oversight. So any good systematic investment process has to have good attribution, a way of analyzing the past data, and then real-time oversight. The traders on my team are the ones calibrating and implementing, using the parameters to design these programs as a team, so they're the best ones to watch and monitor these programs in real time. And you're changing the equation of a trader being on the spot making these real-time decisions to being more like an airplane pilot, monitoring the equipment and systems to make sure that everything's working properly. On a lot of asset classes we've moved to one-click trading, and we didn't really have a need to go past that. We still want a trader to review these things before they go out the door: yes, looks good, everything's set up the way it should be. We do have some full automation, but I think that's where you can really make a difference in terms of regulatory behavior: you have to demonstrate that you have a way of managing and overseeing these processes. And I think it lends itself well, because you can't analyze and judge that these processes are working without that framework to begin with. So when you reach that point, I feel the groundwork has been pretty exhaustive and you're ready to go through that final hurdle. - And David, same to you, really, in terms of regulation here. But there are still,
as we often see with technology, cases whereby technology leaps ahead of the regulatory frameworks that provide governance over it. It strikes me that there's still a little way to go before we can really understand what some of the challenges are, let alone find ways to help organizations embrace the new technology while remaining compliant.
Yeah, I think it's important to point out that applying AI is a slider, not a switch. You don't just turn it on and then let it make all your decisions for you; that's obviously not a good way to do AI. I think it's important to recognize that companies can start small, and they can start by, like Andrew said, testing in a sandbox environment
and really treating the AI almost like a new employee. And this is a really interesting way to think about what AI can do for your company: what systems do you have in place to test that new employees are performing well and doing the right thing? Usually there's maybe some standardized testing that they have to go through when they join the company. Maybe they also have an oversight reviewer, or their manager reviewing their work regularly. And I think you can apply almost the same rules to any AI system that you want to test: come up with a test suite of 100 cases that you want to check, which a human would have to go through manually and score. We ran a project recently, actually, with a company that was looking to improve their translation of onsite material.
And we did a blind test with their staff to understand if the AI translation method was better than human translation. And the AI won, incredibly, in a fraction of the time it would have taken that translation team. But what was really interesting was that there were some more complex translations where the humans were still way above what the AI could do. And I think that's just such a great use case for showing how AI can augment what that team is doing, by taking away all the low-level work that they didn't really want to be doing anyway, freeing up their time for all of the higher-order tasks, like the complex translations where there's real cultural oversight needed. And so I would come back to the idea that, in terms of regulation, just do what you are comfortable doing within the regulation first. You don't need to take on all of these complex challenges on day one. There's gonna be a ton of low hanging fruit that you can just start to implement right away.
And you don't need to run immediately off with AI just because it's the latest cool thing. There's just so much opportunity before that that you can take advantage of. - I've heard it said, with generative AI in particular, and you mentioned new employees there, that generative AI is like having an army of infinite interns. And yes, you can ask them to go away and do certain tasks, but would you always trust what they came back with? No. And I guess the journey of implementing AI into your organization is training up those interns so that they can be more autonomous, more productive for your organization. On that theme of implementation, then, I guess we should talk about what those first steps might look like. Maybe you're working in a house where it's like, well, we need to start moving in this direction, but you don't necessarily know where those first steps are. So, Andrew, as someone who's already trodden this path, what are those first steps that you think companies should take to practically implement AI solutions? How did you start off, perhaps? - I started off with a task that David Foster already touched on, which was converting code. So we have a lot of vendor code, nothing proprietary that Allspring produced, but we need it in a different format, we're gonna compile it in a different language. So taking it from R to Python, from SAS to R, what have you. And it worked fairly well. There were still some errors, but it did 95% of the work and needed a little bit of correction. But I think that gave me a good feel for what it can do and what it can't. I mean, here's the 95% of the code base that was translated perfectly, and here's where the errors are, some syntax things, and that gave me a comfort level about where things will potentially go wrong, and what types of things.
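Andrew's "95% right, check the rest" experience suggests a simple verification pattern: run the legacy implementation and the AI-translated one side by side over a case list and flag divergences. The `vwap_original` and `vwap_translated` functions below are hypothetical stand-ins for the two versions; the parity check itself is the point:

```python
# Hypothetical stand-ins: imagine vwap_original mirrors the legacy
# (R/SAS) logic re-hosted for testing, and vwap_translated is the
# AI-converted Python version of the same calculation.
def vwap_original(prices, volumes):
    return sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)

def vwap_translated(prices, volumes):
    total = sum(volumes)
    return sum(p * v / total for p, v in zip(prices, volumes))

def parity_check(fn_a, fn_b, cases, tol=1e-9):
    """Run both implementations over test cases; return the indices
    where they disagree beyond the tolerance."""
    return [i for i, args in enumerate(cases)
            if abs(fn_a(*args) - fn_b(*args)) > tol]

cases = [
    ([10.0, 10.5, 11.0], [100, 200, 300]),
    ([99.5, 100.0], [50, 50]),
]
print(parity_check(vwap_original, vwap_translated, cases))  # prints: []
```

An empty result means the translation agrees on every case tried, which is exactly the determinate-outcome, easy-to-check property Andrew describes next.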
And then, as I mentioned before, when you look at the tasks that are very mundane and really below the potential of what this technology can do, I think those are things you can easily check. They have determinate outcomes, so it's easy to tell when they're wrong, and if they are wrong, you have several ways to correct or check them. So I think those are the things where, even if you find a small percentage of things that are wrong, if you have an oversight program, you're still saving yourself the time of having to do 98, 99% of the work yourself. So I think that's my attitude for getting started. And then there's everything on the post-trade side: because it's post-trade, you can keep your original attribution running and do them side by side. I mean, that's the data-creation space where you're basically running things in parallel, and eventually, when you reach that point and prefer the newer version, you'll probably switch.
- Iro, same question to you in terms of the first steps that you would recommend for beginning to implement and, going back to the point we started out with, getting some value out of these tools. What steps would you take? What caution would you advise?
- Experiment. So it is about identifying, indeed, low-stakes use cases. We discussed one earlier for generative AI: I don't have time to read the 100 research documents on the things I'm interested in, or the area or the industry I cover, so anyway, I'm not gonna read anything. Being provided with short summaries, even if they are inaccurate, could help me at least identify one or two that I should read. There you go, that's a low-stakes summarization use case, right? So that's the one. The second thing is education, especially when it comes to generative AI. People expect gen AI to meet them where they are. That shouldn't, and cannot, happen. We have to move and meet gen AI a little bit where it is, because it means a different way of thinking and, basically, a different way of working. And going back to supervision: your best risk management tool in generative AI is your user.
So in that sense, make sure you start educating people around proper use, effective use, responsible use, all this good stuff. And another thing we all have a tendency to do is to chase what everybody else is chasing. It doesn't have to be, indeed, like Andrew was saying, the most difficult problem or the most impactful one that you start with.
So maybe it's not even AI that you should use for several problems. Why should it be AI, right, from a cost-benefit-risk perspective? So having this maturity as an organization is also a good risk management tool: okay, where am I using it? Why am I using it? What are the benefits?
- Yeah, not implementing technology just for technology's sake. Goodness knows how many times I've had that conversation with leaders in the technology space in a number of different industries. David, there's an elephant in the room that I feel we do need to address here: change is always difficult, and, as history has taught us, we learn very little from history, and we are potentially changing the nature of people's jobs here. Your book is very much on the nose of this: generative AI is creating stuff, and we humans have always treasured that part of humanity as something that singles us out from other creatures, but also from what machines do. And as we start pushing the boundaries of what AI, and generative AI as well, is able to do, that does run the risk of disenfranchising a workforce who previously thought they were adding real value to an organization. So what do you suggest in terms of ensuring that you keep a workforce onside, particularly when there are so many unknowns? By the time this podcast goes out, goodness knows how many new AI tools will have launched, goodness knows which version of GPT we'll be on. It is changing very, very quickly. So how can we keep humans onside in the face of this change?
Yeah, it's a really great question, and you can get philosophical about this quite quickly, I think, so I'll try to keep this very much in the practical space. I think first and foremost, the low hanging fruit, the easy work, so to speak, that Iro talked about: these are tasks that actually, if you talk to most humans, they don't want to be doing.
And I think this is where there's less contention, actually, around taking jobs and taking tasks. And I think, actually, if you asked most people doing a certain job, they would prefer it if they had an assistant that could do some of that for them. It's like everybody having access to their own research assistant or their own personal assistant that can just do things for them that otherwise they would have had to do themselves. So a typical example might be composing that email that's been sitting there in draft for ages, that they just need to write but don't want to get the spelling wrong in, and what have you. Things like that can become a two-minute task instead of a 20-minute task, and they can spend the extra 18 minutes thinking about the content of that email and the message they want to get across. It's a very simple, almost trivial example, but I think you can extrapolate that out to so much of what we spend our time doing.
We don't spend enough time thinking about the content. We instead spend that time thinking about the nuts and bolts of how it's presented or the nuts and bolts of how it's constructed, and we as humans aren't designed to do that. We're very, very good at thinking of how to solve problems, and that's what people want to be doing, rather than thinking about whether they've got their grammar correct. So there are lots of things like that, I think, that are very easy to see being at least semi-automated. - And therein lies a whole podcast series in its own right. Let's bring it back to finance and to capital markets. And speaking of humans in the loop, the CMX team has been going out to the Capital Markets and Finance Hive community, and they've got some questions for us, knowing that we're gonna be talking about this. So, quick-fire question round, if I can, please.
This one is: where is the field of AI headed in the financial industry, and what are the potential implications for buy-side trading desks? Who wants to pick that one up? - Just further adoption, that's now becoming obvious to me, bringing all the efficiencies that we all know and expect. On the gen AI side, again, it's gonna take a little bit longer, but on the AI side, definitely efficiencies and, depending on maturity, even insights and ways to do things better, not just faster.
- Another question here from a CMX attendee: how can firms differentiate themselves in the market by leveraging AI capabilities effectively?
Andrew, differentiation, do you see an opportunity for differentiation from the market here? Indeed, is that one of the key things when you're trying to build a business case to get the investment necessary to implement some of this stuff, how you're able to differentiate your offering in the face of your competitors, one of the levers that you can pull?
- Yeah, of course. There's many ways to implement this technology, and I don't see it being implemented the same way everywhere. I could see, down the road years from now, there being kind of one model or one vendor that everyone's using to provide some of these technologies. But when you look at input parameters, differences in investment strategies, and the way that people are gonna implement these technologies, I see a tremendous amount of room for creativity and differentiation. So I don't see it as a one-size-fits-all technology.
- Yeah, and I would add to that actually, Andrew, as well. I think you should be very wary as a company of just buying the off-the-shelf product, like Microsoft Copilot or something, and thinking that you've done generative AI because now you've got that. I think if you want to create a differentiator, you have to start building in-house. Otherwise you just end up being another consumer of the product that everyone's got anyway. - So buy the commodity and build the differentiator is what I hear. - Correct.
- Good stuff. Thank you very much indeed. And thank you for your questions as well. We'll be inviting those for our future podcasts, more on those in a moment. And I'll make sure that we put those to our guests on the show as well. Just a final question, a final quickfire round question to each of you. For buy side trading heads, what is the one piece of advice as to how they should look at and consider AI for trading now? I guess this comes back to that next step. What's the one piece of advice that you would give them? Andrew.
- There's no substitute for experience. You know, who do you want analyzing and building these systems and models? A trader with 10 to 15 years of experience in capital markets, or someone with very little? I think blending the human experience with this technology is really where you're gonna shine and get the most benefit. So that would be where I would start: having the experienced traders on the trading desk building, being intimately familiar with this, and then ultimately supervising this technology.
- And that's where we start turning AI from artificial intelligence to augmented intelligence. There's one big vendor I heard from a little while ago that said you're standing shoulder to shoulder with the machine, it's not pushing in front of you. Iro, next step, one big piece of advice for traders looking at implementing this technology. Where should they look first?
- Well, they should look into understanding what's possible with the technology and establish, again, a mature cost-benefit-risk kind of framework. Because again, I often see us as having an enormous hammer and thinking that every nail is for our hammer, or we're going the other way: we have a technology and we're trying to find somewhere to apply it. And I empathize. The people that are not in the field, they are like, "Oh, are we doing GenAI?" But having this maturity and this education, maybe people that bring this to the organization, that use it where it should be used and get the best out of it, understanding that is, I think, the most important thing to make good use of these technologies. - It's that age-old conversation of the CEO or senior partner getting off the plane, having gone through an airport where goodness knows how many enterprise IT vendors are advertising, and the first thing is getting on the phone to the CTO or CIO saying, "We need some cloud. We need some big data. We need some AI. That's what everyone else is doing," without any real idea of which of those nails to hit. I like that: we've got a hammer, which of those nails is the best fit to hit? David, one piece of advice for those considering AI in trading right now, where would you steer them?
- Yeah, I think I can talk about maybe what Project One should look like. From our experience as a consultancy, I think Project One is really something that should be effectively your own implementation of a large language model in-house. For example, standing up your own Azure OpenAI service and allowing your staff to then use that. And I think that's Project One because the last thing you want people doing is putting all of your private, confidential information into the public-facing ChatGPT or Gemini, whatever you want to be using. I think shutting those things down internally and saying, "Look, if anyone wants to use GenAI, we've set this project up. It's a private instance. That data isn't going anywhere. It's staying within our Azure cloud or whatever it might be." You can put documents in there, whatever it might be, because that's all secure and private.
So Project One doesn't have to be anything more complex than that because then from that will spring ideas from people that are using it.
The worst thing you can do is just not to start, I think.
We see companies sitting on their hands, wondering what Project One should be. And we're like, "Look, this is a simple thing you can do. It's not gonna take months, and it's gonna open doors." And you'll start to learn a lot more from your users and your staff actually leveraging the technology.
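[Editor's note: for readers who want a concrete picture of the "Project One" David describes, here is a minimal Python sketch of routing staff questions to a firm's own private Azure OpenAI deployment rather than a public chatbot. The endpoint, deployment name, and environment-variable names are illustrative assumptions, not a prescribed setup.]

```python
# Minimal sketch: call a private Azure OpenAI chat deployment (stdlib only).
# Staff questions go to the firm's own tenant, not a public service.
import json
import os
import urllib.request

API_VERSION = "2024-02-01"  # Azure OpenAI REST API version; check your tenant


def build_payload(question: str) -> dict:
    """Wrap a staff question in a chat-completions request body."""
    return {
        "messages": [
            {"role": "system",
             "content": "You are the firm's internal research assistant."},
            {"role": "user", "content": question},
        ]
    }


def ask_private_llm(question: str) -> str:
    """POST to the firm's private deployment; data stays in its own cloud."""
    endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]      # e.g. https://myfirm.openai.azure.com
    deployment = os.environ["AZURE_OPENAI_DEPLOYMENT"]  # e.g. internal-gpt4 (assumed name)
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={API_VERSION}")
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "api-key": os.environ["AZURE_OPENAI_API_KEY"]},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]
```

In practice a Project One like this would sit behind the firm's single sign-on, with the API key held server-side, so staff never handle credentials directly.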
- This has been a great conversation. I feel as though, between the three of you and the questions from our audience as well, we've really been able to peel away at where we are, where the good applications are, those low-hanging fruits. I love the three Cs, three and a half Cs if we add that classification in there as well. And I think it does go to show the value of conversations, of sharing thoughts with peers, with other people who've maybe been down this road before us, in helping to shape our own strategies as well. Iro, David, Andrew, thank you very much. - Thank you. - Thank you. - Thanks so much.
- On the next episode of the CMX podcast, we'll be focusing on T plus one implementation: how trading desks are preparing, implementation requirements, and what you need to know ahead of the deadline. Make sure to subscribe to get that as soon as it goes live. And if you like what you heard today, then make sure you share it with your trading community and leave some feedback. We love feedback. We love your questions too, via the Finance Hive. Make sure also you're subscribed to the CMX on LinkedIn. And if you haven't already, head to the CMX website. That's the-cmx.com.
To find out more about and register for the Capital Markets Exchange event in London on June the 27th, 2024. But for now, from me, David McClelland, and from all of our panel today, thanks for listening. Until next time, bye-bye.