Interview with Munib Ali
“There's a need for a move towards more dynamic assessment of risks”
Maurice
Welcome to another edition of C&F Talks, where I get to speak to and interview a speaker at one of our forthcoming events. Today, I'm here with Munib Ali, who is a Partner at AlixPartners. Munib's going to be speaking at our Market Abuse and Market Manipulation Summit, which is being held in London on the 28th of April. Munib, welcome.
Munib
Very nice to be here, thank you for inviting me.
Maurice
Our pleasure.
The importance of AI use in market abuse surveillance
There's been a lot of discussion about the role of AI in relation to market abuse, both in terms of insider dealing and market manipulation. Why is it so important to discuss the use of AI in this context?
Munib
Well, Maurice, the widespread use of AI, which we are seeing, and the accessibility of AI have resulted in the risks relating to market abuse changing, and changing quite rapidly. And not just changing rapidly, but elevating as well. So, you might recall at last year's Market Abuse Summit, which you organised, I delivered a presentation about how the risks of market abuse are changing as a result of AI.
And I spoke quite a lot about the way trading activity is changing. I spoke about the capabilities of AI, and about why the way it's elevating the risk of market manipulation is something that we need to address. So, with that in the background, I think there are two key reasons why it's very important to discuss AI in this respect.
Number one, with respect to the risks that are changing and elevating, which I just mentioned, the traditional rules-based techniques to combat market abuse really won't work in future, in my view. Those technologies simply will not be able to compete with the evolution of the risks that are now arising as a result of AI. So, we need to be talking about how AI can be and is being used to combat that risk. So, that's the first reason why it's so important.
The second reason, Maurice, is that apart from the need to incorporate AI in the way we combat market abuse, it also presents us with an opportunity to undertake market abuse surveillance more effectively and efficiently, and to address some of the limitations that firms have experienced in the past with more traditional rules-based techniques. So, not only is it important in terms of giving us the capability to address the new or elevated risks of market abuse, it presents us with opportunities as well, and that's why it's important.
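To make the contrast between rules-based and AI-based surveillance concrete, here is a minimal, purely illustrative sketch in Python. The field names, thresholds, and the choice of an isolation-forest anomaly detector are assumptions made for the example, not a description of any vendor's system or of AlixPartners' approach.

```python
# Illustrative only: a static rules-based alert versus a simple anomaly-detection
# pass over the same trades. Field names and thresholds are assumed for the sketch.
from dataclasses import dataclass
import numpy as np
from sklearn.ensemble import IsolationForest


@dataclass
class Trade:
    notional: float        # trade size in base currency
    price_move_bps: float  # post-trade price move in basis points
    order_to_trade: float  # trader's order-to-trade ratio that day


def rules_based_alerts(trades, notional_limit=5_000_000, move_limit_bps=50):
    """Fires only when a trade breaches fixed, hand-set thresholds."""
    return [t for t in trades
            if t.notional > notional_limit and abs(t.price_move_bps) > move_limit_bps]


def anomaly_alerts(trades, contamination=0.01):
    """Flags trades whose combination of features is unusual versus the population."""
    features = np.array([[t.notional, t.price_move_bps, t.order_to_trade]
                         for t in trades])
    model = IsolationForest(contamination=contamination, random_state=0)
    labels = model.fit_predict(features)  # -1 marks outliers
    return [t for t, label in zip(trades, labels) if label == -1]
```

The first function only fires when fixed thresholds are breached; the second flags trades that look unusual relative to the wider population, which is closer in spirit to the AI-driven approaches discussed above.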
Maurice
Yeah, I can see that.
Practical obstacles to using AI to combat market abuse
So, obviously, organisations are keen to implement AI-based solutions, but what are the practical obstacles they face when using AI to combat market abuse?
Munib
Yeah, so from the work that we've been doing with organisations, we see some obstacles, some hindrances. One of them, of course, is the fact that resources are always limited, and firms have to make decisions about how to allocate those resources, because there is naturally and logically a need to think, in the first instance, about how firms can use AI for value-added and business-creation opportunities. So, I would say when we first started talking about AI a few years ago, a lot of the conversation was about how we can use AI to enhance our business proposition.
But over the recent past, we've certainly seen a lot more allocation of the resources and budgets to exploring how AI can be used in the control framework and in the second line of defence. But still, the allocation of limited resources can be an obstacle for many organisations, particularly those that aren't very large.
Another obstacle, which will come as no surprise to viewers, is data. For AI to work effectively, we need good data going into it, to make sure it delivers the right output and isn't biased. And the quality and complexity of data, particularly within the larger legacy bulge-bracket banking organisations, is still an issue. We've been speaking about data quality for quite some time, and it remains an issue.
The number of systems holding the different forms of data that need to feed into AI for it to work effectively is a challenge for organisations. Perhaps less so, I would say, for the fintechs, the challenger banks, those newer organisations, and indeed even the crypto firms that we work with. They face fewer obstacles when it comes to data, which is interesting.
The other thing I would mention, and there are many obstacles, but the last one I would mention, is the understanding of the risks that we want AI to help us combat when it comes to market abuse. When we work with organisations on market abuse, we still find a number of them where the understanding of how the risk is changing is rather limited. And that's partially because, historically, risk assessments have been quite static rather than dynamic.
And I think there's a need for a move towards more dynamic assessment of risks, because they're changing so quickly. What we've seen in the past, and still see with many organisations, is that firms choose to purchase off-the-shelf surveillance systems without adequately considering the specific risks that are relevant to their businesses. So that understanding of the risks, and keeping it dynamic, is an obstacle to the effective use of AI.
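As an illustration of the static-versus-dynamic point, the sketch below recomputes risk scores from a rolling window of recent surveillance outcomes rather than relying on a fixed annual register. The risk taxonomy, weights, and window length are hypothetical choices made purely for the example.

```python
# Illustrative only: risk scores derived from recent alert outcomes,
# refreshed on demand rather than fixed at an annual review.
from collections import defaultdict
from datetime import timedelta


def dynamic_risk_scores(alerts, as_of, window_days=90):
    """Score each market-abuse risk type from a rolling window of alert outcomes.

    `alerts` is an iterable of (timestamp, risk_type, confirmed) tuples;
    `as_of` is a datetime marking the assessment date.
    """
    cutoff = as_of - timedelta(days=window_days)
    scores = defaultdict(float)
    for ts, risk_type, confirmed in alerts:
        if ts >= cutoff:
            # Confirmed cases weigh far more than unconfirmed alerts.
            scores[risk_type] += 1.0 if confirmed else 0.1
    return dict(scores)
```

Re-running something like this as new alert outcomes arrive keeps the assessment current, in contrast with a static register that is only revisited periodically.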
Maurice
Yeah, and I can see that. I can also see those problems with legacy systems. The traditional banks have real problems with that, don't they?
The future of AI for market abuse surveillance
But looking forward a little, how do you see the future of AI in relation to market abuse surveillance? People are increasingly sophisticated in what they're doing, so you might have cross-product, cross-dataset manipulation, which is quite hard to track. How do you see AI rising to that challenge in future?
Munib
Yeah, well, it's developing quickly. There are technology providers who are certainly now at least claiming to provide the capability to address the historic challenges and limitations around scenarios like the ones you've mentioned, such as cross-product and cross-market, cross-venue manipulation. And certainly, the use of AI will support that.
As I said, the future of AI in this area is developing quickly. Almost all of the providers we know are investing heavily in incorporating AI into their technologies' capabilities. And indeed, many of the fintech firms and smaller organisations I mentioned, who are perhaps more agile, are developing these technologies themselves in-house. So, we're seeing that more than before. There is a lot happening right now, and it's developing very quickly.
But from my experience and point of view, I think there are a few critical success factors for this to work, in the context of being able to successfully use AI for surveillance in future. Just to go through a few of them: they include, for instance, addressing some of the obstacles I spoke about, in particular things like data quality, which does need to be addressed.
There are mechanisms for doing that, but firms need to be ready to address the challenge, because it won't be solved in the short term. That's a very strategic conversation that we normally have with our clients. So that's one of the critical success factors: overcoming some of these key obstacles.
Another one is to do with the fact that criminals typically seek to exploit weak chains of information.
Maurice
Yes.
Munib
And those chains of information go across different sectors and stakeholders. It's not possible to confine the chain to just one organisation, because criminals will seek to utilise more than one organisation to commit crime and abuse the markets, for example through manipulation of the market. So, in my view, the future is one where there is a need for a stronger framework of collaboration and sharing of intelligence.
We talk a lot about this when it comes to fraud, for example; there's a very clear case for it there, and a lot of conversation happening in the industry about collaboration and sharing of intelligence. The same is needed in this area. And that's why I think it's so important that we continue to discuss it at conferences such as the one you're organising, bringing together regulators and the other parties to talk about this.
And I'm sure at the upcoming summit, that will be a key point of conversation.
And just one final point I'll mention when I think about the future in this area. I think in future we will need to find a balance between, on the one hand, the guardrails and safeguards around utilising AI, so that it doesn't go rogue and we have sufficient understanding of what the AI is doing, and, on the other hand, enabling the AI to self-learn and react quickly to the actions of criminals. Of course, market abuse doesn't just occur through the deliberate action of criminals; it can also happen inadvertently. But if we focus on deliberate action, we need the AI to be able to react quickly to the way criminals are changing their behaviours.
And so, there is a balance to be struck between having the safeguards and controls around AI, and the very important input of humans who understand what it's doing, versus allowing it to react quickly. I think in future we need to find that balance. That's a conversation to have with the control functions, the second and third lines of defence, the regulators, and the technology providers. So, I think that's an important point that we'll be talking about for a little while as we head into the future.
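One hypothetical way of expressing that balance in code is sketched below: the model is allowed to retrain itself on new data, but any retrain that shifts its behaviour on a fixed set of baseline cases beyond a tolerance is held back for human review. The `retrain` and `score` methods, the drift measure, and the tolerance are all assumptions for illustration only.

```python
# Illustrative only: a self-learning model is promoted automatically unless its
# behaviour drifts too far from a fixed baseline, in which case humans decide.
def guarded_update(model, new_data, baseline_cases, drift_tolerance=0.05):
    """Retrain, then compare scores on a fixed baseline before promoting."""
    candidate = model.retrain(new_data)                    # self-learning step (hypothetical API)
    before = [model.score(case) for case in baseline_cases]
    after = [candidate.score(case) for case in baseline_cases]
    drift = max(abs(a - b) for a, b in zip(after, before))
    if drift > drift_tolerance:
        return model, "held_for_human_review"              # guardrail: keep the old model
    return candidate, "promoted_automatically"             # fast reaction to changing behaviour
```

The design choice being illustrated is simply that speed of adaptation and human oversight sit on either side of an explicit, reviewable threshold rather than being left implicit.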
Maurice
Yeah, I think that's entirely right. And that inherent tension between innovation and realising the potential of AI on one side, and guardrails and regulation on the other, is something we'll be exploring at the summit. So, for our viewers, if you'd like to hear more on this and related issues, the Market Abuse and Market Manipulation Summit is being held in London on the 28th of April. We'd love for you to come along, attend in person, and join in the discussions. Further information is available at our website, www.cityandfinancial.com.
And it just remains for me to thank you very much, Munib, for joining us today. Very interesting. We look forward to seeing you on the 28th of April.
Munib
Indeed, yes, looking forward to seeing you and the attendees.