interview with Isabel Simpson & Rosehana Amin
“We are seeing a global rise in AI laws”
Maurice
Hello everybody and welcome to another edition of C&F Talks where I have the pleasure of speaking to some of the speakers from our upcoming events. Today, I have with me Isabel Simpson and Rose Amin who are Partners at Clyde & Co, both of whom are going to be speaking at our Data, AI and the Future of Financial Services Summit in London on the 16th of March. Welcome, Isabel and Rose.
Isabel & Rose
Thanks Maurice, lovely to be here.
Maurice
Great to have you with us. Let's turn to our first question.
The UK government's vs others’ approach to regulating AI in FS
How would, and I think this is for you Isabel to address isn't it, how would you describe the UK government's approach to regulation of AI in the financial services sector and how does the approach here in the UK differ from that in other jurisdictions?
Isabel
This is a really interesting area because we are seeing a global rise in AI laws, and in the UK we've also seen an increasing amount of guidance being released from the different regulators. Within the UK there have been different perspectives to balance here, between those who are really looking to concentrate on the benefits of AI and those who are really concerned, and rightly so, about the risks that using AI exposes us to. And there have been different voices within the UK government and also within the House of Lords weighing in on this, and many of the listeners will be aware that there remains a draft AI bill in play.
But my impression is that there continues to be a very strong message from the UK government that the UK should be a leader in innovation, in technology and in AI. And one of the ways it's going to achieve that is by continuing to take a sector-led principles-based approach to regulating AI rather than implementing a new AI-specific law which some other countries have done. And that might seem strange to organisations who are new in this area or who have less developed or mature compliance models, but actually in a highly regulated industry such as financial services, whilst AI may introduce new risks, in essence this is really another technology risk and we already have laws in place which regulate how we should be using AI. And what the regulators are looking to do is they are looking to produce guidance which addresses the new risks that AI brings to us.
Maurice
And how do you see that, just to explore it further, in comparison to the EU AI Act? What do you think are the pros and cons of the UK's approach versus that very prescriptive approach of the EU?
Isabel
The EU has taken an approach, as you say, that is prescriptive, and they are really looking at it almost in a product liability sense. So, they are asking organisations to risk assess the use of the AI, and there are obligations on those who have created the AI and different obligations on those who are using the AI, and it is very prescriptive in nature.
Arguably though, the [EU] AI Act is not the only law that organisations in Europe need to think about. They also need to think about all of the different laws which are already in place, and that includes laws very similar to those we have in the UK around health and safety, data protection, financial services regulation and intellectual property. There are lots of different things to consider.
So, what the European model is looking to do is to be more prescriptive which I think some organisations find very, very helpful but the UK is looking to really strike that balance and promote innovation.
Key legal and governance risks around AI
Maurice
Just broadening it out from the regulatory approach, what are the principal legal and governance risks around AI?
Isabel
It's a broad question and I think it is worth reminding listeners of the legislation that we already have in place which governs the use of AI but first I think it would be worth highlighting a couple of areas that listeners should consider. The first is AI dependency and that is where organisations embed AI into their systems to such an extent that they cannot perform without it, and they don't quite know where it is. That is a huge operational resilience issue and if you don't have a human in the loop then that dependency can move AI from being an enabler, being used to drive efficiency within the organisation, to being a decision maker and that is something that in my view AI shouldn't be. AI is an enabler, it is not a decision maker, it should just enable decision making.
With that in mind, in the UK we do have a number of existing laws, a couple of which I mentioned earlier: data protection, employment laws, consumer laws, competition laws, equality legislation, intellectual property considerations, online safety considerations. There is a huge amount. And I think where organisations are struggling is actually pulling all of those together and creating one compliance framework. What organisations need to consider when they are thinking about governance and about that framework is that this is not a brand new risk. This is technology that has increased risks associated with it, but they shouldn't be starting from scratch; they should be uplifting the compliance model they already had in place for the procurement of technology rather than looking at AI in isolation.
Maurice
Yeah.
Main contractual and commercial risks when drafting third party vendor agreements
And I suppose one of the big risks, isn't it, is the reliance on third-party vendors. What are the main contractual and commercial risks when drafting those sorts of agreements for procuring AI systems, particularly given the FCA's focus on exposure to third-party suppliers?
Isabel
Yeah, absolutely. I mean, the biggest risk in that area is that organisations really need to understand what they are buying. We are seeing an increased risk of what's called AI washing, which is where organisations say that they are using AI, or that the technology is AI, when in fact it isn't. So when organisations are procuring AI, they really need to understand whether it is AI that they are procuring or whether it is just another piece of software, and that's quite difficult when we are working in an environment where AI, quite frankly, is fashionable and attaching AI to absolutely everything is becoming increasingly fashionable. That comes with a lot of risk, and we're seeing that.
We've seen it in Europe, we've seen a number of cases over in the US, and no doubt we will see it in the UK as well, where organisations claim to be using AI when in fact they aren't. So understanding what you are buying, and the solution it is meant to provide for your organisation, is incredibly important.
Maurice
And I guess there's a sort of asymmetry in bargaining power to some extent, isn't there, between very, very large tech companies and the customers who are taking on their systems, which perhaps increases some of that risk. But perhaps we can explore that a bit further.
Litigation risks from AI usage
Rose, I think you were going to tell us a little bit more about the litigation risks arising from AI usage and I suppose that litigation may be part of that vendor customer relationship, there's also customers and staff and wider society, but what are the main litigation risks?
Rose
Yeah, that's a great question, Maurice, and it's really an extension of the really helpful points that Isabel made, namely that AI is meant to be an enabler and not a decision maker. But inevitably, when it becomes a decision maker, and perhaps organisations like banks or financial institutions aren't fully dialled in to understanding that, they can become responsible for that automated decision making.
So for example, in an employment context, if you employ AI algorithms to review the CVs of potential candidates, there might be bias, and you might not even be aware that those algorithms have biases. As you say, Maurice, there's this dependency on a supply chain, and the lack of visibility means you inherit any potential errors or biases, and that may result in potential employees, candidates, making claims against employers for algorithmic bias because of that automated decision making.
So, to Isabel's point earlier, human oversight is really important, and failure to provide it can create litigation risks that organisations may not have fully anticipated.
Another area that I think will be really interesting for financial institutions is the use of AI to on one hand innovate and improve customer service, to bring speed and expediency in the customer experience, for instance by use of a chatbot. The question again is, should you be liable for what a chatbot says?
Now there have been cases out there, particularly in Canada, where, by way of example, an airline was held liable by a small claims tribunal for a misrepresentation by a chatbot which gave wrong advice, and it was forced to pick up the cost for that customer. Now imagine if that chatbot gave incorrect advice to 30,000 customers. Would an employer or an organisation be expected to pick up all of those costs? That's a clear litigation risk, and perhaps it's no different to a product liability claim. At the end of the day, a chatbot using AI technology is another form of product being utilised to enhance customer service, but again, if there isn't good visibility, it can create another litigation risk for an organisation.
And of course, as you picked up on, Maurice, there are the contractual elements of that intricate web of different parts of the supply chain, whether you depend on a third party or the supplier behind them, because there is an issue of different responsibilities and obligations, and of what rights you may have, commercial or otherwise. That can really expose an organisation to contractual liability if there are flash points or failures to deliver services reliably.
Maurice
Yeah, yeah, there are so many risks there. I mean, one of the things I wonder about is agentic AI, when you have autonomous agents going out performing all sorts of tasks and taking decisions, admittedly set within the parameters of the agentic AI software, but nonetheless taking decisions, and the litigation possibilities from that must be pretty large, I'd imagine.
Rose
Exactly, and I think a key issue is that it's important to first identify who owns the risk. When you deploy technology, agentic AI or any other form of LLM, and someone holds you liable for algorithmic bias or any other issue, who owns that risk? What do the contractual agreements say?
Now, in the EU we have the EU AI Act, which prescribes where responsibility sits, where ownership should be, and the enforcement action and penalties that may follow if one falls foul of those prescriptive measures. But there is no such structure in the UK, and so when you adopt these technologies, which evolve over time and take autonomous decisions, it's really important to ask: who's ultimately responsible? The person who provided you with that technology, or the person adopting that technology and executing it?
Maurice
Yeah, absolutely.
How AI will change the cyber threat landscape
And of course, AI will change the cyber threat landscape immensely, both in terms of attacks and their sophistication and potential I suppose also a plus side on their defences, but how do you think the financial institutions should assess this particular risk?
Rose
Now that's a great question. As much as AI is a helpful tool, in that it can improve the security and perimeter testing of IT infrastructure, which may previously have been static, it can evolve that into something dynamic.
You no longer need someone looking at a SOC screen 24-7 to be able to pick up on red flags; it can be automated, and that's a good thing for financial institutions. But equally, the threats are becoming sophisticated. The scalability of these attacks, particularly scamming and luring customers into payment diversion frauds, is increasing, and the ability to trick people into handing over money is becoming very sophisticated.
An example of that is AI deepfake fraud: impersonating CEOs, CFOs and senior officials especially, or simply pretending to be a bank and tricking customers into handing over money. That's something we're going to see more and more, not just in the form of video deepfake frauds but also audio deepfakes, and we've seen this in the form of threat actors sending WhatsApp audio messages trying to pressure individuals into handing over money. I think it's really important for banks, financial institutions and customers to have the framework in place to guard against such attacks and threat-actor tactics.
Maurice
So, you know overall whilst it presents so many huge opportunities for productivity and growth for individual corporations and the economy, there are huge risks associated with it as well.
Advice for financial institutions
Given everything that you've both said, and you've really shed some light on these issues, what's your advice about what financial institutions should be doing now?
Isabel
From my perspective, Maurice, I think it's really about education.
So, whilst we lawyers know what laws are out there and what organisations should be doing, actually the people who really matter are the people using the AI, who want to use it day to day as an enabler. So organisations really need to do their best to educate their people about what they should be doing, what they shouldn't be doing, what the risks are, and also what the opportunities are.
Rose
And from my perspective, Maurice, as I said earlier, identifying where the risk sits and understanding who's responsible for it will give financial institutions clarity as to how and where they need to tackle these risks. But in relation to those cyber threats, it's important to acknowledge that they are not going anywhere. This is a board-level issue, and the AI capabilities that are going to accompany cyber attacks mean that resilience and preparedness are key.
I've sat down with financial institutions and run cyber preparedness workshops, and it's always such a fascinating learning experience for the C-suite, for people in HR, for people in the legal team. The key takeaway is that each part of the organisation and business really needs to be dialled in, coordinated and ready to triage and deal with an issue that could well cripple the business if a cyber attack were to occur. So, preparedness is key.
Maurice
Thank you both for that and for our viewers if you'd like to hear more on this subject, do have a look at our website www.cityandfinancial.com for the details of the Data, AI and the Future of Financial Services Summit which is being held in London on the 16th of March. We very much hope that you, our viewers, will be able to join us.
Just remains for me to say, thank you so much Isabel and Rose for your time today, looking forward to seeing you on the 16th.
Rose
Thanks, Maurice.
Isabel
Thank you.