Interview with Rachael Annear & Giles Pratt
“The UK is still taking an iterative and pro-innovation approach rather than a pro-regulation one”
Maurice
Hello everybody and welcome to another edition of C&F Talks. I have with me today, Rachael Annear and Giles Pratt of Freshfields, who are going to be speaking at the AI Regulation Summit, which is being held in London on the 27th of January. Rachael and Giles, welcome.
Rachael
Thank you.
Giles
Good to be here.
Maurice
Great to have you both with us today.
How AI Regulation is likely to evolve in the UK
Obviously, this is a very interesting, pertinent and topical subject.
So, let's start, if we may, Rachael, with you. How do you think AI regulation is likely to evolve in the UK and which skills are required to comply with it?
Rachael
Well, the UK's AI legal landscape is mainly defined by existing tech-agnostic laws, and we think that's likely to largely remain the case. The UK's approach to regulating AI has been shaped by the existing strength of its AI sector, so regulatory policy is focused on maintaining the UK's position as a leader in AI innovation, underpinned by an awareness of the growth potential in the way AI is regulated.
And the continued acceleration of generative AI, and the way in which it's being adopted, is likely to prompt some changes to UK law, but we think those will be limited. So, for example, the government has outlined plans to put requirements on those developing the most powerful AI models, but those requirements haven't yet been defined. We expect some of the current voluntary safety commitments already in place to be made mandatory, but the government has stressed that the reforms would only impact a narrow range of organisations, a handful of tech companies, and so won't directly affect many businesses in the UK.
The other area the government is considering is how to reconcile the interests of the UK's creative industries with those of AI developers, and whether to introduce reforms to UK intellectual property laws. But by and large, the UK is still taking an iterative and pro-innovation approach rather than a pro-regulation one. And so, for many businesses it's good news: unlike the EU's wide-ranging new AI law, we're not expecting that kind of approach in the UK.
And so, there's the benefit of a lighter compliance burden, or of avoiding a major compliance challenge, we could say. And there has been a broad degree of cross-party consensus between the government and the opposition on this approach, so we think it's likely to continue.
But that's not to say the UK is a low-risk jurisdiction. There are already a number of robust laws and regulations in place which govern the areas AI is impacting. So, for example, we've got data protection, intellectual property, which I've mentioned already, product liability, equalities laws, competition laws, even online safety, which is still developing, and obviously sector-specific laws such as financial services. And regulators have for some time now been expected to interpret and prioritise their existing regulations to address the risks around AI, and there are five overarching principles the UK has proposed to assist regulators in managing that.
So, with this patchwork of UK laws and regulatory remits, the skills question, which was I think the second part of your question, really comes down to organisations having access to advisers with the right knowledge and skills to help them navigate those UK-specific regulatory issues. The right governance structure is also really important to help manage the ever-evolving challenges.
And finally, where AI projects are cross-border in nature, as they often are, making sure that the regulatory issues can be addressed in all the relevant jurisdictions, and recognising that different countries take different approaches.
Maurice
So, the UK is taking a fairly distinctive, evolutionary approach to this, but as you mentioned briefly, there's the EU AI Act, and I think the only other comprehensive system is the one that China's putting in place, with a rather different approach.
How AI Regulation is evolving outside the UK
So, looking at cross-border and probably globally then, Giles, how is AI regulation evolving outside the UK? And are we ever going to see, or in fact, do we ever really need an AI regulator on that global basis?
Giles
You're absolutely right. You do get very different views across the world on how AI should be regulated, and that has given us quite a fragmented regulatory approach. In the face of that, you even see businesses reacting and pulling back in some places, even withdrawing services from the EU, for example, because of local laws there. So you've got many countries, and US states, continuing to rely largely or exclusively on existing laws to deal with AI.
So as Rachael said, major jurisdictions like the UK have new policies and guidance to streamline how existing regulatory systems apply to AI. But in other cases, you've got businesses who are struggling because they have very little guidance to help them understand how regulators are going to apply existing laws. And you've got the EU, obviously, taking the opposite approach, creating new laws like the EU's AI Act.
There's also the draft AI Liability Directive, and collectively that's going to impose quite extensive obligations on a broad range of AI systems and models. The AI Act, which I appreciate lots of people are fluent in these days, is probably the most extensive piece of legislation out there. It's modelled, perhaps unusually in this space, on product safety laws, and it imposes obligations on a wide range of providers, deployers and others in the AI value chain.
People are paying close attention to that right now because the rules on prohibited AI practices are coming into effect quite soon, and people are planning how to deal with the regulatory rules on high-risk AI; and of course people will be aware that the fines go up to 7% of annual turnover at the top of the scale. But there's quite a big difference between what you see in Europe and the rest of the world. You've got, as you mentioned, AI laws in certain US states, Colorado and California among them, as well as China, which has specific laws around different types of AI, including one more comprehensive rule around generative AI.
But then you've got other jurisdictions like Brazil, South Korea and Canada where draft laws are being proposed, and the scope of those laws varies, so it is a bit chaotic for companies to have to track. You've got some places that will implement only narrow AI laws, looking at a particular aspect of AI, maybe transparency, and we see that in some of the states, Utah, Illinois, Maryland. And then you've got some laws that are focused on what's been newsworthy, so laws around protecting electoral processes, for example, from the perceived threat from AI. Where's that going to take us? Well, the outlook in any jurisdiction can change pretty rapidly and is not that simple to predict.
If you'd asked this question a few months ago, there were some early movers among countries pursuing AI-specific regulation, Thailand among them, that are now showing a more cautious approach. You've got others that were initially quite cautious, like India, which are now more open to AI-related legislation. And then, of course, the US is big news for everyone, and it's likely that President-elect Trump is not going to take the same approach to AI regulation as we've seen from the Biden administration, so clearly lots of change.
With all that fragmentation, I think the chances of a global AI regulator are essentially zero. Would you end up with broad-based, cross-border regulatory collaboration? I think that's hard too. There is already, by the way, a legally binding treaty on AI, signed through the Council of Europe, which includes the EU, the US and the UK, but it has limited enforcement mechanisms and leaves substantial discretion to signatory states. So whilst it covers some good principles, I'm not sure people will feel it takes them much further forward in terms of what businesses need to implement day to day.
I think probably the most concrete example of international collaboration is through the national AI Safety Institutes. They are there to evaluate the safety of the most powerful AI models, and to my understanding that work has seen good international collaboration. But of course, subject to the point about whether new laws are introduced, it still relies on voluntary cooperation from the model providers, and it only addresses one narrow aspect of AI; candidly, many businesses won't have to worry about that type of so-called frontier risk.
Maurice
So, as you say, there's zero chance of a global AI regulator, and in a sense I suppose that's no different from quite a few other areas of corporate activity.
Main components of a successful AI Governance programme
But looking at the situation, if you're a user, or indeed a provider, of AI, you have to have in place, I guess, fairly rigorous AI governance programmes: on the one hand, to deal with the external jigsaw of regulations, and on the other, to deal with how people work internally. So, what are the main components of a successful AI governance programme?
Giles
You're totally right. All of those regulatory pressures, the different expectations from shareholders and customers, the spectre of litigation, all of that has put AI governance at the top of the agenda for lots of businesses. What we hear from our clients is that getting the right governance around AI has been very important for their growth story, both organically, in terms of their regular business, but also, and this is especially true for early-stage companies, to attract investment. You're seeing a lot of investors wanting to test how businesses are adopting AI to make the most of it for their growth, as well as thinking about that legal risk point. I think that's worth highlighting.
A lot of people, when they talk about AI governance, really labour the risk aspect of it. We're seeing people try to take a more balanced approach: making sure they're finding ways to scope the legal risk, but also building into their governance processes ways to maximise the value of their AI investment. That can include a legal piece, making sure you're actually crystallising some legal value in your AI developments for your business and setting yourself up for further growth. What we're tracking, really, is regulatory and similar guidance that presents different degrees of prescriptiveness around governance structures, including on topics like how involved senior management has to be, how the monitoring process runs through the organisation, how reporting lines work, and the need for AI expertise.
Within businesses that are looking to add AI to their existing offerings, we're typically seeing one person with general oversight of AI, an AI lead in the organisation, and they are typically supported by some sort of cross-functional AI steering committee, which includes senior leaders from different functions within the business, including, shock horror, legal and compliance as well.
And the sort of considerations we're seeing include: whether the AI steering committee and the AI lead should report up to the board (obviously regular reporting is going to help the board keep different stakeholders within the business accountable); whether there need to be particular links between different committees, be it cyber, risk, audit, all the usual committees you see across listed companies; and then, across group companies, whether decisions are going to be made centrally, particularly around bigger points of principle, or whether different divisions within a business will be allowed to make autonomous decisions about how they adopt AI.
Rachael
And I think it's also important to remember that people are central to the success of any governance structure. The particular challenge with AI is finding the right balance of expertise and skills, because people need to be appropriately qualified, and that expertise probably comes from a range of different disciplines, including engineers, developers and product specialists, as well as legal and compliance.
And many of the governance frameworks we're seeing have key pillars across three areas: not only legal and compliance, but also AI product development and AI deployment. Underpinning all of these cornerstones is a business AI governance structure with flexibility and adaptability, recognising that both the technology and the regulation are changing rapidly, so the risks and opportunities really are in flux. The final thing to remember is that if your business is developing or deploying AI, now really is the time to test and make sure you have the right governance structure in place.
Maurice
Very interesting, and obviously a rapidly developing area.
So, for our viewers, if you'd like to hear more on these and related issues, please do have a look at our website, www.cityandfinancial.com, where you'll find further details of the speakers and the programme. We very much hope to see you at the event on the day.
And Giles and Rachael, thank you so much for participating today, very interesting thoughts, thank you for sharing them.
Rachael
Thank you for having us.
Giles
Thank you.