Interview with A&O Shearman
“You are fundamentally reliant on your people as your first and last line of defence to use these systems responsibly”
Maurice
Hello everybody and welcome to another edition of C&F Talks. It's a great pleasure to have with me today three Partners from A&O Shearman, all three of whom are AI specialists. So, I have with me Jane Finlayson-Brown, Alex Shandro and Catharina Glugla. Welcome to the three of you.
Jane
Thank you.
Alex
Thank you very much.
Catharina
Thank you.
Maurice
Great to have you with us. Now of course all three of you will be speaking at the AI Regulation Summit, which is being held in London on the 11th of November. But in advance of that, there are a few questions I would like to ask you.
An update on global AI regulation
So, first of all, just to give the sort of macro context as it were, how is AI regulation developing across the world? And is there a Brussels effect to this? And I suppose in addition, are we seeing convergence or divergence? And does that mean that we need a global regulator?
Jane
Catharina, do you want to lead off on that one? Because you bring the EU perspective from Germany.
Catharina
Yeah, I'm very happy to. Thank you, Jane. So, the headline is definitely divergence.
It's a very fragmented landscape, and it's mainly driven by geopolitics. So let me start with some geopolitics and the drivers behind that fragmentation, before Alex gives some details on the legal landscape.
So given that driver, geopolitics, we mainly see three tiers into which you can allocate the different jurisdictions and global regions. So, we have the US and China as global AI superpowers along the AI value chain. But the two of them have very different strategic focuses.
So, the US is focused on R&D, so research and development, and also on winning the race to AGI, so artificial general intelligence. Whereas we see China focusing more on development and deployment, on getting AI into industry and households. So really the deployment of AI.
And then the second tier of geopolitical drivers is the AI specialists. So, for example, Taiwan and Israel, they are focusing on specific parts of the AI value chain. And the third one is a class of other countries, pioneers along the AI value chain, that are balancing, on the one hand, the desire for AI sovereignty with, on the other, a de facto reliance on US and Chinese tech.
And here, let me talk a little bit about the EU. So, the EU opted, and Alex is going to talk about the legal landscape a little bit more, but the EU opted for strict regulation, horizontal, so not sector-specific, but with a risk-based, use-case approach. And then at the same time, in September 2024, the Draghi report was published, and they realized that the EU needs to do some homework on being more competitive, particularly in the digital sector and AI.
So, the European Commission published the Competitiveness Compass, and we're currently seeing massive efforts to become more competitive. And that involves massive investments from the European Commission, or the European Union, into digital infrastructure, to be less dependent on, particularly, the US and China. And that involves both data centres and cloud capacity, really along the AI value chain.
And then the other factor on the value chain is they really want to have development and deployment of AI in the EU. So, they're trying to simplify the legal landscape. And that even touches laws like the GDPR and other digital EU laws.
There are discussions about whether or not the EU AI Act should be simplified or postponed. Those discussions are still ongoing, with a massive push from industry in the EU, and the legislator is also debating it.
Now, in that context, will there be a Brussels effect of the AI Act? As I said, the EU opted for a very strict, use-case-based AI Act. The Brussels effect means that, given the importance of the EU market and economy, other countries would follow the example of the AI Act, as we saw with the GDPR. It's very unlikely. No one really expects there to be a Brussels effect.
And you'll see that once Alex goes into more detail on what we actually see on the legislation side. And I think there are mainly two aspects to why there won't be a Brussels effect. One, AI is too important politically and economically to make any compromises there. So therefore, as I said, there is a lot of geopolitics driving the legal landscape. And then also, there is some international effect of the EU AI Act: where AI output is used in the EU, it is subject to the AI Act along the AI value chain. But it doesn't have the same broad extraterritorial reach that the GDPR did. So, there's also a legal reason for it, but the main one is definitely the importance of AI for society and the economy. So maybe Alex, you can put some flesh on that.
Alex
Thanks, Catharina.
I mean, look, I think the fragmentation is inevitable and here to stay, because every country is legislating to protect its strategic objectives within the global ecosystem. But a caveat to that: it is nuanced. And I think the EU is a great example of that. The reaction among AI communities to the AI Act has been quite divided. We're obviously yet to see how enforcement plays out in practice.
But other industries, automotive, for example, have shown that early focus on safety and horizontal legislation supporting that can actually help to hasten adoption and spur innovation. So actually, I now read many commentators referring to the EU approach to regulation as its superpower. And so, it's very notable in that context that China in the last few months has really pivoted to focusing on governance and safety and ethics as part of its big push that Catharina mentioned into deployment.
So, they are looking at governance and regulation as part of their push to get AI into industry and into households. So, you're seeing some rapprochement, I guess, in other areas globally towards the EU approach. And the same goes for the US.
So, there's this raft of new bills that have recently been signed into law in California, following in the wake of legislation in Texas, Colorado, Illinois, New York and many others. And it's notable that some of those bills actually target equivalent areas to the AI Act, around transparency, for example. So, the narrative around the red tape being in the EU, I think, is definitely nuanced.
And we're seeing in the US that the federal approach of not legislating at federal level is increasingly problematic. The existing patchwork fragmentation that we're seeing even within the US, which has been amplified recently, cannot be good for the frontier AI labs that the federal government has openly stated it wants to create this sort of relaxed framework for. So, the fragmentation is literally everywhere, flowing from those strategic drivers that Catharina mentions.
And I haven't touched on other areas outside the EU, US and China. I mean, in APAC more broadly, we've seen legislation in Japan, South Korea, Malaysia, Vietnam, Thailand and Singapore as well, and every approach is quite different, depending on whether a country falls into one of those categories: are they a pioneer? Are they focused on chips? Are they focused on deployment? Are they focused on governance? It is all very esoteric and individual at the moment. And I guess, Jane, in terms of where the UK sits in that picture, the same is true for them.
Jane
So, I think the UK is trying to be super pragmatic as usual, trying to steer a middle course between the EU's perhaps, you know, very conservative, cautious, careful approach to regulation and ensuring that it's got its ducks in a row, and, equally, the slightly more libertarian instincts in the US, notwithstanding the counterbalance of the state legislation that we're seeing. So, the UK is very much keen on growth, as we know; the Chancellor is under a lot of pressure to ensure that the economy does not have any, you know, sort of fetters, and under pressure to remove red tape, let alone impose more.
So I think very much we'll see increasing pressure on regulators to be innovative, to think about the way in which they can encourage, in a safe environment, further developments with AI, working closely with industry in that regard. I think it's unlikely that we're going to see anything like an EU AI Act; I would sort of discount that really. It will be much more along the lines of, you know, safety, governance, those sorts of things, and enabling. And it was really interesting to see that, on the GDPR specifically, the UK GDPR, the ICO is going to be tasked with writing an AI code to help us interpret the GDPR for the AI world.
And to try to look at some of these concepts afresh, because, as you know, there is quite a lot of tension between the GDPR and AI concepts generally. So it will be interesting to see how that develops.
Maurice
Yeah, so it's an interesting, if somewhat splintered, world that you're all describing.
What effective governance looks like
But you've also mentioned in your answers that the flip side of regulation is governance. So effective AI governance is seen as essential to limiting the risks of AI to consumers and society as a whole. What does effective AI governance look like? And which regulatory bodies oversee the governance of AI in the UK?
Jane
Great question. Alex, is that one you'd like to pick up?
Alex
Sure, thank you, Jane. It's a very big question. And I think that the starting point on any sort of governance program around AI is that there is no one size fits all.
So, the best AI governance will have been developed for the enterprise's specific needs, specific strategy, and within its existing processes. And the reason for that is the cornerstone of good AI governance is actually looking at governance as a tool to foster adoption.
So, the risk landscape around AI is so vast. And in-house legal teams, compliance teams and governance teams are faced with an almost insurmountable compliance burden when you look at that risk landscape and apply it to however many use cases there may be for a given global business. And with that sort of democratization of AI, it's in every aspect of every business, with rapidly increasing volumes of use cases. It is simply not possible to map every single risk to every single use case in a way that is credible and proportionate and pragmatic.
So, the governance has to be really laser focused on the specific needs and risks for the given business. And there are many, sort of, ways you can do that. It's a huge part of what Catharina, Jane and I advise clients on.
And I guess a few, sort of, tenets of good governance would be firstly flexibility. So, every use case presents a different set of risks. So, you need a governance program that is flexible enough to identify the key risks for a given use case and to do so quickly.
So, as well as being flexible, it should be staged. So, you want to triage and assess use cases really quickly so you can just eliminate the non-starters straight away. You can perhaps sort of wave through the use cases that fall within a defined risk tolerance and then focus efforts on what you have left in the middle.
The other sort of key aspect to this of course is multi-stakeholder collaboration. So, if you have risks in privacy, in IP, in infosec, in regulation, in third-party risks or contracts, cybersecurity, any number of specific risks for a given use case like HR recruitment or coding, you need a lot of different people's inputs to assess those risks properly. So, the governance needs to bring those people in.
And then finally, it needs to be people-centric. So that risk landscape is incredibly vast and none of those risks can be eliminated altogether. So, you are fundamentally reliant on your people as your first and last line of defence to use these systems responsibly, to look at how they're using inputs and outputs.
So, the way we look at governance is very different to traditional legal policy. We look at governance as needing to be very sticky. So, your people need to have fully digested, assimilated and remembered what the governance is there for, what the risks are and what their responsibilities are to those ends. So that feeds all the way through your rules on use, your training programs, your policies and everything else.
Maurice
So, did anybody else want to add to that, or should we move on?
Jane
I think Alex has answered it beautifully actually, so happy to move on.
Maurice
Yeah, well I think you covered that very well in the sense that you've tied in governance and specific risks.
Where the balance of legal responsibility should lie
So, taking that forward a little bit if we may, clearly there are two main parties involved in this. So how does the balance of legal responsibility, and perhaps of governance, fall between the providers of AI technologies and the corporates that integrate them into their client-facing systems? And are there any significant legal precedents on this yet?
Jane
So, Catharina, I wonder if you would be well placed to answer this, particularly because we obviously had the liability regime that was in mind at the time of the EU AI Act, which has subsequently been dropped. I wonder if you might cover some of those issues.
Catharina
Yeah, I'm really happy to. Thank you, Jane.
So indeed, we did discuss in the EU the AI Liability Directive, so specific AI liability rules that would then have had to be implemented into member state laws. So not a regulation, like the AI Act or the GDPR, but a directive. Now, in the course of the push for more competitiveness, to be seen as more welcoming and less restrictive, the EU has dropped that AI Liability Directive. So even in the EU, there won't be an AI-specific liability regime.
And that brings us back to the themes that we've already discussed in the questions so far, and that is: there's a fragmented global legal regime, so it's different everywhere. There aren't specific AI liability rules available yet. And then, coming back to, or repeating, the theme that Alex just elaborated on for AI governance: it's AI use case specific.
So, there are very different areas of legal liability if you look at AI, and they will evolve as the technology evolves. As, for example, we see with AI agents, and Alex, maybe once I'm done, you can elaborate a little bit more on what we discuss with clients specifically on the liability aspects of AI agents. But that gives an example: the technology evolves, the use cases evolve, so the liability questions also evolve.
There is no universal global liability regime, as I said. And then there are also features of AI that challenge the existing liability regimes that we have in place globally. So, the existing liability regimes typically assume a human actor to be involved.
That's not always the case, particularly with AI agents. Then, AI is a black box by nature, which is always a challenge when it comes to allocating liability and responsibility. It's also always a multi-party ecosystem.
So, we have developers, we have deployers, we have data providers. So, it's rarely one party that you're looking at, which makes it a bit more difficult to allocate liability as well. So, these are the challenges that AI provides for liability and responsibility.
Now, nonetheless, what do we currently see in practice? How is that being approached? Mainly, what we see in practice is that some of the risks of AI can be backed off contractually.
And that's definitely the primary mechanism: allocating risk contractually. So, the contractual negotiation, whenever you're applying, developing or buying in AI, is a major factor to focus on. And there again, with the technology evolving and the use cases evolving, the market standard will also continue to evolve and continue to change.
Now, what we currently see, let's say as a market standard or from experience when we do this for clients, is this. Again, understanding the AI value chain and the different relationships between the players, there's a massive difference between negotiating contractual liability directly with the AI foundation model provider and negotiating with an application developer. The application developer is typically squeezed in the middle between the foundation model provider and the customers, who in this case are often our clients. So, they are likely to be more reluctant to assume contractual risks. So, it's a tougher negotiation there.
And then also we see that most AI vendors contract for caps on financial exposure. They also exclude liability for specific errors, for biases or hallucinations. So it's really important to focus on those contractual negotiations. Maybe Alex, do you want to add on the AI agent-specific aspects, which are just a fascinating angle on liability?
Alex
Thanks. I mean, just really quickly, and only to say that agents have, I think, thrown these questions into much sharper relief. The liability allocation is inherently more challenging when an AI system has been programmed and designed and configured and deployed to do a task where, by definition, you don't have a human in the loop at every stage of that particular task.
So, the probability of bad outcomes is potentially greater, and that could be bad outcomes even where the governance around the tool is robust and the tool has been designed safely and compliantly, subject to robust guardrails, and all the rest of it. So, the legal frameworks we have to deal with that bad outcome, many people have observed, are not fit for purpose, whether it's a negligence claim or a product liability issue. And whether our existing laws around agents, which of course vary widely from jurisdiction to jurisdiction, are fit for purpose, these are questions that we're grappling with. As to your question, are there legal precedents on this yet? Absolutely not. It's a really fascinating, evolving area.
Maurice
Yeah-
Jane
Sorry, Maurice. I was just going to observe.
It is fascinating because obviously, for many years, we've had a very well-established doctrine of vicarious liability, where you're responsible for the acts or omissions of your employees, including things that they do that are untoward and against policy, if they're doing it in the course of their employment duties and so forth. There's a lot of case law around that, and there are limits to it. But as was interestingly found in the case where there was a data breach at Morrisons, there was a very famous Supreme Court judgment on that, which looked at the limits of where vicarious liability lies.
But nonetheless, if you have someone, an agent, that goes rogue, we are testing all these parallel doctrines that we've grown up with as lawyers for many years, as to how they work in that new environment. So, it is completely fascinating for nerds like ourselves. We're thrilled by all the nooks and crannies here that we can investigate.
Maurice
Yeah. I mean, a fascinating area, the whole area of agentic AI, which we'll pick up on, if we may, in our last question and perhaps draw out some of those points further. But just turning to another subject for a moment, if we may.
Privacy and IP issues surrounding AI
There's been a lot of discussion about privacy and IP issues in relation to AI, and in machine learning as well. Do you think these are likely to be resolved soon?
Jane
So, I remember last year, Maurice, I did a speech at the very same brilliant summit that you run, where I outlined the ways in which I thought the GDPR was not really fit for purpose in this new world. Not that it's completely new, but agentic, well, generative AI has only relatively recently been deployed at mass scale. But you look at things like the accuracy principle, the minimization principle, the transparency principle: all of these things are quite challenging. You know, the massive amount of data that you have as against using only the minimum; transparency, telling everybody what you're doing and how you're processing, when you've got a bit of a black box; and so forth.
There are any number of these issues, and they only become, I think, more complex in the agentic world, when, as Alex says, there is less oversight from the human in the loop and more control for the code, the very brilliant AI, to sort of make up its own mind, as it were. So, these things are difficult.
I do think our regulators and our governments are looking at it extremely actively with an open mind and thinking about how they can make things easier. And, you know, how we can grapple with some of these topics. And there is a willingness to think about, you know, sort of different solutions.
So, as Alex says, I think we're probably only part of the way to seeing a fully thought-out regulatory approach. And we shouldn't, I guess, beat ourselves up about that as a society, because it is new, it is different. But we're not there yet.
We're definitely, you know, far from having a complete regulatory picture as to how things will look. I think the good news is that people are aware of it, they're listening, and they're thinking about how they can adapt existing models. I mean, even on the IP side, as well, Alex, there's quite a lot of thought being given to the fact that some of the copyright issues that surround the development of AI models are challenging as well.
We've seen that in the UK, but you may wish to comment further on that.
Alex
Thanks, Jane. Yes, look, Maurice, we are a very, very long way from having clarity, if you like, on those key IP questions. And there are lots of them. The narrative focuses mainly on copyright, in two dimensions.
One is the infringement dimension. So, there have been some cases now on the training side of these AI models. So, in training these models, are the developers infringing third-party rights? And I can say a little bit about that. There's been as yet no focus, at least in the courts, on the output side.
So, when deployers are using these systems to generate content, there is an inherent risk that that content has reproduced training data that the model has been exposed to. I expect that wave of litigation will come. But the focus, understandably, for now is on the training side.
And then the other key copyright question is around ownership. So, there is, again, to use the word fragmentation, a very divergent set of standards and laws around whether AI-generated content is capable of having copyright subsisting in it. Most regimes around the world require some standard of human authorship for copyright to arise. And we are seeing some developments there. So, China has had some very interesting cases where courts have found copyright in AI-generated works. The US has very firmly said, absolutely not; even if you've prompted the model 1,000 times, it's not enough.
The UK does have a regime for owning computer-generated works, which is likely to be repealed. That seems to be the direction of travel; it just, to use Jane's expression, is not fit for purpose. So again, there is a huge absence of clarity, which is, I think, important for deployers when they're using these tools to create software or marketing or materials that would traditionally be protected by copyright. They're having to rethink their internal steps around protecting those materials. So again, a long way to run on that side of it.
Going back, as I mentioned I would, to the infringement question for training. Some landmark cases are in the works, and we have some early decisions in the US which, I think, on the whole are ominous for the developers. So, there's been a lot of talk about the settlement that Anthropic has reached with a class of authors around copyright infringement. That was specifically around pirated works.
So, Anthropic downloading and storing pirated copies of books. It didn't directly address the fair use question, which is the focus for most developers in their ripostes to accusations or allegations of copyright infringement. On the other hand, in the Meta case, which hasn't been as well publicized, the judge did say very clearly that training AI models on copyright works could never be fair use because of the impact on the content creators, the market dilution impact.
That argument was not run in that case, but the judge said that, had it been run, Meta would have lost. So, I think we could expect a torrent of class actions in the US based on that argument, which will take their time to get through the courts, as they do. But I think there is a long way for that to run as well.
Maurice
Hmm. It sounds as though the law and regulation are running hard just to try and keep up with the current state of AI, and AI itself is a rapidly developing technology.
Is it naive to imagine that regulation will be able to keep up with AI’s rapid development?
We touched on agentic AI as well. Is it naive to imagine that any form of regulation will be able to keep up with its rapid development?
Jane
I certainly think it's a challenge, as we've seen, but then equally, you know, there is the very well-established practice of giving guidance and interpreting. And I mentioned earlier the AI code that the ICO will be writing for the UK GDPR, at least. So, I think there's a lot of incentive for regulators to keep on their toes and to be continually thinking about how they can refine their thoughts on this, to ensure that we do have a regime that, you know, protects against harm but equally permits safe and secure development.
So, I do think it's absolutely top of mind, Maurice, for regulators and policymakers to enable that to happen. I am, sort of, relatively optimistic, you know, given we have got very bright minds in the regulators and the government, that we will achieve that. But we're just not quite there yet.
Maurice
No, quite. I mean, finally, because I know that we’re probably running out of time on this.
Unique regulatory challenges of agentic AI
We've referred to agentic AI in various of the answers that you've put forward, and clearly it does, from what you were saying earlier, pose some quite unique challenges in the sense of identifying who, or what, is liable. Do you think that progress is being made on this, because we're on the cusp of agentic AI being the next stage? Do you think we're ready, from a legal and regulatory point of view, for its wide deployment?
Jane
Catharina, would you like to pick this one up? It's perhaps an evolution of your previous comments on liability.
Catharina
Yeah, very happy to, Jane.
So, I think, as in all of the AI developments we've seen, and as Jane said, we are at a stage where we have to rapidly adapt our existing legal approaches to new technology. Which, for nerds like us, as Jane said, makes it a really, really interesting time. It's really challenging legal frameworks.
And yes, agentic AI is hard to fit, or even harder to fit, let's state it like that, into the existing legal frameworks. So, for example, on data protection it's really hard to define who's the controller, who's the processor, who's really responsible for the processing of personal data that may happen by or through agentic AI. And that is challenging the structures of the law.
Again, as with liability regimes, where we typically assume a human is involved, and that's not the case with agentic AI. So, it's really challenging the existing framework. But as we said previously, it's also a chance for the legal framework to develop, for regulators and politicians to find answers to that, for us as lawyers to find answers to that, and to actually apply the legal frameworks that we have in place, rather than just looking at a lot of precedents, really doing the work and applying the legal frameworks. So, it's challenging, but there are existing frameworks that can handle it.
Maurice
Very good. That’s been a very, very interesting discussion. Thank you so much, Jane, Alex and Catharina for that discussion.
Which I think our viewers will very much have enjoyed as well. And if you would like to hear more on this fascinating subject, please do come along to the conference which I mentioned before, the AI Regulation Summit, being held in London on the 11th of November. Further information is available on our website, www.cityandfinancial.com
Thank you very much indeed for that and look forward to seeing you all at the conference next month.
Jane
Pleasure. Thank you very much, Maurice.
Alex
Likewise.
Catharina
Thank you.