The 3rd Annual AI Regulation Summit

The 3rd Annual AI Regulation Summit:

Shaping the Future of AI in the UK and beyond

A Practical Guide for Corporates and Financial Institutions

Interview with Tim Hickman

 “One of the biggest challenges that companies are facing right now is how to ensure that they understand who is their regulator and for which piece of technology.”

Maurice

Welcome back to C&F Talks. This afternoon, I'm going to be speaking to Tim Hickman, a Partner at White & Case, who's going to be speaking at our third annual AI Regulation Summit in London on the 11th of November.

Tim is Head of the Data, Privacy and Cybersecurity Practice at White & Case, and advises on all aspects of data protection law and AI regulation. He's closely familiar with the EU AI Act and similar laws around the world, and led the publication of the firm's EU AI Act handbook and global AI regulatory tracker, AI Watch. And as the focus of this interview is going to be around the EU AI Act, Tim is perfectly positioned to answer the questions today.

Tim, welcome.

Tim

Thanks for having me.

Maurice

Let's turn to our first question.

New obligations of the EU AI Act and compliance challenges for providers

The implementation of the EU AI Act is being phased in, as we all know. In the most recent phase, August 2025, new obligations came in for providers of general-purpose AI models. What were these in summary, and what are the compliance challenges that they pose for such providers?

Tim

Sure. So, the EU AI Act is a very complicated piece of legislation. Even as someone who has spent most of my career dealing with EU regulatory laws, in the form of Directive 95/46 and then the GDPR, this legislation is an order of magnitude more complex, and arguably less clear, than those laws.

The primary provisions for understanding the order in which things come into effect are Articles 111 and 113, which work in concert to bring various bits of this legislation into effect in stages. As you rightly say, many of the rules for general-purpose AI models came into effect in August this year, and the, if you like, parallel rules for AI systems, which are separate, come into effect in August 2026. However, there are delays for both of those, depending on the facts.

So, for any general-purpose AI model that was already on the EU market before the 2nd of August this year, enforcement doesn't actually start until the 2nd of August 2027. And likewise, in slightly less than 12 months' time, high-risk AI systems that were placed on the market before the 2nd of August 2026 may potentially never face enforcement, provided they're not materially changed after that date.

So you have this oddity whereby the law applies depending on when your general-purpose AI model or AI system was placed on the market, and then also depending on whether your AI system has a certain risk designation, what are called high-risk AI systems, in which case you have a large number of compliance obligations, versus lower levels of risk, where you have substantially fewer obligations.

Maurice

So, a pretty complex situation for providers.

AI Office and Board: what will this mean in practice?

Turning to, if you like, the organisation and the implementation: the AI Office and the AI Board also became operational in August this year, and they're supposed to take a central role in the implementation and enforcement of the Act. What do you think this will mean in practice, and how will they relate to national authorities?

Tim

Yes, indeed. So, one of the biggest challenges that companies are facing right now is how to ensure that they understand who is their regulator, and for which piece of technology. And as you might imagine, that involves a sort of multi-stage analysis: the first thing I have to do is figure out which pieces of my tech stack are regulated AI under this law, and then, for those pieces of technology, are they general-purpose AI models or are they AI systems?

And then from there I need to work out: if they're AI systems, are they high-risk? If they're general-purpose AI models, do they have systemic risk? And only once I've answered all of those questions can I figure out who my regulator is, whether I'm dealing with a national market surveillance authority or with the AI Office.
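The staged analysis Tim describes can be sketched as a simple decision function. This is purely illustrative: the function name, arguments, and labels below are invented for the sketch, and the real classification turns on detailed legal tests, not boolean flags.

```python
# Hypothetical sketch of the multi-stage analysis described above.
# The labels and flags are illustrative, not a legal test.

def classify_regulator(is_regulated_ai, is_gpai,
                       has_systemic_risk=False, is_high_risk=False):
    """Walk the stages: (1) is this regulated AI at all, (2) GPAI model
    vs AI system, (3) risk tier, (4) which regulator follows."""
    if not is_regulated_ai:
        return "out of scope"
    if is_gpai:
        # GPAI models are overseen centrally by the AI Office.
        tier = "GPAI with systemic risk" if has_systemic_risk else "GPAI"
        return f"{tier}: AI Office"
    tier = "high-risk AI system" if is_high_risk else "lower-risk AI system"
    return f"{tier}: national market surveillance authorities"

print(classify_regulator(True, True, has_systemic_risk=True))
# GPAI with systemic risk: AI Office
print(classify_regulator(True, False, is_high_risk=True))
# high-risk AI system: national market surveillance authorities
```

The point of the sketch is only that the regulator is the *last* thing you learn: every earlier question has to be answered first.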

The bigger challenge then is if I am using both my own general-purpose AI model and my own AI system built on top, so if I own the whole stack, as it were, then in certain circumstances I'm entitled to argue that in fact my only regulator is the AI Office, and unless the AI Office declines to regulate, I should not then have to interact with national market surveillance authorities.

And that, of course, is very attractive for many companies, because unlike, say, the GDPR, the AI Act does not have an equivalent one-stop-shop structure where a single national regulator acts as your regulatory body. You instead face effective regulation across all 27 EU member states plus the three EEA non-EU states, which means you're potentially facing 30 jurisdictions' worth of regulators, and within those jurisdictions many of the regulatory functions are broken up between multiple bodies. Ireland, I think at my last count, had about eight different regulators for different parts of this law.

And so there is obviously a huge amount of motivation for businesses to try, as far as they can, to end up in a situation where the AI Office is either their sole regulator or at least their primary regulator. But I think for many businesses, the reality is they're going to face parallel enforcement and regulation across multiple EU member states.

Maurice

Yeah, I can see that everybody would be trying to get the most simple, straightforward arrangement that they can.

EU AI governance provisions impact

In August, there was also the entry into application of several of the EU AI Act's most critical foundational governance provisions. These provisions form the basis of the Act's institutional and enforcement infrastructure. What will their impact be on businesses, particularly those involved in general-purpose AI?

Tim

Yeah, so the challenges are substantial if you're a provider of a general-purpose AI model, because, although there should be, in practice there is no bright-line test you can apply to figure out whether or not you are caught by the rules I mentioned earlier regarding systemic risk.

So, in very simple terms, if you have a general-purpose AI model, you have to analyse whether or not it has what's called systemic risk, but the difficulty is that the tests are somewhat malleable. For example, one of them is based on the total level of compute power used to create the model, but it isn't totally clear whether that includes just the compute used to produce that particular version of the model, or how far back in time you have to look in cumulative terms, and whether materially different but foundational past versions of that model are also brought into the equation.
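For context, the Act sets a presumption that a general-purpose AI model has systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations. The per-version figures below are invented purely to illustrate the ambiguity Tim describes: the same model can fall on opposite sides of the threshold depending on how far back "cumulative" reaches.

```python
# Illustrative only: the 1e25 FLOP presumption threshold comes from
# the AI Act; the per-version training figures below are invented.
THRESHOLD_FLOPS = 1e25

training_runs = {
    "v1 (foundation run)": 6e24,
    "v2 (continued pre-training)": 3e24,
    "v3 (latest version)": 2e24,
}

latest_only = training_runs["v3 (latest version)"]
cumulative = sum(training_runs.values())

# Depending on how "cumulative" is read, the same model is
# presumed to have systemic risk, or not.
print(latest_only >= THRESHOLD_FLOPS)   # False: 2e24 < 1e25
print(cumulative >= THRESHOLD_FLOPS)    # True: 1.1e25 >= 1e25
```

Nothing in this sketch resolves the legal question; it only shows why the scope of the compute calculation matters so much in practice.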

So, there's a lot of doubt as to exactly how that calculation is going to be done. And even if you can make a solid argument that you're not subject to any of the rules on GPAI models with systemic risk, there is another provision that says the European Commission can simply decide ex officio.

So, it is up to them to decide that your model has systemic risk, at which point all of the compliance obligations set out in Article 55 suddenly apply to you, because the Commission has decided that they apply to you. And while in theory there are some guardrails, in practice it's very, very hard to see how businesses are going to successfully argue against that kind of designation by the Commission. So you have this big challenge for a lot of businesses right now, which is trying to set themselves up in the best position: either to take the view that their models are subject to a lower level of regulation, or to take the view that the Commission should not be able to designate them, and to set up their arguments in advance.

Is the EU AI Act flexible enough?

Maurice

Given this complexity, I do wonder, and many other people have said the same thing, whether the EU AI Act is simply not flexible enough to cope with the very rapid development of AI technology. Do you agree with that?

Tim

I think you can make the case that it actually goes the other way: the Act has been drafted with so much flexibility that there are portions of it that are not fit for purpose. Take, for example, the definition of AI system in Article 3, which has been drafted so broadly that I think I can make a cogent case that the autocorrect feature in Microsoft Word falls within the definition of an AI system. I don't think that's what the framers of the legislation were trying to regulate, but at the same time, they've included so much flexibility to allow this law to be adapted to future technologies that they've inadvertently captured vast swathes of technology that we wouldn't ordinarily think of as AI in the first place.

And I think this is going to be a huge headache for a lot of businesses, because if I'm an in-house lawyer at a company, I now need to look at all the technologies that the business currently has in place and anything that it's planning to roll out, and I need to ask the question, well, does this fall within that definition? Is this a regulated technology under this law? And the problem is the test is very vague.

And the AI Office, or at least the Commission, has put out a lot of guidance on what exactly is meant by the term AI system, and to my mind it raises more questions than it answers. And so, for a lot of people at the cutting edge of this, trying to advise their boards on how to deal with this law, the biggest challenge is working out which bits of their technology are actually within scope, and then how to assess risks from there.

Are providers back-designing their systems for optimal treatment?

Maurice

A final question, Tim. Are providers trying to back-design their systems to get the optimal treatment under the EU AI Act, do you think?

Tim

I think there's a fair argument that some of them are. The lack of clarity that I referred to earlier is a big driver here. You can either get comfortable with the idea that your technology falls outside the systemic-risk and high-risk categories I mentioned earlier, and obviously you want to argue that your technology is not prohibited under Article 5. But if you can't do that, if you have to deal with a lot of these substantive regulatory obligations, then yes, a lot of companies are looking at the question of whether they can design the technology in such a way as to reduce the risk, or at least the burden, that they would face. The answer often is no, and then the next logical question for a lot of in-house lawyers becomes: if I can't do either of those things, can I push the risk onto contractual counterparties?

And we've been seeing this heavily for the last 18 months: businesses of all kinds saying, in effect, "I provide an AI system, or a general-purpose AI model; if you use it, you are responsible for ensuring that everything you do is lawful, and you will indemnify me against any unlawful use of my technology," and so on. Then, on the flip side, you have the service recipient saying, "No, no, you indemnify me, because you're the one providing the technology and I have no control over it." You get the back-and-forth battle of the forms that we see in lots of other areas of contract, but especially here, because there is so much doubt as to how this law is going to be enforced and where the liability lies, and businesses are therefore very keen to ensure, as far as possible, that they've pushed all the risk off themselves.

Of course, one of the big uncertainties here is whether or not those protections are actually worth the paper they're written on. If, for example, an AI startup providing your service agrees to indemnify you, the value of that indemnity is obviously limited to essentially the market capitalisation of that startup, and if it turns out the startup has been giving indemnities to everyone, then if that indemnity is eventually called upon, the likelihood is you will be at the back of a very long queue for what might end up being no money at all.

So, whilst it makes sense that people are pushing for contractual protections, those protections may actually not be effective in practice.

Maurice

Absolutely fascinating. Well, we've run out of time, Tim.

But for our viewers, if you'd like to hear more on this very interesting, highly topical subject, do come along to the third annual AI Regulation Summit in London on the 11th of November. Further information is available at our website, www.cityandfinancial.com. 

Tim, looking forward to seeing you in about a month's time.

Tim

Likewise.
