
The 3rd Annual AI Regulation Summit:

Shaping the Future of AI in the UK and beyond

A Practical Guide for Corporates and Financial Institutions

Interview with Oliver Patel

“Ultimately AI governance is change management”

Maurice

Hello, everybody. Welcome to C&F Talks. Today, it's my great pleasure to be interviewing Oliver Patel, who is Head of Enterprise AI Governance at AstraZeneca. Oliver is going to be speaking at our AI Regulation Summit in London on the 11th of November.

So, at AstraZeneca, Oliver is Head of the Enterprise AI Governance area, as I mentioned. AstraZeneca is, of course, a very well-known global pharma company with 90,000 or so employees. Oliver is responsible for implementing AZ's Global AI Governance and Regulatory Compliance Framework. He leads the AI Governance area and is an educator and trainer in this field.

He's also an AI faculty member at the IAPP, a training instructor for the AI Governance Professional Course, and the creator of the Enterprise AI Governance Newsletter. Oliver is going to be publishing a book, Fundamentals of AI Governance, in early 2026. Oliver, welcome.

Oliver

Well, thank you so much for the kind introduction. I'm really looking forward to the interview today and the event in a few weeks' time.

Maurice

Fantastic.

The journey of writing the Fundamentals of AI Governance

So, by way of background, can you tell us what the journey was that led you to be writing this particular book?

Oliver

Great question. So, I think that I can trace my journey into this space back to when I studied philosophy during my Master's, which perhaps isn't the answer that people expected. But I studied philosophy with a focus on moral philosophy and ethics and AI and public policy back in, I think, 2014.

And back then it was a slightly more niche and esoteric field. There were researchers and scholars looking at this question of AI ethics, but it was the first time I was really inspired by this whole topic and these big questions around the relationship between humans and AI. Fast forward a few years to the 2020s, and all of a sudden it's a career path that one can take, working on AI governance and AI policy, and even AI ethics in some cases.

So, I've always been interested in and passionate about this subject area, but I spent most of my career working on issues relating to data policy and data ethics, first in academia, then in the UK government. And then when AI governance really started to become a real profession, obviously, that's what I went to do. So, I've spent the past few years really focusing on this topic of enterprise AI governance.

Specifically, as you mentioned, I've been leading AstraZeneca's global program in this space, working on the frontline of AI governance and really helping to figure out how a large multinational organization can effectively manage the risks of AI, comply with global AI regulations and seize the opportunities of AI, because it is obviously such a hugely important technology for all businesses and society more generally. I've complemented that frontline work with training and education: as you said, I've done a lot of work with the IAPP and other organizations to train the waves of AI governance professionals as the profession has matured, and I share a lot of information, content and knowledge on LinkedIn and in my newsletter.

Putting all of that together has led to a point where I almost feel like I had to write a book, because I've been working on this stuff for a few years. And with my background in philosophy and academia, I've been writing for many years anyway, so when it came to putting all my thoughts and ideas together in this way, I didn't really have a choice at this point.

Maurice

Yes, I remember that academic background and that very relevant experience. And as you say, with the sudden surge of AI across all our worlds, you were in the right place at the right time.

The structure of the book

So, turning to the book itself, how have you structured this? How have you approached the subject matter?

Oliver

Yeah, so the book is called Fundamentals of AI Governance. It's a very practical, practitioner-focused book. So, it's not sort of a deep exploration of AI ethics, despite my philosophy background.

It is very much a kind of manual, an operating system, for AI governance professionals, especially those working in organizational contexts, whether in the private sector, the public sector or the third sector. So, the way it's structured is that it first focuses on the risks of AI that are most relevant for organizations, such as risks relating to data, security, privacy and copyright. It goes through ten core risk themes.

Then it focuses on global AI laws, regulations, and policies around the world with deep dives on the US, on China, on the UK, Australia, and various other jurisdictions, as well as an overview of international frameworks and standards and important activities in bodies like the UN and the OECD, just to kind of set the scene for how countries around the world have approached this over the past few years, really since sort of 2019 in particular, when the OECD released the AI principles.

And then there's a kind of deep dive on the EU AI Act, because obviously from a regulatory perspective, that's the most comprehensive AI law that companies have to grapple with today. And that's really the only sort of comprehensive, wide-ranging AI law that has become a core part of the compliance focus of companies around the world. So, there's a deep dive on the EU AI Act, but very practical and very much focused on what organizations should be doing to be in a good position to comply with it.

And then finally, the meat of the book, and really my core framework and, I guess, the most original contribution: the final part is on implementing enterprise AI governance. It goes through the 10 key pillars for AI governance, which is what an AI governance framework should consist of, and then it talks about the how. How do you actually do this work? How do you lead an effective AI governance team? How do you secure buy-in? How should you approach this work?

So really my hope is for this to become a sort of definitive guide for all of the professionals working in this space, as well as students and others seeking to enter it, because it's a hugely complex field that requires understanding across many different disciplines, from technology to law to ethics and social issues. And people are drowning in content; there's an overload of content and updates on all of these topics, so really the book aims to bring it all together in a way that's accessible and actionable.

Maurice

Yeah, well, a very timely publication.

The biggest risk regarding AI governance

So, turning to those three sections: risk, international regulation and implementation. In terms of risk, which is the biggest risk that you identified?

Oliver

It's a great question, and I guess the cop-out answer is that it depends on the organisation, on what its business line is and what its products and services are. But one risk theme that really applies across the board to all enterprises is how the widespread use of AI, and the way that AI has been democratised, is posing and amplifying various data-related risks. This is not just about data protection and privacy, although that's a core part of it. Given the ease with which anyone can use AI tools, and can input or upload any data or information they have access to, this is hugely exposing organisations to risks around data loss, or losing control over their most business-critical, sensitive or confidential data.

And these risks become amplified with the increasing use of agentic systems and highly personalised co-pilots and agents, which can surface information to people by mining through a company's whole data estate, information they would never have found without AI. And really the big concern here is that excessive use of publicly available AI tools poses a risk to companies, because once that data is shared, there's not really much any company can do to control what happens to it. It could be used to train the AI company's models; it could be accessed by malicious actors or competitors.

So really these concerns around data, how it's used, how it's shared and what AI tools it's uploaded to, are key. And the best thing organisations can do, apart from obvious things like training their employees and raising awareness, is to provide access to best-in-class AI tools and capabilities internally, to reduce the incentive for anyone to ever go and use something that's publicly available.

Maurice

Yeah, and I can see that. Although I suppose even if you do that, you can't stop them using other AI tools in their own time.

The risk of no unified approach to AI regulation

But turning to the regulation part of your book: obviously, as you mentioned, there's the EU AI Act, which one might hold up as a template for various countries around the world. But there's no unified approach to AI regulation. Do you think there's a risk in that, and that regulatory arbitrage may be a danger?

Oliver

Well, you're absolutely right to flag that there's no unified approach; it's a highly divergent landscape. So, we've got the EU, which has gone very broad and very deep with a law that primarily seeks to protect citizens and individuals from the potential risks and harms of AI.

Then we have the US which, although there is no federal AI law, and of course the current administration is against that, has a huge number of AI laws across the various states, at last count well over 130, especially in states like California, Colorado, Utah and New York. But these laws are very focused, addressing almost one specific thing at a time, rather than taking the kind of comprehensive approach we see in EU law. And then China actually has quite strict and stringent generative AI-focused laws, which we'll be discussing at the summit.

And these laws are very different to the EU's because they're primarily concerned with restricting and controlling the way in which information is accessed and shared through public generative AI applications to ensure that the information ecosystem is not compromised in China by those generative AI tools. So, you've got three sort of main powers with very different approaches to AI regulation. And then, although other countries have passed AI laws, such as Japan and South Korea, none of these can really be compared to the stringency of the EU AI Act.

So, when we talk about this concept of the Brussels effect, which we speak about a lot in other areas, such as privacy and data protection, we're not seeing a wave of countries adopting laws that are similar to or inspired by the EU AI Act. So, in that sense, we're not really seeing a Brussels effect.

However, multinational corporations and companies that operate across jurisdictions, especially when that includes the EU, often model their global framework on the EU AI Act, because it's highly ineffective and inefficient to have different AI governance policies in different countries, and far more complex than just following the highest standard and implementing it worldwide. So, we often see companies taking that approach and then building in mechanisms to enable them to take advantage of more flexible laws in jurisdictions that don't have the EU AI Act or a similar approach.

Maurice

It will be interesting to see how that fragmented scenario may change over time.

Biggest challenge of implementation

Finally, because of our time limit, Oliver, in terms of implementation, what's the biggest challenge there?

Oliver

The biggest challenge is always the people side of things. So, you can have the best policies and processes and ideas about what your AI governance framework should consist of, how it should be rolled out, how it should be adopted. But the theory is one thing and actually implementing that in practice requires a huge amount of influencing, persuading, stakeholder management, change management, because ultimately AI governance is change management.

So, you can have the best policies and processes in the world, but they're worthless if you're not bringing the organization on the journey with you and getting buy-in and support from the right people, at the right level of seniority, at the right time. The challenge with AI governance is that it comes up almost directly against the broader push to rapidly adopt and use AI. Many companies now see AI as integral to their future competitiveness as an organization.

Many companies, probably rightly, think that if they don't get ahead of this thing, they're going to miss out in the long term, and that will give them bigger problems than the risks and compliance issues the AI governance folk are talking about. But I think that's a false dichotomy, because ultimately you're not going to be successful in scaling AI if you don't think about the risks and regulations. You might go quicker initially, but something's going to come back to haunt you and undermine your whole program. So really, the people side of it is most important: winning the argument and driving change by effectively persuading, influencing and building relationships across the organization.

But ultimately, you have to get to a place where AI governance is seen as an enabler of innovation and not a blocker, because otherwise you're always going to get pushback. And you can only do that by taking a balanced approach. So, it's not easy, but how to approach the change management side of it and the stakeholder side of it, and how to actually get the balance right between risk and compliance on the one hand and value and innovation on the other, is a core part of what I focus on in the book.

Maurice

It sounds like a tremendously useful book, particularly with that practical, practitioner's approach, as you say.

Now, for our viewers, if you'd like to hear more from Oliver and our other speakers at the AI Regulation Summit in London on the 11th of November, do visit our website, www.cityandfinancial.com. I'm sure further details about Oliver's book, Fundamentals of AI Governance, will be available at the conference.

So, Oliver, looking forward to seeing you in November. Thank you so much for joining us today.

Oliver

Thank you, Maurice. See you soon.


Please note that this event has now finished. To purchase the recording, please email: bookings@cityandfinancial.com. The conference fee includes 28-day access to video recordings, excluding any sessions that may have been held under the Chatham House Rule or off the record.

SPONSORs & Media partner

Premier Sponsor

Sponsor

Logo for Advai

Sponsor

Logo for PA

Sponsor

Logo for White & Case

media partner

Logo for City AM
City & Financial Global Ltd is a protected trademark.
Copyright ©
 

Terms and Conditions | Privacy and Cookies

QUICK LINKS

Agenda

Speakers

Begin Registration

Contact Us

CONNECT WITH CITY & FINANCIAL

#AIRegulation

WHEN IS THE EVENT