Interview with Stephan Geering
“humans are no longer in control of some of the decision-making”
Maurice
Hello everybody, welcome to the latest edition of C&F Talks. Today, I have with me Stephan Geering, who's a Global Privacy Officer and Trustworthy AI Lead at Anthology. And Stephan's going to be speaking at our AI Regulation Summit, which is being held in London on the 27th of January next year. Stephan, welcome.
Stephan
Thanks for having me.
Maurice
It's very good to have you with us.
Stephan’s role at Anthology
First of all, Stephan, can you just tell us a little bit about your role at Anthology? A Global Privacy Officer I can understand, but Trustworthy AI Lead, what's the definition around that, in terms of the lens through which you look at these issues?
Stephan
Yeah, I'm probably in a similar position to many of my privacy colleagues who started in a role as a data protection officer or global privacy officer. Then, as ChatGPT entered the focus of all the attention and companies understood that they need to implement responsible AI, what we call trustworthy AI programs, some of the privacy teams inherited or volunteered to take on that part of the work as well.
And while privacy is obviously one aspect of responsible AI, there's a much broader range of risks that need to be addressed as part of that. So in my role I own global compliance with privacy, but I'm also making sure that we as a company are responsible in how we use AI across the globe, not just from a privacy perspective but much more broadly.
Maurice
Fantastic, that's very helpful to understand.
The risks around the EU AI Act
And obviously AI regulation runs from the pro-innovation approach to the safety-first approach, and anywhere in between on that sliding scale. When you look at the EU AI Act, what are the risks you see there from your perspective, in terms of privacy, in terms of trustworthiness?
Stephan
Yeah, the EU AI Act is interesting because it's arguably, I mean the Chinese may disagree a little bit, the first comprehensive approach to trying to regulate AI. And the interesting piece about it, and it's a good reminder for people like myself coming from the privacy space, is that while privacy is obviously a really important component, the EU AI Act is actually based on product safety legislation.
That's why you have the conformity assessments, the quality management system, all of that. And that's where people like myself had to get up to speed quickly on product safety principles and learn about all these other risks. In terms of risk, some are risks we have known about in AI for quite a while, typically fairness and bias: obviously, if you train a system on training data that has some bias in it, that bias may trickle down into the system.
And we all know the examples: if you ask ChatGPT or some other tool to draw you a picture of a doctor and a nurse, you can guess who is going to be the man and who is going to be the woman in that picture. Then of course we have things like accuracy; that's newer with generative AI, where you need to focus on hallucinations, the issues around being accurate. But there are also security aspects, such as prompt injection, where people try to circumvent certain guardrails that the LLM providers have implemented and either get access to training data or get the LLM to produce dangerous outputs.
And then, of course, there's the broader risk that there's not enough governance and transparency around this, and that humans are no longer in control of some of the decision-making. That's why the EU AI Act takes this comprehensive approach. You can go into a lot of detail, but it has four tiers, from prohibited, unacceptable-risk systems to high-risk systems, which include some HR and education use cases.
Then there's the limited-risk tier, which is mainly around transparency, and the minimal-risk tier. So it's a risk-based approach that touches on all of these risks and, depending on how risky a certain use case is, imposes different requirements and obligations.
Maurice
So, as you say, that's a very comprehensive model that we have in the EU.
AI approach in the US
In the US, perhaps towards the other end of that spectrum that we were discussing, there's much more of a pro-innovation approach. And I guess you would see that then as coming with a fair degree of risk.
Stephan
Yes, it's interesting to compare the US and European approaches.
Similar to privacy, the US has more of a patchwork approach at the moment. That's almost of necessity, because, as many of you know, it's very hard to pass any legislation in Congress at the federal level. So a lot has been done with the help of executive orders, which can be issued by the President.
We have this really comprehensive AI executive order, issued, I think, a bit more than a year ago, that requires a lot of US federal agencies to come up with programs and define requirements for the various areas they're responsible for. And then at the state level, states have started to implement laws, and these are typically very sector- and risk-specific.
New York, for example, has a specific law regarding the use of AI in hiring, while the Colorado AI Act has taken a not identical but similar risk-based approach to the EU; it's focused on high-risk systems as well. So there are lots of different laws at the state level.
So at the federal, state, and sometimes even city level there are different obligations; it's much more of a patchwork. In the EU, of course, you have the very comprehensive, uniform AI Act.
New AI model in the UK
Maurice
And in the UK, I guess, we're seeing the emergence of a new model, moving on from the thinking of the previous government, and I guess that's something more of a halfway house. Do you think that's a better balance between the safety and the risk aspects?
Stephan
Yeah, I think it's a bit early to say, because we need to see all the details here. But the interesting piece is going to be this: if you're a service provider company that just provides services in the UK, then you can fully focus on that approach.
But obviously, because the EU AI Act is product safety legislation, if you as a UK company want to export anything or provide services to EU citizens, you invariably have to implement the EU AI Act to make sure you still have access to the EU single market. So I can imagine that a lot of companies in the UK, at least those that export or provide services into the EU market or want to in the future, will have to closely follow the EU AI Act approach.
Maurice
Yeah, yeah, I'm sure that that's absolutely right.
Global AI regulation: a patchwork of systems
I mean, you talk about the fragmentation in the US, but I guess, across the world, there's a patchwork of systems. What does that mean, then, from a regulatory point of view? Do providers of systems have to adhere to the highest level of regulation in order to export globally? Is that what happens, or do they simply comply at a national level? It seems fairly inefficient if you're a global AI company.
Stephan
Yeah, this is a challenge for all kinds of companies.
From my experience in the privacy space, the EU GDPR has almost become the global standard, and we as a company use it as a baseline, as lots of other companies do, so that has become a bit easier. On the AI regulation piece, we're in a much earlier phase, and it's not really clear yet whether it's going to be the EU AI Act that becomes the global standard.
Will there be another approach that dominates? Singapore, for example, has quite an interesting approach to the regulation of AI, something of a halfway house between innovation and sufficient regulation. So it's a bit early to tell. I think the EU AI Act will definitely be very influential, because it's very comprehensive, but at the same time we've also seen that it's not perfect.
One example: the whole aspect of general-purpose AI systems, or LLMs, was only introduced at the last minute, because work on the EU AI Act actually started before the big release of ChatGPT, so that wasn't even considered. So there's a danger that, while the EU AI Act may be the first, because it came so early it may miss some of the critical technical developments we'll see in the next one or two years, because things are moving so quickly.
Steps institutions should follow to ensure their vendors meet high standards of ethical AI
Maurice
Yeah, and turning to one specific risk, with your privacy and trustworthiness hat on: vendor risk, third-party risk. What's best practice there? How do institutions and companies manage that risk? They're outsourcing to vendors, so how do they ensure they're not simply unaware of the risks, and how do they manage them properly?
Stephan
Yeah, that's a good question.
Some of it is not really new, because a lot of companies use cloud-based services already, so they're already outsourcing quite a lot in terms of privacy and security controls to vendors. Most companies will have due diligence in place, through the procurement process or a separate third-party or vendor risk management program, making sure they're comfortable that the vendor meets their own standards: sending questionnaires, doing audits, reviewing SOC 2 reports, ISO certifications, whatever it is.
And I think on the AI piece, we can leverage that and build on it. One significant concern many companies have is that their data ends up in the training data of some vendor. We, like many other companies, use the generative AI API services that companies like Microsoft or AWS provide, where you basically take a cloud-based LLM off the shelf and include it in your products; but basically it's run by Microsoft and AWS with the help of OpenAI and Anthropic.
And there you really need to make sure that you're comfortable with how Microsoft, AWS, and similar companies handle your data. There are commitments on not using that data for training their own models, which is generally the case for enterprise licenses, and more broadly on how they handle it responsibly.
Maurice
Great. Well, I think we've probably run out of time there. So, for our viewers, we very much hope you'll be able to join us at the AI Regulation Summit on the 27th of January in London, where you'll hear more from Stephan and our other expert speakers. So, if you want to find more information on that, do have a look at our website, www.cityandfinancial.com.
Stephan, thank you very much for sharing those thoughts with us today.
Stephan
It's been a pleasure. Thanks for having me.