Learn How Strong AI Governance Improves AI Itself

Episode 161 March 05, 2025 00:11:37
Ethicast

Hosted By

Bill Coffin

Show Notes

In this episode, Dr. Eva-Marie Muller-Stuler, who leads the Data & AI practice for EY Middle East/North Africa, shares how organizations can maximize their value with AI by building a strong framework around the tech to determine its use case, who is responsible for it, and how its inherent risks will be managed.

1:28: Ethics and compliance's role in a world where AI is everywhere

4:35: How to build robust AI governance amid uncertain AI compliance regulations

7:01: Building an AI framework that maximizes value and minimizes risk

To explore EY’s global insights on how you can build confidence in AI, drive exponential value throughout your organization and deliver positive human impact, visit www.EY.ai. And to learn more about Dr. Muller-Stuler’s work on AI, visit www.dreva.ai.

For free resources on AI Governance, please visit the Ethisphere Resource Center at www.ethisphere.com/resources.

Also, be sure to check out our related BELA Asks episode, How Do I Develop a Good AI Policy? - https://youtu.be/6OlohQK-a9M?si=2c8g1JWVAhMZRata


Episode Transcript

[00:00:00] Speaker A: Hi everyone. Today we'll discuss how building strong AI governance can create much better outcomes for your organization. I'm your host, Bill Coffin, and this is the Ethicast. Artificial intelligence utterly captured the attention of the ethics and compliance space throughout 2024, and 2025 is shaping up to be no different as organizations contend with operationalizing this transformative technology into their daily practice. But for all of AI's manifest capabilities and strengths, especially in the realm of third-party risk management and supply chain due diligence, it also brings potentially substantial risks with it. Strong AI governance, then, is the key to making the most of AI. But that's easier said than done. The technology is moving so quickly that organizations struggle to keep pace. Joining us today is Dr. Eva-Marie Muller-Stuler, who leads the Data & AI practice for EY Middle East/North Africa, where she is responsible for the development and implementation of complex data science and AI projects and transformations. Previously, Dr. Muller-Stuler was CTO of Artificial Intelligence and Chief Data Scientist for the Middle East and North Africa at IBM. Dr. Muller-Stuler, thank you very much for joining us. [00:01:27] Speaker B: Thank you for inviting me, Bill. [00:01:28] Speaker A: AI is an amazing technology, but it has significant risks attached, from bias and hallucination to its role in eroding privacy and distorting our very appetite for the truth. These are all indications that the technology's capabilities are evolving much faster than our frameworks for creating guardrails around its use. What are your thoughts on that? And specifically, what role do you see for ethics and compliance officers to play in a world where AI is everywhere? [00:01:57] Speaker B: I think, to be honest, what we saw, especially in the last few years, is that the topic of AI took off at a speed that nobody saw coming.
And it's really coming into every field of work, every field of our private life. And we're heading toward a future where there is not much space without AI, or some AI solution influencing it or judging us in some way or another. And that really builds up the importance of having an AI ethics officer, because the question for companies will more and more be: what are we doing? Are we doing it right, and how are we doing it? We see that the risk of bias, of privacy protection, of hallucinations through large language models is becoming so high for organizations that it can actually put the whole company at risk. So the role is really to act as the moral compass for the company and say: how do we ensure that the AI we're developing, or the AI that we're using, is actually aligned with our values? How do we mitigate the risk? How do we build a culture of trust, so that we can trust our AI solutions, we trust our people, and we also trust that when things go wrong, they are reported very, very early, so we have space to react? With that comes the importance of the ethics and compliance officer also being in charge of making sure there is the right education for people, so that they know what is allowed with an AI solution, what is not allowed, and where our own frameworks stand. And it's building a culture of trust and collaboration. I think one of the most important things when things go wrong is that, normally, the people in the developing team or the procurement team and so on are too worried about raising their concerns. So what we really want is zero retaliation for somebody saying, "I don't trust it. I don't think we can build this in a good way." And this culture of trust is, I think, one of the most important things that a compliance officer and ethics officer has to ensure in their company: that people feel secure in raising ethical concerns.
[00:04:35] Speaker A: Regulatory frameworks often help to ensure the safe development of an emerging technology without getting in the way of innovation. However, the regulatory framework for AI is quite scattered, both in terms of jurisdiction and in terms of regulatory approach. What advice do you have for organizations, especially multinational or global organizations, that are seeking to create a robust governance and compliance framework around AI when the compliance expectations themselves are so uncertain? [00:05:02] Speaker B: I always struggle a little bit with a phrase like that, that rules and regulations hinder innovation. Aviation would not have been such a successful area if we had not had strong safety guidelines. Nobody would go on a plane without the trust that it is safe to use. And it is similar with AI: we need people to have trust and build on it. Because the landscape is so scattered, I think it's very important that ethics and compliance officers stay on top of the laws and regulations, so they know what is applicable to them and what is not, but also that they're 100% aware of their organizational values and what they stand for on top of that. The rules and regulations are one piece, but if you actually stand for sustainability, for equal chances and equality, or any other sustainable development goals, make sure this is also embedded in your ethical values. The other thing is, because the field is changing so quickly, it's very important to stay agile and say, okay, we have a new problem, we have a new solution. With large language models becoming so popular, for example, the risks of hallucination, bias, and safety have taken on completely new dimensions, and we have to ask how we adjust that for our organization. And the last thing I can only advise everybody: get involved. The laws are still being written.
The laws are not fixed in stone forever. If you want to be part of a society that is ethical, join the discussion and make sure you're part of shaping the values that we hopefully can one day align more and more across different demographics and different industries. [00:07:01] Speaker A: What advice do you have for ethics and compliance teams on building a framework around their organization's use of AI that maximizes value and minimizes risk? [00:07:11] Speaker B: First of all, I think it's important that you take a holistic approach across the organization. You don't just focus on one area. You don't just say, okay, I'm writing an ethics and compliance framework for procurement. That is good, but actually we have AI solutions everywhere, so it needs to be holistic. The first step is always: where are we now? What AI solutions do you have? Are they ethical? Are they biased? How are they implemented, and how do they shape us as an organization? What is the risk? So really sit down and understand the current state you are in. Then, based on that, I would start developing clear policies, from data collection all the way to AI solutions and data representation, because even if everything is right, we still have to make sure the way the outcomes are presented does not mislead you. So the framework really needs to go end to end. Then it's important to implement oversight mechanisms: how do we make sure we know what is happening in our organization, how do we know when things change, how is it reported within the organization, and who is responsible for what? I think the point about responsibility is very important, because it needs to become a culture of responsibility. Safety is not just one person's responsibility, and ethical AI is not just one person's responsibility.
Everybody in the organization will have to be responsible for reporting it, for saying something if they see something with the AI they're using. It's the same as with money laundering laws and other regulations: it's not enough if one person is compliant with the law; everybody in the organization has to be compliant. And the last thing is to monitor and audit, and make sure that when things change, you're on top of it. What is your monitoring and audit framework? What are your guidelines, and how will your ethics change? Have a plan for the next stage and the stage after that, so it isn't superseded too quickly. There will always be new solutions coming up, and there will also be new things coming in through third parties. So it might not just be your own development but also come from the outside, and this open-mindedness about what is going on in the future, this future readiness, is absolutely important for these teams. They cannot have a closed mindset. It has to be a constant culture, deeply ingrained in their personalities and in their hearts: the involvement, the growth mindset, of making sure that you stay ahead of it going forward. [00:10:02] Speaker A: Well, Dr. Muller-Stuler, this has been a really fascinating conversation, and I am very grateful to you for coming onto the show and sharing your insights with us today on this important topic. Thank you very much for joining us. [00:10:12] Speaker B: Thank you so much, Bill. Thank you for inviting me. [00:10:16] Speaker A: To explore EY's global insights on how you can build confidence in AI, drive exponential value throughout your organization, and deliver positive human impact, visit www.EY.ai. And to learn more about Dr. Muller-Stuler's work on AI, visit www.dreva.ai. For free resources on AI governance, please visit the Ethisphere Resource Center at www.ethisphere.com/resources. Also, be sure to check out our related BELA Asks episode, How Do I Develop a Good AI Policy?
The link for that is in the show notes. If you would like to appear as a guest on the Ethicast to share a best practice, a success story, or your own proof point around how business integrity builds value, we would love to hear from you. To drop us a line, visit ethisphere.com/ethicast. I'm Bill Coffin, and this has been the Ethicast. For more episodes, please visit the Ethisphere YouTube channel at youtube.com/ethisphere. And if this is your first time enjoying the show, please make sure to like and subscribe on YouTube, Apple Podcasts, and Spotify. Thank you so much for joining us. And until next time, remember: strong ethics is good business.

Other Episodes

Episode 101

July 25, 2024 00:07:33
BELA Asks: How Do I Craft a Good Non-Retaliation Policy?

In our BELA Asks series, we address questions posed by members of the Business Ethics Leadership Alliance (BELA) about wider issues facing the ethics...

Episode 107

August 23, 2024 00:07:05
BELA Asks: What Are Good Ethics & Compliance KPIs?

One of the best benefits of being a member of the Business Ethics Leadership Alliance (BELA) is that if you have any questions at...

Episode 113

September 11, 2024 00:25:09
Optimizing Your Supply Chain Due Diligence

In part 3 of our special 4-part series on supply chain due diligence, join Patrick Neyts, CEO of VECTRA International; Rob Bailes, Director of...
