Episode Transcript
[00:00:14] Speaker A: Hi, everyone. Welcome to the Ethicast. I'm your host, Bill Coffin. Today is Global Ethics Day, an event created by the Carnegie Council for Ethics and International Affairs. The theme for this year's Global Ethics Day is Ethics Re-Envisioned, which captures the state of incredible transformation our world is undergoing and the significant ethical crises, challenges, and opportunities that present themselves to leaders every day. And few things are changing the landscape in the field of business ethics more than artificial intelligence, which isn't just transforming the everyday realities of E&C teams. It's prompting leaders to wonder how they can prepare for their own next big moment of organizational transformation, how to reconcile AI's own ethical concerns with the power it has to advance business integrity, and what responsible AI might look like a year or two from now. With us today to share her insights on this is Ethisphere's Chief Strategy Officer, Erica Salmon Byrne. Erica, welcome back to the Ethicast, and happy Global Ethics Day.
[00:01:09] Speaker B: Happy Global Ethics Day, Bill. It's my pleasure to be here to talk about a topic that we are hearing about constantly from our community.
[00:01:19] Speaker A: It is, and it's a topic that has been sort of the meta theme, I think, for this whole year. And every time you think that maybe it's going to hit a critical mass and tail off, it really doesn't, because this is one of the most robustly evolving meta topics I've seen in this space in a very long time. And it never stops being interesting. So I'm glad we have this chance to talk about it today.
The first question I have for you is this. Over the last two years, we have seen an extraordinary leap forward in the E&C profession's relationship with artificial intelligence. Initially, much of the effort was about building proper governance for a swiftly evolving technology and its associated use cases. Now, your answer to all of that has been interesting. It's always been: relax, everyone, you've got this.
I love that answer, and I would love it if you could explain it a little more, and perhaps get into how E&C leaders might view the value they bring to the next AI-level transformation that will suddenly make everything feel upside down all over again.
[00:02:15] Speaker B: Yeah, yeah, Bill.
So you're right. My response to a lot of the conversation on this topic has been, relax, you know how to do this. This is a risk. You know how to evaluate risk. You know how to craft policies, you know how to talk to employees, you know how to do this.
The one caveat: I still believe that, and I believe it firmly. The skill set that E&C teams have positions them beautifully to address this emerging risk, just as it has positioned them to address emerging tariff risk and emerging fraud risk, all of the risks that ethics and compliance teams have managed over the course of the last couple of decades, which at their heart are about people making good choices and the tools that they use.
We know how to do this. We just have to take a deep breath and apply those same principles and processes to this new technology.
I did, however, see a story come across my email, yesterday or today, about a survey that Law360 was promoting: 57% of the surveyed companies said they were deploying AI without being certain of proper governance.
So while I would say we could have this, I have to caveat that and say I'm not sure we do have this, because teams need to be moving quickly.
Employees are getting a lot of pressure from their managers to be more efficient, to use these kinds of tools. We've seen this in organizations of all shapes and sizes. And the regulatory landscape continues to evolve. There is litigation happening right now that will govern the parameters of generative AI, and additional regulation around generative AI is coming. So it's a rapidly evolving landscape. Employees are using these tools, personally and professionally. We have the skills to have this, but I'm not sure we actually do have it yet.
[00:04:24] Speaker A: Yeah. Well, as a technology, AI is not without its own ethical challenges, from the legality of how large language models have been developed, to the pretty astonishing environmental costs of their operation, to the rollout of potentially problematic AI avatars such as Tilly Norwood. How would you recommend leaders reconcile any persisting ethical concerns about the technology itself against competing professional expectations to use it both as a competitive differentiator and as a force multiplier for business ethics and compliance?
[00:04:55] Speaker B: Yeah. Bill, when you put this question in the invite for today, I spent a fair amount of time thinking about it and trying to find parallels in history we could take some lessons from. So I dove into some of the writings of the Industrial Revolution, which was another period of very rapid technological evolution, and the questions about what it was doing to workers, to safety, to all of these kinds of things. I don't know the last time you picked up a copy of The Jungle, but there are a lot of interesting parallels in the writing on that particular topic. And I think my advice to a company that is thinking about this seriously would be to go back to a values and stakeholder conversation, because there are absolutely interests that have to be balanced. You mentioned Tilly Norwood. I would also point to some of the litigation we are seeing around ChatGPT and other agents being used by children without necessarily the right kind of guardrails in place to control that usage, in a handful of circumstances with extraordinarily tragic results. There's a lot we don't know about what this technology is ultimately going to be able to do.
And there are lessons we can pull out of the writings of the Industrial Revolution. The first is to think really seriously about what you are using this tool for in the first place.
[00:06:25] Speaker A: Right.
[00:06:26] Speaker B: What is the point of the tool?
Is the point of the tool to replace your employees? Well, that's one conversation from an ethical perspective: how are you going to make those decisions, think about that process, and make sure you are satisfying stakeholders? Because on the one hand, you've got a situation where you can use these tools to be more efficient, to raise your EBITDA margins and all of that sort of thing. But at the same time, I would challenge that assumption a little bit, because unless we all become inured to the lack of quality in some AI-generated content, we are still going to have people who are dissatisfied with what is coming out, and it requires a human brain to filter it. So you've got that efficiency-versus-quality balance to work through. Are you actually producing something that a person is going to want to engage with? Or is your bot producing something that is then going to be screened and dealt with by another bot, so we just have bots talking to bots? And there are, by the way, some pieces of research out there saying that as much as 50% of what's on the Internet right now is just bots talking to bots. That's not getting anybody anywhere in terms of moving the ball forward. So think about your stakeholder analysis. Who are your key internal stakeholders? Who are your key external stakeholders? Do you actually know what matters to them, or are you making assumptions about what they're going to want?
Really, as a leader in particular, press your people. Do we actually understand what our external stakeholders are asking for? Do we actually understand what our internal stakeholders are asking for? Are we gathering those insights through point-in-time data, where we may be dealing with a bot giving us the insight as opposed to an actual person? Have we asked a customer what they want, as opposed to making assumptions about what they might want? So really lean into that stakeholder analysis. And for those of you out there listening to Bill and me who are BELLA members, we have a great resource in the BELLA member hub: a stakeholder mapping exercise. It gives you an opportunity to ask, who are my internal stakeholders? Who are my external stakeholders? What matters to them? And how do I know, right? How do I actually know that this is what matters to them? When was the last time I asked?
So that's one key piece of the analysis. The other key piece is your values as a business. Right? Why do you get up in the morning? What's your purpose for existing? What's the thing that distinguishes you from every other business like you out there?
Those are going to be the things that allow you to steer the AI ship without it going fully off the rails, right? Those will be the things on either side of the track, to continue to abuse this metaphor, that allow you to make decisions quickly. Because that's the other thing: this is all changing very rapidly. It's going to allow you to think about what you mean by AI. What is the thing I am talking about when I say AI? Because look, we've had algorithms for a bazillion years. Under some definitions, spell check is AI. So there are lots of different ways we can talk and think about AI. Are we talking about large language models? Are we talking about agentic AI?
Those are some of the questions you really have to ask yourself when you are thinking about these pieces. And one last thing, Bill: I'm really glad you brought up the environmental piece, because I live in Colorado, and there was a piece in my local news this morning titled "The data centers want our water."
There are going to be trade-offs between the need for power, the need for water, and the way these data centers are being run environmentally that will run smack up against other stakeholder interests. And we're going to have to figure out how to balance all of those for any of this to be sustainable in the long run.
[00:10:29] Speaker A: Well, this brings us to the notion of responsibility. My last question for you is this: in the E&C space, we hear the term responsible AI used to describe AI governance, but also, as you mentioned, as a general operating principle. Often it boils down to making sure that humans are in the decision-making loop and that the technology is being used to advance human interests rather than purely mechanical ones. So what values do you think the concept of responsible AI currently reflects, especially through an E&C lens? And more importantly, what might responsible AI reflect a year or two from now?
[00:11:07] Speaker B: Yeah, the future-casting is getting harder and harder, isn't it, Bill?
[00:11:11] Speaker A: It really is.
[00:11:13] Speaker B: This is all just changing so quickly, and I don't know that we know exactly what it's going to look like. There was another piece in my local news feed just last week: what happens when you hit a driverless car?
That's a question we are going to have to grapple with. There are increasingly situations where cars with human drivers and cars being driven autonomously coexist on the streets we have built across our planet, and we're going to have to figure out how to navigate some of those pieces. So I think those are questions we are going to see emerge and be addressed over the course of the next couple of years. I think the models, the large language models, are going to get better; that's my hope, at least, that the output will be something human beings want to engage with. And I think we are going to have to grapple with the environmental costs of running some of these large data centers, because the cost is substantial enough that companies are going to have to figure out a better way of doing it. There's simply no alternative, because in some of the parts of the world where these things are being built, there literally is not enough water. So we're going to have to figure this out.
On the responsible use of AI, though, from a policy perspective, there's an opportunity for ethics and compliance teams to have a piece of the conversation that is really structured around the why. Why are we using these tools? What are the expectations for them? What are the expectations for the people who are going to be using them? How are we going to make sure that the output is consistent with our values and is something our stakeholders are going to want to engage with? How are we going to watch for situations where these tools are being abused? How are we going to protect our intellectual property and our confidential information?
People who listen to you and me a lot, Bill, know that I am very fond of citing the statistic that as much as 80% of the average company's value today is tied up in intangible assets. And intangible assets are at enormous risk from AI, because the reach and the speed of AI models are so substantial, and the difficulty of getting something back out of a model once it's in is so substantial.
So really think about what goes in, how we protect what we need to protect when it goes in, and what it looks like to build the cost of these models into our budget for the future. None of these models are being given away, or will be given away, for free in the long term. So at what point does it turn into a situation where you have become reliant on a particular AI model to do your work and the company decides to triple the price? Are you going to pass that on? Are you going to absorb it? How are you going to handle those kinds of situations? So I think there are a lot of very important questions we need to be asking ourselves that are appropriate and timely for a Global Ethics Day. And again, for those of you who haven't read The Jungle in a while, I recommend you go pick it up.
[00:14:33] Speaker A: Well, Erica, if you'll permit me to make a philosophical observation: the Bill Coffin who has a relationship with AI today is radically different from the Bill Coffin who existed two years ago. I remember when these conversations first started dominating the landscape. I was adamantly opposed to its use. I had deep, deep philosophical concerns, and some of them, frankly, have been borne out. I am a named party in one of the class action lawsuits because I had a novel of mine illegally scraped by a large language model. They just took it.
Now, to be fair, I'm a little flattered that my work was considered steal-worthy, I suppose.
But it makes me think. I remember in those early days I had a very public conversation with a colleague of mine where I was dead set against the use of AI, and he was very pragmatic about it. He said, basically: be that as it may, the truth is that the people we are here to help are going to be using this technology, and we can't help them if we don't understand it ourselves. We owe it to ourselves and to them to understand this technology and not just keep it at arm's length forever. I've thought about that a lot over the last two years, as I realized just how right he was and just how perhaps too conservative I had been back then, and as I have watched people in the E&C space undergo similar transformations and develop some amazing best practices in how they make use of this technology.
I have to say that this is a very exciting time for the ethics space, because I've never seen more E&C leaders basically have to play it by ear to keep up with the extraordinary pace of this technology's evolution.
And it's been very encouraging to see how well they play by ear, because the ethical muscle memory they developed over many years of tradecraft, over many years of program building and assessment, is really paying off now, when they don't have the time to spend a year figuring out what they want. They have to do it now, and they're really rising to the task. I've often said to you that ethics is morality in motion, right? It's that notion that you have principles, you have realities; how do you reconcile the two?
And the degree to which the E&C space is rising to that challenge is one of the most life-affirming things I've seen come out of the whole AI phenomenon. So I just wanted to share that, because I think it's an important observation worth making.
[00:17:00] Speaker B: Yeah. One of Ethisphere's values, Bill, includes the language of having a growth mindset. One of our values is "be relentlessly curious," and that includes having a growth mindset. And I think that's a great example of it, because you were, as you said, adamantly opposed two years ago; I was in that conversation.
And you have definitely expanded your worldview on it. But the one thing I will say, which was part of that conversation, is that you made the point that you had, at that point, as editor in chief of Ethisphere Magazine, gotten AI submissions, right? Submissions that were clearly generated by artificial intelligence, that were not noted as generated by artificial intelligence, and that were garbage.
So that is the piece you have not been proven wrong on yet. A lot of what is getting generated by AI has legitimately earned the title of "workslop."
And that's why I just keep reminding everybody I talk to about this particular topic: you have to remember that, at the end of the day, you are producing something for a consumer of that thing.
And if you don't center that stakeholder in everything else you're doing, you are going to stumble. You may not fall down completely, but you are going to stumble. So if you're using these tools, and you're getting pressure to use these tools because of EBITDA margins, finance, investor expectations, and whatever else it is, remember to keep the ultimate consumer of your product centered in the discussion, so that the decisions you're making around AI are the right decisions for the ultimate beneficiary of the work you do.
[00:18:51] Speaker A: Well, Erica, thank you as always for sharing your thoughts and insights with us. And as Global Ethics Day challenges us to re-envision ethics, I know that's something you are doing every single day. So once again, thanks for your time.
[00:19:00] Speaker B: Oh, Bill, it's my pleasure. And happy Global Ethics Day to everybody out there listening to Bill and me. And if you didn't know it was a holiday until you listened to Bill and me, well, you do now.
[00:19:07] Speaker A: To learn more about Global Ethics Day, please visit carnegiecouncil.org. And if you're interested in learning more about the impact of AI on ethics and compliance, check out Ethisphere's latest report, "AI and Ethics and Compliance: Risk to Manage, Tool to Leverage," which features an overview of AI regulatory trends, AI governance best practices, and compelling use cases from E&C leaders at Cargill, Palo Alto Networks, and Verisk. To read the report for free, visit ethisphere.com. Also, registration is now open for the 2026 Global Ethics Summit, which will take place in Atlanta, Georgia, next March 29th through the 31st. Lock in your place to be part of the most impactful discussion of best practices, success stories, innovations, and insights driving the business ethics conversation around the globe. To reserve your spot to attend in person or virtually, visit attendges.com. Well, thanks for joining us. We hope you've enjoyed the show. For new episodes each week, be sure to follow Ethisphere on LinkedIn, and please subscribe to us on YouTube, Apple Podcasts, and Spotify. Every like, comment, and share helps to make the world a better place by advancing business integrity, and we truly appreciate your support. That's all for now, but until next time, remember: strong ethics is good business. Bye now.