Episode Transcript
[00:00:00] Speaker A: Hi, everyone. You've got questions and we've got answers. Welcome to another BELA Asks episode of the Ethicast.
The Business Ethics Leadership Alliance, or BELA, is a global ethics and compliance community that provides exclusive access to helpful data, benchmarking, events, and other resources to advance your ethics and compliance program.
One of these resources is BELA's concierge service, in which members can submit any question at all regarding ethics and compliance, and our internal experts will provide an answer, plus helpful resources with more information.
Now, while we invite everyone watching and listening to join BELA, we also know that there's no competition in compliance.
That's why we're using this program to thematically respond to high-level questions from the BELA community for the benefit of E&C teams everywhere. Joining us once again to answer those questions is BELA Chair Erica Salmon Byrne. Erica, as always, it's great to have you on the program.
[00:01:05] Speaker B: Thank you so much for having me back, Bill. I am excited that our BELA members keep asking questions, so I have a reason to come back.
[00:01:11] Speaker A: Well, our next question is rather topical. It's an AI question, and it reads: How should companies message around AI use and policy?
[00:01:19] Speaker B: Yep, and it is very timely, Bill, because AI is everywhere right now. Every conference agenda is filled with panels about AI, everybody's writing about AI, and there are all sorts of questions out there about AI.
And look, I get it: it's a very exciting emerging technology.
But the takeaway for me is, I would just like to remind our audience how to do this, right? It's a people, process, control analysis. And so when it comes to messaging in particular, you want to use that message, messenger, modality formula that you and I have talked about over the years, Bill, to say: okay, what is it exactly that I want my people to understand?
Is it don't put confidential company information into the model? Is it don't use the enterprise license to the model to plan your personal vacation? Is it make sure that you proof what comes out of it before you just go ahead and pop it into a client email? Right? Identify what the risk is and who presents you with that particular issue, and then target your communications accordingly. Yes, the potential risks and opportunities around AI feel scary and overwhelming, and sort of existential. But when you really break it down, what you're talking to your employees about is how to use a new tool.
And we as compliance professionals know how to talk about how to use a new tool. So what a rollout looks like is basically exactly that: who is using it, what's the risk they're presenting me with, and then how do I get the right message in front of my employees to make sure that I've mitigated that risk?
[00:03:09] Speaker A: Now, you are the expert of the show. Far be it from me to jump in with my own thoughts here, but would you mind if I offered a personal observation?
[00:03:15] Speaker B: Please do. Yes.
[00:03:17] Speaker A: So on the professional front, one of my great challenges with AI is a, what I call a failure of imagination. I often struggle to understand how can I make use of this tool. Not so much how do I make use of the tool. And from a risk perspective, I also look at it from the other way, which is I think people suffer from an imagination, a failure of imagination. For how could I inadvertently misuse this tool? Or how could I use this tool in a way that actually violates established protocols and programs and codes? And I say that because just today in personal conversation, I was speaking with somebody who relayed to me a absolutely blood curdling story. Something that happened in real time that was, you know, somebody dumping tons of confidential information into, into a, an agent in what would be clear a violation of any kind of AI governance policy.
And as they mentioned it, I think this person never even thought about it. That was the kind of conversation it was. They couldn't even imagine that they might be violating a rule, a norm, or a best practice. So these things we talk about are not hypothetical. They are very, very real and very, very practical. And all the things you said are really concrete things people should be implementing.
[00:04:27] Speaker B: Yeah, and Bill, hopefully as an organization you've thought about what the risk and the use cases for AI look like for your company. The stat that I would highlight for you is a piece of research that came out six or seven months ago now, so the number is probably even higher today. A little north of 60% of employees who responded to the survey said that they regularly use AI to automate pieces of their workflows, and just 30% of that group even knew what their company's AI use policy was. And so, not to be a bit of a broken record here, Bill, but you and I talk about this all the time: it's people, right? It's people being people. Do your people understand what they can and can't do when it comes to these new tools? Do they know how to use them effectively, efficiently, and in a way that doesn't present the organization with risk?
And how do you know? Right. So again, take a breath. You know how to do this: people, process, control; message, messenger, modality. Use the formula to identify who's going to present you with these risks. Your message to your engineering team is going to need to be different than your message to your sales team. Target it, tailor it, grab people's attention so that they understand the issue they're dealing with, and then go from there.
[00:05:56] Speaker A: Well, Erica, thank you so much for weighing in on this. I sure appreciate it. And it's a very, very timely answer to a very timely question.
[00:06:02] Speaker B: My pleasure, Bill. And to all those BELA members out there, keep the questions coming, okay? So I have a chance to come back and talk to my friend Bill.
[00:06:09] Speaker A: To learn more about BELA, visit ethisphere.com/bela to request guest access to the member resource hub and to speak with the BELA engagement director. If you have a question that you would like answered on this program, contact the BELA concierge service and we'll get to work on it for you.
This has been another BELA Asks episode of the Ethicast. Thanks so much for joining us. We hope you've enjoyed the show. If you haven't already, please like and subscribe on YouTube, Apple Podcasts, and Spotify, and be sure to tell a colleague about us as well. It really helps the program. That's all for now, but until next time, remember: strong ethics is good business.