[00:00:00] Speaker A: Hi everyone. In this episode we'll look at how AI and generative AI can help you better understand and manage your supply chain risk. Welcome to another episode of the Ethicast.
This is the second episode of a two part series on how AI is transforming best practices in supply chain risk management.
In the first episode of this series, we learned how AI and generative AI turn the traditional cylinder-shaped data stream of supply chain risk issues into the shape of an hourglass, where AI-driven practices can create a pinch point in the middle that makes supply chain details far more manageable. In this episode, Craig Moss, Executive Vice President of Measurement at Ethisphere, and noted AI researcher Dave Ferrucci will discuss how to use all of that data to inform and empower better supply chain analysis and decision making. Craig is a leading expert on using management systems to improve compliance and risk management performance within companies and across supply chains. He is also a Director at the Digital Supply Chain Institute, where he developed a program to accelerate and scale digital transformation and a unique new data trading framework. Dave is an award-winning artificial intelligence researcher and keynote speaker who created IBM's Watson and led the Watson team from its inception in 2006 to its celebrated success in 2011, when Watson defeated the greatest Jeopardy players of all time. Since then, Dave has helped companies in the financial services and healthcare industries implement AI in their own operations. He is presently Managing Director of the Institute for Advanced Enterprise AI. And now, here's Craig and Dave.
[00:01:47] Speaker B: Welcome everybody to part two of the podcast where we're looking at supply chain due diligence and the application of AI and generative AI. I'm here with Dave Ferrucci who is a director at the Institute for Advanced Enterprise AI, which is a sister organization to the Digital Supply Chain Institute where I'm a director.
Dave, it's great to have you here. Anything about your background that you want to elaborate on before we dive in?
[00:02:17] Speaker C: I really appreciate the intro. Thank you.
I think the only thing I'd underscore is that for me and for what we're doing at the Institute, this is an incredibly exciting time. I think so many enterprises are excited and optimistic and at the same time challenged about how best to apply the recent and remarkable advances in AI. And I'm happy to be part of it and happy to help.
[00:02:44] Speaker B: Yeah, and I was so excited to get a chance to start to talk to you about this, because similarly there's a huge push of increased regulation and focus on supply chain due diligence. And I thought, you know, hand in hand, these two things really go well together. So in the first episode, the key thing that we went over was that Ethisphere and partner organization Vectra International had developed a supply chain due diligence maturity assessment. Think about what companies typically do as a funnel: you've got 10,000 suppliers, and they try to funnel it down to the hundred. And our supply chain due diligence maturity assessment is geared at understanding the residual risk of those 100.
What is really exciting, and what I want to talk to Dave about today, is how do you take the data that you collect in the middle and turn the funnel into an hourglass, where we start to use that data to extrapolate back out and gain real insight and actionable intelligence on the 10,000. From 10,000 to 100, back to 10,000.
So with that, Dave, obviously there's lots of buzz about AI and generative AI. Trust and transparency are critical to supply chain risk management. They're also really critical for AI and generative AI. So for the non-technical people here, how do you start to talk about or think about building trust and transparency into the results that you get?
[00:04:20] Speaker C: Trust and transparency really, in the end, come down to the data. Everything starts with the data. So understanding the data sources that are used to train the AI, their reliability, their currency, their consistency, is the essential first step when you start thinking about how do I trust what the AI is doing, and how do I make sure that I can dig in and understand where the answers and inferences are coming from.
As they say, and it's never been more true: garbage in, garbage out.
One of the unique aspects of gen AI is that it takes your data and builds these big probabilistic models about how things co-occur and interact with each other, what patterns are formed in that data, and then uses those patterns to generate answers. So it's the model, all those probabilistic connections, that is used to generate. And it's often the case that the original source of a bit of information that might end up in an answer is not actually readily available in the LLM. The LLM, the large language model used in gen AI, is a distillation of that data rather than the actual data at that point. So there's additional work that helps with this, often referred to as RAG, or retrieval-augmented generation. RAG systems combine the power of LLMs with search and document retrieval, so they can help mitigate this problem. Original sources are pinpointed, narrowing the scope of the data from which the answers are generated, and direct access to those sources is provided. You could even include a reliability score, for example, related to each source.
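The retrieval-augmented generation pattern Dave describes can be sketched in a few lines of Python. This is a toy illustration, not a production system: the keyword-overlap retriever stands in for real vector search, and all document names, reliability scores, and text are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source_id: str      # where the text came from, for citation
    reliability: float  # 0..1 trustworthiness score for the source
    text: str

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Toy keyword-overlap retriever; real systems use vector search.
    Ties are broken in favor of the more reliable source."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: (len(terms & set(d.text.lower().split())), d.reliability),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list) -> str:
    """Ground the LLM on the retrieved passages and ask it to cite them,
    so answers link back to pinpointed original sources."""
    context = "\n".join(
        f"[{d.source_id} | reliability={d.reliability:.2f}] {d.text}" for d in docs
    )
    return f"Answer using ONLY the sources below, citing source ids.\n{context}\n\nQ: {query}"

corpus = [
    Document("audit-2024-07", 0.95, "Supplier X passed its labor audit in July 2024."),
    Document("news-blog", 0.40, "Rumors suggest Supplier X delays shipments."),
    Document("erp-extract", 0.90, "Supplier X inventory coverage is 45 days."),
]
prompt = build_prompt(
    "What is Supplier X inventory coverage?",
    retrieve("inventory coverage Supplier X", corpus),
)
print(prompt)
```

The prompt that reaches the model carries both the source identifier and its reliability score, which is the transparency hook Dave mentions: whatever the model generates can be traced back to the passages it was shown.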
So while gen AI can greatly facilitate access to the answer in the context that matters, and that's what's so powerful about it: with just natural language, with the right prompts, you can quickly narrow down to the data that matters. These RAG solutions can increase the transparency around precisely which data contributed to those answers, linking back to the original sources. Now let me say one other thing about that, because what you're now seeing from a lot of the AI providers are tools called research tools or deep research tools. Essentially these are RAG-type systems, in the sense that they're using LLMs to generate answers but are also directly providing the original source data that contributed to each answer. They're really trying to mitigate that problem of the LLM being opaque. This is happening, and it's happening fast, to help you deal with the transparency issue.
[00:07:26] Speaker B: From a supply chain due diligence or supply chain risk perspective, getting reliable data from suppliers is one of the hardest parts. So again, we go back to this idea of the hourglass. If we're able to get more reliable data from a small subset of my 10,000 suppliers, I love the ability to use that; I think it's really powerful.
[00:07:50] Speaker C: Yeah, I think that's a key point: you do have to invest in that seed of reliable data. There's no way around that. And then the gen AI can help facilitate access, look at the patterns in that data, and apply them. And I think we'll get to that later, about how you really expand to the bottom level of the funnel. For completeness, though, I did want to mention the hallucination challenge, another problem with gen AI, because I know it's talked about so often. The power of gen AI is the ability to predict answers based on, again, patterns in the data.
And the predictions are heavily conditioned on the context and the query, which in some sense is really great, because an application or user can just use natural language to refine their query. And gen AI does a great job at predicting what the right answer is, but it's still a probabilistic process.
So one way to mitigate the risk of hallucination, which is generating an answer that isn't actually supported by the data, is to use structured consistency checks. This is where you go back to data that you know is super reliable. So if you're asking about a supply chain network, you might know, for example, that every link in the chain, to be reliable at any given point in time, must have adequate inventory. And so while the gen AI may be using patterns in the data to generate answers that are largely right, you know that you should always be checking that.
And that check can re-rank results or trigger the generation of other answers, so you ensure it's always applied. And that's just one example. You could even use the LLM to call the right code that performs those validity checks, thereby reducing hallucinations. But this is just always something to be aware of when you're building these AI architectures.
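The inventory example Dave gives can be expressed as a structured consistency check over trusted data. A minimal sketch, assuming a per-link days-of-inventory figure from a system of record; the threshold and link names are made up for illustration.

```python
MIN_INVENTORY_DAYS = 30  # assumed adequacy threshold, for illustration

# Hypothetical days of inventory per link, drawn from trusted systems of record.
chain = {
    "raw-materials": 62,
    "component-assembly": 41,
    "final-assembly": 12,   # below threshold: the chain is not reliable
    "distribution": 55,
}

def consistency_check(answer: str, chain: dict) -> tuple:
    """Flag a generated answer that claims the chain is reliable when any
    link lacks adequate inventory, however fluent the generated text is."""
    weak = [link for link, days in chain.items() if days < MIN_INVENTORY_DAYS]
    claims_reliable = "reliable" in answer.lower()
    passes = not (claims_reliable and weak)  # fails only on an unsupported claim
    return passes, weak

passes, weak_links = consistency_check("The chain is reliable end to end.", chain)
print(passes, weak_links)
```

A failing check can re-rank the candidate answers, trigger regeneration, or simply route the answer to a human, which is the mitigation pattern described above.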
[00:09:53] Speaker B: We're going to get into some of the specifics now. A lot of companies are experimenting with AI and generative AI, and as you and I have talked about, I think defining the problem you're trying to solve first is really critical: it is going to give the work more value, and you're going to get a much better return on investment.
So let's go back to this idea of the funnel: I have 10,000 suppliers and I'm trying to funnel them down, and we know AI is being used to go from the 10,000 down to the 100. But what I really want to talk about with you right now is how do we use the extra data that we get from the hundred, really understanding their residual risk and their program maturity? How and when do we start to use that to flow it back out to the 10,000?
[00:10:47] Speaker C: Gen AI in particular, of course, is great at finding patterns in data; machine learning in general is. I think gen AI is really interesting because of how it can find patterns based on a wealth of conditional context. And this is very interesting because there are many, many features describing a supply chain, describing the inherent risk, describing the residual risk. All of these features create context.
And one of the things you can do, when you're looking to apply what you carefully learned from the assessment in the funnel to the broader data set, is to understand how that bigger data set clusters into similarly behaving supplier groups from a risk perspective. That gives you predictive and explanatory power over the unseen data. This is really a traditional strength of machine learning generally: we can look at the results of that first step, ask how to group or cluster the 10,000, and start to learn about those groups. The AI can do that. And once you have that, you can apply it. So in effect you're able to say this supplier is likely high residual risk because it is so similar to these other high-risk suppliers, and different from these low-risk ones, in the following ways, where those ways are actually explained based on the features of those suppliers and how they align with the answers to the assessments.
So the AI is doing the hard work of analyzing those assessments, how they applied to the hundred, and really extrapolating that in what's potentially a very complex way, but ultimately, with gen AI, in a way that's explicable back to that original seed data.
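One simple way to realize the extrapolation Dave describes is nearest-neighbor classification: label each of the unseen suppliers by its most similar deeply assessed suppliers, and keep those neighbors as the explanation. A sketch with invented feature vectors and labels; a real system would use far richer features and proper clustering.

```python
import math

# Hypothetical seed set: deeply assessed suppliers with known residual-risk
# labels. Features are [policy strength, training coverage, monitoring], 0..1.
assessed = [
    ([0.9, 0.8, 0.9], "low"),
    ([0.8, 0.9, 0.7], "low"),
    ([0.2, 0.3, 0.1], "high"),
    ([0.3, 0.1, 0.2], "high"),
]

def classify(features, seed, k=3):
    """Label an unseen supplier by majority vote of its k nearest assessed
    neighbors, returning the neighbors so the call can be explained."""
    neighbors = sorted(seed, key=lambda s: math.dist(features, s[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count), neighbors

# An unseen supplier from the broader 10,000 with weak-looking features.
label, explanation = classify([0.25, 0.2, 0.15], assessed)
print(label)
```

The returned neighbors are what make the call explicable in the sense discussed above: "likely high residual risk because it is so similar to these assessed high-risk suppliers."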
[00:12:52] Speaker B: That's really interesting. And then also it would give you the ability to your point before, of needing then to check.
You could then spot check. When you go back to the 10,000 at the bottom, you can actually spot check some of those to verify, to make sure that. And then would the AI continue to learn from that?
[00:13:12] Speaker C: That's right. I think when you embed this in a larger architecture, as you're spot checking, you can literally start using reinforcement learning to help you capture where you feel the AI missed something, where you disagreed, or where it should have considered something else. Say the AI has categorized one of those unseen suppliers, one of the 10,000 that wasn't in the 100. The AI comes back and says: here's why I think this is lower risk; I expect these things to be true about the residual risk based on what I've learned. And if you look at that and say, well, you missed this, or you didn't understand that, and you put those responses in those terms, then with reinforcement learning the AI will continuously learn from those responses and get more and more accurate.
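The spot-check loop Dave describes, where reviewer corrections feed back into the system, can be sketched as growing the labeled seed set. This is a simplification: it captures the data-feedback loop, not the reward-based training that full reinforcement learning involves, and all names and numbers are hypothetical.

```python
# Hypothetical seed set of (features, residual-risk label) examples.
seed = [
    ({"policy": 0.9, "training": 0.8}, "low"),
    ({"policy": 0.2, "training": 0.1}, "high"),
]

def record_spot_check(seed, features, predicted, reviewed):
    """Store the reviewer's verdict as a new labeled example, so the next
    training or retrieval pass sees it; report whether the AI was corrected."""
    seed.append((features, reviewed))
    return predicted != reviewed

# The AI called this supplier low risk; a human spot check disagreed.
corrected = record_spot_check(
    seed,
    {"policy": 0.5, "training": 0.4},
    predicted="low",
    reviewed="high",
)
print(corrected, len(seed))
```

Each review cycle leaves the seed data slightly larger and slightly more representative, which is the mechanism behind "continuously learn and get more and more accurate."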
[00:14:06] Speaker B: That's amazing. So let's look at a couple of specific cases that I know companies really grapple with all the time. In our supply chain due diligence maturity assessments, a supplier answers questions and we understand their inherent risk. Then we ask them questions and we understand their program maturity. Inherent risk minus program maturity equals residual risk. From that, let's say that in the 100 we find a pattern: residual risk is, in a lot of cases, really tied to weak policies, poor training, and no monitoring among those hundred.
How could we start to use that data at a more specific level to extrapolate about the broader universe?
[00:14:57] Speaker C: You know, I think once you learn that from the smaller data set, you can take all the data of the 10,000 that in effect reflects the answers to those questions. In other words, without necessarily redoing all the surveys, you now know what you're looking for. You've learned the patterns that are associated with high residual risk, and now you're applying them to the data in that broader set. I think that's the opportunity. And so now, as we said earlier, you can take any one of them, and gen AI should not only be able to categorize it, but in the same terms explain why it thinks it fell into one category or the other, opening the door for you to do the reinforcement learning, as suggested.
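The arithmetic Craig states, inherent risk minus program maturity equals residual risk, with maturity driven by policies, training, and monitoring, can be written down directly. The 0-to-10 scale and the equal weighting of the three drivers are assumptions for illustration.

```python
def maturity_score(policies: float, training: float, monitoring: float) -> float:
    """Equal-weight roll-up of the three maturity drivers the episode
    mentions: policy strength, training quality, monitoring coverage (0-10)."""
    return (policies + training + monitoring) / 3

def residual_risk(inherent: float, maturity: float) -> float:
    """Residual risk = inherent risk - program maturity, floored at zero."""
    return max(0.0, inherent - maturity)

# A supplier with high inherent risk and a weak program: high residual risk.
weak = maturity_score(policies=2.0, training=3.0, monitoring=1.0)
print(residual_risk(inherent=8.0, maturity=weak))  # 6.0
```

Scoring the assessed hundred this way produces exactly the labeled examples that the pattern-finding discussed above can then extrapolate across the full 10,000.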
[00:15:52] Speaker B: Say I'm sitting there with 10,000 suppliers in 30 countries, and a new law comes into effect, and I have to update my supplier code of conduct. From that, I need to find a way to send that supplier code of conduct update out to the 10,000. But ideally I'd want to modify it based on what country they're in, because there might be different implications there, and based on what they do for me: are they a software provider, are they a manufacturer, things like that. Right now, that takes companies a lot of manpower. If you look at a case like this, how would you go about tackling it?
[00:16:35] Speaker C: This is a great application for gen AI, in my opinion, because you have so much linguistic or language data describing the laws, the guidelines, and the conditions that the various countries apply to the suppliers. This is a body of data that can be used to train a model that's uniquely good at this, and it's right in the wheelhouse of building a gen AI model with enough data. In this case you'd have to acquire that data, but you also have the foundational model. So you're not building an LLM from scratch; in other words, you're not ingesting all the world's data. You're just taking the supplier codes, the legal texts, the country restrictions or guidelines, or additional unique content, and you're fine-tuning the model. You're giving it examples from that data, and you're training a layer on top of the foundational models that are already trained on language data. They're already trained on so much of this, so you're specializing them, enhancing them. This is a typical technique used to make gen AI more accurate in a particular area. I think once you've done that, it's going to be remarkably good at generating a first draft of, here's what you have to worry about.
This is the potential impact, you know, of these laws on the supplier in this country.
So it strikes me as a very good application. And this is again something that humans could more rapidly and more cheaply inspect, because you're getting the AI to take, if you will, the first cut at this and refine it as necessary, and again, through that process, increase the overall quality of the AI's results. But I think this is a very good application.
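The per-country, per-category tailoring Craig asks about would, in Dave's telling, be drafted by a fine-tuned model. The shape of the output can be sketched with a plain template; every string here, including the law references and category notes, is an illustrative assumption.

```python
# Hypothetical tailoring step: in practice a fine-tuned model would draft the
# text; this template just shows the shape of the per-supplier output.
BASE_UPDATE = "Our supplier code of conduct now requires {requirement}."

COUNTRY_NOTES = {  # assumed per-country legal context
    "DE": "This aligns with the German Supply Chain Due Diligence Act.",
    "US": "Review applicable state-level disclosure requirements.",
}
CATEGORY_NOTES = {  # assumed per-supplier-type emphasis
    "software": "Pay particular attention to data-handling clauses.",
    "manufacturer": "Pay particular attention to labor and safety clauses.",
}

def tailor(requirement: str, country: str, category: str) -> str:
    """Combine the base update with country- and category-specific notes,
    falling back to a generic prompt when a country is unknown."""
    parts = [BASE_UPDATE.format(requirement=requirement)]
    parts.append(COUNTRY_NOTES.get(country, "Consult local counsel."))
    parts.append(CATEGORY_NOTES.get(category, ""))
    return " ".join(p for p in parts if p)

msg = tailor("annual human-rights self-assessments", "DE", "manufacturer")
print(msg)
```

A gen AI model replaces the lookup tables with drafting, but the human-review step Dave recommends applies equally to both: the output is a first cut, not a final communication.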
[00:18:42] Speaker B: So let's take this same situation a step farther. So we've now sent out a tailored communication to 10,000 suppliers that was customized for them.
Could I then use Genai to go the next step and communicate something internally to all the people in my company that deal with those 10,000 procurement people, salespeople, whoever it would be, to tell them what it means for them?
[00:19:08] Speaker C: This is again one of its strengths: taking this kind of linguistic data, summarizing it, conditioning it, generating and tailoring summaries and syntheses of that data. So I think this is a very good example, because what you've got is 10,000 suppliers across 30 countries, and much of what's going on here, like policies, laws, codes of conduct, these are all sources of linguistic data that can be used to fine-tune these gen AI models to produce exactly this type of summary and application.
[00:19:59] Speaker B: That's fantastic, because if you think about it from the standpoint of a legal, compliance, or sustainability department, from there they also need to do internal reporting to senior management, summarizing what they're doing.
And I guess that would be an easy next evolution, once one starts to use AI and gen AI for that. Is that right?
[00:20:21] Speaker C: Yeah, absolutely. The way I would architect these things, there always has to be a human in the loop here. But the clear advantage is that gen AI does a really good job at generating initial drafts, and with continuous refinement it gets better and better. Even with a human in the loop, you get a dramatic reduction in the time needed, and I'm going to bet an increase in quality for applications like this. You still include the human in the loop, maybe more involved at certain points initially, and then, as the process is refined, less and less so.
But the opportunity is enormous for this type of thing.
[00:21:21] Speaker B: Dave, anything that you want to say to wrap up about the application of gen AI specifically to supply chain due diligence?
[00:21:29] Speaker C: Yeah, I think the opportunity for this type of thing is enormous, and the technology is extremely advanced.
I think people often overestimate how easy it is to use.
While it creates enormous opportunity and is easier to apply to problems like this than any previous generation of AI, it still requires some understanding to build a more robust architecture: one that understands the data and its reliability; that automatically ingests and responds to new data as it arrives; that positions humans in the loop, as I mentioned earlier, with the right sorts of interactions at the right points in the process; and that facilitates continuous learning and refinement, because the data and even the expectations change. How do you keep that going, and how do you implement a process for regular testing and validation? When you step back, there are also security and privacy issues and things like that; you have to look at the enterprise architecture for these applications. But the core engine that gen AI provides here is significant. It's really transformative, in my opinion.
[00:22:56] Speaker B: So to wrap up here, from my point of view, what is so exciting about this collaboration among the Institute for Advanced Enterprise AI, Ethisphere, and the Digital Supply Chain Institute is really that idea of the hourglass. We know companies struggle with what to do, and here we're able to take new data from the supply chain due diligence maturity assessment and use it to extrapolate, making something that's far more scalable and far more efficient for people to not only meet the regulations, but reduce risk and improve supply chain performance.
So with that, I'm going to wrap up and turn it back to Bill. If any of the listeners have an interest in getting in touch to look at some specific applications for your organization, and how the collaboration among Ethisphere, DSCI, and the Institute for Advanced Enterprise AI could help you, please feel free to get in touch with me. Thanks very much.
[00:24:00] Speaker A: For part one of this series, please visit the link in this episode's show notes, or visit the Ethicast on YouTube, Apple Podcasts, Spotify, or the Ethisphere Resource Center at ethisphere.com/resources. Ethisphere and Vectra International have partnered to deliver a supply chain due diligence maturity assessment to provide your organization with quantifiable risk ratings on its supply chain. To learn more, visit ethisphere.com. I'm Bill Coffin, and this has been the Ethicast. We hope you've enjoyed the show. For more content like this every week, please subscribe to our free weekly [email protected] Ethicast. Thanks for joining us. And until next time, remember: strong ethics is good business.