Insight ON The ‘Moral Crumple Zone’: Who Takes the Blame When AI Makes a Mistake?

Accountable AI isn’t just about explainability — it’s about structuring agentic systems to deliver ROI and trust at scale.

When AI systems make decisions, who’s responsible for the outcomes? In this episode, Meagan Gentry, National AI Practice Senior Manager & Distinguished Technologist at Insight, explores the concept of accountable AI and how agentic systems can be designed to embed trust, transparency, and business value.

Meagan introduces the “moral crumple zone” — a phenomenon where humans absorb blame for autonomous system failures — and explains why explainability alone isn’t enough. Instead, she advocates for embedding accountability directly into agent workflows, assigning clear decision territories to specialized agents.

The episode also covers how to structure agentic AI based on use case complexity, technical feasibility, and ROI potential. Meagan shares a practical XY-axis framework for evaluating AI opportunities and offers real-world examples in retail and contact center automation.

For executives navigating AI adoption, this conversation provides clarity on how to avoid risk, build trust, and unlock competitive advantage through accountable AI. 

If you liked this episode, share it with a colleague.

Have a topic you’d like us to discuss or question you want answered? Drop us a line at jillian.viner@insight.com

Jump right to…

  • 00:00: Welcome/intro
  • 03:12: What is agentic AI?
  • 06:45: Why accountability matters now
  • 09:30: Explainability vs. performance tradeoffs
  • 13:10: Ownership and moral crumple zones
  • 17:15: Mapping accountability across AI lifecycle
  • 20:21: Empowering users with AI awareness
  • 25:32: Human in the loop vs. human in command
  • 27:24: What CEOs must ask before greenlighting AI
  • 29:30: Who belongs at the AI strategy table
  • 30:58: Culture shifts and trust in AI agents

"The better a model performs, the harder it may be to explain — and that's where accountability comes in."

— Meagan Gentry, National AI Practice Senior Manager & Distinguished Technologist, Insight

Audio transcript:

The ‘Moral Crumple Zone’: Who Takes the Blame When AI Makes a Mistake?

Meagan Gentry:

Hey, if we're giving the AI a lot of decisions to make, and we know it performs very well, do we also need it to be explainable? If not, that's where accountability comes in.

Jillian Viner:

Hmm.

Meagan:

And at every point, everybody gets a share of the accountability. So not one person, not one entity, is entirely accountable or responsible for the success of the AI product, or the mistakes that it might make.

Jillian:

If you're making technology decisions that impact people, budgets and outcomes, you are in the right place. Welcome to Insight On, the podcast for leaders who need technology to deliver real results. No fluff, no filler, just the insight you need before your next big decision. Hi, I am your host, Jillian Viner, and today we're answering the question, who's accountable for your AI agents, with National AI Practice Senior Manager and Distinguished Technologist, Meagan Gentry. Let's go. Meagan, you've been in artificial intelligence for how, how many years? How many years have you been doing artificial intelligence?

Meagan:

I've been in the data science and machine learning space for about a decade now. Okay.

Jillian:

Yeah. You really emerged as our practice lead at Insight within the last couple years; you've been at the forefront, at least, of all of our experimentation and adoption. What are you most excited about today, and what are you most nervous about?

Meagan:

With AI in general? Mm-hmm <affirmative>. Um, I am most excited about its potential to improve day-to-day quality of life for everyone. Um, I am most nervous about our ability to balance our trust in AI with, you know, the healthy amount of fear that it takes in adopting a new technology safely. Um, I like to have the analogy of like, when the first cars were invented, there were no seat belts, right? What do we do as the seat belts are being invented and created, and how can we, you know, prepare ourselves for some of the trouble and some of the benefits that are coming down the road?

Jillian:

There's a great story, by the way, about how, why there's seat belts in cars <laugh>, but we won't go down that rabbit hole. Um, I know trust is a, is a really big piece for you. And I, I love that your intention, or at least your vision for AI, is to make the day-to-day easier, better. I wanna start just by clarifying some terminology before we dive into this conversation, though, maybe for leaders who aren't deep in the trenches of AI, 'cause even though we think this is what everybody talks about, there are still some people catching up. Um, just give us a really, like, a scientific explanation of what we mean when we say agentic AI.

Meagan:

Yeah. Agentic AI is different because instead of the AI model relying on providing you words that it generated to help you with your day-to-day work, it's now able to turn those words into actions. So it can set goals, it can do things to attain those goals, whether that means, uh, explaining itself or, uh, developing code to potentially, you know, create another agent that will do something for you. It's truly, you know, not just sitting there speaking with you. It's helping you get things done.

Jillian:

So the first step was the generative, mm-hmm <affirmative>, so interacting with the chatbot and having it interact, or having it generate text telling me what to do. Mm-hmm <affirmative>. Then it seemed like the next breakthrough was it actually remembered our conversations; it could learn from some of that experience. And now the next level really is it can act on those things. So it's moving away from telling me what to do to actually being able to go and do some of those mundane tasks, which is exciting, because even as a power user of generative AI, I still found that I'm doing the grunt work here. Like, I'm doing the copy-pasting, the searching to, like, validate that the information it's given me is correct. Um, so this is exciting, but what do we need to be careful of as we start handing off work to agentic AI? What's the biggest risk here?

Meagan:

Yeah. Agentic AI creates another problem, which is, where is my accountability in the process? Right? So if you're now asking AI to help you get things done, where are you putting the decision points in that process? Where is AI making a decision for you on what tool to use, uh, maybe, uh, what data to use, and maybe even how to authenticate with other agents in the space that it discovers to help you get things done? Um, one of the kind of critical emerging findings that's happening now is how we actually implement something called the Internet of Agents, which is a really exciting topic that I love to talk about. Um, and it's this concept that, um, eventually we'll get to a space where AI agents will routinely be able to efficiently and effectively access one another, uh, to get things done totally autonomously, without our instruction, without our orchestration, without us hooking together the pieces and the parts and dealing with their own security problems, which is, uh, really interesting. It's kind of, kind of spooky sometimes when you think about how that can tumble out of control without proper governance.

Jillian:

When you talk about agents talking to each other, the Internet of Agents, I've seen those videos where, like, an agent will call what they think is like a hotel agent, and then they realize they're talking to agents, and they, like, ask, do you wanna switch to, like, our language? It's so fascinating to see that happen. Is that what you're talking about? Like, we're just gonna have robots speaking robot-speak to each other?

Meagan:

Well, we want them to speak to each other to some degree, you know; that helps us out. But, uh, when the language becomes something that we can't control, interpret or choose how we interact with, that's how we get into really interesting scenarios, uh, that we have to react to quickly, and reactively.

Jillian:

It goes back to what you're saying about accountability. Yeah.

Meagan:

Who's, who's driving the machine? Yeah.

Jillian:

Who is driving the machine? Who is accountable when it makes a mistake? Yeah.

Meagan:

Um, that should be defined on a case by case basis. Uh, I like more accountability over less. Um, we can get into that, but, uh, in most cases you wanna actually have a structure around that accountability. Right.

Jillian:

What does it mean to make it accountable though?

Meagan:

Yeah. So I think, um, when we talk about AI, we talk about accountability as well. I would trust it more if I have more accountability over the AI: the more that I can intervene in the AI's process, the more I trust it, and the more I can relinquish some of that accountability, some of that control. Uh, and accountability in that sense is being able to stop AI when we don't like what it's doing, mm-hmm <affirmative>, or to be able to intervene in a decision that it's about to make for us and say no, or to prevent it from making decisions that it ought not to. Okay, AI, stay in your own lane. Right. Um, really, we wanna be able to show AI who's boss whenever we feel like it. Uh, and in some cases we're concerned that we're not gonna be able to do that. And that's why we have all this hesitation around, well, I'm afraid to use AI in this context or for this use case, because I wanna be able to govern it properly.

Jillian:

This doesn't sound that different than how we would expect employees to behave. Right. We set rules around how we work. Uh, a retail shop may have rules about return policies, and you can't have a, a store clerk just making decisions about how to change that without the authority of the manager. Um, so it doesn't sound that different than, so you really are treating the AI like, like a teammate. Yeah. Where does this get complicated?

Meagan:

I think that we struggle right now with AI explainability, uh, especially when we have AI systems that lack a proper accountability system. Mm-hmm <affirmative>. So explainability has been an issue with AI for a very long time; this is a major, old problem. Yeah. So, um, when we talk about explainability, what we really mean is the ability for a model to show transparently how it arrived at an inference, a suggestion, a conclusion, um, or, in this case with agentic AI, an activity: how it made those connections and how it got there.

Jillian:

If I put in a request and I'm watching it, quote unquote, thinking, and it's basically its own transcription of what it's thinking through, that is an example of how I can explain how it's made decisions, because I can read through that.

Meagan:

That's a really good point. Um, I think what you're observing there is a really great function of the user interface, mm-hmm <affirmative>, telling you that things are happening behind the scenes, and those things happening behind the scenes are different than what they were 10 years ago when we used AI. To go back in history just a bit: when we had the advent of deep learning, you know, very complex neural network machine learning models, they were already difficult to explain. You're dealing with really big data, vectorized, put in tensor forms. Those are nerdy words for <laugh> big, long strings of numbers that are really too hard for us all to comprehend and hold in our heads. But we were at least able to say, Hey, this big bunch of numbers flipped switches on and off enough times and in enough ways to result in this particular, you know, inference.

Meagan:

With large language models, it's that type of thing. Not that exact thing, but that type of thing: those micro decisions, mm-hmm <affirmative>, and connections and relations that the language that's input is making to the language that is eventually output. That's happening at such a rate and at such a complexity that the model itself is not capable of explaining exactly how it got there. And the creators have been quoted as saying that is also something that they're not able to necessarily tease out of the machine today. So there's this struggle right now with being able to keep AI explainable, which is a great virtue, right? We want AI to be explainable, but the pace at which that AI is being adopted, regardless of explainability, mm-hmm <affirmative>, is outstripping the research behind the explainability itself.

Jillian:

And ultimately the benefit of knowing the explainability, which is essentially understanding how it got from this point to this point, or how it came to this conclusion, you wanna know that, so that if it does do something crazy, you can go in and be like, oh, we need to tweak this and prevent that from happening in the future. Yeah. Yeah.

Meagan:

That's the benefit of explainability. But knowing that that research is lagging behind so many organizations' desire to trust and use AI, even without that explainability, that's what keeps us up at night. Right.

Jillian:

Yeah. That's a, that's a conundrum, because at the same time that you're raising these red flags that this is risky, dangerous, the research is not keeping up with where things are going, and we're being pressured left and right to deploy this. I mean, there are reports out there that show figures in the 50 to 60% range of companies, largely tech companies, that have deployed agentic AI and are using it at large scale. So I don't know, that's a bit nerve-wracking, to know that there are potentially AI agents out there that might do things that we can't explain why they do them or how to stop them.

Meagan:

It's certainly a hesitation for a lot of organizations, you know, specifically in industries where decision science has a lot of liability attached to it, mm-hmm <affirmative>, uh, and potentially legal, you know, repercussions as well. Um, if you're using AI in places that really require high explainability in order to build trust and in order to keep the organization out of trouble, mm-hmm <affirmative>, those organizations are gonna be fairly hesitant. But those might be the same organizations that are saying, we are AI first. Yeah. And AI first means we're moving fast. Yeah. We're empowering our employees with AI, and we fully intend for, you know, all of our employees, uh, to help, you know, our customers use AI and those sorts of things. I think two things can be true at once. You can lack explainability in a lot of the AI that you might be using, and you can adopt it safely and securely as long as you know where the accountability is with every use case that you apply it to.

Jillian:

We're gonna come back to that. I wanna know: what is a low-risk use case for agentic AI versus a high-risk use case?

Meagan:

Low-risk use case? Well, I think that's actually a harder one to answer than the high-risk <laugh>, because, because you could think...

Jillian:

Meagan, I'm trying to, like, get a set of doomsday scenarios here. Yeah,

Meagan:

Yeah. Um, a, a low-risk use case, I mean, I would actually say meeting note summarization is one of the most common, mm-hmm <affirmative>, and it's one of the most fun. And, you know, it increases your productivity, because, absolutely, I don't know about you, but I turn on meeting note summarization on so many of my meetings that I need action items out of, mm-hmm <affirmative>, or I need to go back and find a particular detail, or that was a great quote or a great vocabulary mention that I wanna tease out. And that is fantastic. But there's also some risk to even using that in some cases, where information might be shared in the meeting that no one intended to transcribe, no one intended to, you know, input to an AI model. And that does introduce a little bit of risk, but I would say that's a really easy one.

Meagan:

Another one is, with talent management, I like to use it to help me understand how I can conduct better performance evaluations, right? Yeah. How can I ask better questions of the people on my team, the developers, the AI engineers, the machine learning engineers that, uh, have, you know, stacked up so many different skill sets over the course of a year, mm-hmm <affirmative>. Um, how can I ask them better questions around how they're applying that, how they could stretch those goals, things like that. I use AI all the time, and, uh, especially with agentic AI, I use it to help capture stories around the work that we've done. Uh, specifically, if I've got a, uh, story that I wanna capture that someone on my team has accomplished, and I really wanna get a 360 view of the impact that we made, we might have an interview with that particular teammate that delivered this, you know, really great product to the customer, and we might say, Hey, tell us a little bit more about how you built it. Um, what were the challenges that you ran into? What could we learn from this next time we do it? Then we have AI look at the transcript of that meeting, of that interview, uh, and come up with a punch list of things that we could put into our AI runbooks and playbooks for how we deliver for customers. Uh, it helps us get our job done. Yeah.

Jillian:

I think we've used it a couple times for, like, after-action reviews. Yeah. That's a formalized process to just look back once a project's completed. It's a great way, I think, to let everybody in that conversation just think on their feet and do a garbage dump of whatever's on their brain, and then let the AI figure it out. Uh, you were joking before we started about how you'll talk caveman to the, yes, to the ChatGPT, and it is remarkably good at, uh, inference and, like, what is it that we actually mean when we say those words? So, um, okay. So you're giving us some good use cases. I mean, a lot of generative AI use cases. Do you have an agentic AI use case that, like, does a specific task for you on a regular basis?

Meagan:

Yes. Um, following up on that one where we capture stories, mm-hmm <affirmative>, um, I have it go a step further: generate my deck, generate my talk track, go ahead and email that back to my stakeholders on that project and ask for feedback. Now, this is something that you want to be airtight, right? Mm-hmm <affirmative>. So if you're contacting people with agentic AI, there are checks that you want in place; you want to review that before it gets sent out. I love an automated email notification as much as the next person <laugh>, but if I get too many of them, I'm not gonna pay attention to them. Or if I find a, uh, a mistake in one of them, it's gonna be an issue. So when I use agentic AI to help me capture and summarize stories, create content to tell our stories around how we've helped customers at Insight, mm-hmm <affirmative>, um, I get really detailed around getting the errors out of that content. The reason being, if someone's gonna interact with something that's AI generated and they find errors and they find hallucinations or they find problems, uh, it will erode the trust of not only the AI that built it, but also maybe the people that are behind the AI. Right? So I want everyone that interacts with me and the AI-generated content that I produce to trust that I've looked over it, I've done a second pass, and, you know, they're getting the best of both me and the AI. Yeah.
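To make that review step concrete, here is a minimal sketch in Python. The names and workflow are hypothetical, not Insight's actual tooling; the point is the seam it creates: the agent can only draft and queue outreach, and nothing is sent until a person marks it approved.

# Hypothetical sketch: agent-drafted emails sit in a queue until a human approves them.
from dataclasses import dataclass

@dataclass
class DraftEmail:
    to: str
    subject: str
    body: str
    approved: bool = False   # flipped by a human reviewer, never by the agent

class OutreachQueue:
    def __init__(self):
        self.pending = []

    def submit(self, draft):
        # The agent can only enqueue drafts; it has no path that sends directly.
        self.pending.append(draft)

    def flush_approved(self, send_fn):
        # Called after human review: only approved drafts go out, the rest stay queued.
        for draft in [d for d in self.pending if d.approved]:
            send_fn(draft)
            self.pending.remove(draft)

However the real system is built, the design choice Meagan describes is the same: the send action lives behind the human gate, not inside the agent.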

Jillian:

Let's go back to your accountability and explainability. Why, why is this so important to address right now?

Meagan:

I think the trade-off between model explainability and model performance has reached something that no one has ever seen before.

Jillian:

<laugh>, what do you mean?

Meagan:

Um, so, so we have the ability to optimize AI models to perform incredibly well. By the way, we're very hard on AI models. We have very little forgiveness for the mistakes that they make, uh, especially once we've gotten used to a particular experience; we expect that experience over and over. So when we've optimized models so much to perform so well, and look at our data, analyze it, make relationships, you know, understand those relationships and contextualize them, uh, but we can't necessarily trace the explainability, we are at a place where we're adopting and then we're asking questions later.

Jillian:

Mm.

Meagan:

I don't think we're gonna have, in the near future, uh, a point in time where we've totally closed the gap between explainability and very advanced models that perform very well, mm-hmm <affirmative>. Right. The better a model performs, it's also very possible that it might be harder to explain, because it's making very sophisticated connections between things that maybe we wouldn't make or maybe aren't so obvious. That gap is something that we should just keep in mind: hey, if we're giving the AI a lot of decisions to make, and we know it performs very well, do we also need it to be explainable? If not, that's where accountability comes in.

Jillian:

Hmm. Yeah, that actually makes sense, because our expectation of the AI is to make those connections that we wouldn't naturally see. So you're kind of questioning how much we care about the explainability as long as we have someone accountable. So ultimately, when you're putting together an agentic system, you know, let's talk about that ownership. When an AI agent executes something that's risky, who at that point is the accountable person? Is it the tool? Is it the maker of the tool? Is it the person? Is that something you have to decide on the front end?

Meagan:

That is something that has to be decided.

Jillian:

How do you decide that?

Meagan:

Uh, it's a really great question. Um, there is a fantastic paper; maybe we can link it. But, uh, the way that I like to think about this is when you've got too many players on the field for the AI itself, or when you have too many cooks in the kitchen, all working on an AI solution together: you've got a data curator, you've got an AI developer, you have a front-end designer and developer, and that's just on the technical side. You might have an AI steering committee that let the use case, you know, come to fruition and be funded. And then you have all the stakeholders and users of the product itself. Mm-hmm <affirmative>. At every point of that entire AI product's lifecycle, you have decisions that are being made on what data to include, how to build the AI configuration, uh, how it shows up to the user.

Meagan:

What's the UI component that's letting them interact with this particular ai? What does it look like? What's, what does it feel like? What does it allow the user to do? And then you've got the users themselves that might be subject matter experts, but they, they might not be. At the end of the day, you've got something called a moral crumple zone, which is, where do I put the accountability so that it's, it's not a blame game mm-hmm <affirmative>. When bad things happen. Mm-hmm <affirmative>. And I think traditionally it's been an issue that we, we have a hard time understanding how AI works, and that puts us in a place of not being able to trust it, because there's not necessarily an accountability framework for a lot of these tools out of the box. But when the electrical grid goes down, you don't necessarily need to know how it works to know that someone has to be called to get it back up again. <laugh>.

Meagan:

Um, and the antidote to the moral crumple zone, of not having one person who's accountable for every outcome of what an AI product might produce, is to diagram and plot out where the decision points are made: the decision on the data, the decision on the AI configuration, the decision on the user experience, all the way out to user adoption, what the inferences actually are, what happens when it's not correct, what happens when a user has to intervene. And at every point, everybody gets a share of the accountability. Mm-hmm <affirmative>. So not one person, not one entity, is entirely accountable or responsible for the success of the AI product, or the mistakes that it might make. You know, we've had a really great example of this. We're working with an, uh, emergency services company, and they wanted to be able to empower their clinicians to deliberate over different emergency response protocols, um, using AI to help summarize across, you know, hundreds and hundreds of documents: Hey, what's happening over here with this protocol, under this emergency circumstance?

Meagan:

What's happening over here? Are they the same? Do we have a standardized experience for this protocol, or do we need to standardize it to get it to a place where we all feel healthy about what's gonna be done in the field? So if you think about that entire system, you, again, you have the, the developers on the backend who are making decisions on data to include, you know, knobs and levers to set from the technical perspective, but you've also got the end users, which are clinicians deciding the protocols, but it's also the folks that are implementing those protocols out in the field. So when you look at it, maybe an EMS responder, and you say, well, what is your accountability for the AI product that informs how a protocol should be standardized when you deliver a service to a patient in the field who's in need of emergency medical care?

Meagan:

They might scratch their heads and say, well, I had nothing to do with that AI system, so what does that have to do with me and my work? And that's why it's important to get the share of accountability right per role, because there are different levels of control that each role might have, and some might have none. Mm-hmm <affirmative>. Um, and I think it's perfectly, you know, fair to say a big part of AI trepidation is understanding when you inform the end user that they're using AI: do they know that AI is underneath the system that they're using? Um, awareness of use is a concept that I love to think and talk about, because it's one of the bigger areas of risk mitigation in AI. 'Cause if you've understood that you're using AI, you can then make decisions that are informed by, okay, well, I know how much I trust AI in this scenario, so I'm gonna make my decision accordingly. I'm gonna use it as decision support instead of my decision maker. And that protects me from a little bit of risk.
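One way to picture the "diagram and plot it out" exercise Meagan describes is a simple map from decision points to the roles that share accountability for each one. A minimal sketch in Python, with hypothetical roles and illustrative weights rather than a framework from the episode:

# Hypothetical sketch: each decision point in the AI product lifecycle lists the roles
# that share accountability for it, so no single role absorbs the whole crumple zone.
ACCOUNTABILITY_MAP = {
    "data selection":      {"data curator": 0.6, "ai developer": 0.4},
    "model configuration": {"ai developer": 0.7, "steering committee": 0.3},
    "user experience":     {"frontend developer": 0.5, "ai developer": 0.2, "steering committee": 0.3},
    "use in the field":    {"clinician": 0.5, "end user": 0.3, "steering committee": 0.2},
}

def validate(mapping):
    for decision_point, shares in mapping.items():
        if not shares:
            raise ValueError(f"No one is accountable for '{decision_point}'")
        total = sum(shares.values())
        if abs(total - 1.0) > 1e-9:
            raise ValueError(f"Shares for '{decision_point}' sum to {total}, not 1.0")

validate(ACCOUNTABILITY_MAP)   # every decision point is owned, and ownership is shared

The check at the end is the point: every decision point has owners, and no one person or entity carries it all.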

Jillian:

So if I were to apply that to the medical scenario: the AI is telling me this patient likely has X condition or needs this treatment, but me, using my 20-plus years of experience in the field, I know that a fraction of that is correct, but I know the bigger picture, so I'm actually gonna do this treatment option. I've used that AI to help support my thinking, but I'm not relying on it, because I know it's coming from AI and I know it can make mistakes, is what you're saying. Yeah.

Meagan:

Like, let's empower the end user to use AI as a second opinion. Mm-hmm <affirmative>. Or to use AI to maybe reinforce or remind of a certain perspective on a problem, but maybe not replace the entire decision making system. Um, and you don't actually get that interaction unless the user knows that AI is sitting behind whatever the instructions are. If they believe that those instructions are point blank, this is what you are going to be instructed to do. This is a verbatim protocol. Mm-hmm <affirmative>. There is no varying from it because it was not AI generated. And you've got a different dynamic where they're not able to challenge it.

Jillian:

Yeah. So we're talking about human, human in the loop,

Meagan:

Human in the loop, human in the loop informed, um, informed of the role that they have in the decision making process themselves. Mm-hmm <affirmative>. And informed of whether or not they're actually using ai.

Jillian:

Is there a difference between human in the loop versus human in command? Mm.

Meagan:

Human in the loop allows people to correct AI and make it better. Human in command doesn't necessarily assume that that's what's happening. Um, you need both to make a system robust. Um, you want human in the loop as much as you can without creating extra work for the human. I think that's a big part of it, but there is a critical mass of, Hey, I need labeled data, I need to know whether or not the AI is producing something that is helping the user, that is correct. Um, and that's where human in the loop is really valuable. And it might be worth the time investment for some organizations to say, Hey, I want, um, one or two, you know, engineers on my manufacturing line to help correct AI as they see that it's making mistakes. Mm-hmm <affirmative>. Um, a little bit goes a long way with labeled data.
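As a concrete illustration of that human-in-the-loop pattern, here is a minimal sketch in Python. The confidence score and threshold are hypothetical; the idea is that low-confidence AI outputs get routed to a person, and the corrections come back as labeled data.

# Hypothetical sketch: accept high-confidence predictions, route the rest to a human,
# and keep every (input, label) pair as labeled data for the next iteration.
def review_with_human(items, threshold=0.8, ask_human=input):
    labeled = []
    for text, prediction, confidence in items:
        if confidence >= threshold:
            labeled.append((text, prediction))            # trusted as-is
        else:
            answer = ask_human(f"AI labeled '{text}' as '{prediction}'. Correct label? ")
            labeled.append((text, answer or prediction))  # the human correction wins
    return labeled

A little of this goes a long way, as Meagan says: even one or two reviewers feeding corrections back in produces the labeled data the system otherwise lacks.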

Jillian:

I want to shift us into the seat of the C-level team, the CEOs, the CFOs, the folks who are responsible to likely a board of directors or just the company at large to succeed. And again, we know that there's pressure to adopt AI and agentic AI seems to be like the next thing that people are going after. So if I'm a CEO and I'm going to be green lighting a new agentic AI project, what do I need to know or ask or see before I can proceed with a project with confidence? Woo.

Meagan:

That's a big question. I know.

Jillian:

No pressure. But, you know, this is, I think this is something that people really have to grapple with, and there's a follow on question to this, but I wanna get your big answer first. <laugh>.

Meagan:

Yeah. The big answer, um, starts with: where is the decision being made? And that could be at multiple points of an AI use case. If you are employing agentic AI, you need to understand all of the different functions that you're asking the AI to step in and do, mm-hmm <affirmative>, and define those lanes well. Right. We ask AI to stay in its lane, right? We want it to be as sophisticated as possible, create as many, uh, you know, well-informed relationships between data as possible so that it can give us the most contextual, you know, grounded response. But we also need it to function like we tell it to. What are we asking it to do? What are we not asking it to do? And how is my engineering team going to turn that expectation into reality? That's one of the first questions I would ask as a C-level that's looking at a brand-new use case and saying, is this something that AI is a good candidate to help solve?

Meagan:

Not just a matter of can we, but should we? Mm. Right. Is it going to be too easy for this AI, in its own use case, to overstep its boundary and potentially provide, um, insight where it ought not to, or to influence decisions where it ought not to? Um, I think most use cases fall under the safe category as long as proper governance is put in. Um, I don't think we've actually come across a use case where, with proper governance, we would not have AI as part of the steps that would help it, you know, get to the finish line.

Jillian:

I'm gonna go back to the C-level seat. We are contemplating that new agentic AI project. Who needs to be around the table with me? Do I need to have legal here? Obviously my head of IT. Do I need my CISO here? Like, we're gonna talk to you about a new project. Who are you meeting with?

Meagan:

Yeah, I, I would say that a lot of organizations now have an AI advisory council or an AI steering committee that can help them, you know, curate the right people in that room. But in absence of something like that, I know that your IT leaders are going to need to be involved because this is new technology that you're inviting in. Mm-hmm <affirmative>.

Meagan:

Lines of business that are most benefited by AI, right. Uh, functional areas like HR, legal, finance: so many of those areas have the richest benefits, and they're ripe for the taking in AI, so those particular leaders, as the folks that are gonna be driving a lot of the use cases internally. In addition to that, legal and maybe procurement as voices in the room to help you understand, hey, if I'm gonna be inviting a new technology in, where are the legal risks? What barriers do we have to procurement? And additionally, if we're going to be, you know, using our own data and we're gonna be connecting that, who from the security side needs to be involved to make sure that we're staying safe? Those folks are at the table; we may invite more. I'm always gonna advocate for a data scientist to be at that table, uh, because you don't always need to know how the AI works, but hey, if you can figure it out, sure, doesn't hurt <laugh>. Um, so yeah,

Jillian:

Let's talk a little bit about culture, going back to kind of the sense of accountability and having this tool kind of do some work for me. It sounds like a little bit of a culture shift, and you mentioned this a little bit earlier, about whether it saves work or not. And if I'm constantly having to look over the shoulder of an AI agent to make sure that it's doing the right work and coming to the right conclusions, it doesn't really feel like it's helping me out very much <laugh>, at least not for now. Like, do you see that changing?

Meagan:

Yeah, I mean, I think with agentic it's a little bit easier, because, I mean, it might be counterintuitive that agentic AI might present a little bit less of a risk of having to look over the shoulder of AI, mm-hmm <affirmative>. But really, what agentic AI, uh, makes possible is different AI functions being able to be chained together to arrive at the solution. So as long as you and the AI agree on what the end game is, you might not care how it gets there. <laugh>,

Jillian:

<laugh>,

Meagan:

Um, you might. And I think that's gonna be a really important distinction: our experience with generative AI and the content that it produces is something that we are used to reviewing before shipping. Agentic AI is a little bit different, because we're just asking it to get things done. So if we need to look over its shoulder, what are we exactly doing? Are we looking over its shoulder to make sure that it makes the right decisions at the right time, using the right data? That could be faster or slower depending on the use case, right, or the gravity of the decision. Um, I use agentic AI to help me build carts online for groceries. What, what? So yeah, I mean, a lot of the tools can kind of just help you: based off of your history, you know, autopopulate your cart.

Meagan:

You can give it some commands sometimes, and it will, you know, arrange, like, well, this is what we think you need to do to, um, you know, make your Thanksgiving dinner, whatever it is, right? This is the full set. You can double-check that, but I'm not gonna pore over that, you know, intensely unless I see the price tag at the bottom, being like, that's your red flag. Yeah, that's the red flag. Yeah. And I think that's really important: when we're watching over the shoulder of agentic AI, we need to know what the red flags are so we don't have to babysit it as much as we would some of the earlier models that were simply generative, right? Mm-hmm <affirmative>. With generative AI, we were looking for problems with, you know, hallucinations. We were looking for, oh, that's blatantly incorrect, that is not grounded in the truth, or that's pulling from irrelevant or old data. With agentic AI at this point, we've solved a lot of the data relevancy issues we were seeing in early versions of LLMs. We're now moving forward to solving issues around, well, it made a decision that I didn't like, so I'm just gonna go in and correct that one decision, and then we're off to the races. I think we're working with a different babysitting technique. The AI grew up <laugh>. Yeah. Right. Yeah. It's different problems now.

Jillian:

Is there a, a general use case of agentic AI that you would share? Like, something that isn't gonna require sensitive connections? Like, I'm not connecting AI to my email, I'm just not, right?

Meagan:

<laugh>? Yeah. I mean, a lot of folks are, to give their email responses a particular personal flair. Um, that is not me yet <laugh>. If I email you, you are hearing from me. Um, but yeah, I mean, I think agentic AI has a lot of power when it comes to scheduling and logistics. That's my favorite use case for it right now.

Jillian:

Does that work? What do you, what do you prompt it?

Meagan:

So, yeah, so I wouldn't say I use this personally, but from a professional perspective, we have a lot of, um, you know, customers that we talk to in the logistics, transportation, manufacturing space. And a lot of what they have to do on a day-to-day basis is schedule: Hey, when is a truck delivery coming in? When am I shipping items X through Y? How do I integrate that with my track-and-trace system? Right? A lot of really intricate details around the whens and the whats of the things that are moving, right? Um, and inventory management has always been highly, highly complex, and it's required heavy rule-based systems. Um, and those rule-based systems, you know, whenever you hear the words rule-based system, think: that's a fantastic application for agentic AI, because no one's having to tell the agent, Hey, here are the rules that I have always functioned by.

Meagan:

Therefore, you're going to need to keep up with those and then add more. You're really pointing it at your data and saying, Hey, find the path of least resistance here; that's your new rule. Um, and then let us apply constraints, right? Let us apply constraints. Well, yes, if you found that the most optimal delivery window from this one supplier to the company that they're supplying is on Sundays at 8:00 AM, fantastic. But no one's gonna be there. So apply the constraint that it needs to be Monday at 8:00 AM, and if that's already taken by someone else, move it to the next available slot. Right? Agents should be able to handle that level of nuance, and should be able to, now, for the first time, respond to those nuances and constraints via natural language. I should be able to say, Hey, I'm not available, and without handpicking which times I am available, it's going to go figure it out and know. Um, I think agentic AI has a special power because you're able to, you know, plug these systems together. You're able to say, Hey, agent one, you're in charge of scheduling. Agent two, you're in charge of talking to the person that might be applying the constraints, like staffing. Mm-hmm <affirmative>. Agent three, you're in charge of making sure that when the staff arrives, they actually know that a scheduled delivery is coming. Can you all work it out together <laugh>? And then when everyone shows up for work, everybody knows what they're gonna do that day.

Jillian:

Is agent three your accountability agent?

Meagan:

Each agent is accountable for its own thing. And I love that <laugh>. I love the concept of embedding accountability into the agent workflows, right? This is that concept of, like, every AI agent stays in its own lane. Mm-hmm <affirmative>. Hey, scheduling agent, you're just in charge of making sure nothing's scheduled at two o'clock in the morning. That's your job. You get that job right, everything else works smoothly, right? It's just like a kitchen. Everybody's got their thing, everybody's passing their thing off to the next step. Mm-hmm <affirmative>. As long as each agent knows its, you know, territory of decision making, that agent's gonna function in a much healthier way.
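Here is a minimal sketch in Python of that "stay in your lane" idea, using the delivery-window example from earlier in the conversation. The agent name, constraints, and dates are hypothetical, chosen only to show one agent owning one narrow decision territory and enforcing its own guardrail.

# Hypothetical sketch: the scheduling agent's only decision territory is picking a slot,
# and its guardrails are weekday, business hours, and no double-booking.
from datetime import datetime, timedelta

class SchedulingAgent:
    def pick_slot(self, preferred, taken):
        slot = preferred
        # Constraints: weekdays only, 8 a.m. to 6 p.m. only, and not already booked.
        while slot.weekday() > 4 or not (8 <= slot.hour < 18) or slot in taken:
            slot += timedelta(hours=1)   # violated a constraint: move to the next slot
        return slot

scheduler = SchedulingAgent()
taken = {datetime(2025, 6, 9, 8, 0)}                           # Monday 8 a.m. already booked
best = scheduler.pick_slot(datetime(2025, 6, 8, 8, 0), taken)  # optimal window: Sunday 8 a.m.
print(best)   # lands on Monday at 9 a.m., the next open weekday business-hours slot

Staffing and notification would be separate agents with their own territories; this one never touches them, which is exactly the accountability boundary being described.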

Jillian:

Do, do organizations that you're talking to, do leaders understand the best approach with agents like you just articulated? Having those specialized agents just have their one little lane, their one responsibility: like, this agent just cracks eggs, this agent just flips pancakes. It's not a chef agent that can do everything from French toast to, I don't know where my brain is going, I must be hungry <laugh>, um, you know, steak and fries. Like, is there a common misconception that you're having to face when people approach you with an agentic AI request?

Meagan:

Yeah. Um, I think there's not one single answer to whether agents each take care of one small thing and there are a lot of them, or the agent is one big thing and it takes care of a lot of different functions, mm-hmm <affirmative>. The answer is probably somewhere in between, and it depends on the use case. And my advice would be to let the use case inform you where the agents live, how many there are, and what they're responsible for, because that's going to be a little bit different every time. You might have a large language model that is capable of writing code, capable of connecting to other agents that might be specific for, that might be really good for, certain tasks, and it can coordinate with, you know, maybe a master agent that converts that natural language to activities over three sub-agents.

Meagan:

That might be enough. In other circumstances, you might have a broader organizational structure to your agents, where you've got lots of master agents, mm-hmm <affirmative>, that are contextually aware of different scenarios, circumstances, and have different data backgrounds. Um, it depends, and I think that's gonna be important. That's one of those, uh, C-level questions, right? When you're talking to a C-level executive about, Hey, I'm about to embark on this AI journey in a new use case, how many agents do I need? Is this a 10-agent problem, or is this a two-agent problem? The answer to that question, again, resides in: where are the decisions being made? What do they need to be contextually grounded in? Do they need to be isolated from other decisions, so that they don't impact one another in a cyclical way, or in some other way that would create contamination? Um, that's gonna be really important.

Jillian:

I think use cases continue to be, like, one of the biggest questions we run into. People know they need to do this; they're eager to get something launched or, or at least piloted. I feel like you could apply agents in so many different scenarios. I mean, you've listed off so many different ones just in this conversation. So when you meet with a client, where do you start? How do you identify what's going to deliver the most valuable ROI? Or maybe they're just looking for the fastest path to ROI. What does that process look like?

Meagan:

Yeah. Value versus technical feasibility is where we like to start, plotting everything on sort of like an XY axis. I always imagine value, basically the return on your investment, really being the y-axis, right? You wanna get as high on the value side of the spectrum as possible. And with technical feasibility, of course, you want to go after those use cases that have high technical feasibility. These would be things where it's proven, either in your own company or with others, that the AI, the agents, the, you know, network of it all, the pieces, can talk to each other, and it's been proven: we know we can do this, somewhere in the top side of that chart. The top right-hand quadrant is the sweet spot, right? High value, high feasibility.
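A minimal sketch of that plotting exercise in Python, with made-up use cases and 1-to-5 scores purely for illustration; anything that scores high on both axes lands in the top-right sweet spot.

# Hypothetical sketch: score candidate use cases on value (y-axis) and technical
# feasibility (x-axis), then surface the top-right quadrant first.
use_cases = {
    "meeting summarization":      {"value": 3, "feasibility": 5},
    "contact center assist":      {"value": 5, "feasibility": 4},
    "delivery slot scheduling":   {"value": 4, "feasibility": 4},
    "autonomous legal decisions": {"value": 5, "feasibility": 1},
}

sweet_spot = [
    name for name, scores in use_cases.items()
    if scores["value"] >= 4 and scores["feasibility"] >= 4   # high value, high feasibility
]
print(sweet_spot)   # -> ['contact center assist', 'delivery slot scheduling']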

Jillian:

Uh, Meagan, before I let you go, any last thoughts? I know we really ran the gamut on agentic AI, and I could just talk to you for hours about this, because your viewpoint on this is obviously very technical, but you understand the pragmatic applications, and really the business applications, and you can translate this stuff so well. So thank you for bearing with all of my questions. Um, but taking it back to the accountability piece that we started on, any final thoughts on, like, where you see that going, or, uh, best advice for managing that challenge today?

Meagan:

The last thing I would say around AI and accountability is that we really want to be able to promote trust around AI when it makes sense. And we get to define as AI users and builders when it makes sense to trust AI and how much, um, and through accountability frameworks, we can do that.

Meagan:

I, as a technical person, right, I as a developer, I like to build algorithms. Um, I want to help drive that engine forward to help with explainability, to help with the trust, to help with the performance, right? We ask often, um, is it better for AI to be explainable or to be right? Um, and I wanna say, why not both <laugh>? Um, but for the business users of AI out there, and the leaders out there that are gonna be driving AI within their organizations, I would say, you know, adopting accountability frameworks and understanding where the decisions are being made in your AI is the most important thing you can do today to have awareness of where things could fall apart and to identify opportunities to have a competitive edge with AI, right? If you know your AI systems better than anyone else, you are already ahead of the curve. So, um, happy to help with that. But I really appreciated talking with you about this. Um, I think I've learned a few things just in this conversation myself <laugh>, um, and yeah.

Jillian:

Thanks. Thank you. Thank you.

Speaker 3:

Thanks for listening to this episode of Insight On. If today's conversation sparked an idea or raised a challenge you're facing, head to insight.com. You'll find the resources, case studies, and real-world solutions to help you lead with clarity. If you found this episode to be helpful, be sure to follow Insight On, leave a review and share it with a colleague. It's how we grow the conversation and help more leaders make better tech decisions. Discover more at insight.com. The views and opinions expressed in this podcast are those of the hosts and the guests, and do not necessarily reflect the official policy or position of Insight or its affiliates. This content is for informational purposes only and should not be considered professional or legal advice.

Learn about our speakers


Meagan Gentry

National AI Practice Manager and Distinguished Technologist, Insight

Meagan leads Insight’s U.S. National AI Practice. With over a decade as a data science and machine learning practitioner, she helps our customers translate AI vision into measurable outcomes — from strategy and operating models to production deployments — with a sharp focus on responsible AI, risk, and enterprise change. She advises senior leaders on where AI creates durable advantage, and on the technical feasibility of the dreams that organizations want to see turned into reality. Her client work spans all industries, and she is a champion of the Innovate@Insight program, which facilitates inventorship through a fast track to getting our creations patented and promoted.


Jillian Viner

Marketing Manager, Insight

As marketing manager for the Insight brand campaign, Jillian is a versatile content creator and brand champion at her core. Developing both the strategy and the messaging, Jillian leans on 10 years of marketing experience to build brand awareness and affinity, and to position Insight as a true thought leader in the industry.

Subscribe Stay Updated with Insight On

Subscribe to our podcast today to get automatic notifications for new episodes. You can find Insight On on Amazon Music, Apple Podcasts, Spotify and YouTube.