Insight ON ‘Stratlock’ and the AI Asterisk: Why Pilots Fail — and What Pragmatic Leaders Do Differently 

Is your AI strategy in “stratlock”? CTO Amol Ajgaonkar reveals why most AI pilots fail — and how leaders can drive real business outcomes with a pragmatic, iterative approach.

Generative AI is everywhere, but most organizations still struggle to turn hype into results. A widely cited MIT study claims 95% of AI pilots fail — but CTO of Innovation Amol Ajgaonkar isn’t convinced. In this episode, he challenges that stat and explains why the real issue isn’t the tech, but how leaders approach strategy, data, and execution.

Ajgaonkar shares a pragmatic framework for AI adoption: start small, build guardrails, and focus on measurable business outcomes. He warns against “stratlock” — the trap of endless planning — and offers clear steps to move from strategy to action. If your organization is stuck in pilot mode or chasing instant ROI, this conversation will help you reset and move forward.

Tune in to this compelling episode, and don't forget to like, share, and subscribe for more insightful conversations.

Audio transcript:

‘Stratlock’ and the AI Asterisk: Why Pilots Fail — and What Pragmatic Leaders Do Differently 

Jillian Viner:

I finally get my clone.

Amol Ajgaonkar:

Yes. And then you actually do. You control that clone, so you own all of those agents.

Jillian:

I mean, the idea of having little clones of me to go out and do the work is very appealing. But that does sound like one step closer to just...

Amol:

I don't think it could. No, not really. This is not about replacing humans; it's augmentation.

Jillian:

If you're making technology decisions that impact people, budgets, and outcomes, you are in the right place. Welcome to Insight On, the podcast for leaders who need technology to deliver real results. No fluff, no filler, just the insight you need before your next big decision. Hi, I'm your host, Jillian Viner, and today we're getting insight on a pragmatic approach to AI with Amol Ajgaonkar, CTO of Innovation at Insight. Let's go. Are you working on anything fun right now?

Amol:

Oh, I need to speak to you about how we can scale our marketing folks' personalities.

Jillian:

Scale our personalities.

Amol:

So imagine an agent that acts like Jillian, talks like Jillian, writes like Jillian...

Jillian:

Wait a minute. Are you trying to replace me?

Amol:

No. That is the point: we would then have multiple Jillians, and you get to control them.

Jillian:

I finally get my clone. <laugh>

Amol:

Yes. And then you actually do; you control that clone. So you would say, my style is this, this, and this. And one agent would be, let's say, a podcast script writer. The other agent could be proofreading something. Multiple roles.

Jillian:

I'm getting to see how much you know about my job.

Amol:

And so that is where you would create these agents. And then when people come to you, like, Jillian, can you do this? You'd say, let me ask the agent, and the agent would do it for you. And then you would get to see how that agent is responding as well. So if you felt like, yeah, I actually wouldn't say something like that, you would just go in, change your personality, and see how it responds. And you own all of those agents.

Jillian:

The mini Jillians just running around, telling people what to do. That sounds scary. I have agents; I've set up my own agents for different workflows and things. But...

Amol:

But right now, people still come to you, and then you use your agent to do it.

Jillian:

I mean, the idea of having little clones of me to go out and do the work is very appealing. But that does sound like one step closer to just, like...

Amol:

I don't think it could. No, not really. This is not about replacing humans; it's augmentation. What if I could have you make a 10x impact on the business? Right now, you're restricted by physical time. You just cannot do more than 10 hours a day. But what if I took some load off of you, so that you would be happier doing what you're doing, and could maybe do even more without extra effort?

Jillian:

Yeah. Quite honestly, I feel that a little bit today already, because we were so quick to get AI tools in our hands. When I think back to that period, even though it was very experimental and it took time to set things up and get the results I needed, there's no way I could have done the volume of work that I did during that time without the assistance of AI.

Amol:

And I think there's a little bit of adding a human touch to those agents.

Jillian:

Absolutely.

Amol:

Because it's your personality that it would try to capture, and only you know that, and you can control it. So if, in a couple of years, you change the way you think, or you say, you know what, I used to do it this way, now I do it this way: great, you just change it. And now everybody else gets to see the new you. You could add a name to it, a logo to it, all of that fun stuff.

Jillian:

<laugh> Just the digital version. Just a question. We talked about this a little bit with Hillary, actually. The value that we bring in that situation, the reason that it's an augmentation and not a replacement, is that you are bringing your professional experience, your lived experiences. The AI can give you pretty good starting places, but there's rarely a situation where AI is going to give me something and I'm going to go, yep, ready to ship. I'm always like, eh, it didn't quite capture the tone or intention or whatever.

Amol:

And I think that's why you need the human in the loop: somebody's doing the grunt work for you, but you are bringing in your expertise to say, eh, that's not right, let me update this. You cannot ever remove the human from it, because AI is not there yet. Everyone gets excited, AI is going to do everything; it's not. <laugh> You have to keep asking clarifying questions, give it more information, do all of that. Even for coding, when they say it's amazing, I agree, it is great. But there are times when it can take you down a rabbit hole, and it's very convincing. It'll give you an answer, some code. You run it, and it doesn't work. It says, oh, I see what the problem is. And you think, great, you found the problem. It gives you slightly different code, which might or might not fix it, but adds other stuff, and it's still not working. Oh, now I know. It's so convincing: it'll give you this, this, and this, and all of those are wrong. But if you didn't know what that code is supposed to do, you'd go, this is amazing, let me just copy, paste, run it. And it's still not working.

Jillian:

<laugh> This is why I'm glad we get to talk to you today, because you are the pragmatic voice behind a lot of technologies, particularly AI. You're the CTO of Innovation, which tells me that you're constantly looking at things from a different perspective. You've got 23 patents to your name. Again, I'm glad we get to have this conversation, because right now the conversation around AI, and the value from AI, feels a little bit disoriented. It's been brewing for a while, but it really hit an inflection point over the last couple of weeks because of a headline. Those of you who are listening know what I'm talking about: the headline from an MIT report that 95% of gen AI pilots are failing. There's a huge asterisk on that statement, for multiple reasons, and there are a lot of asterisks around AI in general, which we'll talk about. But that was a couple of weeks ago, there's been a lot of commentary since, and it still feels a little disoriented. I'm curious, from your very pragmatic perspective: when you saw the initial kerfuffle about this headline, what was your reaction?

Amol:

My first thought was, what was the sample size? 95% of what? It's a great headline if you want to scare people, but there was no detail about how many pilots they looked at or what type of AI solutions they were talking about. Because right now, when people say AI, everybody thinks about generative AI, but AI is a broader term. Is it machine learning? Is it just generative AI? Is it something else? So I don't think I believed the 95%. With generative AI, there's hype and there's pessimism, but there is a middle way where there actually is value, a band in there, and if you drive in that lane, you're going to see the value, with reasonable costs.

Amol:

And so I think that, unless we know more about that report, I wouldn't put too much emphasis on it, because we've seen successes. We've seen successes internally, and we've seen successes with our customers. If 95% truly failed, wouldn't all customers be saying, eh, we don't want to do this? Wouldn't all the infrastructure players be saying, if 95% of these are failing, why are we investing so much in it? There are so many different ways of thinking about the same thing that I don't think the report was very clear. It was a high-profile headline.

Jillian:

It was a headline grabber. Well, it was hard initially to even get your hands on the report to answer some of those questions. What was the sample size? What were they measuring? What to measure has been a challenging point from day one. But I think it's a healthy conversation to have. There's so much at stake, and so much getting disrupted by AI, that it's healthy for us to challenge what we're measuring and whether we're taking the right approach. I love to do a little dive into Reddit sometimes, to get the on-the-ground pulse of how people are reacting to things. Reddit did not disappoint, as you can imagine. There was a lot of interesting commentary across the board, including some really great stories about how individuals are using AI

Jillian:

In their everyday lives, for everything from creating coloring books for their kids to really transforming their day-to-day, being more productive, and getting out of their cycles of stuck. In the workplace, though, there were some pretty repeated themes that I think this report, and the reporting that followed it, supported. I'm going to read you part of what somebody said. First of all, this is an unvarnished view, but overall, Reddit users feel like there's a lot of executive hype and FOMO in overdrive. They say that execs are, quote unquote, losing their minds over generative AI, which I think we can all attest to, and we're sort of in this stage of irrational exuberance in the hype cycle. It feels like everyone's trying to get in on this gold rush.

Jillian:

So somebody said this, and they're an employee at an engineering firm. They've seen AI predominantly invested in marketing, for automating project summaries and marketing materials: "It's just puking out copypasta that few people read anyway. We work in a data-intensive environment, and AI could absolutely change the way we do things, but right now it seems like a mad rush to get staff to figure out how to use AI tools instead of restructuring our internal workflows." We've talked about that piece a few times: it's not just about replicating what you're doing, but really looking at what work is getting done, because the best way to do it may not be the way you're doing it today. Overall, though, it does feel like the conversation is disoriented. And you raised the question earlier: maybe execs are wondering, should I be investing so much time, energy, resources, and money into this AI conversation, or is this really still hype? Has the temperature changed in your mind because of all this? Or are people coming to their senses about it?

Amol:

I think people are getting more realistic about it. And you know what we do, the experience that we have: we adopted generative AI when it came out, and within a couple of months we had the entire company using it. So we understood its limitations. We understood where it's really good, and where it's, eh, it'll take a little more time to get there. And I've started to see that in the market as well. Given our experience, it is essentially our job to educate, to bring our experience front and center and say: this is great, yes, it'll work, but it won't work this way. Or, if you're looking for a hundred percent, you may not get good results every time; maybe 90% of the time. There has to be a little bit of, let's weigh the outcomes you're looking for, and then go from there.

Amol:

And that's why, whenever we work with our customers, we are looking for smaller outcomes. What is a tangible outcome that you can measure, that we can build something for, deploy, test, and put in front of the actual end users to see how it's adding value to their day-to-day lives? Then, once you start to see the value, let's add more use cases. Thinking of it as a platform rather than a one-off solution is usually helpful, because you're not trying to boil the ocean. You're saying, let me take the small use case. I know there is value, because I can save X amount of hours from one person's life, or a hundred people's lives. And they can bring in their experience, bring in their talent, to augment what they do and do it faster or better, and maybe with less stress.

Amol:

So I think we are in a great position to have that conversation, to temper the hype a little bit, so that customers aren't going, yes, it's going to solve all my problems and make me money. Instead: let's figure out what your real challenges in the business are, pick one of those, and solve for that. What that also does is, once you have a smaller use case and everybody starts using it, they understand what works and what doesn't, and what to expect from this type of technology. And AI is advancing so quickly, with new models coming out every two or four weeks, so most of the time it's only going to get better from there.

Amol:

But you still have that mindset of, well, it could make a mistake, let me evaluate that again; if it's a decision you're making based on the output, double-check. So I've seen people close that gap. It used to be "this is going to solve everything for me" versus "this doesn't work at all," and people are actually coming to the middle and saying, okay, I see the value in this, so I can manage my expectations, my team's expectations, and the company's expectations of what this solution can do.

Jillian:

Iterative. That's what I'm taking away from what you're saying: take it iteratively, take it one piece at a time.

Amol:

One piece at a time, and build on top of it.

Jillian:

It all compounds.

Amol:

And there are intangible benefits to that. Like I said, it's people's expectations of the technology. When people get paranoid about maybe losing their jobs, like AI is going to come and replace everything, that is not always the case. It'll augment what you do. So the use cases we select, and what we build for, are important.

Jillian:

I don't want to point the finger entirely at leadership, but I think one of the other arguments that has clearly come out of the conversation around that report is that it's really not the technology that's at fault. It's the organizations. It's the leadership.

Amol:

I would say it's not just the leadership; it's everybody. Because these days, everybody is reading the news, and AI is the new thing. Even if they didn't want to read about it, it's in your face; you have to read it. So everybody has a different expectation of the technology. Leadership has had bigger exposure to other technologists and people at that level, so they come in from that point of view. People who are actually doing the work on the floor, maybe coding, have a different perspective, because they're reading as well. So I wouldn't say it's just leadership getting hyped up. Everybody is getting hyped up. And then there are pockets of people who are like, let's take a break.

Jillian:

To your point, though, you've got leaders who are reading and trying to get schooled up on this technology. But critics would say it's not their forte: you have leaders with very little technology experience shepherding this monumental change. And it feels like that's where a lot of the gaps are happening, because they might be envisioning things, or putting out these really grandiose goals, and then the technical team is sitting back there like, well, that's nice, but that's never going to happen. So how do you bridge that gap?

Amol:

I think it's a matter of communication. The leaders are visionaries; that is their job. They want to push the boundaries of what we can do, what the company should do, how it should excel. That is literally their job. And then the supporting staff, the technical supporting staff, has to be up to date with what's happening so that they can better inform the leader and say: yes, we can do this, or we cannot do this. Or, if we do it, there's a 70% chance we'll see success and a 30% chance we won't, so that they can make better decisions. The technologists supporting the executives: it is their job to inform them. I don't see why the executives should be reading up on the details of how to implement the A2A protocol and MCP. Why would they spend good time doing that?

Jillian:

They're probably just calling you up, like, hey, I just saw this cool thing. Are we doing that?

Amol:

Exactly. And it's the technologist's job to say: yes, I've done it; it's in beta; it works, or it doesn't work. Or it works, but there isn't enough of a security profile around it yet, so we should look into it or just wait. That's the supporting technologist's job. So I'm always happy when leadership comes and says, we should do this, because it means they're trying to push the boundary of where the company should go. And then everybody else should be saying: yes, we can do this; or we can do this, but it'll be expensive; or we don't have the skills to do it; or the technology isn't there.

Jillian:

Honesty. Just telling them straight up what the situation is.

Amol:

Because at the end of the day, we are all on the same team, right? The leader is not somewhere out there trying to make something happen on their own. They're our leaders, so we need to support them.

Jillian:

Yeah. The other gap that we see, reflecting back to when AI first entered the enterprise: Insight was super quick to put AI in the hands of all of our employees. And at the time, it was okay to say, here, go play; tell us what you can do with it, and how it's making your lives better, or not. Nowadays, that approach feels like a bit of a handicap. You're doing yourself a disservice if you're simply throwing these expensive tools at your teams and expecting them to adopt them. We're hearing from the ground that requiring teammates to figure out how to fit a tool into their workflows is not only ineffective, it's demoralizing. So what would your advice be to leaders? Because I've heard you say, too, that as much as you don't want to be a proponent of overhype, there's also danger in sitting on the sidelines, waiting for I don't know what. <laugh> If you're just waiting, you're not doing it right. So how do you advise leaders to push out an AI strategy that's effective?

Amol:

So one part of any AI strategy is putting guardrails in place: making people aware of what they should and should not do, without being too prescriptive. If you say, you can only do this one thing and that's it, you're clamping down on the creativity of your people. But if you say, please don't put our sensitive company information in there...

Jillian:

A compliance plan.

Amol:

A compliance plan, right? So you put your compliance plan in there. But I'm always of the opinion that if you give tools to people, they will figure out how to use them to their benefit, if they see the benefit. Along with that, you give them use cases or examples, because that is literally how we communicate with any large language model: if you want an output of a certain type, you say, here are a few examples of how I want you to do things, and it gives you output along the lines of the examples you've given. Similarly, when you give your teammates a tool, you say, here's how you could use it, but think of any other way you want to use it, as long as you stay within compliance. I know people can sometimes get overwhelmed by too many tools. But this is the moment where these tools will change how they work.

Jillian:

I have rebuilt agents across different platforms as they have changed, because I find that, oh, this platform actually got a little bit better now, and I'm going to move that over here.

Amol:

And so now, since you know how it works, you were able to switch between those platforms: let me just try it here and see if I get a better result, because that result was promising. So you took the effort to learn and move on from there.

Jillian:

Yeah. Amol, that's spoken like a true innovator. I know you lead our patent program here at Insight, and I think one of the things you've done really well is demonstrate and share the stories of the patent holders, because you are, in essence, showing the rest of the company: here are some ideas of how other people are thinking differently; here's an example of someone who tried something new and was successful. You take that same mindset to AI, where you're really leaning on your AI champions to show successful use cases, or even just be mentors and coaches for others.

Amol:

And that's really helpful, right? As you get a few people excited, or they're already excited because they've been reading and trying things on their own time, they come in with real-world, pragmatic experience. And when they actually speak to their peers and members of other departments, they say, you know what, you could do it this way, because they're constantly thinking; they're excited about it. So even though I might not know anything about warehousing and how it works, I might walk in like, ooh, I see that; could we do it this way? And even if I'm not accurate about how it would be used, the domain expertise is there, and they'll say, oh yeah, let me try it this way; I'd just change this and this. Let's give it a shot.

Amol:

The other part of it is that you can try things out much faster than you could with solutions in the past. I'll give an example. Let's say you were using our own tool, Horizon AI, Insight's internal AI tool, to ask a question. And you wondered, maybe, if I took this document and that document, whether it's warehousing numbers and some manual, would it work if I asked a question across both documents? Before, you would actually have to build a solution: get a data person, get a software developer, build something out, and then they would tell you whether it worked or not.

Amol:

Now, you would just put those two documents into our generative AI tool and say, hey, reference this from this document and that from the other, and tell me if it worked. And now you've proven out whether it would actually work. If it did, you can take that and put it into your workflow. So proving out whether something works is now at a much lower cost than it traditionally was. When you had datasets in different places, different systems, if you have access to those systems and a safe way of putting that information into a generative AI tool, you can just export from each, put it together, and ask a question. If it gives you an acceptable answer, now you have a use case. Maybe it happens in an automated way, or becomes a tool where people just click a button and get an answer. I've augmented their work: where they used to have to sift through all these data records and then marry them with that other data, it's now easier for them. So I think the innovation using this is going to accelerate.

Jillian:

Yeah. So if I'm hearing you correctly, the progression, the improvements in the technology, and it depends of course on what you have in your enterprise, allow teammates to basically experiment rapidly, fail fast, and then share their successes. And if it's something that can be easily replicated for a different teammate, you've now scaled an AI use case.

Amol:

Right. And now you can actually figure out what the cost is and what the value is, and if the value surpasses the cost, you can convert that into a solution you can roll out to the entire enterprise. This is one of those examples where you can start small, understand the value, build a solution around it, give it to the people you want to use it, and then move on. So it's almost like a collection of tools, a tool belt. You've got all of these tools in place, and once you give people the tool belt, they will use it, because it makes their lives easier.

Jillian:

Yeah. I can't tell you how many times I've shown something to a teammate and they're like, oh my gosh, this is so cool, I'm going to use this. So I agree with you. When it comes to larger-scale AI innovation, though, I've also heard you use this very fun term. <laugh> Did you coin it? "Stratlock." Can you explain it?

Amol:

So, essentially, there are so many meetings about strategy, right?

Jillian:

Let's have a strategy call to talk about strategy. Yeah.

Amol:

Mm-hmm <affirmative>. There's so much strategy: strategy meetings, a big strategy for this and a big strategy for that. And you stay there, and there's no execution on it. So you finally come up with, this is our uber-strategy, you start executing on it, and you figure out that, yeah, the strategy didn't really work. <laugh> So you go back to the drawing board and start strategizing again. You've locked yourself in strategy. You're strategy-locked.

Jillian:

I'm picturing Chutes and Ladders. It feels like you're making progress, and then you're back at square one.

Amol:

Back at square one, yeah. So it's important to create a strategy; you don't want to run with whatever comes into your brain. You need a plan, an informed plan, but then you start executing on it. If you second-guess yourself every time, and you need 20 people to weigh in on that strategy, you're going to lose time. Initially, there used to be a ramp-up. There was, I forget, a curve, with early adopters, and there's a whole thing like,

Jillian:

The change curve, you mean?

Amol:

The adoption curve, right.

Jillian:

Your stragglers, your early adopters.

Amol:

Right. And that used to work for other technologies, because those still took a little while to mature. But here, in this space, in AI, things are moving so fast that if you spend too much time thinking about what to do, there will be other people who think, do, fail; think, do, succeed.

Jillian:

So it's okay that you're playing Chutes and Ladders, because at some point you're going to get the card that gets you to the ladder that takes you to the top of the board, and you're closer to...

Amol:

Yes. Though I don't know if the ladders are the strategy part and the execution is the snakes part. <laugh> But if you just keep strategizing and planning, and never execute...

Jillian:

Right.

Amol:

Then I see that as a problem. I would rather have someone create a strategy, execute quickly, and see whether they're succeeding or failing. Get input: if it's an external-facing strategy, see whether the market reacts to it positively or not, and whether we're able to execute on it. And if it doesn't work, switch.

Jillian:

Get up. If you fail, get up. Try again,

Amol:

Try again. But if your strategy takes six months,

Jillian:

Right. And I mean, at this point, you're starting over anyway because the technology and everything has changed so much.

Amol:

Exactly. The strategy itself is outdated at that point.

Jillian:

Yeah. Joy made very similar comments in our very first episode when she came on, and that was kind of her mission-critical statement to everybody: just go, just stop thinking about it. Just go. Yep.

Amol:

And that is, that is so important. Like, we've, we've seen that. Um, I do that every day. People are like, do you want to read the manual? It's like, no, just, just give me <laugh>.

Jillian:

Just a "figure it out"

Amol:

Kind of guy. Yeah. We'll figure it out. Right.

Jillian:

Remind me not to have you help me put IKEA furniture together.

Amol:

Oh, yeah. You don't want me for that <laugh>. Like, I'll do it, I'll take it apart, and I'll put it back again. Finally, I'll get it right. But I'll do it a couple of times.

Jillian:

<laugh>. But that's, that seems like your, your M.O. Uh, as an inventor, you're probably very good at breaking things, taking them apart, putting 'em back together.

Amol:

Uh, the, the first part, yes. Putting back together, I'm not so sure.

Jillian:

What do you think is broken about the way that we're approaching AI today?

Amol:

Um, I think the strategy part is broken. I think there are too many strategies about AI, less execution on AI.

Jillian:

You don't feel like that's reckless if you just move fast on a strategy? Or is it okay to be reckless? It sounds like, uh,

Amol:

I, I'm not sure I would categorize it as reckless, as long as you have a strategy. I'm not saying just do it without a strategy. Mm-hmm <affirmative>. I'm saying come up with a strategy fast enough, um, so that you can execute on it, because things will change. Yeah. At that point. So I think right now, people don't execute on it, or they have this, um, vision of what AI can do for them mm-hmm <affirmative>. And that happens because they haven't executed on it. Right. So when you execute, those learnings will inform your next strategy. Mm-hmm <affirmative>. And what really worked and what didn't really work: whether it was a particular large language model that didn't work, maybe a prompting strategy that didn't work, maybe the choice of other supporting technologies that were wrong. It could be so many things, uh, that might not work for a particular use case, but they could be great for another one. Yeah. But those learnings never happen if you don't actually do it.

Jillian:

Real quick. What do you need to have in that strategy? Like, if you're gonna quickly do strategy and go, what are the essentials that need to be accounted for so that it's not reckless?

Amol:

It depends on what you're building that strategy for. Let's say it's a business AI strategy. You wanna understand: what are your strengths? What are your weaknesses? What can you actually deliver to the customer? What are you trying to achieve? If you don't know what you're trying to achieve, that strategy is gonna fail anyway. So if you say, I want to be able to help these types of customers faster, uh, by using this technology, okay, which customers? Which types of customers? What are your strengths? How will you deliver? Uh, do you have the discipline to deliver it in a timely fashion? You don't want that to, you know, fall apart. Mm-hmm <affirmative>. Those would be the types of components or silos, um, you know, or sections from a strategy point of view that I would put in place. And then also addressing the cost, addressing security, addressing data residency.

Amol:

Like, because if we have a strategy, whether it's internal or external, we have to make sure that our data is secure, security is correct, and the profiles are right. It's not so restrictive that nothing can happen, and it's not so relaxed that we actually lose data or stuff like that. Right. So we have to make sure that that's in place. Foundational stuff. Then coming up with, okay, if I have this, then, uh, which customers should I go to, um, with this strategy? Who would this benefit? So having that answer will help you come up with a strategy, and then go with it. Build something quick, maybe talk to your customers and say, hey, we've built this, or this is what we are thinking. Is it useful to you? Pull those relationships in, have an open conversation with them, see if that aligns, and that should inform your strategy. Mm-hmm <affirmative>. If you create a strategy that way, then in your execution, you at least know that there were customers who wanted it. Right. Otherwise, you're coming up with something, and you don't know who wants it.

Jillian:

So I took rapid-fire notes while you were talking mm-hmm <affirmative>. So I threw that question at you, but it sounds like before you begin any initiative, your strategy needs to consider ultimately your goal. What is it that you're trying to solve for? What is it that you're trying to achieve as far as outcomes? Who is it for? Uh, what is the timeframe in which you're going to execute that mm-hmm <affirmative>. What kind of feedback are you going to be searching for? Can you get that feedback? Yeah. What are the weaknesses and strengths you have internally? And that could be skills, that could be the talent that you have, if you have to outsource, get a partner, your tech capabilities, kind of along the same patterns. Mm-hmm <affirmative>. Understand your security and, most critically, your data readiness. Right. All right, Amol, I'd like to play a game with you. Uh oh. It's called red light, green light. Okay. This is the asterisk edition. Uh, I'm gonna throw out some common statements or assumptions that we hear in AI conversations, and you're gonna tell me if this is a red light or a green light, and why. So, okay. Again, think of this as the AI asterisk: these are the footnotes and the disclaimers that leaders should hear before they make any more AI decisions. Statement number one: we need this AI project to save money immediately. Red light, green light?

Amol:

Yeah. Red light.

Jillian:

Why?

Amol:

Nothing is immediate, legit <laugh>. You would actually be spending money to build it first, and then it'll start saving you money over time. Because with AI, unless it is some very specific use case where it's low-hanging fruit, and the output is exactly what you want without real crazy downsides to it, uh, it's not going to work that way for most cases. I would say, if you go in with that, like, I wanna save money immediately, you'll not be happy.

Jillian:

All right. Statement two: we're measuring success by how much productivity improves. Red light, green light?

Amol:

See, that's, that's the amber kind of thing.

Jillian:

<laugh> yellow light.

Amol:

It's a yellow light there. What?

Jillian:

Slow down. Put the asterisk

Amol:

On that. Uh, first of all, they need to know how they're measuring productivity right now, to see whether they,

Jillian:

Yeah. Is it a vibe? Are people feeling more productive? Yeah. Or is there an actual,

Amol:

Because these are intangible, right? Things that cannot be measured in a scientific way, unless you're

Jillian:

Not gonna see this on the P&L.

Amol:

Yeah. It's not gonna be that. Or if it's something along the lines of, hey, you have to churn out 20 documents in a week, okay, now you're churning out 25. But who does that, right? So it's, uh, it's more along the lines of, like you said, maybe it's a little bit of a feeling: if I can do X in a day, and instead, once I have this tool, maybe I can do X plus five mm-hmm <affirmative>. Whatever that five is, then it's more productive. Uh, but again, it's the value proposition of that tool. Is the plus five worth it

Jillian:

Right. To the

Amol:

Organization?

Jillian:

Are you, are you giving a better work-life balance mm-hmm <affirmative>. Or are you offsetting headcount by increasing workloads? Yeah.

Amol:

Because they get <crosstalk>. And even the cost of any solution, any AI solution, or any solution for that matter: there is a cost associated with that solution. Right. So when you're looking at an outcome from that solution, you'll basically say, okay, you know, if they spent five hours less, and in those five hours they did something different, what was that different thing? And how valuable was that to the organization? Then you'll be able to quantify whether spending, you know, whatever thousands of dollars on this solution was worth it or not. So there has to be a, uh, that's why I said it's amber, because it's a conversation. Yeah. It's not, uh, hey, you saved so many hours, or you suddenly became so productive that you did your stuff in an hour and you were done. Right. What did you do for the rest of the time? <laugh> So again, it's a conversation rather than a yes/no kind of thing. Yeah.

Jillian:

All right. I love this one. We built an AI agent. It works, but no one's using it.

Amol:

That means they got the use case wrong. <laugh>,

Jillian:

Right? Is it the use case or is it the adoption, the rollout?

Amol:

So people will adopt things that, first of all, they don't fear, and second, that add to their, you know, either productivity or work-life balance, that make things easier for them to do. Mm-hmm <affirmative>. Nobody will adopt something that they think will not add any value, or that might replace them at some point, um, or that just makes their life more difficult. Right? If it's one of those, then they're not gonna use it. But primarily, if the use case itself is wrong, where they don't need it, and you give them a tool that they don't need, they're not gonna use it. But if you show them, like, hey, remember you spend four hours creating this report? Here's the tool. Run it and then do something else. It'll come back with all the stuff, and then you can just cross-check it, and it'll tell you where it got the data from. Okay, now I just saved four hours. Because who wants to sit and copy-paste stuff from one system into an Excel spreadsheet, and then look at that, right? Yeah. So, uh, that's why I say

Jillian:

Mine is largely three reports that no one ever reads.

Amol:

Good to know. I'll send you some

Jillian:

<laugh>. All right. Our vendor says our AI use case can absolutely be done, no problem. Red light, green light? Oh,

Amol:

Great. Red light. Red light. So red <laugh>. Uh, because if they've really done it, they would be like, yes, it can be done, but remember, it's AI, so there could be a few things that are off, and we'll work to tweak it, to make it work as much as possible for you, for the, you know, accuracy you need. Mm-hmm <affirmative>. But if they come back and say, oh, absolutely, it's a hundred percent gonna work, um, that is what I call the PowerPoint promise <laugh>. Everything works in PowerPoint.

Jillian:

Amol, I think we need you in marketing <laugh>. Um, we've got legacy systems, but we'll figure out that data integration later.

Amol:

Red light, most of the time. Uh, but given, like, the new MCP servers and that integration strategy, as long as that data is not critical for the outcome you're looking for, you could think of it from a later point of view. But most of the time it's critical. So you at least need to know what data is there in the legacy systems mm-hmm <affirmative>. And how it needs to be used to provide the outcome that you want. So if it is critical to the outcome, you cannot think of it later. You have to think of it first. Do you actually have the data? Can you access the data? Can you extract the data and use it? There are so many other questions that will come up that have to be answered before you can actually say the outcome would be correct, or the outcome would be what you want. Um, so it depends. But most of the time, that would be a red light. To say, we'll talk about it later? Like, eh, no, let's talk about it now. Because most of the time, if they are saying, let's talk about it later, that means there is some trauma associated with accessing that data, and they don't want to deal with that right now.

Jillian:

Trauma?

Amol:

Oh, oh, absolutely. So, uh, whether it's team dynamics, or it's just a very old system that is very finicky, nobody wants to touch it because it might break, nobody has expertise. There are so many different reasons why someone might not want to go down that route. Yeah. Um, but if you have that conversation, and if it is the case where it's finicky, you know, you look at it the wrong way and it'll break, uh, then we can have another conversation. Like, where else can we get that same data from? Um, is there a place? So maybe let's talk about that first: get that data in place, and then come up with this outcome.

Jillian:

Finally, we've laid out a thoughtful AI strategy, we're ready to go.

Amol:

Good for you. Execute faster <laugh>.

Jillian:

So green light, green light, go, go, go. Yeah.

Amol:

It'll just be, if you have a strategy, go. You know, don't talk about it anymore. The next day you should wake up, and it's like, first step, I'm gonna do this. And you start doing it.

Jillian:

I like it. I like it. So again, if leaders have been listening, and the initial headlines kind of made them a little bit uneasy, the following conversations have really added some discomfort about whether or not to continue with AI. Maybe you're feeling like you're in stratlock, that strategy lock, um, <laugh>. Maybe you feel like you're part of the 95%. You're probably not part of the 95% mm-hmm <affirmative>. But, uh, really the point is, it's okay. Don't worry about it. Yep. You're not failing. Um, and if things aren't going well, it's probably not the technology, but maybe how you're thinking about it, how you're rolling it out. Maybe you're acting too slowly. So there are some questions that you need to ask yourself and your leadership team. So you're gonna check me on this, okay? Mm-hmm <affirmative>. First, are we solving a real problem, or just trying to check the box on AI? Check. Number two, is our strategy executable or just aspirational? Mm-hmm <affirmative>. Do our teams have the skills, data, and context to succeed? Are we building iteratively, or are we expecting instant ROI? Mm-hmm <affirmative>. And finally, are we measuring what matters, or just tracking productivity or vibes?

Amol:

Right? Absolutely. I mean, if you can have an honest conversation around those questions, you'll figure out the right use case. You'll figure out whether it's doable or not. And again, it's not that the technology doesn't work, or that, you know, this whole thing is a fad. None of that. I think AI is phenomenal. It surprises me every day in what it's capable of doing. There is real value in there, but you still need to have a real business use case and be realistic about whether you have all the supporting mechanisms to actually execute on it, so that you will get the, uh, the outcome that you're looking for, and you will not be part of, apparently, the 95% failure rate that's out there.

Jillian:

And if you are failing, it's okay. You just found another reason, oh, absolutely, not to use

Amol:

AI. Yeah. If you fail, I would say you still have to use AI, just maybe approach it differently. Yeah. Because maybe your use case is too aspirational; maybe look for a smaller use case. There is definitely value in this. Um, and companies that adopt AI now will be better off for the future.

Jillian:

Amal, thank you so much for being here.

Amol:

Thank you, Jillian. This was fun.

Speaker 3:

Thanks for listening to this episode of Insight On. If today's conversation sparked an idea or raised a challenge you're facing, head to insight.com. You'll find resources, case studies, and real-world solutions to help you lead with clarity. If you found this episode helpful, be sure to follow Insight On, leave a review, and share it with a colleague. It's how we grow the conversation and help more leaders make better tech decisions. Discover more at insight.com. The views and opinions expressed in this podcast are those of the host and the guests, and do not necessarily reflect the official policy or position of Insight or its affiliates. This content is for informational purposes only and should not be considered professional or legal advice.

Learn about our speakers


Amol Ajgaonkar

CTO, Product Innovation, Insight

Amol draws upon two decades of experience to drive thought leadership and strategy for outcomes-focused solutions. He is a technology leader with a passion for operationalizing Intellectual Property (IP) and fostering innovation to create customer-focused offerings and accelerators. His expertise lies in transforming innovative ideas into practical solutions, aligning them with customer needs and optimizing their implementation within the company. Amol excels in orchestrating cross-functional teams, fostering collaboration and building strategic partnerships to deliver results.


Jillian Viner

Marketing Manager, Insight

As marketing manager for the Insight brand campaign, Jillian is a versatile content creator and brand champion at her core. Developing both the strategy and the messaging, Jillian leans on 10 years of marketing experience to build brand awareness and affinity, and to position Insight as a true thought leader in the industry.

Subscribe: Stay Updated with Insight On

Subscribe to our podcast today to get automatic notifications for new episodes. You can find Insight On on Amazon Music, Apple Podcasts, Spotify and YouTube.