Insight On: High-Performing Teams Aren’t Using AI — They’re Working With It

Agentic AI is more than a tool — it’s a teammate. Learn how leaders are using it to drive business outcomes and empower their teams.

In this episode, Michael Nardone, Cloud Solutions Director and Distinguished Technologist at Insight, joins host Jillian Viner to unpack the shift from AI as a tool to AI as a teammate. They explore how agentic AI is transforming workflows, freeing up human capacity, and enabling more strategic thinking.

From vibe coding to agentic Software Development Life Cycles (SDLC), Michael shares real-world examples of how organizations are deploying AI to automate low-value tasks, improve operational efficiency, and drive innovation. He also offers a reality check on AI maturity across industries, emphasizing the importance of governance, interoperability, and business alignment.

For leaders wondering how to start — or scale — their AI journey, this episode delivers a clear playbook: focus on business outcomes, ensure technical feasibility, and treat AI as a strategic partner, not just a tool. 

If you liked this episode, share it with a colleague.

Have a topic you’d like us to discuss or question you want answered? Drop us a line at jillian.viner@insight.com

"If you treat AI like a tool, you’re limiting its potential."

— Michael Nardone, Cloud Solutions Director, Insight

Audio transcript:

High-Performing Teams Aren’t Using AI — They’re Working With It

Jillian Viner:

Convince me I need to do agentic AI right now.

Michael Nardone:

You know, we look at technology shifts, right? We think of the nineties, which were kind of the age of the internet. We had things like eBay and, you know, our first pass at Google. Then in the two thousands, we went to cloud and SaaS: AWS was released, Salesforce, ServiceNow. And then the 2010s were mobile, right? Things like Airbnb and Uber and Instagram. And then we look at the 2020s as the age of AI. That is where things are going, right? That's been the trajectory of technology. And again, the train's leaving the station. You're either going to be on board or you're going to be a casualty on the S&P 500.

Jillian:

If you're making technology decisions that impact people, budgets and outcomes, you are in the right place. Welcome to Insight On, the podcast for leaders who need technology to deliver real results. No fluff, no filler, just the insight you need before your next big decision. Today we're looking at AI not as a tool, but as your next teammate. We'll be chatting with Michael Nardone, Cloud Solutions Director and Distinguished Technologist at Insight. We're going to focus this conversation primarily around agentic AI, because this is the new, I'll say, what's a good chess move? The Queen's Gambit, right? Thank you, Netflix, for making us all chess fans for a hot minute. It does feel like agentic AI is going to be the Queen's Gambit of AI, at least for now, until the next big breakthrough happens. So, for those who maybe aren't paying attention as deeply and watching these chess matches, how would you define agentic AI?

Michael:

So, agentic AI is when we extend past generation, summarization and categorization of information, and we're giving agency, meaning decision-making qualities, the ability to determine the path rather than take a fixed course of action. Agents are what allow us to take, I'll call it the thinking, the reasoning, the understanding from what we call AI today and apply it to the real world, in a software context. Agents are what let us take all that amazing investment in models, across these new service stacks, and broadly push and apply changes to our environments, to how we see the world, and to how we generate workflows, right? So you can feel that it's just that natural extension. It was amazing when ChatGPT could tell me, hey, here's a great business plan, or here's a great way to automate a process. You know what? That's a great idea, ChatGPT. Why don't we get a fleet of agents for you, give them agency, and go ahead and accomplish that great plan you set forth for me?

Jillian:

You said that very eloquently. I always think of it as: ChatGPT so far, or generative AI so far, has been like a really helpful coach and boss telling me what to do, and now it feels like there's a bridge to actually helping me do the work.

Michael:

For sure. Yeah. And it's interesting, I'm going to pick on that a little bit. We think of a coach or a boss, right? The best context that I have is that you're still the thought leader, and our AI are largely your thought partner. They're going to provide recommendations, potential avenues to take, but ultimately the decision still sits with the human today. And I think that's so important. So when we give agency to our agents, it's all about giving the right parameters, the right prompt, the right context, the right boundaries. Because again, we're seeing that shift from, ooh, ChatGPT can tell me what to do, to, oh, now it can help me do it as well through this agentic ecosystem, which, again, I view as just a natural extension of where we're going.

Jillian:

Yeah, I agree. I agree. Especially when you get really familiar with it, and it starts to understand and mimic your thought process. It really is like having a second brain that you can outsource some things to. But I think this is really the game-changer moment, I know we hate to use that word, because before it felt like, yes, it could help me think through things and come up with ideas, and it could refine my language, but I still had to do the administrative work. I still had to go do the legwork. I still had to build the PowerPoint, do the copy-pasting, and now it's going to have the ability to do some of that for me. So it's even more of a head start on things.

Michael:

For sure. Yeah. I mean, you're providing the intent, you're providing the outcomes that you're looking for. And as we've grown in AI trust and safety, the thing that really empowers agentic AI is the fact that we've built these ecosystems around it. Think of two of our major partners here at Insight, Microsoft and ServiceNow, and the huge interoperability announcements they had. Agents developed on the Microsoft platform, so that's things like Copilot Studio and of course Azure AI Foundry, are completely interoperable with ServiceNow on their Now Assist platform and their AI platform. So you get dynamic exchange of information. A really great example, it's a very IT example, but let's say Jillian's a new teammate, right?

Michael:

And we can go into any of those agents on either side and say, hey, I need a new workflow. Jillian's a new teammate. She's going to need her new laptop. We're going to need to asset tag that, we're going to need to distribute that kit, and we need to make a business process workflow. She's going to need some IDs. Oh, and you know what? Jillian's amazing on our team, so we've got to make sure she has access to the right video equipment or software she needs to perform her role. And again, that's the agency piece, because what's the right sequence of those things? That's where, as we get better reasoning models and as things progress, it's logical that this software goes on that asset before it ships out to Jillian, and that's where you're letting these models begin to sequence and reason and complete those tasks for us. What's amazing is that two or three years ago, the conversation we're having now, and even a workflow that might seem as simple as that, was not a real possibility without a lot of human intervention. Now we're able to see those things happen with less and less human oversight. Human oversight is still absolutely critical; you always want to be validating your outcomes, providing reinforcement learning and feedback loops. But it is really cool to see how far we've come in what I call less than three years.
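Editorial aside: for readers who want a concrete picture of the sequencing Michael describes, here is a minimal, illustrative sketch. The task names and the plan_order helper are hypothetical, and a real deployment would run inside agent platforms such as Copilot Studio or Now Assist with human review; this only shows the "work out the right order, then execute" shape of the workflow.

```python
# Illustrative sketch only: how an agent might sequence onboarding tasks
# before executing them. Task names and the planner are hypothetical; in
# practice this logic would live inside platforms like Copilot Studio or
# ServiceNow Now Assist, with a human validating the outcomes.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks that must finish before it can start.
onboarding_tasks = {
    "order_laptop": set(),
    "asset_tag_laptop": {"order_laptop"},
    "install_software_image": {"asset_tag_laptop"},
    "ship_kit_to_teammate": {"install_software_image"},
    "create_ids_and_access": set(),
    "grant_tooling_access": {"create_ids_and_access"},
}

def plan_order(tasks):
    """Return one valid execution order (the 'sequencing' an agent reasons out)."""
    return list(TopologicalSorter(tasks).static_order())

def execute(task):
    # Placeholder: a real agent would call the relevant workflow or API here.
    print(f"Executing: {task}")

for step in plan_order(onboarding_tasks):
    execute(step)
```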

Jillian:

Yeah. It feels very fast. Very fast. I mean, even a year ago we would not be having this conversation. I want to ask you kind of a sensitive question, because I think you can't talk about agentic AI without addressing the elephant in the room, what often gets referred to as saying the quiet part out loud. If we are outsourcing tasks, even small tasks, to agents, how do you first of all make sure that the work you're offloading to an agent is work that you really don't want a human doing? And how do you make sure you're keeping the humans focused on the thoughtful work, maintaining a workforce that feels strong and confident, and grooming the next generation of workers coming in?

Michael:

Yeah. So, again, great question, because there are things that make us uniquely human that AI simply isn't doing yet. It can certainly be better than us at many things, but there's still very much a human aspect to everything we're doing with AI. If we think about a lot of the repetitive tasks we're asking AI to do, these are what I call low-thought activities. For most companies around the world, your people, your labor force, is not only one of your biggest expenses, it really is your greatest asset. It's your competitive differentiation in a lot of cases, alongside your IP and thought leadership. So it's about taking your teammates, that greatest asset, and giving them the thoughtful ability to go after things that are not just the day-to-day.

Michael:

So you think of this greatest asset: are you spending six hours on tasks that could be offloaded or given to AI? Those are the things where now I have six amazing hours of ideation with Jillian. I have six amazing hours where you can be there with your AI teammate, saying, hey, now that I'm freed up from this set of workload, what's our business plan? What's the way to capture that next piece? And so that's where the power comes in: being able to instruct and direct, but don't outsource your thinking to AI either. Again, I have peers on both sides of this conversation. Is AI going to take our jobs? Are we going to go to a two-day work week? Are we going to be irrelevant? And I tend to fall on the positive side of that conversation.

Jillian:

That doesn't surprise me about you.

Michael:

If you know my disposition, probably not too much of a surprise. But the reality is, it's an amazing teammate. You're able to take the things that make you uniquely able to dissect context and understanding and really put them in, because what we've found is that humans enrich prompts; they enrich the responses that we get. When you have AI models trying to train on data they've generated, you'll hear things like, we have a data gap issue, we've trained on all known human information now. And we've seen that when two AI models try to train each other, it's not the same as the outcomes and the richness that we get from people.

Jillian:

I would agree with that.

Michael:

I would say that may change in a few years. But as we see it now, people, humans in the loop, still provide a very valuable set of inputs. AI at this point is not able to replicate our inputs into those models.

Jillian:

Yeah. I actually feel like a lot of the conversations I'm having lately really reinforce that statement. I think our CMO put this really well in one of our earlier episodes. She referred to it as being more of the art director: you're not necessarily writing the script line by line, but you have the vision and the direction, and you're pulling the pieces together. Just this past weekend I was working on a very big project, and I was collaborating with AI, and I was so amazed and excited and so sucked into this thing, because it was going so much faster than it ever would have gone if I'd had to do it all on my own. It really was intoxicating, in a way, to give that direction to the AI and have it spit out a draft of something that would have taken me hours to do. And the thoughtful work was understanding how I needed to formulate it and what content needed to go in it; it wasn't the word-by-word verbiage.

Michael:

I love that. It shows how we've evolved in our interaction with AI. One of the things you'll hear me say is, if we treat AI like it's a tool, we're limiting the potential and the ceiling on what we can do with it. That interaction you had there, it's as if you'd said, I had a quick debrief and a standup with a couple of my teammates. We all huddled together, we bounced some ideas off each other. And we have a thing called red teaming, where we'll take what we think is our most amazing idea, and then I say, Jillian, go ahead and poke some holes in this plan. What could go wrong?

Michael:

What am I not thinking of from a contingency perspective? Or am I missing the mark here? It's no different than how you and I would bounce something off of each other. And I love to hear that you were able to dig down and actually treat AI like your digital teammate in that case. And again, when you train on what I call the sum of human intelligence, it comes up with some pretty good things. The number of times I've said, you know, I didn't think of that. I should have, though. Thank you.

Jillian:

I hate those moments. I mean, I love those moments, but still.

Michael:

I hate those too. And I still use please and thank you in my prompts when working with AI. I don't know if it's a Midwest thing, or just planning for that scenario that we all don't talk about. Just remember, I use please and thank you.

Jillian:

I do too. I do too. I think what adds to the collaborative, teammate aspect of it is when you're using the same tool over and over again. I'm just going to use ChatGPT as my example, because at home, personally, this is my favorite one. That 4o model knows me. It knows the way I think. It knows that I hate fluffy marketing jargon, so it will not use that in its responses back to me. So that interaction does help me feel like I've got a thought partner, because it's in line with the way I'm thinking. And I see this only growing as we get more integration with AI in the workforce and these tools become more commonplace. I do want to get a temperature check from you, though, because I saw this report from Ernst & Young. They conducted this poll in April of this year and surveyed more than 500 technology leaders, and we know technology leaders lead the pace of technology adoption. The report said that 48% of IT leaders said they were already deploying agentic AI, and most of them were deploying it at scale. That seems really high to me. What are you seeing? Does that feel right?

Michael:

That feels like we're casting a very wide net there. If you think about how mature your agentic AI is, being able to turn on that feature in a platform or a product that you've bought is different than having it be part of your DNA. So if that were a question I were asking: are you using it in production? At what scale? Is it part of your culture and how your team operates? Think about the great example you just gave. Is the first thing we ask, hey, I should bounce that off AI? Or, hey, I've got this business plan, let me feed a template in here and see where we can get to?

Michael:

And so when I hear that, that number seems actually pretty high in terms of what I would call productive agentic AI, just because so many of these technologies were so recently announced. That ecosystem has really come together in, I would say, the last six months. So, did you turn the switch? It's like you got the new phone. Did you install the app? But I would ask, are you really using the features? I certainly will say, if you asked everyone in that 48% or so that are using agentic AI, how mature do you think you are, rate yourself one to ten, I think you'd get a lot of, we're at two or three and we're moving forward.

Jillian:

So give us a reality check from the clients you're talking to or working with. What kind of agents are they building?

Michael:

So, as you know, I align with our Microsoft solution line. We're looking heavily at Copilot, Copilot Studio, Azure AI Foundry, all the Azure AI and cognitive services, and of course OpenAI. Those are a lot of the deep services that we're working with day in and day out, as are many of our clients' IT departments, who are connecting and integrating. ServiceNow has been a real stalwart in the SaaS space, everything from IT service management to HR systems to finance, really everything you need to run your business. So we're seeing a lot of traction on that side of the business with things like Now Assist agents, because we're taking business workflows that are generally well defined, and we're using AI to enhance them, optimize them, and return that labor and capacity back to our teammates.

Michael:

So we're seeing a lot of what I'll call those stage-one, stage-two type use cases. We've identified them, we've done some automation and work around them over the years, and now we have some of those use cases to target. A few years ago, I'll say post-ChatGPT, there was a lot of, hey, we're going to do AI for the sake of doing AI. Now you see much more maturity around technical feasibility and the business case. So a lot of our customers come to us today with, we've got some pretty precise use cases, we've done the math, as I like to say, and this is what we need to go after, this is what we need to build and rally around. And then they come to Insight and say, hey, there is a preponderance of technologies to choose from.

Michael:

How do I choose the right set of data platforms and complementary agentic systems? Which AI platform do I want to use? No different than what we've chosen internally here at Insight. But the nice part is, as we've walked down that path, we've got our beautiful new Horizon AI hub, which I hope all of our listeners, at least our Insight teammates, have taken advantage of. We're able to say, hey, we've walked that path, we're maybe just a little bit ahead of you, let us show you how to get here.

Jillian:

Are any of the use cases a rip-and-replace of RPA that was already set up?

Michael:

In some cases, yes. RPA has very specific platforms, and we're seeing an infusion of agentic AI into those RPA platforms. So you've already invested in an RPA platform, and if you're an RPA vendor and you don't have agentic AI at the precipice of your roadmap or already built in, you're going to be replaced. So across that space, really across every industry vertical. We've seen a lot of RPA within healthcare: claims processing, payers and providers. Those use cases all benefit heavily from agentic capabilities. The nice part is, you think about that use case, and that's one I've personally worked on over my career, you've got very interesting ways to refer to patients on different medical forms, right?

Michael:

So think about, I'll pick on a Blue Cross Blue Shield or an Anthem or an Aetna, a very large payer. Providers are your doctors, nurses, physicians, hospitals. Just the coding for the gender of a patient: is it an M, is it an F, is it neutral, is it spelled out MALE, or do you see it misspelled as MAIL? That's where large language models really benefit those document flows and workflows, and we're seeing that with bots being equipped to better understand natural language. But the agentic part comes in: now that I've understood it, don't just tell Jillian, hey, go fix this. I can actually correct it. I can put it in the workflow. And what that means, the actual outcome beyond the technology piece, is that patients get better service. Patients get taken care of faster, little clerical errors and paperwork aren't slowing down folks getting the care they need, and providers are getting the payment and financials that they need. So for me, seeing technology at a granular level lead to better health outcomes, that's exactly why most of us do what we do.

Jillian:

Yeah. That's a beautiful way to think about it and to steer toward that direction. I love that you said earlier that you like this expression of moving from AI as a tool to AI as a teammate. Have you witnessed that transition, somebody moving from AI as my tool to AI as my teammate? And what was the outcome of that?

Michael:

Yeah, so I was doing some mentoring with some folks on my team, and again, it's so much easier to take somebody down a path that you yourself have walked. So it was, hey, let me see what your prompts look like. It was actually in Insight GPT, and a lot of it was AI as a search engine. And again, Copilot is amazing at this, and I'll give a plug for it: hey, find me the PowerPoint that I worked on two years ago and collaborated on with these folks. Awesome.

Jillian:

Hey, I love that functionality.

Michael:

Yeah. So we love that aspect. But again, that's a tool, that's a search engine, right? We've done Google and search engines, and they've gotten amazing over the years, very novel approaches in technology. But when we see, okay, how can I use AI as a teammate? The first thing I do is, hey, let's sit down and look at how we're doing prompts. One of the ways you can be a great teammate to your new AI teammate is to anchor some hashtags and use all caps. Sometimes we feed in a really large prompt, and obviously the models are extremely good at parsing those out, with all sorts of different things like tokenization and what they call stemming and lemmatization, all sorts of really cool computer science words. It's like magic, right? I mean, you know what happens under the covers,

Michael:

But it can still feel like magic in a lot of cases. And what was really cool is: okay, anchor that prompt, all caps, hashtags. That's the task. And you know what really helps? Give a persona to your new AI teammate. Hey, you're an amazing consultant with 20 years of technology experience. You've seen us through the internet age, to the rise of cloud and SaaS, to mobile, and now the AI age, that whole trajectory. You work at one of the most prestigious management firms. That is the persona we're asking our new AI to take on. This is a teammate

Jillian:

That you wanna hire

Michael:

Exactly. Yeah. You're asking that persona, and then you're giving context. Again, humans have amazing context for what we want to get done. Provide that context and you will get an even better outcome, and then provide limits as well. Hey, I need you to produce, we're working on this business plan, I'm talking to my AI teammate now, and these are the actions and steps I want to take, and these are the steps that are off limits. Provide those right boundaries, and then watch the output come back. And the best thing is to iterate. One of my other favorite, I'll call it prompt engineering hacks, is to ask AI to interview you. Hey, this is what I'm thinking, what do you think about this? Hey, ask me a question on this to make sure I'm prepared, whether it's an exam if I'm still in school, a major presentation, or even a podcast like this. Hey, ask me some great questions. Really make it tough on me, my AI teammate. Pretend like you're Jillian and make it a challenge.
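Editorial aside: here is a rough sketch, in Python, of the prompt structure Michael walks through (persona, context, task, boundaries, then an iteration request). The hashtag-and-caps section markers are just a convention, and build_prompt is an illustrative helper, not any product's API; paste the resulting text into whichever assistant you actually use.

```python
# Illustrative only: one way to assemble the prompt structure described above.
# The ALL-CAPS, hashtag-anchored sections are a convention, not a requirement
# of any particular model.
def build_prompt(persona, context, task, boundaries):
    return "\n\n".join([
        f"# PERSONA\n{persona}",
        f"# CONTEXT\n{context}",
        f"# TASK\n{task}",
        f"# BOUNDARIES\n{boundaries}",
        "# ITERATE\nAfter answering, ask me two questions that would "
        "improve this plan.",
    ])

prompt = build_prompt(
    persona=("You are a consultant with 20 years of technology experience "
             "at a prestigious management firm."),
    context=("We are drafting a business plan to automate a claims-intake "
             "workflow."),
    task="Produce a one-page plan with milestones and risks.",
    boundaries="Stay on our existing cloud platform; do not change pricing.",
)
print(prompt)  # Paste into your assistant of choice, then iterate on its reply.
```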

Jillian:

Help me prepare for my QBR. Yes.

Michael:

For sure. But again, when you see that evolution: before, this was hours of PowerPoints and research, and did we even get to the best solution? I can just tell you, watching the realization of, oh, I was not using this to its potential. This was not a teammate; it was a tool. So if you really want to unlock the potential of what these LLMs and this new ecosystem are capable of, you truly have to begin to look at it and say, that was a teammate I can bounce ideas off of. I can get to a better place. And I will tell you, the realization comes when you look at it and go, you know what, that plan, that course of action, there are some things in there that I didn't think of, but it's ultimately better. And that's exactly how you and I would work together. We would put down what we're trying to accomplish, our goals. You would have your input, your thoughts, your background and history. I would put mine in, and together we get to a better place. And it's really no different with our AI teammates.

Jillian:

So first, I hope Microsoft is listening, because this has come up in, I think, every episode: how everyone uses the search function to go find the file. That should definitely be a feature they find a way to make even better than it is today.

Michael:

It is everyone's first foray into Copilot. But is this really better than Teams search? Yes. Yes it

Jillian:

Is. Yes, it is. It's amazing. It saves you hours of hunting. But to the second point, more importantly, your notion of treating it like a teammate: I had this conversation with somebody recently about how I think people managers are sometimes better prompters than people who aren't people managers, because you already have practice framing a project with those parameters, really being clear about what you're asking, how you expect it to be delivered, and giving it the context it needs. And you might be disappointed with the first thing it comes back with, or you might be really surprised and impressed by what it delivers. You never really know.

Michael:

That is a very astute observation. Those that are, I would say, progressing the fastest, right? Not that we aren't taking everybody with us, or to the extent we can,

Jillian:

I hope so.

Michael:

But those that are going the fastest understand that you are marshaling resources. You are a good manager of teammates: you know how to talk, how to be clear in instructions and direction, and that is highly beneficial. Because again, I believe people will have roles that are much more about managing and overseeing AI, as opposed to doing the work ourselves, and about validating the outcomes and the responses. So again, a great observation. If you're used to doing it with teammates, and I would even say for us parents out there, if you have some of those skills, they really do help when you're working with, again, what I'm calling your new AI teammates.

Jillian:

Yeah. It's also a great mentor. You know, it can give you some good advice. Um, I wanna move on to something, and I'm gonna have to like put the restraints on you 'cause I know you're gonna get super technical on me, and that's okay. We do have, we do have, uh, fellow technologists listening. We hear this term vibe coding, I think, everywhere. And I wanna just start with making sure that we all understand what vibe coding is.

Michael:

Yeah. So vibe, you know, is slang, like you're just going with it, it's a cool vibe, you're feeling it, you're kind of losing track of time a little bit. And so with our new agent teammates, you're able to feel the vibe, kind of get lost in time, and produce pretty robust and pretty solid code. You're just feeling the vibes and going with it, and you're bouncing ideas back and forth. Say Jillian's my agentic vibe coding partner over there, and I'm saying, you know, I'm really thinking we need an application where the back end looks like that. Yeah, that's pretty good here. What about this here? And we're iterating, we're going back and forth. It really looks a lot like the pair programming and extreme programming that we've done for a number of years, except instead of two human beings coding together, checking each other's logic, optimizing together, we now have our new AI agent there doing it. But yes, vibe coding is: you're definitely getting lost in the vibes, and out comes code, and you may or may not know how you got there.

Jillian:

Okay. I want to start a new trend called vibe content creation, or vibe writing, because I feel like that can happen too. Do you remember where this stemmed from? Do you remember where that came about?

Michael:

I, I do not. You

Jillian:

Don't? Okay. I researched this, right?

Michael:

That's very interesting. Love it. I'd love to hear it.

Jillian:

Vibe coding was coined by AI researcher and OpenAI co-founder Andrej Karpathy in early 2025. Apparently it was in a social media post where he described this new way of coding, focused on the intuition of using AI to generate code with minimal oversight. So I thought that was very interesting. It's like forgetting that the code even exists, which then makes me think of The Matrix, where you're just seeing the green characters trickling down the screen, but you're actually seeing the city and whatever. So what

Michael:

Started as a tweet, and now, yeah, now we're

Jillian:

Talking about it. I mean, isn't that how all recent terminology comes to be?

Michael:

That it is

Jillian:

You've got kids; what's the new one? Everything's mid. That's probably passé now. They don't say mid anymore, eh?

Michael:

The kids say mid, right? We get

Jillian:

Can ChatGPT help me keep up with the language of my 17-year-old? I need it.

Michael:

It actually can.

Jillian:

I think that would be really helpful. To that point, though, I've also heard vibe coding talked about in parallel with what TikTok has done for video production or what Canva has done for graphic design, where it opens the door to people for whom this is not their primary profession. They're not experts in these fields, but now you've got someone who can be at a keyboard, vibe coding, and create an app in an afternoon. Is that a real thing?

Michael:

It, it is a real thing, right?

Jillian:

And what's the danger?

Michael:

I think anyone who is as versed in software engineering, right? There is an engineering word in there, right? Mm-hmm . Uh, you think about the resilience, the hardening of production systems, right? How we make code for supportability, right? That's, those are things that are very disciplined in software engineering, right? Mm-hmm. And so, uh, a lot of the code that is outputted is, I I would call it on, on a large scale, you know, prototype ready, right? Ability to kind of demonstrate a thought or an idea a prototype, right? There's a lot of engineering discipline though. So be before you vibe code, and then go straight to production, right? There are a lot of way points, right? Uh, but what I will say is that it's making technology so accessible, right? Mm-hmm . Coding is no longer this, you know, dark mystical art that it looks like a bunch of hieroglyphs to, you know, those who are uninitiated, right?

Michael:

And I love anything we can do that makes coding approachable, makes technology approachable. We want to hit those critical points, what they call the law of diffusion of innovation. We want to hit those tipping points, and vibe coding helps us hit them. You think about what jobs are going to look like in 10 years with AI; vibe coding is allowing the technology to be more approachable. And quite honestly, there's a very good argument for taking a non-software engineer, someone who's not a professional developer or coder by day: they end up interacting on a different, or even better, level in a lot of cases. Software engineers say, hey, I need to back this, I need resiliency here, I need it deployable here, I need it to look like this, here are the requirements. It's a very transactional type of interaction with the platform. Whereas someone who doesn't have that kind of terminology or those requirements can get into a much different kind of interaction. I actually think there's going to be a high degree of innovation that comes from things like vibe coding, where we can take the engineering discipline that we have here and pair it with what I'll call the free-flowing innovation that comes out of these platforms.

Jillian:

We've been talking a lot recently about citizen developers. So hearing you talk about the approachability and this future of what this could look like, I'm putting my CMO hat on, or just my regular marketing hat on, and I'm imagining a really great application that I could potentially build on my own to fit exactly the use case I'm looking for. It sounds like that could be something that happens in the future.

Michael:

Oh, that's a hundred percent what happens. In fact, earlier you and I were talking about GPT-5, and a few of the first things I asked were: give me a thermodynamics application in a single HTML page, or give me that migratory bird pilot in a single HTML page. It is absolutely able to do that for you today. Then think about the low-code, no-code platforms; we kind of distinguish between pro-code, low-code and no-code, and it still wasn't quite accessible. Think about the way that you and I are interacting right now: we're encoding and transmitting information.

Jillian:

Not in ones and zeros.

Michael:

We're speaking English right here. And so that's just a much more natural way to interact with a computer. Low-code, no-code solutions, again, they're great, and citizen development has its absolute place; we see great traction with things like the Power Platform. But how do humans want to interact with a computer? I would much prefer our interaction here to my keyboard, even though I love my keyboard; you can tell I've been at it for a couple of decades plus now. But the point is that there was always this barrier to how we interact and bring people even to low-code, no-code. We have composable palettes, it's drag and drop, it's supposed to be easier than code in what we call an IDE, our integrated development environment, our VS Codes.

Michael:

And it was supposed to be easier, but it still didn't have what I would call broad market penetration. It was still for what I'd call the quasi tech professional, where tech wasn't your day job. I'm in sales, in marketing, in finance; there are way more apps that need to be written for our business and for my personal goals and development than we have professional software engineers for. So for me, and I keep using the word evolution, low-code, no-code solutions were great, citizen development was great, but it just wasn't as approachable as being able to use natural language, the written or spoken English that we hopefully use every day, or whatever your language is, wherever you're watching us around the world. That ability to make it that much more accessible is why I think you'll see vibe coding and those things absolutely take off. Maybe where citizen development and low-code, no-code kind of plateaued, this is what I think is going to break through that app gap ceiling.

Jillian:

And talk to me about the integration we're already seeing between vibe coding and software development life cycles.

Michael:

Yeah. So I kind of view them as separate tracks, a little bit. When we think of a true agentic software development life cycle, true agentic SDLC: software development, depending on which model you look at, is seven to eight phases, and there's really only one of them that's code. I'll call it code to keyboard, where I'm actually typing out code and syntax. If you talk to any seasoned developer or software engineer, they'll tell you the coding is actually the easy part in most cases. When we think about the full SDLC, and this is why agentic SDLC is so powerful, we generally start with design and requirements. In agile software development that's typically a BA, a business analyst, taking requirements: oh, Jillian, you want an app that will track your budget and do your finances, ooh, and it's got to integrate with your money market account, ooh, and so on. And they're gathering the responses to build

Jillian:

The story

Michael:

And the outcomes, yeah. Then we look at planning: we should build it on these platforms, and we should potentially look at these technologies to do it, ooh, and it should do this, and integrate here. Then we begin to design and prototype the system, and it's not until that point that we actually put our first code into the IDE, as we would call it. Then we test and we integrate, especially with test-driven development, TDD, and behavior-driven development, BDD; we've got all these different

Jillian:

Great, it's a long process.

Michael:

And then we finally deploy that out, and not to production yet, right? It goes into a development or test environment, we're getting feedback, and then we go to user acceptance and QA,

Jillian:

Which, by the way, my requirements have changed since we first started.

Michael:

Yeah. What you started two weeks ago is completely antiquated; start over. And then we begin to maintain that code. We've written wonderfully designed, readable, maintainable, well-tested code; we never release things into production that aren't production grade, right? But if you think about the SDLC there, so many folks look at LLMs because code is English in most cases, plus a lot of other ASCII characters, but we're forgetting that in the SDLC that's just one of seven or eight phases needed to deliver a really robust piece of software. So again, we love vibe coding because it's taking those first couple of steps for us, but true agentic SDLC lets us use the vibe coding aspect to, I believe, innovate and raise the ceiling on what we call the app gap. We have more need for apps than we can actually write with all the software engineers in the world, so we have to prioritize: sorry, Jillian, you don't get your app, because our CFO, James, asked for this really big project and we've got to do that one. So it allows us to fill that gap, and it gives you a really easy and very approachable way to get the app that you need plugged into that SDLC. So when we have them work together, we now have approachability along with the discipline and the engineering that we really need.
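Editorial aside: to make the phase-versus-code point concrete, here is a toy sketch of an agentic SDLC loop. The phase list follows the seven-to-eight phases Michael describes, and run_phase is a stub standing in for an agent call plus a human review gate; it is an illustration of the idea, not a real framework.

```python
# Toy sketch: the SDLC phases described above, each fronted by an 'agent'
# stub plus a human review gate. Real agentic SDLC tooling would back each
# phase with its own model calls, tests, and approvals; this only shows
# that coding is one phase among many.
SDLC_PHASES = [
    "requirements", "planning", "design_and_prototype", "code",
    "test_and_integrate", "deploy_to_dev_test", "uat_and_qa", "maintain",
]

def run_phase(phase, artifacts):
    # Placeholder for: agent drafts the phase output, a human reviews it.
    print(f"[{phase}] agent drafts output; human reviews and approves")
    return artifacts + [f"{phase}_artifact"]

artifacts = []
for phase in SDLC_PHASES:
    artifacts = run_phase(phase, artifacts)

print("Delivered artifacts:", ", ".join(artifacts))
```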

Jillian:

What does all that mean for, let's say, the CFO of the business? Why should they care?

Michael:

I mean, they should care. Again, look at what outcomes we're going to drive. One of the major elements of a CFO's world that I look at is the business transformation office: mergers and acquisitions. What is our speed to identifying a great acquisition target or a great merger opportunity? Being able to capitalize on things like speed to market. So if you have a use case, and I'm trying to stay away from just the softballs, we've got better reporting, better financial accounting, more dashboards. CFOs love a dashboard, but none of them want more dashboards, right? It's: how can we empower your team to capitalize on more opportunities? How can we take the amount of data that we have and make better decisions on that data, but also make it more approachable for someone in finance?

Michael:

Because if it takes Michael to navigate that for you, we've just moved the bottleneck around. So think about everything we're talking about today, from our large language models to our agent ecosystems to our vibe coding and SDLC: it's making those things approachable for someone who's an amazing person in finance and can talk right over my head in their domain, and it's going to empower them to have the technology work for them. And so we love seeing that one-two punch of them together. Because really, vibe coding in most cases, and this could change any year, talk to me again next year, same time, is not producing what I would call production quality, something I can say, hey, this is what our CFO is going to take to the street. But when you pair it with agentic SDLC principles, suddenly you can merge the two, and now I have production-grade outcomes that should get my CFO excited. My speed to innovation is there, my speed to decisions is there. And quite honestly, as long as my team is being empowered with these things, we're getting the application technology we need to make better decisions and, hopefully, better financial outcomes.

Jillian:

I think it's usually true that with speed comes risk, and the faster we go, the riskier it gets. Is there a governance or risk concern that should be addressed if we're starting to use agentic SDLC or vibe coding?

Michael:

There certainly are guardrails and boundaries that we want to put in place. We went through this with cloud transformation: we don't want to sacrifice speed for control. There's this adage that if you go fast, we have to pump the brakes, or if we want that high degree of control, it's glacial speed to procure the technology we need or make the progress we need. So what we've done, really from cloud computing and software-defined technology environments onward, for a number of years, is embed policy as we deploy, as we instantiate, and we put guardrails and boundaries around it. And what we've seen is, yes, if you do it without the right governance, guardrails and boundaries, you will not like the outcomes you get.

Michael:

And I guarantee you can go find news articles of, hey, gen AI deleted my production database. Those are the learning and growing pains along the way, so we take them as learning opportunities and we put up those guardrails. One of my favorite pieces of technology that my team is currently working on is this concept of AI gateways. For a lot of large language models, for really big enterprise use cases, we're generally not invoking them from the chat interface, whether that's natural language, speech, text, video, or in most cases us typing into a prompt. We're invoking them through a series of APIs, and that's the agent ecosystem behind the scenes. We've been securing and managing things like APIs for a while, but in these new AI ecosystems we have things like tokens. Now, hey, you know what? Jillian's OpenAI costs, her ChatGPT costs, are through the roof. How do I put some boundaries around that? She's

Jillian:

Using it too much. I don't think she's doing any work on her own.

Michael:

Our AI budget is out the window, right? So we need to put those guardrails up, and we have these concepts of AI gateways now. We're evolving technologies that solve these problems for safety, governance, and monitored, auditable usage, for your chief information security officer, your regulatory and chief compliance officers, and we're making them AI-aware and context-aware. So we take something like an API gateway, and now it's an AI gateway. Now I can route the request through it, I can look at tokens per minute, and I can understand: am I going to an approved large language model? Am I doing it over an approved and secure pathway? Am I getting a response in an approved fashion, against data that is internal and protected? We're seeing a lot of the technologies that helped us solve these challenges in the 2010s, as I'll call them, being extended to be AI-aware. So we're taking a lot of really good things that we learned and applying that safety and security pretty broadly.
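Editorial aside: here is a minimal sketch of the two AI gateway checks Michael mentions, an approved-model allowlist and a tokens-per-minute budget per caller. The names and limits are assumptions for the example; real gateways also handle authentication, logging, and data controls.

```python
# Illustrative sketch of AI-gateway-style checks: an approved-model allowlist
# and a tokens-per-minute budget per caller. This only shows the shape of the
# policy decision, not a full gateway.
import time
from collections import defaultdict

APPROVED_MODELS = {"gpt-4o", "internal-llm-prod"}   # hypothetical allowlist
TOKENS_PER_MINUTE_LIMIT = 10_000                    # hypothetical budget

_usage = defaultdict(list)  # caller -> list of (timestamp, tokens)

def allow_request(caller, model, tokens_requested):
    if model not in APPROVED_MODELS:
        return False, "model not on the approved list"
    cutoff = time.time() - 60
    recent = [(t, n) for t, n in _usage[caller] if t >= cutoff]
    _usage[caller] = recent
    spent = sum(n for _, n in recent)
    if spent + tokens_requested > TOKENS_PER_MINUTE_LIMIT:
        return False, "tokens-per-minute budget exceeded"
    _usage[caller].append((time.time(), tokens_requested))
    return True, "ok"

print(allow_request("jillian", "gpt-4o", 2_500))    # (True, 'ok')
print(allow_request("jillian", "shadow-llm", 100))  # (False, 'model not on the approved list')
```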

Jillian:

Hearing you talk about tokens and thresholds, I'm having flashbacks to when you had only so many minutes per month on your phone plan and

Michael:

Yeah, and only so many SMS texts you could send, and international calling was extra.

Jillian:

Yeah. All right, let's look at the leader's playbook here. I want to start with what's not being asked. What questions are leaders not asking about agentic AI but should be, especially before they approve projects or budgets, or even start changing organizational strategies or teams? What is the critical thing that a CEO needs to know before entering into an agentic AI project?

Michael:

Yeah, for us it's all about technical feasibility and business value. Before you embark on any project, and this is really a discipline we've had for a number of years, although when AI hit we kind of threw some of that discipline out the window, we always need to start with: what is the outcome we're looking to drive? What is the inherent business value? And is it feasible with the current state of technology? We're always anchoring on what we're looking to accomplish, what we're looking to do here.

Jillian:

How do you walk a client through that? Because that sounds like a very daunting question to start with.

Michael:

It is. We have, and I'll be biased in this answer, what I'd say is a very nice value stream mapping framework, where we're able to take a look at processes. And a little cheat code for all of our listeners: ask AI these things. Any time I'm doing a business case or a consultative engagement, I am enriching our outputs, our IP, our templates with, hey, what should we be asking for this industry vertical? What should we be asking for this specific customer, based on data or details that are already known, whether that's publicly accessible or even enriched, if it's private information they'd like us to have? Those are some of the best ways to ensure that you're asking the right questions and getting to the right outcomes.

Michael:

And so, by using those new agentic teammates as an expert consultant with 30 years in that customer's industry, you're going to get some pretty rich outputs: this is what we're seeing, these are the areas you should be attacking, these are the major tech trends hitting your industry, and this is how they can be supported through AI. Those are all the things every CIO and CXO is really asking: where can I apply generative AI technologies and reasoning models and get the best outcomes for my business? There are really three main areas that we look at: employee productivity, operational excellence, and innovation. Generally, across those three veins is where we see everything, and that's just where you should start. The cool part, though, is that if you're using your AI teammate, this process becomes very iterative and it begins to learn, just like you said earlier: hey, it was really cool that ChatGPT kind of learned my writing style and knows my likes and dislikes.

Michael:

You'll begin to have models that fit your organizational DNA. Things are learned, to the point where, yeah, these are our preferred courses of action, this is the DNA we know we can execute well, and you'll see things really tuned in. So again, you ask, where should we start? What are they not asking? The first thing I think a C-suite should be doing is going to their AI platform of choice and saying, hey, what are the biggest things approaching my business, and what should I be doing with AI that I'm not? So the question you asked me, which is fantastic, is the exact question that I ask AI, to take a temperature check on my own knowledge as well.

Jillian:

Ask AI what to do with AI. So if they're using Copilot, ask Copilot; if they're using Gemini, ask Gemini.

Michael:

Copilot, what am I not doing with you that I could be? That would be amazing. And in a lot of cases, it's: what am I already paying for that we're not unlocking the value of?

Jillian:

Hmm. That seems to be a common question with most software: we're only using a fraction of its true value. So it makes sense to apply that thinking to generative AI.

Michael:

That's why I said a lot of these principles we've kind of paid tuition on, yeah, in

Jillian:

The last few

Michael:

Decades. And now we're just learning and applying them in a new context, in the new AI world.

Jillian:

If you could, I don't know, make it so that every employee had their own little AI agent, is that something you would do? And how would you frame that? How would you organize it?

Michael:

So for me, you're talking about a coworker that has the sum of human information at their fingertips. For me, that's a teammate that's not impacting or taking your job. That's someone there who's able to enrich your day-to-day, and they're going to make you better: hey, I need a plan to accomplish this, let me bounce some ideas off you. And so I think you can really frame it as: this is what's going to accelerate you, this is what's going to free you up for the things that maybe AI can't do, and that list becomes smaller and smaller. But for the most part, we want to be freed up to do the things that make us uniquely human. That's what gets me excited: going after the things that AI can't do, and being freed up to do the things that make me human. Entering the same report that I've done a thousand times is not exciting, but being able to think and iterate and ideate and create some new channels and new pathways, those are the things that should excite us as people.

Jillian:

Yeah. Please do my expense report for me. Thanks.

Michael:

Oh my goodness. Yeah. If we can get an agentic flow to do my Concur.

Jillian:

I'm sure it's out there. I'm sure it's out there. Again, when you're talking to clients, are there common mistakes or misunderstandings about agentic AI, things you just wish everybody could understand right now?

Michael:

There are, right? It's not an easy button. It requires business context, knowledge and understanding. Your business processes form the input; they form the basis. So if you don't have great business context and business processes, if you don't have a great data estate, if you have many different ways to reflect the same customer record, or systems that lack data integration, your agentic AI will be capped. It will struggle with those. So those are some of the major things I would tell any customer: make sure you're AI-ready.

Jillian:

Yeah. That was actually going to be my next question, because they may have the enthusiasm, but if they're not ready in that sense, looking at their operating model, their data health, even the people skills, then they're not going to get anywhere with this. So those are the signals you're looking at, where you might actually say, hold on, you're not ready for this, we need to go back and do some foundational work first.

Michael:

Yes. And again, the major thing is: listen, it's not a panacea for everything. I won't use the word incremental, because it's a bigger shift than that, but it has very high potential. Let's make sure that you're ready for it. People should know the potential is very high, but we're in the early stages. Get yourself ready, get yourself mature, and take advantage of it.

Jillian:

How do you measure the success?

Michael:

Oh, for me that's easy: it's the business outcomes that we set out to achieve. This is what we intended to do, these are the results we wanted to get, and then you have KPIs against where you got to. Hey, give me a business plan that increases operational efficiency by 10% by taking a number of actions. And then when I interact with AI, it'll be: give me your top six recommendations in your order of priority. Then I'll pick apart that priority and send it back and say, hey, I think you were spot on with one, two and three, but four, five and six I don't really agree with. And then we iterate, we go back and forth, and we get to that solid and robust plan.

Jillian:

What's your reaction if you see something like productivity gains high on that list?

Michael:

I mean, again, productivity gains mean I'm returning labor and human capital back, right? So productivity gains are: Jill and Michael didn't need to spend eight hours in PowerPoint. We had a beautiful templatized mock-up on brand standards, making our CMO happy, and that took an hour. And then we got to focus on the content, on our audience. Rather than spending the time on what I'll call the bits and bytes, the style, the syntax, we got to focus on: what's our audience, what do they want to hear, what's going to be most relevant to them? Anytime I present, I'm always asking, how can I add value to my audience? How can I enrich their day? So I get to focus on, again, the things that make us human, and not on how to make a PowerPoint that looks great.

Jillian:

So you're in front of a CMO, a CFO, and the CEO. Convince me I need to do agentic AI right now.

Michael:

I mean, the train's leaving the station. You're either on board or you're gonna go out of business, right? That is the short of it. We look at technology shifts, right? We think of the nineties, that was the age of the internet. We had things like eBay and our first pass at Google. Then in the two thousands we went to cloud and SaaS: AWS was released, Salesforce, ServiceNow. The 2010s were mobile, things like Airbnb and Uber and Instagram. And then the 2020s are the age of AI. That is where things are going, right? That's been the trajectory of technology. And again, the train's leaving the station. You're either gonna be on board or you're gonna be a casualty on the S&P 500.

Jillian:

You've convinced me that I need to do this, that agentic AI is certainly a strategic imperative, but I'm fearful of which tool to use or which lab to go with. How do you make that call?

Michael:

Yeah, it's a very common question we're asked day in and day out with our clients. For us, it's all about being open and flexible, brokering services that give you flexibility across different LLMs, different reasoning models, different agent ecosystems, right? So many of the technologies and recent enhancements are all about the ecosystem. I think what's so unique about this technology trend, for someone like myself, is that over the years we've always seen vendors try to force you into a box. Hey, you buy platform X, Y, Z, and it's like Hotel California, right? You can't check out. Oh my gosh, I've got this legacy asset, now I can't get away from it. It's got proprietary data formats, it's got a proprietary coding model, I can't even take my data out of it. And so there's a lot of fear: if I pick the wrong thing, my data's locked away. If that standard doesn't win or that platform lags, I've suddenly bet my business on the loser, right? But what we're really seeing is that the technologies are actually helping hedge against that. I call it future proofing your investments. So you've probably talked with some of our other guests about ACP, the Agent Communication Protocol, maybe even A2A, agent-to-agent.

Jillian:

We've not gotten into those topics yet.

Michael:

All right, fantastic. So I won't go too deep. But so many of these topics, and I'm sure you've talked about MCP, the Model Context Protocol, all of these amazing technologies are about interoperability of agents, right? Being able to create great AI-enabled applications where, essentially with the flick of a switch, I can swap out different models. So right now we use a lot of Claude, the Sonnet family of models, for things like the SDLC, as we've talked about. Being able to say, hey, I want to try GPT-5 was a flip of a switch. We actually have our teams doing a lot of what I'll call agentic code base analysis, right? Millions of lines of code. What are the areas where you can repair defects and kind of move the ball forward, right?

Michael:

Fix vulnerabilities, make it more secure, right? We were able to say, hey, you know what, we want to try out GPT-5, and switch it over. That's how I could confidently say, hey, things are looking a little slower there, but better outcomes if I can wait the extra time, right? And so those are some of the major things we see: getting to that end point, that end state, through all those different layers of reasoning.
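
For readers who want to picture the "flip of a switch" model swap Michael describes, here is a minimal sketch in Python, assuming a hypothetical provider-agnostic interface. The class names, the AGENT_MODEL environment variable, and the stubbed backends are illustrative only, not Insight's actual tooling.

```python
# Minimal sketch: every model sits behind the same interface, and one
# configuration value decides which backend the agent calls.

from abc import ABC, abstractmethod
import os


class ChatModel(ABC):
    """Common interface every backend must satisfy."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class ClaudeSonnetBackend(ChatModel):
    def complete(self, prompt: str) -> str:
        # A real system would call the provider's SDK here (omitted).
        return f"[claude-sonnet] response to: {prompt}"


class Gpt5Backend(ChatModel):
    def complete(self, prompt: str) -> str:
        # A real system would call the provider's SDK here (omitted).
        return f"[gpt-5] response to: {prompt}"


# Registry of available backends, keyed by a plain config string.
BACKENDS: dict[str, type[ChatModel]] = {
    "claude-sonnet": ClaudeSonnetBackend,
    "gpt-5": Gpt5Backend,
}


def load_model() -> ChatModel:
    """Pick the backend from configuration, not from application code."""
    name = os.environ.get("AGENT_MODEL", "claude-sonnet")
    return BACKENDS[name]()


if __name__ == "__main__":
    model = load_model()
    print(model.complete("Summarize the open defects in module X."))
```

The point of the pattern is that agent code depends only on the shared interface, so trying a different model becomes a configuration change rather than a rewrite.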

Jillian:

I like how you went through that, because I think the uncertainty we're feeling is the horse race of the labs. It's changing week by week. One week one model's outperforming the next, and that could change again. So I understand there's probably a fear of, are we choosing the right model? Is our horse going to be the winning horse here? Yeah.

Michael:

And that's where so many of those technologies come in: you're able to select a different model in the backend, you're able to connect up. We've already talked about the Microsoft and ServiceNow examples, right? But we have the same thing with a lot of our platform stacks, across things like Amazon and of course Anthropic, and Hugging Face as a marketplace for these things, if you've heard of that. So being able to connect all those different systems in kind of the agent ecosystem, we're excited about, as opposed to proprietary protocols where it's just one company and you're on their standard. What it really means is that if you choose Copilot Studio for your agent platform for knowledge workers, you're able to interoperate with many other systems. Now, obviously there are data considerations, right?

Michael:

But the reality is that these systems will be highly interconnected, right? Think about it. If your agent needs to accomplish a task that requires lots of communication across lots of different agents in diverse systems, a closed ecosystem is absolutely not going to work. And so again, we're very excited that the latest set of protocols and technology standards keep it open and flexible. So to come back to the question: we take the fear away by staying open and flexible. Well-documented, well-adopted standards are what's going to prevent you from feeling like you're locked in, or feeling like your switching costs will be so abnormally high that you can't take advantage of the latest tech if you chose wrong today.
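
As a rough illustration of why an open, shared message format keeps agents from different vendors interoperable, here is a simplified Python sketch. The envelope below is hypothetical and deliberately minimal; it is not the actual A2A or MCP wire format, and the agent names are invented for the example.

```python
# Both agents depend only on the shared envelope below, not on each
# other's vendor SDKs. Illustrative schema, not a real protocol spec.

from dataclasses import dataclass, asdict
import json


@dataclass
class AgentMessage:
    sender: str        # e.g. "copilot-studio/expense-agent" (hypothetical)
    recipient: str     # e.g. "servicenow/approval-agent" (hypothetical)
    task: str          # what the recipient is being asked to do
    payload: dict      # task-specific data


def send(msg: AgentMessage) -> str:
    """Serialize to a vendor-neutral wire format (plain JSON here)."""
    return json.dumps(asdict(msg))


def receive(wire: str) -> AgentMessage:
    """Any agent that understands the schema can consume the message."""
    return AgentMessage(**json.loads(wire))


if __name__ == "__main__":
    request = AgentMessage(
        sender="copilot-studio/expense-agent",
        recipient="servicenow/approval-agent",
        task="approve_expense_report",
        payload={"report_id": "EXP-1042", "amount_usd": 312.50},
    )
    wire = send(request)
    print(receive(wire).task)  # the receiving agent reads the same task
```

Because both sides depend only on the shared schema, either agent can be swapped for another vendor's implementation without touching the other, which is the hedge against lock-in Michael is describing.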

Jillian:

So really, you should just go ...

Michael:

You should, yeah. What are you waiting for?

Jillian:

Michael, thank you so much for talking to us today. It was lovely to have you.

Michael:

It was wonderful to be here. Thank you so much, Jill.

Speaker 3:

Thanks for listening to this episode of Insight On. If today's conversation sparked an idea or raised a challenge you're facing, head to insight.com. You'll find the resources, case studies, and real-world solutions to help you lead with clarity. If you found this episode helpful, be sure to follow Insight On, leave a review, and share it with a colleague. It's how we grow the conversation and help more leaders make better tech decisions. Discover more at insight.com. The views and opinions expressed in this podcast are those of the hosts and the guests, and do not necessarily reflect the official policy or position of Insight or its affiliates. This content is for informational purposes only and should not be considered professional or legal advice.

 

Learn about our speakers


Michael Nardone

Cloud Solutions Director and Distinguished Technologist, Insight

Michael is obsessed with helping clients achieve business outcomes and deliver real value through modern cloud platforms and accelerated software principles. In his 20 years of experience with enterprise technology, Michael has held various roles across deep technical administration, engineering, architecture and strategy, with a focus on leadership and equipping teams for change. He enjoys the learning journey and rapid pace that technology brings and strives to build high performing organizations that embrace those principles.


Jillian Viner

Marketing Manager, Insight

As marketing manager for the Insight brand campaign, Jillian is a versatile content creator and brand champion at her core. Developing both the strategy and the messaging, Jillian leans on 10 years of marketing experience to build brand awareness and affinity, and to position Insight as a true thought leader in the industry.

Subscribe: Stay updated with Insight On

Subscribe to our podcast today to get automatic notifications for new episodes. You can find Insight On on Amazon Music, Apple Podcasts, Spotify and YouTube.