SaugaTalks

AI Readiness in Enterprises: From Shadow AI to Competitive Edge

Irene


In this episode of SaugaTalks, host Irene Lyakovetsky sits down with Seth Earley, CEO of Earley Information Science, to demystify AI readiness for enterprises. From taming shadow AI experiments to building robust governance, they explore why 40-95% of generative AI initiatives fail—and how to turn yours into a success.

Key insights:
- AI readiness maturity: Assess knowledge, operations, tech, and governance with a 74-question model.
- Bridging knowledge and AI: Use semantic frameworks (isness/aboutness), taxonomies, and componentization for unified ecosystems.
- Operational integration: Map processes to find "information leverage points" without disruption—focus on differentiation for competitive advantage.
- Success stories: 20-30% efficiency gains in field service, 50% faster RFP responses, and streamlined code reviews.
- Pitfalls: IP leakage, $5M+ failed projects due to poor content curation, and pilot-to-production gaps.
- Advice: Build ontologies/knowledge graphs, prioritize use cases, capture baselines, and avoid vendor lock-in.

Ideal for AI leaders, IT pros, and execs tackling enterprise AI adoption, legacy tech, RAG, ontologies, and ROI. Learn to avoid disasters and scale from pilots to production!

- Website: https://saugatalks.com
- YouTube: https://www.youtube.com/channel/UCCcHTwZGRuMaWTZTechKAtw
- X: https://twitter.com/lyakovet
- LinkedIn: https://www.linkedin.com/company/saugatuckworldwide/
- Patreon: https://www.patreon.com/SaugaTalks
- Connect with Seth: https://earley.com | LinkedIn: Seth Earley | Podcast: Earley AI

Subscribe to SaugaTalks for more in-depth discussions on enterprise AI, automation, and innovation. Like, comment, and share your biggest AI challenge below—we'd love to hear from you! 

Get Seth's book: https://www.earley.com/ai-powered-enterprise-book-seth-earley
Visit Earley Information Science: https://www.earley.com/


🚀 Welcome to SaugaTalks — Where AI Meets the Future of Business! I’m Irene, Founder & Host of SaugaTalks, bringing you unfiltered conversations with the brightest minds in AI, technology, and digital transformation. In every episode, we break down the trends shaping industries, explore game-changing innovations, and share practical insights you can use today.

🔹 Let’s Connect
LinkedIn → https://www.linkedin.com/in/irenelyakovetsky/
X (Twitter) → https://twitter.com/lyakovet
YouTube → https://www.youtube.com/@SaugaTalks

📌 About SaugaTalks: SaugaTalks is your go-to channel for fresh perspectives on AI, automation, cybersecurity, and digital transformation. Whether you’re a business leader, innovator, or tech enthusiast, you’ll find actionable takeaways in every conversation.

✅ Enjoyed this conversation?

Like 👍 the video

Subscribe 🔔 to SaugaTalks.

Share 📢 with your network.

#SaugaTalks #AI #ArtificialIntelligence #DigitalTransformation #FutureOfWork #Technology #Leadership #BusinessInnovation #AIReadiness #EnterpriseAI #KnowledgeManagement #AIGovernance #TechInnovation


Irene: Hello, hello. This is Irene with SaugaTalks. Thanks, everyone, for joining. I'm Irene Lyakovetsky. I speak with fascinating people in tech, and we cover AI, AI adoption, AI strategies, enterprise AI and everything that comes with it: the tech stack, legacy applications. Sometimes we talk about how they work with these new, shiny, wonderful tools we're all excited about. Today is my day, because Seth Earley is with me. Seth, how are you?

Seth: I'm great. Nice to be here. Thanks for having me.

Irene: Lovely, lovely. Seth, I so appreciate our time today. Okay, we're going to discuss AI readiness. It may not be the shiniest topic, I can tell you, right? People would rather demo what AI is capable of and how wonderfully enhanced we all are using these tools. But hey, you know what? When you're in the enterprise world, you have to understand what you're doing. Could you please talk to us about what AI readiness means for today's enterprise?

Seth: Sure. So what a lot of organizations are doing is experimenting, right? They're trying a lot of different things. Their developers are off doing stuff, many times with unsanctioned tools, so there's a lot of shadow AI in the organization. What needs to happen is you need to have control over those experiments and those technologies, and even when you have sanctioned tools, tools that are approved, you need to manage those efforts in a consistent way: with a decision-making framework, with operational controls, with the right technical infrastructure. And you need to think about knowledge and data readiness. So when we look at readiness for an organization, we have a 74-question AI readiness maturity model, and it cuts across those areas: knowledge readiness, operational readiness, technical readiness and governance readiness. All of those are important, because if you start deploying AI while you're missing those elements, or you're weak in those areas, you're going to have problems. So we need to look at AI very holistically: understand what processes we're trying to enable, understand what needs to change in terms of operations. We need to look at content and data operations very differently than we did before, especially with unstructured content and knowledge. So when we think about readiness, it's really looking across multiple facets of the organization. How do you make decisions? How do you approve tools? How do you bring new tools in? How do you manage the costs? What are the decision-making frameworks that let you control AI deployment, not leave the organization open to unnecessary risks, and really have a clear understanding of what the payback and the ROI will be? Many organizations try to deploy the technology without these elements, especially on the knowledge side, and many of the generative AI initiatives are failing.
There are a lot of statistics out there. An MIT study said 95% are failing; Forrester said 40%. Pick your statistic, but the reason why is that there are no content operations and knowledge operations in place to ensure you have the ground truth for your generative AI. Generative AI works most effectively on your data, right? Summarization is fine, having it do certain tasks, but you get the greatest value when you're using it as an enabler for your own knowledge and your own processes. Not necessarily using the AI to generate a document or content from scratch, but using it in a way that derives new IP from your existing IP. A lot of that requires that we have a sense of what knowledge is useful for the organization, who owns that knowledge, and how we're managing and curating it. So those are some of the elements we look at when we look at AI readiness.

[music]

Irene: You mentioned knowledge quite a few times, I noticed, right? So can we connect knowledge and AI in a meaningful way? Meaning, how can enterprises effectively bridge their existing knowledge, which is of course the databases, the documents, and employee expertise, with AI systems to create that unified knowledge ecosystem?

Seth: Yes, and one of the elements that's very important is a semantic framework. A semantic framework is about the terminology and the meaning of information; it's understanding what we like to call the isness and the aboutness. What is this piece of information, and what can you tell me about it? Say I have statements of work or contracts. A contract could be an employment contract, it could be a purchase and sale contract, it could be a work order, all sorts of different contracts. The isness is: it's a contract. If I hand you the document, that's what it is. But then how do I tell a thousand of those apart? That's what we call the aboutness. So it would be contract type, and the contract type might be work order or statement of work or purchase and sale or employment. The idea is you have to do that for all of the artifacts in your enterprise. You have to understand the nature of that information. That's the nature of knowledge. That's how you distinguish different pieces of information and how you get the exact information you need for a particular task. We did a project for a company called Applied Materials a number of years back. They build these semiconductor fabrication plants, and the CFO said, why do we need a taxonomy? A taxonomy is one of the ways of organizing information that gives you the isness and aboutness. Why do we need that? Why don't we just get Google? And I said, do you have a chart of accounts for your finance organization? He said, of course I do. I said, why don't you get rid of your chart of accounts and just get Google? Because a taxonomy is a chart of accounts for knowledge. So we need to organize that information in a way that identifies what's important, the nature of that information, and how we tell it apart from other information.
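[As an aside: the isness/aboutness distinction Seth describes can be sketched as a tiny data model. This is a minimal illustration only, not Earley Information Science's actual framework; the class name, fields, and sample records are hypothetical.]

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeAsset:
    """A content artifact tagged with its isness (what it IS) and aboutness (what it's about)."""
    title: str
    isness: str                                    # e.g. "contract" — the artifact's type
    aboutness: dict = field(default_factory=dict)  # e.g. {"contract_type": "statement_of_work"}
    status: str = "current"                        # content life cycle state

def retrieve(assets, isness, **aboutness_filters):
    """Return only current assets of the given type whose aboutness tags match every filter."""
    return [
        a for a in assets
        if a.isness == isness
        and a.status == "current"
        and all(a.aboutness.get(k) == v for k, v in aboutness_filters.items())
    ]

corpus = [
    KnowledgeAsset("2024 SOW - Acme", "contract", {"contract_type": "statement_of_work"}),
    KnowledgeAsset("2019 SOW - Acme", "contract", {"contract_type": "statement_of_work"},
                   status="superseded"),
    KnowledgeAsset("Offer letter", "contract", {"contract_type": "employment"}),
]

# All three share the same isness ("contract"); aboutness plus life cycle state
# is what tells a thousand of them apart and keeps out-of-date copies out.
hits = retrieve(corpus, "contract", contract_type="statement_of_work")
```

Here `hits` contains only the current 2024 statement of work; the superseded 2019 copy is filtered out by its life cycle state, which is the "chart of accounts for knowledge" idea in miniature.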
So if you have something like field service, with technicians going out and servicing some kind of equipment, and they have a particular error code they want to look up, the system needs to understand the context: what piece of equipment are you working on? What is the model? What's the configuration? Then I can give you the meaning of that error code. So that content needs to be tagged with all of that information. We need to classify that content in a way that lets us precisely pull back exactly what we want, because the AI and the large language models don't necessarily have that context. They don't necessarily know the difference between certain pieces of content. We also have to make sure we have the latest and greatest information: the current version, the approved version, whatever it might be. So if you have a lot of content, a lot of documents, but you haven't curated them, managed them, tagged and organized them, and you haven't applied a content life cycle to them, you're going to have out-of-date content. You're going to have documents that are not relevant, and all of that is noise, right? If you think of trying to find a needle in a haystack, that's a bigger haystack. You want to make the haystack smaller. So get rid of the old content, get rid of that out-of-date knowledge, and just curate the things you specifically need for that individual and for that process. Now of course AI can help you with all of that. We use AI accelerators in developing that knowledge architecture and in curating and cleaning that content up. The other thing you have to do is componentize it, right?
So if you have a service manual that's 300 pages long, a field service rep doesn't want to go through that 300-page manual, and when you search it online, you still have to go through that document. You want an answer. You don't want a document. We don't want 100 documents, we don't want a single document, we don't want a 300-page document. We want an answer. Those answers are in components. What we do is break that content up so we can build question-answering systems, so we have a very specific piece of that document that answers the question for the technician. That is called componentization. Every large language model pipeline componentizes content to ingest it into a vector space. You have to break it up. Now, it can break it up in different ways: by fixed length, or on paragraphs. But if you break it up into semantically meaningful chunks, in other words, you know that this chunk of content is a troubleshooting step for that error code, for that particular piece of equipment and that particular model number, then you have the ability to get exactly that piece of content back. If you're not componentizing the information, and you're not tagging it with semantically meaningful terminology, you're not going to be able to pull that information back. So that's the critical element that generative AI requires. It's retrieval augmented generation, right? We're using the large language model to interpret our question, then going against a vector database, pulling that information back and making it more conversational. But we're pulling the information from our content source, from our knowledge source, from our data. You're not asking the LLM necessarily for the answer, although it's pretty good at giving you answers. We're asking it to help access our information and make that more readily available to people.
So that's one of the critical things we can do with AI: process that information, structure it, curate it, componentize it, tag it, and then have it readily available for retrieval in the exact context in which someone needs it.
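[As an aside: the componentization idea above can be sketched in a few lines. This is a hypothetical illustration, not a real RAG stack: a production system would embed each chunk into a vector index and have an LLM phrase the final answer, whereas here plain metadata matching stands in for both of those steps. The error codes and model numbers are invented.]

```python
# Instead of indexing a 300-page manual as one blob, each troubleshooting
# step is stored as a tagged component, so retrieval can honor the
# technician's exact context (equipment model + error code).
components = [
    {"text": "E42 on model X200: reseat the sensor cable, then power-cycle.",
     "tags": {"equipment": "X200", "error_code": "E42"}},
    {"text": "E42 on model X300: replace the pressure sensor assembly.",
     "tags": {"equipment": "X300", "error_code": "E42"}},
    {"text": "Routine maintenance schedule for all models.",
     "tags": {"topic": "maintenance"}},
]

def answer(error_code: str, equipment: str) -> str:
    """Pull back the one component matching the technician's context."""
    matches = [c["text"] for c in components
               if c["tags"].get("error_code") == error_code
               and c["tags"].get("equipment") == equipment]
    return matches[0] if matches else "No matching component; escalate."

print(answer("E42", "X200"))  # returns the X200-specific step, not the X300 one
```

The same error code means different things on different models; only because the chunks are semantically meaningful and tagged can the system return an answer rather than a document.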

[music]

Irene: Fantastic. Seth, you're already touching on operational integration, the nuts and bolts of everyday operations in the enterprise, right? So, continuing on that, what key steps would you recommend for integrating AI into day-to-day operations without disrupting workflows?

Seth: Yeah, that's a great question, and it really is about understanding and mapping processes. Because AI, even though it's getting really good, is not going to replace a person. It's not going to replace an entire process, although it's getting better at that. Usually you map a particular process and you identify the points in that process where you need better information, faster information, new answers, whatever it might be. Those are what I call information leverage points. They're the places where, if you have an intervention, it's going to have a downstream impact that speeds up lots of other processes. It's a critical information point, and that's where you intervene. That's where you can ask, how will AI help me get better or faster or new information at that point? So it's understanding the processes. It's mapping, say, a customer life cycle or a software development life cycle.
We're doing work right now helping organizations improve their ability to write code using AI coding assistants, but there's a lot to that; it's a really important one to integrate into operational processes. So we have to understand what that life cycle is, and where we can use the tools most appropriately. We're not just going to ask it to do everything; we're going to ask it to do very specific things. And I like to say that you can't automate what you don't understand. You need to understand the process, and the points in that process where you have a bottleneck, where you have a problem with information, incorrect information, slow information, or a manual process. Those are the places where we're going to intervene, and that is where we're going to get great value from AI. Right now, the routine things people are doing, like summarization or writing emails or preparing marketing campaigns, that's fine, but that's not going to give you competitive advantage. Standardization, doing what everybody else does, gets you efficiency, but it does not give you competitive advantage. Competitive advantage comes from differentiation. And differentiation comes from knowledge: your knowledge of customers, of the problems, of the solutions, of the marketplace, of the competitive landscape, of all of those factors. That's how you differentiate in the market. That's how you compete. The competitive advantage is based on your knowledge. So when you can leverage your knowledge more effectively using AI, that is what gives you that differentiation and competitive advantage. And that is really about understanding those operational processes and looking at where you can have an intervention that's going to speed up the information metabolism of the organization. It's giving people the answers, and one person's output is another person's input.
But what we want to do is speed up all of those processes throughout the entire enterprise by building out a knowledge architecture for your AI. Sometimes it's called an ontology; sometimes it's called a knowledge graph. What it is, is the knowledge scaffolding for all of the content and data and information in the enterprise. It's a structure on which you hang your knowledge, and then you're able to access it in different ways. So it's very, very important to start with that semantic architecture; that is a foundational element, and that's something we're doing for a very large, globally known consultancy. This is a consultancy that goes out and helps many Fortune 500 and Fortune 1000 companies deploy AI. What we're helping them do is build this semantic infrastructure, this semantic layer, so they can make better use of their content and their knowledge. So this is something that is frequently overlooked. People don't look at the data and the content, and it's not fun, right? It's not sexy. It's hard work. It's drudgery, right? Peo...

[truncated section: The transcript cuts off here in the provided document, but continues with congratulations on successes and discussion of disasters.]

Irene: ...of all, congratulations on all these successes. I know how much hard work goes into that. Just to report, you know, a 20% efficiency gain, this is amazing, hard to believe, this is fabulous, absolutely. Could you talk to us about some disasters? All right, we all make mistakes, even people with the best intentions, right? Sometimes we're not applying the right tools; sometimes we're doing lots of different things. We're human.

Seth: What about disasters, right? Yeah, sure. So, you know, I think some of the things we've heard about are leakage of IP: taking knowledge that's critical, that's confidential, that's proprietary, and ingesting it into a public LLM. That's leakage of data, leakage of confidential information. You don't want that stuff out there. That's something people have heard of, and that's why we need to think about having a contained environment. And of course there are safeguards you can switch on so the large language models don't train their model on your content. But we know there have been software companies that have said one thing and done another in the past. In terms of AI efforts, what we have seen is projects that have failed, with lots of investment, for lack of these fundamental pieces. We've seen them in our customer environments: lots of investment, like several million dollars, went into an AI initiative, and at the end of the day they ended up with a crappy search engine. This was a company in life sciences. I think they spent $5 million, they tried to deploy AI, and they did not have that focus on content curation and the content life cycle. So they ended up with something nobody could use. And what that does is give the organization and its leadership a very significant black eye. People don't want to fail. It's a career-limiting mistake to spend $5 million on something and end up with something nobody can use. So that's one of the bigger ones I have seen.
But, you know, I was talking to a colleague at a bank, and he was saying he knows many, many organizations that have done similar things, that have spent 5 or 10 million on failed initiatives. Sorry, I have a cat in the room [laughter], trying to kick her out. But he said that every organization of his size has spent 5, 10, $15 million on failed AI initiatives. So it's a shame, because what ends up happening is you have vendors that are learning on your dime. They don't know how to do this, but they're going to figure it out. They can sell the project, but then they can't deliver it. Or they can deliver it, but it's going to cost a lot more time and money and frustration and hassle. So, one of the things our customers have said about us is that we understand the fundamentals, and we're not just trying to sell a tool. We take the time to understand that organization and those processes, and then we understand what's necessary to be successful: to build those governance processes, to build those foundational architectures, to build the proof-of-value projects. Here's the other thing that happens. You build a project, and it's called a proof of concept, and it uses very lovingly hand-curated, handcrafted, artisan data, because you're spending all this time on cleansing it and curating it and tagging it and organizing it. Well, guess what? When you go to deployment, you don't have that luxury. Your operational data is going to be completely different, and your infrastructure is going to be completely different. Going from pilot to production is a huge challenge, and for most organizations, if they're running into problems, it's going from POC to production. So what we try to do is build what we call a proof of value, a POV. What that is, is it's built with production content, production data.
It's built in a way that uses your real environment and is tested against real-world conditions with real-world data, not that handcrafted artisan data. So that helps build these things with an eye toward production, deployment and operationalization. That's another area where organizations fail, because they're not considering: what are my production data requirements, and how am I going to get my data into the shape I need so it can be as successful as the proof-of-concept project?

Irene: You've started answering how to avoid some of the common, high-impact mistakes, right? So this is one of them, absolutely. I'm no stranger to those; I know exactly how it goes: you test your application perfectly in the testing environment, and then in production something goes wrong. We've all been there. The question is, any other advice for leaders who are on that journey? We're all learning, by the way; we're all evolving.

Seth: Yeah. So, you know, whenever there's an inflection point in technology, there are so many unknowns, and the vendors don't know either. There are a lot of researchers out there finding answers, but the problem is that the answers and the solutions are not widely deployed; they're not well known. So I think the biggest challenge is getting a partner that is experienced with the things you're trying to do, that has a track record of success doing them, and being able to begin with a view of not the technology but the business outcome. That's technology 101, right? Don't focus on the technology, the shiny bit, like "squirrel!", right? That's a distraction. Focus on what you need in terms of an end state. Focus on what that business outcome will be and how you're going to measure it, and then back into that by looking at those processes. And I get that some organizations just feel like they need to experiment. They want to test-drive the tools. They want to see what they can do. Developers are like that. So you can have a certain amount of your budget allocated to experimentation and failures. In other words, learn, but learn in a controlled way, with a controlled budget, not trying to support a mission-critical process. Choose your learning opportunities carefully and prioritize the things that are within your reach. You could say, well, I want something enterprise-wide so everybody sees what a great job we're doing or how powerful this is. Well, maybe you want something small and controlled instead. So your prioritization around the complexity of the project, the number of processes it touches, whether it's internal or externally facing, whether it has hard ROI or soft ROI, those are all prioritization points, right?
But they're going to vary depending upon the organization. Maybe you do want something complex that cuts across lots of processes, because you've already learned on some smaller ones. Or maybe you want to keep it low-key and do it at the department level, so that if it does fail, it's not a big black eye for the organization. So really consider what those learning opportunities are, because you have to build maturity. This stuff has to be a core competence. There's one company I have been talking to that is working with a vendor with a great track record. Their technology is solid. They do understand these principles of knowledge engineering and semantic architecture. But guess what? They will not let the customer see that architecture. They don't expose the knowledge graph or the ontology or the taxonomies to the customer. Well, guess what? That needs to be a core competence, because you compete on that stuff. That is going to provide your competitive differentiation. You cannot outsource your core competence. If you're locked into a vendor and they are not giving you visibility into something, you have to go to them whenever you need a change, because these things are living, breathing things. A taxonomy is never done; an ontology is never done. There are always new products, processes, topics, whatever it might be, especially in popular culture and in technology, which are changing all the time. So you need to be able to keep that structure up to date, and if you don't have control over, and visibility into, that structure, you're not going to be successful in the long run.
You're going to be locked into a vendor that's going to increase your prices every year, and in fact this vendor tries to control the entire ecosystem. So it's very significant vendor lock-in, and even though they're performing well and solving the problem, it's very risky for this organization to proceed down this path, because they cannot own that infrastructure, that knowledge architecture, which has to cut across everything. It's not just about knowledge; it's about data operations, customer service, manufacturing, all the other processes you need to be concerned with. So by locking themselves into this one vendor without that control, they're doing themselves a disservice. They're going to be limiting their options in the future. And that's why we're talking to them about how to mitigate that. Again, that's so critical to evolving your capabilities, because this stuff changes so fast that you want to be able to switch best-of-breed components in and out. If you don't have the flexibility to do that, you're going to be stuck, and over the long run you're not going to be as competitive as you need to be.

[music]

Irene: So, thank you so very much. This is fantastic; there are so many things we can derive from this conversation. This is a good place to tell our audience the best way to contact you, Earley Information Science and your team. Where are you? What's the best way to find you?

Seth: Sure. So, the website is earley.com. Now, make sure you put an E before the Y. I should have gotten www.early.com; I didn't. I have www.earley.com. And it's seth@earley.com, just my first name at lastname.com. I'm on LinkedIn, Seth Earley; you can connect with me there. You can go to the website. We have a podcast as well, called Earley AI, and we have the book, webinars, lots of content, white papers, blog posts and articles, just a ton of IP we've built over the last few decades, really. But that's where to find me. So earley.com, seth@earley.com, and again, don't forget the E before the Y.

Irene: Perfect, perfect. Seth, thank you so very much. Can we finish up with a few takeaways? We mentioned so many important moving parts in implementing AI, or any technology innovation for that matter. So could you please finish with a few takeaways we can remember this conversation by?

Seth: Sure, absolutely. I think the most critical is building that information architecture, that knowledge architecture, that enterprise architecture: building the ontology, building that knowledge scaffolding, because that's going to serve you in every part of the organization. It's going to speed up the information metabolism of the enterprise, because everybody will get better information more quickly, be able to produce their deliverables more quickly, get products to market, and so on. So that's number one: build that knowledge and information architecture. Number two is build a library of use cases. What does good look like? How do you know you're being successful? How do you know your pilot, proof of value, or production deployment is successful? We have to understand what, at the end of the day, this tool will do for my user. So build those use cases, and make them very specific, unambiguous and measurable. That's the critical thing. The third is to capture baselines: what are your current baseline processes, based on those use cases, so you have something to measure against? The other piece of this, and you could say this might be number one, is having the right governance structure. How do you make decisions? How do you select tools? How do you measure success? How do you control the experiments? Now, you have to have a certain amount of maturity even to do that, so you don't want to start with big, complex governance committees. Start small and manageable: identify the tools you're using, take inventory of them, understand what processes you're enabling, and so on. But that governance process is so critically important, because it helps direct the organization's AI in the long run. We need representation from different stakeholders and business groups.
And the last thing I would say is, recognize that you're going to make some mistakes. Learn, evolve, build maturity in these areas, because again, this is going to be part of your core competence: maybe not building the algorithms, but applying them and integrating them in your environment. So look at this holistically. Look at the operational processes, the knowledge processes, the technical capabilities, and understand how you're going to measure and deploy these things. That's where I would begin: understanding what your business problem is, building those use cases, and taking those baselines.

Irene: Thank you. Thank you Seth so very much for your time.

Seth: Thank you. You're welcome. It was very nice. Enjoyed the conversation.
