The first wave of enterprise AI adoption mostly failed, and the tools weren't the real problem.
Organisations that were early adopters of AI, the ones who jumped on chatbots and copilots and off-the-shelf automation, are quietly doing something different. They're building their own applications.
Not from scratch. Nobody's training foundation models in-house unless they're a tech giant with money to burn. But they are combining AI models and workflow automation into purpose-built applications that solve very specific problems. Some are doing this in Power Apps, some through orchestration platforms like UiPath Maestro or Action Fabric that coordinate AI, automation, and people across end-to-end workflows. The approach varies but the driver is the same: the generic tools weren't getting it done.
I think this is where the real value of AI is finally starting to land.
Why Off-the-Shelf AI Tools Fail for Enterprise Processes
I'm not here to bash off-the-shelf products. Microsoft Copilot, general-purpose document AI, plug-and-play automation platforms, they've all done heavy lifting for awareness. They showed people what was possible. But for organisations dealing with complex, domain-specific processes, those tools hit a ceiling fast.
Take document processing in a large enterprise. Sounds simple on paper. Documents come in, data gets extracted, someone makes a decision. But in practice you've got dozens of document types, inconsistent formats, legacy systems that don't talk to each other, compliance rules that shift by jurisdiction, and people in the loop who need proper context to make good calls. No generic tool handles all of that well. You get maybe 60% of the way there, and then your team ends up spending more time working around the tool than they did with the old manual process.
One operations manager I worked with put it plainly: "We spent six months implementing a platform that was supposed to save us time. We ended up building a second job just to manage it."
That gap between what generic tools promise and what complex processes actually need is what's driving the shift toward custom AI applications.
The Three-Layer AI Stack: What Organisations Are Actually Building
What organisations are building now isn't just an AI model with a front end bolted on. It's a real combination of AI, machine learning, and process automation working together. They're building what I'd call a Three-Layer AI Stack: an application layer, an AI layer, and an automation and orchestration layer.
The Application Layer: Handles user experience, data management, and all the plumbing that makes something production-ready rather than a proof of concept on someone's laptop. This is where a lot of AI projects have fallen over historically. Model works great in a demo. Nobody built the infrastructure around it.
The AI Layer: This is where it gets interesting. The smartest teams aren't just running one type of model. They're combining machine learning with generative AI and getting each to do what it's best at. An ML model might handle document classification or entity extraction, things it can do fast and cheaply at scale. Then a generative AI model reviews those outputs, cross-checks them against business rules, and flags anything that doesn't look right. You end up using Gen AI as a verification layer on top of your ML pipeline. Instead of trusting a single model's output and hoping for the best, you've got two fundamentally different approaches checking each other's work. I'm seeing this more and more, and it changes the reliability conversation completely.
The Automation and Orchestration Layer: The bit that ties it all together. This is what moves work through systems, triggers actions off AI outputs, handles exceptions, and keeps humans in the loop where they need to be. A good orchestration layer is the difference between an AI that can classify a document and a system that classifies it, routes it to the right team, kicks off an approval workflow, updates downstream systems, and surfaces anything unusual for someone to look at.
Teams are reaching this stack through different routes. Some build in low-code platforms like Power Apps; others use orchestration frameworks like UiPath Maestro or Action Fabric to coordinate AI, automation, and human decision points end to end. The tooling matters less than the thinking behind it.
Why Custom AI Builds Are Viable Now
A few things have come together to make custom AI applications practical in ways they just weren't a couple of years ago.
The cost of inference has dropped hard. Running classification and extraction models at scale used to be painfully expensive. Now it's a rounding error on most budgets. That alone changes the economics of building something custom.
The tooling has caught up too. Orchestration frameworks are production-ready. UiPath has added agentic capabilities with Agent Builder. Platforms like Action Fabric let you wire together AI models, automation, and human decision points without stitching everything together from scratch. Low-code tools have moved well past simple form builders into territory where you can build real AI-integrated workflows. The gap between working prototype and production deployment has shrunk a lot.
But the biggest factor is that organisations have learned from their first wave of AI adoption. Everyone saw what happened when teams deployed technology without rethinking the underlying process. It didn't work. The teams doing this well now start with the process. They map out where the real friction is, then apply the right mix of AI, automation, and application design to deal with it. That process-first thinking is what separates the projects that deliver actual ROI from the ones that end up as expensive demos nobody uses.
Combining Machine Learning and Generative AI for Higher Confidence
I want to spend a bit more time on this because I think it's one of the most underappreciated patterns in enterprise AI right now.
Most organisations started with either machine learning or generative AI, not both. ML teams built classifiers and extraction models. Separately, other teams were experimenting with large language models for summarisation or content generation. The interesting bit is what happens when you layer them together.
Here's a practical example. Say you've got an ML model classifying incoming documents into 30 categories. It's fast, it's cheap, and it's right about 92% of the time. Sounds good until you run the numbers at volume. Processing 10,000 documents a day at 92% accuracy means 800 misclassifications. Every day. That adds up quickly.
Now bring in a generative AI model as a second pass. It doesn't need to look at every document. It reviews the ones where the ML model's confidence score is below a certain threshold, or spot-checks a percentage across the board. The Gen AI model can read the document in context, compare it against known patterns, and either confirm the classification or send it to a human. Your effective accuracy goes from 92% to something much closer to 99%, without adding headcount. In some cases, you can push even higher by having the Gen AI model explain its reasoning, which gives your team an audit trail they never had before.
ML for speed and scale, Gen AI for reasoning and verification. That's the direction I see the most mature organisations heading, and it makes a lot more sense than trying to force one approach to do everything.
What This Means for How People Actually Work
The organisations getting this right aren't replacing staff. They're removing the worst parts of people's jobs. The manual sorting, the copy-pasting between systems, the mindless triage work, that's what gets automated. Experienced people end up spending their time on work that requires actual judgment, which is usually why they were hired in the first place.
A claims team that used to spend half its day triaging incoming documents now has a system that does it for them. An operations team gets intelligently prioritised work queues instead of wading through a shared inbox every morning. People are still making the important calls. They just get to them faster and with better context.
There's a commercial angle too. When you automate the mechanical parts of a process, adding volume doesn't automatically mean adding headcount. For any business watching operational costs closely, that's a real shift in how growth scales.
How to Build a Custom AI Workflow Without Boiling the Ocean
If this resonates, my advice is pretty simple. Pick one process. Make it something that's painful, well-understood, and runs at decent volume. Map it end to end. Work out where AI can actually help versus where basic automation or even just a better interface would solve the problem. Then build something that addresses that specific workflow, in whatever platform or toolset makes sense for your team.
I've watched plenty of organisations stall because they tried to build a platform for everything on day one. The ones that get traction pick a single problem, solve it properly, prove it out, and expand from there. Every team we've seen successfully scale a custom AI application started with something small enough to finish.
None of this is about chasing trends. Your processes are yours. Your data is yours. The thing that actually moves the needle for you is probably going to look different from what works for someone else, and that specificity is the point. The barriers to building something that fits have never been lower.
...
Jason Rodgers heads enablement at Blackbook AI, working with enterprise teams to design and deliver AI-driven solutions for complex business processes. If you're mapping out a workflow that's hit the ceiling with off-the-shelf tools, get in touch.