MIT NANDA's 2025 State of AI in Business report has been widely covered for a stark finding: 95% of AI pilots fail. To some, that sounds like risk. To operators, it should sound like a signal to accelerate.
Pilots fail not because the technology is weak, but because the approach is wrong. Companies that attempt to bolt large language models onto old workflows, or rely on generic off-the-shelf tools, discover their limits quickly. The organizations moving forward are those rethinking from first principles, embedding AI into the core of their processes, and building on infrastructure designed for experimentation at scale.
Adoption Is Ubiquitous
The data confirms that AI is not a niche experiment. While only 40% of companies reported purchasing official LLM subscriptions, over 90% of employees across industries acknowledged regular use of AI tools in their daily work. In practice, nearly every knowledge worker already uses AI for tasks like drafting content, analyzing data, or automating small steps in workflows.
This matters: adoption is not waiting for central IT. It is spreading from the bottom up, employee by employee, driven by necessity and convenience. The report notes:
“The most effective AI-buying organizations no longer wait for perfect use cases or central approval. Instead, they drive adoption through distributed experimentation.”
This pattern is durable. Small, fast-moving trials win; bureaucratic, top-down initiatives stall. Organizations that embrace distributed experimentation will not just catch up, they will define the standard.
The Failure of Generic Tools
One CIO interviewed for the report put it bluntly: “Maybe one or two are genuinely useful. The rest are wrappers or science projects.” Most generic, one-size-fits-all tools fail because they cannot capture the domain-specific knowledge that makes a business unique.
The standout performers in the report were those embedding AI directly into workflows, scaling from narrow but high-value footholds. The future is not another general-purpose SaaS wrapper; it is thousands of lightweight, process-specific agents that know your systems, context, and data.
This is where Agentuity delivers advantage. Our platform eliminates the need for companies to choose between fragile internal builds and overpriced generic platforms. By removing infrastructure overhead, Agentuity makes internal agents both faster to deploy and dramatically less expensive to maintain, finally unlocking the economics that favor bespoke over off-the-shelf.
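To make "process-specific agent" concrete, here is a minimal sketch of what one might look like. This is illustrative only: the types and the callModel() helper are hypothetical placeholders, not Agentuity's SDK or any particular vendor's API. The point is the shape, a narrow agent that encodes one workflow and the company's own policy, with the model handling only the fuzzy middle.

```typescript
// Illustrative sketch only: the types and callModel() helper are hypothetical
// placeholders, not Agentuity's SDK or any specific vendor API.

interface ExpenseReport {
  id: string;
  employee: string;
  amountUsd: number;
  category: string;
  receiptText: string;
}

interface TriageResult {
  decision: "auto-approve" | "needs-review";
  reason: string;
}

// Placeholder for whatever model call your platform provides.
async function callModel(prompt: string): Promise<string> {
  throw new Error("wire this to your model provider of choice");
}

// A narrow, process-specific agent: it knows one workflow (expense triage),
// the company's own policy thresholds, and nothing else.
async function triageExpense(report: ExpenseReport): Promise<TriageResult> {
  // Deterministic business rules first; the model only handles the ambiguous cases.
  if (report.amountUsd <= 50 && report.category === "meals") {
    return { decision: "auto-approve", reason: "Under policy threshold" };
  }

  const prompt =
    `Company policy: meals under $50 auto-approve; travel requires a receipt.\n` +
    `Expense: ${JSON.stringify(report)}\n` +
    `Answer with "auto-approve" or "needs-review" and one short reason.`;

  const answer = await callModel(prompt);
  const decision = answer.toLowerCase().includes("auto-approve")
    ? "auto-approve"
    : "needs-review";

  return { decision, reason: answer.trim() };
}
```

An agent this small is cheap to build, cheap to replace, and easy to audit, which is exactly why thousands of them can outperform one general-purpose wrapper.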
Speed Determines Outcomes
The report highlights a widening gap in deployment speed. Mid-market companies that succeeded moved from pilot to production in an average of 90 days. Enterprises took nine months or more. The pattern is simple: long pilots collapse, fast cycles succeed.
Delay is failure. The market is changing too quickly for nine-month experiments. Iteration and speed are the difference between capturing value and being left behind.
Agentuity enables this acceleration. Building AI agents on traditional infrastructure adds friction at every step; by removing that build-out entirely, Agentuity lets companies move directly from idea to production-ready agent in weeks, not quarters. For executives, this is the critical shift: in AI, time-to-value is not a metric, it is survival.
Back-Office Automation: Hidden ROI
Budgets often flow to sales and marketing, which account for over 50% of GenAI spend according to the report. Yet the highest ROI frequently comes from automating the back office. The data is clear: 70% of employees prefer AI for drafting emails, and 65% for basic analysis. AI has already "won the war for simple work."
These are not glamorous use cases, but they are transformative. Automating routine processes compounds quickly: each hour saved in accounting, HR, or operations frees resources to focus on higher-value activities. The companies embedding AI into repetitive tasks through sophisticated agent workflows are creating durable competitive advantage, even if the wins look modest at first.
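A quick back-of-envelope calculation shows how quickly this compounds. Every number below is a made-up assumption for illustration, not a figure from the report; substitute your own headcount and rates.

```typescript
// Back-of-envelope illustration with hypothetical inputs; swap in your own numbers.
const employees = 200;          // back-office headcount (assumed)
const hoursSavedPerDay = 0.5;   // time reclaimed per person from routine drafting and analysis (assumed)
const workdaysPerYear = 220;
const loadedHourlyCost = 60;    // USD, fully loaded cost per hour (assumed)

const hoursPerYear = employees * hoursSavedPerDay * workdaysPerYear; // 22,000 hours
const annualValue = hoursPerYear * loadedHourlyCost;                 // $1,320,000

console.log(`${hoursPerYear.toLocaleString()} hours ≈ $${annualValue.toLocaleString()} per year`);
```

Half an hour a day across a modest back office is already a seven-figure line item, before counting the higher-value work those hours get redirected to.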
The Lock-In Trap
Executives voiced a recurring concern about switching costs. As one told the report's authors, "Once we've invested time in training a system to understand our workflows, the switching costs become prohibitive."
This is precisely the wrong future. The AI ecosystem is still too fluid for companies to bet everything on a single vendor or model. What wins today may be obsolete tomorrow. Lock-in kills innovation.
Agentuity prevents this. The platform supports mainstream languages, frameworks, and AI models, and companies remain free to bring their own tools or build custom solutions on top. That openness preserves flexibility without compromise and keeps businesses from being forced into vendor lock-in.
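The underlying design principle is portability, and it can be sketched in a few lines. This is a generic pattern, not Agentuity-specific code: keep agent logic behind a thin interface so the model or vendor underneath can be swapped without retraining the workflow knowledge you have built up. The adapter classes and the summarizeTicket() function below are hypothetical examples.

```typescript
// Generic portability pattern: agent code depends on an interface, not a vendor.

interface ChatModel {
  complete(prompt: string): Promise<string>;
}

// One small adapter per provider; only these classes know vendor details.
class HostedProviderAdapter implements ChatModel {
  async complete(prompt: string): Promise<string> {
    // call the provider's SDK here
    throw new Error("not implemented in this sketch");
  }
}

class LocalModelAdapter implements ChatModel {
  async complete(prompt: string): Promise<string> {
    // call a self-hosted model here
    throw new Error("not implemented in this sketch");
  }
}

// The agent itself never mentions a vendor, so switching providers is a
// one-line change at construction time, not a rewrite of the workflow.
async function summarizeTicket(model: ChatModel, ticket: string): Promise<string> {
  return model.complete(`Summarize this support ticket in two sentences:\n${ticket}`);
}
```

When the workflow knowledge lives in your prompts, rules, and data rather than in a vendor's proprietary layer, the switching cost that worried those executives largely disappears.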
Conclusion
The pattern is clear: generic platforms fade, workflow-embedded agents endure. Long pilots collapse, fast cycles succeed. Centralized approvals delay, distributed experimentation scales. The GenAI divide is not between failure and success. It is between those repeating old approaches and those building for the new reality.