
Why enterprises keep getting AI wrong – and what it actually takes to get it right 

In the upper floors of corporate America, budgets are larger than ever, board presentations are more confident, and press releases announce transformations. And yet, quietly, behind closed doors, the projects are being shelved. 

AI was supposed to be the technology that changed everything – and for a growing number of enterprises, it is changing something, just not in the way they planned. The implementations stall, pilots never reach production, and the promised returns fail to materialize.

And rather than asking hard questions about why, many organizations simply move on, allocate fresh budgets, and try again with a different vendor and the same underlying assumptions. 

The problem is not the models, the infrastructure, or the data; it is the assumptions. The deeper failure is strategic: a persistent tendency to treat AI as something to be acquired rather than something to be done. Enterprises have become extraordinarily sophisticated at procuring AI technology, but remain far less practiced at defining what they actually need to accomplish. 

This is not a new failure mode. It rhymes with every major technology wave of the past three decades – the ERP rollouts of the 1990s, the dot-com era’s faith that connectivity alone would generate value, the big data boom of the 2010s when firms built vast data lakes and then struggled to extract anything coherent from them. Each cycle produced enormous investment, genuine excitement, and a reckoning when the technology refused to deliver outcomes that hadn’t been properly defined in the first place.  

According to S&P Global Market Intelligence’s 2025 survey across North America and Europe, 42% of companies abandoned most of their AI initiatives last year – nearly triple the rate of 2024. Beyond this, the average organization scrapped 46% of its AI proof-of-concepts before they reached production. 

These are not fringe cases, or underfunded experiments for that matter. They are mainstream corporate AI programmes failing at scale. 

The context problem no one is talking about 

Satyen Sangani, CEO and co-founder of data intelligence company Alation, has spent over a decade watching enterprises struggle with a deceptively simple problem: they have data, but they don’t have context. 


When Sangani founded Alation in 2012, the world was in the grip of the big data boom. Hadoop, Cloudera, and massive data lakes were everywhere, but the information they contained was essentially opaque. A database table could be about customers, vendors, or something else entirely, and there was no way to tell just by looking at it. 

The company’s answer was the data catalogue: a kind of internal Wikipedia for enterprise data that allowed people to search, understand, and trust the information they were working with. 

Fast forward to today, and Sangani sees an almost identical problem playing out at a much larger scale. “If AI needs to go do something, it needs to have context about what it is that it’s trying to do, and it needs to be able to use the right data,” he told The Sociable. “You need context in order to make it work correctly.” 

This analysis cuts to the heart of why so many enterprise AI projects collapse. Informatica’s CDO Insights 2025 survey found that data quality and readiness was the top obstacle to AI success, cited by 43% of respondents. Yet most enterprises are still leading with technology rather than use cases – buying platforms, counting tokens, and declaring AI readiness without ever defining what outcome they’re actually trying to achieve. 

Buying the car won’t fix the problem 

Sangani described encountering a customer who had issued a 100-line RFP for AI technology, specifying capabilities in extraordinary detail – but hadn’t identified a single use case the technology would be used for. 

“The technology is not going to solve your problem. It’s almost like – you go buy a brand new luxury car. That might be a nice thing to do, but it’s not going to fix a problem you have,” he stressed. 

The fix requires a fundamental inversion of how enterprises approach the work: instead of trying to get all data foundations right before deploying anything – a kind of perfectionist’s paralysis – organizations need to identify a narrow, well-defined problem and work backwards. A sales agent doesn’t need HR data or R&D infrastructure; it needs customer context, lead data, and product information.

Go deep on that slice, document rigorously, and build from there. 

This is not an abstract principle. Alation worked with Daimler Trucks to deploy agents that do the work of materials planners – identifying when manufacturing parts are likely to run out across hundreds of suppliers and multiple geographies. The agents don’t need access to everything; they need deep, accurate context about a specific slice of operations, and the result is a multiplication of human capability that no hiring plan could match. 

Trust is the real metric 

This use-case-first thinking also changes the trust calculus around AI agents, arguably the defining challenge of enterprise AI right now. When agents operate without a clear boundary, things go wrong in ways that are hard to predict and even harder to contain. 

“If you’re kind of like a self-driving car, you’re going to trust it to the extent that you believe that it’s going to get most of the exception conditions right,” said Sangani. The problem isn’t failure in isolation, but rather the blast radius of failure at scale. 

That observation takes on added urgency when it comes to security and governance. Instead of arguing that governance slows AI down, Sangani contends that AI makes governance more urgent and more consequential. 

“If you have a vulnerability, AI is going to make that vulnerability exposed five times faster. If you have a problem with compliance, that compliance problem is going to be highlighted tenfold faster,” the executive stressed. 

Gartner’s own research underscores the stakes: the analyst firm predicts that by 2029, “death by AI” legal claims will have doubled from the previous decade, specifically because decision-automation deployments lacked sufficient risk guardrails. Separately, the firm projects that fragmented AI regulation will cover 75% of the world’s economies by 2030, driving more than $1 billion in compliance spend. 

Governance, in other words, is not a brake on AI ambition – it is the infrastructure that makes ambition sustainable. 

The productivity paradox 

The productivity question is equally nuanced. U.S. Bureau of Labor Statistics data shows nonfarm business sector labor productivity rose nearly 5% in Q3 2025 – among the strongest quarters in recent years. But Sangani is candid that the picture at the individual level is more complicated. 

He describes a burst of AI-enabled productivity – two product briefs and a full off-site plan in an afternoon – alongside a cautionary tale about an AI tool that accidentally deleted years of family financial records. The lesson wasn’t that AI is unreliable, but that AI productivity requires active oversight. 

Research shows that 57% of employees admit to not checking AI-generated output for accuracy – a behavior Sangani deemed “AI sloth”: the production of more content that is simultaneously less trustworthy. The result, he predicted, is a bifurcation between workers who use AI carefully, checking outputs, refining results, and genuinely accelerating, and those who produce more noise at the same or lower quality. 

“It’s almost the equivalent of getting a writer to produce 20 more articles, but if it’s generated by AI, the credibility of those articles is not going to be very high,” he explained. 

The learning advantage 

The deeper argument running through Sangani’s thinking is about learning, not tooling. He described Alation’s mission not as building a data catalog or an AI platform, but as helping companies learn faster. 

In an environment where models, competitors, and market conditions shift constantly, AI’s productivity gains depend heavily on context, varying significantly with user skill and task complexity. That makes organizational learning capacity a potentially more durable competitive advantage than any specific AI deployment. 

The analogy of the dot-com bubble is apt: the late 1990s were defined not by a shortage of internet investment, but by a shortage of disciplined thinking about what problems the internet was actually solving. The companies that survived weren’t those with the most bandwidth – they were the ones who figured out use cases first. 

For enterprises working out where to start, Sangani’s advice is direct: “You unequivocally have to act. But you also have to solve a problem.” Buying the technology and hoping outcomes emerge is not a strategy, and neither is building the perfect data foundation before deploying a single agent. 

The work is in the narrowing, and then doing it well enough to earn the trust to do more.

Featured image: Kamran Abdullayev via Unsplash+

Salome Beyer Velez
