The AI Hype Gap
88% of organisations use AI. Only 6% qualify as "high performers" extracting real value (McKinsey). Gartner predicts 60% of AI initiatives will be abandoned by 2026.
The gap between adoption and value is enormous. The reason, as far as I can tell, is unromantic: most "AI products" are marketing with a model bolted on, and engineering tends not to forgive marketing.
I get nervous every time I see a deck with "AI-powered" in the headline and no architecture diagram in the appendix. There's usually a reason it isn't there. I'd love to be wrong about that. So far, mostly not.
The 80/20 Reality
After building two AI-first products (Conesta and RUBL), I can tell you the part nobody puts on the landing page:
80% of any AI product is traditional software engineering. Database design. API architecture. Authentication. State management. Error handling. Deployment. Monitoring. The boring, essential plumbing that decides whether the product survives Tuesday.
AI is a genuine differentiator for about 10-12 features in a typical product:
- Classification — categorising inputs (lead quality, support ticket priority)
- Extraction — pulling structured data from unstructured text (contacts from emails, action items from meetings)
- Summarisation — condensing long content into actionable summaries
- Generation — creating content, responses, recommendations
- Pattern recognition — identifying trends in usage data, predicting churn
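To make one of these concrete: extraction is where a model genuinely earns its keep, because writing a regex for every way humans phrase an action item is a losing game. The sketch below is illustrative rather than anything we ship; `extract_action_items` is a made-up name and the injected `complete` callable stands in for whichever model client you actually use.

```python
import json
from typing import Callable

def extract_action_items(transcript: str, complete: Callable[[str], str]) -> list[dict]:
    """Pull structured action items out of free-form meeting notes.

    `complete` is whatever function calls your model provider and returns
    its text response; it is deliberately left abstract here.
    """
    prompt = (
        "Extract the action items from the meeting notes below. "
        "Return only a JSON array of objects with keys "
        "'owner', 'task' and 'due_date' (null if not stated).\n\n"
        + transcript
    )
    raw = complete(prompt)  # the one genuinely AI step
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        return []  # model output is probabilistic; always plan for garbage
    if not isinstance(items, list):
        return []
    # Everything after the model call is ordinary validation code.
    return [item for item in items if isinstance(item, dict) and item.get("task")]
```

Notice how little of it is AI: one model call, wrapped in ordinary parsing and validation.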
Everything else — date comparisons, calculations, database queries, form validation, workflow automation — is standard code that has been running reliably since before anyone had heard of a transformer. Calling it AI is marketing. Building it as AI is a self-inflicted bill that arrives every month from your model provider.
The "30-40% AI" Lie
We recently reviewed a product spec with 335 features across 6 modules, roughly 30-40% of them tagged as "AI-powered." When we audited them, the real AI features were closer to 10-12%.
The rest were things like:
- "AI-powered date calculation" — that's subtraction
- "AI-driven data comparison" — that's a SQL query
- "Intelligent categorisation" — that's an if-else statement with a config file
I have written all three of these. None of them needed AI. None of them benefit from AI. Routing a date subtraction through a language model is the engineering equivalent of taking a helicopter to your kitchen.
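For the record, here is roughly what those three features look like as ordinary code. The function names, the SQL shape, and the category config are all hypothetical, but the point stands: nothing here needs a model, a prompt, or a per-request bill.

```python
from datetime import date

# "AI-powered date calculation" is subtraction.
def days_until(deadline: date, today: date | None = None) -> int:
    return (deadline - (today or date.today())).days

# "AI-driven data comparison" is a SQL query (table and columns are illustrative).
COMPARE_PERIODS_SQL = """
SELECT this_month.customer_id,
       this_month.total - last_month.total AS delta
FROM   monthly_totals AS this_month
JOIN   monthly_totals AS last_month
       ON  last_month.customer_id = this_month.customer_id
       AND last_month.month = this_month.month - 1
"""

# "Intelligent categorisation" is an if-else driven by config.
CATEGORY_RULES = {  # hypothetical config, normally loaded from a file
    "refund": "billing",
    "invoice": "billing",
    "password": "account",
    "crash": "bug",
}

def categorise(ticket_subject: str) -> str:
    subject = ticket_subject.lower()
    for keyword, category in CATEGORY_RULES.items():
        if keyword in subject:
            return category
    return "general"
```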
This matters because AI is expensive to build, expensive to run, and (in my experience) expensive to debug at 3am when the rate limit hits. Treat every feature as an AI problem and you'll probably spend 10x the engineering time and compute cost for no additional user value. We also get paged more often when we do this, though I keep hoping that's just us.
What "AI-First" Actually Means
Our CEO has a line on this I think about more than I'd like to admit:
"AI-native products supersede AI-integrated ones the same way smartphones superseded phones with apps stapled on. Once the architecture changes, the old shape stops being a category."
At Fludigo, AI-first does not mean AI-everything. It means:
- Start with the intelligence layer — what decisions does the system need to make? What patterns does it need to recognise?
- Build the foundation traditionally — databases, APIs, authentication, UI. This is 80% of the work and it has to be boring. Boring means it doesn't wake anyone up at 3am.
- Apply AI where it's 10x better — where the alternative is a human spending hours on something AI does in seconds with acceptable error rates
- "AI proposes, human approves" — the system suggests, the user confirms. Trust gets earned over time, not assigned at launch.
In Conesta, AI generates learning paths because mapping concept relationships across thousands of topics is genuinely something AI does 10x better than a human with a whiteboard. The study rooms, resource marketplace, and achievement system? Standard engineering. They run on Postgres and good taste, and they are fine.
The Salesforce Cautionary Tale
Salesforce cut 4,000 support jobs and replaced them with AI (Agentforce). The result was a 58% success rate on simple tasks. Support quality deteriorated. Customers noticed. The model didn't apologise.
They prioritised AI hype over fixing the parts of the platform that were already wobbly. The lesson is uncomfortable: if your traditional software is unreliable, adding AI on top gives you unreliable AI on top of unreliable software, with twice the surface area for things to go wrong.
AI is the salt, not the steak. Salt makes a good steak great. If the steak is bad, more salt just makes it salty and bad.
How to Evaluate AI Products
Next time a SaaS company tells you their product is "AI-powered," ask:
- Which specific features use AI? If they cannot name them, it's marketing.
- What happens when the AI is wrong? If there is no fallback, it isn't production-ready; see the sketch after this list. (It will be wrong. The question is when, not whether.)
- What's the human override? If you can't correct the AI, the AI is — in my view — a liability with a UI on it.
- What data does it need? If it needs 6 months of usage data to start working, it isn't solving your problem today. It's solving the problem of someone who will be using it in 2027.
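The fallback question is the one worth pressing hardest on. The sketch below shows the shape of answer I'd want to hear, with hypothetical names and thresholds: the model gets first try, a deterministic default catches failures and low confidence, and a human correction always wins.

```python
from typing import Callable

FALLBACK_PRIORITY = "normal"  # deterministic default when the model can't be trusted
CONFIDENCE_FLOOR = 0.7        # illustrative threshold, tuned per feature

def triage_ticket(
    subject: str,
    classify: Callable[[str], tuple[str, float]],  # model call: (label, confidence)
) -> dict:
    """Classify a ticket with an explicit fallback and a human override slot."""
    try:
        label, confidence = classify(subject)
    except Exception:
        # Rate limit, timeout, provider outage: the product must keep working.
        label, confidence = FALLBACK_PRIORITY, 0.0

    if confidence < CONFIDENCE_FLOOR:
        label = FALLBACK_PRIORITY

    return {
        "priority": label,
        "source": "model" if confidence >= CONFIDENCE_FLOOR else "fallback",
        "overridden_by": None,  # filled in if a human corrects it later
    }

def override(ticket: dict, new_priority: str, user_id: str) -> dict:
    """A human correction always wins, and we record that it happened."""
    ticket.update(priority=new_priority, source="human", overridden_by=user_id)
    return ticket
```

The `source` field matters more than it looks: when something goes wrong, you want to know whether the model or the fallback made the call.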
Build the 80% first. Apply AI to the 20% where it's transformative. Ship. Then sleep. That's the order that's worked for us, anyway.
This is how we build at Fludigo. AI where it matters, solid engineering everywhere else. See our products.
