There's a particular kind of frustration that comes from using a tool that almost works. Not broken — just slightly off. Like a chair adjusted for someone a few centimetres taller than you. You can sit in it. You'll manage. But by the end of the day, you feel it.
That's the experience most professionals have with AI right now.
The tools exist. Some of them are genuinely impressive. But impressive in a demo and useful in practice are two different things, and the gap between them is where most AI products quietly fail the people they're supposed to help.
The demo problem
AI tools are almost universally shown at their best. Clean inputs, clear outputs, frictionless handoffs. The demo shows a therapist generating session notes in seconds. A teacher building a week of lesson plans before lunch. A consultant producing a report that would have taken three days.
What the demo doesn't show is the therapist spending twenty minutes correcting notes that missed the clinical nuance that actually mattered. The teacher rebuilding plans from scratch because the AI had no idea what this particular class had already covered. The consultant editing out confident-sounding fabrications before sending anything to a client.
The demo is real. The friction is also real. And for people whose work depends on precision — whose clients, students, or patients are affected by the quality of their judgment — the friction isn't a minor inconvenience. It's a reason to stop using the tool entirely.
Built for the pitch, not the practice
Most AI tools are built to win procurement decisions, not to survive contact with daily work. That's not cynicism — it's just how product development works when the people signing off on the budget are rarely the people doing the job.
The result is tools optimised for the impressive moment rather than the mundane one. And in most professions, the work is mostly mundane moments. Scheduling. Notes. Follow-up. Preparation. The administrative layer that sits around the actual practice and slowly eats it.
That's where people need help. Not with the headline task — the session, the consultation, the lesson — but with everything that surrounds it. The before and after. The record-keeping. The preparing to be present.
AI that only shows up for the impressive moment misses most of the day.
The context problem
Here's what most AI tools don't have: history.
A professional's relationship with a client, student, or patient is cumulative. What happened last time matters. What was tried and didn't work matters. What the person is working toward matters. The current session doesn't exist in isolation — it exists in a sequence, and understanding that sequence is most of what expertise actually is.
Generic AI tools have no access to that sequence. They start from zero every time. Which means the practitioner has to carry all of that context themselves, reconstruct it on the fly, and then figure out how to translate the AI's output back into something that accounts for it.
That's not saving time. That's adding a step.
The tools that actually help are the ones built around the practitioner's existing context — their notes, their history, their way of working. Not a general-purpose assistant that can technically do anything, but a specific one that actually knows something about this client, this case, this situation.
That specificity is hard to build. It requires thinking about the whole workflow, not just the moment of AI interaction. But it's the difference between a tool that impresses and a tool that sticks.
Why professionals stop
When you talk to professionals who've tried AI tools and stopped using them, the reasons cluster around a few familiar themes.
"It doesn't know my context." "It sounds confident but gets things wrong." "I spend more time fixing it than I saved using it." "It doesn't fit how I actually work."
None of these are complaints about AI being bad. They're complaints about AI being generic. Capable in the abstract, but not shaped to the specific demands of a specific practice.
The professionals who do stick with AI tools — who genuinely integrate them into daily work — almost always find ways to constrain the tool. They build their own prompts, their own workflows, their own guardrails. They do the work of making the generic specific. And then the tool becomes useful.
But that's a significant ask. Most people don't have the time or inclination to become prompt engineers on top of everything else they're already doing. They need the tool to do that work for them.
What actually helps
The AI tools that earn long-term trust in professional practice tend to share a few characteristics.
They're narrow. They do one thing well rather than everything adequately. A professional can trust a narrow tool because they can verify it. They know what it's for, they know what it produces, they know where it's likely to be wrong.
They work with existing context. They're built around the notes, the history, the records that already exist — not as a bolt-on, but as a foundation. The AI output is shaped by what the practitioner already knows about this situation.
They get out of the way. The best tools are ones you stop noticing. They reduce friction quietly, without demanding that you learn a new way of working. They fit into the practice rather than asking the practice to reshape itself around them.
And they're honest about what they are. Not magic. Not a replacement for judgment. A tool — a specific, useful, limited tool — that handles some of the weight so the practitioner can focus on the part that actually requires them.
That's what we're trying to build at shiftLeft.
Cadence is the first product — a client and session tracking tool with AI-assisted session prep, built around the idea that the work between sessions matters as much as the session itself. It's not trying to do everything. It's trying to do one thing well: help practitioners show up prepared, without the administrative overhead that gets in the way.