I've been using Otter, Granola, Fireflies, and Notion AI for the last six months. Two of them are in regular rotation at my company. They're genuinely good products and I'd recommend them.
This isn't a takedown. It's a category clarification.
If you're a CxO right now choosing between AI meeting tools, a confident salesperson has probably told you that their tool will "capture every action item from your meetings and never let anything slip through the cracks."
That sentence is half true. The tool will capture action items from each meeting. The cracks the salesperson is selling against, though, aren't the cracks that are losing your team time.
There are three categories of real work that no AI note-taker on the market currently does. Two of them are why your team is missing deadlines.
They don't link commitments across meetings
Same commitment, said three times in three different meetings, slightly different wording each time. To an AI note-taker, that's three separate action items.
- Monday: action item — "Raj to ship the migration."
- Wednesday: action item — "Raj is moving the database over."
- Friday: action item — "Raj will have the migration done by EOQ."
A system that treats each meeting as an independent unit sees three distinct things Raj is supposed to do. Open the action-items dashboard in any of these tools and count how many duplicates of the same underlying commitment you've accumulated. The number will surprise you.
The reality is that those three lines are one commitment, reaffirmed twice. And the count of reaffirmations is the most important piece of information about that commitment, more important than the description itself, because reaffirmations are the signal that the work isn't moving.
The reason none of the tools do this is that connecting "ship the migration" to "have the migration done by EOQ" requires understanding they refer to the same thing despite zero shared keywords. That's a semantic match, not a string match. The system has to embed each commitment into a vector space, find prior commitments by meaning rather than by words, and then decide whether something is the same thing said again or a new thing entirely.
That last decision — "this is the same thing said again" — is where all the actual product lives. Tune it too aggressively and you merge unrelated work. Tune it too conservatively and you list every commitment three times. There's a narrow band where it works, and tuning that band is the engineering project.
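To make the narrow band concrete, here's a minimal sketch of the match-or-new decision. The vectors below are hand-made toys standing in for a real embedding model, and the two thresholds are illustrative, not tuned values; the point is the three-way verdict, not the numbers.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for a sentence-embedding model.
# In a real system these come from the model, not a lookup table.
EMBEDDINGS = {
    "Raj to ship the migration":               [0.90, 0.10, 0.00],
    "Raj is moving the database over":         [0.80, 0.20, 0.10],
    "Raj will have the migration done by EOQ": [0.85, 0.15, 0.05],
    "Priya to draft the pricing page":         [0.10, 0.90, 0.20],
}

# The narrow band: above HIGH it's the same commitment reaffirmed,
# below LOW it's new work, in between a human should look.
LOW, HIGH = 0.75, 0.92  # hypothetical thresholds

def classify(new_item, prior_items):
    """Return (verdict, best_match) for a new action item against priors."""
    best, best_sim = None, -1.0
    for prior in prior_items:
        sim = cosine(EMBEDDINGS[new_item], EMBEDDINGS[prior])
        if sim > best_sim:
            best, best_sim = prior, sim
    if best_sim >= HIGH:
        return "reaffirmation", best
    if best_sim >= LOW:
        return "needs-review", best
    return "new", None
```

With these toy vectors, "Raj is moving the database over" classifies as a reaffirmation of "Raj to ship the migration" despite sharing no keywords, while Priya's unrelated item comes back as new.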
The note-takers don't attempt it. Their job ends when the meeting ends.
They don't detect drift across reaffirmations
Drift is when the same commitment gets restated with a changed value attached.
- Monday: "I'll have it by Wednesday."
- Wednesday: "I'll have it by Friday."
- Friday: "I'll have it by next Tuesday."
- Next Tuesday: ?
Each line in isolation is a normal status update. The drift is the sequence, and the sequence is invisible if your tool treats each meeting as independent.
This is the slow no I wrote about last week. It's the single most expensive pattern in operating a leadership team, and none of your current tools watch for it.
Catching it requires three things: cross-meeting memory (the previous section), reaffirmation tracking, and trend awareness across the values attached to a commitment over time. Each of those is a straightforward engineering problem on its own. The combination is what nobody has built.
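Once the cross-meeting memory exists, the trend-awareness piece is small. Here's a sketch of drift detection over the example above: each restatement is a (date said, deadline promised) pair, and drift is the deadline sliding later across restatements. The `min_slips` cutoff is an assumption for illustration.

```python
from datetime import date

def detect_drift(restatements, min_slips=2):
    """restatements: ordered list of (said_on, promised_deadline) pairs.

    Returns (is_drifting, total_slip_days). Drift means the promised
    deadline moved later on at least `min_slips` restatements.
    """
    slips = 0
    total_slip = 0
    for (_, prev_dl), (_, next_dl) in zip(restatements, restatements[1:]):
        if next_dl > prev_dl:           # the value attached moved later
            slips += 1
            total_slip += (next_dl - prev_dl).days
    return slips >= min_slips, total_slip

# The Monday/Wednesday/Friday sequence from above (dates are illustrative):
history = [
    (date(2025, 3, 3), date(2025, 3, 5)),   # Monday: "by Wednesday"
    (date(2025, 3, 5), date(2025, 3, 7)),   # Wednesday: "by Friday"
    (date(2025, 3, 7), date(2025, 3, 11)),  # Friday: "by next Tuesday"
]
```

Each pair in isolation is a normal status update, which is why per-meeting tools can't see it; the function only has something to say because the sequence was kept.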
The note-takers don't do this not because it's exotic but because it's a different category of product. They're solving a per-meeting problem. This is an across-meetings problem.
They don't maintain state of work between meetings
Here's a quick test. Pull up your AI note-taker right now. Search for "the Q3 cost initiative" or whatever your equivalent is. What you'll get is every meeting where it was mentioned, probably 10 to 15 results, each one a separate transcript with a separate summary and a separate action-items list.
What you actually need is one entry. A single record that says:
Q3 cost initiative — original target $4M, current committed $3.4M, reaffirmed in 11 meetings, last status update 4 days ago, deadline moved twice (by 6 weeks total), owner is the CFO, related commitments [X, Y, Z].
That's state. The synthesised current shape of the commitment, derived from every meeting that touched it, kept up to date as new meetings happen.
It's also what you actually need to operate. The 15 separate transcripts are evidence — that's how you'd verify the state if you wanted to. But the state itself is the artifact.
No AI note-taker maintains state. They maintain transcripts. There's a fundamental difference: a transcript is a snapshot of a moment; state is a synthesis across moments. You can derive state from transcripts, but only if you have a system whose job is to do exactly that synthesis, and to keep updating it as new meetings happen.
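A state record of that shape can be sketched as a small data structure with one operation: fold a new meeting's mention into the synthesis while leaving the transcripts alone. Field names and the `fold` method are hypothetical, a minimal illustration of the transcript-vs-state distinction rather than a real schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class CommitmentState:
    """Synthesised current shape of one commitment across meetings."""
    name: str
    owner: str
    deadline: date
    reaffirmations: int = 0
    deadline_moves: int = 0
    last_update: Optional[date] = None
    history: list = field(default_factory=list)  # meeting ids, kept as evidence

    def fold(self, meeting_id, said_on, new_deadline=None):
        """Fold one meeting's mention into the state.

        The transcript itself stays untouched; only this record changes.
        """
        self.reaffirmations += 1
        if new_deadline is not None and new_deadline != self.deadline:
            self.deadline_moves += 1
            self.deadline = new_deadline
        self.last_update = said_on
        self.history.append(meeting_id)
```

Search then returns this one record instead of 15 transcripts, and the `history` list is the pointer back to the evidence if you want to verify it.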
That system is the layer that's missing.
Why I'm writing this
Two reasons, and I want to be honest about both.
The first is that I'm building this missing layer. It's called Continuum. It does the three things above and stops. It doesn't transcribe (the note-takers do that better than we will) and it doesn't summarise (same). It ingests whatever meeting notes you already have and builds the state layer on top.
If the gap I'm describing is one you've felt, the next step is to email me at hello@continuumstate.io. We're in design partner phase, which means we'll work alongside you for eight weeks at no cost in exchange for honest feedback.
The second reason is less commercial and I want to name it. The category language around AI meeting tools is misleading right now. The salesperson telling you their tool "never lets anything slip through the cracks" is selling a feature their tool doesn't have. Your team is going to miss deadlines because of that gap, and that's frustrating to watch.
The cracks are real. The tools that close them haven't been built yet. Most of them, anyway.
The carve-out, for clarity
What we are not:
- We're not Otter, Granola, or Fireflies. We don't transcribe.
- We're not Notion AI. We don't summarise.
- We're not Linear or Asana. We don't manage tickets.
- We're not Salesforce or HubSpot. We don't manage customer state.
What we are:
- The state layer between meetings.
- The thing that says: this commitment has been restated four times, the deadline has moved twice, and it's been 17 days since anyone gave it a real status update.
If you've read this far, you probably have a specific commitment in mind right now. I'd genuinely like to hear about it.
Three things. Cross-meeting linkage. Drift across reaffirmations. State of work between rooms.
Which one bites your team hardest? Different orgs answer differently and the answer informs what we ship next. hello@continuumstate.io.