The Meeting-to-Wiki Gap: Why Your Conversations Don't Flow Into Your Knowledge Base Yet
Every meeting tool now has an MCP server. None of them write back. Why the most valuable knowledge in your organization still doesn't compound — and what that gap looks like.
In his April 2026 post describing an LLM-maintained knowledge base, Andrej Karpathy listed the obvious use cases. Personal notes. Research. Code. And then, almost in passing, he named the one that should have made every meeting-software founder sit up:
"Business/team: an internal wiki maintained by LLMs, fed by Slack threads, meeting transcripts, project documents, customer calls. The wiki stays current because the LLM does the maintenance that no one on the team wants to do."
Meeting transcripts. Customer calls. Listed as primary source material for the self-maintaining wiki pattern. Not an afterthought — a named input, next to Slack and project docs.
It has now been several weeks since that post. In that time, a wave of meeting tools has launched Model Context Protocol servers, raised new rounds, and rebranded themselves as "knowledge bases." And yet, if you read the release notes carefully, none of them are doing what Karpathy described. Not one. The industry has wired up the pipes, but the pipes flow in only one direction.
This is the meeting-to-wiki gap. It is the most valuable unbuilt product in the meeting AI category, and it is strange that it is still unbuilt.
The MCP Wave: Every Meeting Tool Got Connected
Between late 2025 and early 2026, every serious meeting tool shipped an MCP server. The timing was not coincidental — MCP became the de facto standard for connecting AI agents to external data, and no meeting vendor wanted to be the one that an agent could not read.
The list is now long enough that it reads like a category census:
- Otter.ai shipped an official MCP server with OAuth, supporting both Claude and ChatGPT as clients. Agents can search transcripts, pull action items, and query meeting metadata.
- Fireflies.ai released its official MCP alongside "AskFred" — a chatbot that answers questions across every meeting — and a "Knowledge Base" feature that suggests answers during live calls.
- Granola hit a $1.5B valuation in March 2026 and launched its official MCP server in February. Agents can query notes, transcripts, and highlights across a user's Granola library.
- Read.ai introduced Search Copilot, a unified retrieval layer across meetings, emails, and chat, exposed via MCP.
- Circleback built a single MCP connector that spans meetings, email, and calendar, positioning itself as a cross-surface context provider.
- tl;dv shipped an official MCP and registered it in the public MCP registry.
- Fathom is accessible via a community MCP through Truto's connector layer.
Read the list as an industry decision. The category collectively agreed, within a few months, that AI agents need structured access to meeting data. That is a real shift. Two years ago, meeting tools were closed silos whose value proposition was a nice summary email. Now they are context providers for whatever agent you happen to be using.
And yet.
Every Single One Is Read-Only
Here is the observation that matters more than the list itself: every meeting MCP server shipped so far exposes read operations. Search transcripts. Fetch summaries. Query action items. List meetings. Get attendees. Return highlights.
None of them writes back.
None of them takes a meeting that just ended and updates a project page. None of them maintains a "Person: John Smith" entry that accumulates every reference to John across dozens of calls. None of them keeps a decision log that spans meetings, tagging each entry with who decided it and what it superseded. None of them synthesizes "everything Client Y has said about pricing across the last five calls" into a persistent artifact you can open tomorrow. None of them notices when Tuesday's meeting contradicted a decision from last month and flags the contradiction.
The pipeline breaks at the most valuable step. Capture works. Transcription works. Retrieval works. What does not work — what no product currently does automatically — is the part Karpathy actually described: compiling conversations into a knowledge base that compounds.
The MCP wave connected meetings to agents. It did not connect meetings to knowledge.
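The asymmetry is easy to state in code. Below is a minimal sketch with hypothetical tool names, chosen for illustration only; no vendor's actual MCP surface is being quoted:

```python
# Hypothetical tool names, for illustration only -- not any
# vendor's actual MCP surface.
READ_TOOLS = {
    "search_transcripts",   # full-text search over past meetings
    "get_summary",          # fetch the summary of one meeting
    "list_action_items",    # action items extracted at capture time
    "list_meetings",        # meeting metadata: title, date, attendees
}

# What every meeting MCP server ships today: reads only.
SHIPPED_WRITE_TOOLS: set[str] = set()

# What the Karpathy pattern would require on top.
MISSING_WRITE_TOOLS = {
    "update_project_page",   # fold a new meeting into a project page
    "append_decision_log",   # add a decision, noting what it supersedes
    "upsert_person_entry",   # accumulate references to a person
    "flag_contradiction",    # surface conflicts with prior decisions
}

def is_read_only(write_tools: set[str]) -> bool:
    """A server with zero write tools cannot compound knowledge."""
    return len(write_tools) == 0
```

The empty set is the whole argument: every server in the census above has a populated read inventory and an empty write one.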
The Granola Limitation
Granola is the cleanest illustration of the gap, because it is the product most people would name if asked which meeting tool is best. $1.5 billion valuation as of March 2026. Genuinely excellent UX. Loved by the kind of power users who would, in any other era, have built this themselves.
And yet, a Granola power user recently wrote something that should be read as a diagnosis of the entire category:
"Granola doesn't have a place to store answers you get when you query your meetings. You have to copy and paste into a separate notes tool."
Sit with that for a moment. The best-capitalized, best-designed product in the meeting AI category captures meetings beautifully, lets you ask questions across them, and then discards the answers. The reasoning the LLM just performed over your corpus — the synthesis, the cross-referencing, the "here is what the client has said about pricing over five calls" — lives in a chat window. When the chat window closes, the reasoning is gone. If you want to keep it, copy and paste.
This is not a Granola failure. It is a category failure. Granola is doing exactly what every other tool in the space is doing. The query interface is treated as the product, and the knowledge that results from querying is treated as ephemeral. There is no "wiki" in the Karpathy sense — no persistent, LLM-maintained artifact that the next query can build on.
Granola captures brilliantly. It does not compile.
Otter's Pivot Is Evidence the Industry Knows
If you want proof that the category is aware of the gap, look at Otter.
Sam Liang, Otter's CEO, has spent the last year explicitly repositioning Otter as a "corporate meeting knowledge base." The phrase is now in the keynotes, the decks, the press. It is the stated direction of the company. The language is exactly right. The language is also ahead of the product.
Otter's actual surface, as of early 2026, is still a flat search layer over transcripts with improved summarization and a chatbot on top. There is no entity resolution across meetings. No decision log that accumulates. No "person" or "project" or "customer" page maintained by the LLM. No contradiction detection. No compiled artifact in the Karpathy sense. It is a better filing cabinet, rebranded as a knowledge base.
This is worth pointing out not as a criticism but as evidence. The biggest incumbent in the category has decided that "knowledge base" is the right positioning. They are putting marketing muscle behind it. What they have not yet done is build it. The gap between the positioning and the product is the gap this essay is about.
The Obsidian DIY Graveyard
If the tools are not doing it, are the users? Yes — and painfully.
The Obsidian community has been the canary in this coal mine for at least a year. Power users who understand the Karpathy pattern intuitively (many of them were running Dataview and linked-note workflows long before LLMs) have been trying to bridge meetings into their vaults with increasingly elaborate home-built setups:
- Shadow (shadow.do) — saves meeting transcripts as `.md` files into a synced folder. Basic, functional, no compilation.
- Char — an open-source project that transcribes system audio plus typed notes into `.md` files, intended to drop into an Obsidian vault.
- tsheil/obsidian_plugin_AI_meeting_notes — a community plugin using local Whisper and Ollama for a fully offline pipeline. Works, but is aimed at developers who are comfortable configuring model servers.
- SystemSculpt's workflow, published in February 2026, is a multi-step manual procedure: record, transcribe, run a prompt, paste the result into Obsidian, link by hand.
Every one of these solutions is fragmented. Every one is manual at one or more steps. Every one is built for someone who is already comfortable with local inference, custom scripts, or fragile automation. And every one of them exists because the commercial tools do not do it.
This is a significant signal. When the most technically sophisticated users in a category are building their own half-solutions out of duct tape, the demand is real and the supply is broken.
Why No One Has Built It
It is worth asking honestly why this gap persists. The answer is not that founders are lazy or that the idea is obscure. The answer is that it is genuinely hard and genuinely out of fashion.
The technical complexity is real. Read is easy. Write is hard. A read-only MCP that returns transcript chunks is a weekend project. A system that takes a meeting, resolves entities against a persistent graph, detects conflicts with prior decisions, updates the right wiki pages, and does all of this reliably enough that users trust the output — that is a months-to-years engineering problem. It requires schemas, idempotency, entity resolution, and the kind of quiet reliability that does not demo well.
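Idempotency alone illustrates the difference in difficulty. A minimal sketch of at-most-once wiki updates, assuming a content-hash dedupe key; `apply_update` is a stand-in for whatever actually edits the wiki, and a real system would persist the seen-set durably rather than keep it in memory:

```python
import hashlib

def transcript_fingerprint(meeting_id: str, transcript: str) -> str:
    """Stable key for one processed meeting: same input, same key."""
    h = hashlib.sha256()
    h.update(meeting_id.encode("utf-8"))
    h.update(transcript.encode("utf-8"))
    return h.hexdigest()

def apply_once(meeting_id: str, transcript: str,
               processed: set[str], apply_update) -> bool:
    """Run apply_update at most once per (meeting, transcript) pair.

    Returns True if the update ran, False if it was a duplicate.
    Re-delivered webhooks and retried jobs make duplicates routine,
    which is why a write path needs this and a read path does not.
    """
    key = transcript_fingerprint(meeting_id, transcript)
    if key in processed:
        return False
    apply_update(meeting_id, transcript)
    processed.add(key)
    return True
```

A read endpoint that returns the same chunk twice is harmless. A write path that appends the same decision twice corrupts the wiki, which is why the seen-set has to exist before the first customer does.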
Privacy is a genuine obstacle. Pumping meeting audio into a continuously LLM-maintained wiki — one that touches named individuals, client conversations, and internal decisions — raises real GDPR and data sovereignty questions. Who owns the compiled wiki? Where does it live? What happens when an employee leaves and asks for deletion? These questions have answers, but the answers take work, and the work has to be done before the first customer is onboarded.
VCs do not reward it. Meeting tools get valued on AI features that show up in a launch tweet: a better summary, a faster transcript, a live copilot. Knowledge graphs are a slow burn. You cannot screenshot a self-maintaining wiki the way you can screenshot an auto-generated summary. The category's funding dynamics have quietly pushed founders toward the demo-friendly features and away from the compound ones.
The second-brain graveyard casts a long shadow. Evernote. Roam. Skiff. Limitless (née Rewind). Mem. Investors have watched knowledge-management products fail, repeatedly, for a decade. Anything that smells like "build a second brain" triggers pattern-matching against those losses. The fact that LLMs have fundamentally changed the economics of knowledge compilation has not yet fully registered.
Most founders are building tools, not workflows. It is easier to build a better UI than to rethink how meeting knowledge should flow into the rest of a company's information architecture. The tools-vs-workflows distinction is the difference between a feature and a product thesis, and the category has mostly chosen features.
None of these reasons are disqualifying. They are just the reasons the gap is still a gap.
What the Right Product Would Do
Strip the question down to first principles. If you were building the product Karpathy described, specifically for meetings, what would it actually do?
- Capture privately. Bot-free recording, locally processable where possible, GDPR-aligned by default. You cannot ask users to pump sensitive meetings into a continuously compiled wiki unless the capture layer is trustworthy.
- Extract entities automatically from every meeting. People, companies, projects, decisions, commitments, dates. Not as tags on a transcript, but as first-class objects in a persistent graph.
- Synthesize across meetings into persistent artifacts. When you ask "what has the client said about pricing over the last five calls," the answer should not be a chat response that disappears. It should be a markdown file that exists tomorrow and gets updated after the sixth call.
- Write back, not just read. The MCP surface should expose update operations: "append to this decision log," "create a person entry," "flag this contradiction." Read-only is a starting point, not the product.
- Define the compilation schema explicitly. A
CLAUDE.mdorAGENTS.mdstyle file that tells the LLM how meeting content maps into the wiki: what counts as a decision, how people get merged, what a project page looks like. The schema is the product as much as the LLM is. - Preserve raw and wiki separately. Karpathy's
raw/vswiki/split applied to conversations: the transcripts are immutable, auditable, source-of-truth. The wiki is LLM-owned, rewriteable, and always current. You can always regenerate the wiki from the raw. You can never regenerate the raw.
That is the outline. It is not a mystery. It is a specification that could be written into a PRD this week. The reason it does not exist yet is the list of obstacles in the previous section, not a lack of clarity about the target.
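The raw/wiki split in particular is concrete enough to sketch. A hedged illustration, assuming markdown transcripts in a `raw/` folder; `compile_page` stands in for the LLM compilation step:

```python
from pathlib import Path
from typing import Callable

def rebuild_wiki(raw_dir: Path, wiki_dir: Path,
                 compile_page: Callable[[str], str]) -> int:
    """Regenerate wiki/ entirely from raw/.

    raw/ holds immutable transcripts (never modified here);
    wiki/ is fully derived, so it can be wiped and rebuilt at will.
    Returns the number of pages written.
    """
    wiki_dir.mkdir(parents=True, exist_ok=True)
    count = 0
    for transcript in sorted(raw_dir.glob("*.md")):
        page = compile_page(transcript.read_text())
        (wiki_dir / transcript.name).write_text(page)
        count += 1
    return count
```

Because the wiki is fully derived, a schema change means one rebuild rather than a migration, and a deletion request means deleting from both folders rather than untangling a merged store.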
One Company Is Trying
Proudfrog is the product most explicitly aimed at this gap — privacy-first capture, automatic entity extraction, and meeting knowledge that is designed to compound rather than disappear. There is more to build, and the honest framing is that the whole category is still reaching for the Karpathy bar. But the direction is the one this essay has been describing.
If you want the longer argument for why compiled knowledge beats flat transcripts at meeting scale, the companion piece is Karpathy's LLM Wiki and What It Means for Meeting Knowledge. For the practical workflow, see the complete workflow guide. For how this plays out day-to-day, the knowledge workers use case is the closest description. And if you want to see how the landscape stacks up, the comparison page lays it out honestly.
The gap is real. Someone is going to close it. The only question is who and when.
Frequently Asked Questions
What's an MCP server and why does it matter for meetings?
Model Context Protocol (MCP) is a standard, introduced by Anthropic and rapidly adopted across the industry in 2025, that lets AI agents connect to external tools and data sources through a consistent interface. For meetings, it matters because it is the plumbing by which an agent — Claude, ChatGPT, or anything else — can reach into your meeting tool and query transcripts, summaries, action items, and metadata without a custom integration per vendor. Every major meeting tool now ships an MCP server, which is why agents can suddenly "see" your meetings. The limitation is that today's meeting MCPs are read-only: agents can query, but nothing writes the results back into a persistent knowledge base.
Why don't meeting tools just write back to my notes app?
Because writing back is much harder than reading, and much less demo-friendly. A read MCP is a thin wrapper over a search API. A write MCP that maintains a compiled, self-consistent knowledge base has to do entity resolution (is this "John" the same John as last week?), conflict detection (does this decision contradict an earlier one?), schema management (what does a project page look like?), and idempotent updates (what happens if the same meeting is processed twice?). It also has to be trusted enough that users let it modify their notes without supervision. Most vendors have chosen to ship the easy half and call it a knowledge base.
How is this different from Otter's "knowledge base" feature?
Otter's positioning is a corporate meeting knowledge base, but the underlying product is still flat search and summarization across transcripts, with a chatbot layered on top. There is no persistent, compiled wiki. There is no decision log that accumulates across meetings. There is no entity page that gets updated after each call. The marketing language is right; the engineering is not there yet. The gap between "searchable meeting archive" and "LLM-maintained knowledge base in Karpathy's sense" is the gap this article is about, and Otter is firmly on the searchable-archive side of it.
Could I build this myself with Karpathy's pattern and a meeting tool's API?
Technically yes, and some people in the Obsidian community have tried. The usual recipe is: use a meeting tool's export or MCP to pull transcripts, run them through a local or cloud LLM with a prompt that extracts entities and decisions, and append the results to markdown files in a vault. This works as a weekend project for a single user. It breaks down when you need reliability, entity resolution across meetings, conflict detection, multi-user collaboration, or privacy guarantees. The DIY solutions listed in this article — Shadow, Char, the community Obsidian plugins, SystemSculpt's workflow — are all attempts at this, and all of them stop short of what a real product would need to do.
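A minimal sketch of that recipe's last two steps, with the LLM call stubbed out; the `DECISION:`/`TODO:` line tagging is an assumption for illustration, not a convention any of the listed tools actually uses:

```python
from pathlib import Path

def extract_entries(transcript: str) -> list[str]:
    """Stand-in for the LLM extraction step.

    A real pipeline would prompt a model to pull out decisions and
    commitments; this stub just grabs pre-tagged lines.
    """
    return [line for line in transcript.splitlines()
            if line.startswith(("DECISION:", "TODO:"))]

def append_to_vault(vault_file: Path, meeting_title: str,
                    entries: list[str]) -> None:
    """Append one meeting's extracted entries to a markdown note."""
    block = [f"\n## {meeting_title}\n"] + [f"- {e}" for e in entries]
    with vault_file.open("a") as f:
        f.write("\n".join(block) + "\n")
```

Notice what this sketch lacks, which is exactly the list above: no dedupe if the same meeting is processed twice, no entity merging across notes, no conflict checks, no multi-user story. That gap between the weekend script and the product is the article's whole argument in miniature.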
Will GDPR allow this in Europe?
Yes, with care. GDPR does not prohibit meeting transcription or LLM-compiled knowledge bases. It requires a lawful basis for processing, data minimization, purpose limitation, the right to erasure, and attention to cross-border transfers. A well-designed compiled-knowledge product handles these by making capture explicit and consented, keeping the raw transcripts separate from the compiled wiki (so deletion requests can cleanly remove both), processing in European regions where possible, and being transparent about what the LLM is doing with the data. The obstacle is not the regulation. The obstacle is that doing it correctly takes engineering and legal work that most vendors have postponed.