On Memory, Knowledge,
and the Autonomy of Expertise
There is a quiet crisis in how expertise lives and dies. A seasoned consultant carries twenty years of pattern recognition in her head — the ability to walk into a room and sense the unspoken power dynamics, the instinct for which question to ask third, the hard-won knowledge of what doesn't work. None of this is written down. Most of it cannot be. And when she stops practicing, it disappears.
We built entire civilizations on the transmission of expert knowledge. Apprenticeships. Guilds. Doctoral supervision. These are slow, intimate, fragile systems. They work — but they don't scale. And they leave no trace when the chain breaks.
Artificial intelligence was supposed to change this. It hasn't. What we got instead are systems that process language without understanding practice, that generate plausible text without grasping why an expert chose this word over that one. Systems that treat every interaction as if it were the first. Systems without memory.
We believe this is the central unsolved problem of applied AI: not generation, but retention. Not fluency, but fidelity. Not intelligence in the abstract, but intelligence that belongs to someone.
Memory is not storage. Databases store. File systems store. Vector embeddings store — with the added illusion of understanding.
Memory, in the sense that matters, is relational. It is the ability to recall not just what happened, but why it mattered. To connect a decision made in 2019 to a principle articulated in 2014. To recognize that today's client resembles — in a specific, non-obvious way — a case from seven years ago.
Human experts do this effortlessly and invisibly. They call it "experience." We call it the hardest computational problem nobody is working on.
Current AI architectures are structurally amnesiac. Large Language Models operate in an eternal present tense. Context windows expand, retrieval-augmented generation bolts on fragments of the past — but none of this produces memory in any meaningful sense. What it produces is search results dressed as recollection.
At RabbitsHat, we are building systems that remember the way practitioners remember: relationally, contextually, and with a sense of what matters.
Our work draws on Relational Frame Theory — a psychological framework that models how humans derive meaning not from isolated stimuli, but from the relationships between them. Equivalence, opposition, hierarchy, temporality, causality — these are the frames through which an expert organizes experience. We encode them computationally. Not as metadata tags, but as first-class cognitive structures.
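One way to read "first-class cognitive structures" is that relations are values the system can store, invert, and traverse, not strings in a tag field. The sketch below is a minimal illustration of that idea under stated assumptions: the names `Frame`, `RelationalNetwork`, and `MUTUAL_ENTAILMENT` are hypothetical, not the CaseOS API, and only one RFT property (mutual entailment, the derived inverse of a stated relation) is modeled.

```python
from dataclasses import dataclass, field
from typing import Iterator, List

# For each frame type, the relation entailed in the reverse direction
# (mutual entailment, in Relational Frame Theory terms).
MUTUAL_ENTAILMENT = {
    "equivalence": "equivalence",   # A same-as B   => B same-as A
    "opposition":  "opposition",    # A opposite B  => B opposite A
    "hierarchy":   "member_of",     # A contains B  => B member-of A
    "before":      "after",         # A before B    => B after A
    "causes":      "caused_by",     # A causes B    => B caused-by A
}

@dataclass(frozen=True)
class Frame:
    source: str
    relation: str
    target: str

@dataclass
class RelationalNetwork:
    frames: List[Frame] = field(default_factory=list)

    def relate(self, source: str, relation: str, target: str) -> None:
        """Store the stated frame plus its mutually entailed inverse."""
        self.frames.append(Frame(source, relation, target))
        inverse = MUTUAL_ENTAILMENT.get(relation)
        if inverse:
            self.frames.append(Frame(target, inverse, source))

    def derived(self, node: str) -> Iterator[Frame]:
        """Every frame leaving a node, whether stated or entailed."""
        return (f for f in self.frames if f.source == node)

net = RelationalNetwork()
net.relate("2019 client decision", "equivalence", "2014 pricing principle")
net.relate("2014 pricing principle", "causes", "walk-away rule")

# Recalling the 2014 principle surfaces both the old decision (via the
# entailed symmetric relation) and the rule it produced: relational
# recall rather than keyword search.
for frame in net.derived("2014 pricing principle"):
    print(frame.relation, "->", frame.target)
```

The point of the toy is the `relate` call: recording one directed statement also records what it entails, so recall can move through the network in directions the expert never had to write down explicitly.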
Knowledge management has been a solved problem on paper since the 1990s. In practice, it remains a graveyard of abandoned wikis, outdated Confluence spaces, and SharePoint folders no one opens.
The reason is architectural. Traditional knowledge management systems treat knowledge as content — documents to be filed, tagged, and retrieved. But expert knowledge is not content. It is process. It lives in the interaction between a framework and a situation, between a principle and its exception, between what the textbook says and what actually works at 2 AM with a difficult stakeholder.
We reject the content model of knowledge. Instead, we build on the premise that expert knowledge is:
CaseOS — our applied platform — operationalizes this view. But the research behind it is more fundamental: we are studying how to make knowledge live inside computational systems the way it lives inside human minds.
Autonomy is a dangerous word in AI. It conjures images of systems acting without oversight, of algorithms making decisions humans don't understand. That is not what we mean.
When we say autonomous expert agents, we mean systems capable of:
We are not building another SaaS tool. CaseOS is an infrastructure for cognitive sovereignty — the right of an expert to own, control, and benefit from the computational representation of their own expertise.
We commit to three principles that are non-negotiable:
We are making a bet — not on technology, but on a class of people.
We believe that the world's most valuable knowledge is not in databases or documents. It is in the heads of independent practitioners who have spent decades developing ways of seeing that no one else has. Consultants, analysts, coaches, researchers, clinicians, facilitators — people whose expertise is their product.
We believe these people deserve better tools than chatbots with amnesia.
We believe that memory, properly engineered, changes everything. And we believe that the next era of AI will not be defined by who builds the largest model — but by who builds the most faithful one.
RabbitsHat. Scaling human brilliance.
Amsterdam — 2026