RabbitsHat Lab

The RabbitsHat Manifesto

On Memory, Knowledge, and the Autonomy of Expertise

I. The Problem We Chose

There is a quiet crisis in how expertise lives and dies. A seasoned consultant carries twenty years of pattern recognition in her head — the ability to walk into a room and sense the unspoken power dynamics, the instinct for which question to ask third, the hard-won knowledge of what doesn't work. None of this is written down. Most of it cannot be. And when she stops practicing, it disappears.

We built entire civilizations on the transmission of expert knowledge. Apprenticeships. Guilds. Doctoral supervision. These are slow, intimate, fragile systems. They work — but they don't scale. And they leave no trace when the chain breaks.

Artificial intelligence was supposed to change this. It hasn't. What we got instead are systems that process language without understanding practice, that generate plausible text without grasping why an expert chose this word over that one. Systems that treat every interaction as if it were the first. Systems without memory.

We believe this is the central unsolved problem of applied AI: not generation, but retention. Not fluency, but fidelity. Not intelligence in the abstract, but intelligence that belongs to someone.

II. What We Mean by Memory

Memory is not storage. Databases store. File systems store. Vector embeddings store — with the added illusion of understanding.

Memory, in the sense that matters, is relational. It is the ability to recall not just what happened, but why it mattered. To connect a decision made in 2019 to a principle articulated in 2014. To recognize that today's client resembles — in a specific, non-obvious way — a case from seven years ago.

Human experts do this effortlessly and invisibly. They call it "experience." We call it the hardest computational problem nobody is working on.

Current AI architectures are structurally amnesiac. Large Language Models operate in an eternal present tense. Context windows expand, retrieval-augmented generation bolts on fragments of the past — but none of this produces memory in any meaningful sense. What it produces is search results dressed as recollection.

At RabbitsHat, we are building systems that remember the way practitioners remember: relationally, contextually, and with a sense of what matters.

Our work draws on Relational Frame Theory — a psychological framework that models how humans derive meaning not from isolated stimuli, but from the relationships between them. Equivalence, opposition, hierarchy, temporality, causality — these are the frames through which an expert organizes experience. We encode them computationally. Not as metadata tags, but as first-class cognitive structures.
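To make "first-class cognitive structures" concrete, here is a minimal sketch of what encoding relational frames as data — rather than as metadata tags — could look like. All names (`Frame`, `RelationalMemory`, the example identifiers) are illustrative assumptions for this sketch, not part of any RabbitsHat system; the derivation rules shown (symmetry and transitivity of equivalence) are the standard entailments described by Relational Frame Theory.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Frame(Enum):
    """Relational frames drawn from Relational Frame Theory."""
    EQUIVALENCE = auto()   # A is the same kind of thing as B
    OPPOSITION = auto()    # A is the opposite of B
    HIERARCHY = auto()     # A is a kind of, or part of, B
    TEMPORAL = auto()      # A came before B
    CAUSAL = auto()        # A led to B

@dataclass(frozen=True)
class Relation:
    source: str
    target: str
    frame: Frame

class RelationalMemory:
    """Stores relations as first-class objects and derives new ones."""

    def __init__(self) -> None:
        self.relations: set[Relation] = set()

    def add(self, source: str, frame: Frame, target: str) -> None:
        self.relations.add(Relation(source, target, frame))
        # Mutual entailment: equivalence and opposition are symmetric.
        if frame in (Frame.EQUIVALENCE, Frame.OPPOSITION):
            self.relations.add(Relation(target, source, frame))

    def derive(self) -> None:
        """Combinatorial entailment: A~B and B~C entail A~C."""
        changed = True
        while changed:
            changed = False
            for r1 in list(self.relations):
                for r2 in list(self.relations):
                    if (r1.frame == r2.frame == Frame.EQUIVALENCE
                            and r1.target == r2.source
                            and r1.source != r2.target):
                        new = Relation(r1.source, r2.target, Frame.EQUIVALENCE)
                        if new not in self.relations:
                            self.relations.add(new)
                            changed = True

    def related(self, a: str, b: str, frame: Frame) -> bool:
        return Relation(a, b, frame) in self.relations

memory = RelationalMemory()
memory.add("client_2026", Frame.EQUIVALENCE, "case_2019")
memory.add("case_2019", Frame.EQUIVALENCE, "principle_2014")
memory.derive()
# The 2026 client is now linked to the 2014 principle, although the
# two were never related directly — the connection was derived.
print(memory.related("client_2026", "principle_2014", Frame.EQUIVALENCE))
```

The point of the sketch is the `derive` step: a tag store can only return what was written into it, while a relational store can answer questions about connections no one ever entered explicitly.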

III. What We Mean by Knowledge

Knowledge management has been a solved problem on paper since the 1990s. In practice, it remains a graveyard of abandoned wikis, outdated Confluence spaces, and SharePoint folders no one opens.

The reason is architectural. Traditional knowledge management systems treat knowledge as content — documents to be filed, tagged, and retrieved. But expert knowledge is not content. It is process. It lives in the interaction between a framework and a situation, between a principle and its exception, between what the textbook says and what actually works at 2 AM with a difficult stakeholder.

We reject the content model of knowledge. Instead, we build on the premise that expert knowledge is:

Procedural. It is embedded in action sequences, decision trees, and conditional heuristics that the expert often cannot fully articulate — but can demonstrate.
Contextual. The same expert applies different reasoning in different situations, not because she is inconsistent, but because her methodology is sensitive to context in ways that rules-based systems cannot capture.
Evolving. Expert knowledge is not static. It changes with every project, every failure, every correction. A knowledge system that cannot grow with its user is a photograph, not a mirror.

CaseOS — our applied platform — operationalizes this view. But the research behind it is more fundamental: we are studying how to make knowledge live inside computational systems the way it lives inside human minds.

IV. What We Mean by Autonomy

Autonomy is a dangerous word in AI. It conjures images of systems acting without oversight, of algorithms making decisions humans don't understand. That is not what we mean.

When we say autonomous expert agents, we mean systems capable of:

Sustained coherent action — maintaining a line of reasoning across days and weeks, not just within a single prompt-response cycle. An agent that can pick up where it left off. An agent with a yesterday.
Methodological fidelity — following the logic of a specific professional approach, not defaulting to generic best practices. When an organizational psychologist's agent conducts a preliminary interview, it should sound like that psychologist, not like a median of all psychologists who have ever published.
Appropriate initiative — knowing when to act, when to ask, and when to stop. This is not a technical feature. It is a property of systems that have internalized enough of their user's judgment to understand the boundaries of their own competence.

We are not building another SaaS tool. CaseOS is an infrastructure for cognitive sovereignty — the right of an expert to own, control, and benefit from the computational representation of their own expertise.

VI. The Commitment

We commit to three principles that are non-negotiable:

Intellectual property belongs to the expert. Every correction, every refinement, every piece of methodology that enters our system remains 100% the property of the person who created it. We are infrastructure, not extractors.
Transparency of reasoning. Our agents do not operate as black boxes. If an agent makes a recommendation, the user can trace the logic back to the specific principles, cases, and memory structures that produced it.
Cognitive science, not hype. We publish. We cite. We build on peer-reviewed theory. When we invoke psychological concepts, we mean them technically, not metaphorically. This is not "AI that thinks like a human." This is AI built with rigorous models of how humans think.
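The transparency principle above implies a concrete design constraint: a recommendation must carry its provenance with it. A minimal sketch of that constraint, assuming illustrative names (`Evidence`, `Recommendation`, and the example identifiers are invented for this sketch, not CaseOS internals):

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """One item of support: a principle, a case, or a memory structure."""
    kind: str   # e.g. "principle", "case", "memory"
    ref: str    # identifier of the supporting item
    note: str   # why it supports the recommendation

@dataclass
class Recommendation:
    text: str
    evidence: list[Evidence] = field(default_factory=list)

    def trace(self) -> str:
        """Render the reasoning chain behind this recommendation."""
        lines = [f"Recommendation: {self.text}"]
        for e in self.evidence:
            lines.append(f"  <- {e.kind} {e.ref}: {e.note}")
        return "\n".join(lines)

rec = Recommendation(
    "Delay the stakeholder interviews until after the team workshop.",
    [
        Evidence("principle", "P-2014-03",
                 "sequence group sessions before individual ones"),
        Evidence("case", "C-2019-11",
                 "early interviews hardened positions in a similar engagement"),
    ],
)
print(rec.trace())
```

The design choice here is that evidence is a required part of the output type, not a log written somewhere else: a recommendation without a traceable chain simply cannot be constructed as a complete object.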

VII. The Bet

We are making a bet — not on technology, but on a class of people.

We believe that the world's most valuable knowledge is not in databases or documents. It is in the heads of independent practitioners who spent decades developing ways of seeing that no one else has. Consultants, analysts, coaches, researchers, clinicians, facilitators — people whose expertise is their product.

We believe these people deserve better tools than chatbots with amnesia.

We believe that memory, properly engineered, changes everything.

And we believe that the next era of AI will not be defined by who builds the largest model — but by who builds the most faithful one.

RabbitsHat. Scaling human brilliance.
Amsterdam — 2026


OUR TEAM

Alexander Eliseenko
CEO, Business Logic & Psychological Architecture
Alexander combines expertise in psychology with over 12 years of organizational consulting to architect the psychological frameworks powering our AI systems. His background in change management and digital transformation, coupled with more than five years developing ML system logic, enables the creation of AI assistants with psychological authenticity. As a lecturer at the Higher School of Economics (HSE), he brings theoretical rigor to our practical applications.
Andrey Kulikov
CTO, Technical Implementation & ML
With over 18 years of experience in IT, IT security, and Big Data, Andrey translates psychological models into robust, scalable technical architectures. His background includes 8 years as CEO of SocialLinks, where he led international expansion in web investigation and crime prevention solutions. Andrey ensures our psychologically informed AI maintains high standards of security and performance.

RabbitsHat
contact us