Use Cases
Memory infrastructure for every AI application
From customer support chatbots to healthcare AI, CLAIV gives your application persistent, structured memory with three API calls. Ingest conversations, recall context within token budgets, and forget with GDPR-compliant audit receipts.
Explore how teams across industries use CLAIV to build AI that actually remembers.
Use Case
AI Customer Support Memory
Build support chatbots that remember every customer interaction, preference, and past issue across sessions. Deliver personalized resolutions without asking customers to repeat themselves.
The Problem
Support chatbots built on stateless LLMs treat every conversation as the first. A customer who has already explained their billing issue, provided their account details, and described their setup has to repeat everything when the session resets or they return the next day. This creates frustration, increases resolution time, and erodes trust in AI-powered support.
Stuffing entire conversation histories into context windows doesn't scale. Long support threads get silently truncated, losing critical details about prior resolutions, account preferences, and escalation history. Vector search returns semantically similar text but misses the specific fact that the customer already tried resetting their password twice.
Escalation tracking is even harder. When a ticket is handed from a frontline agent to a specialist, context is lost or manually copy-pasted. The specialist asks the same diagnostic questions the customer already answered. For enterprise support teams handling thousands of accounts, this compounding inefficiency directly raises cost-per-resolution and drives customers toward self-service or churn. GDPR and CCPA compliance adds another layer: deletion requests require documented proof that data was actually removed, not just a system flag.
How CLAIV Solves It
CLAIV's ingest endpoint extracts structured facts from every support interaction: the customer's plan, their technical environment, past issues and resolutions, and stated preferences. These facts are stored with evidence spans pointing back to the exact messages they came from, so agents can verify the source of any recalled fact.
When a customer returns, the recall endpoint retrieves their complete context within your token budget. The support agent immediately knows the customer's history, avoids asking redundant questions, and can reference prior interactions. CLAIV's tiered memory system ensures high-priority facts (like open tickets or account status) are always included in the hot tier, while lower-priority details surface when the query is relevant.
The recall response includes a synthesized natural-language answer ready to inject into the agent's system prompt alongside the structured facts. Agents get the customer's full picture in one call — no custom context assembly logic required. As agents ingest outcomes from each resolved ticket, CLAIV accumulates richer context about account history, improving recall relevance over time.
If a customer requests data deletion, the forget endpoint removes their data and returns a timestamped audit receipt proving compliance, meeting GDPR and CCPA requirements with structured proof rather than best-effort promises.
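The three calls described above can be sketched as plain request payloads. The endpoint paths match the examples on this page; the helper names and exact field set are illustrative, not a documented SDK.

```python
# Minimal sketch of the three CLAIV payloads for a support workflow.
# Paths (/v6/ingest, /v6/recall, /v6/forget) follow this page's
# examples; the helper functions are illustrative, not part of an SDK.

def ingest_payload(user_id: str, content: str, event_type: str = "message") -> dict:
    """Body for POST /v6/ingest: one support message to extract facts from."""
    return {"user_id": user_id, "type": event_type, "content": content}

def recall_payload(user_id: str, conversation_id: str, query: str) -> dict:
    """Body for POST /v6/recall: query-driven context retrieval."""
    return {"user_id": user_id, "conversation_id": conversation_id, "query": query}

def forget_payload(user_id: str) -> dict:
    """Body for POST /v6/forget: deletion that returns an audit receipt."""
    return {"user_id": user_id}

ingest = ingest_payload("customer-4821", "Webhooks failing after v2.3 update.")
recall = recall_payload("customer-4821", "support-conv-789", "open issues?")
forget = forget_payload("customer-4821")
```

Each payload is sent as the JSON body of the corresponding POST request; the recall response carries the synthesized context described above.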
Key Outcomes
- Customers stop repeating themselves — agents see full history on first message
- Escalation handoffs preserve all prior context automatically
- Hot-tier facts (open tickets, plan status) always surfaced regardless of query
- GDPR/CCPA deletion receipts satisfy legal and compliance teams
- llm_context.text injects directly into the system prompt — no post-processing required
Example: Returning Customer Interaction
Step 1: Ingest the support conversation
POST /v6/ingest
{
"user_id": "customer-4821",
"type": "message",
"content": "I'm on the Pro plan and my
webhook integrations stopped working
after updating to v2.3. I already tried
regenerating my API key."
}Step 2: Recall context on next session
POST /v6/recall
{
"user_id": "customer-4821",
"conversation_id": "support-conv-789",
"query": "What issues has this customer
reported?"
}
// → llm_context.text: "Customer is on Pro
// plan, reported webhook failures after
// v2.3 update, already attempted key regen."
// → answer_facts: [{ predicate: "has_plan",
// object_text: "Pro" },
// { predicate: "reported_issue",
// object_text: "Webhook failures" }]
Use Case
Memory for AI Personal Assistants
Create AI assistants with long-term user memory that persists across sessions, devices, and contexts. Build a continuous experience where the AI understands user preferences, habits, goals, and evolving needs.
The Problem
Personal AI assistants promise to know you over time, but most reset with every session. Users tell their assistant about dietary restrictions, work preferences, travel habits, and project goals only to start over the next day. The experience feels hollow because the AI has no mechanism to retain and organize long-term personal context.
Building memory from scratch means designing extraction logic, conflict resolution, temporal tracking, and storage infrastructure. Teams spend months building plumbing instead of the assistant experience that differentiates their product. And when users request data deletion, there's no standardized way to prove it happened.
Life changes create another challenge: users move cities, change jobs, shift dietary preferences, start new hobbies. A memory system that can't track these transitions will surface outdated facts — the user used to be vegetarian, or used to commute by train — with the same confidence as current information. For subscription-based AI products, this kind of stale-memory failure is a leading reason users cancel: the assistant still feels like a stranger even after months of daily use.
How CLAIV Solves It
Every conversation with the assistant is ingested into CLAIV, which automatically extracts structured facts about the user: their preferences, goals, relationships, and context. When facts change (the user switches jobs, moves cities, changes a preference), CLAIV's temporal tracking creates versioned edges between old and new values, so the assistant knows what changed and when — not just what the current state is.
Before each response, the assistant calls recall with the current query and a token budget. CLAIV returns the most relevant facts about the user, ranked by importance and recency, plus a synthesized natural-language summary ready to inject into the system prompt. The assistant responds with full awareness of the user's history without any manual context management or prompt engineering.
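The per-turn flow above can be sketched as follows. The llm_context.text field matches the recall responses shown on this page; the prompt template itself is an assumption about one reasonable way to use it.

```python
# Sketch: fold CLAIV's recall output into the assistant's system prompt
# before each turn. The llm_context.text shape follows this page's
# examples; the prompt template is illustrative.

BASE_PROMPT = "You are a personal assistant."

def build_system_prompt(recall_response: dict) -> str:
    """Append the synthesized memory summary, if any, to the base prompt."""
    memory = recall_response.get("llm_context", {}).get("text", "")
    if not memory:
        return BASE_PROMPT
    return f"{BASE_PROMPT}\n\nKnown user context:\n{memory}"

resp = {"llm_context": {"text": "Uses Cursor (switched from VS Code last month)."}}
prompt = build_system_prompt(resp)
# prompt now carries the user's current context into the next LLM call
```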
With tiered memory, core identity facts (the user's name, location, primary role) stay in the hot tier and are always included in recall. Secondary preferences populate on relevant queries. Historical facts that have been superseded move to cold storage, available for temporal queries like “what did I used to use for this?” but never surfaced as current. The result is an assistant that genuinely knows the user, across devices, sessions, and months of interaction.
Key Outcomes
- Assistant recalls context from months ago without requiring re-introduction
- Temporal tracking distinguishes current preferences from outdated ones
- Core identity facts are always present — no prompt engineering required
- Life transitions (job changes, relocations) handled automatically via supersession
- Verified deletion receipts for users who want their data removed
Example: Evolving User Preferences
POST /v6/ingest
{
"user_id": "user-917",
"type": "message",
"content": "Actually, I switched from
VS Code to Cursor last month. Also
I'm focusing on Rust now instead
of Python."
}
// CLAIV automatically:
// → Creates new facts for Cursor + Rust
// → Marks VS Code + Python as superseded
// → Links old → new with temporal edges

POST /v6/recall
{
"user_id": "user-917",
"conversation_id": "conv-session-123",
"query": "development environment setup"
}
// → llm_context.text: "Uses Cursor (switched
// from VS Code last month). Currently
// focused on Rust, previously Python."
// → answer_facts ranked by relevance + recency

Use Case
Shared Memory for Multi-Agent Chat Systems
Enable specialized AI agents to share context through a unified memory layer. One agent's findings are immediately available to another, creating coherent multi-agent experiences without custom state synchronization.
The Problem
Multi-agent architectures are increasingly common: a routing agent delegates to specialized agents for billing, technical support, scheduling, or research. But each agent operates in isolation. The billing agent doesn't know what the technical support agent learned about the customer's environment. The scheduling agent doesn't know the customer prefers morning meetings.
Building shared state between agents requires custom message buses, shared databases, and synchronization logic. When facts conflict between agents (one says the user is on the Free plan, another recorded Pro), there's no built-in mechanism to detect or resolve the contradiction. The result is inconsistent, sometimes contradictory responses that undermine user trust.
Tool call outputs compound the problem. When an agent calls an external tool (checking CRM status, querying inventory, running a calculation), those results need to be remembered across agent handoffs. Without a structured memory layer, agents re-call the same tools redundantly, wasting latency and budget, or lose critical data between steps. The engineering overhead to build reliable shared state across asynchronous agent pipelines is one of the biggest friction points in production multi-agent deployments.
How CLAIV Solves It
All agents ingest into the same CLAIV user namespace. When the billing agent learns the customer upgraded to Enterprise, that fact is immediately available when the technical support agent recalls context. CLAIV acts as a shared, structured memory layer that every agent reads from and writes to — no custom synchronization required.
Tool call outputs can be ingested using the type: "tool_call" event type, preserving the results of external tool invocations as structured facts available to any downstream agent. This eliminates redundant tool calls and keeps the full context of what the pipeline has already learned.
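Ingesting a tool result can be sketched like this. The type value "tool_call" comes from the paragraph above; the content serialization and helper name are assumptions, not a documented schema.

```python
# Sketch: persist an external tool's result as a CLAIV event so any
# downstream agent can recall it. type: "tool_call" is named above;
# the content serialization here is one illustrative choice.
import json

def tool_call_event(user_id: str, tool_name: str, result: dict) -> dict:
    """Body for POST /v6/ingest recording a tool invocation's output."""
    return {
        "user_id": user_id,
        "type": "tool_call",
        "content": f"{tool_name} returned: {json.dumps(result, sort_keys=True)}",
    }

event = tool_call_event("org-acme", "crm_lookup", {"plan": "Enterprise", "seats": 50})
```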
CLAIV's conflict-aware extraction detects when a new fact contradicts an existing one. If two agents record different plan statuses, the system resolves the conflict using temporal ordering and evidence quality, ensuring all agents see consistent, up-to-date information. See our benchmark results for how this performs at scale across complex, multi-turn agent interactions.
Each agent recalls with queries and token budgets optimized for its specific task, but all draw from the same structured fact store. The evidence spans let any agent trace where a fact originated, providing full auditability across the agent network without custom logging infrastructure.
Key Outcomes
- All agents share the same user context automatically — no synchronization logic
- Tool call results persist as structured facts across agent handoffs
- Conflict detection prevents agents from contradicting each other
- Per-agent recall queries return only relevant context for each agent's role
- Full audit trail of which agent ingested each fact, traceable via evidence spans
Example: Cross-Agent Context Sharing
Billing agent ingests plan change
POST /v6/ingest
{
"user_id": "org-acme",
"type": "message",
"content": "Customer confirmed upgrade
to Enterprise plan. Annual billing,
50 seats, SSO required."
}Support agent recalls full context
POST /v6/recall
{
"user_id": "org-acme",
"conversation_id": "support-session-x",
"query": "account configuration needs"
}
// → llm_context.text includes facts
// from billing agent: Enterprise plan,
// annual, 50 seats, SSO required
// → Plus technical context from prior
// support interactions

Use Case
AI Tutoring with Long-Term Student Memory
Track student progress, learning preferences, and knowledge gaps over time. Build AI tutors that adapt their teaching approach based on each student's complete learning history.
The Problem
AI tutoring systems struggle with continuity. A student who mastered fractions last week shouldn't be re-taught the basics today. But without persistent memory, the tutor has no record of what was covered, what the student struggled with, or which explanations worked best. Each session starts from scratch, wasting time and failing to build on prior progress.
Education data is also uniquely sensitive, especially for minors. Schools and ed-tech platforms need verifiable data deletion capabilities when students leave or parents request removal. Approximate deletion isn't acceptable when student records are involved.
Adaptive curriculum is another unsolved problem. A tutor that remembers what a student knows today but not how they learned it can't adapt its teaching approach. Some students grasp concepts through worked examples; others need abstract rules first; others need visual representations. Without a structured record of which approaches produced breakthroughs for this specific student, every new concept starts from the tutor's default style rather than the student's proven optimal path. For ed-tech platforms competing on learning outcomes, this is a significant product differentiator waiting to be built.
How CLAIV Solves It
After each tutoring session, CLAIV ingests the conversation and extracts structured facts about the student's progress: concepts mastered, areas of difficulty, preferred explanation styles, and pace. These facts accumulate across sessions to form a comprehensive, evolving learner profile that grows richer with every interaction.
Before each new session, the tutor recalls the student's profile within a token budget, getting a synthesized summary of their current level, recent struggles, and optimal teaching approach. The temporal tracking shows how understanding evolved, so the tutor can identify patterns like recurring difficulties with specific concept types or which explanation formats led to breakthrough moments.
The tiered memory system keeps recent struggles and active learning goals in the hot tier, while mastered concepts move to warm storage (available when reviewing foundations) and long-ago completed topics move to cold. This ensures the tutor's system prompt focuses on what's most useful for today's session without overflowing the context window with a student's full two-year history.
When a student's data needs to be deleted, the forget endpoint provides a complete audit receipt documenting exactly what was removed and when, satisfying both FERPA and GDPR requirements with structured, verifiable proof — not a best-effort promise.
Key Outcomes
- Tutor builds on prior sessions — no re-teaching mastered concepts
- Preferred explanation styles stored and recalled for each student
- Temporal record of learning progression enables pattern detection
- Recent struggles in hot tier, long-completed topics in cold storage
- FERPA/GDPR deletion with full audit receipt for regulatory compliance
Example: Adaptive Learning Progress
POST /v6/ingest
{
"user_id": "student-302",
"type": "message",
"content": "The student solved quadratic
equations correctly but struggled with
word problems that require setting up
the equation. They prefer step-by-step
visual breakdowns over verbal
explanations."
}POST /v6/recall
{
"user_id": "student-302",
"conversation_id": "tutor-session-003",
"query": "math proficiency and learning
preferences"
}
// → llm_context.text: "Proficient in
// solving quadratic equations. Struggles
// with word problem setup. Prefers
// visual step-by-step breakdowns."
// → answer_facts include temporal
// progression of mastered concepts

Use Case
Memory for AI-Powered Developer Tools
Power IDE assistants, code review bots, and developer copilots that remember project context, coding preferences, architecture decisions, and team conventions across sessions.
The Problem
AI coding assistants generate suggestions without understanding your project's architecture, your team's conventions, or your personal coding style. Every session, the assistant needs to re-learn that you use Tailwind instead of styled-components, that your API follows REST conventions, or that your team avoids certain patterns for performance reasons.
The context window approach fails for developer tools because project context is vast and heterogeneous: package choices, architecture decisions, naming conventions, testing preferences, and dozens of team-specific rules. You can't fit it all in a prompt, and vector search doesn't distinguish between “we tried this approach and abandoned it” and “this is our current standard.”
Convention drift is a subtler problem. Teams change their standards over time — they adopt a new testing framework, deprecate a pattern, migrate to a different state manager. An assistant that doesn't track these transitions will confidently suggest the old approach to a developer who just spent the last sprint migrating away from it. For teams using AI assistants on active codebases, this produces suggestions that create immediate technical debt rather than reducing it.
How CLAIV Solves It
As developers interact with the AI assistant, CLAIV ingests conversations and extracts structured facts about the project: tech stack choices, architecture patterns, code conventions, and personal preferences. These facts persist across sessions and evolve as the project changes, forming a living knowledge base of how this codebase specifically works.
When the developer asks for help, recall returns the relevant subset of project context within the token budget. If the developer asks about database queries, CLAIV surfaces facts about the ORM, the schema conventions, and prior decisions about query patterns, but not unrelated frontend preferences that would waste tokens. The query-relevance ranking ensures the assistant always sees the most applicable project context first.
CLAIV's temporal tracking is particularly valuable for developer tools: when a team migrates from one framework to another, the memory reflects the current stack with the transition clearly recorded. The assistant knows what you use now and what you replaced — it can even answer “why did we move away from X?” if that reasoning was captured during the migration discussion.
Project namespacing keeps project-level context (e.g., user_id: "project-webapp") separate from individual developer preferences, so the assistant can pull both project conventions and personal coding style in the same session. Read the concepts docs for how to structure namespacing for your use case.
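Layered context can be sketched as two recall calls, one per namespace, with the summaries concatenated. The "project-webapp" namespace matches this page's example; the "dev-alice" namespace and the merge format are hypothetical.

```python
# Sketch: recall project conventions and personal style from separate
# namespaces, then layer them for the assistant's prompt.
# "project-webapp" matches this page's example; "dev-alice" and the
# merge format are hypothetical.

def recall_payload(user_id: str, conversation_id: str, query: str) -> dict:
    """Body for POST /v6/recall against one namespace."""
    return {"user_id": user_id, "conversation_id": conversation_id, "query": query}

project_req = recall_payload("project-webapp", "dev-session-today", "API patterns")
personal_req = recall_payload("dev-alice", "dev-session-today", "code style preferences")

def layered_context(project_ctx: str, personal_ctx: str) -> str:
    """Combine both llm_context.text summaries into one prompt section."""
    return f"Project conventions: {project_ctx}\nDeveloper preferences: {personal_ctx}"

combined = layered_context("tRPC, Zod, React Query; no Redux", "prefers explicit types")
```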
Key Outcomes
- Assistant knows your tech stack and conventions from the first message of every session
- Migration tracking prevents suggestions that reintroduce deprecated patterns
- Query-relevant recall surfaces database facts for DB questions, UI facts for UI questions
- Project and developer namespaces can be combined for layered context
- Team convention changes propagate automatically via temporal supersession
Example: Project-Aware Code Suggestions
Ingest architecture decision
POST /v6/ingest
{
"user_id": "project-webapp",
"type": "message",
"content": "We decided to use tRPC
instead of REST for all new endpoints.
Zod for validation. React Query for
client-side state. No Redux."
}Recall before generating code
POST /v6/recall
{
"user_id": "project-webapp",
"conversation_id": "dev-session-today",
"query": "API and state management
patterns"
}
// → llm_context.text: "tRPC (not REST),
// Zod validation, React Query, no Redux"
// → Assistant generates tRPC router
// instead of Express route handler

Use Case
Compliant Memory for Healthcare AI
Track patient history, clinical context, and care preferences with full GDPR-compliant deletion. Structured audit trails ensure every memory operation is traceable and verifiable.
The Problem
Healthcare AI applications face a unique tension: they need rich patient context to provide useful assistance, but that context is among the most regulated data in any industry. HIPAA, GDPR, and sector-specific regulations require not just data protection but verifiable deletion when requested. “The model was retrained without your data” is not sufficient proof of deletion.
Clinical context is also inherently temporal. A patient's medication changes, symptoms evolve, and treatment plans are updated. An AI assistant that recalls outdated medical information is not just unhelpful but potentially dangerous. The memory system must track what is current versus what is historical, and surface the right information at the right time.
Care coordination across providers adds another layer. A patient seen by their GP, a specialist, and a telehealth service may have three separate clinical records with no shared context. When each AI assistant operates in isolation, each provider starts from scratch — asking about medications already recorded elsewhere, missing recent test results, or unaware of new diagnoses. For patients managing chronic conditions, this fragmentation leads to redundant tests, drug interaction risks, and reduced care quality.
How CLAIV Solves It
CLAIV's structured fact extraction captures clinical context with evidence spans that trace every fact back to its source message. When a medication is changed, temporal tracking records the transition with timestamps, so the AI knows both the current and historical state with full chronological detail. The tiered memory system ensures critical information (allergies, current medications, active diagnoses) stays in the hot tier and is always surfaced, while historical data remains available for deeper clinical queries.
For care coordination, shared user namespaces allow multiple providers to ingest into the same patient record. A specialist adds findings, the telehealth service adds follow-up notes, and the GP's assistant can recall the complete picture on the next visit. CLAIV's conflict detection flags when two providers have recorded contradictory information (e.g., different medication dosages), surfacing the discrepancy for clinical review rather than silently picking one.
The forget endpoint is designed for regulated environments. When a patient requests deletion, CLAIV returns a comprehensive audit receipt with timestamps, fact counts, and a verifiable record of what was removed. This serves as the compliance artifact that regulators and patients expect — not a promise, but documented proof.
Combined with enterprise deployment options, healthcare teams can operate CLAIV within their existing compliance frameworks and data residency requirements while gaining the benefits of structured, persistent memory for their AI applications.
Key Outcomes
- Current medications and active diagnoses always in hot tier — never missed
- Medication changes tracked with timestamps — outdated dosages clearly superseded
- Care coordination across providers via shared patient namespace
- Contradictory clinical facts flagged for review instead of silently resolved
- HIPAA/GDPR deletion with verifiable audit receipts for compliance records
Example: Patient Context with Audit
POST /v6/ingest
{
"user_id": "patient-8842",
"type": "message",
"content": "Patient switched from
metformin to jardiance due to GI side
effects. Current A1C is 7.2, down from
8.1 three months ago. No allergies
updated."
}
// CLAIV extracts:
// → Current medication: Jardiance
// → Prior medication: metformin (superseded)
// → A1C: 7.2 (prior: 8.1, 3 months ago)
// → Each fact linked to source message

POST /v6/forget
{
"user_id": "patient-8842"
}
// → {
// "receipt_id": "f47ac10b-...",
// "deleted_counts": {
// "events": 12,
// "facts": 47,
// "episodes": 5
// }
// }

Why CLAIV
Purpose-built for AI memory
Unlike generic vector databases or conversation window approaches, CLAIV provides structured, deterministic memory designed specifically for AI chat applications.
Structured Facts, Not Embeddings
CLAIV extracts discrete, evidence-backed facts instead of embedding raw text. Every fact traces to its source message with character-exact spans. No more "similar but wrong" retrieval results.
See how it works

Query-Driven Recall
Send a natural language query. Optionally include a conversation_id for conversation-scoped weighting, or omit it for cross-chat memory. CLAIV routes across multiple search channels in parallel, then synthesizes the most relevant facts into llm_context.text — ready to inject into your system prompt.
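The two recall modes read as follows in payload form; the optional-field handling is an assumption consistent with the examples on this page.

```python
# Sketch: conversation-scoped vs. cross-chat recall. Including
# conversation_id weights results toward that conversation; omitting it
# searches across all of the user's chats, per the description above.
from typing import Optional

def recall_payload(user_id: str, query: str,
                   conversation_id: Optional[str] = None) -> dict:
    """Body for POST /v6/recall; conversation_id is optional."""
    body = {"user_id": user_id, "query": query}
    if conversation_id is not None:
        body["conversation_id"] = conversation_id
    return body

scoped = recall_payload("user-917", "travel plans", conversation_id="conv-42")
cross_chat = recall_payload("user-917", "travel plans")
```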
Read the docs

Auditable Deletion
Forget returns a timestamped receipt documenting exactly what was deleted. Real compliance for GDPR, CCPA, HIPAA, and FERPA, with structured proof your legal team can reference.
Compare approaches

Ready to add memory to your AI?
Three API calls to persistent, deterministic memory. Start with the free tier and scale as your application grows.
Join teams building smarter AI across customer support, personal assistants, multi-agent systems, education, developer tools, and healthcare.