Add memory to your AI
in 2 API calls
Working memory in under 5 minutes. No infra required.
Stop rebuilding memory logic. CLAIV gives your chatbot, agent, or app persistent, structured memory across conversations and documents.
$ npm install @claiv/memory

Ingest — store a memory

import { CLAIV } from '@claiv/memory'

const claiv = new CLAIV({ apiKey })

await claiv.ingest({
  user_id: 'user-123',
  conversation_id: 'conv-456',
  type: 'message',
  role: 'user',
  content: 'I run a fitness business'
})

Recall — inject into prompt

const memory = await claiv.recall({
  user_id: 'user-123',
  conversation_id: 'conv-456',
  query: 'What does the user do?'
})

// memory.llm_context.text →
// "User runs a fitness business"
systemPrompt += memory.llm_context.text
Stop building this yourself
Every team rebuilds the same memory layer. Most ship something fragile. Here's what you're replacing.
// Memory logic you wrote — and now maintain
storeChatHistory(messages)
buildVectorSearch(embeddings)
rerankResults(chunks)
handleContradictions(facts)
manageTokenLimits(context)
pruneOldContext(history)
handleGDPRDeletion(userId)

~8 systems · ~weeks of engineering · ongoing maintenance
// Two calls. That's the entire layer.
claiv.ingest({ user_id, content })
claiv.recall({ user_id, query })
// That's it.
// Extraction → automatic
// Contradiction handling → automatic
// Token budgeting → automatic
// GDPR receipts → automatic

Working in < 5 minutes · no infra to manage
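To make the two-call surface concrete, here is a toy, fully in-memory stand-in. It is not the real SDK: `@claiv/memory` does extraction, contradiction handling, and ranking server-side, while this sketch just stores raw strings per user and does naive keyword matching. It only illustrates the call shapes shown above.

```typescript
// Toy stand-in for the two-call memory surface (illustrative only).
// The real SDK calls the CLAIV API; this keeps raw strings per user.
type IngestArgs = { user_id: string; content: string };
type RecallArgs = { user_id: string; query: string };

class ToyMemory {
  private store = new Map<string, string[]>();

  ingest({ user_id, content }: IngestArgs): void {
    const items = this.store.get(user_id) ?? [];
    items.push(content);
    this.store.set(user_id, items);
  }

  recall({ user_id, query }: RecallArgs): { llm_context: { text: string } } {
    // Naive "retrieval": return stored lines sharing a word with the query.
    const words = query.toLowerCase().split(/\W+/).filter((w) => w.length > 3);
    const hits = (this.store.get(user_id) ?? []).filter((line) =>
      words.some((w) => line.toLowerCase().includes(w))
    );
    return { llm_context: { text: hits.join('\n') } };
  }
}
```

The point of the real service is that everything between these two calls (extraction, versioning, budgeting) is someone else's problem.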
The problem
Your AI doesn't have memory.
It has a token limit.
This is not a model problem. It's a memory problem.
Users repeat themselves every session
"I already told you my name, my preferences, my project details." Without persistent memory, every conversation resets.
Context gets truncated silently
Stuffing conversation history into prompts doesn't scale. Long threads get cut off mid-thought. Critical facts disappear without warning.
RAG returns similar text, not the right fact
Embedding-based retrieval finds "similar" text, not the specific fact you need. Contradictions pass through undetected.
No way to actually delete user data properly
GDPR requires provable deletion. "We asked the model to forget" isn't compliance. You need timestamped, auditable proof.
The solution
Three API calls. That's it.
No schemas. No pipelines. No memory logic to build.
Ingest
POST /v6/ingest

Store messages → structured facts extracted automatically. Contradictions resolved. Timeline tracked. Evidence spans preserved.
Recall
POST /v6/recall

Get ranked context → ready for your system prompt. Facts are ranked and fitted within your token budget. One field to inject.
Forget
POST /v6/forget

Delete user data → with audit-proof receipt. GDPR-compliant deletion with timestamped documentation of exactly what was removed.
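The page does not show the receipt format, so as a purely hypothetical illustration (field names are assumptions, not CLAIV's documented schema), an audit-proof deletion receipt needs at minimum a timestamp and an enumeration of what was removed:

```typescript
// Hypothetical shape of an audit receipt for a /v6/forget call.
// All field names here are illustrative assumptions.
interface ForgetReceipt {
  user_id: string;
  deleted_at: string;         // ISO-8601 timestamp of the deletion
  facts_deleted: number;      // how many stored facts were removed
  conversation_ids: string[]; // which conversations were purged
}

function buildReceipt(
  user_id: string,
  factsDeleted: number,
  convs: string[]
): ForgetReceipt {
  return {
    user_id,
    deleted_at: new Date().toISOString(),
    facts_deleted: factsDeleted,
    conversation_ids: convs,
  };
}
```

A record like this is what makes the deletion provable: you can hand an auditor the timestamp and scope rather than a promise that the model "forgot".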
Under the hood
What happens when you ingest
Deterministic. Testable. Production-safe.
You send messages
Any conversation turn — user or assistant. You send it to /v6/ingest via API.
Facts extracted async
CLAIV parses structured facts from the text. Contradictions are resolved. Temporal links are built.
Versioned and indexed
Facts are timestamped, versioned, and tiered. High-importance facts always appear in recall.
Recall within token budget
You specify a token limit. CLAIV ranks, fits, and returns a ready-to-inject context string.
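As a rough sketch of that last step (an assumption about the mechanics, not CLAIV's actual ranking algorithm): sort facts by importance, estimate token cost, and greedily pack the budget. The ~4-characters-per-token estimate is a common heuristic, also an assumption here.

```typescript
// Greedy token-budget packing: keep the highest-importance facts
// that fit, using a crude ~4-chars-per-token estimate.
interface Fact { text: string; importance: number }

const estimateTokens = (s: string) => Math.ceil(s.length / 4);

function fitToBudget(facts: Fact[], tokenBudget: number): string {
  const kept: string[] = [];
  let used = 0;
  for (const f of [...facts].sort((a, b) => b.importance - a.importance)) {
    const cost = estimateTokens(f.text);
    if (used + cost > tokenBudget) continue; // skip facts that overflow
    kept.push(f.text);
    used += cost;
  }
  return kept.join('\n');
}
```

With a tight budget only the most important facts survive, which is why high-importance facts can be guaranteed to appear in recall.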
Differentiation
Not a vector database
with a memory label
Vector search finds similar text. CLAIV stores structured facts.
Document Memory
RAG that actually works
Upload documents once. CLAIV parses structure, indexes content, and injects the right sections into memory automatically — alongside conversation facts.
Section-aware retrieval
Not chunk spam. CLAIV understands document structure and surfaces the right section in context.
Works alongside conversation memory
Document facts and conversation facts are recalled together in a single /v6/recall call. One injection.
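Conceptually, "one injection" means document sections and conversation facts land in a single ranked list. The toy merge below is a hypothetical illustration of that idea; the real merging happens server-side inside /v6/recall.

```typescript
// Toy merge of conversation facts and document sections into the
// single context string a recall call returns (illustrative only).
interface Recalled {
  source: 'conversation' | 'document';
  text: string;
  score: number;
}

function mergeContext(items: Recalled[]): string {
  return [...items]
    .sort((a, b) => b.score - a.score) // one ranked list across both sources
    .map((i) => `[${i.source}] ${i.text}`)
    .join('\n');
}
```

Because both sources compete in the same ranking, a highly relevant conversation fact can outrank a document section and vice versa, with no second retrieval pipeline to maintain.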
No pipeline to maintain
Upload via POST /v6/documents. Chunking, embedding, and indexing handled automatically.
curl -X POST https://api.claiv.io/v6/documents \
-H "Authorization: Bearer <API_KEY>" \
-H "Content-Type: application/json" \
-d '{
"user_id": "user_42",
"project_id": "proj_abc",
"document_name": "q4-roadmap.txt",
"content": "Q4 roadmap: ship RAG..."
}'

200 OK
{
"document_id": "doc_7fx9qr",
"sections": [{ "title": "Overview" }],
"spans_created": 14,
"status": "processing"
}

We outperform typical memory systems
Tested across 10 dialogue datasets and 1,540 questions covering recall, temporal reasoning, and generation. CLAIV outperforms Mem0 and GPT-based approaches on their own published results.
Scores by category
Overall score
75.0%
vs competitors
All 10 LoCoMo dialogue sets · GPT-4o mini LLM judge · Best published results per provider
Use cases
If your product has a chat interface,
CLAIV gives it memory.
AI chatbots with memory
Users never repeat themselves. Preferences, context, and history persist across every session.
AI agents with workflows
Agents remember task state, user goals, and prior actions. Long-running workflows stay coherent.
Internal copilots
Retain institutional knowledge, team conventions, and workflow preferences across your organisation.
Customer support systems
Know the customer's history before they speak. Resolve faster. Escalate with full context.
Pricing
Pay for memory usage.
Nothing else.
No per-user fees. No seat licences. You pay for ingests — messages stored into memory.
Typical usage example
~30 messages/month
~$0.12/month
that's it — no hidden fees
Free
200 ingests/mo
Starter
5K ingests/mo
Growth
20K ingests/mo
Scale
75K ingests/mo
Stop rebuilding memory.
Free tier included. No credit card required.