Quickstart

Add persistent memory to your AI application in minutes. Core endpoints: ingest, recall, forget — plus the document system for RAG.

1. Create an account and project

Sign up at claiv.io/signup and create your first project from the dashboard. Each project gets an isolated memory tenant with its own data and API keys.

2. Generate an API key

Go to your project's API Keys page and create a new key. Copy it immediately — it's only shown once. Your tenant is inferred from your key; you never send a tenant_id in requests.

Header: All requests use Bearer auth
Authorization: Bearer YOUR_API_KEY

3. Ingest your first event

Send a message to CLAIV. The API stores the event immediately and runs fact extraction asynchronously (1–5 seconds) in the background. Always send conversation_id — it is required on both ingest and recall and enables conversation history, working memory, and pending plans.

TypeScript
const response = await fetch('https://api.claiv.io/v6/ingest', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY',
  },
  body: JSON.stringify({
    user_id:         'user-123',
    type:            'message',
    role:            'user',
    conversation_id: 'session-abc',   // required — enables working memory & history
    content:         'I use React and TypeScript. My deadline is March 15th.',
  }),
});

const { event_id, deduped } = await response.json();
Python
import requests

response = requests.post('https://api.claiv.io/v6/ingest',
  headers={'Authorization': 'Bearer YOUR_API_KEY'},
  json={
    'user_id':         'user-123',
    'type':            'message',
    'role':            'user',
    'conversation_id': 'session-abc',
    'content':         'I use React and TypeScript. My deadline is March 15th.',
  }
)

data = response.json()
# data['event_id'], data['deduped']
cURL
curl -X POST https://api.claiv.io/v6/ingest \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id":         "user-123",
    "type":            "message",
    "role":            "user",
    "conversation_id": "session-abc",
    "content":         "I use React and TypeScript. My deadline is March 15th."
  }'
Response
{
  "event_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "deduped": false
}
Enrichment is asynchronous. After ingest, the worker runs the Extract → Map → Gate → Embed → Tier pipeline in the background (1–5 s). Recall may return empty results until the first enrichment cycle completes.
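Because enrichment runs in the background, a recall issued immediately after ingest may come back empty. A minimal polling sketch for tests or scripts (the retry count and delay here are illustrative defaults, not documented limits; `recall_fn` stands in for your own call to `/v6/recall`):

```python
import time

def wait_for_enrichment(recall_fn, attempts=5, delay=1.0):
    """Poll recall until the first enrichment cycle has produced facts.

    recall_fn: zero-argument callable returning a recall response dict.
    attempts/delay are illustrative, not documented limits.
    """
    result = {}
    for i in range(attempts):
        result = recall_fn()
        # Either individual facts or a synthesized narrative means
        # enrichment has completed at least once.
        if result.get("answer_facts") or result.get("llm_context", {}).get("text"):
            return result
        if i < attempts - 1:
            time.sleep(delay)  # enrichment typically completes in 1-5 s
    return result  # last (possibly still empty) response
```

In production you would normally not poll at all: ingest and move on, and let the next recall pick up whatever facts have been extracted by then.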

4. Recall context for your LLM

Before your LLM responds, ask CLAIV for relevant context. You get back a pre-synthesized llm_context.text ready to inject into your system prompt — no post-processing needed. Send conversation_id on every recall call — it is required and drives conversation history and working memory.

TypeScript: Recall + LLM injection
const memory = await fetch('https://api.claiv.io/v6/recall', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY',
  },
  body: JSON.stringify({
    user_id:         'user-123',
    conversation_id: 'session-abc',  // required — drives history & working memory
    query:           'What stack does this user work with?',
  }),
}).then(r => r.json());

// Inject the pre-synthesized narrative directly into your system prompt
const messages = [
  {
    role: 'system',
    content: memory.llm_context.text
      ? `User memory:\n${memory.llm_context.text}`
      : 'No relevant memory found.',
  },
  { role: 'user', content: 'What stack should we use for the new service?' },
];

// Pass to your LLM (e.g. OpenAI)
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages,
});

// Individual facts also available:
// memory.answer_facts       → facts directly answering the query
// memory.supporting_facts   → corroborating facts
// memory.background_context → broad always-on user context
Python
import requests

memory = requests.post('https://api.claiv.io/v6/recall',
  headers={'Authorization': 'Bearer YOUR_API_KEY'},
  json={
    'user_id':         'user-123',
    'conversation_id': 'session-abc',  # required
    'query':           'What stack does this user work with?',
  }
).json()

# Inject the pre-synthesized narrative into your LLM prompt
context_text = memory['llm_context']['text']
system_prompt = (
  f"User memory:\n{context_text}" if context_text
  else "No relevant memory found."
)
Response
{
  "answer_facts": [
    {
      "fact_id":       "uuid-1",
      "subject":       "user",
      "kind":          "preference",
      "predicate":     "prefers_framework",
      "object_text":   "React",
      "relation_phrase": "prefers working with",
      "source_text":   "I use React and TypeScript",
      "confidence":    0.95,
      "importance":    0.82,
      "tier":          "warm",
      "created_at":    "2026-03-06T10:00:00Z",
      "temporal_matches": []
    }
  ],
  "supporting_facts":   [],
  "background_context": [],
  "llm_context": {
    "text": "The user prefers React and TypeScript. Their project deadline is March 15th.",
    "fact_ids": ["uuid-1", "uuid-2"],
    "reference_time": "2026-03-06T12:00:00Z",
    "anchor_source":  "server_now"
  },
  "routing": {
    "mode": "single",
    "kinds": ["preference"],
    "predicates": ["prefers_framework"],
    "temporal_intent": null
  }
}

Inject llm_context.text directly into your system prompt — it's already synthesized and ready. The answer_facts array gives you individual facts with evidence spans, confidence scores, and predicate labels for custom rendering or citation display.
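If you do render facts yourself instead of injecting llm_context.text, a small formatter over answer_facts might look like this (field names follow the response shape above; the bullet layout is just one possible rendering, not a prescribed format):

```python
def render_citations(facts):
    """Format recall facts as a bulleted citation list for display.

    Each fact dict is expected to carry the fields shown in the
    /v6/recall response: subject, relation_phrase, object_text,
    confidence, and source_text (the evidence span).
    """
    lines = []
    for fact in facts:
        lines.append(
            f"- {fact['subject']} {fact['relation_phrase']} {fact['object_text']} "
            f"(confidence {fact['confidence']:.2f}, source: \"{fact['source_text']}\")"
        )
    return "\n".join(lines)
```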

5. Upload a document for RAG (optional)

Use POST /v6/documents to index reference material (product manuals, knowledge base articles, etc.) for retrieval-augmented generation. Each document is parsed into a sections → spans tree, embedded synchronously, and available for recall immediately. LLM-generated distillations complete asynchronously and unlock richer structured recall. project_id is required.

cURL: Upload a document
curl -X POST https://api.claiv.io/v6/documents \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id":       "user-123",
    "project_id":    "proj-abc",
    "document_name": "Product Manual",
    "content":       "... full document text ..."
  }'
Response
{
  "document_id":  "doc_uuid",
  "document_name": "Product Manual",
  "project_id":   "proj-abc",
  "collection_id": null,
  "sections": [
    { "node_id": "node_uuid1", "title": "Introduction" },
    { "node_id": "node_uuid2", "title": "Chapter 1" }
  ],
  "spans_created": 42,
  "status": "processing"
}
Spans are available for recall immediately after upload returns. Pass document_id on your next /v6/recall call to restrict retrieval to that document, or pass collection_id to recall across a group of documents. Delete a document with DELETE /v6/documents/:document_id.
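A sketch of building a scoped recall body in Python (treating document_id and collection_id as mutually exclusive is this sketch's own guard, not a documented API constraint):

```python
def scoped_recall_payload(user_id, conversation_id, query,
                          document_id=None, collection_id=None):
    """Build a /v6/recall body restricted to one document or a collection.

    The mutual-exclusion check is a convention of this helper, not a
    documented constraint of the API.
    """
    if document_id and collection_id:
        raise ValueError("pass document_id or collection_id, not both")
    payload = {
        "user_id": user_id,
        "conversation_id": conversation_id,  # required on every recall
        "query": query,
    }
    if document_id:
        payload["document_id"] = document_id
    if collection_id:
        payload["collection_id"] = collection_id
    return payload
```

Pass the result as the JSON body of the same POST /v6/recall call shown in step 4.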

6. Forget (GDPR-compliant deletion)

Delete all memory for a user. Returns a receipt documenting exactly what was removed. Optionally scope by conversation_id, project_id, time range, or document_id.

TypeScript: Full user deletion
const result = await fetch('https://api.claiv.io/v6/forget', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY',
  },
  body: JSON.stringify({ user_id: 'user-123' }),
}).then(r => r.json());

// result.receipt_id     → audit trail ID
// result.deleted_counts → breakdown of what was removed
Python
result = requests.post('https://api.claiv.io/v6/forget',
  headers={'Authorization': 'Bearer YOUR_API_KEY'},
  json={'user_id': 'user-123'}
).json()

print(result['receipt_id'])
print(result['deleted_counts'])
cURL
curl -X POST https://api.claiv.io/v6/forget \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "user_id": "user-123" }'
Response
{
  "receipt_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "deleted_counts": {
    "events":     12,
    "chunks":      8,   // non-zero when documents were also deleted
    "episodes":    3,
    "facts":      28,
    "claims":      1,
    "open_loops":  2
  }
}
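The receipt can feed straight into your audit log. A minimal summarizer over the response shape above (the wording of the summary line is this sketch's own, not an API output):

```python
def summarize_receipt(receipt):
    """Condense a forget receipt into a single audit-log line.

    Expects the response shape shown above: receipt_id plus a
    deleted_counts mapping of record type to count.
    """
    counts = receipt["deleted_counts"]
    total = sum(counts.values())
    return (
        f"receipt {receipt['receipt_id']}: "
        f"removed {total} records across {len(counts)} types"
    )
```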

Next steps