# Async Patterns

Memori works with async/await out of the box in both Python and TypeScript. In Python, use `AsyncOpenAI` or `AsyncAnthropic` instead of their sync counterparts — everything else stays the same. TypeScript is natively async.

## When to Use Async

| Scenario | Python | TypeScript | Why |
| --- | --- | --- | --- |
| Web servers | Yes | Default | Concurrent request handling |
| Chatbots with many users | Yes | Default | Non-blocking I/O |
| CLI scripts | No | Default | Sync is simpler in Python |
| Jupyter notebooks | No | — | Event loop already running |
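The Jupyter row deserves a note: notebooks already run an event loop, so a top-level `asyncio.run(...)` raises `RuntimeError` there — in a cell you can simply `await` a coroutine directly. A stdlib-only sketch of detecting which situation you are in (no Memori calls involved):

```python
import asyncio

async def main() -> str:
    return "done"

try:
    # Raises RuntimeError when no event loop is running in this thread.
    asyncio.get_running_loop()
except RuntimeError:
    # No loop yet (plain script / CLI): asyncio.run is safe.
    result = asyncio.run(main())
else:
    # A loop is already running (e.g. a Jupyter cell): asyncio.run would
    # raise RuntimeError; in a notebook you would just `await main()` instead.
    result = None

print(result)
```

When run as a plain script, the `except` branch executes and `result` is `"done"`.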

TypeScript is natively async — all Memori SDK calls return Promises. No special async client or asyncio setup is needed.

## Basic Async Setup

```python Async Setup
import asyncio
from memori import Memori
from openai import AsyncOpenAI

client = AsyncOpenAI()

mem = Memori().llm.register(client)
mem.attribution(entity_id="user_123", process_id="async_agent")

async def main():
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "I prefer async Python."}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
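The payoff of the async client shows up when several calls are in flight at once: `asyncio.gather` overlaps the waits instead of serializing them. A minimal stdlib-only sketch of that pattern — `fake_llm_call` is a hypothetical stand-in for an awaited `client.chat.completions.create(...)` call:

```python
import asyncio
import time

async def fake_llm_call(user: str, delay: float) -> str:
    # Hypothetical stand-in for an awaited LLM request.
    await asyncio.sleep(delay)
    return f"reply for {user}"

async def main():
    start = time.perf_counter()
    # Three 0.1 s "calls" overlap, so the batch takes ~0.1 s, not 0.3 s.
    replies = await asyncio.gather(
        fake_llm_call("alice", 0.1),
        fake_llm_call("bob", 0.1),
        fake_llm_call("carol", 0.1),
    )
    elapsed = time.perf_counter() - start
    return replies, elapsed

replies, elapsed = asyncio.run(main())
print(replies, f"{elapsed:.2f}s")
```

`gather` returns results in argument order, so the replies line up with the users even though the calls complete concurrently.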

## Web Server Example

```python Web Server
import os
from fastapi import FastAPI
from pydantic import BaseModel
from memori import Memori
from openai import AsyncOpenAI

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat/{user_id}")
async def chat(user_id: str, req: ChatRequest):
    client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    mem = Memori().llm.register(client)
    mem.attribution(entity_id=user_id, process_id="fastapi_async")

    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": req.message}]
    )
    return {"response": response.choices[0].message.content}
```