# Integration Overview

Memori Cloud works with all major LLM providers and frameworks. Register any supported client and Memori handles memory capture, augmentation, and recall automatically. All you need is your Memori API key and provider credentials; no database setup is required.

## Supported Providers

| Provider | Integration | Python Install | TypeScript Install |
|---|---|---|---|
| OpenAI | Direct SDK wrapper | `pip install memori openai` | `npm install @memorilabs/memori openai` |
| Anthropic | Direct SDK wrapper | `pip install memori anthropic` | `npm install @memorilabs/memori @anthropic-ai/sdk` |
| Google Gemini | Direct SDK wrapper | `pip install memori google-genai` | `npm install @memorilabs/memori @google/genai` |
| xAI Grok | OpenAI-compatible | `pip install memori openai` | Coming soon |
| Nebius AI Studio | OpenAI-compatible | `pip install memori openai` | Coming soon |
| DeepSeek | OpenAI-compatible | `pip install memori openai` | Coming soon |
| AWS Bedrock | LangChain adapter | `pip install memori langchain-aws` | Coming soon |
| LangChain | Framework support | `pip install memori langchain-openai` | Coming soon |
| Agno | Framework support | `pip install memori agno` | Coming soon |
| Pydantic AI | Framework support | `pip install memori pydantic-ai` | Coming soon |

All providers support sync, async, streamed, and unstreamed modes.
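For the streamed path, a registered client is called exactly like an unregistered one. The sketch below reuses the registration pattern from the examples in this guide; it assumes an `OPENAI_API_KEY` in the environment, the model name is illustrative, and the comment about when capture occurs reflects the general claim above rather than a documented guarantee.

```python
from memori import Memori
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment
client = OpenAI()
mem = Memori().llm.register(client)
mem.attribution(entity_id="user_123", process_id="my_agent")

# Streaming uses the standard OpenAI streaming call; iterate the chunks
# as usual. Streamed mode is supported, so the exchange is still captured.
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```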

## Pydantic AI

Register the `Agent` instance directly; Memori wraps `run_sync` and `run` automatically:

```python
from memori import Memori
from pydantic_ai import Agent

agent = Agent("openai:gpt-4o-mini")

# Register the agent, then tag captured memories with entity and process IDs
mem = Memori().llm.register(agent)
mem.attribution(entity_id="user_123", process_id="pydantic_agent")

result = agent.run_sync("Hello!")
print(result.output)
```

## OpenAI-Compatible Providers

Any provider with an OpenAI-compatible API works by setting a custom `base_url` on the OpenAI client. Dedicated guides are available for xAI Grok, Nebius AI Studio, and DeepSeek; the same pattern also works for Azure OpenAI, NVIDIA NIM, and others.

```python
import os

from memori import Memori
from openai import OpenAI

# Point the OpenAI client at the compatible provider's endpoint
client = OpenAI(
    base_url="https://api.studio.nebius.com/v1/",
    api_key=os.getenv("NEBIUS_API_KEY"),
)
mem = Memori().llm.register(client)
mem.attribution(entity_id="user_123", process_id="my_agent")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

## OpenAI Responses API

A registered OpenAI client also covers the Responses API; no extra setup is needed:

```python
from memori import Memori
from openai import OpenAI

client = OpenAI()

mem = Memori().llm.register(client)
mem.attribution(entity_id="user_123", process_id="my_agent")

response = client.responses.create(
    model="gpt-4o-mini",
    input="Hello!",
    instructions="You are a helpful assistant.",
)
print(response.output_text)
```