Juejin · Artificial Intelligence · May 30
OpenMemory MCP 如何实现跨AI Agent共享记忆

OpenMemory is an open-source AI memory-layer MCP designed to give large language models persistent, personalized memory. It lets AI assistants remember user preferences, past conversations, and other important information, and those memories can be shared across different AI agents and applications, keeping the data truly private to the user. With a simple install and configuration, OpenMemory integrates into apps such as Cursor and Trae, enabling shared memory and a smarter user experience.

💾 OpenMemory is an open-source AI memory-layer MCP that gives large language models persistent, personalized memory; at its core, it lets AI assistants remember user preferences, conversation history, and important information.

🛠️ OpenMemory manages and uses memories through four key tools (add_memories, search_memory, list_memories, delete_all_memories). add_memories stores new memories and search_memory retrieves relevant information from stored memories; these two form the core of the memory feature.

🔒 OpenMemory builds an intelligent permission filter into the MCP: before any memory search it runs a four-dimension permission check across user, app, memory, and state, protecting the privacy of user memories as much as possible.

OpenMemory is an open-source AI memory-layer MCP that gives large language models persistent, personalized memory, letting AI assistants remember user preferences, conversation history, and important information. More importantly, those memories can be called across AI agents and applications, so the data truly belongs to the user.

A Quick Try

Following the official OpenMemory docs, install the MCP locally:

```shell
# Pull the repo
git clone https://github.com/mem0ai/mem0.git
cd openmemory

# Copy the environment config
# (note: set OPENAI_API_KEY in the .env file; the key is used for memory embedding matching)
cp api/.env.example api/.env

# Start Docker, then:
# build the MCP server and UI
make build
# run the OpenMemory MCP server and UI
make up
```

Once it's running, a UI is available at http://localhost:3000 where you can view and manage memories by hand.

Let's configure the MCP in both Cursor and Trae and see whether they can share memories.

Claude currently cannot install this MCP; someone filed an issue on GitHub, but it hasn't been resolved yet.

Configuring Cursor

supergateway can bridge stdio-based servers so they can be accessed over SSE:

```json
{
  "mcpServers": {
    "openmemory": {
      "command": "npx",
      "args": [
        "-y",
        "supergateway",
        "--sse",
        "http://localhost:8765/mcp/cursor/sse/user"
      ]
    }
  }
}
```

Configuring Trae

Trae's MCP support handles SSE configuration natively:

```json
{
  "mcpServers": {
    "memory": {
      "url": "http://localhost:8765/mcp/trae/sse/user"
    }
  }
}
```

First, in Cursor, I ask it to address me as "Lord Jerry".

Then, in Trae, I ask how it should address me.

Nice, the memories are shared.

Of course, in theory you can store far more complex memories.

A Look at the Code

I strongly recommend cloning the code and reading along:

github.com/mem0ai/mem0…

Overall Architecture

The OpenMemory project consists of two parts: the api service and the ui project.

This section focuses on the MCP-related functionality in the api layer.

Under the hood, OpenMemory uses two databases: SQLite for relational metadata and Qdrant for memory vectors.

On memory handling:

In addition, OpenMemory builds an intelligent permission filter into the MCP: before any memory search it runs a four-dimension permission check across user, app, memory, and state, protecting the privacy of user memories as much as possible.
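A minimal sketch of such a four-dimension check, using simplified stand-in models (the real SQLAlchemy models in openmemory/api/app/models.py differ in detail; `check_memory_access` and `paused_apps` below are illustrative names, not OpenMemory's API):

```python
from dataclasses import dataclass
from enum import Enum

class MemoryState(Enum):
    active = "active"
    paused = "paused"
    deleted = "deleted"

@dataclass
class Memory:
    user_id: str
    app_id: str
    state: MemoryState

def check_memory_access(memory: Memory, user_id: str, app_id: str,
                        paused_apps: set[str]) -> bool:
    """Four-dimension check: user + app + memory + state."""
    if memory.user_id != user_id:           # user dimension: owner only
        return False
    if app_id in paused_apps:               # app dimension: the requesting app must not be paused
        return False
    if memory.state != MemoryState.active:  # state dimension: only active memories are visible
        return False
    # memory dimension: per-memory access rules could be checked here as well
    return True
```

Only memories that pass every dimension make it into the candidate set for the vector search.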

Below we walk through some of the core logic in detail (only key code is kept).

Entry point

The entry point does a few things:

```python
# openmemory/api/main.py

# Create the database tables
Base.metadata.create_all(bind=engine)

# Create the MCP server
setup_mcp_server(app)

# Create the HTTP API (used by the ui project to view and manage
# memories visually; not our focus here)
app.include_router(memories_router)
app.include_router(apps_router)
app.include_router(stats_router)
app.include_router(config_router)
```

The database setup:

```python
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./openmemory.db")

# SQLAlchemy engine & session
engine = create_engine(
    DATABASE_URL,
    connect_args={"check_same_thread": False}  # Needed for SQLite
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

# Base class for models
Base = declarative_base()

# Dependency for FastAPI
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
```

MCP Tools

The OpenMemory MCP defines four tools:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mem0-mcp-server")

@mcp.tool(description="Add a new memory. This method is called everytime the user informs anything about themselves, their preferences, or anything that has any relevant information which can be useful in the future conversation. This can also be called when the user asks you to remember something.")
async def add_memories(text: str) -> str:
    ...  # xxx

@mcp.tool(description="Search through stored memories. This method is called EVERYTIME the user asks anything.")
async def search_memory(query: str) -> str:
    ...  # xxx

@mcp.tool(description="List all memories in the user's memory")
async def list_memories() -> str:
    ...  # xxx

@mcp.tool(description="Delete all memories in the user's memory")
async def delete_all_memories() -> str:
    ...  # xxx
```

Next, let's look at add_memories and search_memory in detail.

⭐️ add_memories

add_memories goes through the following flow:

```python
async def add_memories(text: str) -> str:
    # Step 1: get the client from mem0ai
    memory_client = get_memory_client()

    # Step 2: get a database session
    db = SessionLocal()

    # Step 3: get the memory-add result from the client (covered later)
    response = memory_client.add(text,
                                 user_id=uid,
                                 metadata={
                                    "source_app": "openmemory",
                                    "mcp_client": client_name,
                                 })

    # Step 4: sync the result into SQLite
    if isinstance(response, dict) and 'results' in response:
        for result in response['results']:
            memory_id = uuid.UUID(result['id'])
            memory = db.query(Memory).filter(Memory.id == memory_id).first()
            if result['event'] == 'ADD':
                if not memory:
                    memory = Memory(
                        id=memory_id,
                        content=result['memory'],
                        state=MemoryState.active
                    )
                    db.add(memory)
                else:
                    memory.state = MemoryState.active
                    memory.content = result['memory']
                # Create history entry
                history = MemoryStatusHistory(
                    memory_id=memory_id,
                    old_state=MemoryState.deleted if memory else None,
                    new_state=MemoryState.active
                )
                db.add(history)
            elif result['event'] == 'DELETE':
                if memory:
                    memory.state = MemoryState.deleted
                    memory.deleted_at = datetime.datetime.now(datetime.UTC)
                    # Create history entry
                    history = MemoryStatusHistory(
                        memory_id=memory_id,
                        old_state=MemoryState.active,
                        new_state=MemoryState.deleted
                    )
                    db.add(history)
        db.commit()
    return response
```

Step 5, automatic categorization, is triggered from the models file:

```python
# app/models.py - Step 5: categorization triggered automatically

@event.listens_for(Memory, 'after_insert')
def after_memory_insert(mapper, connection, target):
    """Automatically categorize a memory after insertion"""
    db = Session(bind=connection)
    categorize_memory(target, db)
    db.close()

def get_categories_for_memory(memory: str) -> List[str]:
    """Categorize a memory using OpenAI"""
    try:
        response = openai_client.responses.parse(
            model="gpt-4o-mini",
            instructions=MEMORY_CATEGORIZATION_PROMPT,  # categorization instructions
            input=memory,
            temperature=0,
            text_format=MemoryCategories,
        )
        response_json = json.loads(response.output[0].content[0].text)
        categories = response_json['categories']
        categories = [cat.strip().lower() for cat in categories]
        return categories
    except Exception as e:
        raise e
```

Here's the prompt used for categorization:

```python
MEMORY_CATEGORIZATION_PROMPT = """Your task is to assign each piece of information (or “memory”) to one or more of the following categories. Feel free to use multiple categories per item when appropriate.

- Personal: family, friends, home, hobbies, lifestyle
- Relationships: social network, significant others, colleagues
- Preferences: likes, dislikes, habits, favorite media
- Health: physical fitness, mental health, diet, sleep
- Travel: trips, commutes, favorite places, itineraries
- Work: job roles, companies, projects, promotions
- Education: courses, degrees, certifications, skills development
- Projects: to‑dos, milestones, deadlines, status updates
- AI, ML & Technology: infrastructure, algorithms, tools, research
- Technical Support: bug reports, error logs, fixes
- Finance: income, expenses, investments, billing
- Shopping: purchases, wishlists, returns, deliveries
- Legal: contracts, policies, regulations, privacy
- Entertainment: movies, music, games, books, events
- Messages: emails, SMS, alerts, reminders
- Customer Support: tickets, inquiries, resolutions
- Product Feedback: ratings, bug reports, feature requests
- News: articles, headlines, trending topics
- Organization: meetings, appointments, calendars
- Goals: ambitions, KPIs, long‑term objectives

Guidelines:
- Return only the categories under 'categories' key in the JSON format.
- If you cannot categorize the memory, return an empty list with key 'categories'.
- Don't limit yourself to the categories listed above only. Feel free to create new categories based on the memory. Make sure that it is a single phrase."""
```
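Per the prompt's guidelines, the model returns a JSON object with a `categories` key, which is then normalized (strip + lowercase). A small sketch of that shape and the normalization step (the example string and `normalize_categories` are illustrative, not OpenMemory code):

```python
import json

# Hypothetical example of what the categorizer is expected to return
raw = '{"categories": ["Preferences", "  Personal "]}'

def normalize_categories(response_text: str) -> list[str]:
    """Parse the categorizer's JSON and normalize each category name."""
    categories = json.loads(response_text)["categories"]
    return [cat.strip().lower() for cat in categories]
```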

Overall, the core memory-vector processing lives inside the mem0ai package; OpenMemory merely syncs the results into the SQLite database.

⭐️ search_memory

search_memory's description says "This method is called EVERYTIME the user asks anything." The intent is that the model consults historical memories on every user question, so it can answer with that history in mind.

search_memory goes through the following flow:

```python
@mcp.tool(description="Search through stored memories. This method is called EVERYTIME the user asks anything.")
async def search_memory(query: str) -> str:
    # Step 1: get the memory client
    memory_client = get_memory_client()

    # Step 2: get a database session
    db = SessionLocal()

    # Step 3: find every memory in the database that both the current user
    # and the current app are allowed to access
    user_memories = db.query(Memory).filter(Memory.user_id == user.id).all()
    accessible_memory_ids = [memory.id for memory in user_memories if check_memory_access_permissions(db, memory, app.id)]

    # Step 4: build the Qdrant query filter
    # Step 4.1: base condition, only search the current user's memories
    conditions = [qdrant_models.FieldCondition(key="user_id", match=qdrant_models.MatchValue(value=uid))]

    if accessible_memory_ids:
        # Step 4.2: permission condition, only search accessible memories
        accessible_memory_ids_str = [str(memory_id) for memory_id in accessible_memory_ids]
        conditions.append(qdrant_models.HasIdCondition(has_id=accessible_memory_ids_str))
    filters = qdrant_models.Filter(must=conditions)

    # Step 5: vectorize the query with mem0ai's embedding model
    embeddings = memory_client.embedding_model.embed(query, "search")

    # Step 6: run the vector similarity search in Qdrant
    hits = memory_client.vector_store.client.query_points(
        collection_name=memory_client.vector_store.collection_name,
        query=embeddings,      # query vector
        query_filter=filters,  # user and permission filters
        limit=10,              # return the top-10 most similar results
    )

    # Step 7: shape the Qdrant search results
    memories = hits.points
    memories = [
        {
            "id": memory.id,
            "memory": memory.payload["data"],
            "hash": memory.payload.get("hash"),
            "created_at": memory.payload.get("created_at"),
            "updated_at": memory.payload.get("updated_at"),
            "score": memory.score,
        }
        for memory in memories
    ]

    # Step 8: access logging (omitted here)
    # ...

    return json.dumps(memories, indent=2)
```

During the search, the relevant memories that are retrieved are returned to the large model for further processing.
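For illustration, here is a hypothetical helper (not part of OpenMemory) showing how an agent might fold the JSON list that search_memory returns, with its `memory` and `score` fields, into its prompt context:

```python
import json

def memories_to_context(search_result_json: str, max_items: int = 5) -> str:
    """Turn search_memory's JSON output into a context block
    that an agent could prepend to its prompt."""
    memories = json.loads(search_result_json)
    lines = [f"- {m['memory']} (score: {m['score']:.2f})"
             for m in memories[:max_items]]
    return "Relevant memories:\n" + "\n".join(lines)
```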

mem0ai

mem0ai is positioned as the middle layer, responsible for all AI-related memory processing:

```python
def get_memory_client(custom_instructions: str = None):
    """Get the mem0ai client"""
    global _memory_client, _config_hash

    # 1. Build the mem0ai config
    config = {
        # Tell mem0ai to use Qdrant as the vector store
        "vector_store": {
            "provider": "qdrant",
            "config": {
                "collection_name": "openmemory",
                "host": "mem0_store",  # Docker container name
                "port": 6333,
            }
        },
        # Tell mem0ai to use OpenAI as the LLM, used to categorize memories
        "llm": {
            "provider": "openai",
            "config": {
                "model": "gpt-4o-mini",
                "temperature": 0.1,
                "max_tokens": 2000,
                "api_key": "env:OPENAI_API_KEY"
            }
        },
        # Tell mem0ai to use OpenAI as the embedding model
        "embedder": {
            "provider": "openai",
            "config": {
                "model": "text-embedding-3-small",
                "api_key": "env:OPENAI_API_KEY"
            }
        }
    }

    # 2. Support custom instructions
    if custom_instructions:
        config["custom_fact_extraction_prompt"] = custom_instructions

    # 3. The key step: initialize mem0ai from this config
    _memory_client = Memory.from_config(config_dict=config)
    return _memory_client
```

User- and app-level access control

Another notable OpenMemory feature is access control over memories along both the user and the app dimension.

When you install the MCP for different local AI apps, each one gets a different instruction.

As you can see, the MCP link is composed as:

`http://localhost:8765/mcp/${app}/sse/${user}`

With distinct routes, OpenMemory knows exactly which user is adding and querying memories through which app.
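As a sketch, extracting the app and user back out of such a link might look like this (`parse_mcp_route` is a hypothetical helper for illustration, not OpenMemory code):

```python
from urllib.parse import urlparse

def parse_mcp_route(url: str) -> tuple[str, str]:
    """Extract (app, user) from an OpenMemory MCP SSE URL of the form
    http://host:port/mcp/{app}/sse/{user}."""
    parts = urlparse(url).path.strip("/").split("/")
    # expected: ["mcp", app, "sse", user]
    if len(parts) != 4 or parts[0] != "mcp" or parts[2] != "sse":
        raise ValueError(f"not an OpenMemory MCP route: {url}")
    return parts[1], parts[3]
```

The server resolves these two path segments on every request, which is how the same local MCP instance can serve Cursor and Trae separately while sharing one memory store.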

