MarkTechPost@AI 16 hours ago
MemOS: A Memory-Centric Operating System for Evolving and Adaptive Large Language Models

 

MemOS is a memory operating system designed for large language models (LLMs), aimed at overcoming their limitations in handling memory. Current LLMs rely mainly on fixed weights and short-lived context, making it difficult to retain and update information over the long term. MemOS manages parametric, activation, and plaintext memory through a unified memory abstraction (MemCube), enabling structured, traceable, cross-task memory handling. This allows LLMs to adapt continuously, internalize user preferences, and maintain behavioral consistency, transforming them from passive generators into evolving systems capable of long-term learning and cross-platform collaboration.

🧠 **Memory as a first-class resource**: MemOS treats memory as a primary resource in language models, using MemCube to manage all three memory types (parametric, activation, and plaintext) under one abstraction, enabling structured, traceable memory handling.

⚙️ **Three-layer architecture**: MemOS consists of an Interface Layer, an Operation Layer, and an Infrastructure Layer, which respectively handle user input, memory scheduling and organization, and safe storage with cross-agent collaboration. MemCube mediates all interactions, enabling traceable, policy-driven memory operations.

🔄 **Continuous learning**: MemOS supports a "memory training" paradigm that blurs the line between learning and inference, allowing models to adapt and evolve continuously. Through modules such as MemScheduler, MemLifecycle, and MemGovernance, MemOS maintains a continuous, adaptive memory loop, from the user's prompt to the storage of data for future use.

💾 **Three memory types**: MemOS divides memory into parametric memory (model weights), activation memory (runtime states), and plaintext memory (external data). MemCube encapsulates content and metadata, enabling dynamic scheduling, versioning, and access control, which strengthens LLMs' adaptability and information retrieval.

🚀 **Future directions**: MemOS aims to enable cross-model memory sharing, self-evolving memory blocks, and a decentralized memory marketplace to support continual learning and intelligent evolution.

LLMs are increasingly seen as key to achieving Artificial General Intelligence (AGI), but they face major limitations in how they handle memory. Most LLMs rely on fixed knowledge stored in their weights and short-lived context during use, making it hard to retain or update information over time. Techniques like retrieval-augmented generation (RAG) attempt to incorporate external knowledge but lack structured memory management. This leads to problems such as forgetting past conversations, poor adaptability, and isolated memory across platforms. Fundamentally, today's LLMs don't treat memory as a manageable, persistent, or shareable system, limiting their real-world usefulness.

To address the limitations of memory in current LLMs, researchers from MemTensor (Shanghai) Technology Co., Ltd., Shanghai Jiao Tong University, Renmin University of China, and the Research Institute of China Telecom have developed MemOS. This memory operating system makes memory a first-class resource in language models. At its core is MemCube, a unified memory abstraction that manages parametric, activation, and plaintext memory. MemOS enables structured, traceable, and cross-task memory handling, allowing models to adapt continuously, internalize user preferences, and maintain behavioral consistency. This shift transforms LLMs from passive generators into evolving systems capable of long-term learning and cross-platform coordination.

As AI systems grow more complex, handling multiple tasks, roles, and data types, language models must evolve beyond understanding text to also retaining memory and learning continuously. Because current LLMs lack structured memory management, their ability to adapt and grow over time is limited. MemOS is a new system that treats memory as a core, schedulable resource. It enables long-term learning through structured storage, version control, and unified memory access. Unlike traditional training, MemOS supports a continuous "memory training" paradigm that blurs the line between learning and inference. It also emphasizes governance, ensuring traceability, access control, and safe use in evolving AI systems.

MemOS is a memory-centric operating system for language models that treats memory not just as stored data but as an active, evolving component of the model’s cognition. It organizes memory into three distinct types: Parametric Memory (knowledge baked into model weights via pretraining or fine-tuning), Activation Memory (temporary internal states, such as KV caches and attention patterns, used during inference), and Plaintext Memory (editable, retrievable external data, like documents or prompts). These memory types interact within a unified framework called the MemoryCube (MemCube), which encapsulates both content and metadata, allowing dynamic scheduling, versioning, access control, and transformation across types. This structured system enables LLMs to adapt, recall relevant information, and efficiently evolve their capabilities, transforming them into more than just static generators.
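To make the MemCube idea concrete, here is a minimal sketch of a memory unit that pairs content with governance metadata. The paper does not publish this API; the class and field names below are illustrative assumptions, not the actual MemOS implementation.

```python
from dataclasses import dataclass, field
from typing import Any, Literal

# Hypothetical names: MemoryType and MemCube are illustrative, not the real API.
MemoryType = Literal["parametric", "activation", "plaintext"]

@dataclass
class MemCube:
    """A unified memory unit: content plus metadata for scheduling,
    versioning, access control, and traceability."""
    content: Any                 # e.g. a weights delta, a KV-cache slice, or text
    mem_type: MemoryType         # which of the three memory classes it belongs to
    version: int = 1             # supports version control of memory
    owner: str = "default"       # access-control principal
    provenance: list[str] = field(default_factory=list)  # traceability trail

    def evolve(self, new_content: Any, note: str) -> "MemCube":
        """Produce a new version while keeping a traceable lineage."""
        return MemCube(
            content=new_content,
            mem_type=self.mem_type,
            version=self.version + 1,
            owner=self.owner,
            provenance=self.provenance + [note],
        )

# A plaintext memory evolving as the system learns a user preference.
cube = MemCube(content="User prefers concise answers.", mem_type="plaintext")
cube2 = cube.evolve("User prefers concise, bulleted answers.",
                    note="updated after a later session")
print(cube2.version, cube2.provenance)
```

Keeping the old cube immutable and returning a new version is one simple way to get the versioning and traceability the paper describes; the real system presumably uses richer metadata and storage backends.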

At the core of MemOS is a three-layer architecture: the Interface Layer handles user inputs and parses them into memory-related tasks; the Operation Layer manages the scheduling, organization, and evolution of different types of memory; and the Infrastructure Layer ensures safe storage, access governance, and cross-agent collaboration. All interactions within the system are mediated through MemCubes, allowing traceable, policy-driven memory operations. Through modules like MemScheduler, MemLifecycle, and MemGovernance, MemOS maintains a continuous and adaptive memory loop—from the moment a user sends a prompt, to memory injection during reasoning, to storing useful data for future use. This design not only enhances the model’s responsiveness and personalization but also ensures that memory remains structured, secure, and reusable. 
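The prompt-to-storage loop above can be sketched in a few lines. The MemScheduler and MemGovernance classes below are toy stand-ins with invented interfaces (the article names the modules but not their APIs): governance filters memories by a simple owner policy, and the scheduler picks memories to inject via naive keyword matching in place of real retrieval.

```python
# Illustrative sketch of the loop: user prompt -> memory injection -> storage.
# Class names come from the article; their methods here are assumptions.

class MemGovernance:
    """Filters memory access by a simple owner policy (toy access control)."""
    def permitted(self, cube: dict, user: str) -> bool:
        return cube["owner"] in (user, "shared")

class MemScheduler:
    """Selects which stored memories to inject for a given prompt."""
    def __init__(self, store: list[dict], gov: MemGovernance):
        self.store, self.gov = store, gov

    def schedule(self, prompt: str, user: str) -> list[dict]:
        # Naive keyword overlap as a placeholder for real relevance scoring.
        words = prompt.lower().split()
        return [c for c in self.store
                if self.gov.permitted(c, user)
                and any(w in c["content"].lower() for w in words)]

    def commit(self, content: str, user: str) -> None:
        # Store the outcome of this turn so future reasoning can reuse it.
        self.store.append({"content": content, "owner": user})

store = [{"content": "Alice likes terse replies", "owner": "alice"},
         {"content": "Project deadline is Friday", "owner": "shared"}]
sched = MemScheduler(store, MemGovernance())

# One turn of the loop: inject relevant, permitted memories, then commit.
injected = sched.schedule("When is the project deadline?", user="bob")
sched.commit("Bob asked about the deadline", user="bob")
print([c["content"] for c in injected])
```

Note that Alice's private memory is filtered out for Bob before relevance is even considered; policy checks preceding retrieval is one plausible reading of the "policy-driven memory operations" the article describes.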

In conclusion, MemOS is a memory operating system designed to make memory a central, manageable component in LLMs. Unlike traditional models that depend mostly on static model weights and short-term runtime states, MemOS introduces a unified framework for handling parametric, activation, and plaintext memory. At its core is MemCube, a standardized memory unit that supports structured storage, lifecycle management, and task-aware memory augmentation. The system enables more coherent reasoning, adaptability, and cross-agent collaboration. Future goals include enabling memory sharing across models, self-evolving memory blocks, and building a decentralized memory marketplace to support continual learning and intelligent evolution. 


Check out the Paper. All credit for this research goes to the researchers of this project.

