MarkTechPost@AI · May 6, 07:40
How AI Agents Store, Forget, and Retrieve? A Fresh Look at Memory Operations for the Next-Gen LLMs

This article examines the key role of memory in LLM-based AI systems, focusing on the fundamental operations of memory storage, retrieval, and memory-grounded generation. The researchers classify AI memory into three types (parametric, contextual-structured, and contextual-unstructured) and, drawing on cognitive psychology, distinguish short-term from long-term memory. The article also defines six core memory operations: consolidation, updating, indexing, forgetting, retrieval, and compression. Based on an analysis of more than 30,000 AI papers, the study surveys relevant tools, benchmarks, and future directions, laying the groundwork for memory-centric AI systems that support long-term context management, user modeling, and knowledge retention.

🗂️ AI memory is classified into three types: parametric memory (model weights), contextual-structured memory (e.g., indexed dialogue histories), and contextual-unstructured memory (raw text or embeddings). This taxonomy helps clarify the role each kind of memory plays in AI systems.

🔄 Six core memory operations are defined: consolidation (storing new information), updating (modifying existing entries), indexing (organizing for fast access), forgetting (removing stale data), retrieval (fetching relevant content), and compression (distilling memories). These operations are key to how AI systems manage and use memory effectively.

🔬 The survey analyzes more than 30,000 top-tier AI papers from 2022–2025 and clusters the high-impact works into four themes: long-term memory, long-context modeling, parametric editing, and multi-source integration. This maps each operation and memory type to active research areas and highlights key benchmarks and tools.

🧱 The article describes a layered ecosystem of memory-centric AI systems that supports long-term context management, user modeling, knowledge retention, and adaptive behavior. The ecosystem comprises foundational components (vector stores, LLMs, and retrieval mechanisms), frameworks for memory operations (such as LangChain and LlamaIndex), memory-layer systems for orchestration and persistence (such as Memary and Memobase), and end-user-facing products (including Me.bot and ChatGPT).

Memory plays a crucial role in LLM-based AI systems, supporting sustained, coherent interactions over time. While earlier surveys have explored memory in LLMs, they often pay little attention to the fundamental operations governing memory functions. Key components like memory storage, retrieval, and memory-grounded generation have been studied in isolation, but a unified framework that systematically integrates these processes remains underdeveloped. Although a few recent efforts have proposed operational views of memory to categorize existing work, the field still lacks cohesive memory architectures that clearly define how these atomic operations interact.

Furthermore, existing surveys tend to address only specific subtopics within the broader memory landscape, such as long-context handling, long-term memory, personalization, or knowledge editing. These fragmented approaches often miss essential operations like indexing and fail to offer comprehensive overviews of memory dynamics. Additionally, most prior work does not establish a clear research scope or provide structured benchmarks and tool coverage, limiting their practical value for guiding future advancements in memory for AI systems. 

Researchers from the Chinese University of Hong Kong, the University of Edinburgh, HKUST, and the Poisson Lab at Huawei UK R&D Ltd. present a detailed survey on memory in AI systems. They classify memory into parametric, contextual-structured, and contextual-unstructured types and, inspired by cognitive psychology, distinguish between short-term and long-term memory. Six fundamental operations—consolidation, updating, indexing, forgetting, retrieval, and compression—are defined and mapped to key research areas, including long-term memory, long-context modeling, parametric modification, and multi-source integration. Based on an analysis of over 30,000 papers ranked using the Relative Citation Index, the survey also outlines tools, benchmarks, and future directions.

The researchers first develop a three‐part taxonomy of AI memory—parametric (model weights), contextual‐structured (e.g., indexed dialogue histories), and contextual‐unstructured (raw text or embeddings)—and distinguish short‐ versus long‐term spans. They then define six core memory operations: consolidation (storing new information), updating (modifying existing entries), indexing (organizing for fast access), forgetting (removing stale data), retrieval (fetching relevant content), and compression (distilling memories). To ground this framework, they mined over 30,000 top‐tier AI papers (2022–2025), ranked them by Relative Citation Index, and clustered high‐impact works into four themes—long‐term memory, long‐context modeling, parametric editing, and multi‐source integration—thereby mapping each operation and memory type to active research areas and highlighting key benchmarks and tools. 
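
To make the six operations concrete, here is a minimal, illustrative Python sketch of a contextual (non-parametric) memory store. The survey defines these operations abstractly, so the class and method names below are assumptions, and the keyword index and truncation-based compression are deliberate simplifications of what a real system would do.

```python
from dataclasses import dataclass, field
import time

@dataclass
class MemoryEntry:
    content: str
    last_access: float = field(default_factory=time.time)

class ContextualMemory:
    """Toy memory store exposing the survey's six operations (illustrative names)."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self.entries: dict[str, MemoryEntry] = {}
        self.index: dict[str, set[str]] = {}  # token -> entry keys
        self.ttl = ttl_seconds

    def consolidate(self, key: str, content: str) -> None:
        """Consolidation: store new information as a memory entry."""
        self.entries[key] = MemoryEntry(content)
        self._reindex(key)

    def update(self, key: str, content: str) -> None:
        """Updating: modify an existing entry in place."""
        if key in self.entries:
            self.entries[key].content = content
            self._reindex(key)

    def _reindex(self, key: str) -> None:
        """Indexing: organize an entry's tokens for fast lookup."""
        for token in self.entries[key].content.lower().split():
            self.index.setdefault(token, set()).add(key)

    def forget(self) -> None:
        """Forgetting: remove entries not accessed within the TTL."""
        now = time.time()
        for k in [k for k, e in self.entries.items() if now - e.last_access > self.ttl]:
            del self.entries[k]

    def retrieve(self, query: str) -> list[str]:
        """Retrieval: fetch entries whose indexed tokens match the query."""
        hits: set[str] = set()
        for token in query.lower().split():
            hits |= self.index.get(token, set())
        results = []
        for key in hits:
            if key in self.entries:
                self.entries[key].last_access = time.time()
                results.append(self.entries[key].content)
        return results

    def compress(self, key: str, max_words: int = 20) -> None:
        """Compression: distill an entry (truncation stands in for LLM summarization)."""
        words = self.entries[key].content.split()
        self.entries[key].content = " ".join(words[:max_words])

mem = ContextualMemory()
mem.consolidate("pref", "User prefers concise answers")
print(mem.retrieve("concise answers"))  # ['User prefers concise answers']
```

In a production system, the keyword index would typically be replaced by an embedding index, and compression by an LLM summarizer; the skeleton above only shows how the six operations relate.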

The study describes a layered ecosystem of memory-centric AI systems that support long-term context management, user modeling, knowledge retention, and adaptive behavior. This ecosystem is structured across four tiers: foundational components (such as vector stores, large language models like Llama and GPT-4, and retrieval mechanisms like FAISS and BM25), frameworks for memory operations (e.g., LangChain and LlamaIndex), memory layer systems for orchestration and persistence (such as Memary and Memobase), and end-user-facing products (including Me.bot and ChatGPT). These tools provide infrastructure for memory integration, enabling capabilities like grounding, similarity search, long-context understanding, and personalized AI interactions.
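
As a toy illustration of the foundational tier, the sketch below implements a tiny in-process vector store with cosine-similarity search, standing in for production systems such as FAISS. The embed() function is a deterministic placeholder (a real system would call an embedding model), and VectorStore and its methods are hypothetical names, not an API from any of the tools listed above.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: deterministic pseudo-random unit vector.
    A real system would call an embedding model instead."""
    seed = sum(ord(c) for c in text) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class VectorStore:
    """Tiny in-process stand-in for a vector store such as FAISS."""

    def __init__(self) -> None:
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(embed(text))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        # Dot product equals cosine similarity because all vectors are unit-normalized.
        sims = np.array([v @ q for v in self.vectors])
        top = np.argsort(-sims)[:k]
        return [self.texts[i] for i in top]

store = VectorStore()
store.add("User prefers concise answers.")
store.add("Project deadline is next Friday.")
# With the placeholder embedding the ranking is arbitrary; only a real
# embedding model makes nearest-neighbor results semantically meaningful.
print(store.search("deadline", k=1))
```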

The survey also discusses open challenges and future research directions in AI memory. It highlights the importance of spatio-temporal memory, which balances historical context with real-time updates for adaptive reasoning. Key challenges include parametric memory retrieval, lifelong learning, and efficient knowledge management across memory types. The paper also draws inspiration from biological memory models, emphasizing dual-memory architectures and hierarchical memory structures. Future work should focus on unifying memory representations, supporting multi-agent memory systems, and addressing security concerns, particularly memory safety and robustness against malicious attacks.
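
The dual-memory idea can be illustrated with a short, hedged sketch: a bounded short-term buffer whose oldest entries are consolidated into a long-term store before eviction. The DualMemoryAgent class and its _summarize placeholder are illustrative assumptions, not a design taken from the paper.

```python
from collections import deque

class DualMemoryAgent:
    """Illustrative dual-memory design: short-term buffer + long-term store."""

    def __init__(self, short_term_capacity: int = 5):
        self.short_term = deque(maxlen=short_term_capacity)  # recent turns
        self.long_term: list[str] = []                       # persistent store

    def observe(self, message: str) -> None:
        # When the buffer is full, the oldest item is consolidated into
        # long-term memory before the deque evicts it.
        if len(self.short_term) == self.short_term.maxlen:
            self.long_term.append(self._summarize(self.short_term[0]))
        self.short_term.append(message)

    def _summarize(self, message: str) -> str:
        # Placeholder for an LLM-based consolidation/summarization step.
        return message[:80]

agent = DualMemoryAgent(short_term_capacity=2)
for turn in ["hi", "I like tea", "book a table for two"]:
    agent.observe(turn)
print(list(agent.short_term), agent.long_term)
# ['I like tea', 'book a table for two'] ['hi']
```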


Check out the Paper.


