MarkTechPost@AI — July 21, 2024
Agent Symbolic Learning: An Artificial Intelligence AI Framework for Agent Learning that Jointly Optimizes All Symbolic Components within an Agent System

The study proposes a framework called "agent symbolic learning," which draws on ideas from neural network learning to offer a new approach to training language agents. It maps agent pipelines to computational graphs, nodes to layers, and prompts and tools to weights, enabling an optimization process analogous to backpropagation. The framework optimizes all of an agent's symbolic components, including prompts, tools, and the pipeline structure, by executing the agent, evaluating its performance, and back-propagating "language gradients."

🤔 **Agent symbolic learning framework**: Drawing on neural network learning, the framework maps agent pipelines to computational graphs, nodes to layers, and prompts and tools to weights, enabling an optimization process analogous to backpropagation.

🚀 **Language gradients**: The framework optimizes all of the agent's symbolic components, including prompts, tools, and the pipeline structure, by executing the agent, evaluating its performance, and back-propagating "language gradients." This approach avoids local optima, learns complex tasks effectively, and supports multi-agent systems.

🏆 **Strong performance**: The framework performs well across LLM benchmarks, software development, and creative writing tasks, consistently outperforming other methods on complex benchmarks such as MATH. In software development and creative writing, the performance gap widens further, surpassing specialized algorithms and frameworks.

💡 **Outlook**: The framework could shift language agent research from engineering-centric to data-centric and advance progress toward artificial general intelligence. The researchers have open-sourced their code and prompts to accelerate work in this area, potentially transforming how language agents are developed and applied.

Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling the creation of language agents capable of autonomously solving complex tasks. However, the development of these agents faces significant challenges. The current approach involves manually decomposing tasks into LLM pipelines, with prompts and tools stacked together. This process is labor-intensive and engineering-centric, limiting the adaptability and robustness of language agents. The complexity of this manual customization makes it nearly impossible to optimize language agents on diverse datasets in a data-centric manner, hindering their versatility and applicability to new tasks or data distributions. Researchers are now seeking ways to transition from this engineering-centric approach to a more data-centric learning paradigm for language agent development.

Prior studies have attempted to address language agent optimization through automated prompt engineering and agent optimization methods, which fall into two categories: prompt-based and search-based. Prompt-based methods optimize specific components within an agent pipeline, while search-based approaches find optimal prompts or nodes in a combinatorial space. However, these methods struggle with complex real-world tasks, tend toward local optima, and cannot holistically optimize the entire agent system. Other research directions, such as synthesizing data for LLM fine-tuning and exploring inter-task transfer learning, show promise but do not fully address the need for comprehensive agent system optimization.

Researchers from AIWaves Inc. introduce the agent symbolic learning framework, an approach to training language agents inspired by neural network learning. The framework draws an analogy between language agents and neural networks, mapping agent pipelines to computational graphs, nodes to layers, and prompts and tools to weights, which enables a process akin to backpropagation. The framework executes the agent, evaluates its performance with a "language loss," and generates "language gradients" through back-propagation. These gradients guide the optimization of all symbolic components, including prompts, tools, and the overall pipeline structure. This approach avoids local optima, enables effective learning for complex tasks, and supports multi-agent systems. It also allows agents to self-evolve after deployment, potentially shifting language agent research from engineering-centric to data-centric.
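The execute-evaluate-back-propagate loop described above can be sketched in a few lines. Here `call_llm` is a stub that stands in for any chat-completion API so the control flow runs offline; all function and variable names are illustrative, not the authors' actual interface:

```python
# Minimal sketch of the agent symbolic learning loop (illustrative only).
# `call_llm` is a stub keyed on the prompt's intent; a real system would
# call an actual LLM here.

def call_llm(prompt: str) -> str:
    """Stub LLM: returns a canned response based on the request type."""
    if "Evaluate" in prompt:
        return "Loss: the answer ignores the required output format."
    if "Suggest" in prompt:
        return "Gradient: add an explicit output-format instruction."
    return "Answer: 42"

def forward(pipeline: list, task: str):
    """Run each node's prompt in sequence, recording a trajectory."""
    trajectory, output = [], task
    for prompt in pipeline:
        output = call_llm(f"{prompt}\nInput: {output}")
        trajectory.append({"prompt": prompt, "output": output})
    return output, trajectory

def language_loss(task: str, output: str) -> str:
    """Textual 'loss': an LLM critique of the final output."""
    return call_llm(f"Evaluate this answer to '{task}': {output}")

def language_gradient(step: dict, loss: str) -> str:
    """Textual 'gradient': how this node's prompt should change."""
    return call_llm(f"Suggest a prompt edit for '{step['prompt']}' "
                    f"given the feedback '{loss}'")

pipeline = ["Solve the task step by step.", "Summarize the solution."]
output, trajectory = forward(pipeline, "What is 6 * 7?")
loss = language_loss("What is 6 * 7?", output)
# Back-propagate: walk the trajectory in reverse, one gradient per node.
gradients = [language_gradient(step, loss) for step in reversed(trajectory)]
```

The key difference from numeric backpropagation is that the loss and gradients are free-form text, so the same loop works over prompts, tools, and pipeline structure alike.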

The framework's design mirrors neural network training throughout. The key components include:

- **Agent Pipeline**: the sequence of nodes that processes input data.
- **Nodes**: individual steps within the pipeline, analogous to neural network layers.
- **Trajectory**: stores information during the forward pass for gradient back-propagation.
- **Language Loss**: a textual measure of the discrepancy between expected and actual outcomes.
- **Language Gradient**: textual analyses used to update the agent's components.
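The five components above can be sketched as plain data structures. The names mirror the article's terms but are illustrative; they are not the authors' released code:

```python
# Illustrative data structures for the framework's components.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """One step in the pipeline; its prompt plays the role of a weight."""
    name: str
    prompt: str
    tool: Optional[str] = None  # optional tool attached to this node

@dataclass
class TrajectoryStep:
    """What one node saw and produced during the forward pass."""
    node: str
    inp: str
    out: str

@dataclass
class Trajectory:
    """Forward-pass record used later for gradient back-propagation."""
    steps: List[TrajectoryStep] = field(default_factory=list)

@dataclass
class LanguageLoss:
    """Textual critique of the final output, not a scalar."""
    text: str

@dataclass
class LanguageGradient:
    """Textual edit instruction targeting one node's components."""
    node: str
    suggestion: str

@dataclass
class AgentPipeline:
    """Ordered sequence of nodes, analogous to a layered network."""
    nodes: List[Node]

pipeline = AgentPipeline(nodes=[
    Node("solver", "Solve the task step by step."),
    Node("writer", "Summarize the solution."),
])
```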

The learning procedure involves a forward pass, language loss computation, back-propagation of language gradients, and gradient-based updates using symbolic optimizers. These optimizers include PromptOptimizer, ToolOptimizer, and PipelineOptimizer, each designed to update specific components of the agent system. The framework also supports batched training for more stable optimization.
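A symbolic optimizer's update step might look like the following sketch. The stubbed `rewrite_llm` stands in for the LLM call that applies a batch of textual gradients to a prompt; the class and function names are hypothetical, chosen to match the text rather than the released implementation:

```python
# Illustrative gradient-based prompt update with batched gradients.

def rewrite_llm(prompt: str, gradients: list) -> str:
    """Stub for the LLM call that rewrites a prompt per its gradients."""
    # A real optimizer would send the prompt and gradients to an LLM;
    # here we just append a marker so the update is observable.
    return f"{prompt} [revised per {len(gradients)} gradients]"

class PromptOptimizer:
    """Updates one node's prompt from a batch of language gradients."""
    def step(self, prompt: str, batch_gradients: list) -> str:
        # Batching: aggregate gradients from several examples, then
        # apply a single update for more stable optimization.
        return rewrite_llm(prompt, batch_gradients)

opt = PromptOptimizer()
new_prompt = opt.step(
    "Solve the task step by step.",
    ["Add an output-format instruction.", "Ask for intermediate checks."],
)
```

A ToolOptimizer and PipelineOptimizer would follow the same pattern, with the LLM rewriting tool definitions or the node sequence instead of a prompt.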

The agent symbolic learning framework demonstrates superior performance across LLM benchmarks, software development, and creative writing tasks. It consistently outperforms other methods, showing significant improvements on complex benchmarks like MATH. In software development and creative writing, the framework’s performance gap widens further, surpassing specialized algorithms and frameworks. Its success stems from the comprehensive optimization of the entire agent system, effectively discovering optimal pipelines and prompts for each step. The framework shows robustness and effectiveness in optimizing language agents for complex, real-world tasks where traditional methods struggle, highlighting its potential to advance language agent research and applications.

The agent symbolic learning framework introduces an innovative approach to language agent optimization. Inspired by connectionist learning, it jointly optimizes all symbolic components within an agent system using language-based loss, gradients, and optimizers. This enables agents to handle complex real-world tasks effectively and to self-evolve after deployment. Experiments demonstrate its superiority across tasks of varying complexity. By shifting agent research from engineering-centric to data-centric, this framework represents a significant step toward artificial general intelligence. The open-sourcing of code and prompts aims to accelerate progress in this field, potentially revolutionizing language agent development and applications.


Check out the Paper. All credit for this research goes to the researchers of this project.

