MarkTechPost@AI · 16 hours ago
A Comprehensive Coding Guide to Crafting Advanced Round-Robin Multi-Agent Workflows with Microsoft AutoGen

This article walks through building sophisticated multi-agent workflows with Microsoft's AutoGen framework, paired with Google's Gemini models. Using AutoGen's RoundRobinGroupChat and TeamTool, developers can combine specialist assistants (a researcher, fact-checker, critic, summarizer, and editor) into a single tool called "DeepDive". AutoGen handles the details of turn-taking, termination conditions, and streaming output, simplifying development so that developers can focus on defining each agent's expertise and system prompt, and scale seamlessly from a simple two-agent pipeline to a five-agent collaboration.

💡 With the AutoGen framework, developers can orchestrate complex multi-agent workflows with minimal code.

🤖 AutoGen's RoundRobinGroupChat and TeamTool combine multiple specialist assistants into a single collaborative tool.

🛠️ AutoGen simplifies the process: developers focus on defining each agent's expertise and system prompt rather than handling callbacks or manual prompt chains.

💻 The article demonstrates how to integrate Google Gemini into the AutoGen framework through an OpenAI-compatible client, building a modular, reusable workflow.

✅ Finally, AutoGen abstracts away event-loop management, streaming responses, and termination logic, making it easy to iterate quickly on agent roles and the overall orchestration.

In this tutorial, we demonstrate how Microsoft's AutoGen framework empowers developers to orchestrate complex, multi-agent workflows with minimal code. By leveraging AutoGen's RoundRobinGroupChat and TeamTool abstractions, you can seamlessly assemble specialist assistants, such as Researchers, FactCheckers, Critics, Summarizers, and Editors, into a cohesive "DeepDive" tool. AutoGen handles the intricacies of turn-taking, termination conditions, and streaming output, allowing you to focus on defining each agent's expertise and system prompts rather than plumbing together callbacks or manual prompt chains. Whether conducting in-depth research, validating facts, refining prose, or integrating third-party tools, AutoGen provides a unified API that scales from simple two-agent pipelines to elaborate five-agent collaborations.

!pip install -q autogen-agentchat[gemini] autogen-ext[openai] nest_asyncio

We install the AutoGen AgentChat package with Gemini support, the OpenAI extension for API compatibility, and the nest_asyncio library to patch the notebook’s event loop, ensuring you have all the components needed to run asynchronous, multi-agent workflows in Colab.

import os, nest_asyncio
from getpass import getpass

nest_asyncio.apply()
os.environ["GEMINI_API_KEY"] = getpass("Enter your Gemini API key: ")

We import and apply nest_asyncio to enable nested event loops in notebook environments, then securely prompt for your Gemini API key using getpass and store it in os.environ for authenticated model client access.

from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gemini-1.5-flash-8b",
    api_key=os.environ["GEMINI_API_KEY"],
    api_type="google",
)

We initialize an OpenAI-compatible chat client pointed at Google's Gemini by specifying the gemini-1.5-flash-8b model, injecting your stored Gemini API key, and setting api_type="google", giving you a ready-to-use model_client for downstream AutoGen agents.

from autogen_agentchat.agents import AssistantAgent

researcher  = AssistantAgent(name="Researcher",  system_message="Gather and summarize factual info.",             model_client=model_client)
factchecker = AssistantAgent(name="FactChecker", system_message="Verify facts and cite sources.",                 model_client=model_client)
critic      = AssistantAgent(name="Critic",      system_message="Critique clarity and logic.",                    model_client=model_client)
summarizer  = AssistantAgent(name="Summarizer",  system_message="Condense into a brief executive summary.",       model_client=model_client)
editor      = AssistantAgent(name="Editor",      system_message="Polish language and signal APPROVED when done.", model_client=model_client)

We define five specialized assistant agents, Researcher, FactChecker, Critic, Summarizer, and Editor, each initialized with a role-specific system message and the shared Gemini-powered model client, enabling them, respectively, to gather information, verify accuracy, critique content, condense summaries, and polish language within the AutoGen workflow.

from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination

max_msgs  = MaxMessageTermination(max_messages=20)
text_term = TextMentionTermination(text="APPROVED", sources=["Editor"])
termination = max_msgs | text_term

team = RoundRobinGroupChat(
    participants=[researcher, factchecker, critic, summarizer, editor],
    termination_condition=termination,
)

We import the RoundRobinGroupChat class along with two termination conditions, then compose a stop rule that fires after 20 total messages or when the Editor agent mentions "APPROVED." Finally, we instantiate a round-robin team of the five specialized agents with that combined termination logic, enabling them to cycle through research, fact-checking, critique, summarization, and editing until one of the stop conditions is met.
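The round-robin cycling and the OR-combined stop rule can be illustrated with a small framework-free sketch. The agent names, the 20-message cap, and the "APPROVED" keyword mirror the tutorial; everything else here is illustrative and does not reflect AutoGen's internals:

```python
from itertools import cycle

# Illustrative stand-ins for the five agents: each "agent" is just a name
# plus a function that returns that agent's message for its turn.
def make_agent(name, reply):
    return name, (lambda turn: reply)

agents = [
    make_agent("Researcher",  "findings..."),
    make_agent("FactChecker", "verified..."),
    make_agent("Critic",      "critique..."),
    make_agent("Summarizer",  "summary..."),
    make_agent("Editor",      "Looks good. APPROVED"),
]

MAX_MESSAGES = 20  # mirrors MaxMessageTermination(max_messages=20)

def run_round_robin(agents, max_messages=MAX_MESSAGES):
    transcript = []
    for name, respond in cycle(agents):          # round-robin turn-taking
        msg = respond(len(transcript))
        transcript.append((name, msg))
        # Combined stop rule: message cap reached OR the Editor says APPROVED.
        if len(transcript) >= max_messages:
            break
        if name == "Editor" and "APPROVED" in msg:
            break
    return transcript

log = run_round_robin(agents)
print(len(log), log[-1])  # stops after one full cycle, on the Editor's APPROVED
```

In AutoGen the `|` operator plays the same role as the two `if` checks above: the team stops as soon as either condition fires.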

from autogen_agentchat.tools import TeamTool

deepdive_tool = TeamTool(team=team, name="DeepDive", description="Collaborative multi-agent deep dive")

We wrap our RoundRobinGroupChat team in a TeamTool named "DeepDive" with a human-readable description, effectively packaging the entire multi-agent workflow into a single callable tool that other agents can invoke seamlessly.

host = AssistantAgent(
    name="Host",
    model_client=model_client,
    tools=[deepdive_tool],
    system_message="You have access to a DeepDive tool for in-depth research.",
)

We create a “Host” assistant agent configured with the shared Gemini-powered model_client, grant it the DeepDive team tool for orchestrating in-depth research, and prime it with a system message that informs it of its ability to invoke the multi-agent DeepDive workflow.
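Conceptually, handing the Host a tool list amounts to giving the model a name-to-callable registry it can look up and delegate to. A minimal sketch of that dispatch pattern, with hypothetical names standing in for AutoGen's actual implementation:

```python
# A tool is a named callable plus a description the model can read
# when deciding what to invoke.
tools = {
    "DeepDive": {
        "description": "Collaborative multi-agent deep dive",
        "run": lambda task: f"[DeepDive report on: {task}]",  # stand-in for the team run
    },
}

def host_invoke(tool_name, task):
    # The host resolves the tool by name and delegates the task to it.
    tool = tools[tool_name]
    return tool["run"](task)

print(host_invoke("DeepDive", "agentic AI"))
```

The real Host does this selection via the language model rather than a hard-coded lookup, but the registry-and-delegate shape is the same.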

import asyncio

async def run_deepdive(topic: str):
    result = await host.run(task=f"Deep dive on: {topic}")
    print("DeepDive result:\n", result)
    await model_client.close()

topic = "Impacts of Model Context Protocol on Agentic AI"
loop = asyncio.get_event_loop()
loop.run_until_complete(run_deepdive(topic))

Finally, we define an asynchronous run_deepdive function that tells the Host agent to execute the DeepDive team tool on a given topic, prints the comprehensive result, and then closes the model client; the script then grabs Colab's existing asyncio loop and runs the coroutine to completion for a seamless, synchronous execution.
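Outside a notebook, where no event loop is already running, the same coroutine can be driven with `asyncio.run` and no `nest_asyncio` patching at all. In this sketch the coroutine body is a stand-in for the `host.run(...)` call from the tutorial:

```python
import asyncio

async def run_deepdive(topic: str) -> str:
    # Stand-in for `await host.run(task=...)` from the tutorial.
    return f"DeepDive result for: {topic}"

# In a plain Python script, asyncio.run creates the event loop,
# runs the coroutine to completion, and closes the loop for you.
result = asyncio.run(run_deepdive("Impacts of Model Context Protocol on Agentic AI"))
print(result)
```

The notebook needs `nest_asyncio` only because Colab already runs its own loop, and `asyncio.run` refuses to start inside a running loop.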

In conclusion, integrating Google Gemini via AutoGen’s OpenAI‐compatible client and wrapping our multi‐agent team as a callable TeamTool gives us a powerful template for building highly modular and reusable workflows. AutoGen abstracts away event loop management (with nest_asyncio), streaming responses, and termination logic, enabling us to iterate quickly on agent roles and overall orchestration. This advanced pattern streamlines the development of collaborative AI systems and lays the foundation for extending into retrieval pipelines, dynamic selectors, or conditional execution strategies.



