MarkTechPost@AI May 2, 11:20
Building a REACT-Style Agent Using Fireworks AI with LangChain that Fetches Data, Generates BigQuery SQL, and Maintains Conversational Memory

This article walks through building an intelligent, tool-enabled agent with Fireworks AI and LangChain. It begins by installing the required packages and configuring a Fireworks API key, then sets up a ChatFireworks LLM instance and integrates it with LangChain's agent framework. It also demonstrates creating custom tools, such as a URL fetcher for scraping webpage text and a SQL generator that converts natural language into executable BigQuery queries. The result is a REACT-style agent that can dynamically invoke tools, maintain conversational memory, and deliver sophisticated end-to-end workflows powered by Fireworks AI.

🔑 The tutorial starts with installation and configuration: installing packages such as langchain and langchain-fireworks and setting the Fireworks API key, laying the groundwork for the development that follows.

💻 Next, it shows how to instantiate a ChatFireworks LLM configured for instruction following, using the llama-v3-70b-instruct model with a set temperature and token limit, so prompts can be issued to the model immediately.

💬 Through a sentiment-analysis example, the tutorial demonstrates building a structured prompt, calling the LLM's invoke() method, and printing the model's sentiment reading of a movie review.

🧠 It then details adding conversational memory: defining a prompt template that includes past exchanges, setting up ConversationBufferMemory, and wiring everything together with LLMChain. A few sample inputs show the model retaining context across turns.

🛠️ Finally, the tutorial defines custom tools: FetchURLTool for scraping webpage content and GenerateSQLTool for turning natural language into BigQuery SQL queries. Both are combined in a REACT-style agent that can fetch data and generate SQL in a single workflow.

In this tutorial, we will explore how to leverage the capabilities of Fireworks AI for building intelligent, tool-enabled agents with LangChain. Starting from installing the langchain-fireworks package and configuring your Fireworks API key, we’ll set up a ChatFireworks LLM instance, powered by the high-performance llama-v3-70b-instruct model, and integrate it with LangChain’s agent framework. Along the way, we’ll define custom tools such as a URL fetcher for scraping webpage text and an SQL generator for converting plain-language requirements into executable BigQuery queries. By the end, we’ll have a fully functional REACT-style agent that can dynamically invoke tools, maintain conversational memory, and deliver sophisticated, end-to-end workflows powered by Fireworks AI.

!pip install -qU langchain langchain-fireworks requests beautifulsoup4

We bootstrap the environment by installing all the required Python packages, including langchain, its Fireworks integration, and common utilities such as requests and beautifulsoup4. This ensures that we have the latest versions of all necessary components to run the rest of the notebook seamlessly.

import requests
from bs4 import BeautifulSoup
from langchain.tools import BaseTool
from langchain.agents import initialize_agent, AgentType
from langchain_fireworks import ChatFireworks
from langchain import LLMChain, PromptTemplate
from langchain.memory import ConversationBufferMemory
import getpass
import os

We bring in all the necessary imports: HTTP clients (requests, BeautifulSoup), the LangChain agent framework (BaseTool, initialize_agent, AgentType), the Fireworks-powered LLM (ChatFireworks), plus prompt and memory utilities (LLMChain, PromptTemplate, ConversationBufferMemory), as well as standard modules for secure input and environment management.

os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Enter your Fireworks API key: ")

Next, we securely prompt for the Fireworks API key via getpass and store it in the environment. This ensures that subsequent calls to the ChatFireworks model are authenticated without exposing the key in plain text.

llm = ChatFireworks(
    model="accounts/fireworks/models/llama-v3-70b-instruct",
    temperature=0.6,
    max_tokens=1024,
    stop=["\n\n"]
)

We demonstrate how to instantiate a ChatFireworks LLM configured for instruction-following, utilizing llama-v3-70b-instruct, a moderate temperature, and a token limit, allowing you to immediately start issuing prompts to the model.

prompt = [
    {"role": "system", "content": "You are an expert data-scientist assistant."},
    {"role": "user", "content": "Analyze the sentiment of this review:\n\n"
                                "\"The new movie was breathtaking, but a bit too long.\""}
]
resp = llm.invoke(prompt)
print("Sentiment Analysis →", resp.content)

Next, we demonstrate a simple sentiment-analysis example: it builds a structured prompt as a list of role-annotated messages, invokes llm.invoke(), and prints out the model’s sentiment interpretation of the provided movie review.

template = """You are a data-science assistant. Keep track of the convo:{history}User: {input}Assistant:"""prompt = PromptTemplate(input_variables=["history","input"], template=template)memory = ConversationBufferMemory(memory_key="history")chain = LLMChain(llm=llm, prompt=prompt, memory=memory)print(chain.run(input="Hey, what can you do?"))print(chain.run(input="Analyze: 'The product arrived late, but support was helpful.'"))print(chain.run(input="Based on that, would you recommend the service?"))

We illustrate how to add conversational memory, which involves defining a prompt template that incorporates past exchanges, setting up a ConversationBufferMemory, and chaining everything together with LLMChain. Running a few sample inputs shows how the model retains context across turns.
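To make the mechanism concrete, here is a minimal, dependency-free sketch of what a conversation buffer does: it accumulates each turn as text and injects the transcript into the next prompt. The class name BufferMemory is illustrative, not LangChain's API:

```python
class BufferMemory:
    """Toy stand-in for ConversationBufferMemory: stores turns as text."""
    def __init__(self):
        self.history = ""

    def save_context(self, user_input, ai_output):
        # Append one exchange to the running transcript.
        self.history += f"User: {user_input}\nAssistant: {ai_output}\n"

    def render_prompt(self, template, user_input):
        # Inject the transcript so the model sees prior turns.
        return template.format(history=self.history, input=user_input)

template = "Keep track of the convo:\n{history}User: {input}\nAssistant:"
memory = BufferMemory()
memory.save_context("Hey, what can you do?", "I can analyze data and text.")
print(memory.render_prompt(template, "Analyze this review."))
```

Because the whole transcript is re-sent on every call, buffer memory is simple but grows with conversation length, which is why LangChain also offers windowed and summarizing variants.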

class FetchURLTool(BaseTool):
    name: str = "fetch_url"
    description: str = "Fetch the main text (first 500 chars) from a webpage."

    def _run(self, url: str) -> str:
        resp = requests.get(url, timeout=10)
        doc = BeautifulSoup(resp.text, "html.parser")
        paras = [p.get_text() for p in doc.find_all("p")][:5]
        return "\n\n".join(paras)

    async def _arun(self, url: str) -> str:
        raise NotImplementedError

We define a custom FetchURLTool by subclassing BaseTool. This tool fetches the first few paragraphs from any URL using requests and BeautifulSoup, making it easy for your agent to retrieve live web content.
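The extraction step itself needs neither the network nor bs4. As a rough stdlib-only sketch of the same "first few &lt;p&gt; tags" logic, using Python's built-in html.parser (the helper name first_paragraphs is our own):

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collects the text of each <p> element, mimicking the bs4 logic above."""
    def __init__(self):
        super().__init__()
        self.paras = []
        self._in_p = False

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p = True
            self.paras.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_p = False

    def handle_data(self, data):
        if self._in_p:
            self.paras[-1] += data

def first_paragraphs(html, limit=5):
    parser = ParagraphExtractor()
    parser.feed(html)
    return "\n\n".join(parser.paras[:limit])

html = "<html><body><p>One.</p><p>Two.</p><div>skip</div><p>Three.</p></body></html>"
print(first_paragraphs(html, limit=2))  # → One.
                                        #
                                        #   Two.
```

BeautifulSoup is far more robust against malformed real-world HTML, which is why the tool uses it; this sketch just shows what the tool is doing.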

class GenerateSQLTool(BaseTool):
    name: str = "generate_sql"
    description: str = "Generate a BigQuery SQL query (with comments) from a text description."

    def _run(self, text: str) -> str:
        prompt = f"""-- Requirement:
-- {text}
-- Write a BigQuery SQL query (with comments) to satisfy the above."""
        return llm.invoke([{"role": "user", "content": prompt}]).content

    async def _arun(self, text: str) -> str:
        raise NotImplementedError

tools = [FetchURLTool(), GenerateSQLTool()]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

result = agent.run(
    "Fetch https://en.wikipedia.org/wiki/ChatGPT "
    "and then generate a BigQuery SQL query that counts how many times "
    "the word 'model' appears in the page text."
)
print("\nGenerated SQL:\n", result)

Finally, GenerateSQLTool is another BaseTool subclass that wraps the LLM to transform plain-English requirements into commented BigQuery SQL. It then wires both tools into a REACT-style agent via initialize_agent, runs a combined fetch-and-generate example, and prints out the resulting SQL query.
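Under the hood, a REACT-style agent alternates Thought → Action → Observation until it can give a final answer. Here is a minimal, dependency-free sketch of that loop with a scripted stand-in for the LLM; everything in it is illustrative, not LangChain internals:

```python
def react_loop(llm_step, tools, question, max_steps=5):
    """Toy ReAct loop: the model either calls a named tool or answers."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm_step(transcript)        # model decides the next move
        if step["type"] == "final":
            return step["answer"]
        tool = tools[step["action"]]       # look up the tool by name
        observation = tool(step["input"])  # execute it
        transcript += f"Action: {step['action']}\nObservation: {observation}\n"
    return "Gave up."

# Scripted "LLM": first call a tool, then answer using its observation.
def scripted_llm(transcript):
    if "Observation:" not in transcript:
        return {"type": "action", "action": "fetch_url", "input": "https://example.com"}
    return {"type": "final", "answer": "Page fetched; SQL generated."}

toy_tools = {"fetch_url": lambda url: f"(text of {url})"}
print(react_loop(scripted_llm, toy_tools, "Count 'model' mentions."))
# → Page fetched; SQL generated.
```

In the real agent, llm_step is the ChatFireworks model prompted with the tool descriptions, and the loop's stopping logic is what AgentType.ZERO_SHOT_REACT_DESCRIPTION configures for you.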

In conclusion, we have integrated Fireworks AI with LangChain’s modular tooling and agent ecosystem, unlocking a versatile platform for building AI applications that extend beyond simple text generation. We can extend the agent’s capabilities by adding domain-specific tools, customizing prompts, and fine-tuning memory behavior, all while leveraging Fireworks’ scalable inference engine. As next steps, explore advanced features such as function-calling, chaining multiple agents, or incorporating vector-based retrieval to craft even more dynamic and context-aware assistants.


Check out the Notebook here.


