MarkTechPost@AI · June 20, 15:41
Build an Intelligent Multi-Tool AI Agent Interface Using Streamlit for Seamless Real-Time Interaction

This article shows how to build a powerful, interactive AI assistant with LangChain, the Google Gemini API, and Streamlit. The assistant can search the web, fetch Wikipedia content, perform calculations, remember key details, and handle conversation history, all in real time in the browser. Through Streamlit's intuitive interface, users can interact with the multi-agent system with minimal code and maximum flexibility.

🌐 First, lay the foundation of the AI assistant app by installing the required Python and Node.js packages. These include Streamlit for the frontend, LangChain for the agent logic, and tools such as Wikipedia, DuckDuckGo, and ngrok/localtunnel for external search and hosting.

🔑 Next, configure the environment by setting the Google Gemini API key and the ngrok authentication token. Assign these credentials to variables and set GOOGLE_API_KEY so the LangChain agent can securely access the Gemini model during execution.

🛠️ Then, define the InnovativeAgentTools class to equip the AI agent with specialized capabilities. These include a calculator for safe expression evaluation, memory tools for saving and recalling information, and a date-and-time tool. Together they let the Streamlit AI agent reason, remember, and respond contextually, much like a real assistant.

🤖 Finally, build the MultiAgentSystem class, the core of the application. Integrate the Gemini Pro model through LangChain and initialize all essential tools, including web search, memory, and calculator functions. Configure a ReAct-style agent with a custom prompt that guides tool use and memory handling, and define a chat method that lets the agent process user input, invoke tools when needed, and generate intelligent, context-aware responses.

In this tutorial, we’ll build a powerful and interactive Streamlit application that brings together the capabilities of LangChain, the Google Gemini API, and a suite of advanced tools to create a smart AI assistant. Using Streamlit’s intuitive interface, we’ll create a chat-based system that can search the web, fetch Wikipedia content, perform calculations, remember key details, and handle conversation history, all in real time. Whether we’re developers, researchers, or just exploring AI, this setup allows us to interact with a multi-agent system directly from the browser with minimal code and maximum flexibility.

!pip install -q streamlit langchain langchain-google-genai langchain-community
!pip install -q pyngrok python-dotenv wikipedia duckduckgo-search
!npm install -g localtunnel

import streamlit as st
import os
import asyncio
import threading
import time
import json
from datetime import datetime
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate
from langchain.callbacks.streamlit import StreamlitCallbackHandler
from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper

We begin by installing all the necessary Python and Node.js packages required for our AI assistant app. This includes Streamlit for the frontend, LangChain for agent logic, and tools like Wikipedia, DuckDuckGo, and ngrok/localtunnel for external search and hosting. Once set up, we import all modules to start building our interactive multi-tool AI agent.
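
LangChain's module layout shifts between releases, so if any of these imports fail it is worth confirming exactly which versions were actually installed. Here is a quick, standard-library-only sanity check (the package list simply mirrors the pip commands above):

from importlib.metadata import version, PackageNotFoundError

# Report the installed version of each dependency used in this tutorial.
for pkg in ["streamlit", "langchain", "langchain-google-genai",
            "langchain-community", "duckduckgo-search", "pyngrok"]:
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")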

# Placeholders must match the sentinel strings checked later in the code
# (create_streamlit_app and run_in_colab compare against these exact values).
GOOGLE_API_KEY = "your-gemini-api-key-here"      # replace with your Gemini API key
NGROK_AUTH_TOKEN = "your-ngrok-auth-token-here"  # replace with your ngrok authtoken

os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY

Next, we configure our environment by setting the Google Gemini API key and the ngrok authentication token. We assign these credentials to variables and set the GOOGLE_API_KEY so the LangChain agent can securely access the Gemini model during execution.
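
Since the install step already pulls in python-dotenv, a safer alternative to hardcoding credentials is to load them from a local .env file that stays out of version control. A minimal sketch; the .env file and its variable names are our own convention, not part of the original notebook:

import os
from dotenv import load_dotenv

# .env (same directory, never committed):
#   GOOGLE_API_KEY=...
#   NGROK_AUTH_TOKEN=...
load_dotenv()  # copies the entries from .env into os.environ

GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY", "your-gemini-api-key-here")
NGROK_AUTH_TOKEN = os.getenv("NGROK_AUTH_TOKEN", "your-ngrok-auth-token-here")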

class InnovativeAgentTools:
    """Advanced tool collection for the multi-agent system"""

    @staticmethod
    def get_calculator_tool():
        def calculate(expression: str) -> str:
            """Calculate mathematical expressions safely"""
            try:
                allowed_chars = set('0123456789+-*/.() ')
                if all(c in allowed_chars for c in expression):
                    result = eval(expression)
                    return f"Result: {result}"
                else:
                    return "Error: Invalid mathematical expression"
            except Exception as e:
                return f"Calculation error: {str(e)}"

        return Tool(
            name="Calculator",
            func=calculate,
            description="Calculate mathematical expressions. Input should be a valid math expression."
        )

    @staticmethod
    def get_memory_tool(memory_store):
        def save_memory(key_value: str) -> str:
            """Save information to memory"""
            try:
                key, value = key_value.split(":", 1)
                memory_store[key.strip()] = value.strip()
                return f"Saved '{key.strip()}' to memory"
            except:
                return "Error: Use format 'key: value'"

        def recall_memory(key: str) -> str:
            """Recall information from memory"""
            return memory_store.get(key.strip(), f"No memory found for '{key}'")

        return [
            Tool(name="SaveMemory", func=save_memory,
                 description="Save information to memory. Format: 'key: value'"),
            Tool(name="RecallMemory", func=recall_memory,
                 description="Recall saved information. Input: key to recall")
        ]

    @staticmethod
    def get_datetime_tool():
        def get_current_datetime(format_type: str = "full") -> str:
            """Get current date and time"""
            now = datetime.now()
            if format_type == "date":
                return now.strftime("%Y-%m-%d")
            elif format_type == "time":
                return now.strftime("%H:%M:%S")
            else:
                return now.strftime("%Y-%m-%d %H:%M:%S")

        return Tool(
            name="DateTime",
            func=get_current_datetime,
            description="Get current date/time. Options: 'date', 'time', or 'full'"
        )

Here, we define the InnovativeAgentTools class to equip our AI agent with specialized capabilities. We implement tools such as a Calculator for safe expression evaluation, Memory Tools to save and recall information across turns, and a date and time tool to fetch the current date and time. These tools enable our Streamlit AI agent to reason, remember, and respond contextually, much like a true assistant. Check out the full Notebook here
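
Since each Tool just wraps a plain Python function, we can smoke-test the toolbox without starting the agent or the UI. A quick check, assuming the class above has been defined:

# Call the wrapped functions directly via each Tool's func attribute.
calc = InnovativeAgentTools.get_calculator_tool()
print(calc.func("15 * 8 + 32"))                  # Result: 152

store = {}
save_tool, recall_tool = InnovativeAgentTools.get_memory_tool(store)
print(save_tool.func("favorite color: blue"))    # Saved 'favorite color' to memory
print(recall_tool.func("favorite color"))        # blue

dt_tool = InnovativeAgentTools.get_datetime_tool()
print(dt_tool.func("time"))                      # e.g. 14:32:08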

class MultiAgentSystem:
    """Innovative multi-agent system with specialized capabilities"""

    def __init__(self, api_key: str):
        self.llm = ChatGoogleGenerativeAI(
            model="gemini-pro",
            google_api_key=api_key,
            temperature=0.7,
            convert_system_message_to_human=True
        )
        self.memory_store = {}
        self.conversation_memory = ConversationBufferWindowMemory(
            memory_key="chat_history",
            k=10,
            return_messages=True
        )
        self.tools = self._initialize_tools()
        self.agent = self._create_agent()

    def _initialize_tools(self):
        """Initialize all available tools"""
        tools = []

        tools.extend([
            DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
            WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
        ])

        tools.append(InnovativeAgentTools.get_calculator_tool())
        tools.append(InnovativeAgentTools.get_datetime_tool())
        tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))

        return tools

    def _create_agent(self):
        """Create the ReAct agent with advanced prompt"""
        # Note: create_react_agent validates that the prompt contains {tools},
        # {tool_names}, and {agent_scratchpad}, so {tool_names} is listed in
        # the tool-usage instructions below.
        prompt = PromptTemplate.from_template("""
You are an advanced AI assistant with access to multiple tools and persistent memory.

AVAILABLE TOOLS:
{tools}

TOOL USAGE FORMAT:
- Think step by step about what you need to do
- Use Action: tool_name (one of [{tool_names}])
- Use Action Input: your input
- Wait for Observation
- Continue until you have a final answer

MEMORY CAPABILITIES:
- You can save important information using SaveMemory
- You can recall previous information using RecallMemory
- Always try to remember user preferences and context

CONVERSATION HISTORY:
{chat_history}

CURRENT QUESTION: {input}

REASONING PROCESS:
{agent_scratchpad}

Begin your response with your thought process, then take action if needed.
""")

        agent = create_react_agent(self.llm, self.tools, prompt)
        return AgentExecutor(
            agent=agent,
            tools=self.tools,
            memory=self.conversation_memory,
            verbose=True,
            handle_parsing_errors=True,
            max_iterations=5
        )

    def chat(self, message: str, callback_handler=None):
        """Process user message and return response"""
        try:
            if callback_handler:
                response = self.agent.invoke(
                    {"input": message},
                    {"callbacks": [callback_handler]}
                )
            else:
                response = self.agent.invoke({"input": message})
            return response["output"]
        except Exception as e:
            return f"Error processing request: {str(e)}"

In this section, we build the core of our application, the MultiAgentSystem class. Here, we integrate the Gemini Pro model using LangChain and initialize all essential tools, including web search, memory, and calculator functions. We configure a ReAct-style agent using a custom prompt that guides tool usage and memory handling. Finally, we define a chat method that allows the agent to process user input, invoke tools when necessary, and generate intelligent, context-aware responses. Check out the full Notebook here
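
Nothing in MultiAgentSystem depends on Streamlit, so we can exercise it headlessly in a notebook cell before wiring up the UI. A minimal usage sketch, assuming GOOGLE_API_KEY holds a real key (the exact responses will vary between runs):

# Drive the agent directly, without any frontend.
system = MultiAgentSystem(GOOGLE_API_KEY)

print(system.chat("Calculate 15 * 8 + 32"))          # should use Calculator
print(system.chat("Remember that my name is Alex"))  # should use SaveMemory
print(system.chat("What is my name?"))               # should use RecallMemory
print(system.memory_store)                           # inspect what was saved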

def create_streamlit_app():
    """Create the innovative Streamlit application"""

    st.set_page_config(
        page_title="Advanced LangChain Agent with Gemini",
        page_icon="",
        layout="wide",
        initial_sidebar_state="expanded"
    )

    st.markdown("""
    <style>
    .main-header {
        background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
        padding: 1rem;
        border-radius: 10px;
        color: white;
        text-align: center;
        margin-bottom: 2rem;
    }
    .agent-response {
        background-color: #f0f2f6;
        padding: 1rem;
        border-radius: 10px;
        border-left: 4px solid #667eea;
        margin: 1rem 0;
    }
    .memory-card {
        background-color: #e8f4fd;
        padding: 1rem;
        border-radius: 8px;
        margin: 0.5rem 0;
    }
    </style>
    """, unsafe_allow_html=True)

    st.markdown("""
    <div class="main-header">
        <h1>Advanced Multi-Agent System</h1>
        <p>Powered by LangChain + Gemini API + Streamlit</p>
    </div>
    """, unsafe_allow_html=True)

    with st.sidebar:
        st.header("Configuration")

        api_key = st.text_input(
            "Google AI API Key",
            type="password",
            value=GOOGLE_API_KEY if GOOGLE_API_KEY != "your-gemini-api-key-here" else "",
            help="Get your API key from https://ai.google.dev/"
        )

        if not api_key:
            st.error("Please enter your Google AI API key to continue")
            st.stop()

        st.success("API Key configured")

        st.header("Agent Capabilities")
        st.markdown("""
        - **Web Search** (DuckDuckGo)
        - **Wikipedia Lookup**
        - **Mathematical Calculator**
        - **Persistent Memory**
        - **Date & Time**
        - **Conversation History**
        """)

        if 'agent_system' in st.session_state:
            st.header("Memory Store")
            memory = st.session_state.agent_system.memory_store
            if memory:
                for key, value in memory.items():
                    st.markdown(f"""
                    <div class="memory-card">
                        <strong>{key}:</strong> {value}
                    </div>
                    """, unsafe_allow_html=True)
            else:
                st.info("No memories stored yet")

    if 'agent_system' not in st.session_state:
        with st.spinner("Initializing Advanced Agent System..."):
            st.session_state.agent_system = MultiAgentSystem(api_key)
        st.success("Agent System Ready!")

    st.header("Interactive Chat")

    if 'messages' not in st.session_state:
        st.session_state.messages = [{
            "role": "assistant",
            "content": """Hello! I'm your advanced AI assistant powered by Gemini.

I can:
• Search the web and Wikipedia for information
• Perform mathematical calculations
• Remember important information across our conversation
• Provide current date and time
• Maintain conversation context

Try asking me something like:
- "Calculate 15 * 8 + 32"
- "Search for recent news about AI"
- "Remember that my favorite color is blue"
- "What's the current time?"
"""
        }]

    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    if prompt := st.chat_input("Ask me anything..."):
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.markdown(prompt)

        with st.chat_message("assistant"):
            callback_handler = StreamlitCallbackHandler(st.container())

            with st.spinner("Thinking..."):
                response = st.session_state.agent_system.chat(prompt, callback_handler)

            st.markdown(f"""
            <div class="agent-response">
                {response}
            </div>
            """, unsafe_allow_html=True)

            st.session_state.messages.append({"role": "assistant", "content": response})

    st.header("Example Queries")
    col1, col2, col3 = st.columns(3)

    with col1:
        if st.button("Search Example"):
            st.session_state.example_query = "Search for the latest developments in quantum computing"

    with col2:
        if st.button("Math Example"):
            st.session_state.example_query = "Calculate the compound interest on $1000 at 5% for 3 years"

    with col3:
        if st.button("Memory Example"):
            st.session_state.example_query = "Remember that I work as a data scientist at TechCorp"

    if 'example_query' in st.session_state:
        st.info(f"Example query: {st.session_state.example_query}")

In this section, we bring everything together by building an interactive web interface using Streamlit. We configure the app layout, define custom CSS styles, and set up a sidebar for inputting API keys and configuring agent capabilities. We initialize the multi-agent system, maintain a message history, and enable a chat interface that allows users to interact in real time. To make it even easier to explore, we also provide example buttons for search, math, and memory-related queries, all in a cleanly styled, responsive UI. Check out the full Notebook here
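
If the full app feels dense, the chat mechanics boil down to one small Streamlit pattern: keep the message list in st.session_state, replay it on every rerun, and append new turns from st.chat_input. Here is a stripped-down echo version of that loop, with the agent call replaced by a placeholder:

import streamlit as st

# History must live in session_state because Streamlit reruns the whole
# script on every user interaction.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the entire conversation on each rerun.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

# st.chat_input returns the new message once, then the script reruns.
if prompt := st.chat_input("Say something..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    reply = f"Echo: {prompt}"  # the real app calls agent_system.chat(prompt) here
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.markdown(reply)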

def setup_ngrok_auth(auth_token):
    """Setup ngrok authentication"""
    try:
        from pyngrok import ngrok, conf

        conf.get_default().auth_token = auth_token

        try:
            tunnels = ngrok.get_tunnels()
            print("Ngrok authentication successful!")
            return True
        except Exception as e:
            print(f"Ngrok authentication failed: {e}")
            return False

    except ImportError:
        print("pyngrok not installed. Installing...")
        import subprocess
        subprocess.run(['pip', 'install', 'pyngrok'], check=True)
        return setup_ngrok_auth(auth_token)


def get_ngrok_token_instructions():
    """Provide instructions for getting ngrok token"""
    return """
NGROK AUTHENTICATION SETUP:

1. Sign up for an ngrok account:
   - Visit: https://dashboard.ngrok.com/signup
   - Create a free account

2. Get your authentication token:
   - Go to: https://dashboard.ngrok.com/get-started/your-authtoken
   - Copy your authtoken

3. Replace 'your-ngrok-auth-token-here' in the code with your actual token

4. Alternative methods if ngrok fails:
   - Use Google Colab's built-in public URL feature
   - Use localtunnel: !npx localtunnel --port 8501
   - Use serveo.net: !ssh -R 80:localhost:8501 serveo.net
"""

Here, we set up a helper function to authenticate ngrok, which allows us to expose our local Streamlit app to the internet. We use the pyngrok library to configure the authentication token and verify the connection. If the token is missing or invalid, we provide detailed instructions on how to obtain one and suggest alternative tunneling methods, such as LocalTunnel or Serveo, making it easy for us to host and share our app from environments like Google Colab.
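
Stripped of its error handling, the helper's happy path is only a few lines of pyngrok. The condensed sketch below assumes NGROK_AUTH_TOKEN holds a valid token and that a Streamlit server is already listening on port 8501:

from pyngrok import ngrok, conf

conf.get_default().auth_token = NGROK_AUTH_TOKEN
tunnel = ngrok.connect(8501)   # returns an NgrokTunnel object
print(tunnel.public_url)       # the shareable https URL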

def main():
    """Main function to run the application"""
    try:
        create_streamlit_app()
    except Exception as e:
        st.error(f"Application error: {str(e)}")
        st.info("Please check your API key and try refreshing the page")

This main() function acts as the entry point for our Streamlit application. We simply call create_streamlit_app() to launch the full interface. If anything goes wrong, such as a missing API key or a failed tool initialization, we catch the error gracefully and display a helpful message, ensuring the user knows how to recover and continue using the app smoothly.
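
Outside Colab, none of the tunneling machinery below is needed: save everything up to this point into a single script (say app.py, our own file name) and let the __main__ guard at the end of the tutorial route to main(). Then serve it directly from a terminal:

streamlit run app.py --server.port 8501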

def run_in_colab():
    """Run the application in Google Colab with proper ngrok setup"""

    print("Starting Advanced LangChain Agent Setup...")

    if NGROK_AUTH_TOKEN == "your-ngrok-auth-token-here":
        print("NGROK_AUTH_TOKEN not configured!")
        print(get_ngrok_token_instructions())

        print("Attempting alternative tunnel methods...")
        try_alternative_tunnels()
        return

    print("Installing required packages...")
    import subprocess

    packages = [
        'streamlit',
        'langchain',
        'langchain-google-genai',
        'langchain-community',
        'wikipedia',
        'duckduckgo-search',
        'pyngrok'
    ]

    for package in packages:
        try:
            subprocess.run(['pip', 'install', package], check=True, capture_output=True)
            print(f"{package} installed")
        except subprocess.CalledProcessError:
            print(f"Failed to install {package}")

    app_content = '''import streamlit as st
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate
from langchain.callbacks.streamlit import StreamlitCallbackHandler
from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper
from datetime import datetime

# Configuration - Replace with your actual keys
GOOGLE_API_KEY = "''' + GOOGLE_API_KEY + '''"
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY

class InnovativeAgentTools:
    @staticmethod
    def get_calculator_tool():
        def calculate(expression: str) -> str:
            try:
                allowed_chars = set('0123456789+-*/.() ')
                if all(c in allowed_chars for c in expression):
                    result = eval(expression)
                    return f"Result: {result}"
                else:
                    return "Error: Invalid mathematical expression"
            except Exception as e:
                return f"Calculation error: {str(e)}"

        return Tool(name="Calculator", func=calculate,
                    description="Calculate mathematical expressions. Input should be a valid math expression.")

    @staticmethod
    def get_memory_tool(memory_store):
        def save_memory(key_value: str) -> str:
            try:
                key, value = key_value.split(":", 1)
                memory_store[key.strip()] = value.strip()
                return f"Saved '{key.strip()}' to memory"
            except:
                return "Error: Use format 'key: value'"

        def recall_memory(key: str) -> str:
            return memory_store.get(key.strip(), f"No memory found for '{key}'")

        return [
            Tool(name="SaveMemory", func=save_memory, description="Save information to memory. Format: 'key: value'"),
            Tool(name="RecallMemory", func=recall_memory, description="Recall saved information. Input: key to recall")
        ]

    @staticmethod
    def get_datetime_tool():
        def get_current_datetime(format_type: str = "full") -> str:
            now = datetime.now()
            if format_type == "date":
                return now.strftime("%Y-%m-%d")
            elif format_type == "time":
                return now.strftime("%H:%M:%S")
            else:
                return now.strftime("%Y-%m-%d %H:%M:%S")

        return Tool(name="DateTime", func=get_current_datetime,
                    description="Get current date/time. Options: 'date', 'time', or 'full'")

class MultiAgentSystem:
    def __init__(self, api_key: str):
        self.llm = ChatGoogleGenerativeAI(
            model="gemini-pro",
            google_api_key=api_key,
            temperature=0.7,
            convert_system_message_to_human=True
        )
        self.memory_store = {}
        self.conversation_memory = ConversationBufferWindowMemory(
            memory_key="chat_history", k=10, return_messages=True
        )
        self.tools = self._initialize_tools()
        self.agent = self._create_agent()

    def _initialize_tools(self):
        tools = []
        try:
            tools.extend([
                DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
                WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
            ])
        except Exception as e:
            st.warning(f"Search tools may have limited functionality: {e}")

        tools.append(InnovativeAgentTools.get_calculator_tool())
        tools.append(InnovativeAgentTools.get_datetime_tool())
        tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))
        return tools

    def _create_agent(self):
        # create_react_agent also requires {tool_names} in the prompt
        prompt = PromptTemplate.from_template("""
You are an advanced AI assistant with access to multiple tools and persistent memory.

AVAILABLE TOOLS:
{tools}

TOOL USAGE FORMAT:
- Think step by step about what you need to do
- Use Action: tool_name (one of [{tool_names}])
- Use Action Input: your input
- Wait for Observation
- Continue until you have a final answer

CONVERSATION HISTORY:
{chat_history}

CURRENT QUESTION: {input}

REASONING PROCESS:
{agent_scratchpad}

Begin your response with your thought process, then take action if needed.
""")

        agent = create_react_agent(self.llm, self.tools, prompt)
        return AgentExecutor(agent=agent, tools=self.tools, memory=self.conversation_memory,
                             verbose=True, handle_parsing_errors=True, max_iterations=5)

    def chat(self, message: str, callback_handler=None):
        try:
            if callback_handler:
                response = self.agent.invoke({"input": message}, {"callbacks": [callback_handler]})
            else:
                response = self.agent.invoke({"input": message})
            return response["output"]
        except Exception as e:
            return f"Error processing request: {str(e)}"

# Streamlit App
st.set_page_config(page_title="Advanced LangChain Agent", page_icon="", layout="wide")

st.markdown("""<style>
.main-header {
    background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
    padding: 1rem; border-radius: 10px; color: white; text-align: center; margin-bottom: 2rem;
}
.agent-response {
    background-color: #f0f2f6; padding: 1rem; border-radius: 10px;
    border-left: 4px solid #667eea; margin: 1rem 0;
}
.memory-card {
    background-color: #e8f4fd; padding: 1rem; border-radius: 8px; margin: 0.5rem 0;
}
</style>""", unsafe_allow_html=True)

st.markdown('<div class="main-header"><h1>Advanced Multi-Agent System</h1><p>Powered by LangChain + Gemini API</p></div>', unsafe_allow_html=True)

with st.sidebar:
    st.header("Configuration")
    api_key = st.text_input("Google AI API Key", type="password", value=GOOGLE_API_KEY)

    if not api_key:
        st.error("Please enter your Google AI API key")
        st.stop()

    st.success("API Key configured")

    st.header("Agent Capabilities")
    st.markdown("- Web Search\\n- Wikipedia\\n- Calculator\\n- Memory\\n- Date/Time")

    if 'agent_system' in st.session_state and st.session_state.agent_system.memory_store:
        st.header("Memory Store")
        for key, value in st.session_state.agent_system.memory_store.items():
            st.markdown(f'<div class="memory-card"><strong>{key}:</strong> {value}</div>', unsafe_allow_html=True)

if 'agent_system' not in st.session_state:
    with st.spinner("Initializing Agent..."):
        st.session_state.agent_system = MultiAgentSystem(api_key)
    st.success("Agent Ready!")

if 'messages' not in st.session_state:
    st.session_state.messages = [{
        "role": "assistant",
        "content": "Hello! I'm your advanced AI assistant. I can search, calculate, remember information, and more! Try asking me to: calculate something, search for information, or remember a fact about you."
    }]

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("Ask me anything..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    with st.chat_message("assistant"):
        callback_handler = StreamlitCallbackHandler(st.container())
        with st.spinner("Thinking..."):
            response = st.session_state.agent_system.chat(prompt, callback_handler)
        st.markdown(f'<div class="agent-response">{response}</div>', unsafe_allow_html=True)
        st.session_state.messages.append({"role": "assistant", "content": response})

# Example buttons
st.header("Try These Examples")
col1, col2, col3 = st.columns(3)
with col1:
    if st.button("Calculate 15 * 8 + 32"):
        st.rerun()
with col2:
    if st.button("Search AI news"):
        st.rerun()
with col3:
    if st.button("Remember my name is Alex"):
        st.rerun()
'''

    with open('streamlit_app.py', 'w') as f:
        f.write(app_content)

    print("Streamlit app file created successfully!")

    if setup_ngrok_auth(NGROK_AUTH_TOKEN):
        start_streamlit_with_ngrok()
    else:
        print("Ngrok authentication failed. Trying alternative methods...")
        try_alternative_tunnels()

In the run_in_colab() function, we make it easy to deploy the Streamlit app directly from a Google Colab environment. We begin by installing all required packages, then dynamically generate the complete Streamlit app code and write it to a streamlit_app.py file. We verify that a valid ngrok token is present to enable public access to the app from Colab, and if it is missing or invalid, the function prints setup instructions and falls back to alternative tunneling options. This setup lets us interact with our AI agent from anywhere, all within a few Colab cells. Check out the full Notebook here

def start_streamlit_with_ngrok():
    """Start Streamlit with ngrok tunnel"""
    import subprocess
    import threading
    from pyngrok import ngrok

    def start_streamlit():
        subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])

    print("Starting Streamlit server...")
    thread = threading.Thread(target=start_streamlit)
    thread.daemon = True
    thread.start()

    time.sleep(5)

    try:
        print("Creating ngrok tunnel...")
        public_url = ngrok.connect(8501)
        print(f"SUCCESS! Access your app at: {public_url}")
        print("Your Advanced LangChain Agent is now running publicly!")
        print("You can share this URL with others!")

        print("Keeping tunnel alive... Press Ctrl+C to stop")
        try:
            ngrok_process = ngrok.get_ngrok_process()
            ngrok_process.proc.wait()
        except KeyboardInterrupt:
            print("Shutting down...")
            ngrok.kill()

    except Exception as e:
        print(f"Ngrok tunnel failed: {e}")
        try_alternative_tunnels()


def try_alternative_tunnels():
    """Try alternative tunneling methods"""
    print("Trying alternative tunnel methods...")

    import subprocess
    import threading

    def start_streamlit():
        subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])

    thread = threading.Thread(target=start_streamlit)
    thread.daemon = True
    thread.start()

    time.sleep(3)

    print("Streamlit is running on http://localhost:8501")
    print("\nALTERNATIVE TUNNEL OPTIONS:")
    print("1. localtunnel: Run this in a new cell:")
    print("   !npx localtunnel --port 8501")
    print("\n2. serveo.net: Run this in a new cell:")
    print("   !ssh -R 80:localhost:8501 serveo.net")
    print("\n3. Colab public URL (if available):")
    print("   Use the 'Public URL' button in Colab's interface")

    try:
        while True:
            time.sleep(60)
    except KeyboardInterrupt:
        print("Shutting down...")


if __name__ == "__main__":
    try:
        get_ipython()  # defined only inside IPython/Colab environments
        print("Google Colab detected - starting setup...")
        run_in_colab()
    except NameError:
        main()

In this final part, we set up the execution logic to run the app either in a local environment or inside Google Colab. The start_streamlit_with_ngrok() function launches the Streamlit server in the background and uses ngrok to expose it publicly, making it easy to access and share. If ngrok fails, the try_alternative_tunnels() function falls back to alternative tunneling options such as LocalTunnel and Serveo. With the __main__ block, we automatically detect whether we are in Colab and launch the appropriate setup, making the entire deployment process smooth, flexible, and shareable from anywhere.

In conclusion, we now have a fully functional AI agent running inside a sleek Streamlit interface, capable of answering queries, remembering user inputs, and even sharing its services publicly using ngrok. We have seen how easily Streamlit enables us to integrate advanced AI functionality into an engaging, user-friendly app. From here, we can expand the agent's tools, plug it into larger workflows, or deploy it as part of our own intelligent applications. With Streamlit as the frontend and LangChain agents powering the logic, we have built a solid foundation for next-generation interactive AI experiences.


Check out the full Notebook here. All credit for this research goes to the researchers of this project.

