MarkTechPost@AI July 19, 07:55
o1 Style Thinking with Chain-of-Thought Reasoning using Mirascope

This article shows how to implement Chain-of-Thought (CoT) reasoning using the Mirascope library and Groq's LLaMA 3 model. The CoT approach improves an AI model's accuracy and transparency by breaking complex problems into logical steps, which is especially valuable for multi-step tasks. The tutorial walks through setting up the data model, defining step-wise reasoning calls, generating the final answer, and visualizing the thinking process in a structured way. Using a relative-velocity problem about two trains meeting as the running example, it covers installing dependencies, configuring the API key, defining the Pydantic schema, building the reasoning functions, and generating and displaying the full reasoning trace.

🚄 **Core advantage of Chain-of-Thought (CoT) reasoning**: Rather than jumping straight to an answer, the CoT approach guides the AI model to decompose a complex problem into a sequence of logical steps. This step-by-step reasoning markedly improves accuracy and transparency on multi-step tasks and makes the model more reliable.

⚙️ **Integrating Mirascope with Groq**: The tutorial uses the Mirascope library with Groq's LLaMA 3 model to run CoT reasoning. A Pydantic model, `COTResult`, structures each reasoning step (a title, the step content, and a next-action flag), while the `groq.call` decorator defines the step-wise reasoning function `cot_step` and the final-answer function `final_answer`.

🚂 **The train-meeting example**: The article demonstrates CoT reasoning on a concrete word problem: "If a train leaves City A at 9:00 AM traveling at 60 km/h, and another train leaves City B, 300 km away, at 10:00 AM traveling at 90 km/h toward City A, when will the two trains meet?"

🔄 **Generating and displaying the CoT response**: The `generate_cot_response` function manages the iterative reasoning process, sending the user query to the model step by step and tracking each step's title, content, and response time until the model returns a final answer or hits the maximum step count. The `display_cot_response` function then presents the detailed thinking steps and total processing time to the user, making the AI's process more visible and easier to debug.

In this tutorial, we’ll explore how to implement Chain-of-Thought (CoT) reasoning using the Mirascope library and Groq’s LLaMA 3 model. Rather than having the model jump straight to an answer, CoT reasoning encourages it to break the problem down into logical steps—much like how a human would solve it. This approach improves accuracy, transparency, and helps tackle complex, multi-step tasks more reliably. We’ll guide you through setting up the schema, defining step-by-step reasoning calls, generating final answers, and visualizing the thinking process in a structured way.

We’ll be asking the LLM a relative velocity question: “If a train leaves City A at 9:00 AM traveling at 60 km/h, and another train leaves City B (which is 300 km away from City A) at 10:00 AM traveling at 90 km/h toward City A, at what time will the trains meet?”
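For reference, the expected answer can be checked by hand before involving the model; here is a minimal sketch of the arithmetic (the variable names are ours, not part of the tutorial):

# Train A travels alone from 9:00 to 10:00 AM
head_start_km = 60 * 1
# Distance remaining when train B departs
remaining_km = 300 - head_start_km                   # 240 km
# The trains approach each other, so their speeds add
closing_speed_kmh = 60 + 90                          # 150 km/h
hours_after_10 = remaining_km / closing_speed_kmh    # 1.6 h = 1 h 36 min
print(f"The trains meet {hours_after_10} h after 10:00 AM, i.e. at 11:36 AM")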

Installing the dependencies

!pip install "mirascope[groq]"

Note that the `datetime` module ships with Python's standard library, so it does not need to be installed separately.

Groq API Key

For this tutorial, we require a Groq API key to make LLM calls. You can get one at https://console.groq.com/keys
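The Groq client reads the key from the `GROQ_API_KEY` environment variable, so one way to supply it in a notebook is the snippet below (a minimal sketch; `getpass` keeps the key out of the notebook's output):

import os
from getpass import getpass

# The Groq client picks up GROQ_API_KEY from the environment
os.environ["GROQ_API_KEY"] = getpass("Enter your Groq API key: ")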

Importing the libraries & defining a Pydantic schema

This section imports the required libraries and defines a COTResult Pydantic model. The schema structures each reasoning step with a title, content, and a next_action flag to indicate whether the model should continue reasoning or return the final answer.

from typing import Literal
from datetime import datetime

from mirascope.core import groq
from pydantic import BaseModel, Field

# Shared conversation history for logging the final Q&A pair
history: list[dict] = []


class COTResult(BaseModel):
    title: str = Field(..., description="The title of the step")
    content: str = Field(..., description="The output content of the step")
    next_action: Literal["continue", "final_answer"] = Field(
        ..., description="The next action to take"
    )
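For illustration, here is what a single validated step could look like once parsed into the schema (the field values are hypothetical, not actual model output):

# A hypothetical step, constructed by hand to show the schema's shape
step = COTResult(
    title="Compute the head start",
    content="Train A travels alone for one hour, covering 60 km, so 240 km remain.",
    next_action="continue",
)
print(step.model_dump())  # {'title': ..., 'content': ..., 'next_action': 'continue'}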

Defining Step-wise Reasoning and Final Answer Functions

These functions form the core of the Chain-of-Thought (CoT) reasoning workflow. The cot_step function allows the model to think iteratively by reviewing prior steps and deciding whether to continue or conclude. This enables deeper reasoning, especially for multi-step problems. The final_answer function consolidates all reasoning into a single, focused response, making the output clean and ready for end-user consumption. Together, they help the model approach complex tasks more logically and transparently.

@groq.call("llama-3.3-70b-versatile", json_mode=True, response_model=COTResult)
def cot_step(prompt: str, step_number: int, previous_steps: str) -> str:
    return f"""
    You are an expert AI assistant that explains your reasoning step by step.
    For this step, provide a title that describes what you're doing, along with the content.
    Decide if you need another step or if you're ready to give the final answer.

    Guidelines:
    - Use AT MOST 5 steps to derive the answer.
    - Be aware of your limitations as an LLM and what you can and cannot do.
    - In your reasoning, include exploration of alternative answers.
    - Consider you may be wrong, and if you are wrong in your reasoning, where it would be.
    - Fully test all other possibilities.
    - YOU ARE ALLOWED TO BE WRONG. When you say you are re-examining
        - Actually re-examine, and use another approach to do so.
        - Do not just say you are re-examining.

    IMPORTANT: Do not use code blocks or programming examples in your reasoning. Explain your process in plain language.

    This is step number {step_number}.

    Question: {prompt}

    Previous steps:
    {previous_steps}
    """


@groq.call("llama-3.3-70b-versatile")
def final_answer(prompt: str, reasoning: str) -> str:
    return f"""
    Based on the following chain of reasoning, provide a final answer to the question.
    Only provide the text response without any titles or preambles.
    Retain any formatting as instructed by the original prompt, such as exact formatting for free response or multiple choice.

    Question: {prompt}

    Reasoning:
    {reasoning}

    Final Answer:
    """
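As a quick sanity check, `cot_step` can be called directly; because of `response_model=COTResult`, it returns a validated `COTResult` object rather than raw text (a minimal sketch, assuming the API key above is set; the sample question is ours):

# Direct call: returns a COTResult, not a plain string
first_step = cot_step("What is 15% of 240?", 1, "")
print(first_step.title)
print(first_step.next_action)  # "continue" or "final_answer"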

Generating and Displaying Chain-of-Thought Responses

This section defines two key functions that manage the full Chain-of-Thought reasoning loop:

- `generate_cot_response` runs the iterative loop: it calls `cot_step` with the query, the step number, and the accumulated previous steps; records each step's title, content, and thinking time; and stops once the model signals `final_answer` or a five-step cap is reached, then calls `final_answer` to produce the concluding response.
- `display_cot_response` prints each recorded step with its title, content, and per-step thinking time, followed by the total thinking time.

Together, these functions help visualize how the model reasons through a complex prompt and allow for better transparency and debugging of multi-step outputs.

def generate_cot_response(
    user_query: str,
) -> tuple[list[tuple[str, str, float]], float]:
    steps: list[tuple[str, str, float]] = []
    total_thinking_time: float = 0.0
    step_count: int = 1
    reasoning: str = ""
    previous_steps: str = ""

    while True:
        start_time: datetime = datetime.now()
        cot_result = cot_step(user_query, step_count, previous_steps)
        end_time: datetime = datetime.now()
        thinking_time: float = (end_time - start_time).total_seconds()

        steps.append(
            (
                f"Step {step_count}: {cot_result.title}",
                cot_result.content,
                thinking_time,
            )
        )
        total_thinking_time += thinking_time

        reasoning += f"\n{cot_result.content}\n"
        previous_steps += f"\n{cot_result.content}\n"

        if cot_result.next_action == "final_answer" or step_count >= 5:
            break

        step_count += 1

    # Generate final answer
    start_time = datetime.now()
    final_result: str = final_answer(user_query, reasoning).content
    end_time = datetime.now()
    thinking_time = (end_time - start_time).total_seconds()
    total_thinking_time += thinking_time

    steps.append(("Final Answer", final_result, thinking_time))

    return steps, total_thinking_time


def display_cot_response(
    steps: list[tuple[str, str, float]], total_thinking_time: float
) -> None:
    for title, content, thinking_time in steps:
        print(f"{title}:")
        print(content.strip())
        print(f"**Thinking time: {thinking_time:.2f} seconds**\n")

    print(f"**Total thinking time: {total_thinking_time:.2f} seconds**")

Running the Chain-of-Thought Workflow

The run function initiates the full Chain-of-Thought (CoT) reasoning process by sending a multi-step math word problem to the model. It begins by printing the user’s question, then uses generate_cot_response to compute a step-by-step reasoning trace. These steps, along with the total processing time, are displayed using display_cot_response.

Finally, the function logs both the question and the model’s final answer into a shared history list, preserving the full interaction for future reference or auditing. This function ties together all earlier components into a complete, user-facing reasoning flow.

def run() -> None:
    question: str = "If a train leaves City A at 9:00 AM traveling at 60 km/h, and another train leaves City B (which is 300 km away from City A) at 10:00 AM traveling at 90 km/h toward City A, at what time will the trains meet?"
    print("(User):", question)

    # Generate COT response
    steps, total_thinking_time = generate_cot_response(question)
    display_cot_response(steps, total_thinking_time)

    # Add the interaction to the history
    history.append({"role": "user", "content": question})
    history.append(
        {"role": "assistant", "content": steps[-1][1]}
    )  # Add only the final answer to the history


# Run the function
run()
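When run, the output follows the format printed by `display_cot_response`. An abridged trace is shown below purely for illustration; the actual step titles, content, and timings will vary from run to run:

(User): If a train leaves City A at 9:00 AM traveling at 60 km/h, and another train leaves City B (which is 300 km away from City A) at 10:00 AM traveling at 90 km/h toward City A, at what time will the trains meet?
Step 1: Understanding the problem:
Train A has a one-hour head start, covering 60 km, so 240 km separate the trains when train B departs at 10:00 AM...
**Thinking time: 0.84 seconds**

...

Final Answer:
The trains will meet at 11:36 AM.
**Thinking time: 0.52 seconds**

**Total thinking time: 3.17 seconds**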

Check out the code. All credit for this research goes to the researchers of this project.
