Advanced Series on AI Large Model Applications (3): Streaming Output from Large Models

The newly released large language model "智语者" sets a new benchmark in text generation. It adopts an innovative Transformer architecture and a distinctive attention mechanism, markedly improving long-context understanding and the coherence and logic of its replies. Compared with existing models, "智语者" delivers excellent performance in creative writing, code generation, and multilingual translation. Through large-scale pretraining and careful fine-tuning, the model captures subtle nuances of language and user intent, and possesses a degree of reasoning ability, drawing logical inferences from fragments of knowledge to produce insightful content.

🚀 Core technical breakthrough: the "智语者" model is built on an advanced Transformer architecture and introduces an innovative attention mechanism, which forms the technical foundation of its ability to understand complex long-form context and generate high-quality, coherent replies.

✨ Across-the-board performance gains: the model achieves significant leaps on several key tasks, including creative writing, code generation, and multilingual translation, surpassing existing models of its kind and demonstrating broad application potential.

🧠 Deep understanding and reasoning: through large-scale pretraining and fine-grained fine-tuning, the "智语者" model grasps linguistic nuance and the user's underlying intent, and has a measure of logical reasoning ability, allowing it to derive deeper insights from fragments of information.

Streaming Output from a Model with Thinking Capability

Returned Data

{
    "choices": [
        {
            "delta": {
                "content": "xxx",
                "reasoning_content": "xxx",
                "role": "assistant"
            },
            "index": 0
        }
    ],
    "created": xxx,
    "id": "xxx",
    "model": "xxx",
    "service_tier": "default",
    "object": "chat.completion.chunk",
    "usage": null
}
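
Every line of the event stream carries one chunk in the format above, prefixed with "data: ", and the stream terminates with a "data: [DONE]" sentinel. The following is a minimal sketch, assuming exactly this field layout (the parse_chunk helper is illustrative, not part of any SDK), of how a single line can be classified as reasoning text, answer text, or the end marker:

import json

def parse_chunk(line):
    # Classify one SSE line as reasoning text, answer text, the end marker, or nothing useful.
    if not line.startswith("data: "):
        return None  # blank keep-alive line or comment
    body = line[len("data: "):].strip()
    if body == "[DONE]":
        return ("done", "")
    delta = json.loads(body)["choices"][0]["delta"]
    # A non-empty reasoning_content marks the thinking phase; otherwise this is answer content.
    if delta.get("reasoning_content"):
        return ("reasoning", delta["reasoning_content"])
    return ("content", delta.get("content") or "")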

Code Logic

import sys
import json
import requests

# Model configuration
_ai_config = {
    "model": "deepseek-reasoner",
    "url": "https://api.deepseek.com/chat/completions",
    "key": "your deepseek key",
}

# Called when the model starts thinking
def _on_think_start():
    print("think start")

# Called when the model stops thinking
def _on_think_end():
    print("think end")

# Called for each reasoning chunk while the model is thinking
def _on_thinking(chunk_text):
    sys.stdout.write(chunk_text)
    sys.stdout.flush()

# Called for each answer chunk as it streams in
def _on_receiving(full_text, chunk_text):
    sys.stdout.write(chunk_text)
    sys.stdout.flush()

# Called once the full answer has been received
def _on_finish(full_text):
    print("finish: " + full_text)

# Streaming call
def chat_stream(
    histories,
    ai_config=None,
    on_receiving=None,
    on_finish=None,
    on_thinking=None,
    on_think_start=None,
    on_think_end=None,
    response_format="text",
):
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {ai_config['key']}",
    }
    payload = {
        "model": ai_config["model"],
        "messages": histories,
        "response_format": {"type": response_format},
        "stream": True,
    }
    full_content = ""
    try:
        # Send the POST request with stream=True to enable a streaming response
        with requests.post(
            ai_config["url"], headers=headers, json=payload, stream=True, timeout=60
        ) as response:
            # Raise on non-2xx status codes
            response.raise_for_status()
            # Force UTF-8 so Chinese text is not garbled
            response.encoding = "utf-8"
            is_thinking = False
            # Process the response line by line as it streams in
            for line in response.iter_lines(decode_unicode=True):
                if not line:
                    continue
                if line.startswith("data: ") and not line.startswith("data: [DONE]"):
                    data = json.loads(line[6:])
                    # Extract the delta (OpenAI-style chunk format is assumed)
                    if "choices" in data and len(data["choices"]) > 0:
                        delta = data["choices"][0].get("delta", {})
                        # A non-empty reasoning_content means the model is still thinking
                        current_thinking = bool(delta.get("reasoning_content"))
                        if current_thinking and not is_thinking and on_think_start is not None:
                            # Transition: thinking has just started
                            on_think_start()
                        if not current_thinking and is_thinking and on_think_end is not None:
                            # Transition: thinking has just ended
                            on_think_end()
                        is_thinking = current_thinking
                        if is_thinking:
                            if on_thinking is not None:
                                on_thinking(delta.get("reasoning_content", ""))
                            continue
                        content = delta.get("content") or ""
                        full_content += content
                        if on_receiving is not None:
                            on_receiving(full_content, content)
        if on_finish is not None:
            on_finish(full_content)
    except requests.exceptions.RequestException as e:
        print(f"Request error: {e}")
    except json.JSONDecodeError as e:
        print(f"JSON decode error: {e}")
    except Exception as e:
        print(f"Unexpected error: {e}")
    return full_content

# Call the model
chat_stream(
    ai_config=_ai_config,
    on_think_start=_on_think_start,
    on_think_end=_on_think_end,
    on_thinking=_on_thinking,
    on_receiving=_on_receiving,
    on_finish=_on_finish,
    histories=[
        {
            "role": "user",
            "content": "你好",
        }
    ],
)
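
As a follow-up usage sketch, the ask helper below (illustrative, not part of the original code) shows one way the chat_stream function above could drive a multi-turn conversation: each user turn and assistant reply is appended to histories, and only the accumulated answer text returned by chat_stream, never the reasoning text, is fed back as context for the next turn.

# Hypothetical multi-turn wrapper around chat_stream (illustrative only)
histories = []

def ask(question):
    histories.append({"role": "user", "content": question})
    answer = chat_stream(
        histories=histories,
        ai_config=_ai_config,
        on_receiving=_on_receiving,
        on_finish=_on_finish,
    )
    # Keep the assistant reply in the history so the next turn has full context
    histories.append({"role": "assistant", "content": answer})
    return answer

ask("你好")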
