TechCrunch News · April 15, 01:03
OpenAI’s new GPT-4.1 AI models focus on coding

OpenAI has launched a new GPT-4.1 family of models, comprising GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, focused on coding and instruction following. The models are available through OpenAI's API and have a 1-million-token context window, allowing them to take in large amounts of text. GPT-4.1 has been tuned for coding, with improvements to frontend work, format adherence, and consistent tool usage. Although it performs well on several benchmarks, it still trails rivals from Google and Anthropic on some of them, and its accuracy declines as inputs grow longer.

💻 OpenAI has launched the GPT-4.1 family (GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano), models built for coding and instruction following, offered through OpenAI's API rather than ChatGPT.

🧠 The GPT-4.1 models have a 1-million-token context window, roughly 750,000 words, which lets them handle much longer text inputs.

🚀 OpenAI tuned GPT-4.1 for the areas developers care about most: frontend coding, making fewer unnecessary edits, following formats reliably, adhering to response structure and ordering, and consistent tool usage.

📊 On the SWE-bench coding benchmark, GPT-4.1 scores slightly below Google's Gemini 2.5 Pro and Anthropic's Claude 3.7 Sonnet, while GPT-4.1 mini and nano offer gains in efficiency and speed.

⚠️ Despite its strengths, GPT-4.1 still struggles with long inputs and with fixing (or avoiding introducing) security vulnerabilities and bugs; its accuracy drops as the number of input tokens grows.

OpenAI on Monday launched a new family of models called GPT-4.1 . Yes, “4.1” — as if the company’s nomenclature wasn’t confusing enough already.

There’s GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, all of which OpenAI says “excel” at coding and instruction following. Available through OpenAI’s API but not ChatGPT, the multimodal models have a 1-million-token context window, meaning they can take in roughly 750,000 words in one go (more than “War and Peace”).
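Since the models are API-only, access goes through a standard API call. The sketch below, which assumes the official OpenAI Python SDK and the model identifiers from OpenAI's announcement, shows what that looks like in practice:

```python
# Minimal sketch: calling GPT-4.1 through the OpenAI API rather than ChatGPT.
# Assumes the official `openai` Python SDK and an OPENAI_API_KEY in the environment;
# the "gpt-4.1" family identifiers are taken from OpenAI's announcement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",  # or "gpt-4.1-mini" / "gpt-4.1-nano" for the smaller variants
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Refactor this function to remove the duplicated parsing logic: ..."},
    ],
)

print(response.choices[0].message.content)
```

The 1-million-token window applies to the prompt as a whole, so large codebases or long documents can be passed in a single request rather than split into chunks.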

GPT-4.1 arrives as OpenAI rivals like Google and Anthropic ratchet up efforts to build sophisticated programming models. Google’s recently released Gemini 2.5 Pro, which also has a 1-million-token context window, ranks highly on popular coding benchmarks. So do Anthropic’s Claude 3.7 Sonnet and Chinese AI startup DeepSeek’s upgraded V3.

It’s the goal of many tech giants, including OpenAI, to train AI coding models capable of performing complex software engineering tasks. OpenAI’s grand ambition is to create an “agentic software engineer,” as CFO Sarah Friar put it during a tech summit in London last month. The company asserts its future models will be able to program entire apps end-to-end, handling aspects such as quality assurance, bug testing, and documentation writing.

GPT-4.1 is a step in this direction.

“We’ve optimized GPT-4.1 for real-world use based on direct feedback to improve in areas that developers care most about: frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure and ordering, consistent tool usage, and more,” an OpenAI spokesperson told TechCrunch via email. “These improvements enable developers to build agents that are considerably better at real-world software engineering tasks.”

OpenAI claims the full GPT-4.1 model outperforms its GPT-4o and GPT-4o mini models on coding benchmarks including SWE-bench. GPT-4.1 mini and nano are said to be more efficient and faster at the cost of some accuracy, with OpenAI saying GPT-4.1 nano is its speediest — and cheapest — model ever.

GPT-4.1 costs $2 per million input tokens and $8 per million output tokens. GPT-4.1 mini is $0.40/M input tokens and $1.60/M output tokens, and GPT-4.1 nano is $0.10/M input tokens and $0.40/M output tokens.
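To make those rates concrete, here is a rough back-of-the-envelope cost calculation. The per-million prices are the ones quoted above; the token counts are invented example values, not measurements:

```python
# Back-of-the-envelope cost estimate for a single GPT-4.1 request.
# Rates are the per-million-token prices quoted in the article; the token
# counts below are arbitrary example values.
RATES = {  # (input $/1M tokens, output $/1M tokens)
    "gpt-4.1":      (2.00, 8.00),
    "gpt-4.1-mini": (0.40, 1.60),
    "gpt-4.1-nano": (0.10, 0.40),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the listed rates."""
    in_rate, out_rate = RATES[model]
    return input_tokens / 1_000_000 * in_rate + output_tokens / 1_000_000 * out_rate

# Example: a 50,000-token prompt producing a 4,000-token completion.
for model in RATES:
    print(f"{model}: ${request_cost(model, 50_000, 4_000):.4f}")
```

At these rates, the nano tier is 20 times cheaper than the full model on both input and output tokens.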

According to OpenAI’s internal testing, GPT-4.1, which can generate more tokens at once than GPT-4o (32,768 versus 16,384), scored between 52% and 54.6% on SWE-bench Verified, a human-validated subset of SWE-bench. (OpenAI noted in a blog post that some solutions to SWE-bench Verified problems couldn’t run on its infrastructure, hence the range of scores.) Those figures are slightly under the scores reported by Google and Anthropic for Gemini 2.5 Pro (63.8%) and Claude 3.7 Sonnet (62.3%), respectively, on the same benchmark.

In a separate evaluation, OpenAI probed GPT-4.1 using Video-MME, which is designed to measure the ability of a model to “understand” content in videos. GPT-4.1 reached a chart-topping 72% accuracy on the “long, no subtitles” video category, claims OpenAI.

While GPT-4.1 scores reasonably well on benchmarks and has a more recent “knowledge cutoff,” giving it a better frame of reference for current events (up to June 2024), it’s important to keep in mind that even some of the best models today struggle with tasks that wouldn’t trip up experts. For example, many studies have shown that code-generating models often fail to fix, and even introduce, security vulnerabilities and bugs.

OpenAI acknowledges, too, that GPT-4.1 becomes less reliable (i.e., likelier to make mistakes) the more input tokens it has to deal with. On one of the company’s own tests, OpenAI-MRCR, the model’s accuracy dropped from around 84% with 8,000 input tokens to roughly 50% at 1 million tokens. GPT-4.1 also tended to be more “literal” than GPT-4o, the company says, sometimes necessitating more specific, explicit prompts.
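That literalness mostly shows up in prompt wording: constraints that GPT-4o might infer need to be stated outright. A rough illustration follows; the prompt text is invented for the example and is not OpenAI's guidance verbatim:

```python
# Illustration of the "more explicit prompts" point: the second prompt spells out
# constraints that an earlier model might have inferred on its own.
# Both prompts are invented for this example.
vague_prompt = "Clean up this script."

explicit_prompt = (
    "Clean up this script. Keep all public function names and signatures unchanged, "
    "do not add new dependencies, preserve the existing CLI flags, and return only "
    "the full updated file with no commentary."
)
```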
