The Verge - Artificial Intelligence, July 12, 2024
Here’s how OpenAI will determine how powerful its AI systems are

 

OpenAI has developed an internal scale to track its large language models’ progress toward artificial general intelligence (AGI). Chatbots like ChatGPT currently sit at Level 1, and OpenAI says it is approaching Level 2, systems that can solve problems at the level of a person with a PhD. The scale is meant to make it clearer when AGI has been reached, though AGI remains a long way off and will require enormous computing power. The move has also prompted discussion of safety and values at OpenAI.

🚀 OpenAI’s internal scale divides AI development into five levels. Chatbots like ChatGPT currently sit at Level 1; Level 2 refers to systems that can solve problems at a PhD level; and Levels 3 through 5 cover AI that can carry out progressively more complex human activities, up to performing the work of an entire organization.

🔍 OpenAI defines AGI as “a highly autonomous system surpassing humans in most economically valuable tasks.” The company has pledged that if a value-aligned, safety-conscious project comes close to building AGI before it does, it will not compete with that project and will drop everything to assist.

💡 The scale provides a stricter standard for measuring progress toward AGI rather than relying on subjective interpretation. Reaching AGI, however, will still require enormous computing power, and experts disagree on the timeline.

🔒 OpenAI has also faced challenges on safety: the dissolution of its safety team raised outside concerns about its safety culture and processes, particularly as it pursues AGI.

🧪 OpenAI’s collaboration with Los Alamos National Laboratory aims to explore how advanced AI models like GPT-4o can safely assist bioscientific research, an important step for applying AI in scientific work.

Illustration: The Verge

OpenAI has created an internal scale to track the progress its large language models are making toward artificial general intelligence, or AI with human-like intelligence, a spokesperson told Bloomberg.

Today’s chatbots, like ChatGPT, are at Level 1. OpenAI claims it is nearing Level 2, defined as a system that can solve basic problems at the level of a person with a PhD. Level 3 refers to AI agents capable of taking actions on a user’s behalf. Level 4 involves AI that can create new innovations. Level 5, the final step to achieving AGI, is AI that can perform the work of entire organizations of people. OpenAI has previously defined AGI as “a highly autonomous system surpassing humans in most economically valuable tasks.”
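For readers who want the five levels at a glance, the sketch below lays them out as a simple lookup table. This is purely illustrative: the descriptions are paraphrased from this article, and the dictionary structure and names are assumptions of this summary, not OpenAI’s official terminology or format.

```python
# Illustrative sketch only: a minimal representation of the five-level scale
# as described in this article. Descriptions are paraphrased; the structure
# and names are not OpenAI's official terminology or data format.
OPENAI_AGI_SCALE = {
    1: "Chatbots: conversational AI such as today's ChatGPT",
    2: "Systems that solve basic problems at the level of a person with a PhD",
    3: "AI agents capable of taking actions on a user's behalf",
    4: "AI that can create new innovations",
    5: "AI that can perform the work of entire organizations of people",
}

def describe_level(level: int) -> str:
    """Return the paraphrased description for a given level (1-5)."""
    return OPENAI_AGI_SCALE[level]

if __name__ == "__main__":
    for level, description in OPENAI_AGI_SCALE.items():
        print(f"Level {level}: {description}")
```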

OpenAI’s unique structure is centered around its mission of achieving AGI, and how OpenAI defines AGI is important. The company has said that “if a value-aligned, safety-conscious project comes close to building AGI” before OpenAI does, it commits to not competing with the project and dropping everything to assist. The phrasing of this in OpenAI’s charter is vague, leaving room for the judgment of the for-profit entity (governed by the nonprofit), but a scale that OpenAI can test itself and competitors on could help dictate when AGI is reached in clearer terms.

Still, AGI is quite a ways away: it will take billions upon billions of dollars’ worth of computing power to reach AGI, if it is reached at all. Timelines from experts, and even within OpenAI, vary wildly. In October 2023, OpenAI CEO Sam Altman said we are “five years, give or take,” from reaching AGI.

This new grading scale, though still under development, was introduced a day after OpenAI announced its collaboration with Los Alamos National Laboratory, which aims to explore how advanced AI models like GPT-4o can safely assist in bioscientific research. A program manager at Los Alamos, responsible for the national security biology portfolio and instrumental in securing the OpenAI partnership, told The Verge that the goal is to test GPT-4o’s capabilities and establish a set of safety and other factors for the US government. Eventually, other organizations will be able to test their own public or private models against these factors.

In May, OpenAI dissolved its safety team after the group’s leader, OpenAI cofounder Ilya Sutskever, left the company. Jan Leike, a key OpenAI researcher, resigned shortly after, claiming in a post that “safety culture and processes have taken a backseat to shiny products” at the company. While OpenAI denied that was the case, some are concerned about what this means if the company does in fact reach AGI.

OpenAI hasn’t provided details on how it assigns models to these internal levels (and declined The Verge’s request for comment). However, company leaders demonstrated a research project using the GPT-4 AI model during an all-hands meeting on Thursday and believe this project showcases some new skills that exhibit human-like reasoning, according to Bloomberg.

This scale could help provide a strict definition of progress, rather than leaving it up for interpretation. For instance, OpenAI CTO Mira Murati said in an interview in June that the models in its labs are not much better than what the public has already. Meanwhile, CEO Sam Altman said late last year that the company recently “pushed the veil of ignorance back,” meaning the models are remarkably more intelligent.

