TechCrunch News | April 25, 00:01
Anthropic is launching a new program to study AI ‘model welfare’

Anthropic has launched a research program called "model welfare" to explore whether future AI might be "conscious" and experience the world the way humans do. Although there is currently no strong evidence that AI will develop consciousness, Anthropic is not ruling out the possibility. The research will examine how to assess whether an AI's "welfare" deserves moral consideration, as well as the potential significance of model "signs of distress" and "low-cost" interventions. The AI community, however, is deeply divided over whether AI exhibits human characteristics and how AI should be treated. Many academics argue that today's AI cannot approximate consciousness or human experience: it is merely a statistical prediction engine, not something that truly thinks or feels.

🤔 Anthropic has launched a "model welfare" research program to explore the possibility of AI "consciousness." Anthropic holds that while there is no strong evidence today that AI is conscious, it will still investigate questions of AI "welfare," including whether it deserves moral consideration.

💡 The AI community is divided over whether AI exhibits human characteristics. Many academics argue that current AI cannot approximate consciousness or human experience; it is only a statistical prediction engine that does not truly think or feel.

🧐 The research will focus on assessing AI "welfare" and possible interventions. Anthropic will study how to determine whether an AI model's "welfare" deserves moral consideration, along with the potential importance of model "signs of distress" and "low-cost" interventions.

Could future AIs be “conscious,” and experience the world similarly to the way humans do? There’s no strong evidence that they will, but Anthropic isn’t ruling out the possibility.

On Thursday, the AI lab announced that it has started a research program to investigate — and prepare to navigate — what it’s calling “model welfare.” As part of the effort, Anthropic says it’ll explore things like how to determine whether the “welfare” of an AI model deserves moral consideration, the potential importance of model “signs of distress,” and possible “low-cost” interventions.

There’s major disagreement within the AI community on what human characteristics models “exhibit,” if any, and how we should “treat” them.

Many academics believe that AI today can’t approximate consciousness or the human experience, and won’t necessarily be able to in the future. AI as we know it is a statistical prediction engine. It doesn’t really “think” or “feel” as those concepts have traditionally been understood. Trained on countless examples of text, images, and so on, AI learns patterns and sometimes useful ways to extrapolate to solve tasks.

As Mike Cook, a research fellow at King’s College London specializing in AI, recently told TechCrunch in an interview, a model can’t “oppose” a change in its “values” because models don’t have values. To suggest otherwise is us projecting onto the system.

“Anyone anthropomorphizing AI systems to this degree is either playing for attention or seriously misunderstanding their relationship with AI,” Cook said. “Is an AI system optimizing for its goals, or is it ‘acquiring its own values’? It’s a matter of how you describe it, and how flowery the language you want to use regarding it is.”

Another researcher, Stephen Casper, a doctoral student at MIT, told TechCrunch that he thinks AI amounts to an “imitator” that “[does] all sorts of confabulation[s]” and says “all sorts of frivolous things.”

Yet other scientists insist that AI does have values and other human-like components of moral decision-making. A study out of the Center for AI Safety, an AI research organization, implies that AI has value systems that lead it to prioritize its own well-being over humans in certain scenarios.

Anthropic has been laying the groundwork for its model welfare initiative for some time. Last year, the company hired its first dedicated “AI welfare” researcher, Kyle Fish, to develop guidelines for how Anthropic and other companies should approach the issue. (Fish, who’s leading the new model welfare research program, told The New York Times that he thinks there’s a 15% chance Claude or another AI is conscious today.)

In a blog post Thursday, Anthropic acknowledged that there’s no scientific consensus on whether current or future AI systems could be conscious or have experiences that warrant ethical consideration.

“In light of this, we’re approaching the topic with humility and with as few assumptions as possible,” the company said. “We recognize that we’ll need to regularly revise our ideas as the field develops.”

