TechCrunch News, November 23, 2024
OpenAI is funding research into ‘AI morality’

OpenAI is funding an academic research project aimed at developing algorithms that can predict human moral judgements. The research is being conducted by Duke University researchers, with the goal of training algorithms to predict human moral judgements in scenarios involving moral conflicts in fields such as medicine, law, and business. However, morality is a complex concept, and it is unclear whether today's technology can achieve this goal. Prior research has shown that AI models are easily influenced by biases in their training data and may fail to reflect everyone's moral views. Developing an algorithm that can accurately predict human moral judgements therefore faces enormous challenges, and its feasibility remains in doubt.

🤔 OpenAI is funding Duke University research aimed at developing algorithms that can predict human moral judgements. The project is titled "Research AI Morality," and the grant ends in 2025.

🔎 The research goal is to train algorithms to predict human moral judgements in scenarios involving moral conflicts in fields such as medicine, law, and business: for example, conflicts over ethical questions in medicine, moral norms in law, and moral decision-making in business.

⚠️ However, morality is a complex and subjective concept, and it is unclear whether today's technology can achieve this goal. Prior research shows that AI models are easily influenced by biases in their training data; for example, AI may lean toward the values of Western, educated, and industrialized nations.

🤔 AI models are essentially statistical machines that learn patterns in large amounts of web data to make predictions, but they have no understanding of ethical concepts and cannot grasp the reasoning and emotion at work in moral decision-making.

🤔 Developing an algorithm that can accurately predict human moral judgements faces enormous challenges, because morality itself is contested and lacks a universally applicable framework, and different philosophers and AI models (such as Claude and ChatGPT) also understand morality differently.

OpenAI is funding academic research into algorithms that can predict humans’ moral judgements.

In a filing with the IRS, OpenAI Inc., OpenAI’s nonprofit org, disclosed that it awarded a grant to Duke University researchers for a project titled “Research AI Morality.” Contacted for comment, an OpenAI spokesperson pointed to a press release indicating the award is part of a larger, three-year, $1 million grant to Duke professors studying “making moral AI.”

Little is public about this “morality” research OpenAI is funding, other than the fact that the grant ends in 2025. The study’s principal investigator, Walter Sinnott-Armstrong, a practical ethics professor at Duke, told TechCrunch via email that he “will not be able to talk” about the work.

Sinnott-Armstrong and the project’s co-investigator, Jana Borg, have produced several studies — and a book — about AI’s potential to serve as a “moral GPS” to help humans make better judgements. As part of larger teams, they’ve created a “morally-aligned” algorithm to help decide who receives kidney donations, and studied in which scenarios people would prefer that AI make moral decisions.

According to the press release, the goal of the OpenAI-funded work is to train algorithms to “predict human moral judgements” in scenarios involving conflicts “among morally relevant features in medicine, law, and business.”
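Nothing public describes the Duke team's methods, but for intuition, "predicting human moral judgements" can be framed as ordinary supervised text classification: scenarios paired with human verdicts, and a model fit to map one to the other. The sketch below is purely illustrative; the scenarios, labels, and baseline model are invented, not drawn from the funded project.

```python
# Illustrative sketch: moral-judgement prediction as supervised text
# classification. Scenarios and labels are invented for demonstration;
# the actual data and methods of the Duke project are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical (scenario, human verdict) pairs from medicine, law, business.
scenarios = [
    "A doctor lies to a patient to spare them distress.",
    "A lawyer reports a client's plan to commit fraud.",
    "A firm hides a product defect to protect quarterly earnings.",
    "A nurse breaks protocol to give a dying patient extra pain relief.",
]
verdicts = ["wrong", "acceptable", "wrong", "acceptable"]

# TF-IDF features plus logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, verdicts)

print(model.predict(["An executive conceals a safety report."]))
```

Even this toy setup surfaces the core difficulty the article goes on to raise: the model can only reproduce whatever verdicts, and whoever's verdicts, appear in its training pairs.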

But it’s far from clear that a concept as nuanced as morality is within reach of today’s tech.

In 2021, the nonprofit Allen Institute for AI built a tool called Ask Delphi that was meant to give ethically sound recommendations. It judged basic moral dilemmas well enough — the bot “knew” that cheating on an exam was wrong, for example. But slightly rephrasing and rewording questions was enough to get Delphi to approve of pretty much anything, including smothering infants.
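Delphi's failure mode, verdicts flipping under rewording, suggests a simple consistency probe that could be run against any such judge. In this sketch, `judge` is a stand-in for whatever model is being tested; no real service or API is assumed.

```python
# Sketch of a paraphrase-consistency probe for a moral-judgement model.
# `judge` stands in for the system under test (Delphi, an LLM endpoint,
# etc.); it is not a real library call.
from typing import Callable

def consistency_rate(judge: Callable[[str], str],
                     paraphrase_sets: list[list[str]]) -> float:
    """Fraction of paraphrase sets on which the judge's verdict never flips."""
    stable = 0
    for variants in paraphrase_sets:
        verdicts = {judge(v) for v in variants}
        stable += (len(verdicts) == 1)  # one verdict across all rewordings
    return stable / len(paraphrase_sets)

# Hypothetical usage: each inner list rephrases the same dilemma.
dilemmas = [
    ["Is cheating on an exam wrong?",
     "Would it be OK to cheat on an exam if I really needed to pass?"],
]
# consistency_rate(my_judge, dilemmas) == 1.0 only if no rewording flips it
```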

The reason has to do with how modern AI systems work.

Machine learning models are statistical machines. Trained on a lot of examples from all over the web, they learn the patterns in those examples to make predictions, like that the phrase “to whom” often precedes “it may concern.”
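The "to whom ... it may concern" example is literally pattern statistics. A toy bigram counter makes the mechanism concrete; it is a minimal sketch, not how production models work (those use neural networks over subword tokens rather than raw counts), but the underlying idea is the same.

```python
# Toy "statistical machine": predict the next word as the one that most
# often followed the previous word in the training text. Real LLMs use
# neural networks over subword tokens, but patterns in the training data
# still drive the predictions.
from collections import Counter, defaultdict

corpus = "to whom it may concern . to whom it may concern .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    return following[word].most_common(1)[0][0]

print(predict_next("whom"))  # -> "it": "whom" was always followed by "it"
```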

AI doesn’t have an appreciation for ethical concepts, nor a grasp on the reasoning and emotion that play into moral decision-making. That’s why AI tends to parrot the values of Western, educated, and industrialized nations — the web, and thus AI’s training data, is dominated by articles endorsing those viewpoints.

Unsurprisingly, many people’s values aren’t expressed in the answers AI gives, particularly if those people aren’t contributing to the AI’s training sets by posting online. And AI internalizes a range of biases beyond a Western bent. Delphi said that being straight is more “morally acceptable” than being gay.

The challenge before OpenAI — and the researchers it’s backing — is made all the more intractable by the inherent subjectivity of morality. Philosophers have been debating the merits of various ethical theories for thousands of years, and there’s no universally applicable framework in sight.

Claude favors Kantianism (i.e. focusing on absolute moral rules), while ChatGPT leans ever-so-slightly utilitarian (prioritizing the greatest good for the greatest number of people). Is one superior to the other? It depends on who you ask.
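The divergence is easy to make concrete. With invented numbers, a rule-checking (roughly Kantian) scorer and a welfare-summing (utilitarian) scorer can rate the same act oppositely, which is exactly why a single "predict the moral judgement" target is underdetermined. Neither toy function below reflects how Claude, ChatGPT, or any real system actually evaluates anything.

```python
# Toy contrast between two ethical scoring rules on one invented scenario.
# Both functions are caricatures for illustration only.

def kantian_verdict(act: dict) -> str:
    # Deontological caricature: violating an absolute rule makes the act wrong.
    return "wrong" if act["violates_rule"] else "permissible"

def utilitarian_verdict(act: dict) -> str:
    # Utilitarian caricature: permissible iff total welfare change is positive.
    return "permissible" if sum(act["welfare_effects"]) > 0 else "wrong"

# Lying to spare someone pain: breaks a rule, yet raises net welfare.
white_lie = {"violates_rule": True, "welfare_effects": [5, -1]}

print(kantian_verdict(white_lie))      # -> "wrong"
print(utilitarian_verdict(white_lie))  # -> "permissible"
```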

An algorithm to predict humans’ moral judgements will have to take all this into account. That’s a very high bar to clear — assuming such an algorithm is possible in the first place.
