The Godfather of AI says there's a key difference between OpenAI and Google when it comes to safety

On a podcast, "Godfather of AI" Geoffrey Hinton analyzed how Google and OpenAI differed in rolling out their chatbots. He believes Google moved more slowly because it was protecting its reputation, while OpenAI, with little to lose, was willing to take risks. Hinton said Google has behaved responsibly on AI safety, while OpenAI's approach is less clear. The article also covers the mistakes Google made after launching Bard and OpenAI's adjustments to its safety measures. Overall, it explores the complex relationship between reputation, risk, and strategy in the AI race.

🤔 Hinton believes Google moved slowly on releasing chatbots mainly because of concerns about its reputation. The company wanted to preserve its strong standing and was unwilling to damage it by releasing an AI product that could have negative consequences.

🚀 OpenAI took a different approach: without a reputation as established as Google's, it could afford to gamble and ship products quickly. Hinton argues this "nothing to lose" posture gave OpenAI an edge in the AI race.

💡 The article also covers the mistakes Google made after launching Bard, and the errors CEO Sundar Pichai acknowledged the company "got wrong." Even a company as large as Google can face challenges and risks as it develops AI.

🛡️ OpenAI has also taken a different approach to safety. A recent blog post indicates the company weighs risk when adjusting its safety requirements and is focusing on cybersecurity, chemical threats, and AI's ability to improve itself.

Geoffrey Hinton spent more than a decade at Google.

When it comes to winning the AI race, the "Godfather of AI" thinks there's an advantage in having nothing to lose.

On an episode of the "Diary of a CEO" podcast that aired June 16, Geoffrey Hinton laid out what he sees as a key difference between how OpenAI and Google, his former employer, dealt with AI safety.

"When they had these big chatbots, they didn't release them, possibly because they were worried about their reputation," Hinton said of Google. "They had a very good reputation, and they didn't want to damage it."

Google released Bard, its AI chatbot, in March of 2023, before later incorporating it into its larger suite of large language models called Gemini. The company was playing catch-up, though, since OpenAI released ChatGPT at the end of 2022.

On the podcast episode, Hinton, who earned his nickname for his pioneering work on neural networks, laid out a key reason OpenAI could move faster: "OpenAI didn't have a reputation, and so they could afford to take the gamble."

Talking at an all-hands meeting shortly after ChatGPT came out, Google's then-head of AI said the company didn't plan to immediately release a chatbot because of "reputational risk," adding that it needed to make choices "more conservatively than a small startup," CNBC reported at the time.

The company's AI boss, Google DeepMind CEO Demis Hassabis, said in February of this year that AI poses potential long-term risks, and that agentic systems could get "out of control." He advocated having a governing body that regulates AI projects.

Gemini has made some high-profile mistakes since its launch, and showed bias in its written responses and image-generating feature. Google CEO Sundar Pichai addressed the controversy in a memo to staff last year, saying the company "got it wrong" and pledging to make changes.

The "Godfather" saw Google's early chatbot decision-making from the inside — he spent more than a decade at the company before quitting to talk more freely about what he describes as the dangers of AI. On Monday's podcast episode, though, Hinton said he didn't face internal pressure to stay silent.

"Google encouraged me to stay and work on AI safety, and said I could do whatever I liked on AI safety," he said. "You kind of censor yourself. If you work for a big company, you don't feel right saying things that will damage the big company."

Overall, Hinton said he thinks Google "actually behaved very responsibly."

Hinton couldn't be as sure about OpenAI, though he has never worked at the company. Asked earlier in the episode whether the company's CEO, Sam Altman, has a "good moral compass," he said, "We'll see." He added that he doesn't know Altman personally, so he didn't want to comment further.

OpenAI has faced criticism in recent months for approaching safety differently than in the past. In a recent blog post, the company said it would only change its safety requirements after making sure it wouldn't "meaningfully increase the overall risk of severe harm." Its focus areas for safety now include cybersecurity, chemical threats, and AI's power to improve independently.

Altman defended OpenAI's approach to safety in an interview at TED2025 in April, saying that the company's preparedness framework outlines "where we think the most important danger moments are." Altman also acknowledged in the interview that OpenAI has loosened some restrictions on its model's behavior based on user feedback about censorship.

The earlier competition between OpenAI and Google to release initial chatbots was fierce, and the AI talent race is only heating up. Documents reviewed by Business Insider revealed that Google relied on ChatGPT in 2023 during its attempts to catch up to OpenAI's chatbot.

Representatives for Google and OpenAI did not respond to BI's request for comment.

Read the original article on Business Insider
