Fortune | October 19, 2024
Google DeepMind director calls for clarity and consistency in AI regulations

After California's governor vetoed AI safety legislation, a Google DeepMind executive called for consensus on what constitutes safe, responsible, and human-centric artificial intelligence. Several industry figures discussed the issue at a summit, arguing that if models from companies like OpenAI are as powerful as claimed, there should be a legal obligation to develop them safely, and pointing to provisions such as the California bill's "kill switch." DeepMind's Terwilliger said regulators should be helped to understand the distinctions between different layers of the AI stack, and that building AI responsibly is critical to the technology's long-term adoption.

🌐 Terra Terwilliger, director of strategic initiatives at Google DeepMind, said at the summit that she hopes the AI field can reach consistency so that the technology's benefits can be fully realized. She argued that regulators should understand the distinctions between different layers of the AI stack: foundation models carry different responsibilities than the applications built on them.

📄 California's SB-1047 drew wide discussion; the bill would have required developers of the largest AI models to meet certain safety testing and risk mitigation requirements. Aidan Madigan-Curtis, a general partner at Eclipse Ventures, argued that if models from companies like OpenAI really are as powerful as claimed, there should be a legal obligation to develop them safely.

🔌 Madigan-Curtis pointed to the now-dead California bill's "kill switch" provision, which would have required companies to create a way to shut a model down if it were being used for something catastrophic, such as building weapons of mass destruction.

💪 DeepMind's Terwilliger argued that even with regulatory requirements in flux, building AI responsibly is key to the technology's long-term adoption. That applies at every level, from making sure data is clean to setting up guardrails for the model.

In the wake of California’s governor vetoing what would have been sweeping AI safety legislation, a Google DeepMind executive is calling for consensus on what constitutes safe, responsible, and human-centric artificial intelligence.

“That’s my hope for the field, is that we can get to consistency, so that we can see all of the benefits of this technology,” said Terra Terwilliger, director of strategic initiatives at Google DeepMind, the company’s AI research unit. She spoke at Fortune’s Most Powerful Women Summit on Wednesday along with January AI CEO and cofounder Noosheen Hashemi, Eclipse Ventures general partner Aidan Madigan-Curtis, and Dipti Gulati, CEO for audit and assurance at Deloitte & Touche LLP US.

The women addressed SB-1047, the much-discussed California bill that would have required developers of the largest AI models to meet certain safety testing and risk mitigation requirements. Madigan-Curtis suggested that if companies like OpenAI are building models that really are as powerful as they say they are, there should be some legal obligations to develop safely.

“That is kind of how our system works, right? It’s the push and the pull,” Madigan-Curtis said. “The thing that makes being a doctor scary is that you can get sued for medical malpractice.”

She noted the now-dead California bill’s “kill-switch” provision, which would have required companies to create a way to turn their model off if it was somehow being used for something catastrophic, like to build weapons of mass destruction.

“If your model is being used to terrorize a certain population, shouldn’t we be able to turn it off, or, you know, prevent the use?” she asked.

DeepMind’s Terwilliger wants to see regulation that accounts for different levels of the AI stack. She said foundational models have different responsibilities from the applications that use them.

“It’s really important that we all lean into helping regulators understand these distinctions so that we have regulation that will be stable and will make sense,” she said.

But the push to build responsibly shouldn’t have to come from the government, Terwilliger said. Even with regulatory requirements in flux, building AI responsibly will be key to long-term adoption of the technology, she added. That applies to every level of the technology, from making sure data is clean, to setting up guardrails for the model.

“I think we have to believe that responsibility is a competitive advantage, and so understanding how to be responsible at all levels of that stack is going to make a difference,” she said.


Related tags

AI safety · California bill · regulatory consensus · responsible AI development