TechCrunch News, December 18, 2024
Google says customers can use its AI in ‘high-risk’ domains, so long as there’s human supervision

Google has updated its Generative AI Prohibited Use Policy to allow customers to use its generative AI tools for “automated decisions” in “high-risk” domains such as healthcare, as long as a human supervises. The previous terms had implied a blanket ban; Google now says the tools could always be used for high-risk applications under human supervision. The article also covers rival companies’ rules and the regulatory landscape.

💡 Google allows its generative AI to make automated decisions in high-risk domains under human supervision

🚫 OpenAI prohibits its services from being used for certain high-risk automated decisions

🎯 Anthropic permits its AI to be used for automated decisions in high-risk domains under specific supervision

👀 Regulators are scrutinizing AI that makes automated decisions affecting individuals

Google has changed its terms to clarify that customers can deploy its generative AI tools to make “automated decisions” in “high-risk” domains, like healthcare, so long as there’s a human in the loop.

According to the company’s updated Generative AI Prohibited Use Policy, published on Tuesday, customers may use Google’s generative AI to make “automated decisions” that could have a “material detrimental impact on individual rights.” Provided that a human supervises in some capacity, customers can use Google’s generative AI to make decisions about employment, housing, insurance, social welfare, and other “high-risk” areas.

In the context of AI, automated decisions refer to decisions made by an AI system based on data both factual and inferred. A system might make an automated decision to award a loan, for example, or screen a job candidate.

The previous version of Google’s terms implied a blanket ban on high-risk automated decision-making involving the company’s generative AI. But Google tells TechCrunch that customers could always use its generative AI for automated decision-making, even for high-risk applications, as long as a human was supervising.

“The human supervision requirement was always in our policy, for all high-risk domains,” a Google spokesperson said when reached for comment via email. “[W]e’re recategorizing some items [in our terms] and calling out some examples more explicitly to be clearer for users.”

Google’s top AI rivals, OpenAI and Anthropic, have more stringent rules governing the use of their AI in high-risk automated decision making. For example, OpenAI prohibits the use of its services for automated decisions relating to credit, employment, housing, education, social scoring, and insurance. Anthropic allows its AI to be used in law, insurance, healthcare, and other high-risk areas for automated decision making, but only under the supervision of a “qualified professional” — and it requires customers to disclose they’re using AI for this purpose.

AI that makes automated decisions affecting individuals has attracted scrutiny from regulators, who’ve expressed concerns about the technology’s potential to bias outcomes. Studies show, for example, that AI used to make decisions like the approval of credit and mortgage applications can perpetuate historical discrimination.

The nonprofit group Human Rights Watch has called for a ban on “social scoring” systems in particular, which the org says threaten to disrupt people’s access to social security support, compromise their privacy, and profile them in prejudicial ways.

Under the AI Act in the EU, high-risk AI systems, including those that make individual credit and employment decisions, face the most oversight. Providers of these systems must register in a database, perform quality and risk management, employ human supervisors, and report incidents to the relevant authorities, among other requirements.

In the U.S., Colorado recently passed a law mandating that AI developers disclose information about “high-risk” AI systems, and publish statements summarizing the systems’ capabilities and limitations. New York City, meanwhile, prohibits employers from using automated tools to screen a candidate for employment decisions unless the tool has been subject to a bias audit within the prior year.


Related tags

Google · generative AI · automated decision-making · regulation