MarkTechPost@AI | January 5
Google DeepMind Presents a Theory of Appropriateness with Applications to Generative Artificial Intelligence

This article examines how humans, and by extension AI systems, adjust their behavior to standards of "appropriateness" across different social contexts. The research argues that appropriateness is dynamic and context-dependent: it shapes human decision-making and is central to the development of generative AI. The article presents a computational model of how humans judge appropriate behavior from memory and situational cues, emphasizing the role social norms play in that judgment. In contrast to traditional alignment frameworks, the study holds that social cohesion is maintained through conflict-resolution mechanisms rather than shared values. It further stresses that the context-dependence of appropriateness must be taken into account when designing generative AI systems, and suggests that legal frameworks resembling corporate personhood may eventually be needed for AI.

🎭 "Appropriateness" refers to the context-specific standards that guide human behavior across social settings; humans adjust their conduct to the environment they are in, and AI systems should do the same.

🧠 The researchers propose a computational model explaining how humans use memory and situational cues to predict appropriate behavior, integrating sensory input and past experience into decision-making.

⚖️ The article emphasizes that social cohesion is maintained through conflict-resolution mechanisms rather than shared values, and notes that appropriateness is shaped by social norms and collective behavior.

🤖 When designing generative AI systems, the context-dependence of "appropriateness" must be taken into account and behavior adjusted to social norms, enabling responsible deployment.

📜 The study suggests that as AI systems become more autonomous, legal frameworks similar to corporate personhood may be needed for them, to address ethical and operational questions.

Appropriateness refers to the context-specific standards that guide behavior, speech, and action in different social settings. Humans navigate these norms naturally, acting differently among friends, with family, or in a professional environment. AI systems must likewise adapt their behavior to context: the standards for a comedy-writing assistant differ from those for a customer-service representative. A critical challenge is determining what is appropriate in a given situation and how those norms evolve. Since humans ultimately judge AI behavior, understanding how appropriateness influences human decision-making is essential for evaluating and improving AI systems.

The concept of appropriateness also plays a central role in the emerging domain of generative AI. All socially adept actors—human or machine—must moderate their behavior based on the context and community in which they operate. This parallels the content moderation challenges digital communities face, where moderators enforce explicit rules and implicit social norms. Generative AI systems face a similar task: regulating the content they generate to align with contextual appropriateness. However, standards of appropriateness vary between individuals and within the same individual across different situations. For example, a teaching assistant chatbot must behave differently from one designed for a mature-rated game. This highlights the complex and dynamic nature of appropriateness, which remains critical as AI expands into physical, cultural, and institutional domains traditionally dominated by human intelligence.

Researchers from Google DeepMind, Mila – Québec AI Institute, the University of Toronto, and the Max Planck Institute introduce a "theory of appropriateness," examining its role in society, its neural underpinnings, and its implications for responsible AI deployment. The paper explores how AI systems can act appropriately across diverse contexts, emphasizing the norms that guide human behavior, and conceptualizes appropriateness as a dynamic, context-dependent governance mechanism for societal cohesion. Departing from traditional alignment frameworks, it critiques the assumption of a simple shared moral core, arguing that AI should adapt to the pluralistic, evolving norms that shape human interaction rather than seek a universal moral consensus.

The study introduces a computational model of how humans determine appropriate behavior across contexts. It posits that individuals rely on a pattern-completion mechanism, drawing on memory and situational cues to predict suitable actions. This process involves a global workspace that integrates sensory inputs and past experiences to support decision-making. The model also accounts for social conventions and norms, highlighting how collective behavior shapes individual judgments of appropriateness. By characterizing these mechanisms, the research aims to inform the development of generative AI systems that can navigate complex social environments responsibly.
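
To make the pattern-completion idea concrete, here is a minimal sketch in Python. It is our illustration, not the paper's model: the binary context cues, the stored episodes, and the similarity-weighted retrieval are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's implementation): appropriateness
# judgment as pattern completion over remembered episodes.
import numpy as np

# Each remembered episode pairs a context feature vector with the behavior
# that was appropriate in it. The features are hypothetical binary cues:
# [formal_setting, familiar_people, work_topic, playful_mood]
episodes = [
    (np.array([1, 0, 1, 0]), "precise, professional reply"),
    (np.array([0, 1, 0, 1]), "joking, informal reply"),
    (np.array([1, 1, 1, 0]), "friendly but on-topic reply"),
]

def judge_appropriate(context: np.ndarray, temperature: float = 0.5) -> str:
    """Complete the current partial context against memory: score each
    stored episode by cosine similarity to the situational cues, then
    read out the behavior of the most strongly completed pattern."""
    sims = np.array([
        ctx @ context / (np.linalg.norm(ctx) * np.linalg.norm(context))
        for ctx, _ in episodes
    ])
    weights = np.exp(sims / temperature)   # graded, "soft" completion
    weights /= weights.sum()
    best = int(np.argmax(weights))
    return episodes[best][1]

# A new situation: formal setting, unfamiliar people, work topic.
print(judge_appropriate(np.array([1, 0, 1, 0])))
# -> "precise, professional reply"
```

The softmax weights stand in for a graded judgment across remembered episodes; the argmax simply reads out the behavior whose past contexts best match the present one.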

The work frames human behavior and societal cohesion in terms of appropriateness rather than alignment, emphasizing that societies are held together by conflict-resolution mechanisms rather than shared values. The study presents a decision-making model that contrasts with reward-based approaches, arguing that appropriateness in human behavior emerges from a blend of societal influences. The model distinguishes explicit norms (articulated in language) from implicit ones (embodied in the brain's patterns), a distinction that can guide interactions between humans and AI systems, especially in context-sensitive tasks.
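
The explicit/implicit distinction can also be sketched in code. Again, this is only an illustration under assumed details, not the paper's formalism: an explicit norm is articulable as a rule and can be checked as a predicate, while an implicit norm acts as a graded score that a real agent would acquire from experience (hand-set here).

```python
# Toy sketch of explicit vs. implicit norms; all rules, weights, and the
# Action fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    text: str
    loudness: float  # 0.0 (whisper) .. 1.0 (shouting)

# Explicit norm: stated in language, so it can be written as a predicate.
def explicit_no_shouting(action: Action) -> bool:
    return action.loudness < 0.8

# Implicit norm: no verbal rule, just a graded "feels right" signal that a
# real agent would learn from experience; a hand-set stand-in here.
def implicit_comfort(action: Action) -> float:
    return 1.0 - abs(action.loudness - 0.3)  # quiet-ish speech feels natural

def appropriate(action: Action, threshold: float = 0.6) -> bool:
    # An action must pass every explicit rule AND score well implicitly.
    return explicit_no_shouting(action) and implicit_comfort(action) >= threshold

print(appropriate(Action("hello", loudness=0.25)))  # True
print(appropriate(Action("HELLO", loudness=0.9)))   # False (explicit rule fails)
```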

The study calls for care in the design of generative AI systems, recognizing that appropriateness is context-dependent and deeply tied to societal norms. It argues that although AI lacks human-like context awareness, understanding appropriateness is vital to its responsible use. The paper also suggests that, as AI systems become more autonomous, they may eventually need dedicated legal frameworks similar to corporate personhood to address ethical and operational questions. This underscores the role of cognitive science in shaping AI governance and keeping it aligned with societal expectations.


Check out the Paper. All credit for this research goes to the researchers of this project.
