Communications of the ACM - Artificial Intelligence, May 30, 04:07
The Rise of the AI-Enabled Agentic Internet

This article examines the rise of agentic AI and its implications for society. Agentic AI refers to AI systems that autonomously make decisions on a user's behalf, with broad applications in both business and consumer domains, such as better targeting cancer research, handling software development, and shopping. As AI agents become more common, however, concern about their potential risks is growing. The article stresses that the development of agentic AI must address value alignment, bias, and hallucination, and proposes safeguards such as ethical guidelines, "kill switches," greater transparency, and external audits to keep AI agents aligned with human interests.

🤖 The agentic Internet consists of AI-driven autonomous agents that make at least one decision on a user's behalf. It has broad applications in commerce and information: AI agents can help consumers with research, shopping, curation of information, and routine tasks such as paying bills.

⚠️ As AI agents become more common, concern about their potential risks is growing. If an AI is trained on the wrong data, the agent's answers will be based on what it has learned; even when the training and data are provided in good faith, the results can still be flawed due to bias.

⚖️ The simplest fix for out-of-control AI agents is imbuing them with ethical guidelines and common-sense rules before they are released into the world. Building such rules into AI agents makes it harder for them to harm users or other systems. Companies should also create complete transparency and, in some cases, build in a "kill switch" to be deployed when problems arise.

🤝 Human-AI collaboration is essential: to avoid catastrophe, actual control of an agent must always stay in human hands. Programming AI agents to tell users how they reach decisions and produce answers can expose biases and hallucinations.

The mythical creatures crowding the American Film Institute’s list of 100 Heroes & Villains are accompanied by a seemingly realistic threat: HAL 9000, an artificial intelligence-powered computer that thinks for itself and is ready to kill.

As AI finds a place in homes, minds, and companies across the world, many are aflutter over agentic AI and the rise of the agentic Internet. Analysts, policy wonks, and academics appear to believe that the agentic Internet will usher in an age of increased productivity and innovation for consumers and organizations.

However, as the protagonists of 2001: A Space Odyssey found out, with every technological advance comes the potential for misuse. As a society on the cusp of a fully functioning agentic Internet, some experts cite a need to put guardrails into place or risk serious harm.

Agency or Obedience: Finding the Right Balance

At its most basic, the agentic Internet is a series of autonomous agents informed by AI that make at least one decision on a user’s behalf. Much of the discussion around the agentic Internet is centered on commerce and information. Consumer use cases include asking AI agents to take over research and shopping, curation of information, and doing things such as paying bills and other mundane tasks.
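That "at least one decision on a user's behalf" can be made concrete with a toy sketch. The catalog, prices, and scoring rule below are invented for illustration; a real shopping agent would be far more elaborate, but the shape is the same: the agent applies a policy and commits to a choice the user never sees being made.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    price: float
    rating: float

def choose(options, max_price):
    """The one decision the agent makes on the user's behalf:
    pick the highest-rated option within budget."""
    affordable = [o for o in options if o.price <= max_price]
    if not affordable:
        return None  # defer back to the human rather than guess
    return max(affordable, key=lambda o: o.rating)

# Hypothetical catalog for illustration.
catalog = [
    Option("basic plan", 9.99, 3.8),
    Option("plus plan", 19.99, 4.5),
    Option("premium plan", 49.99, 4.7),
]
picked = choose(catalog, max_price=25.00)
print(picked.name)  # -> plus plan
```

Note the deliberate design choice: when no option fits, the sketch returns `None` and hands control back to the human instead of relaxing the constraint on its own.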

Business use cases are appealing and numerous. AI agents already are looking for ways to better target cancer research, handle software development, optimize manufacturing, and more. It's really a question of what an AI agent can do, rather than what it can't. But what does it mean for a society that turns over all of the major and minor decisions humans have always made to such agents? That's the most critical question, says Kate O'Neill, founder and CEO of strategic advisory firm KO Insights and author of What Matters Next.

“The easier technology makes things, the harder we must think about its effects,” said O’Neill. “When AI agents handle our routine choices, they don’t just save time; they shape our behaviors, preferences, and ultimately, our autonomy. The real question isn’t whether AI can make good decisions, but whether those decisions align with human flourishing.”

That's a difficult question, especially when the AI we have today often makes mistakes or hallucinates, as well as telling humans exactly what they want to hear: the perfect little electronic ego boost. In addition, AI is trained using data. If the wrong people do the training, or the wrong data is used, the agent's answers will be based on what it has been taught. And even if the training and data are provided with good intent, the results can be flawed due to bias.

These problems become magnified when AI agents interact with each other, according to Merve Hickok, a lecturer in the School of Information at the University of Michigan and founder of AIethicist.org. An expert on AI policy, ethics, and governance, Hickok said that while one biased or dangerous agent can be worked around, when agents interact, the impact grows exponentially.

“Since we do not have a solution for value alignment or hallucinations, we should be worried about agentic AI systems based on language models,” Hickok said. “The same ethical concerns apply, with the additional element of more complexity. An individual agentic AI might contain bias or errors. Interconnected agentic AI might snowball and complicate the issues. Or individually acceptable systems might have risks when they operate together.”
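Hickok's snowballing concern can be illustrated with a toy reliability calculation. Under the simplifying assumptions that each agent in a pipeline is independently correct with some fixed probability and that any single error corrupts the final output, individually acceptable agents compound into an unacceptable system:

```python
def chain_reliability(per_agent_accuracy: float, n_agents: int) -> float:
    """Probability that a pipeline of n agents produces a correct result,
    assuming independent errors and that one error spoils the output."""
    return per_agent_accuracy ** n_agents

# A single agent that is right 95% of the time looks acceptable...
print(round(chain_reliability(0.95, 1), 3))   # -> 0.95
# ...but ten such agents chained together are wrong about 40% of the time.
print(round(chain_reliability(0.95, 10), 3))  # -> 0.599
```

Real interconnected agents are not independent, which is exactly Hickok's point: correlated biases can make the combined behavior worse than this naive model suggests.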

The Barn Is Already Open

The easiest fix for out-of-control AI agents is imbuing them with ethical pathways and common-sense rules before they are turned loose on the world. Having such rules built in could make it more difficult for AI agents to do harm to users or other systems. Yet some experts say we're already past the point of no return, and creating or remaking AI agents with ethics is going to be nearly impossible. The main hurdle is the fact that there's no single point of development and usage, said Faisal Hoque, founder of several companies and author of Transcend: Unlocking Humanity In The Age Of AI. Said Hoque, "Who's responsible? It's really a multi-tiered effort. …The first tier is the platform vendors who are building these agents."

The second and third tiers, Hoque said, are governments and the developers and users who are working with them. Getting all of these entities to agree on the same goals, much less to put limits on AI agents, is unlikely to happen. What corporate developers and professional users can do, however, is create complete transparency and, in some cases, build in a 'kill switch' to be deployed in case of a problem, Hoque said.

“You cannot have a kill switch for all AI because that’s the entire Internet, or entire connected network, but you can have a kill switch for a particular application in a particular setup.”
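One hypothetical form such a per-application kill switch could take is a shared flag the agent checks before every action, so operators can halt one deployment without touching the wider network. Everything here (the class, the task names) is an illustrative sketch, not a description of any vendor's mechanism:

```python
import threading

class KillSwitch:
    """Per-application kill switch: tripping it halts one agent
    deployment, not the entire connected network."""
    def __init__(self):
        self._stopped = threading.Event()

    def trip(self):
        self._stopped.set()

    def active(self) -> bool:
        return not self._stopped.is_set()

def run_agent(switch: KillSwitch, tasks):
    """Process tasks, but check the switch before every action."""
    completed = []
    for task in tasks:
        if not switch.active():
            break  # stop before taking any further action
        completed.append(f"done: {task}")
    return completed

switch = KillSwitch()
print(run_agent(switch, ["pay bill", "book dentist"]))  # both complete
switch.trip()
print(run_agent(switch, ["transfer funds"]))  # -> [] (halted)
```

The check sits inside the agent's own loop, which is the crux of Hoque's point: the switch is scoped to "a particular application in a particular setup," not to AI as a whole.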

Power to the People

It's important to note that, in the case of a kill switch, actual control of the agent stays in human hands, something that must continue in order to avoid catastrophe, explains Ece Kamar, distinguished scientist and managing director of Microsoft's AI Frontiers Lab. Kamar sees the potential benefits of the agentic Internet, saying it provides an opportunity for co-evolution with models to create real value for the people using agentic systems, but with some caveats.

“[Agentic AI] allows better understanding of user needs, taking actions to get things done, and being able to interact with the environment. The innovations on reasoning and model capabilities are contributing to a new technology stack towards creating reliable and capable agents,” Kamar said. “With this higher value that agents can foster, there is a new set of risks we should be aware of, which prompts new research questions around mitigating those risks and how to enable effective human-agent collaboration that puts people in control.”

This should be coupled with auditing from an outside source, whether that's an industry group, government entity, or third-party vendor, that will keep track of how AI agents are built, tested, executed, and monitored.
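One way an outside auditor could keep track of agent execution, sketched here as an assumption rather than any standard practice, is a hash-chained, append-only log: each record of an agent action incorporates the hash of the record before it, so later tampering is detectable by anyone holding the chain.

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained log of agent actions that an external
    auditor could verify (illustrative sketch, not a standard)."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev_hash = self.GENESIS

    def record(self, event: dict):
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "hash": digest})
        self._prev_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks every later hash."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.record({"agent": "shopper-1", "action": "compared prices"})
log.record({"agent": "shopper-1", "action": "placed order"})
print(log.verify())  # -> True
```

The point of the chaining is that the developer cannot quietly rewrite history after the fact, which is what distinguishes external auditing from self-reporting.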

Building in transparency can also happen by programming the AI agent to tell users how it is making decisions and providing answers, says Eelco Herder, an associate professor in the Interaction Group at Utrecht University who is also chair of the ACM Special Interest Group on Hypertext and the Web (SIGWEB).

“There is a strong research area focused on transparent and fair recommender systems that use explanations as a main mechanism,” Herder said. For instance, if you asked an AI agent to find you a dentist, the agent would show you how it came upon that decision. This could expose some biases and hallucinations.
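Herder's dentist example can be sketched as an agent that returns its explanation alongside its pick. The scoring rule and the sample data below are invented for illustration; the point is only that surfacing the scoring makes a skewed weight, or a fabricated listing, visible to the user.

```python
def recommend_dentist(dentists):
    """Rank by rating with a distance penalty, returning both the pick
    and a human-readable explanation of how it was chosen."""
    def score(d):
        # Hypothetical rule: each km of distance costs 0.2 rating points.
        return d["rating"] - 0.2 * d["distance_km"]

    best = max(dentists, key=score)
    explanation = (
        f"Chose {best['name']}: rating {best['rating']}, "
        f"{best['distance_km']} km away, score {score(best):.2f} "
        "(rating minus 0.2 per km)."
    )
    return best, explanation

# Invented sample data.
dentists = [
    {"name": "Dr. Ames", "rating": 4.8, "distance_km": 12.0},
    {"name": "Dr. Bell", "rating": 4.5, "distance_km": 1.5},
]
pick, why = recommend_dentist(dentists)
print(pick["name"])  # -> Dr. Bell
print(why)
```

A user reading the explanation can immediately question the trade-off (is 0.2 per km right for them?), which is exactly the kind of scrutiny an unexplained recommendation forecloses.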

Finally, organizations and users must start putting pressure on developers so that they don’t forget what’s at stake. Liz Miller, vice president and principal analyst at Constellation Research, said this is imperative.

“As an enterprise, we have to ask harder questions when we are actually considering and bringing these tools in. What are the governance models? What are the training guidelines, what are the training limitations, organizations who are fine-tuning these large models, or organizations who are building these large models? It is our responsibility to ask them the hard questions, to demand to know what those policies and what those details are.”

Added Hickok, “I do not think we yet know the full extent of safeguards which may be necessary, or the effectiveness of the current methods we have.”

K.J. Bannan is a writer and editor based in Massapequa, NY, USA. She began her career on the PM Magazine First Looks team reviewing all the latest and greatest technologies. Today, she is a freelancer who covers business, technology, health, personal finance, and lifestyle topics.
