EnterpriseAI, September 25, 2024
Is AI an Existential Threat to Humanity’s Future?

The development of artificial intelligence has raised concerns about a potential existential threat. Some argue for strict global regulation, while new research suggests AI does not pose an existential threat to humanity, though it also warns that the technology is not without risk.

🎯 The existential-threat view holds that AI could become dangerous if it reaches superintelligence or surpasses human cognitive abilities and makes autonomous decisions without human oversight.

📄 A US State Department report, based on interviews with more than 200 people, found that the most advanced AI systems could pose an extinction-level threat to humans in a worst-case scenario.

🔍 Research from the University of Bath and the Technical University of Darmstadt challenges the existential-threat narrative, finding that while AI models can follow instructions and are proficient in language, they cannot master new skills without explicit instruction, and therefore remain predictable, controllable, and safe.

🧪 The study tested the "emergent abilities" of LLMs, including contextual understanding, complex reasoning, and interactive learning, to assess where their capabilities truly come from, and found that those capabilities stem mainly from in-context learning rather than genuine learning.

The rise of artificial intelligence (AI) systems has sparked significant concern about the implications of the technology, particularly the potential for an existential threat posed by advanced AI. 

According to the Center for AI Safety (CAIS), addressing the potential existential threat posed by AI must be considered a global priority, on par with other significant societal risks like pandemics and nuclear conflict. This perspective reflects a prevalent concern within the AI industry that such risks may start to emerge unless AI technology is strictly regulated on a global scale. 

Several factors have contributed to the belief that AI poses an existential threat, including the hypothetical scenario of AI systems achieving superintelligence and exceeding human cognitive abilities. As AI systems become more autonomous, there are also concerns about their ability to make decisions without human oversight. 

A report commissioned by the US State Department found that the most advanced AI systems could pose an extinction-level threat to humans in a worst-case scenario. The findings of this report were based on interviews conducted with over 200 individuals, including senior executives from leading AI firms and cybersecurity researchers. 

While these concerns are certainly valid, a groundbreaking study from researchers at the University of Bath and the Technical University of Darmstadt challenges the narrative, revealing that AI does not pose an existential threat to humanity.

The study was published as part of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), the leading international conference focused on natural language processing. 

The findings of the research highlight that while ChatGPT and other large language models (LLMs) demonstrate capabilities to follow instructions and show proficiency in language, they lack the ability to master new skills without explicit instructions. As a result, they remain fundamentally predictable, controllable, and safe. 

As these AI models continue to evolve, they are likely to become more sophisticated in their ability to follow detailed prompts; however, they are unlikely to acquire the complex reasoning skills necessary for autonomous decision-making, according to this study. 

“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies and also diverts attention from the genuine issues that require our focus,” said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the new study.

The research was primarily based on testing the LLMs on their “emergent abilities”, such as contextual understanding, complex reasoning, and interactive learning. 

The collaborative research team, led by Professor Iryna Gurevych, conducted several experiments to test the emergent abilities of LLMs. This included testing them on tasks they had never encountered before. 

Assessing emergent abilities is challenging as LLMs possess a remarkable ability to use their in-context learning (ICL) to adapt to new situations and provide relevant outputs based on the context they have been given. 
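To make that distinction concrete, the sketch below contrasts a zero-shot prompt with a few-shot prompt for the same toy sentiment task. The task, exemplars, and `build_prompt` helper are illustrative assumptions, not drawn from the study; the point is that under ICL the "learning" lives entirely in the examples packed into the prompt, not in any change to the model's weights.

```python
# Minimal illustration of in-context learning (ICL). A model can be shown
# either a bare instruction (zero-shot) or the instruction plus worked
# examples (few-shot); in the few-shot case, all adaptation comes from the
# prompt text itself. Task and exemplars here are invented for illustration.

FEW_SHOT_EXEMPLARS = [
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("It broke after a week and support never replied.", "negative"),
]

def build_prompt(review: str, exemplars=None) -> str:
    """Assemble a sentiment prompt; exemplars enable adaptation via ICL alone."""
    parts = ["Classify the sentiment of the review as positive or negative.\n"]
    for text, label in exemplars or []:
        parts.append(f"Review: {text}\nSentiment: {label}\n")
    parts.append(f"Review: {review}\nSentiment:")
    return "\n".join(parts)

if __name__ == "__main__":
    review = "The hinge feels flimsy, but the keyboard is excellent."
    print("--- zero-shot ---\n" + build_prompt(review))
    print("\n--- few-shot (ICL) ---\n" + build_prompt(review, FEW_SHOT_EXEMPLARS))
```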

One of the key goals of the study was to determine which abilities genuinely emerge without ICL and whether functional linguistic abilities in instruction-tuned models originate from ICL rather than being intrinsic.

The study evaluated 20 models across 22 tasks using two settings and multiple metrics, including bias tests and manual analysis. Four model families (GPT, T5, Falcon2, and LLaMA) were chosen based on their known abilities and performance. 
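The article does not reproduce the study's evaluation pipeline, but the comparison it describes can be pictured as a loop over models and tasks in both settings. Below is a minimal sketch under that assumption; `run_task` is a hypothetical stand-in that returns stub numbers so the loop runs end to end, and the model and task names are placeholders, not the paper's actual list.

```python
# Sketch of a zero-shot vs. few-shot comparison across model families.
# run_task() is a hypothetical scorer standing in for the study's real
# pipeline; it returns deterministic stub accuracies, not real results.
import random

MODELS = ["gpt-variant", "t5-variant", "falcon2-variant", "llama-variant"]
TASKS = ["novel-task-a", "novel-task-b"]

def run_task(model_name: str, task_name: str, n_exemplars: int) -> float:
    """Stub scorer: returns a placeholder accuracy for one model/task/setting."""
    rng = random.Random(f"{model_name}/{task_name}/{n_exemplars}")
    return round(rng.uniform(0.2, 0.9), 2)

for model in MODELS:
    for task in TASKS:
        zero_shot = run_task(model, task, n_exemplars=0)  # no exemplars in prompt
        few_shot = run_task(model, task, n_exemplars=4)   # exemplars supplied via ICL
        # If few-shot scores rise while zero-shot stays near baseline, the
        # gain is attributable to ICL rather than to an emergent skill.
        print(f"{model:>16} | {task} | zero-shot {zero_shot:.2f} | few-shot {few_shot:.2f}")
```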

The findings revealed that the LLMs' abilities mainly come from ICL, drawing on examples provided in the prompt, rather than from genuinely "learning" new information. This highlights the crucial difference between following instructions and possessing the reasoning abilities required to become completely autonomous or achieve superintelligence. The researchers concluded that LLMs lacked emergent complex reasoning abilities. 

While the study claims to have put the existential threat fears to rest, the researchers warn that the findings do not mean AI poses no threat at all. Instead, the research demonstrates that the "claims about the emergence of complex thinking skills related to specific dangers lack evidence". According to the researchers, further experiments need to be conducted to understand the other risks posed by AI models.
