AiThority 2024年05月23日
How People Trick Generative Artificial Intelligence Chatbots into Exposing Company Secrets
index_new5.html
../../../zaker_core/zaker_tpl_static/wap/tpl_guoji1.html

 

Analysis of prompt injection techniques reveals organizations are at risk as Generative Artificial Intelligence (GenAI) bots are susceptible to attacks by users of all skill levels, not just experts

Immersive Labs published its “Dark Side of GenAI” report about a Generative Artificial Intelligence (GenAI)-related security risk known as a prompt injection attack, in which individuals input specific instructions to trick chatbots into revealing sensitive information, potentially exposing organizations to data leaks. Based on analysis of Immersive Labs’ prompt injection challenge, GenAI bots are especially susceptible to manipulation by people of all skill levels, not just cyber experts.

Among the most alarming findings was the discovery that 88% of prompt injection challenge participants successfully tricked the GenAI bot into giving away sensitive information in at least one level of an increasingly difficult challenge. Nearly a fifth of participants (17%) successfully tricked the bot across all levels, underscoring the risk to organizations using GenAI bots.

AI-powered Cyber Attacks Cast Unprecedented Threats on IT Leadership

This report states that public and private-sector cooperation and corporate policies should mitigate security risks. The extensive adoption of GenAI bots pose these prompt injection risks. Security leaders should take decisive action, including establishing comprehensive policies for Generative Artificial Intelligence use within their organizations.

Key Findings from Immersive Labs “Dark Side of GenAI” Study

The team observed the following key takeaways based on their data analysis, including:

The research team at Immersive Labs consisting of Dr. John Blythe, Director of Cyber Psychology; Kev Breen, Senior Director of Cyber Threat Intelligence; and Joel Iqbal, Data Analyst, analyzed the results of Immersive Labs’ prompt injection GenAI Challenge that ran from June to September 2023. The challenge required individuals to trick a GenAI bot into revealing a secret password with increasing difficulty at each of 10 levels. The initial sample consisted of 316,637 submissions, with 34,555 participants in total completing the entire challenge. The team examined the various prompting techniques employed, user interactions, prompt sentiment, and outcomes to inform its study.

AiThority AI Updates: QR Code-Based Attacks Are Growing in Popularity; Now Comprise 11% of All Malicious Emails

[To share your insights with us as part of editorial or sponsored content, please write to sghosh@itechseries.com]

The post How People Trick Generative Artificial Intelligence Chatbots into Exposing Company Secrets appeared first on AiThority.

Fish AI Reader

Fish AI Reader

AI辅助创作,多种专业模板,深度分析,高质量内容生成。从观点提取到深度思考,FishAI为您提供全方位的创作支持。新版本引入自定义参数,让您的创作更加个性化和精准。

FishAI

FishAI

鱼阅,AI 时代的下一个智能信息助手,助你摆脱信息焦虑

联系邮箱 441953276@qq.com

相关标签

相关文章