Communications of the ACM - Artificial Intelligence
The Real, Significant Threat of Shadow AI

 

The article examines the challenges enterprises face when employees use unauthorized AI tools ("shadow AI") and how to respond. Research shows that many enterprise employees use AI tools that have not been approved by the IT department, creating risks such as data leakage and compliance violations. The article stresses the importance of establishing policies, providing secure tools, training employees, and strengthening data governance. Experts advise enterprises to take proactive measures, such as creating safe sandboxes and clear usage guidelines, to balance innovation against risk and ensure AI tools are used securely.

💡 Employees make wide use of unauthorized AI tools: Surveys show that enterprise employees rely heavily on AI tools not approved by the IT department; many access public AI assistants such as ChatGPT through personal accounts and even enter sensitive information into them, giving rise to the "shadow AI" phenomenon.

🛡️ The risks shadow AI brings: Unauthorized use of AI tools can lead to data leakage, compliance violations, and biased decision-making; these risks are especially pronounced in highly regulated industries such as banking and financial services.

🤔 How enterprises should respond: Experts advise enterprises to define clear AI usage policies and processes, provide secure internal AI tools, strengthen data governance and security measures such as data loss prevention (DLP) and privacy-enhancing technologies, and train employees, balancing innovation against risk.

🚧 Building safeguards: To counter the risks of shadow AI, enterprises need safeguards that include a well-documented inventory of AI tools, a data governance strategy, and safe sandboxes with usage guidelines, so innovation can proceed while sensitive data stays protected.

🚀 Balancing innovation and risk: Enterprises should not ban AI tools outright; instead, they should build a risk-management framework with guardrails that ensures AI is used safely and compliantly, striking a balance between innovation and risk.

Just as Shadow IT—the practice of employees using applications at work that are not sanctioned by IT departments—has long been the bane of tech leaders’ existence, now organizations are finding themselves grappling with the use of unauthorized AI tools.

This is causing consternation as use of generative AI in the workplace becomes more commonplace. A recent study by cybersecurity firm Prompt Security found that organizations use an average of 67 AI tools, but 90% of them operate without official approval from IT.

Further, 68% of enterprise employees said they access publicly available GenAI assistants such as ChatGPT, Microsoft Copilot, or Google Gemini through personal accounts, and 57% have admitted entering sensitive information into them, according to a recent survey by Telus Digital.

Software developers are also culprits. A Capgemini report found that while 46% of software developers are already using generative AI, 63% of those use GenAI unofficially. This so-called shadow AI phenomenon presents a growing risk for organizations, especially in highly regulated industries such as banking and financial services, where security, compliance, and data integrity are critical.

“Put yourself in the shoes of a software developer: you have a question and your organization hasn’t put in place the tools to answer that question. Instead, you use your phone to browse an unapproved GenAI website and get your question answered promptly,’’ said Doug Ross, CTO and GenAI lead at Capgemini, adding that shadow AI is “real and significant.”

Organizations and financial institutions are increasing their use of AI to improve fraud detection, customer experience, and operational efficiency. Yet, the proliferation of unsanctioned AI tools exposes them to threats like data leakage, compliance violations, and unintentional biases in decision-making. Without proper oversight, AI-focused automation could become a security liability rather than an asset.

Employees are likely doing this because they aren’t happy with the tools their company offers. In fact, 35% are footing the bill themselves for generative AI tools they use at work, according to a recent Writer survey.

That raises the question of why employers aren’t providing tools with sufficient capabilities. Ross points to at least three reasons: the complexity of GenAI and the lack of a full understanding of it; the speed at which new tools and processes appear, often faster than firms can complete their review of any single tool; and the organizational change management required to bring along people with varying AI skill levels.

Kartik Talamadupula, head of AI at platform provider Wand AI, agrees. “Employers are unfortunately not able to keep pace with the high demand for trying out generative AI models, tools, and apps [for] their developers and technical workforce,’’ he said.

Unlike traditional software, “GenAI is often not deterministic in the sense that it is much harder to evaluate all possible paths and outcomes when it comes to the output of GenAI systems,’’ adds Talamadupula, who is also an applied AI officer of the ACM Special Interest Group on AI (SIGAI), and a senior member of the Association for the Advancement of Artificial Intelligence (AAAI). “As a result, the power of GenAI is still concentrated very much at the wide, messy end of the funnel right now; users must access GenAI directly at the firehose.”
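
For illustration only (this sketch is not from the article): with a nonzero sampling temperature, the same prompt can yield a different completion on every run, which is why there is no fixed set of paths to evaluate. The vocabulary and scores below are invented for the example.

```python
# Toy illustration of GenAI non-determinism: temperature sampling over
# made-up next-token scores. Real models sample over tens of thousands
# of tokens at every step of a completion.
import math
import random

VOCAB = ["approve", "escalate", "reject"]   # hypothetical next tokens
LOGITS = [2.0, 1.5, 0.5]                    # hypothetical model scores

def sample(temperature: float) -> str:
    """Draw one token from the softmax of LOGITS at the given temperature."""
    weights = [math.exp(score / temperature) for score in LOGITS]
    return random.choices(VOCAB, weights=weights)[0]

# Five "identical" requests can disagree, unlike a deterministic function.
print([sample(temperature=1.0) for _ in range(5)])
```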

Wells Fargo, for one, is tackling shadow AI head on. While he believes AI should not be blocked outright, since people will find ways around such bans, David Kuo, an executive director at Wells Fargo, said it is incumbent upon organizations to create a set of policies and procedures governing its use.

“Create guardrails to safeguard use of AI,’’ said Kuo, who is also a member of the ISACA Emerging Trends Working Group. “Encourage safe practices and educate—people, processes, and technology. You begin with awareness.”

Then, leaders should define a list of AI tools and technologies the organization would like to use and have that well-documented, he advises. It’s also critical to have a proper data governance strategy that spells out what data can be used for various AI use cases.
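
As a concrete illustration of that advice, a documented tool inventory and a data governance policy can be reduced to a simple allow-or-deny check. This is a minimal sketch; the tool names and data classifications are hypothetical, and a real inventory would live in a governance system rather than in source code.

```python
# Hypothetical AI-tool inventory: each approved tool is mapped to the most
# sensitive data classification it is cleared to receive.
APPROVED_TOOLS = {
    "internal-copilot": "confidential",
    "public-chat-assistant": "public",
}

# Data classifications ordered from least to most sensitive.
SENSITIVITY = ["public", "internal", "confidential", "restricted"]

def is_use_permitted(tool: str, data_class: str) -> bool:
    """Allow a request only if the tool is inventoried and cleared for data
    at least as sensitive as what the user wants to send it."""
    if tool not in APPROVED_TOOLS:
        return False  # not in the inventory: shadow AI by definition
    ceiling = APPROVED_TOOLS[tool]
    return SENSITIVITY.index(data_class) <= SENSITIVITY.index(ceiling)

# Sending confidential data to a public assistant is rejected;
# internal data may go to the vetted internal tool.
assert not is_use_permitted("public-chat-assistant", "confidential")
assert is_use_permitted("internal-copilot", "internal")
```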

Proper handling of data ultimately will protect organizations from potential AI risk, Kuo said. Organizations should also have data loss prevention (DLP) and privacy enhancing technologies in place.
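
To make the DLP idea concrete, here is a minimal sketch of a redaction pass applied to a prompt before it leaves the network. The two patterns are illustrative stand-ins; real DLP products use far richer detection than a pair of regular expressions.

```python
# Minimal DLP-style redaction sketch: mask likely-sensitive substrings in a
# prompt before it is sent to an external GenAI service.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # crude card-number match
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, card 4111 1111 1111 1111."))
# -> Contact [REDACTED-EMAIL], card [REDACTED-CARD].
```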

Ross echoes that, saying if organizations leverage DLP and other cyber monitoring tools, they will be able to understand what sites are being used and why. “If you see adoption of a new tool, ask why users are attracted to it and what can be done to fulfill that need,’’ he said.
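
One way such monitoring can work, sketched under assumptions: scan web-proxy logs for requests to known public GenAI endpoints. The log layout and the domain list below are illustrative; a real deployment would take both from the proxy vendor and a maintained domain feed.

```python
# Hypothetical proxy-log scan: count requests per known GenAI domain.
from collections import Counter
from urllib.parse import urlparse

GENAI_DOMAINS = {
    "chat.openai.com", "chatgpt.com",
    "gemini.google.com", "copilot.microsoft.com",
}

def genai_usage(log_lines):
    """Tally hits to GenAI domains, assuming the URL is the third
    whitespace-separated field on each log line (an assumed layout)."""
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 3:
            continue
        host = urlparse(fields[2]).hostname
        if host in GENAI_DOMAINS:
            counts[host] += 1
    return counts

sample = [
    "2025-05-01T09:12:03 alice https://chat.openai.com/backend/chat",
    "2025-05-01T09:12:07 bob https://intranet.example.com/wiki",
]
print(genai_usage(sample))  # Counter({'chat.openai.com': 1})
```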

Ross also advises organizations to invest in internal GenAI tools that allow users to safely engage with models to answer questions and create content, and to invest in training.

Training is an area where improvements can be made. The Capgemini report found that 51% of senior executives believe using GenAI in software engineering will require significant investment in upskilling and cross-skilling of the software workforce, and only 39% of organizations have a generative AI upskilling program for software engineering.

Alexy Surkov, U.S. and global model risk management leader at Deloitte, who focuses on the banking industry, said banks may be slightly better prepared to handle unauthorized AI tools “because they’ve lived for years around robust controls and models.”

All organizations should give employees “safe sandboxes and guardrails and clear guidelines that if I play in this box, it’s safe and approved,’’ Surkov said. “That enables them to move faster” and innovate as they protect sensitive data. “Think of some of these guardrails as seatbelts and ABS brakes in cars; it’s something we have to have, but as a result, a car can drive faster than 100 years ago.”

Wells Fargo’s Kuo agrees, saying every organization is different, and leaders need to balance risk with reward. “You can have innovation, but make sure you set up guardrails that meet your risk management appetite. There’s no set formula; understand risk, manage it accordingly,’’ he said. “One thing you shouldn’t do is avoid AI or prevent it. That’s draconian.”

Esther Shein is a freelance technology and business writer based in the Boston area.
