Just as Shadow IT—the practice of employees using applications at work that are not sanctioned by IT departments—has long been the bane of tech leaders’ existence, now organizations are finding themselves grappling with the use of unauthorized AI tools.
This is causing consternation as the use of generative AI in the workplace becomes more commonplace. A recent study by cybersecurity firm Prompt Security found that organizations use an average of 67 AI tools, 90% of which operate without official approval from IT.
Further, 68% of enterprise employees said they access publicly available GenAI assistants such as ChatGPT, Microsoft Copilot, or Google Gemini through personal accounts, and 57% have admitted entering sensitive information into them, according to a recent survey by Telus Digital.
Software developers are also culprits. A Capgemini report found that while 46% of software developers are already using generative AI, 63% of those do so unofficially. This so-called shadow AI phenomenon presents a growing risk for organizations, especially in highly regulated industries such as banking and financial services, where security, compliance, and data integrity are critical.
“Put yourself in the shoes of a software developer: you have a question and your organization hasn’t put in place the tools to answer that question. Instead, you use your phone to browse an unapproved GenAI website and get your question answered promptly,” said Doug Ross, CTO and GenAI lead at Capgemini, adding that shadow AI is “real and significant.”
Organizations and financial institutions are increasing their use of AI to improve fraud detection, customer experience, and operational efficiency. Yet, the proliferation of unsanctioned AI tools exposes them to threats like data leakage, compliance violations, and unintentional biases in decision-making. Without proper oversight, AI-focused automation could become a security liability rather than an asset.
Employees are likely turning to these tools because they aren’t happy with the ones their companies offer. In fact, 35% are footing the bill themselves for generative AI tools they use at work, according to a recent Writer survey.
That raises the question of why employers aren’t providing tools with sufficient capabilities. Ross points to at least three reasons: the complexity of GenAI and a lack of full understanding of it; the speed at which new tools and processes appear, often while firms are still reviewing a single tool; and the organizational change management and effort required to bring in people with varying AI skill levels.
Kartik Talamadupula, head of AI at platform provider Wand AI, agrees. “Employers are unfortunately not able to keep pace with the high demand for trying out generative AI models, tools, and apps [for] their developers and technical workforce,” he said.
Unlike traditional software, “GenAI is often not deterministic in the sense that it is much harder to evaluate all possible paths and outcomes when it comes to the output of GenAI systems,” adds Talamadupula, who is also an applied AI officer of the ACM Special Interest Group on AI (SIGAI), and a senior member of the Association for the Advancement of Artificial Intelligence (AAAI). “As a result, the power of GenAI is still concentrated very much at the wide, messy end of the funnel right now; users must access GenAI directly at the firehose.”
Wells Fargo, for one, is tackling shadow AI head-on. While he believes AI should not be blocked outright, since people will find ways around a ban, David Kuo, an executive director at Wells Fargo, said it is incumbent upon organizations to create a set of policies and procedures governing its use.
“Create guardrails to safeguard use of AI,” said Kuo, who is also a member of the ISACA Emerging Trends Working Group. “Encourage safe practices and educate—people, processes, and technology. You begin with awareness.”
Then, leaders should define a well-documented list of AI tools and technologies the organization wants to use, he advises. It’s also critical to have a proper data governance strategy that spells out what data can be used for which AI use cases.
Proper handling of data ultimately will protect organizations from potential AI risk, Kuo said. Organizations should also have data loss prevention (DLP) and privacy-enhancing technologies in place.
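To make that concrete, here is a minimal sketch of the kind of guardrail Kuo describes, assuming a Python environment and three purely illustrative patterns (commercial DLP products use far broader, context-aware detection): outbound prompts are screened, and obvious identifiers are masked before they can reach an external GenAI service.

```python
import re

# Illustrative patterns only; real DLP tooling detects far more than
# these three regexes and uses context, not just pattern matching.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely sensitive spans with placeholders and report what was found."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

clean, hits = redact("Customer 123-45-6789 asked about jane@example.com")
print(clean)  # Customer [REDACTED-SSN] asked about [REDACTED-EMAIL]
print(hits)   # ['ssn', 'email']
```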
Ross echoes that, saying that if organizations leverage DLP and other cyber monitoring tools, they will be able to understand which sites are being used and why. “If you see adoption of a new tool, ask why users are attracted to it and what can be done to fulfill that need,” he said.
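As a hedged illustration of the kind of monitoring Ross describes, the sketch below assumes a simplified proxy log format (timestamp, user, destination host, path) and a hand-picked list of GenAI hosts; both are assumptions for the example, not features of any particular product.

```python
from collections import Counter

# Hypothetical list of hosts associated with public GenAI assistants.
GENAI_HOSTS = {"chatgpt.com", "gemini.google.com", "copilot.microsoft.com"}

def genai_usage(log_lines):
    """Tally requests to known GenAI hosts from proxy log lines of the
    assumed form: '<timestamp> <user> <host> <path>'."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in GENAI_HOSTS:
            hits[parts[2]] += 1
    return hits

sample = [
    "2025-01-15T09:02:11 alice chatgpt.com /c/new",
    "2025-01-15T09:05:40 bob gemini.google.com /app",
    "2025-01-15T09:06:02 alice chatgpt.com /c/new",
]
print(genai_usage(sample))  # Counter({'chatgpt.com': 2, 'gemini.google.com': 1})
```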
Ross also advises organizations to invest in internal GenAI tools that let users safely engage with models to answer questions and create content, as well as in training.
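One way such an internal tool could be wired together is sketched below, reusing the redact() function from the earlier example; call_approved_model() is a hypothetical stand-in for whatever sanctioned endpoint an organization actually runs.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-gateway")

def call_approved_model(prompt: str) -> str:
    # Placeholder for the organization's sanctioned model API; in practice
    # this would call an internally hosted or contracted service.
    return f"[model response to {len(prompt)} chars of input]"

def ask(user: str, prompt: str) -> str:
    """Route a question through redaction and audit logging, then to the approved model."""
    clean, findings = redact(prompt)  # redact() as sketched earlier
    if findings:
        log.warning("user=%s redacted=%s", user, findings)
    log.info("user=%s prompt_chars=%d", user, len(clean))
    return call_approved_model(clean)

print(ask("alice", "Summarize account 123-45-6789's complaint"))
```

The design point is simply that every prompt passes through the same screening and audit path before leaving the user's hands, which gives employees a sanctioned outlet rather than a reason to reach for an unapproved one.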
Training is an area ripe for improvement. The Capgemini report found that 51% of senior executives believe using GenAI in software engineering will require significant investment in upskilling and cross-skilling of the software workforce, yet only 39% of organizations have a generative AI upskilling program for software engineering.
Alexy Surkov, U.S. and global model risk management leader at Deloitte, who focuses on the banking industry, said banks may be slightly better prepared to handle unauthorized AI tools “because they’ve lived for years around robust controls and models.”
All organizations should give employees “safe sandboxes and guardrails and clear guidelines that if I play in this box, it’s safe and approved,” Surkov said. “That enables them to move faster” and innovate as they protect sensitive data. “Think of some of these guardrails as seatbelts and ABS brakes in cars; it’s something we have to have, but as a result, a car can drive faster than 100 years ago.”
Wells Fargo’s Kuo agrees, saying every organization is different, and leaders need to balance risk with reward. “You can have innovation, but make sure you set up guardrails that meet your risk management appetite. There’s no set formula; understand risk, manage it accordingly,” he said. “One thing you shouldn’t do is avoid AI or prevent it. That’s draconian.”
Esther Shein is a freelance technology and business writer based in the Boston area.