Unite.AI, February 15
Rick Caccia, CEO and Co-Founder of WitnessAI – Interview Series

WitnessAI is a platform focused on enterprise AI security and compliance, built to address the challenges companies face in using AI safely. The company was co-founded by Rick Caccia, who brings extensive leadership experience from Palo Alto Networks, Google, and Symantec. WitnessAI's core philosophy is to balance AI innovation with security, treating AI security as an enabler rather than a restriction. By providing visibility, control, and protection over AI usage, WitnessAI helps enterprises adopt AI safely while staying compliant with regulations and internal policies. What sets it apart is its emphasis on AI governance as well as AI security: like the brakes and steering wheel on a car, it lets a company drive the "Ferrari" of AI quickly but safely.

👁️‍🗨️**AI usage visibility**: WitnessAI provides comprehensive visibility into the thousands of AI applications employees use, including where those apps are located and where they host their data, helping enterprises understand the potential risks.

🛡️**AI policy control**: WitnessAI lets enterprises implement acceptable AI use policies that protect customer data, citizen data, intellectual property, and employee safety. Policies stay consistent across different AI models, applications, clouds, and security products.

🪄**Safe AI enablement**: WitnessAI aims to enable AI safely rather than restrict its use. For example, it can identify and automatically redact sensitive information in prompts, so employees still get useful answers while the data stays protected.

🔒**AI risk mitigation**: WitnessAI helps enterprises address the risks of generative AI deployments, including LLM jailbreaks and prompt injection attacks, backed by a strong AI research team and a system for generating synthetic attack data.

Rick Caccia, CEO and Co-Founder of WitnessAI, has extensive experience in launching security and compliance products. He has held leadership roles in product and marketing at Palo Alto Networks, Google, and Symantec. Caccia previously led product marketing at ArcSight through its IPO and subsequent operations as a public company and served as the first Chief Marketing Officer at Exabeam. He holds multiple degrees from the University of California, Berkeley.

WitnessAI is developing a security platform focused on ensuring the safe and secure use of AI in enterprises. With each major technological shift—such as web, mobile, and cloud computing—new security challenges arise, creating opportunities for new industry leaders to emerge. AI represents the next frontier in this evolution.

The company aims to establish itself as a leader in AI security by combining expertise in machine learning, cybersecurity, and large-scale cloud operations. Its team brings deep experience in AI development, reverse engineering, and multi-cloud Kubernetes deployment, addressing the critical challenges of securing AI-driven technologies.

What inspired you to co-found WitnessAI, and what key challenges in AI governance and security were you aiming to solve? 

When we first started the company, we thought that security teams would be concerned about attacks on their internal AI models. Instead, the first 15 CISOs we spoke with said the opposite: widespread corporate LLM rollout was a long way off, but the urgent problem was protecting their employees’ use of other people’s AI apps. We took a step back and saw that the problem wasn’t fending off scary cyberattacks; it was safely enabling companies to use AI productively. While governance may be less sexy than cyberattacks, it’s what security and privacy teams actually needed. They needed visibility into what their employees were doing with third-party AI, a way to implement acceptable use policies, and a way to protect data without blocking use of that data. So that’s what we built.

Given your extensive experience at Google Cloud, Palo Alto Networks, and other cybersecurity firms, how did those roles influence your approach to building WitnessAI? 

I have spoken with many CISOs over the years. One of the most common things I hear from CISOs today is, “I don’t want to be ‘Doctor No’ when it comes to AI; I want to help our employees use it to be better.” For someone who has worked with cybersecurity vendors for a long time, this is a very different statement to hear. It’s more reminiscent of the dotcom era, back when the Web was a new and transformative technology. When we built WitnessAI, we specifically started with product capabilities that helped customers adopt AI safely; our message was that this stuff is like magic, and of course everyone wants to experience magic. I think that security companies are too quick to play the fear card, and we wanted to be different.

What sets WitnessAI apart from other AI governance and security platforms in the market today? 

Well, for one thing, most other vendors in the space are focused primarily on the security part, and not on the governance part. To me, governance is like the brakes on a car. If you really want to get somewhere quickly, you need effective brakes in addition to a powerful engine. No one is going to drive a Ferrari very fast if it has no brakes. In this case, your company using AI is the Ferrari, and WitnessAI is the brakes and steering wheel.

In contrast, most of our competitors focus on theoretical scary attacks on an organization’s AI model. That is a real problem, but it’s a different problem than getting visibility and control over how my employees are using any of the 5,000+ AI apps already on the internet. It’s a lot easier for us to add an AI firewall (and we have) than it is for the AI firewall vendors to add effective governance and risk management.

How does WitnessAI balance the need for AI innovation with enterprise security and compliance? 

As I wrote earlier, we believe that AI should be like magic – it can help you do amazing things. With that in mind, we think AI innovation and security are linked. If your employees can use AI safely, they will use it often and you will pull ahead. If you apply the typical security mindset and lock it down, your competitor won’t do that, and they will pull ahead. Everything we do is about enabling safe adoption of AI. As one customer told me, “This stuff is magic, but most vendors treat it like it was black magic, scary and something to fear.” At WitnessAI, we’re helping to enable the magic.

Can you talk about the company’s core philosophy regarding AI governance—do you see AI security as an enabler rather than a restriction? 

We regularly have CISOs come up to us at events where we have presented, and they tell us, “Your competitors are all about how scary AI is, and you are the only vendor that is telling us how to actually use it effectively.” Sundar Pichai at Google has said that “AI could be more profound than fire,” and that is an interesting metaphor. Fire can be incredibly damaging, as we have seen recently. But controlled fire can make steel, which accelerates innovation. Sometimes at WitnessAI we talk about creating the innovation that enables our customers to safely direct AI “fire” to create the equivalent of steel. Alternatively, if you think AI is akin to magic, then perhaps our goal is to give you a magic wand, to direct and control it.

In either case, we absolutely believe that safely enabling AI is the goal. Just to give you an example, there are many data loss prevention (DLP) tools; it’s a technology that’s been around forever. People try to apply DLP to AI use, and maybe the DLP browser plug-in sees that you have typed in a long prompt asking for help with your work, and that prompt inadvertently has a customer ID number in it. What happens? The DLP product blocks the prompt from going out, and you never get an answer. That’s restriction. Instead, with WitnessAI, we can identify the same number, silently and surgically redact it on the fly, and then unredact it in the AI response, so that you get a useful answer while also keeping your data safe. That’s enablement.
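For readers who want to picture that flow, here is a minimal Python sketch of a redact-then-unredact wrapper around a model call. It is an illustrative assumption of how such a flow can work in general, not WitnessAI's implementation; the `CUSTOMER_ID_RE` pattern, the placeholder format, and the `call_llm` callback are all hypothetical.

```python
import re

# Hypothetical customer ID format, e.g. "CUST-123456" (illustrative only).
CUSTOMER_ID_RE = re.compile(r"\bCUST-\d{6}\b")

def redact(prompt: str) -> tuple[str, dict]:
    """Replace sensitive tokens with placeholders and remember the mapping."""
    mapping = {}
    def _sub(match):
        placeholder = f"<REDACTED_{len(mapping)}>"
        mapping[placeholder] = match.group(0)
        return placeholder
    return CUSTOMER_ID_RE.sub(_sub, prompt), mapping

def unredact(response: str, mapping: dict) -> str:
    """Restore the original values in the model's response."""
    for placeholder, original in mapping.items():
        response = response.replace(placeholder, original)
    return response

def safe_completion(prompt: str, call_llm) -> str:
    """Send a redacted prompt to the model, then restore the values locally."""
    clean_prompt, mapping = redact(prompt)
    response = call_llm(clean_prompt)  # the third-party app never sees the real ID
    return unredact(response, mapping)
```

The point of the pattern is that the external model only ever sees placeholders, while the employee still reads a response with the real values restored.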

What are the biggest risks enterprises face when deploying generative AI, and how does WitnessAI mitigate them?

The first is visibility. Many people are surprised to learn that the AI application universe isn’t just ChatGPT and now DeepSeek; there are literally thousands of AI apps on the internet, and enterprises absorb risk from employees using these apps. So the first step is getting visibility: which AI apps are my employees using, what are they doing with those apps, and is it risky?

The second is control. Your legal team has constructed a comprehensive acceptable use policy for AI, one that ensures the safety of customer data, citizen data, intellectual property, as well as employee safety. How will you implement this policy? Is it in your endpoint security product? In your firewall? In your VPN? In your cloud? What if they are all from different vendors? So, you need a way to define and enforce acceptable use policy that is consistent across AI models, apps, clouds, and security products.

The third is protection of your own apps. In 2025, we will see much faster adoption of LLMs within enterprises, and then faster rollout of chat apps powered by those LLMs. So, enterprises need to make sure not only that the apps are protected, but also that the apps don’t say “dumb” things, like recommend a competitor.

We address all three. We provide visibility into which apps people are accessing, how they are using those apps, policy that is based on who you are and what you are trying to do, and very effective tools for preventing attacks such as jailbreaks or unwanted behaviors from your bots.

How does WitnessAI’s AI observability feature help companies track employee AI usage and prevent “shadow AI” risks?

WitnessAI connects to your network easily and silently builds a catalog of every AI app (and there are literally thousands of them on the internet) that your employees access. We tell you where those apps are located, where they host their data, and so on, so that you understand how risky these apps are. You can turn on conversation visibility, where we use deep packet inspection to observe prompts and responses. We can classify prompts by risk and by intent. Intent might be “write code” or “write a corporate contract.” That’s important because we then let you write intent-based policy controls.
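As a rough illustration of intent tagging on observed traffic, the sketch below classifies a prompt into an intent label and a coarse risk level. The keyword heuristics, intent categories, and `Observation` record are hypothetical stand-ins; a production system like the one described would presumably use trained classifiers rather than keyword matching.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    app: str     # e.g. "chat.openai.com"
    user: str
    prompt: str
    intent: str
    risk: str

# Toy keyword heuristics standing in for a real intent classifier.
INTENT_KEYWORDS = {
    "write code": ["def ", "function", "compile", "stack trace"],
    "write a corporate contract": ["indemnify", "liability", "agreement"],
}

def classify_intent(prompt: str) -> str:
    lowered = prompt.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return intent
    return "general"

def observe(app: str, user: str, prompt: str) -> Observation:
    intent = classify_intent(prompt)
    # Illustrative rule: contract drafting in a third-party app is higher risk.
    risk = "high" if intent == "write a corporate contract" else "low"
    return Observation(app, user, prompt, intent, risk)
```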

What role does AI policy enforcement play in ensuring corporate AI compliance, and how does WitnessAI streamline this process?

Compliance means ensuring that your company is following regulations or policies, and there are two parts to ensuring compliance. The first is that you must be able to identify problematic activity. For example, I need to know that an employee is using customer data in a way that might run afoul of a data protection law. We do that with our observability platform. The second part is describing and enforcing policy against that activity. You don’t want to simply know that customer data is leaking, you want to stop it from leaking. So we’ve built a unique AI-specific policy engine, Witness/CONTROL, that lets you easily build identity- and intention-based policies to protect data, prevent harmful or illegal responses, and so on. For example, you can build a policy that says something like, “Only our legal department can use ChatGPT to write corporate contracts, and if they do so, automatically redact any PII.” Easy to say, and with WitnessAI, easy to implement.
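To make the "identity plus intent" idea concrete, here is a minimal sketch of such a policy rule in Python. The rule structure, the default-allow fallback, and the `redact_pii` hook are hypothetical illustrations of the quoted example, not the Witness/CONTROL policy language.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PolicyRule:
    app: str                       # e.g. "ChatGPT"
    intent: str                    # e.g. "write a corporate contract"
    allowed_departments: set[str]
    transforms: list[Callable[[str], str]] = field(default_factory=list)

def redact_pii(text: str) -> str:
    # Placeholder for a real PII detector/redactor.
    return text

RULES = [
    PolicyRule(
        app="ChatGPT",
        intent="write a corporate contract",
        allowed_departments={"legal"},
        transforms=[redact_pii],
    ),
]

def enforce(app: str, intent: str, department: str, prompt: str) -> str | None:
    """Return the (possibly transformed) prompt if allowed, or None if blocked."""
    for rule in RULES:
        if rule.app == app and rule.intent == intent:
            if department not in rule.allowed_departments:
                return None  # blocked: only listed departments may do this
            for transform in rule.transforms:
                prompt = transform(prompt)
            return prompt
    return prompt  # no matching rule: allow unchanged (illustrative default)
```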

How does WitnessAI address concerns around LLM jailbreaks and prompt injection attacks?

We have a hardcore AI research team—really sharp. Early on, they built a system to create synthetic attack data, in addition to pulling in widely available training data sets. As a result, we’ve benchmarked our prompt injection detection against everything out there; we are over 99% effective and regularly catch attacks that the models themselves miss.
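Purely as a hedged illustration of where an injection screen sits in the request path, the toy sketch below scores a prompt against a few known jailbreak phrases before it is forwarded. The phrase list and threshold are placeholders and say nothing about WitnessAI's actual detection models or benchmark methodology.

```python
# Toy prompt-injection screen: a real system would use trained models,
# including ones tuned on synthetic attack data, not keyword matching.
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now in developer mode",
]

def injection_score(prompt: str) -> float:
    """Fraction of known suspicious phrases found in the prompt."""
    lowered = prompt.lower()
    hits = sum(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
    return hits / len(SUSPICIOUS_PHRASES)

def screen_prompt(prompt: str, threshold: float = 0.3) -> bool:
    """Return True if the prompt should be allowed through to the model."""
    return injection_score(prompt) < threshold
```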

In practice, most companies we speak with want to start with employee app governance, and then a bit later they roll out an AI customer app based on their internal data. So, they use Witness to protect their people, then they turn on the prompt injection firewall. One system, one consistent way to build policies, easy to scale.

What are your long-term goals for WitnessAI, and where do you see AI governance evolving in the next five years? 

So far, we’ve only talked about a person-to-chat-app model here. Our next phase will be to handle app to app, i.e., agentic AI. We’ve designed the APIs in our platform to work equally well with both agents and humans. Beyond that, we believe we’ve built a new way to get network-level visibility and policy control in the AI age, and we’ll be growing the company with that in mind.

Thank you for the great interview. Readers who wish to learn more should visit WitnessAI.

