Fortune | FORTUNE October 24, 2024
SoftBank, Mastercard, and Anthropic cyber chiefs sound alarms on AI phishing and deepfakes—but those aren’t the only things keeping them up at night

The article examines cybersecurity in the AI era, including the threat of AI-driven attacks such as AI-powered phishing and deepfakes, how companies can protect their data, and how defenders are responding to constantly evolving attacks. It also covers how companies developing frontier AI models plan for future risks, and the counter-strategies available to defenders.

🎯 A Team8 survey found that AI-powered phishing attacks and deepfakes have become top cybersecurity concerns: three-quarters of respondents said fighting AI phishing attacks is an uphill battle, and over half said the threat of deepfakes is growing.

💻 Gary Hayslip points to protecting company data from supply chain attacks in the AI era as a major problem; organizations must scrutinize how third-party vendors use their data. Adam Zoller likewise considers protecting company data and systems while using third-party AI tools his biggest current security headache.

📈 For companies developing the most advanced AI models, planning for future risks is essential. Jason Clinton stressed the consequences of "the scaling law hypothesis," while remaining cautiously optimistic that defenders can use AI to counter attacks.

🌊 SoftBank's Hayslip sees this as a kind of "cold war": criminal entities use AI to create new threats, pushing the security side to develop new technologies, but the threats keep escalating.

When over 100 top cybersecurity leaders gathered in July at a retreat center in the California redwoods near San Jose, the serene sounds of rustling needles did not detract from discussions about how to deal with the latest AI-driven threats. Team8, the venture capital firm behind the event, surveyed the group, including Fortune 500 CISOs, and found that AI-powered phishing attacks and the rise of deepfakes had emerged as top concerns, just a year after many in the cohort had hoped generative AI would be nothing more than a passing fad. Three-quarters said that fighting AI phishing attacks, or phishing campaigns that make email, text, or messaging scams more sophisticated, personalized, and difficult to detect, had become an uphill battle. Over half said deepfakes, or AI-generated video or audio impersonations, were becoming an increasingly common threat. However, Fortune spoke exclusively to several retreat attendees who said that while AI phishing and deepfakes certainly rank highly as current cybersecurity concerns, there are other issues keeping them up at night when it comes to the growing risks of AI-related cyber attacks on their companies.

Company data exposed and even creepier deepfake scams

Gary Hayslip, chief security officer at investment holding company SoftBank, said one of his biggest concerns is how to protect private company data from supply chain attacks in the age of AI: that is, dealing with risks from third-party vendors that have added generative AI features to their tools but have not implemented the necessary governance around the use of SoftBank's data. "There are good solid vendors…coming up with their own generative AI piece that's now available with this tool you've been using for the last three years," he said. "That's cool, but what is it doing with the data? What is the data interacting with?" Organizations need to ask these questions as though they were quizzing a teenager who wants to download apps onto their smartphone, he added.
"You have to be a little paranoid," he said, adding that a company can't "just open up the gate and let thousands of apps come in and data just goes flying everywhere that's totally unmanned." Adam Zoller, CISO at Providence Health & Services, a not-for-profit healthcare system headquartered in Renton, Wash., agreed that protecting company data and systems while using third-party AI tools is his biggest security headache right now, particularly in a highly regulated industry like healthcare. Suppliers may integrate LLMs into existing healthcare software platforms or biomedical devices and may not take security issues as seriously as they should, he explained. "Some of these capabilities are either deployed without our knowledge, like in the background as a software update," he said, adding that he often has to have a conversation with business leaders, letting them know that using certain tools creates "an unacceptable risk." Other security leaders are particularly worried about how current attacks are quickly evolving. For example, while deepfakes are already convincing, Alissa Abdullah, deputy CSO at Mastercard, said she was very concerned about new deepfake scams that are likely to emerge over the coming year. These would use AI video and audio to impersonate not someone recognizable to the user, but a stranger from a trusted brand: a favorite company's help desk representative, for example. "They will call you and say, 'we need to authenticate you into our system,' and ask for $20 to remove the 'fraud alert' that was on my account," she said. "No longer is it wanting $20 billion in Bitcoin, but $20 from 1,000 people, small amounts that even people like my mother would be happy to say 'let me just give it to you.'"

The exponential upward curve of AI capabilities

For CISOs at companies developing the most advanced AI models, planning for future risks is essential.
Jason Clinton, chief information security officer at Anthropic, spoke at the Team8 event, emphasizing to the group that it's the consequences of "the scaling law hypothesis" that worry him the most. This hypothesis suggests that increasing the size of an AI model, the amount of data it is fed, and the computing power used to train the model necessarily leads to a consistent and, to some extent, predictable increase in the model's capabilities. "I don't think that [the CISOs] fully internalized this," he said of understanding the exponential upward curve of AI capabilities. "If you're trying to plan for an enterprise strategy for cyber that just is based on what exists today, then you're going to be behind," he said. "A year from now, it's going to be a 4x year-over-year increase in computing power." That said, Clinton said he is "cautiously optimistic" that improvements in AI will allow defenders to respond to AI-powered attacks. "I do think we have a defender's advantage, and so there's not really a need to be pessimistic," he said. "We are finding vulnerabilities faster than any attacker that I'm aware of." In addition, the recent DARPA AI Cyber Challenge showed that developers could create new generative AI systems to safeguard critical software that undergirds everything from financial systems and hospitals to public utilities. "The economics and the investment and the technologies seem to be favoring folks who are trying to do the right thing on the defender side," he said.

An AI 'cold war'

SoftBank's Hayslip agreed that defenders can stay ahead of AI-powered attacks on companies, calling it a kind of "cold war." "You've got the criminal entities moving very quickly, using AI to come up with new types of threats and methodologies to make money," he said.
“That, in turn, pushes back on us with the breaches and the incidents that we have, which pushes us to develop new technologies.” The good news is, he said, that while a year ago there were only a couple of startups focused on monitoring generative AI tools or providing security against AI attackers, this year there were dozens. “I can’t even imagine what I’ll see next year,” he said. But companies have their work cut out for them, as the threats are definitely escalating, he said, adding that security leaders cannot hide from what is coming. “I know that there is a camp of CISOs that want to scream, and they’re trying to stop it or slow it down,” he said. “In a way it’s like a tidal wave and whether they like it or not, it’s coming hard, because [AI threats] are maturing and growing that fast.” 

