Fortune | FORTUNE 07月11日 00:39
How a deepfake of Marco Rubio exposed the alarming ease of AI voice scams
The article examines the growing threat posed by AI deepfake technology, particularly voice-cloning scams. It notes that with platforms such as Eleven Labs, a convincing voice clone can be generated from only a small audio sample and used to impersonate someone for fraud. Both the number of deepfake incidents and the resulting losses are rising sharply, and traditional security measures have proven inadequate. The article stresses the need for stronger defenses, such as FIDO2 hardware security keys and behavioral monitoring systems, to counter deepfake attacks. It also discusses AI's potential to harm democracies and elections worldwide, and advises government officials to avoid insecure communication platforms.

🗣️ AI voice deepfakes are becoming ever easier to abuse. A voice sample of just 15 to 30 seconds is enough to generate a convincing clone for deception and fraud.

📈 Deepfake incidents and the losses they cause are rising rapidly. In the first half of 2025, deepfake-related incidents surged, with losses reaching hundreds of millions of dollars, mostly from impersonating public figures to promote fraudulent investments.

🛡️ Traditional security measures (usernames, passwords, and authenticator apps) are no longer adequate against deepfakes. The article recommends stronger security tools, such as FIDO2 or WebAuthn passkeys, along with monitoring user behavior to spot anomalies.

🗳️ AI deepfakes are affecting democracies and elections worldwide. They are being used to defame opponents and to influence diplomatic relations and decision-making, posing a threat to national security. Government officials should avoid insecure communication platforms to protect sensitive information.

An audio deepfake impersonating Secretary of State Marco Rubio contacted foreign ministers, a U.S. governor, and a member of Congress with AI-generated voicemails mimicking his voice, according to a senior U.S. official and a State Department cable dated July 3. 

There’s no public evidence that any of the recipients of the messages, reportedly designed to extract sensitive information or gain account access, were fooled by the scam. But the incident is the latest high-profile example of how easy—and alarmingly convincing—AI voice scams have become.

With just 15 to 30 seconds of someone’s speech uploaded to services like Eleven Labs, Speechify, and Respeecher, it’s now possible to type out any message and have it read aloud in their voice. Keep in mind, these tools are used perfectly legitimately for a host of purposes, from accessibility to content creation. But like many AI technologies, they can be misused by bad actors.

The threat of deepfakes has escalated

AI-generated deepfakes aren’t new, particularly of C-suite leaders and public officials, but they are becoming a bigger problem. Eight months ago, I reported that more than half of chief information security officers (CISOs) surveyed ranked video and audio deepfakes as a growing concern. That threat has only escalated. A new study by Surfshark found that in the first half of 2025 alone, deepfake-related incidents surged to 580—nearly four times as many as in all of 2024 (150 incidents), and dramatically higher than the 64 incidents reported between 2017 and 2023. Losses from deepfake fraud have also skyrocketed, reaching $897 million cumulatively, with $410 million of that in just the first half of 2025. The most common scheme: impersonating public figures to promote fraudulent investments, which has already resulted in $401 million in losses.

“Deepfakes have evolved into real, active cybersecurity threats,” Aviad Mizrachi, CTO and co-founder of software security company Frontegg, told me by email. “We’re already seeing AI-generated video calls successfully trick employees into authorizing multimillion-dollar payments. These attacks are happening now, and it’s a scam that is becoming alarmingly easy for a hacker to deploy.”

Part of the problem, Mizrachi added, is that traditional authentication methods—usernames, passwords, one-time codes, and authenticator apps—weren’t designed for a world where a scammer can clone your voice or face in seconds. That’s because these scams don’t necessarily involve breaking into an account—they rely on tricking a real person into handing over credentials or authorizing actions themselves.

“Those traditional security measures to check the identity of an individual obviously don’t work anymore,” he said, adding that most cybersecurity teams still overlook deepfakes—and that’s the vulnerability attackers exploit. A convincing fake voice in a voicemail or on a video call can persuade someone to bypass normal procedures or approve a wire transfer, even if all the authentication tools are technically in place.

To guard against that kind of deception, Mizrachi said, organizations need to deploy stronger security tools that rely on physical devices—like a smartphone or hardware security key—to prove someone’s identity. These tools, known as FIDO2 or WebAuthn passkeys, are far harder for hackers to fake or phish. And beyond device checks, smart verification systems can also monitor behavioral signals—like typing speed, location, or login habits—to spot anomalies that a cloned voice can’t imitate. Those extra layers make it much harder for a deepfake attack to succeed.
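The behavioral-signal idea above can be sketched in a few lines of code. This is a minimal illustration, not Darktrace's or any vendor's actual method: the signal names, baseline values, and thresholds are all hypothetical, and real systems use far richer statistical models.

```python
# Illustrative sketch: score a login attempt against a user's recorded
# behavioral baseline. All field names and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class LoginEvent:
    typing_speed_wpm: float  # typing cadence while entering credentials
    country: str             # geolocated origin of the attempt
    hour_utc: int            # hour of day (UTC) of the attempt


def anomaly_score(event: LoginEvent, baseline: dict) -> int:
    """Count how many behavioral signals fall outside the baseline."""
    score = 0
    lo, hi = baseline["typing_speed_range"]
    if not lo <= event.typing_speed_wpm <= hi:
        score += 1
    if event.country not in baseline["usual_countries"]:
        score += 1
    if event.hour_utc not in baseline["usual_hours"]:
        score += 1
    return score


baseline = {
    "typing_speed_range": (55, 90),        # this user's typical cadence
    "usual_countries": {"US"},
    "usual_hours": set(range(13, 23)),     # roughly 9am-6pm US Eastern
}

# A cloned voice can sound exactly like the victim, but it does not
# reproduce the victim's typing cadence, location, or login habits.
suspicious = LoginEvent(typing_speed_wpm=120, country="RO", hour_utc=3)
print(anomaly_score(suspicious, baseline))  # prints 3 -> trigger step-up auth
```

The point of the sketch is the layering: even if a deepfake talks an employee through an approval, an out-of-baseline score can force a second, phishing-resistant check such as a FIDO2 passkey prompt on a registered device.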

Margaret Cunningham, director of security and AI strategy at security firm Darktrace, said that the impersonation attempt of Rubio demonstrates just how easily generative AI can be used to launch convincing, targeted social engineering attacks.

“This threat didn’t fail because it was poorly crafted—it failed because it missed the right moment of human vulnerability,” she said. “People often don’t make decisions in calm, focused conditions. They respond while multitasking, under pressure, and guided by what feels familiar. In those moments, a trusted voice or official-looking message can easily bypass caution.”

Deepfakes have impacted democracies around the world

Generative AI has also dramatically lowered the barrier to entry for media manipulation—making it faster, cheaper, and more scalable than ever before. And it is impacting democracies around the world: A recent New York Times report found that AI-powered deepfakes have transformed elections in at least 50 countries, where they have been used to demean or defame opponents.

“This is the new frontier for influence operations,” Leah Siskind, AI research fellow at the Foundation for Defense of Democracies, told me. “We’ve seen other instances of deepfakes of senior government officials used to gain access to personal accounts, but leveraging AI to influence diplomatic relationships and decision-making is a dangerous escalation. This is an urgent national security issue with serious diplomatic ramifications.”

For now, Siskind recommends that government officials steer clear of popular encrypted platforms like Signal, which while secure in terms of content, lack mechanisms for identity verification. “Given the ease of creating deepfake audio and building out realistic-looking accounts on any consumer-grade messaging app, senior government officials should stick to secure communication channels,” she said.

Note: Check out this new Fortune video about my tour of IBM’s quantum computing test lab. I had a fabulous time hanging out at IBM’s Yorktown Heights campus (a midcentury modern marvel designed by Eero Saarinen, the same guy as the St. Louis Arch and the classic TWA Flight Center at JFK Airport) in New York. The video was part of my coverage for this year’s Fortune 500 issue that included an article that dug deep into IBM’s recent rebound.

As I said in my piece, “walking through the IBM research center is like stepping into two worlds at once. There are the steel and glass curves of Saarinen’s design, punctuated by massive walls made of stones collected from the surrounding fields, with original Eames chairs dotting discussion nooks. But this 20th-century modernism contrasts starkly with the sleek, massive, refrigerator-like quantum computer—among the most advanced in the world—that anchors the collaboration area and working lab, where it whooshes with the steady hum of its cooling system.”

With that, here’s the rest of the AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

AI IN THE NEWS

xAI releases Grok 4 amid backlash over antisemitic posts. Elon Musk’s xAI has launched Grok 4, just months after its previous release, highlighting the rapid pace of AI development. Unveiled during a late-night livestream, Musk claimed the new chatbot outperforms most graduate students across disciplines and now supports improved voice interactions. xAI also touted benchmark results showing Grok 4 beating rivals like OpenAI. While Musk admitted the bot sometimes lacks common sense and hasn’t yet made scientific breakthroughs, he added, “that is just a matter of time.” Grok 4’s release comes just a day after xAI faced backlash for antisemitic content generated by the chatbot on X. The company removed the offensive posts and said it has since taken steps to block hate speech before Grok’s responses are published on the platform.

Perplexity launches AI browser to take on Google—starting with its power users. Perplexity is stepping up its challenge to Google with the launch of Comet, its first AI-powered web browser. Available initially to $200/month Max subscribers and a limited waitlist, Comet integrates Perplexity’s signature AI search engine front and center—offering summarized answers instead of traditional search results. The browser also debuts Comet Assistant, an in-browser AI agent that can summarize emails and calendar events, manage tabs, and interact with webpages in real time. CEO Aravind Srinivas has framed Comet as more than just a browser—it’s a stepping stone to what he calls an “AI operating system” designed to deeply embed Perplexity into users’ daily workflows. The move puts Perplexity in more direct competition with Chrome and Google’s own AI search experiments, as the startup bets on its browser becoming the new gateway for how people find and interact with information online.

OpenAI is reportedly gearing up to launch its own web browser. According to Reuters, OpenAI is also joining the browser wars as it moves to compete more directly with Google, and now Perplexity. Citing three sources familiar with the plans, the report says the browser could debut in the coming weeks and is designed to keep some user interactions within a ChatGPT-style interface—rather than directing users to external websites. The move would also give OpenAI access to the kind of user data that has long powered Google’s dominance in search.

FORTUNE ON AI

Apple’s AI efforts ‘have struck midnight’ and the only way it can stop getting further behind is acquiring Perplexity, analyst Dan Ives says —by Marco Quiroz-Guitierrez

Would you replace your CEO with an AI avatar? —by Alexandra Sternlicht

Why Coca-Cola’s CIO prioritizes big-impact AI pilot projects —by John Kell

Amazon’s tariff-clouded, seller-confused, AI-researched, weirdest Prime Day ever —by Jason Del Rey

AI CALENDAR

July 13-19: International Conference on Machine Learning (ICML), Vancouver

July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.

July 26-28: World Artificial Intelligence Conference (WAIC), Shanghai. 

Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.

Oct. 6-10: World AI Week, Amsterdam

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

EYE ON AI NUMBERS

80%

That's the share of business software that will be powered by AI that understands more than just text—including images, video, audio, and other data—by 2030, according to a new report from Gartner. That's a huge jump from less than 10% today.

This shift, known as “multimodal AI,” could change how businesses operate across industries like healthcare, finance, and manufacturing. For example, it could help software make smarter decisions by analyzing a mix of information (like medical images and patient notes), or even take proactive steps—like flagging fraud or optimizing supply chains—without human input.

Gartner analysts say companies building software will need to start investing now in these new AI technologies to stay competitive and deliver real value to their customers.
