Mashable July 29, 17:36
Deepfake voice scams are more sophisticated than ever: How to keep your family safe

Criminals have recently begun using AI voice-cloning technology in a new breed of scam. By imitating the voice of a friend or relative, scammers manufacture a sense of urgency and pressure victims into paying a ransom. The scheme is becoming increasingly common as AI voice-generation tools grow easier to obtain and use; platforms such as ElevenLabs have lowered the technical barrier. Victims are not limited to seniors; public figures have also been targeted. Experts warn people to be wary of urgency and emotional manipulation in calls from unknown numbers, and recommend agreeing on a family "password" to guard against such schemes. Because public posts on social media can be harvested for voice cloning, protecting personal privacy is essential.

🚨 **The emergence and evolution of AI voice-cloning scams**: The article opens with the story of Florida mother Sharon Brightwell, describing in detail how scammers used AI to clone the voice of her daughter, April, fabricate a car-crash injury, and demand a ransom. The case shows how AI can be used to manufacture highly convincing false information for fraud. The scammers even impersonated a public official to make the scheme more credible. As the technology advances, such scams become harder to detect and defend against.

📈 **Ease of use and accessibility of AI voice technology are the key drivers**: Matthew Wright, a professor of cybersecurity at Rochester Institute of Technology, says AI voice-cloning scams are proliferating mainly because the technology's barrier to entry has dropped and its usability has improved. Services from companies such as ElevenLabs make voice cloning relatively easy for ordinary users to access. The article also notes that some AI voice companies have gaps in identity verification and authorization, which criminals can exploit to produce non-consensual voice clones, compounding the problem.

🛡️ **Personal-data exposure and social media risks**: The article stresses that social media is a major source of the voice samples scammers need. Even short voice clips can be used to build convincing clones, so locking down the privacy settings on social media accounts is essential. Professor Wright advises users to be careful about posting videos that contain their own or others' voices and to keep privacy settings tight to prevent malicious use. Even with familiar voices, stay skeptical when something feels off.

💡 **How to guard against AI voice-cloning scams**: The article offers several safeguards. First, treat urgent requests from unknown numbers, especially those involving money, with high suspicion. Second, scammers use urgency and emotional manipulation to push victims into acting fast; both are major warning signs. The article also strongly recommends that family members agree on a private "password" or shared secret to verify a caller's identity. Professor Wright adds that transactions routed through banks generally carry stronger protections, since banks can recognize and block suspicious activity.

Earlier this month, Florida mom Sharon Brightwell received a panicked call from her daughter, who said she was involved in a car crash that injured a pregnant woman. But it wasn’t Brightwell’s daughter, April, on the other end of the line; it was a cloned voice of hers.

After the deepfake spun a story about April texting while driving and claimed her phone had been confiscated by police (hence the unknown number she was supposedly calling from), a man came on the line pretending to be a public defender and requested $15,000 to bail April out. Brightwell complied, gathering the cash, placing it in a box, and handing it to an Uber package courier at her home. It was only after another call came in, this time asking for $30,000 for the "injured pregnant woman," that Brightwell’s grandson realized his family was being scammed.

"To tell you the trauma that my mom and son went through that day makes me nauseous and has made me lose more faith in humanity," April wrote on a GoFundMe page. "Evil is too nice a word for the kind of people that can do this."

Deepfake scams target more than just seniors. This summer, the voice of Secretary of State Marco Rubio was faked via AI, and scammers (unsuccessfully) used it to attempt contact with foreign and domestic leaders. President Trump’s chief of staff Susie Wiles was also the target of deepfake voice scammers this year.

These crimes are happening more because the technology to create such voices is getting better, says Matthew Wright, PhD, a professor and Chair of Cybersecurity at Rochester Institute of Technology.

"Also, this is increasing because the technology is getting easier to use," Wright adds. "I think you could have said a few years ago the technology for this probably exists but would have required more technological skill to access. Now, with [voice cloning company] ElevenLab and other services, it’s very easy."

ElevenLabs is probably the most prominent software company providing realistic voice cloning for companies to use in various applications, like customer service. A March report from Consumer Reports cited ElevenLabs, along with AI voice-cloning companies like Lovo and Speechify, as too lax in their oversight of non-consensual voice cloning. The companies "simply require checking a box saying that the person whose voice is being cloned had given authorization," the report stated.

There are humans behind these voice cloning scams, Wright says, and they may be using those aforementioned services to create their deepfakes. Where crooks used to just cold-call vulnerable people to try to steal their money or information, they now have a powerful and convincing new tool in their arsenal.

"A lot of it is organized crime," Wright says. "What I’ve read about is there are organizations kidnapping people in one country, taking them to another, like Malaysia, for example, and they got them holed up in a special facility that’s secured from people leaving it, and just turning them into slaves."

The first step with many of these scammers is finding vocal examples that they can clone, Wright says.

Keeping yourself safe

While anyone can pull Secretary Rubio’s voice off the internet, how are private individuals being cloned by AI? The short answer: Social media.

"Maybe we have videos of us with our friends or family hanging out, having a good time," Wright says. "If you’ve got that type of social presence, all your settings should be set to private. Not only your settings, because this involves anyone else who has posted videos of you. They don’t need a lot of content. You got 30 seconds of someone’s voice and you can make a pretty good deepfake."

Even a hastily crafted cloned voice can deceive, Wright warns. 

"When they’ve done studies of whether people can detect if something is fake, usually with fairly shorter snippets, accuracy is low," he says. "It’s not something, even when it’s your friends, your family, it’s just not reliable to count on being able to tell whether it’s that person."

When answering calls from unknown numbers claiming to be friends or family members, be dubious, Wright says.

"All of the normal things when it comes to scams, and dealing with scams, those definitely apply here,” Wright says. “These calls for urgency and signals that something needs to be done really quickly [is a red flag]. Pleas where someone is trying to put you into an emotional state and then they request something — 'we need that money, we need it now'" is a bad sign.

Wright points out that specific requests for cash should especially raise alarms. Transactions that go through banks offer a measure of security, he says.

"The banks recognize these things and say, 'This is a scam,'" he says. "They’ll help you recognize it."

It’s always a good idea to create a "password" with friends and family members, especially those who may be susceptible to deepfake scams, like older people or those unfamiliar with technology.

"If you have a special secret that would not be available through social media, but just something the two of you know" that could keep you safe, Wright says. "I would go out of my way to set up something specific, which would essentially be a password between you two, so you know if I call and I’m asking for money, or asking you to give up some sort of information, you need to ask for this [secret]. Then you have some sort of shared assurance."

Have a story to share about a scam or security breach that impacted you? Tell us about it: email submissions@mashable.com with the subject line "Safety Net," and someone from Mashable will get in touch.
