MIT Technology Review » Artificial Intelligence
Battling next-gen financial fraud 

This article reveals how, between 2021 and 2024, a Canada-based criminal network used phone scams to defraud elderly victims in the US out of $21 million. It notes that as large language models (LLMs) have proliferated, voice-cloning technology has become readily available, making scams more sophisticated and harder to guard against. Synthetic identity fraud, meanwhile, has become the fastest-growing financial crime in the US, costing banks $6 billion a year. The article stresses that technology has both accelerated longstanding forms of fraud and given rise to entirely new, large-scale ones, with fraudsters using AI tools to expand the reach and efficiency of their attacks.

📞 Phone scams: The criminal network used voice over internet protocol (VoIP) technology to pose as victims' grandchildren, extracting money through conversations tailored with personal data such as each victim's age, address, and income.

🗣️ Voice cloning: With LLMs, an hour of YouTube footage and a cheap subscription are enough to clone a voice, allowing fraudsters to mount far more convincing attacks.

👤 Synthetic identity fraud: Criminals exploit data breaches to fabricate identities and use cheap credential-stuffing software to rapidly test large volumes of stolen credentials. AI-powered text-to-speech tools can also bypass voice authentication systems with ease.

💡 Catalyzing and transformative: John Pitts, head of industry relations and digital trust at Plaid, notes that technology has both accelerated longstanding fraud and opened the door to new, large-scale forms of it.

🤖 AI-amplified attacks: Fraudsters use AI tools to multiply their attack vectors; in advance-fee scams, for example, AI lets them identify victims at lower cost and greater efficiency while carrying on digital conversations at massive scale.

From a cluster of call centers in Canada, a criminal network defrauded elderly victims in the US out of $21 million in total between 2021 and 2024. The fraudsters used voice over internet protocol technology to dupe victims into believing the calls came from their grandchildren in the US, customizing conversations using banks of personal data, including ages, addresses, and the estimated incomes of their victims. 

The proliferation of large language models (LLMs) has also made it possible to clone a voice with nothing more than an hour of YouTube footage and an $11 subscription. And fraudsters are using such tools to create increasingly sophisticated attacks, deceiving victims with alarming success. But phone scams are just one way that bad actors are weaponizing technology to refine and scale attacks. 

Synthetic identity fraud now costs banks $6 billion a year, making it the fastest-growing financial crime in the US. Criminals are able to exploit personal data breaches to fabricate “Frankenstein IDs.” Cheap credential-stuffing software can be used to test thousands of stolen credentials across multiple platforms in a matter of minutes. And text-to-speech tools powered by AI can bypass voice authentication systems with ease. 

“Technology is both catalyzing and transformative,” says John Pitts, head of industry relations and digital trust at Plaid. “Catalyzing in that it has accelerated and made more intense longstanding types of fraud. And transformative in that it has created windows for new, scaled-up types of fraud.” 

Fraudsters can use AI tools to multiply many times over the number of attack vectors—the entry points or pathways that attackers can use to infiltrate a network or system. In advance-fee scams, for instance, where fraudsters pose as benefactors gifting large sums in exchange for an upfront fee, scammers can use AI to identify victims at a far greater rate and at a much lower cost than ever before. They can then use AI tools to carry out tens of thousands, if not millions, of simultaneous digital conversations. 

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
