Unite.AI · April 19, 01:08
How Scammers Use AI in Banking Fraud

The article examines how scammers exploit artificial intelligence (AI) to sidestep security measures and commit financial fraud. It describes how AI techniques such as deepfakes, generative models and personalization are used in schemes including impostor scams, fake fraud warnings and account takeover, and it outlines protective measures banks and consumers can take, including multifactor authentication, KYC standards, behavioral analytics and risk assessments. It stresses that in the AI era, banks must be proactive in protecting customers from fraud.

🎭 Deepfakes enhance impostor scams. Fraudsters use AI to generate realistic video and audio impersonating executives, costing companies enormous losses.

📢 Generative models send fake fraud warnings. Scammers use AI to blast out fraud alerts at scale, tricking victims into disclosing personal and bank account information.

🎯 AI personalization facilitates account takeover. By analyzing victims' routines and shopping habits, scammers tailor their messages and raise their success rate.

🌐 Generative AI revamps the fake website scam. Scammers use AI to rapidly build and update counterfeit websites that imitate financial institutions and steal users' funds.

🛡️ Algorithms bypass liveness detection tools. AI can defeat liveness checks, letting fraudsters commit fraud with forged identities.

🏦 Banks should adopt multifactor authentication, improve KYC standards, use behavioral analytics and conduct risk assessments to counter AI-driven scams.

AI has empowered fraudsters to sidestep anti-spoofing checks and voice verification, allowing them to produce counterfeit identification and financial documents remarkably quickly. Their methods have become increasingly inventive as generative technology evolves. How can consumers protect themselves, and what can financial institutions do to help?

1. Deepfakes Enhance the Impostor Scam

AI enabled the largest successful impostor scam ever recorded. In 2024, United Kingdom-based Arup — an engineering consulting firm — lost around $25 million after fraudsters tricked a staff member into transferring funds during a live video conference. The fraudsters had digitally cloned members of the firm's senior management, including the chief financial officer.

Deepfakes use generator and discriminator algorithms to create a digital duplicate and evaluate realism, enabling them to convincingly mimic someone’s facial features and voice. With AI, criminals can create one using only one minute of audio and a single photograph. Since these artificial images, audio clips or videos can be prerecorded or live, they can appear anywhere.
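To make the generator/discriminator idea concrete, here is a deliberately tiny adversarial training loop in NumPy. It fits a one-dimensional Gaussian rather than faces or voices; every distribution, architecture and hyperparameter below is an assumption for the demo, and real deepfake models are orders of magnitude larger:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip to avoid overflow in exp for extreme logits.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60.0, 60.0)))

def real_batch(n):
    # "Real" data: 1-D samples from N(4, 1), standing in for genuine features.
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: affine map from noise z ~ N(0, 1) to a sample.
g_w, g_b = rng.normal(size=(1, 1)) * 0.1, np.zeros((1, 1))
# Discriminator: logistic regression over the sample value.
d_w, d_b = rng.normal(size=(1, 1)) * 0.1, np.zeros((1, 1))

lr, n = 0.02, 64
for _ in range(5000):
    z = rng.normal(size=(n, 1))
    fake, real = z @ g_w + g_b, real_batch(n)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (binary cross-entropy gradients for a logistic model).
    pr = sigmoid(real @ d_w + d_b)
    pf = sigmoid(fake @ d_w + d_b)
    d_w -= lr * (real.T @ (pr - 1.0) + fake.T @ pf) / n
    d_b -= lr * ((pr - 1.0).mean() + pf.mean())

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    fake = z @ g_w + g_b
    pf = sigmoid(fake @ d_w + d_b)
    dfake = (pf - 1.0) @ d_w.T / n       # backprop through the discriminator
    g_w -= lr * (z.T @ dfake)
    g_b -= lr * dfake.sum(axis=0, keepdims=True)

# Generated samples should have drifted toward the real distribution's mean (~4).
samples = rng.normal(size=(1000, 1)) @ g_w + g_b
print(float(samples.mean()))
```

The same adversarial dynamic — a generator improving until the discriminator can no longer tell real from fake — is what makes deepfake imagery and audio convincing at scale.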

2. Generative Models Send Fake Fraud Warnings

A generative model can simultaneously send thousands of fake fraud warnings. Picture someone hacking into a consumer electronics website. As big orders come in, their AI calls customers, saying the bank flagged the transaction as fraudulent. It requests their account number and the answers to their security questions, saying it must verify their identity. 

The urgent call and implication of fraud can persuade customers to give up their banking and personal information. Since AI can analyze vast amounts of data in seconds, it can quickly reference real facts to make the call more convincing.

3. AI Personalization Facilitates Account Takeover 

While a cybercriminal could brute-force their way in by endlessly guessing passwords, they often use stolen login credentials. They immediately change the password, backup email and multifactor authentication number to prevent the real account holder from kicking them out. Cybersecurity professionals can defend against these tactics because they understand the playbook. AI introduces unknown variables, which weakens their defenses. 

Personalization is the most dangerous weapon a scammer can have. They often target people during peak traffic periods when many transactions occur — like Black Friday — to make it harder to monitor for fraud. An algorithm could tailor send times based on a person’s routine, shopping habits or message preferences, making them more likely to engage.

Advanced language generation and rapid processing enable mass email generation, domain spoofing and content personalization. Even if bad actors send 10 times as many messages, each one will seem authentic, persuasive and relevant.

4. Generative AI Revamps the Fake Website Scam

Generative technology can do everything from designing wireframes to organizing content. For pennies on the dollar, a scammer can use no-code tools to create and edit a fake investment, lending or banking website within seconds.

Unlike a conventional phishing page, it can update in near-real time and respond to interaction. For example, if someone calls the listed phone number or uses the live chat feature, they could be connected to a model trained to act like a financial advisor or bank employee. 

In one such case, scammers cloned the Exante platform. The global fintech company gives users access to over 1 million financial instruments in dozens of markets, so the victims thought they were legitimately investing. However, they were unknowingly depositing funds into a JPMorgan Chase account.

Natalia Taft, Exante’s head of compliance, said the firm found “quite a few” similar scams, suggesting the first wasn’t an isolated case. Taft said the scammers did an excellent job cloning the website interface. She said AI tools likely created it because it is a “speed game,” and they must “hit as many victims as possible before being taken down.”

5. Algorithms Bypass Liveness Detection Tools

Liveness detection uses real-time biometrics to determine whether the person in front of the camera is real and matches the account holder’s ID. In theory, bypassing authentication becomes more challenging, preventing people from using old photos or videos. However, it isn’t as effective as it used to be, thanks to AI-powered deepfakes. 

Cybercriminals could use this technology to mimic real people to accelerate account takeover. Alternatively, they could trick the tool into verifying a fake persona, facilitating money muling. 

Scammers don’t need to train a model to do this — they can pay for a pretrained version. One software solution claims it can bypass five of the most prominent liveness detection tools fintech companies use for a one-time purchase of $2,000. Advertisements for tools like this are abundant on platforms like Telegram, demonstrating the ease of modern banking fraud.

6. AI Identities Enable New Account Fraud

Fraudsters can use generative technology to steal a person’s identity. On the dark web, many places offer forged state-issued documents like passports and driver’s licenses. Beyond that, they provide fake selfies and financial records. 

A synthetic identity is a fabricated persona created by combining real and fake details. For example, the Social Security number may be real, but the name and address are not. As a result, they are harder to detect with conventional tools. The 2021 Identity and Fraud Trends report shows roughly 33% of false positives Equifax sees are synthetic identities. 

Professional scammers with generous budgets and lofty ambitions create new identities with generative tools. They cultivate the persona, establishing a financial and credit history. These legitimate actions trick know-your-customer software, allowing them to remain undetected. Eventually, they max out their credit and disappear with net-positive earnings. 

Though this process is more complex, it happens passively. Advanced algorithms trained on fraud techniques can react in real time. They know when to make a purchase, pay off credit card debt or take out a loan like a human, helping them escape detection.

What Banks Can Do to Defend Against These AI Scams

Consumers can protect themselves by creating complex passwords and exercising caution when sharing personal or account information. Banks should do even more to defend against AI-related fraud because they’re responsible for securing and managing accounts.

1. Employ Multifactor Authentication Tools

Since deepfakes have compromised biometric security, banks should rely on multifactor authentication instead. Even if a scammer successfully steals someone’s login credentials, they can’t gain access without the second factor.

Financial institutions should tell customers to never share their MFA code. AI is a powerful tool for cybercriminals, but it can’t reliably bypass secure one-time passcodes. Phishing is one of the only ways it can attempt to do so.
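The one-time passcodes behind most MFA apps follow RFC 6238 (TOTP), which derives a short-lived code from a shared secret and the current time. A minimal sketch using only the Python standard library (function names and the drift window are illustrative choices, not any bank's actual implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, RFC 4226 section 5.3
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32, candidate, window=1, step=30):
    """Constant-time check, tolerating +/- `window` steps of clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + drift * step, step), candidate)
        for drift in range(-window, window + 1)
    )

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 seconds.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # 287082 (last 6 digits of the RFC's 94287082)
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a deepfake alone can't reproduce it — which is exactly why phishing the code out of the customer becomes the attacker's only practical route.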

2. Improve Know-Your-Customer Standards

KYC is a financial service standard requiring banks to verify customers’ identities, risk profiles and financial records. While service providers operating in legal gray areas aren’t technically subject to KYC — new rules impacting DeFi won’t come into effect until 2027 — it is an industry-wide best practice. 

Synthetic identities with years-long, legitimate, carefully cultivated transaction histories are convincing but error-prone. For instance, simple prompt engineering can force a generative model to reveal its true nature. Banks should integrate these techniques into their strategies.

3. Use Advanced Behavioral Analytics 

A best practice when combating AI is to fight fire with fire. Behavioral analytics powered by a machine learning system can collect a tremendous amount of data on tens of thousands of people simultaneously. It can track everything from mouse movement to timestamped access logs. A sudden change can indicate an account takeover.

While advanced models can mimic a person’s purchasing or credit habits if they have enough historical data, they won’t know how to mimic scroll speed, swiping patterns or mouse movements, giving banks a subtle advantage.
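As a sketch of the idea, a per-user baseline plus a z-score threshold is enough to flag out-of-profile behavior. The signal choice (scroll speed), threshold and numbers below are illustrative assumptions; production systems combine many signals and richer models:

```python
import math

class BehaviorProfile:
    """Running per-user baseline for one behavioral signal (e.g. scroll speed
    per session), using Welford's online algorithm so no raw history is kept."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x):
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return 0.0 if std == 0 else (x - self.mean) / std

def is_takeover_signal(profile, observation, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from the norm."""
    return abs(profile.zscore(observation)) > threshold

# Usage: a user who normally scrolls ~800 px/s suddenly shows bot-like 5000 px/s.
profile = BehaviorProfile()
for speed in [790, 810, 805, 795, 800, 812, 788, 803]:
    profile.update(speed)
print(is_takeover_signal(profile, 5000))  # True — far outside the baseline
print(is_takeover_signal(profile, 806))   # False — within normal variation
```

The defensive value is exactly the asymmetry the paragraph above describes: an attacker can replay stolen purchase history, but fabricating someone's motor-level interaction statistics is far harder.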

4. Conduct Comprehensive Risk Assessments 

Banks should conduct risk assessments during account creation to prevent new account fraud and deny resources from money mules. They can start by searching for discrepancies in name, address and SSN. 

Though synthetic identities are convincing, they aren’t foolproof. A thorough search of public records and social media would reveal they only popped into existence recently. A professional could remove them given enough time, preventing money muling and financial fraud.

A temporary hold or transfer limit pending verification could prevent bad actors from creating and dumping accounts en masse. While making the process less intuitive for real users may cause friction, it could save consumers thousands or even tens of thousands of dollars in the long run.
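The discrepancy checks and temporary holds described above can be sketched as a simple additive scoring rule. All field names, weights and thresholds below are invented for illustration; real screening draws on far richer bureau and public-records data:

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    name: str
    address: str
    ssn: str
    # Names already linked to this SSN in bureau data (assumed external feed).
    ssn_linked_names: list = field(default_factory=list)

def risk_score(app):
    """Toy additive risk score for new-account screening; weights are illustrative."""
    score = 0
    # An SSN already tied to different names suggests a synthetic identity.
    if any(n.lower() != app.name.lower() for n in app.ssn_linked_names):
        score += 40
    # Structurally invalid SSN area numbers (000, 666, 900-999) are a hard red flag.
    area = app.ssn.split("-")[0]
    if area in {"000", "666"} or area.startswith("9"):
        score += 50
    # A PO box address is weak evidence alone but compounds with other signals.
    if "po box" in app.address.lower():
        score += 15
    return score

def decision(score, hold_at=40, deny_at=80):
    """Map a score to an action: open normally, hold pending verification, or deny."""
    if score >= deny_at:
        return "deny"
    if score >= hold_at:
        return "hold-pending-verification"
    return "open"

# Usage: a clean application opens; a suspicious one is held or denied.
clean = Application("Ada Park", "12 Elm St, Springfield", "123-45-6789")
shady = Application("Ada Park", "PO Box 77", "666-45-6789", ["Bo Lane"])
print(decision(risk_score(clean)))  # open
print(decision(risk_score(shady)))  # deny (40 + 50 + 15 = 105)
```

The middle tier is the temporary-hold idea from the text: rather than a binary accept/reject, borderline applications get limited functionality until a human or stronger check verifies them.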

Protecting Customers From AI Scams and Fraud

AI poses a serious problem for banks and fintech companies because bad actors don’t need to be experts — or even very technically literate — to execute sophisticated scams. Moreover, they don’t need to build a specialized model. Instead, they can jailbreak a general-purpose version. Since these tools are so accessible, banks must be proactive and diligent.

