Communications of the ACM - Artificial Intelligence
AI and the Democratization of Cybercrime

 

Artificial intelligence is amplifying the power of cybercrime in unprecedented ways, turning attacks that once demanded deep expertise into cheap, easily executed operations. From ransomware and phishing to deepfakes, AI is reshaping the criminal ecosystem, lowering the technical bar for attackers while raising the sophistication and efficiency of their attacks. This article examines how AI has "industrialized" cybercrime, traces the evolution of the crime-as-a-service model, and explores AI's role in generating malicious code, personalizing phishing emails, and cloning voices. Although AI also provides powerful help to defenders, the article argues that technical upgrades alone cannot meet the challenge, and calls for managing and regulating AI capabilities at the source to curb AI-driven cybercrime at its root.

🤖 AI has become a powerful accelerant for cybercrime, sharply lowering the cost and technical bar for sophisticated attacks. Generative AI can produce convincing text, mimic voices, and automate attack chains, putting activities once reserved for expert hackers within reach of low-skilled criminals. The result is a surge in attacks, including explosive growth in ransomware incidents and widespread use of high-tech techniques. AI has fused previously separate criminal tools (ransomware, phishing, deepfakes) into a single low-barrier ecosystem, greatly expanding the scale and impact of crime.

💼 Cybercrime is undergoing an "industrialization" shift, evolving from crime-as-a-service (CaaS) to ransomware-as-a-service (RaaS). Criminal platforms sell malware builders, phishing kits, botnets, and initial-access brokering on dark-web marketplaces, complete with customer reviews. In the RaaS model, core developers maintain the payload and payment channels while large numbers of "affiliates" carry out intrusions and negotiations, with revenue splits resembling ride-hailing services. These RaaS outfits now operate like midsize SaaS companies, offering 24/7 support and continually rolling out more elaborate extortion schemes, such as "quintuple extortion," to maximize victim losses.

📧 AI dramatically improves the stealth and targeting of attacks. "Dark LLMs" can generate "undetectable" malicious code and polished phishing emails on demand, letting attackers strike without deep technical backgrounds. Using leaked customer relationship management (CRM) data, AI can craft hyper-personalized phishing emails indistinguishable from genuine internal communications. Deepfakes and voice cloning make social-engineering attacks even harder to defend against, enabling highly realistic fake video and audio that impersonates executives or relatives, and they have already caused enormous financial losses.

🛡️ Defending against AI-driven cybercrime requires upgrades on multiple fronts. AI itself can serve as a defensive "force multiplier," rapidly analyzing massive log volumes for anomalies and assisting in deepfake detection, though human experts still outperform current AI detectors on high-quality deepfakes. Core defenses include continuous automated patching to shrink the attack surface; Zero-Trust architecture and immutable offline backups to counter double extortion; stronger identity controls, with universal multi-factor authentication (MFA) and phishing-resistant authentication; and exercises that simulate AI-assisted attacks to sharpen incident response. Intelligence sharing and public-private collaboration are equally critical for rapid response and disruption of criminal ecosystems.

⚖️ The article calls for strict regulation of offensive AI capabilities, treating them as "controlled goods." The author argues that defenders currently shoulder too much of the burden while the sources of AI capability go uncontrolled. AI models able to autonomously scan, exploit, or deepfake should fall under a regulatory framework similar to that for controlled chemicals or explosives, with licensing, audits, and accountability mechanisms. Such "upstream" controls aim to cut off the supply of criminal tools at the source rather than merely responding to attacks that have already happened, making them more fundamental than purely defensive measures.

Artificial intelligence (AI) has become one of the most potent force multipliers the criminal underground has ever seen. Generative models that write immaculate prose, mimic voices, and chain exploits together have lowered the cost of sophisticated attacks to almost nothing. 

This isn’t news. Last year, Jen Easterly, former Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), warned that AI “will exacerbate the threat of cyberattacks [by making] people who are less sophisticated actually better at doing some of the things they want to do.”

The truth of that warning is already visible. The first quarter of 2025 saw an unprecedented 126% surge in ransomware incidents. Since then, a spree of high-impact attacks has hit high-profile targets: British retail institutions, global brands, major logistics operators, and more.

Ransomware, phishing, and deepfakes have merged into a low-barrier ecosystem where a cloud-hosted toolkit, a stolen credential, and a crypto wallet now suffice to run an international extortion ring. 

This post peels back the mechanics and economics of that new criminal frontier and offers actionable insights for defense.

The Industrialization of Cybercrime: From CaaS to RaaS

Cybercrime-as-a-Service (CaaS) mirrors the legitimate SaaS market. Malware builders, phishing kits, botnets, and initial-access brokers are sold on dark-web storefronts that accept crypto and even display customer reviews. 

The flagship product is Ransomware-as-a-Service (RaaS): core developers maintain the payload, leak-site, and payment gateway, while thousands of ‘affiliates’ conduct intrusions and negotiations. Payouts resemble ride-hailing splits, typically 70% to the affiliate and 30% to the platform, and affiliates can onboard in minutes.

RaaS outfits today look like midsize SaaS firms. They publish changelogs, run 24/7 ticket desks, and offer live chat to guide victims through buying cryptocurrency. Double-extortion (encrypt and leak) is baseline, with options for triple-extortion to pile harassment or DDoS on top. FunkSec, an AI-enabled crew first seen in late 2024, even offered “quintuple” extortion tiers that layer stock-price manipulation over leaks and DDoS. 

Top RaaS brands draw more than 15 million page views every month as victims, journalists, and even investors monitor newly posted archives. Operators monetize that audience with “marketing bundles”: Photoshop templates for branded ransom notes, boilerplate text that cites GDPR fines, and even customer-experience surveys that let victims rate the service.

AI: The Ultimate Democratizer of Crimeware

Dark LLMs for Everyone

Cheap, off-the-shelf language models are erasing the technical hurdles. FraudGPT and WormGPT subscriptions start at roughly $200 per month, promising ‘undetectable’ malware, flawless spear-phishing prose, and step-by-step exploit guidance. 

An aspiring criminal no longer needs the technical knowledge to tweak GitHub proof-of-concepts. They paste a prompt such as ‘Write a PowerShell loader that evades EDR’ and receive usable code in seconds.

Hyper-Personalized Phishing

Large language models (LLMs) fine-tuned on breached CRM data generate emails indistinguishable from genuine internal memos, complete with corporate jargon and local idiom. Much of 2025’s attack surge can be attributed to these AI-crafted lures, which now drive an average of 275 ransomware attempts every day.

Deepfakes and Voice Cloning

Synthetic media eliminates the tells that once betrayed social-engineering scams. In early 2024, a finance clerk at U.K. engineering firm Arup wired $25 million after joining a video call populated entirely by AI-generated replicas of senior executives. The same year, Long Island police logged more than $126 million in losses from voice-clone ‘grandchild in trouble’ scams that harvest seconds of TikTok audio to impersonate loved ones. 

Autonomy at Machine Speed

Researchers pushed the envelope further with ReaperAI and AutoAttacker, proof-of-concept ‘agentic’ systems that chain LLM reasoning with vulnerability scanners and exploit libraries. In controlled tests, they breached outdated Web servers, deployed ransomware, and negotiated payment over Tor, without human input once launched. 

Fully automated cyberattacks are just around the corner.

The Mechanics and Economics of the New Frontier

Why does ransomware flourish even as some victims refuse to pay? The answer is pure economics: start-up costs for a cybercriminal enterprise are minimal, often under $500, while returns can reach eight figures. Analysts project ransomware costs could top $265 billion a year by 2031, while total cybercrime damages may hit $10.5 trillion globally this year.
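To make that asymmetry concrete, here is a back-of-the-envelope calculation using the figures above; the $500 outlay, a conservative eight-figure payout, and the 70/30 affiliate split are the article's own illustrative estimates, not measured data.

```python
# Back-of-the-envelope RaaS affiliate economics, using the
# illustrative figures cited in this article.

startup_cost = 500            # tooling, access, infrastructure (USD)
ransom_paid = 10_000_000      # a conservative "eight-figure" payout (USD)
affiliate_share = 0.70        # typical RaaS split: 70% affiliate, 30% platform

affiliate_take = ransom_paid * affiliate_share
roi_multiple = affiliate_take / startup_cost

print(f"Affiliate take: ${affiliate_take:,.0f}")         # $7,000,000
print(f"Return on a $500 outlay: {roi_multiple:,.0f}x")  # 14,000x
```

Even if only a small fraction of campaigns ever pay out, the expected value at that multiple stays overwhelmingly positive, which helps explain why disruption alone does not shrink the affiliate pool.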

Dark-web marketplaces have grown into one of the world’s largest shadow economies. Listings resemble Amazon product pages, complete with escrow, loyalty discounts, and 24-hour ‘customer success’ chat. Competition drives platform fees down, so developers chase scale: more affiliates, more victims, more leverage. 

When one marketplace is taken down, others quickly appear to replace it. When LockBit vanished, many affiliates simply shifted to emerging brands like RansomHub. Disruption alone won’t end the business model.

Defending Against AI-Enhanced Extortion

AI as a Defensive Force-Multiplier

The same transformers that craft phishing emails can mine billions of log lines for anomalies in seconds. Managed detection and response providers say AI triage cuts investigation time by 70%.
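As a minimal sketch of how that works (not any provider's actual pipeline), an unsupervised model such as an Isolation Forest can rank log lines by anomaly once each line is reduced to numeric features; the features below are deliberately simplistic, illustrative choices:

```python
# Minimal sketch: unsupervised anomaly ranking over log-derived features.
from sklearn.ensemble import IsolationForest
import numpy as np

def featurize(line: str) -> list:
    """Reduce a raw log line to crude numeric features."""
    return [
        len(line),                         # unusually long requests
        line.count("%"),                   # URL-encoded payloads
        sum(c.isdigit() for c in line),    # embedded IDs or encodings
        line.lower().count("powershell"),  # suspicious tooling mentions
    ]

logs = [
    "GET /index.html 200",
    "GET /login 200",
    "GET /images/logo.png 200",
    "POST /api/export?cmd=powershell%20-enc%20SQBFAFgA 500",
]

X = np.array([featurize(l) for l in logs])
model = IsolationForest(contamination=0.25, random_state=0).fit(X)
scores = model.decision_function(X)  # lower = more anomalous

for score, line in sorted(zip(scores, logs)):
    print(f"{score:+.3f}  {line}")
```

Production systems learn far richer features and stream billions of lines, but the principle is the same: surface the statistical outliers for human triage.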

Deepfake-detection models, behavioral analytics, and real-time sandboxing already blunt AI-enhanced attacks. However, studies have shown that trained humans remain better at spotting high-quality video deepfakes than current detectors. 

Core Protective Strategies

Core defensive practice now revolves around four themes. First, reducing the attack surface through relentless automated patching. Second, assuming breach via Zero-Trust segmentation and immutable off-line backups that neuter double-extortion leverage. Third, hardening identity with universal multi-factor authentication (MFA) and phishing-resistant authentication. Finally, exercising incident-response plans with table-top and red-team drills that mirror AI-assisted adversaries.
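As one concrete illustration of the second theme, backups written to object storage under a compliance-mode retention lock cannot be deleted or shortened before the lock expires, even by a compromised administrator account, which removes the attacker's encrypt-and-delete leverage. The sketch below uses AWS S3 Object Lock via boto3; the bucket and file names are hypothetical, and the bucket must have been created with Object Lock enabled:

```python
# Sketch: writing an immutable backup with S3 Object Lock (compliance mode).
# Bucket and file names are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
retain_until = datetime.now(timezone.utc) + timedelta(days=90)

with open("backup-2025-07-01.tar.gz", "rb") as f:
    s3.put_object(
        Bucket="example-immutable-backups",
        Key="db/backup-2025-07-01.tar.gz",
        Body=f,
        ObjectLockMode="COMPLIANCE",             # retention cannot be lifted
        ObjectLockRetainUntilDate=retain_until,  # immutable for 90 days
    )
```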

Governance and Collaboration

Frameworks such as the NIST AI Risk Management Framework 1.0 and its 2024 Generative AI profile provide scorecards for responsible deployment. The LockBit takedown shows that public-private task forces can still starve criminal ecosystems of infrastructure and liquidity when they move quickly and in concert.

Organizations should adopt an intelligence-led mindset. Automated collection of indicators from previous incidents, enrichment with open-source feeds, and sharing through platforms like MISP (the open-source Malware Information Sharing Platform) or industry Information Sharing and Analysis Centers (ISACs) compresses the time available to attackers. When that data feeds back into detection models, every victim's experience becomes training material for community defense, creating a virtuous learning loop.
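As a minimal sketch of the sharing half of that loop, the PyMISP client can push indicators from an incident into a shared MISP instance; the URL, API key, and indicator values below are placeholders:

```python
# Sketch: publishing incident indicators to a MISP instance via PyMISP.
# URL, API key, and indicator values are placeholders.
from pymisp import PyMISP, MISPEvent

misp = PyMISP("https://misp.example.org", "YOUR_API_KEY", ssl=True)

event = MISPEvent()
event.info = "Ransomware intrusion: C2 and payload indicators"
event.add_attribute("ip-dst", "203.0.113.42", comment="C2 server")
event.add_attribute(
    "sha256",
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    comment="loader payload",
)

misp.add_event(event)  # now visible to sharing-group partners
```

Once partners ingest the event, the same indicators can seed their own detection rules, closing the loop described above.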

Regulating Offensive AI: Treat It as a Controlled Substance

We keep lecturing companies about patch cadences and zero-trust diagrams while ignoring the tap that fills the bucket. Yes, every organization should harden MFA and segment networks, but let’s be honest: no patching policy can outrun a world where fully weaponized models are sold as casually as Spotify vouchers. I feel we’re placing the entire defensive burden on victims, so we’re managing symptoms, not the disease.

It’s time to move upstream and license offensive-AI capabilities the way we already license explosives, narcotics, and zero-day exports. Any model that can autonomously scan, exploit, or deepfake at scale should sit behind the regulatory equivalent of a locked cabinet, complete with audited access logs, financial surety, and criminal liability for willful leaks. Cloud providers and model builders love to invoke “dual-use,” but dual-use is exactly why controlled-substance laws exist: society decided that convenience doesn’t trump harm. Apply the same logic here, and we choke supply instead of eternally mopping the floor.

The Ongoing AI Arms Race

AI hasn’t invented new crime; it has franchised it. Today, a teenager with a crypto wallet can spin up FraudGPT on rented GPUs and launch an extortion campaign that once required a nation-state toolkit. Yet we keep treating defense as an endless game of speed-patching while the real accelerant—unfettered access to weapons-grade models—flows freely. If we can license weapons and cars, we can license autonomous exploit-chains and deepfake engines, too. Until regulators lock those capabilities behind audited cabinets, businesses will keep playing batter against a pitching machine on rapid fire.

That doesn’t let boards off the hook, because resilient basics still matter, but it does rebalance the battlefield. The next phase of this digital cold war demands a dual strategy: adaptive AI and zero-trust on the front line, plus upstream export controls that choke supply. Every defensive breakthrough will still feed offensive models, yet every license, access log, and legal deterrent hacks at the root instead of trimming branches. 

The finish line remains out of sight, but combining disciplined fundamentals with controlled-substance rules gives us a fighting chance at resilient survival.

Alex Williams is a seasoned full-stack developer and the former owner of Hosting Data U.K. After graduating from the University of London with a Master’s Degree in IT, Alex worked as a developer, leading various projects for clients from all over the world for almost 10 years. He recently switched to being an independent IT consultant and started his technical copywriting career.
