MIT Technology Review » Artificial Intelligence, February 11
AI crawler wars threaten to make the web more closed for everyone

The article examines the challenges AI crawlers pose to the internet ecosystem. As AI has advanced, AI crawlers have scraped web data at massive scale, provoking a backlash from website owners who fear that AI will use their data to compete with them commercially. That backlash, in turn, threatens the openness and transparency of the web and could lead to more login walls, paywalls, and access restrictions. The article analyzes the impact of AI crawlers on news sites, artists, coding forums, and others, as well as the legal and technical measures site owners are using to restrict crawlers. It closes by calling for protection of noncommercial data access, so that the open web is not sacrificed to the development of commercial AI.

🤖 The flood of AI crawlers is straining the web's ecosystem. They scrape all kinds of data indiscriminately, including articles, academic papers, and forum posts, to supply training material for AI systems. This worries content creators, who fear their work will be exploited by AI systems that then compete with them directly.

🛡️ Website owners are fighting back against AI crawlers, chiefly through lawsuits, legislative pushes, and technical measures: copyright litigation to restrict how data is used, the EU AI Act's protection of copyright holders' right to opt out, and anti-crawling technologies that block or charge nonhuman traffic. These measures, however, can also catch other, beneficial crawlers in the crossfire.

🌐 Large websites and tech companies are better equipped for this game of cat and mouse: they can go to court, sign data-licensing agreements, or build powerful crawlers that circumvent restrictions. Smaller creators may be forced to hide their content behind login walls or paywalls, or to take it offline entirely, leaving users with harder access to information and a less diverse web.

🤝 The article calls for laws, policies, and technical measures that explicitly protect noncommercial access to web data while still safeguarding the rights of data creators and publishers, so that the open web, and its transparency and diversity, are not sacrificed to the growth of commercial AI.

We often take the internet for granted. It’s an ocean of information at our fingertips—and it simply works. But this system relies on swarms of “crawlers”—bots that roam the web, visit millions of websites every day, and report what they see. This is how Google powers its search engine, how Amazon sets competitive prices, and how Kayak aggregates travel listings. Beyond the world of commerce, crawlers are essential for monitoring web security, enabling accessibility tools, and preserving historical archives. Academics, journalists, and civil society groups also rely on them to conduct crucial investigative research.
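To make the mechanics concrete, here is a minimal sketch of a crawler's core loop: fetch a page, record what it sees, and follow the links it finds. It is an illustration only; the starting URL is a placeholder, and real crawlers add politeness delays, large-scale deduplication, and the robots.txt checks discussed below.

```python
# Minimal crawler sketch: fetch pages, report what we see, follow links.
# Illustrative only; real crawlers add rate limits, robots.txt checks,
# error handling, and large-scale deduplication.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects the href targets of <a> tags from a fetched page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=3):
    """Breadth-first fetch of a handful of pages."""
    queue, seen = [start_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        parser = LinkParser()
        parser.feed(html)
        print(f"{url}: {len(html)} bytes, {len(parser.links)} links")
        queue.extend(urljoin(url, link) for link in parser.links)

crawl("https://example.com")  # placeholder starting point
```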

Crawlers are endemic. Now representing half of all internet traffic, they will soon outpace human traffic. This unseen subway of the web ferries information from site to site, day and night. And of late, they serve one more purpose: Companies such as OpenAI use web-crawled data to train their artificial intelligence systems, like ChatGPT.

Understandably, websites are now fighting back for fear that this invasive species—AI crawlers—will help displace them. But there’s a problem: this pushback also threatens the transparency and open borders of the web that allow non-AI applications to flourish. Unless we are thoughtful about how we fix this, the web will increasingly be fortified with logins, paywalls, and access tolls that inhibit not just AI but the biodiversity of real users and useful crawlers.

A system in turmoil 

To grasp the problem, it’s important to understand how the web worked until recently, when crawlers and websites operated in relative symbiosis. Crawlers were largely undisruptive and could even be beneficial, bringing people to websites from search engines like Google or Bing in exchange for their data. In turn, websites imposed few restrictions on crawlers, even helping them navigate their sites. Websites then and now use machine-readable files, called robots.txt files, to specify what content they want crawlers to leave alone. But there were few efforts to enforce these rules or to identify crawlers that ignored them. The stakes seemed low, so sites didn’t invest in obstructing those crawlers.
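As an illustration of how this works in practice, Python's standard library includes a robots.txt parser; the sketch below shows how a well-behaved crawler consults such a file before fetching. The rules are invented for the example, and compliance is entirely voluntary, which is exactly the enforcement gap described above.

```python
# A minimal sketch of honoring robots.txt with Python's standard library.
# The rules below are invented for illustration.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10

User-agent: BadBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler checks before every fetch; nothing forces it to.
print(parser.can_fetch("SomeCrawler", "https://example.com/articles/1"))  # True
print(parser.can_fetch("SomeCrawler", "https://example.com/private/x"))   # False
print(parser.can_fetch("BadBot", "https://example.com/articles/1"))       # False
print(parser.crawl_delay("SomeCrawler"))                                  # 10
```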

But now the popularity of AI has thrown the crawler ecosystem into disarray.

As with an invasive species, crawlers for AI have an insatiable and undiscerning appetite for data, hoovering up Wikipedia articles, academic papers, and posts on Reddit, review websites, and blogs. All forms of data are on the menu—text, tables, images, audio, and video. And the AI systems that result can (but not always will) be used in ways that compete directly with their sources of data. News sites fear AI chatbots will lure away their readers; artists and designers fear that AI image generators will seduce their clients; and coding forums fear that AI code generators will supplant their contributors. 

In response, websites are starting to turn crawlers away at the door. The motivator is largely the same: AI systems, and the crawlers that power them, may undercut the economic interests of anyone who publishes content to the web—by using the websites’ own data. This realization has ignited a series of crawler wars rippling beneath the surface.

The fightback

Web publishers have responded to AI with a trifecta of lawsuits, legislation, and computer science. What began with a litany of copyright infringement suits, including one from the New York Times, has turned into a wave of restrictions on use of websites’ data, as well as legislation such as the EU AI Act to protect copyright holders’ ability to opt out of AI training. 

However, legal and legislative verdicts could take years, while the consequences of AI adoption are immediate. So in the meantime, data creators have focused on tightening the data faucet at the source: web crawlers. Since mid-2023, websites have restricted crawler access to more than 25% of the highest-quality data. Yet many of these restrictions can simply be ignored, and while major AI developers like OpenAI and Anthropic claim to respect websites’ restrictions, they have been accused of ignoring them or of aggressively overwhelming websites (the major technical support forum iFixit is among those making such allegations).
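Many of these restrictions take the form of robots.txt directives aimed at the user-agent tokens that AI developers publish for their crawlers. The tokens below, such as GPTBot and CCBot, are documented ones, but the file itself is an invented example; the pattern is to turn away AI-training crawlers while still welcoming search crawlers.

```python
# Sketch of the AI opt-out pattern: block documented AI-training crawlers
# in robots.txt while leaving the site open to everything else.
from urllib.robotparser import RobotFileParser

opt_out = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(opt_out.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```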

Now websites are turning to their last alternative: anti-crawling technologies. A plethora of new startups (TollBit, ScalePost, etc.) and web infrastructure companies like Cloudflare (estimated to support 20% of global web traffic) have begun to offer tools to detect, block, and charge nonhuman traffic. These tools erect obstacles that make sites harder to navigate or require crawlers to register.
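At its simplest, such tooling inspects each request and refuses, or charges, suspected bots. The toy sketch below, using only Python's standard library, answers anything whose User-Agent looks automated with HTTP 402 (Payment Required). Commercial services rely on far richer signals (TLS fingerprints, behavioral analysis, request-rate patterns), and the deny-list here is illustrative only.

```python
# Toy sketch of the "detect, block, and charge" idea: filter by User-Agent.
# Commercial tools use far stronger signals; this is the simplest layer only.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative tokens; a real deny-list would be longer and kept current.
BOT_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "bot", "crawler", "spider")

class TollboothHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        if any(token.lower() in agent.lower() for token in BOT_TOKENS):
            # Nonhuman traffic: block it, or point it to a paid license.
            self.send_response(402)  # Payment Required
            self.end_headers()
            self.wfile.write(b"Automated access requires a license.\n")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>Welcome, human reader.</h1>\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TollboothHandler).serve_forever()
```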

These measures offer immediate protection. After all, AI companies can’t use what they can’t obtain, regardless of how courts rule on copyright and fair use. But the effect is that large web publishers, forums, and sites are often raising the drawbridge to all crawlers—even those that pose no threat. This remains the case even after they ink lucrative deals with AI companies that want to preserve exclusivity over that data. Ultimately, the web is being subdivided into territories where fewer crawlers are welcome.

How we stand to lose out

As this cat-and-mouse game accelerates, big players tend to outlast little ones. Large websites and publishers will defend their content in court or negotiate contracts. And massive tech companies can afford to license large data sets or create powerful crawlers to circumvent restrictions. But small creators, such as visual artists, YouTube educators, or bloggers, may feel they have only two options: hide their content behind logins and paywalls, or take it offline entirely. For real users, this is making it harder to access news articles, see content from their favorite creators, and navigate the web without hitting logins, subscription demands, and captchas at every step of the way.

Perhaps more concerning is the way large, exclusive contracts with AI companies are subdividing the web. Each deal raises the website’s incentive to remain exclusive and block anyone else from accessing the data—competitor or not. This will likely lead to further concentration of power in the hands of fewer AI developers and data publishers. A future where only large companies can license or crawl critical web data would suppress competition and fail to serve real users or many of the copyright holders.

Put simply, following this path will shrink the biodiversity of the web. Crawlers from academic researchers, journalists, and non-AI applications may increasingly be denied open access. Unless we can nurture an ecosystem with different rules for different data uses, we may end up with strict borders across the web, exacting a price on openness and transparency. 

While this path is not easily avoided, defenders of the open internet can insist on laws, policies, and technical infrastructure that explicitly protect noncompeting uses of web data from exclusive contracts while still protecting data creators and publishers. These rights are not at odds. We have so much to lose or gain from the fight to get data access right across the internet. As websites look for ways to adapt, we mustn’t sacrifice the open web on the altar of commercial AI.

Shayne Longpre is a PhD Candidate at MIT, where his research focuses on the intersection of AI and policy. He leads the Data Provenance Initiative.
