Transhumanism and AI: Toward Prosperity or Extinction?

 

This article explores the diverse transhumanist views on artificial intelligence: some see it as a promise of liberation, others as an existential threat. It examines the interplay of enthusiasm, caution, and controversy, shedding light on how these thinkers approach the future. The article analyzes transhumanist attitudes toward AI, highlighting their awareness of its potential risks and the balance they seek between technological progress and human safety. It also presents how different transhumanist schools of thought diverge on AI, including how they assess the risks, their stance on technological development, and their visions for humanity's future.

🤔 Transhumanists' fascination with AI stems from its transformative potential: it can accelerate technological progress, challenge biological limits, and raise philosophical questions about how humanity should redefine its place alongside conscious machines.

⚠️ The risks surrounding AI are many, including large-scale disinformation, global cyberattacks, and deepening inequality. Some transhumanists, particularly technoprogressives, are aware of the dangers posed by the technologies they support and work to identify and control those risks.

🧐 Transhumanist views on AI fall into five main camps: those who believe the risks are low or nonexistent and the benefits enormous; those who consider the high risks worth taking for the potential gains; those who see a machine takeover as part of the natural order; those who advocate accelerating technological progress; and those who call for slowing or halting the development of advanced AI to avert existential catastrophe.

💡 The field of AI safety is closely tied to transhumanism; both are concerned with humanity's future. AI safety was developed by technology enthusiasts who want to build highly intelligent AI while avoiding the immense risks that come with it.

🧠 There are three main transhumanist views on enhancing human cognition to face AI: augmenting human capabilities to stay competitive, a strategy doomed to fail; merging humans with AI, which faces a timing problem; and making humans smarter so they can secure AI before it becomes uncontrollable, which runs into the same timing issue.

Published on March 22, 2025 6:16 PM GMT

This article explores the multiple transhumanist views on AI: a promise of emancipation for some, an existential threat for others. Between enthusiasm, caution, and controversy, it sheds light on those who think about the future.
 

Transhumanists: Blind Tech Enthusiasts?

November 30, 2022, marked a turning point. On that day, OpenAI unveiled ChatGPT. Since then, artificial intelligence has received unprecedented media coverage, sparking both enthusiasm and concern. Yet, long before the general public took interest in it, one community already saw it as a major issue: transhumanists.
For decades, transhumanist thinkers have seen artificial intelligence as a key element of humanity’s future, and that is not by chance. Their fascination with AI stems from three main reasons: its potential to accelerate technological progress, its power to challenge biological limits, and the philosophical questions it raises about how humanity should redefine its place alongside conscious machines.

That third point might come as a surprise. After all, aren’t transhumanists just dreamy technophiles who believe all technical progress is inherently good?

In reality, many transhumanists, especially technoprogressives, are fully aware of the dangers associated with the technologies they support. For them, fully unlocking the potential of transhumanism first requires identifying and anticipating the risks so they can be better controlled.

When it comes to artificial intelligence, those risks are not just significant; they are existential. For several years now, leading experts have warned of the threat posed by an uncontrolled superhuman AI, even considering the possibility of human extinction. Faced with such a scenario, what exactly is the transhumanist stance?

 

The Diversity of Transhumanist Views on AI

If there is one thing that is clear, it is that there is no consensus. On one side, some believe there is no reason to worry, while others are calling for an immediate halt in development to avoid extinction. To better understand this contrasting landscape, we can distinguish several major schools of thought.
The first group believes that existential risks from AI are low or nonexistent and that it will bring major benefits. This is the position of Ben Goertzel, a pioneer of artificial general intelligence and president of Humanity+, the largest transhumanist association. Generally speaking, this downplaying of risks often stems from a lack of familiarity with, or outright rejection of, the scientific field of AI safety, where researchers estimate, on average, a 30 percent probability that AI could cause human extinction.

The second category acknowledges that existential risks are high but believes they are worth taking. Dario Amodei, CEO of Anthropic, estimates a 25 percent probability that AI could lead to human extinction within the next five years. Still, he sees the gamble as justified by the potential benefits. He believes AI could deliver “a century of discoveries in a decade,” leading to increased lifespan, morphological freedom, neuro-enhancement, and other transhumanist advancements. Convinced that superhuman AI is inevitable, he prefers his company to be the one behind it rather than leaving such a breakthrough to less ethical actors.

The third position, more radical, holds that the succession of humanity by machines is part of the natural order. If a more intelligent species emerges, it makes sense for it to take over civilization. Isaac Asimov shared this view, as did Larry Page, Google’s co-founder, who once accused Elon Musk of “speciesism” for favoring humanity over future digital intelligences. This view often relies on the assumption that AIs will eventually become conscious. But if that assumption turns out to be false, we could end up in a universe where every experience is replaced by electrical circuits and optimization functions. Nick Bostrom warns of the risk of creating a world full of technological marvels but with no one left to enjoy them: a “Disneyland without children.”

The fourth category advocates for a breakneck race toward technological progress. Followers of “effective accelerationism” aim to maximize energy consumption and push humanity up the Kardashev scale. This movement draws inspiration from Nick Land’s accelerationism, which called for a radical transformation of society by driving technological progress to its peak to bring capitalism to its natural conclusion. Convinced that technology will solve humanity’s problems, they are directly opposed to AI safety-focused approaches. Their most influential figure is Marc Andreessen, author of the Techno-Optimist Manifesto. However, the movement, born on Twitter through memes and provocations, is so chaotic that it is hard to take seriously.

Finally, the fifth category includes those calling for a slowdown, or even a halt, in the development of advanced AI to avoid an existential catastrophe. Among them, Nick Bostrom, co-founder of the World Transhumanist Association, stood out by publishing a foundational text on existential risks in 2002, after introducing the notion of superintelligence in 1997 with Hans Moravec in transhumanist journals. On his side, Eliezer Yudkowsky, a major figure in AI safety, began engaging in 1999, at just 20 years old, on the SL4 mailing list, a space for advanced discussions on transhumanism. Both show that transhumanism and caution about AI are not mutually exclusive.
 

The Links Between Transhumanism and AI Safety

At its core, transhumanism is about humanity’s future: what destiny do we wish for our species? Should we enhance our cognitive abilities? Explore space? The possibilities are vast. However, these ambitions would lose all meaning if humanity were to disappear. That realization is what led some transhumanists to focus on studying existential risks, especially those linked to artificial intelligence. This explains why it was primarily thinkers from the transhumanist world who initiated and popularized AI safety.

Furthermore, a shared mindset clearly connects the two fields. Accepting the conclusions of AI safety requires long-term thinking, acknowledging the transformative potential of technology, accepting logical arguments even when they defy intuition, and taking seriously scenarios often dismissed as science fiction. These are the same qualities needed to embrace transhumanist ideas. Looking toward the future and exploring technological advances through a particular lens almost inevitably leads to engagement with both.

It is often assumed that people working in AI safety are pessimists with a visceral fear of technology, but that is incorrect. AI safety was in fact developed by technophiles who wanted to build highly intelligent AIs while avoiding the immense risks that come with them. No one would call a civil engineer “anti-bridge” just because they make sure bridges do not collapse. Likewise, AI safety experts are not “anti-AI,” and certainly not “anti-tech”; they simply want to ensure we build safe systems that will not lead to humanity’s extinction.

Critics of both transhumanism and AI safety have also noticed the connection between the two. This is how computer scientist Timnit Gebru and philosopher Émile Torres came up with the acronym TESCREAL, encompassing transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. In their view, these ideologies are interconnected, overlapping, and share common roots.

While that analysis is not entirely unfounded, it should be taken with a grain of salt. After all, grouping together figures as different as Eliezer Yudkowsky and Marc Andreessen raises questions about the validity of such classifications. In truth, this kind of grouping makes it easier to lump together diverse schools of thought under one label in order to deliver sweeping criticisms, without dwelling on the nuances of each argument. In other words, it becomes a caricature.

It is therefore important to note that while AI safety did emerge under the influence of transhumanist thinkers, it has since greatly diversified. Most AI safety experts today probably do not identify as transhumanists. Moreover, as we have seen, transhumanist opinions on AI existential risks vary widely. It would be wrong to assume the entire movement shares those concerns.
 

Enhancing Ourselves to Face AI?

There are three major transhumanist trends that argue for boosting human cognitive abilities in order to face artificial intelligence.

The first, supported by figures like Laurent Alexandre, advocates enhancing human capacities to stay competitive in the job market. But this strategy is doomed to fail: AI does not sleep, eat, or take breaks; it thinks fifty times faster than we do and can duplicate itself instantly. The war is lost before it even starts. Beyond being unrealistic, this approach is also undesirable. It fits into an ultra-capitalist logic where work is seen as an absolute necessity. But we could instead see the end of work as a good thing, liberating humanity from labor. When we invented the tractor, we did not try to give humans giant wheels to compete; we reorganized society so people no longer had to work fourteen-hour days in the fields. Why not apply the same logic with AI?

The second approach aims to merge humans with AI to avoid becoming obsolete. This is notably Elon Musk’s goal with Neuralink, which develops brain implants. But this idea runs into a timing problem: AI will likely surpass humans well before these chips become widespread. And even if they did arrive in time, they would not solve the issue. How could a brain implant protect us from a superhuman AI determined to wipe us out? On the contrary, it might even increase the risks. Getting an implant that AI could hack at will does not sound like the best strategy.

It is also important to point out that both of these first approaches run counter to a central principle of technoprogressive transhumanism: freedom of choice. They reject the idea of a society where those who do not seek self-enhancement are marginalized or left behind. Each individual should be able to choose freely whether or not to optimize their cognitive abilities, based on their own preferences. Technoprogressives do not want to impose transhumanism on everyone; they want to expand the horizon of possibilities.

The third approach, promoted by Eliezer Yudkowsky, aims to make humans smarter so they can secure AI before it becomes uncontrollable. In this view, the technical and philosophical challenge is so great that baseline human intelligence might be insufficient. Again, timing is an issue: AI will likely surpass humanity well before we can produce a genetically enhanced generation or roll out revolutionary brain implants. Aware of this problem, Yudkowsky has floated the idea of using CRISPR-based gene therapy in adults to increase intelligence, but he admits this is unlikely.

In the end, our goal should not be to enhance humans to compete with AI, but to design artificial intelligence that aligns with our values. That is, without a doubt, the greatest challenge in human history.



