LessWrong · December 9, 2024
The first AGI may be a good engineer but bad strategist


Published on December 9, 2024 6:34 AM GMT

AGI may have an advantage in engineering, but humans may have an advantage in strategy and wisdom.

AGI disadvantage in wisdom

Wisdom and strategy are much harder to evaluate than engineering ability. The only way to evaluate long-term wisdom is to let an agent make a decision, wait years, and see whether its goals have advanced. Evolution and natural selection had hundreds of thousands of years to optimize human wisdom, and wisdom was a high-priority trait under that selection pressure. AGI labs do not have hundreds of thousands of years, so AGI might lack wisdom.

AI training produces black boxes which their creators do not understand, but which somehow achieve a desired result. Generally speaking, this only works when the desired result can be evaluated. We cannot evaluate an AI's long-term wisdom (beyond correcting mistakes that fall below the human level).
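The point that black-box optimization only works on evaluable targets can be made concrete with a toy hill-climbing loop (illustrative only; `fitness` stands in for any evaluation signal, such as a training loss or reward):

```python
import random

random.seed(0)  # for reproducibility of this sketch

def hill_climb(fitness, start, steps=1000, scale=0.1):
    """Black-box optimization: we never inspect *how* x works;
    we only keep random mutations that score better under `fitness`."""
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-scale, scale)
        if fitness(candidate) > fitness(x):  # requires an evaluable signal
            x = candidate
    return x

# Engineering-like objective: cheap, immediate evaluation -> optimization works.
best = hill_climb(fitness=lambda x: -(x - 3.0) ** 2, start=0.0)

# A "wisdom-like" objective whose score only exists years after the decision
# provides no signal to select on -- the loop above cannot even be run.
```

The optimizer converges near the optimum (3.0) because every candidate can be scored instantly; nothing analogous exists for decisions whose quality is only revealed years later.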

Human disadvantage in engineering

Evolution and natural selection did not give humans very good mental math ability, because adding large numbers didn't help our prehistoric ancestors. Likewise, engineering ability helped our prehistoric ancestors only a little. Making spears is very helpful for survival, but a spear only needs to be invented once and can be copied afterwards. If you want better spears, having the engineering ability to design a jumbo jet will not help you very much; you're better off relying on trial and error with your rock-chipping techniques and testing out the spears you make.

Therefore, the first AGI built might be very good at engineering, but bad at wisdom and strategy.

Caveat

It's possible that even if the AGI's intuitive wisdom and strategy are not superhuman, its actual decisions may be superhuman simply because it thinks much longer about all possible regrets, and has less ego-driven overconfidence.

Potential implications

AGI takeover

Even if the first AGI built is poor in wisdom and strategy, that doesn't mean we're safe from AI takeover. A second AGI, built by the first AGI, might be much better at wisdom and strategy, and it might be misaligned due to unwise mistakes by the first AGI.

The first AGI itself isn't necessarily safe either. Poor wisdom and strategy do not mean you can't take over the world: if you can engineer self-replicating machines, even a chatbot-like level of strategizing might be enough.

It does mean that AGI control methods have a higher chance of working, contradicting the assumption that control is far less useful than alignment.

Self-replicating nanobots

If the AGI is really good at engineering, it may be able to make self-replicating nanobots.

Self-replicating nanobots are dangerous because they can be weaponized, or they can accidentally go out of control and spread in a grey-goo scenario.

Hierarchical mutation prevention

My idea is that self-replicating nanobots should never copy their own "DNA," i.e. their self-replication instructions. Instead, each nanobot can only "download" these instructions from a higher-level nanobot. I'm not sure if this idea is new. I wrote a post on this.
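One way to picture the scheme is as a software protocol (a hypothetical sketch of my own, not a design from the linked post; the `Nanobot` class, hash check, and payload are all illustrative assumptions): a replicator never serializes its own copy of the instructions into its child. The child is built empty and must download a copy from its parent, verifying it against a checksum fixed at manufacture time, so a mutated copy cannot silently propagate down the lineage.

```python
import hashlib

# Placeholder payload standing in for real assembly instructions.
MASTER_INSTRUCTIONS = b"assemble arm; assemble pump; assemble controller"
MASTER_HASH = hashlib.sha256(MASTER_INSTRUCTIONS).hexdigest()

class Nanobot:
    def __init__(self, parent=None):
        self.parent = parent  # None only for the trusted root bot
        self.instructions = MASTER_INSTRUCTIONS if parent is None else None

    def download_instructions(self):
        """Fetch instructions from the parent; refuse corrupted copies."""
        payload = self.parent.instructions
        if hashlib.sha256(payload).hexdigest() != MASTER_HASH:
            raise ValueError("instructions corrupted; refusing to replicate")
        self.instructions = payload

    def replicate(self):
        # The child is built WITHOUT instructions: it must download and
        # verify them, so one bot's mutated copy halts its own lineage
        # instead of spreading.
        child = Nanobot(parent=self)
        child.download_instructions()
        return child

root = Nanobot()
child = root.replicate()
grandchild = child.replicate()
```

If any bot's stored copy mutates, the hash check fails at the next replication attempt and that branch of the hierarchy simply stops reproducing, rather than amplifying the mutation.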

