Two flavors of computational functionalism

This post examines two different readings of computational functionalism (CF): theoretical CF and practical CF. Theoretical CF holds that a perfect simulation of the brain's physical processes would produce consciousness; practical CF holds that a coarse-grained model of the brain running on a classical computer would produce consciousness. The author argues that practical CF is more relevant to the questions we care about, such as AI consciousness, mind uploading, and the simulation hypothesis. The post also analyzes the arguments in favor of CF, such as AI progress and the success of the computational view of cognition, and notes that these arguments mainly support practical CF. The author plans to scrutinize the validity of practical CF in follow-up posts, to build a fuller understanding of the nature of consciousness.

🤔 **Theoretical CF:** holds that a perfect simulation of the brain's physical processes, down to atom-level detail, would produce consciousness. Such a simulation would demand an enormous amount of computation, exceeding even the number of atoms in the observable universe, making it infeasible in practice.

💻 **Practical CF:** holds that running a coarse-grained model of the brain on a classical computer would produce consciousness. This version is closer to the questions we care about, such as AI consciousness, mind uploading, and the simulation hypothesis, and also closer to the original vision of functionalism.

💡 **Arguments for CF:** come mainly from AI progress and the success of the computational view of cognition. These arguments suggest we can reproduce the brain's functions on computers and might thereby produce consciousness, but they mostly support practical CF rather than theoretical CF.

🧠 **CF and questions about consciousness:** AI consciousness, mind uploading, and the simulation hypothesis all hinge on practical CF, because each concerns the possibility of creating or simulating consciousness with limited resources.

🚀 **Coming up:** the author will go on to scrutinize the validity of practical CF, which should help us better understand the nature of consciousness and whether AI could genuinely be conscious.

Published on November 25, 2024 10:47 AM GMT

 

This is intended to be the first in a sequence of posts where I scrutinize the claims of computational functionalism (CF). I used to subscribe to it, but after more reading, I’m pretty confused about whether or not it’s true. All things considered, I would tentatively bet that computational functionalism is wrong. Wrong in the same way Newtonian mechanics is wrong: a very useful framework for making sense of consciousness, but not the end of the story.

Roughly speaking, CF claims that computation is the essence of phenomenal consciousness. A thing is conscious iff it is implementing a particular kind of program, and its experience is fully encoded in that program. A famous corollary of CF is substrate independence: since many different substrates (e.g. a computer or a brain) can run the same program, different substrates can create the same conscious experience.

CF is quite abstract, but we can cash it out into concrete claims about the world. I noticed two distinct flavors[1] of functionalism-y beliefs that are useful to disentangle. Here are two exemplar claims corresponding to the two flavors:

Theoretical CF: a perfect simulation of the physical processes of a human brain, down to the atomic level, would produce the same conscious experience as that brain.

Practical CF: a coarse-grained simulation of a human brain, running on a classical computer on Earth, could produce a conscious experience.

In this sequence, I’ll address these two claims individually, and then use the insights from these discussions to assess the more abstract overarching belief of CF.

How are these different?

A perfect atomic-level brain simulation is too expensive to run on a classical computer on Earth at the same speed as real life[2] (even in principle).

The human brain contains ~10^25 atoms. The complexity of simulating an N-body quantum system precisely on a classical computer is O(exp(N)).[3] Such a simulation would cost on the order of exp(10^25) operations per timestep. Conservatively assume the simulation needs a temporal precision of 1 second; then we need ~exp(10^25) FLOPS. A single timestep needs more operations than there are atoms in the observable universe (~10^80), so a classical computer the size of the observable universe that can devote an operation per atom per second would still be too slow.
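As a back-of-envelope check of my own (working in log space, since exp(10^25) overflows any floating-point type; the atom counts are the rough figures assumed above, not precise measurements), a short Python sketch makes the gap explicit:

```python
import math

# Rough figures from the estimate above (assumptions, not precise measurements).
N_BRAIN_ATOMS = 1e25           # atoms in a human brain
LOG10_UNIVERSE_ATOMS = 80      # ~10^80 atoms in the observable universe

# Exact quantum simulation scales like exp(N) operations per timestep.
# log10(exp(N)) = N * log10(e), so stay in log10 to avoid overflow.
log10_ops_per_step = N_BRAIN_ATOMS * math.log10(math.e)

# A universe-sized computer doing one operation per atom per second manages
# 10^80 operations in a 1-second timestep; compare orders of magnitude.
shortfall_orders = log10_ops_per_step - LOG10_UNIVERSE_ATOMS
print(f"operations per timestep ~ 10^{log10_ops_per_step:.2e}")
print(f"factor short: 10^{shortfall_orders:.2e}")
```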

Putting in-principle possibility aside, an atom-level simulation may be astronomically more expensive than what is needed for many useful outputs. Predicting behavior or reproducing cognitive capabilities likely can be achieved with a much more coarse-grained description of the brain, so agents who simulate for these reasons will run simulations relevant to practical CF rather than theoretical CF.

Practical CF is more relevant to what we care about

In my view, there are three main questions for which CF is a crux: AI consciousness, mind uploading, and the simulation hypothesis. I think these questions mostly hinge on practical CF rather than theoretical CF. So when it comes to action-guiding, I’m more interested in the validity of practical CF than theoretical CF.

AI consciousness: For near-future AI systems to be conscious, it must be possible for consciousness to be created by programs simple enough to run on classical, Earth-bound clusters. If practical CF is true, that demonstrates that consciousness can be created by suitably simple programs, so the comparatively simple programs run by AI systems might also create consciousness.

If theoretical CF is true, that doesn't tell us if near-future AI consciousness is possible. AI systems (probably) won’t include simulations of biophysics any time soon, so theoretical CF does not apply to these systems.

Mind uploading: We hope one day to make a suitably precise scan of your brain and use that scan as the initial conditions of a simulation of your brain at some coarse-grained level of abstraction. If we hope for that uploaded mind to create a conscious experience, we need practical CF to be true.

If we only know theoretical CF to be true, then a program might need to simulate biophysics to recreate your consciousness. This would make it impractical to create a conscious mind upload on Earth.

The simulation hypothesis: Advanced civilizations might run simulations that include human brains. The fidelity of the simulation depends on both the available compute and what they want to learn. They might have access to enough compute to run atom-level simulations.

But would they have the incentive to include atoms? If they’re interested in high-level takeaways like human behavior, sociology, or culture, they probably don’t need atoms. They’ll run the coarsest-grained simulation possible while still capturing the dynamics they’re interested in.

Practical CF is closer to the spirit of functionalism

The original vision of functionalism was that there exists some useful level of abstraction of the mind, below behavior but above biology, that explains consciousness. Practical CF requires exactly such a level of abstraction, so it stays close to this vision. Theoretical CF is a departure from it, since it concedes that consciousness requires the dynamics of biology to be present (in a sense).

The arguments in favor of CF are mostly in support of practical CF. For example, Chalmers's fading qualia thought experiment only works in a practical CF setting. When replacing the neurons with silicon chips, theoretical CF alone would mean that each chip has to simulate all of the molecules in the neuron, which would be intractable if we hope to fit the chip inside the brain.[4]

CF is often supported by observing AI progress: we are increasingly able to recreate the functions of the human mind on computers, so perhaps we will be able to recreate consciousness on digital computers too. This argues that realistic classical computers will be able to instantiate consciousness, which is the practical CF claim. To say something about theoretical CF, we'd instead need to appeal to progress in techniques for running efficient simulations of many-body quantum systems or quantum fields.

CF is also sometimes supported by the success of the computational view of cognition. It has proven useful to model the brain as hardware that runs the software of the mind, encoded in, e.g., neuron spiking. On this view, the mind is a program simple enough to be captured by neuron spiking (possibly plus some extra details, e.g. glial cells). A suitably simple abstraction of the brain could then run on a computer to create consciousness, which is the practical CF claim.
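As a toy illustration of what such an abstraction might look like, here is a minimal leaky integrate-and-fire neuron, a sketch of my own with textbook parameters rather than anything from the post: the entire state is one membrane voltage and the output is a spike train, with all molecular detail thrown away.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron: standard textbook model, purely illustrative."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Membrane voltage decays toward rest and is driven by the input current.
        v += (-(v - v_rest) + r_m * i_in) * dt / tau
        if v >= v_thresh:          # threshold crossing counts as a spike
            spike_times.append(step * dt)
            v = v_reset            # reset after spiking
    return spike_times

# One second of simulated time with a constant 2 nA input current.
spikes = simulate_lif(np.full(1000, 2e-9))
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

Practical CF is the bet that some abstraction of roughly this kind, scaled up and suitably refined, is enough to preserve conscious experience.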

So on the whole, I’m more interested in scrutinizing practical CF than theoretical CF. In the next post, I’ll scrutinize practical CF.


  1. ^

    These flavors really fall on a spectrum: one can imagine claims in between the two (e.g. a “somewhat practical CF”).

  2. ^

    1 second of simulated time is computed at least every second in base reality.

  3. ^

There could be a number of ways around this. We could use quantum Monte Carlo or density functional theory instead, both with complexity O(N^3), meaning a simulation would need ~10^75 operations per timestep, once again roughly comparable to the number of atoms in the observable universe. We could also use quantum computers, reducing the complexity to possibly O(N), but this would be a departure from the practical CF claim. Such a simulation on Earth with quantum computers looks possible in principle at a glance, but there could easily be engineering roadblocks that make it impossible in practice.

  4. ^

Could the chips instead interface with, say, a Dyson sphere? The speed of light would get in the way there, since it would take ~minutes to send and receive messages, while neuron-firing details matter on timescales far below a second.
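For a rough sense of the numbers, here is my own illustrative calculation, assuming the off-board hardware sits about 1 AU away (the distance is an assumption made purely for the estimate):

```python
AU_M = 1.496e11       # Earth-Sun distance in meters (assumed separation)
C_M_PER_S = 2.998e8   # speed of light in vacuum

one_way_min = AU_M / C_M_PER_S / 60
print(f"one-way light delay ~ {one_way_min:.1f} min, "
      f"round trip ~ {2 * one_way_min:.1f} min")
# Neuron firing dynamics matter on millisecond timescales, so a ~17-minute
# round trip cannot stand in for computation happening locally in the skull.
```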


