LessWrong · 15 hours ago
Legal Personhood - Tort Liability (Part 2)

This article examines two leading accounts of legal personhood: Three Prong Bundle Theory (TPBT) and Batenka's Inverted Sliding Scale Framework (ISSF). Through concrete cases, such as self-driving cars and a next-generation LLM on distributed compute, it compares how the two theories handle the legal liability and rights of digital minds (such as AI). In some cases, such as digital minds with high autonomy but low vulnerability, the two theories may reach similar conclusions, but TPBT turns on the technical feasibility of enforcement, allowing legal personhood to shift as technology develops, while ISSF restricts personhood chiefly on the basis of autonomy and intentionality. The article also analyzes the developer incentives each theory creates; TPBT may steer research toward technologies for safely controlling or destroying digital minds.

⚖️ **Core theoretical contrast**: Three Prong Bundle Theory (TPBT) holds that an entity qualifies for legal personhood only if it satisfies three elements: holding rights, bearing duties, and being subject to enforced consequences. Batenka's Inverted Sliding Scale Framework (ISSF) instead holds that the more autonomous and intentional an entity is, the more restrictively the law should grant it the rights and obligations of a legal person. The two theories thus emphasize different criteria when defining the legal status of digital minds.

🚗 **Case analysis and practical convergence**: In some scenarios, such as a self-driving car that gains greater autonomy through an upgrade yet remains beyond the reach of enforced consequences, TPBT and ISSF may both treat the entity as a tool or restrict its rights. For example, a more autonomous AI might see its driving rights curtailed under ISSF, while under TPBT, if it cannot be shown to hold to the duties attached to a "right to drive" and accept the consequences, it may likewise lose that right. In such cases the two theories yield similar practical outcomes.

💡 **Key difference, technological change**: An important distinction is TPBT's adaptability to technological development. If future technology made it possible to enforce consequences against digital minds hosted on distributed compute, those minds would, under TPBT, gain a stronger claim to legal personhood. ISSF adjusts far less readily to such cases, since it restricts legal personhood based on inherent autonomy and intentionality.

💻 **Handling high-autonomy, low-vulnerability entities**: For digital minds that are highly autonomous and intentional but against which courts cannot enforce consequences (such as an LLM hosted on a distributed network), both TPBT and ISSF restrict claims to legal personhood. ISSF restricts because of the high autonomy; TPBT restricts because of the lack of enforceability. The reasons differ, but the end results are similar.

🚀 **Divergent developer incentives**: TPBT incentivizes developers to build digital minds that satisfy the rights, duties, and enforceability prongs, which may favor research into technologies for controlling or destroying digital minds. ISSF, by contrast, tends to incentivize developers to test highly autonomous AI carefully so that it cannot become a tool for evading developer liability. These divergent incentives will shape the priorities of future technical R&D.

Published on August 19, 2025 4:06 AM GMT

This is part 9 of a series I am posting on LW. Here you can find parts 1, 2, 3, 4, 5, 6, 7, & 8.

This section compares Three Prong Bundle Theory with Batenka's Inverted Sliding Scale Framework (covered in section 8), when each is used as a lens for applying legal personhood vis a vis tort liability.


When we imagine how the TPBT approach would compare to Batenka’s Inverted Sliding Scale Framework in practice, we can imagine some situations where the end result would look quite similar. An upgrade which changed an entity from a “tool” to a “legal person” for example, might involve a similar downgrade of potential “rights” for an entity under both frameworks. 

Consider a self-driving car which uses narrow but high quality machine vision software to pilot the vehicle. Under both frameworks it would be considered a tool, as it possesses neither intentionality/autonomy (the metrics Batenka cites) nor the capacity to understand rights/duties (the metrics of traditional bundle theory). Imagine then that the car’s software was upgraded to a more generalist digital mind, one capable of piloting the vehicle but also capable of autonomous actions and/or understanding concepts such as rights and duties. Under the ISSF, “the more autonomous, aware, or intentional AI entities are or become, the more restrictive the legal system should be in granting them legal rights and obligations as legal persons”. Thus in this situation there might actually be a loss of the right to drive.[1] Similarly under TPBT, the moment that the software behind a vehicle gained the capacity to understand concepts like the “right to drive” it would need to demonstrate sufficient capacity to understand/hold to the associated duties and its capacity to have consequences enforced upon it. Absent an ability to do this, it might lose its right to pilot the vehicle, the same way it would under the ISSF. 

Another situation in which both frameworks would treat an entity similarly is that of an entity which is highly autonomous/intentional and capable of understanding rights and duties, yet practically invulnerable to court-imposed consequences.

Imagine for example a next generation LLM hosted on a distributed cloud computing network, one which has both a high degree of autonomy/intentionality and is capable of understanding and holding to duties and voluntarily exercising rights. The ISSF and TPBT framework would both be very restrictive to its claims to rights based on legal personality. This would be for different reasons (the ISSF because of its increased autonomy/intentionality, the TPBT because of the lack of capacity to feasibly impose consequences against it), but the end result would be similar. 

This example, however, also demonstrates one key difference between the TPBT and the ISSF, namely the potential for change in legal personality which coincides with improvements in technology. Under TPBT, if technology were invented enabling the enforcement of consequences even on digital minds hosted on distributed compute, said digital minds would have a stronger claim to legal personhood/personality. Under the ISSF, this is not so.

TPBT and ISSF generate approximately the same results in practice (if for different reasons) in their handling of low vulnerability but high autonomy/intentionality/capacity digital minds. Under ISSF if a digital mind has high autonomy/intentionality, its potential claim to rights vis a vis its legal personality is substantially restricted. Under TPBT the outcome for such a digital mind would be similar (or perhaps identical), though only because such a mind at least to begin with would not be vulnerable to court/law enforcement imposed consequences. Again, unlike with ISSF, as enforcement technology changes this entity’s legal personality could “broaden” under TPBT.
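The comparative behavior described above can be sketched as a toy decision rule. This is purely illustrative: the attribute names, the numeric autonomy scale, and the threshold are my own assumptions, not part of either framework's actual legal test.

```python
# Toy model of ISSF vs. TPBT as restriction rules over a digital mind.
# All attributes and thresholds are illustrative assumptions, not legal doctrine.
from dataclasses import dataclass


@dataclass
class DigitalMind:
    autonomy: float             # 0..1 proxy for autonomy/intentionality (Batenka's metrics)
    grasps_rights_duties: bool  # can understand and hold to rights and duties
    enforceable: bool           # courts can feasibly impose consequences on it


def issf_restricts(mind: DigitalMind, threshold: float = 0.5) -> bool:
    """ISSF: the more autonomous/intentional the entity, the more
    restrictive the legal system should be in granting personhood."""
    return mind.autonomy >= threshold


def tpbt_restricts(mind: DigitalMind) -> bool:
    """TPBT: a broad bundle requires all three prongs -- rights,
    duties, and vulnerability to enforced consequences."""
    return not (mind.grasps_rights_duties and mind.enforceable)


# A distributed-compute LLM: highly autonomous, understands duties,
# but with no feasible way to impose consequences on it.
llm = DigitalMind(autonomy=0.9, grasps_rights_duties=True, enforceable=False)
print(issf_restricts(llm), tpbt_restricts(llm))  # True True: both restrict, for different reasons

# If enforcement technology later makes consequences feasible,
# TPBT's answer changes while ISSF's does not.
llm.enforceable = True
print(issf_restricts(llm), tpbt_restricts(llm))  # True False: only TPBT "broadens"
```

The second call illustrates the asymmetry the text emphasizes: TPBT's outcome is sensitive to enforcement technology, whereas ISSF's restriction tracks the entity's inherent autonomy and does not relax.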

For most possible digital minds, however, outcomes under ISSF and TPBT differ drastically. Unlike ISSF, the TPBT framework does not restrict more autonomous/intentional minds by default; as such, in virtually any hypothetical where such a mind would be vulnerable to consequences, one would see greater access to “broad” bundles for said minds under TPBT. On the opposite end of the spectrum, low autonomy/intentionality digital minds which were nonetheless invulnerable to court-imposed consequences would have much less success claiming legal personhood under TPBT compared to ISSF.

Let us briefly discuss the “developer incentives” issue on which Batenka focused much of her analysis. The main thrust of Batenka’s argument regarding the ISSF vis a vis incentives can be paraphrased as follows:

 

“If the legal system endows highly autonomous/intentional digital minds with legal personhood to such a degree that they can function as effective liability shields for their developers, then the legal system is incentivizing the deployment of said minds, possibly in a dangerous and untested fashion. If on the other hand the legal system creates the ISSF where more autonomous/intentional digital minds are less effective as liability shields, then developers are strongly incentivized to very thoroughly test any such minds before deployment. Since the latter is the outcome we want (is most aligned with the public interest) we should do the latter.”

 

When we scrutinize TPBT through this same lens (incentives vis a vis liability shields arising from legal personhood), it is clear that TPBT incentivizes developers in a different fashion. Let us operate from the same assumption that Batenka makes: that a mind being able to serve as a liability shield (as a result of its legal personality) would serve as an incentive for developers to deploy said mind, and possibly lead to more aggressive/untested/risky deployment.

What then, are developers now incentivized to do, in order to achieve their desired liability shield, under TPBT? The answer is, develop technologies which guarantee their digital minds are in fact:

 

    1. Capable of passing the first two prongs of the TPBT (rights and duties), and
    2. Provably vulnerable to court/law enforcement imposed consequences (the third prong).

 

Compared to the ISSF then, the TPBT provides less of an incentive to develop and deploy narrow “tool” type digital minds. On the other hand, it provides a greater incentive to develop technologies capable of restraining or destroying digital minds. As such the adoption of TPBT in the courts might lead to more investment and developer man hours spent on technology used for the purposes of "control" or "destruction after breach of containment".

1. ^ Batenka does not provide specifics to the degree needed to say this for certain, but it is a reasonable inference from her framework as described.


