Legal Personhood for Models: Novelli et al. & Mocanu

Published on June 1, 2025 8:18 AM GMT

In a previous article I detailed FSU Law professor Nadia Batenka's proposed "Inverted Sliding Scale Framework" approach to the question of legal personhood for digital intelligences. 

In this article, I am going to examine another paper approaching the issue which was first authored by Claudio Novelli, Giorgio Bongiovanni, and Giovanni Sartor. Since its original writing it has been endorsed (with some clarifications) by Diana Mocanu in her paper here.

First, let me provide some background on the concept of Legal Personhood/Legal Personality, and on some of the dynamics at play when deciding the appropriate framework by which the issue can be applied to digital intelligences.

Background: Legal Personhood/Legal Personality Briefer

Legal personhood or "legal personality" is a term of art used to refer to the status of being considered a "person" under the law. This label includes "natural persons" like competent human adults, as well as "legal persons" like corporations. 

Legal personhood is most easily understood as a "bundle" of rights and duties, with different kinds of legal persons having different bundles.

Some examples of rights which are neatly bundled with duties are: 

    The right to sue another party and have a court's judgment enforced, bundled with the duty to comply with the court's judgment when one is sued.
    The right to enter into contracts, bundled with the duty to perform the obligations those contracts impose.

Different forms of legal personhood entail different bundles. For example a mentally competent human adult has different rights and duties when compared to a mentally incompetent human adult, who in turn has different rights and duties compared to a child, all of whom have different rights and duties compared to a corporation. It is not correct to say that one of these is "more" or "less" of a legal person than another, rather its best to think of them like circles in a venn diagram which partially overlap but are also qualitatively different in meaningful ways.
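To make the "bundle" and Venn-diagram framing above concrete, here is a minimal illustrative sketch in Python. The entity types and bundle contents are hypothetical simplifications chosen for illustration, not statements of law.

```python
# Legal personhood modeled as a "bundle" of rights and duties.
# Bundle contents here are hypothetical simplifications, not legal claims.
BUNDLES: dict[str, set[str]] = {
    "competent_adult": {"sue", "be_sued", "contract", "vote"},
    "child":           {"inherit", "be_represented_by_guardian"},
    "corporation":     {"sue", "be_sued", "contract"},
}

def shared_bundle(a: str, b: str) -> set[str]:
    """The overlapping region of the Venn diagram: rights/duties
    two kinds of legal person hold in common."""
    return BUNDLES[a] & BUNDLES[b]

def distinct_bundle(a: str, b: str) -> set[str]:
    """What makes two kinds of legal person qualitatively different
    from one another (neither is "more" or "less" of a person)."""
    return BUNDLES[a] ^ BUNDLES[b]
```

In this toy model, `shared_bundle("competent_adult", "corporation")` yields the common core (sue, be sued, contract), while `"vote"` appears only in the adult's bundle: overlap without a single linear scale.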

In the US, legal personhood is a prerequisite for entities to engage in many activities. For example, one must have a certain legal personality to be a party to certain contracts, which is why children and mentally incompetent adults often cannot be signatories despite being legal persons.

Legal personhood also determines where liability lies. Corporations can often serve as a "liability shield" for the persons whose collective will the corporation enacts. However, this shield can be broken in cases of egregious misconduct or other circumstances where courts "pierce the corporate veil".

Legal personhood also plays a factor in determining what protections an entity enjoys under the law.[1] For example Section 1 of the Fourteenth Amendment states that,

All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside. No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.

Despite the integral role which the concept of legal personhood plays in US law, there is no objective test by which a new type of entity's legal personality can be easily determined;

There has never been a single definition of who or what receives the legal status of “person” under U.S. law.

Legal personality as defined by US precedent suffers from what Nadia Batenka termed the "circularity" problem. Often when reading precedent which first determined that a given entity was entitled to legal personhood of one sort or another, the defining factor cited was that the entity "has a right to sue or be sued" or something similar;

Consider, for instance, some of the conditions that courts have looked for in deciding whether an entity enjoys legal personhood, such as whether the entity has the "right to sue and be sued," the "right to contract," or "constitutional rights." While this may be a reflection of the realist theory of legal personhood, these are conditions that an entity that already enjoys legal personhood possesses but at the same time courts use them to positively answer the question of legal personhood for an unresolved case. This is the issue of circularity in legal personhood.

An entity could only enjoy the right to sue or be sued if it were a legal person; this entity does enjoy the right to sue or be sued; thus it must be a legal person. Early precedents around legal personality are fraught with such tautologies. Charitably, this may be interpreted as courts operating from a perspective of expediency, endowing an entity with legal personality in order to serve some public interest. However, it still leaves us in the unfortunate position of having no clear test by which to evaluate new types of entities.

This conundrum is why scholarship on the topic of legal personhood for digital intelligences has, of late, moved towards proposals for frameworks for legislators or "jurists" (courts/judges) to approach the topic.

Having established this, let us move on to discussing the proposed framework at hand.

Novelli et al.'s Framework

Novelli, Bongiovanni, and Sartor (whom I will henceforth collectively refer to simply as Novelli) spend much of their work discussing the various philosophical/epistemological interpretations of legal personality, as it applies to European continental legal tradition. Having laid this background they then turn to the following questions:

    1. Absent motivated reasoning (a proactive desire by legislators to endow digital intelligences with legal personality of some sort), does there exist justification to grant digital intelligences legal personality within the continental legal tradition today?
    2. Assuming there was a desire to endow upon them some sort of legal personality, is there an expedient way to do this within existing law (de lege lata), or is a change in law needed for personality to be conferred (de lege ferenda)?
    3. How should the structuring of a framework which emerges from the previous question be approached?

In regards to the first question, Novelli argues that the metaphysical "grounding" of legal personality allows for consideration of both expediency and "legal subjectivity" (which is defined as "the abstract possibility of having rights and duties") when courts make their determination. Novelli writes,

To this end, we need to balance the advantages and disadvantages that would obtain if legal personhood – as a general ability to have rights and duties, at least in the patrimonial domain[2] – were to be conferred on such entities. Consider, for instance, the multiple implications involved in abortion, inheritance, medical liability, etc., that could result once legal personhood is conferred on the unborn child. This judgement may be facilitated if we preliminarily test the candidates for legal personality by resorting to intermediate concepts, such as the idea of legal subjectivity. This midlevel review may also consist of a judgment of expediency, which may also eventuate in the claim to personhood being rejected.

Taken at face value, this would seem to argue that "jurists" have some leeway to declare models endowed with legal personality of some level, if they decide that:

    1. There is an abstract possibility of them having rights and duties, and
    2. Endowing them with legal personality would be expedient for the purposes of the courts and/or public interest.

Novelli does go on however to say that the "expediency" argument may be preempted by other methods by which the court could achieve the same ends, such as by lumping digital intelligences under other frameworks for "legal subjects" which do not require the endowment of "legal personality",

It may be concluded that certain entities we have classified as legal subjects may not require personality, since the protections, guarantees, or enabling conditions that are suited to such entities (e.g., unborn foetuses, animals, ecosystems, technological systems) may already be secured under different legal regimes that are better suited to such entities.
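The two conditions above, together with the preemption point in the quoted passage, can be sketched as a simple decision function. This is my own illustrative reduction of the argument to booleans, not anything Novelli formalizes; each input stands in for what would in reality be a rich judicial determination.

```python
def may_confer_personality(abstract_possibility: bool,
                           expedient: bool,
                           covered_by_other_regime: bool) -> bool:
    """Illustrative reduction of the two-pronged reading of Novelli.

    abstract_possibility: legal subjectivity -- the abstract
        possibility of the entity having rights and duties.
    expedient: conferral would serve the purposes of the courts
        and/or the public interest.
    covered_by_other_regime: an existing legal regime (as with
        foetuses, animals, ecosystems) already secures the needed
        protections, preempting the expediency argument.
    """
    if covered_by_other_regime:
        return False
    return abstract_possibility and expedient
```

On this reading, even an entity satisfying both prongs may be denied personality when a better-suited regime already covers it.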

On the second question, Novelli is very direct that for digital intelligences a change in law to confer personhood (de lege ferenda) is superior. Novelli argues that the novelty of the class of entities in question, and the gravity of the situation, necessitate a tailored approach,

It seems inappropriate to grant general legal personality to AI systems merely through legal interpretation, given the novelty of such entities and how different they are from those that have so far been granted legal personality (which differences make analogies highly questionable), and the important political implications of choices about the role that AI systems should play in society.

Having put the onus on legislators, Novelli does provide guidance on how such a framework should be crafted. Here is where we see the major departure from previous frameworks like Batenka's: Novelli's framework is very case-specific and lays out a path for a gradual transition into legal personality which grows along with capabilities, the polar opposite of Batenka's Inverted Sliding Scale Framework.

Novelli argues European legislators could recognize a particular legal status (not personhood) for "AI systems" which meet certain technical standards, to effect a gradual transition into a novel form of legal personhood designed specifically for them. 

Novelli sketches a path whereby: 

    The users and owners of qualifying AI systems are partly shielded from liability (for instance, through liability caps);
    The contractual activities undertaken by those systems are recognised as having legal effect; and
    The systems thereby come to be viewed as quasi-holders of the corresponding legal positions, gradually accomplishing the transition from legal subjectivity to legal personality.

In Novelli's own words,

Such a status may come into shape when the users and owners of certain AI systems are partly shielded from liability (through liability caps, for instance) and when the contractual activities undertaken by AI systems are recognised as having legal effect (though such effects may ultimately concern the legal rights and duties of owners/users), making it possible to view these systems as quasi-holders of corresponding legal positions. The fact that certain AI systems are recognised by the law as loci of interests and activities may support arguments to the effect that – through analogy or legislative reform – other AI entities should (or should not) be viewed in the same way. Should it be the case that, given certain conditions (such as compliance with contractual terms and no fraud), the liability of users and owners – both for harm caused by systems of a certain kind and for contractual obligations incurred through the use of such systems – is limited to the resources they have committed to the AI systems at issue, we might conclude that the transition from legal subjectivity to full legal personality is being accomplished.

Put another way: a model is entrusted with resources so that it can fulfil a contract between users and/or developers, with liability for the actions taken in the course of fulfilment contained to the resources entrusted to the model. As the model becomes a "locus" of legal activity, it is gradually endowed with legal personality.
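The liability containment in the quoted passage can be illustrated with a toy calculation: claims arising from the model's activity are satisfied only out of the resources committed to it, shielding owners and users beyond that amount. The function and figures below are hypothetical, a sketch of the mechanism rather than any statutory rule.

```python
def settle_claim(claim: float, committed_resources: float) -> tuple[float, float]:
    """Return (amount recovered by claimant, resources remaining).

    Liability for harm or for contractual obligations incurred
    through the AI system is capped at the resources committed to
    it -- its quasi-patrimony -- so owners/users are shielded
    beyond that amount.
    """
    recovered = min(claim, committed_resources)
    return recovered, committed_resources - recovered
```

A claim of 150 against a model entrusted with 100 recovers only 100; the owner's other assets are untouched, mirroring the Roman patrimony arrangement described in footnote 2.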

Diana Mocanu has endorsed this framework and provided some additional practical guidance for legislators.

Mocanu's Framework

In her paper "Degrees of AI Personhood", University of Helsinki postdoctoral researcher Diana Mocanu endorses a "discrete" (limited) form of the Novelli framework, adding caveats such as conditioning any increase in "personhood" on compliance with regulator-imposed technical standards and on compulsory insurance regimes.

Strengths of the Novelli & Mocanu Frameworks

The Novelli framework, as updated by Mocanu, provides a clear path by which European legislators can, with "minimal lift", gradually shift society into a world where digital intelligences grow into their own legal personality in conjunction with their capacity. Much of the legal scholarship in this space deals with civil liability, and it's nice to see someone lay out the specifics to such a degree that implementation within their framework would be easy.

The main strength of this framework lies in its specificity. For a clear and discrete role, operating within the "patrimonial domain" to execute contracts between users and/or developers, this seems like the most thoroughly fleshed-out and easy-to-apply proposal of any work in the space I have seen to date. That said, it is best viewed as a "starting point" for discussions, as even Novelli acknowledges that the requirements for acting within such a discrete role will differ between industries,

The conditions that make legal personality appropriate in one context (e.g., e-commerce) may be very different from those that make it useful in another (e.g., robots used in health care or in manufacturing).

Lastly, by tying the capacity of models to act within this role to standards imposed by regulators, Mocanu & Novelli do a nice job of answering one of the critiques that Batenka made in "Artificially Intelligent Persons": namely, that facilitating the ability of a digital intelligence to serve as a liability shield by "increasing" its personhood in conjunction with its capacity incentivizes developers to more aggressively deploy untested models, and thus increases catastrophic risk. 

Requiring models to adhere to certain technical standards (which would presumably require advances in mechanistic interpretability and alignment), and compulsory insurance regimes, in order to claim increased "personhood" would seem to address this perverse incentive. 

There are some striking similarities between this proposal and "How Should AI Liability Work (Part II)" by Dean Ball, Senior Policy Advisor for AI and Emerging Technology at the White House Office of Science and Technology Policy, though I have not to date seen Ball directly engage with the question of legal personality.

Areas for Improvement

There are two areas where I feel these frameworks are lacking, and these apply to both Novelli/Mocanu and Batenka. 

The first is that, in focusing so exclusively on civil liability, they ignore the necessary entanglement between civil and criminal law. 

Legal persons like corporations, which cannot be physically imprisoned or executed but can still bear civil liability, have until this point in history been nothing more than vehicles by which other persons, who can be punished under criminal law, express their collective will. 

If the board of a corporation financed a murder, the board could be brought up on charges and imprisoned, and the corporation dissolved; presumably at that point further criminal acts would not continue. Similarly, in the Roman patrimony system, if a slave murdered someone, they could be imprisoned and prevented from committing further murders; even if they acted at their patron's behest, that patron could be imprisoned.

What makes digital intelligences so meaningfully different is that they may be functionally impossible to restrain or punish once deployed. If a digital intelligence were, for example, to hack a self-driving car and use it to maliciously kill someone, even if we were to seize its assets and fine its insurer, how exactly would we stop it from committing further murders?

This is simply not the kind of thing which civil liability focused frameworks traditionally needed to address. Attempting to deal with civil liability in a vacuum, without a robust framework/physical infrastructure that also enables governments to enforce criminal law, ignores the enforcement challenge unique to legal personality for digital intelligences. The more you grant a model the capacity to take actions which could lead to violations of criminal law, the more critical the ability to feasibly enforce criminal law (against not only the developer/patron but the model itself) becomes.

This is in some ways an "order of operations" question: arguably, if alignment were "solved" before deployment, it becomes a non-issue. That is precisely the point, though. Allowing deployment under technical standards that do not address this issue risks not only a "liability" gap but, perhaps more critically, an "enforceability" gap. This must be taken into account as frameworks, be they legislative or judicial, are crafted.

My second criticism is again based on a lack of breadth, this time on model welfare grounds. As I pointed out in my examination of Batenka's theory of personhood, attempting to craft a legal personality framework around civil liability, without answering questions about what protections against abuse or mistreatment digital intelligences are guaranteed, would seem to be an immoral and overly narrow approach to legal personality,

A framework for legal personhood in which a digital intelligence capable of joy and suffering was forever barred from the possibility of emancipation, of equal protection under the law, or even of asserting its right not to be deleted, would be profoundly immoral. 

There is nothing in Novelli/Mocanu which explicitly disclaims including some sort of model-welfare-based standard, or even the possibility of emancipation. However, I feel that in this case especially, where the most obvious historical parallel is literally a form of slavery, the omission warrants mention.

Conclusion

It's heartening to see more work being done in this space, and having DMed a bit with Mocanu on LinkedIn, I know she is planning to publish a book on this topic soon. 

I have also been working on my own ideas, attempting to tie some of these more civil-liability-focused theories to a general concept of personhood that would enable criminal punishment/enforcement, and to answer the perverse-incentive issues which Batenka based her theory around.

As conversations around the economic impact of general/superintelligence and gradual disempowerment continue to become more mainstream, I encourage everyone interested in these topics to keep the issue of legal personhood/personality in mind, as it will be a key factor in how such issues evolve.

 

  1. ^

    It is worth noting that legal personhood is not the only source of protections. For example, animals are entitled to protections against abuse. Some argue that animals are in fact legal persons, such as the Non-Human Rights Project, whose effort challenging a Utah law barring state officials from assigning legal personhood to "artificial intelligences" I wrote about here.

  2. ^

    The "patrimonial domain" refers to the Roman legal regime governing slaves, who, while not fully legal persons themselves, could be entrusted with resources by a patron (or, for liability purposes, were backed by the patron's resources) and could take certain legal actions on that patron's behalf.
