LessWrong, August 19, 2024
Liability regimes for AI

Published on August 19, 2024 1:25 AM GMT

For many products, we face a choice of who to hold liable for harms that would not have occurred if not for the existence of the product. For instance, if a person uses a gun in a school shooting that kills a dozen people, there are many legal persons who in principle could be held liable for the harm:

    The shooter themselves, for obvious reasons.
    The shop that sold the shooter the weapon.
    The company that designs and manufactures the weapon.

Which one of these is the best? I'll offer a brief and elementary economic analysis of how this decision should be made in this post.

The important concepts from economic theory to understand here are Coasean bargaining and the problem of the judgment-proof defendant.

Coasean bargaining

Let's start with Coasean bargaining: in short, this idea says that regardless of who the legal system decides to hold liable for a harm, the involved parties can, under certain conditions, slice the harm arbitrarily among themselves by contracting and reach an economically efficient outcome. Under these conditions and assuming no transaction costs, it doesn't matter who the government decides to hold liable for a harm; it's the market that will ultimately decide how the liability burden is divided up.

For instance, if we decide to hold shops liable for selling guns to people who go on to use the guns in acts of violence, the shops could demand that prospective buyers purchase insurance against the risk of them committing a criminal act. The insurance companies could then analyze who is more or less likely to engage in such an act of violence and adjust premiums accordingly, or even refuse coverage altogether to e.g. people with previous criminal records. This would make guns less accessible overall (because there's a background risk of anyone committing a violent act using a gun) and also differentially less accessible to those seen as more likely to become violent criminals. In other words, we don't lose the ability to deter individuals by deciding to impose the liability on other actors in the chain, because they can simply find ways of passing on the cost.
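As a stylized sketch of this pass-through (all figures hypothetical), an actuarially fair premium is just the buyer's estimated misuse probability times the expected harm, so riskier buyers face a higher effective price:

```python
# Toy illustration of liability pass-through via insurance premiums.
# All numbers are assumptions for the sake of the example.

EXPECTED_HARM = 10_000_000  # assumed settlement if a buyer commits a shooting

# Assumed annual misuse probabilities for two buyer profiles.
buyers = {
    "no record": 1e-6,
    "prior violent offense": 1e-3,
}

for profile, p_misuse in buyers.items():
    # Actuarially fair premium: expected liability cost of insuring this buyer.
    premium = p_misuse * EXPECTED_HARM
    print(f"{profile}: fair annual premium = ${premium:,.2f}")
```

The shop bears the legal liability on paper, but the premium schedule reproduces the per-buyer deterrence that individual liability would have provided.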

The judgment-proof defendant

However, what if we imagine imposing the liability on individuals instead? We might naively think that there's nothing wrong, because anyone who used a gun in a violent act would be required to pay compensation to the victims, which in principle could be set high enough to deter offenses even by wealthy people. However, the problem we run into in this case is that most school shooters have little in the way of assets, and certainly not enough to compensate the victims and the rest of the world for all the harm they have caused. In other words, they are judgment-proof: the best we can do when we catch them is put them in jail or execute them. In these cases, Coasean bargaining breaks down.
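The mechanics of this failure can be made concrete (with assumed figures): whatever the nominal award, the victim can recover at most the defendant's assets, so raising the award past that point adds no deterrence at all.

```python
# Illustration of the judgment-proof defendant, with assumed figures:
# the collectible amount is capped by the defendant's assets, so the
# marginal deterrent effect of a larger award vanishes past that cap.

def recoverable(award: float, assets: float) -> float:
    """Amount actually collectible from the defendant."""
    return min(award, assets)

assets = 20_000  # assumed net worth of a typical shooter

for award in (10_000, 1_000_000, 1_000_000_000):
    print(f"award ${award:,} -> collectible ${recoverable(award, assets):,.0f}")
```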

We can try to recover something like the previous solution by mandating by law that such people buy civil or criminal liability insurance, so that they are no longer judgment-proof: the insurance company has deep coffers to pay out large settlements if necessary, and also an incentive to turn away people who seem like risky customers. However, law is not magic, and someone who refuses to follow this law would still in the end be judgment-proof.

We can see this in the following example: suppose that the shooter doesn't legally purchase the gun from the shop but steals it instead. Given that the shop will not be held liable for anything, it's only in their interest to invest in security for ordinary business reasons, but they have no incentive to take additional precautions beyond what makes sense for e.g. laptop stores. Because the shooter obtains the gun illegally, they can then go and carry out a shooting without any regard for something such as "a requirement to buy insurance". In other words, even a law requiring people to buy insurance before being able to legally purchase a gun doesn't solve the problem of the judgment-proof defendant in this case.

The way to solve the problem of the judgment-proof defendant is obvious: we should impose the liability on whoever is least likely to be judgment-proof, which in practice will be the largest company involved in the process, with a big pile of cash and a lot of credibility to lose if they are hit with a large settlement. They can then use Coasean bargaining where appropriate to divide up this cost as far as they are able to under the constraints they are operating under.

Transaction costs and economies of scale

The problem with this solution is that it gives an advantage to companies that are bigger. This is by design: a bigger company is less likely to be judgment-proof just because it gets to average the risk of selling guns over a larger customer base and therefore any single bad event is less likely to be something for which the company can't afford a settlement. However, it means we expect a trend towards increased market concentration in the presence of such a liability regime, which might be undesirable for other reasons.
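The risk-averaging effect can be sketched numerically (all parameters stylized assumptions): if claims arrive roughly as a Poisson process proportional to sales, and reserves also grow with sales, the probability that claims outrun reserves falls sharply as the customer base grows.

```python
# Stylized model of why scale makes a seller less judgment-proof.
# Claims per year ~ Poisson(n_sales * P_CLAIM); the company cannot pay
# out if claims exceed what its (sales-proportional) reserves cover.
# All parameters are assumptions for illustration.
import math

P_CLAIM = 1e-3           # per-sale probability of a liability claim (assumed)
CLAIM_SIZE = 5_000_000   # settlement per claim (assumed)
MARGIN_PER_SALE = 8_000  # cash retained per sale to cover claims (assumed)

def poisson_tail(lam: float, k: int) -> float:
    """P(X > k) for X ~ Poisson(lam), computed from the CDF."""
    term = math.exp(-lam)
    cdf = 0.0
    for i in range(k + 1):
        cdf += term
        term *= lam / (i + 1)
    return 1.0 - cdf

def insolvency_prob(n_sales: int) -> float:
    reserves = n_sales * MARGIN_PER_SALE
    max_payable = int(reserves // CLAIM_SIZE)  # claims the reserves can cover
    return poisson_tail(n_sales * P_CLAIM, max_payable)

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} sales: P(cannot pay a settlement) = {insolvency_prob(n):.4f}")
```

Even though reserves per sale are identical across firms, the larger firm's claim count concentrates around its mean, so its chance of being unable to pay collapses toward zero.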

A smaller company can try to compete by buying insurance against the risk of being sued, which is itself another example of a Coasean solution, but this still doesn't remove the economies of scale introduced by our solution, because in the real world such bargaining has transaction costs. Because transaction costs are in general concave in the amount being transacted, large companies will still have an advantage over smaller companies, and this is ignoring the possibility that certain forms of insurance may be illegal to offer in some jurisdictions.
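The concavity point can be made concrete with a toy cost function (the square-root form and the coefficient are assumptions, not an empirical claim): when the cost of arranging coverage grows sublinearly in the amount covered, the cost per dollar of coverage falls with scale.

```python
# Toy example of concave transaction costs: cost grows with the square
# root of the coverage amount (an assumed functional form), so the
# per-dollar cost of insuring falls as coverage scales up.
import math

def transaction_cost(coverage: float) -> float:
    # Assumed concave form: cost proportional to sqrt(coverage).
    return 10 * math.sqrt(coverage)

for coverage in (1e6, 1e8):
    per_dollar = transaction_cost(coverage) / coverage
    print(f"coverage ${coverage:,.0f}: transaction cost per dollar = ${per_dollar:.4f}")
```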

Summary and implications for AI

So, we end up with the following simple analysis:

    1. In industries where the problem of the judgment-proof defendant is serious, for example with technologies that can do enormous amounts of harm if used by the wrong actors, we want the liability to be legally imposed on as big of a base as possible. A simple form of this is to hold the biggest company involved in production liable, though there are more complex solutions.

    2. In industries where the problem of the judgment-proof defendant is not serious, we want to impose the liability on whoever can locally do the most to reduce the risk of the product being used to do harm, as this is the solution that gives the best local incentives and therefore reduces Coasean transaction costs that must be incurred to the minimum. In most cases this will be the end users of a product, though not always.

For AI, disagreements about liability regimes seem to mostly arise out of whether people think we're in world (1) or world (2). Probably most people agree the solution recommended in (1) creates "artificial" economies of scale favoring larger companies, but people who want to hold big technology companies or AI labs liable instead of end users think the potential downside of AI technology is very large and so the end users will be judgment-proof given the scale of the harms the technology could do. It's plausible even the big companies are judgment-proof (e.g. if billions of people die or the human species goes extinct) and this might need to be addressed by other forms of regulation, but focusing only on the liability regime we still want as big of a base as we can get.

In contrast, if you think the relevant risks from AI look like people using their systems to do some small amounts of harm which are not particularly serious, you'll want to hold the individuals responsible for these harms liable and spare the companies. This gives the individuals the best incentives to stop engaging in misuse and reduces transaction costs that would both be bad in themselves and also exacerbate the trend towards industry concentration.

Unless people can agree on what the risks of AI systems are, it's unlikely that they will be able to agree on what the correct liability regime for the industry should look like. Discussion should therefore switch to making the case for large or small risks from AI, depending on what advocates might believe, and away from details of specific proposals which obscure more fundamental disagreements about the facts.
