This post is a response to a claim by Scott Sumner in his conversation at LessOnline with Nate Soares, about how ethical we should expect AIs to be.
Sumner sees a pattern of increasing intelligence causing agents to be increasingly ethical, and sounds cautiously optimistic that such a trend will continue when AIs become smarter than humans. I'm guessing that he's mainly extrapolating from human trends, but extrapolating from trends in the animal kingdom should produce similar results (e.g. the cooperation between single-celled organisms that gave the world multicellular organisms).
I doubt that my response is very novel, but I haven't seen a clear enough articulation of the ideas in this post.
To help clarify why I'm not reassured much by the ethical trend, I'll start by breaking it down into two subsidiary claims:
1. The world will be dominated by entities who cooperate, in part because they use an ethical system that is at least as advanced as ours.
2. Humans will be included in the set of entities with whom those dominant entities cooperate.
Claim 1 seems well supported by trends that economists such as Sumner often focus on. I doubt that Nate was trying to argue against this claim. I'll give it a 90+% chance of turning out to be correct. Sumner's point sounds somewhat strong because it focuses on an important, and somewhat neglected, truth.
Claim 2 is where I want to focus most of our concern. The trends here are a bit less reassuring.
There's been a clear trend of our moral circle expanding in the direction that we currently think it should expand. How much of that should we classify as objective improvements versus cultural fads? Claim 1 is often measured by fairly objective criteria (GDP, life expectancy, etc.). In contrast, we measure expansion of our moral circle by the criteria of our current moral standards, giving us trends that look about as good if they're chasing fads as they do if the trends will stand the test of time.
Gwern provides extensive pushback against strong claims that moral circle expansion is a consistent trend.
I'll add one example: children have increasingly had their freedom to wander restricted during my lifetime (see the free-range parenting movement). It's almost as if they're considered to be like zoo animals, with their caretakers optimizing for safety at the expense of happiness. I don't find it hard to imagine a future where AI treats us like that.
The obvious versions of the moral circle expansion hypothesis suggest that we should expect human societies to soon grant moral patienthood to animals.
Blindly maximizing the size of our moral circle would be more problematic than maximizing cooperation. It's pretty unlikely that we will want to expand our moral circle to include pet rocks. It sure looks to me like moral circle expansion has been driven largely by pragmatic evaluations of costs and benefits, with only a modest influence from increasingly principled altruism.
Given this uncertainty about how closely our moral circle expansion approximates attaining an objective moral truth about who should be a moral patient, we ought to be more uncertain about its continuation than we are about the continuation of economic progress.
I expect that whether humans remain in the moral circle depends on hard-to-predict factors, such as the costs associated with interacting with humans, or whether AIs have relevant altruistic goals.
I recommend looking at examples such as human interactions with cats and chickens as possible analogies for how beings with very different cognitive abilities might interact. My impression is that increasing human intelligence does not make those interactions more ethical, but increasing human wealth weakly tends to make the interactions more ethical. Humans seem mostly aware of ethical concerns regarding factory-farmed chickens, yet their reactions seem mostly influenced by cost considerations rather than improved ethical insights.
So I'm only weakly reassured by the trend that Sumner sees.