Published on July 11, 2024 10:01 AM GMT
Let's assume, for the sake of discussion, that Leopold Aschenbrenner is correct that at some point in the fairly near future (possibly, as he claims, this decade, or perhaps somewhat later) AI will be capable of acting as a drop-in remote worker: as intelligent as the smartest humans, capable of doing basically any form of intellectual work that doesn't have in-person requirements, able to do so as well as or better than pretty much all humans, and around two or three orders of magnitude cheaper than current pay for intellectual work (so at least an order of magnitude cheaper than a subsistence income).
Let's also assume for this discussion that at some point after that, perhaps not very long after, developments in robotics overcome Moravec's paradox and mass production of robots decreases their cost, to the point where a robot (humanoid or otherwise) can do basically every job that requires manual dexterity, hand-eye coordination, or bodily agility, again for significantly less than a subsistence wage. Let's further assume that some of these robots are well waterproofed, so that even plumbers, lifeguards, and divers are out of work.
I'd also like to assume for this discussion that we don't to any great extent merge with machines: some of us may get very good at using AI tools, but we don't effectively add them as a third hemisphere of our brain.
My question is, under this set of assumptions, what paying jobs (other than being on UBI) will then still be available to humans, even if just to talented ones? How long-term are the prospects of these (after the inevitable transition period)?
[If you instead want to discuss the probability/implausibility of any or all of my three assumptions, rather than the economic/career consequences of all three of them occurring, that's not an answer to my question, but it is a perfectly valid comment, and I'd love to discuss it in the comments section.]
Here are the candidates I've already thought of:
1. Doing something that machines can do better, but that people are still willing to pay to watch a very talented/skilled human do about as well as any human can (on TV or in person).
Examples: chess master, Twitch streamer, professional athlete, Cirque du Soleil performer.
Epistemic status: already proven for some of these; the first two are things that machines have been able to do better than humans for a while, yet people are still interested in paying to watch a human do them about as well as a human can. It also seems very plausible for the others, which current robotics is not yet up to doing better.
Economic limits: If you're not in the top O(1000) people in the world at some specific activity that plenty of people are interested in watching, then you can make roughly no money off this. Despite the aspirations of a great many teenaged boys, being an unusually good (but not amazing) video gamer is not a skill that will make you any money at all.
2. Doing some intellectual and/or physical work that AI can do better, but for some reason people are willing to pay two orders of magnitude more to have it done less well by a human, perhaps because they trust humans more. (Could also be mixed with item 3 below.)
Examples: doctor, veterinarian, lawyer, priest, babysitter, nurse, primary school teacher.
Epistemic status: A great many people tell me "I'd never let an AI/a robot do <high stakes intellectual or physical work> for me/my family/my pets…" They are clearly quite genuine in this opinion. It remains to be seen how long this opinion will last in the presence of a very large price differential when the AI/robot-produced work is actually, demonstrably, just as good if not better.
Economic limits: I suspect there will be a lot of demand for this at first, and that it will decrease over time, perhaps even quite rapidly. Requires being reliably good at the job, and at appearing reassuringly competent while doing so.
I'd be interested to know whether people think there are specific examples of this that will never go away, or at least will take a very long time to do so. (Priest is my personal strongest candidate.)
3. Giving human feedback/input/supervision to/of AI/robotic work/models/training data, to improve, check, or confirm its quality.
Examples: current AI training crowd-workers, acting as a manager or technical lead to a team of AI white-collar workers, focus group participant, filling out endless surveys on the fine points of Human Values.
Epistemic status: seems inevitable.
Economic limits: I imagine there will be a lot of demand for this at first; I'm rather unsure whether that demand will gradually decline as the AIs get better at doing this without needing human input, or increase over time because the overall economy is growing so fast.
4. In-person sex work, where the client is willing to pay a (likely order-of-magnitude) premium for an actual human.
Example: (self-evident)
Epistemic status: human nature.
Economic limits: Requires rather specific talents.
5. Providing some nominal economic value while being a status symbol, where the primary point is to demonstrate that the employer has so much money they can waste some of it on employing a real human ("They actually have a human maid!").
Examples: receptionist, maid, personal assistant
Epistemic status: human nature (assuming there are still people this unusually rich).
Economic limits: There are likely to be relatively few positions of this type, at most a few per person so unusually rich that they feel a need to show this fact off. (Human nobility used to do a lot of this, centuries back, but those servants were supplying real, significant economic value, and the being-a-status-symbol component of it was mostly confined to the uniforms the servants wore while doing so.) Requires rather specific talents, including looking glamorous and expensive, and probably also being exceptionally good at your nominal job.
So, what other examples can people think of?
One category whose long-term viability I'm personally really unsure about is being an artist/creator/influencer/actor/TV personality. Being merely fairly good at drawing, playing a musical instrument, or other creative skills is clearly going to get automated out of having any economic value, and being really rather good at it will probably turn into "your primary job is to create more training data for the generative algorithms", i.e. become part of item 3 above. What is less clear to me is whether (un-human-assisted) machines will ever become better than world class (compared to humans using AI tools or with AI coworkers) at this sort of stuff, and if they do, whether people will still want content from an actual human instead (thus making this another example of either item 1 or 5 above).
Discuss