Published on January 24, 2025 11:16 PM GMT
Language models are really powerful and will continue to get more powerful. So what does the future of language model usage look like?
Imagining the future
Here are some things people likely already do.
- Instead of reading a long piece of text, you might give a book to an LLM and ask it questions about the things you want to know.
- Instead of writing an essay, you might draft some points and ask an LLM to format them into prose.
Here are some other things LLMs might be able to do.
- Instead of manually deciding what to do, you might give an LLM access to a large collection of tasks you've written at some point and ask it for a small subset to do today.
- Instead of writing a script, an LLM might live in your terminal and ingest natural language commands, then do "just-in-time coding" to execute them.
- Instead of browsing the Internet, an LLM might live in your browser and deliver a personalized 'feed' of relevant information aggregated from everywhere in the world.
- Instead of going on Netflix, an LLM might just write a new movie for you based on your preferences, or according to a specification. This generalises to other kinds of media like books, plays, music, etc.
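The "just-in-time coding" idea can be sketched in a few lines: natural language goes in, an executable command comes out. The `translate` lookup table below is a stand-in assumption, not a real model; an actual version would prompt an LLM to produce the command.

```python
import subprocess

def translate(request: str) -> str:
    """Stand-in for an LLM turning a natural-language request into a
    shell command. The mappings are illustrative assumptions."""
    canned = {
        "show me the largest files here": "du -ah . | sort -rh | head -5",
        "what's today's date": "date +%F",
    }
    return canned[request]

def run(request: str) -> str:
    # Translate the request, then execute it just-in-time.
    cmd = translate(request)
    return subprocess.run(cmd, shell=True, capture_output=True,
                          text=True).stdout

print(translate("what's today's date"))
# date +%F
```

The interesting design question is where the trust boundary sits: a real system would presumably show the generated command for confirmation before executing it.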
AI-first product development
In the future it's likely that we'll design products explicitly for AI rather than for human consumption. Instead of websites and human-digestible prose, we might just send information as bullet points or JSON objects, and trust the language model running on our local device to 'decompile' it into human-readable language.
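A minimal sketch of that pipeline, assuming an AI-first service that ships structured facts rather than prose. The payload fields are illustrative assumptions, and `decompile_to_prose` is a deterministic template standing in for a local LLM so the example stays runnable.

```python
import json

# Hypothetical AI-first payload: structured facts, no prose.
payload = json.dumps({
    "type": "weather_update",
    "location": "Berlin",
    "temp_c": 4,
    "conditions": ["overcast", "light rain"],
})

def decompile_to_prose(raw: str) -> str:
    """Stand-in for a local LLM rendering a machine payload as
    human-readable text; a real system would prompt a model here."""
    data = json.loads(raw)
    conditions = " and ".join(data["conditions"])
    return (f"In {data['location']} it is {data['temp_c']}°C "
            f"with {conditions}.")

print(decompile_to_prose(payload))
# In Berlin it is 4°C with overcast and light rain.
```

The payload is far smaller than a rendered web page, and the same facts could be 'decompiled' differently per user: terse for one reader, verbose for another.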
The modern world contains volumes of information orders of magnitude higher than what we can process. Making sense of this involves efficiently aggregating, distilling, and presenting this information in digestible chunks. Language models are likely to be able to do this way better than our existing systems can.
Current systems restrict adoption
It must first be acknowledged that current adoption is low for mundane reasons like unreliability and a lack of infrastructure. However, these issues are transient. I also think they are overblown - most people could get far more useful work out of language models than they currently do, if they really tried.
The main problem is inertia. Existing systems are poorly designed to take advantage of language models. They were built for a world that didn't have extremely cheap and powerful machine intelligence, and must be re-designed from the ground up accordingly.
Language models have enormous potential. They won't just be assistants or tools. Consider the five senses we use to experience the world: there's no reason all of them couldn't be replaced by language models generating the equivalent input. Language models will form a 'base layer' over reality through which you perceive everything. Cf. Plato's cave.
"Soft" disempowerment
We often express concern that AI will take over the world, leading to "human disempowerment", and this phrase conjures up something like 1984. However, a functionally equivalent outcome is "soft" disempowerment of the kind seen in Brave New World, where we willingly cede more and more control over our lives to AI simply because it offers an objectively better experience.