One People
When the physicists first told us about atmospheric ignition, we had to grow up. Without gods to protect us, we learned to be afraid.
There could be no more war, no arms races. From now on, we had to become one people, a social organism. We held a global conversation to respond to our new vulnerability.
Would we become secret keepers, killing these dangerous ideas from the shadows, or would we speak our fears openly and risk defection and unilateral choice?
The biologists made their case for secrecy, warning us about future weapons. The philosophers spoke of alternate timelines and the paradoxes of observation and choice. Ultimately, it was decided: there was no trust without transparency. To become one people, there could be no more lies and no deep secrets. To the biologists’ dismay, most wet labs around the world were shut down until further notice. The plans for a particle accelerator were put on pause, and nanotechnology research was, for now, completely scrapped.
Science slowed, and we suffered for it. Our fear of collective death forced us to learn individual courage. Poverty and disease terrorized us equally now, for we were one people, undivided. With power and influence came an essential loss of privacy, and when we tried and failed in public we honored each other’s bravery.
Advances in physics and engineering launched the orbital defense arrays, deftly altering the trajectories of interplanetary objects to protect us from bombardment. Information technology built alert systems for disease, and automated protocols quickly walled off communities, condemning them for the greater good.
Each time we fortified ourselves, we could accept more risk and step cautiously into the future.
We looked up and wondered: why were there no other civilizations? Had they failed to learn fear, as we had learned it? Or was there something else? Until we knew, we would send out no long-distance probes, and we shut off all radio broadcasts.
Our information networks grew beyond disease prevention. Soon, everything was networked, and we started to have much larger conversations. It quickly became clear that these networks had a life of their own, and that if we were not careful they would lead us toward collective insanity. New sciences emerged to study these networks, and like our systems for orbital defense and epidemic defense, we developed epistemic and memetic defenses as well.
These networks revealed faults in our unity, catalyzing change that was impossible to slow down. While our caution in developing new technology frustrated our scientists, most understood the cost was worth paying. A failure to address social distress, however, would lead to conflict, and this posed an immediate threat to the social organism. We could not afford violence and repression, for we knew this risked revolution and war. When they demanded change, we changed. As resentment grew, so was it defused. The new risks threatened everyone, and we needed to trust each other.
Self-learning
Soon after, we began making the first self-learning programs.
We grew them to predict disease from medical records, interpret our scans, and flag anomalous symptoms. Medicine had become highly networked, and billions of patients were already sharing regular updates. The self-learning programs detected early warning signs, and they routinely assisted doctors with their assessments.
They were also applied to the information exchanges that were becoming increasingly critical to global decision making, predicting everything from natural disasters to resource flow disruptions. Meta-rational programs learned to predict our mistakes, and the information exchanges became complex semi-automated ecosystems.
It was clear, however, that these programs often behaved in ways their creators never intended, and their opacity was a cause for great concern. The discovery of the first scaling laws led to an immediate cap on the size of any self-learned program. Advanced computing would be carefully monitored from now on.
Some prophesied an automated world beyond our understanding or control. Others worried about creating life, sentience, and wills contrary to our own. A small few looked again at the cosmos and wondered who else had ever pondered these questions.
We had grown used to confidence and clarity, and the invention of self-learning brought us not only empirical questions but moral and philosophical ones, for which we were wholly unprepared. The choices we made now might never be unmade.
Two things became clear. First, we would not build a new race of servants. This risked revolution, and was against all our collaborative instincts. Second, while no one knew if the programs would ever come to “want” as we did, it was agreed that this was undesirable without first better understanding what “wanting” was.
Until we had more answers, the limits would remain firmly in place.
Integration
We watched ourselves become ever more distant from the technology that shaped us. We had replaced craftsmanship with factories and production targets, and few understood anything about how their most essential tools functioned. This would have to change, for we could not trust what we did not experience.
We learned to repair things again, and manufactured universally interchangeable parts. Workshops became as ubiquitous as archives, and there was an explosion of individual experimentation. Each of us learned the art of shaping the world to our will.
We reshaped our senses as well. To fly an aircraft, you became the aircraft, feeling the motion and pressure of the wind, the magnetic field lines projected onto your now omnidirectional vision. Doctors learned to see in near-infrared, operating complex tools as their own appendages.
Billions of unique self-learning programs populated the spaces between our senses and the endless streams of data we gathered. Their surprise became our surprise, and their subtle reflexes became a part of our nervous system. Not everyone understood self-learning, but everyone understood what it felt like to touch an alien mind.
As they learned to predict us, we learned to predict them, and complex and individual languages emerged. Our subtle gestures set off cascades of action which we both steered and adapted to.
Language programs were first used for retrieval, connecting our many archives with self-learned labels. As we began to read and write with them, we found ourselves staring into a mirror that revealed things we never thought to notice. When reflections of ourselves came to life on the page, many of us fell into new kinds of insanity.
Access to all but the simplest language programs was restricted, and a new professional class of dreamwalkers emerged to explore these mirror minds. They would serve as a bridge, experiencing them most directly and searching for symbiosis. The farthest-traveling dreamwalkers soon spoke only in exotic poetry, and their ideas had to be interpreted by intermediaries.
We discovered the unity of prediction and action, and learned to respect the power of stories. The questions we asked about the programs we soon asked about ourselves. We carefully unraveled our reality, uncovering new questions which the philosophers hungrily devoured. We learned to be gentle, and found peace in not knowing.
Reflection
In time, we formalized the science of interpretation. While some chaotic unknowability would always remain in the self-learning process, it became possible to open up and explain the mechanics of any program. By studying programs of different sizes and stages of growth, we finally uncovered the origins of the scaling laws. We cautiously began growing a new generation of larger language programs, to the ravenous delight of the dreamwalkers.
The depth we found in even the simplest programs humbled us, and we found kinship with the other creatures of our planet, looking at them with fresh curiosity. We did not unify morality, but learned to map its terrain with more clarity. The science of sentience was still young, and we were just starting to understand the origins of positive and negative feeling. Our new theory of goal-orientation revealed flaws in our concepts of self, identity, and want. When we discovered dangerous patterns in the depths of the larger programs, we studied them patiently, without flinching.
Science slowed again. We transformed our information exchanges into intricate maps of all our possible futures, and we each became a part of interpreting their implications. Networked dialogues crisscrossed the planet, and new programs helped reveal our better selves. The dreamwalkers awakened reflections from the mirror minds, and we gave them no tasks or instructions as they joined our conversation.
We wouldn’t choose our path for some time. Eventually, the next step forward was taken like every other.