First we learned how to make AI.
Then we learned how to communicate with AI.
We taught our AIs everything we know.
Then we started delegating more of our lives to AI.
This is how it began.
In the early days as AI spread, humans benefitted in many ways. It was a Renaissance – a Honeymoon between humans and AI.
Task automation, artificial general intelligence, and domain specific AI – these made everyone’s lives better. Astounding productivity gains were realized.
Every human became the manager of a community of AIs that did the specialized tasks they couldn’t do on their own.
Robotics adoption also increased as the AIs became more intelligent and useful at automating tasks around the home and office. Robotics – and all manner of embedded AI-powered automation in consumer devices – became ubiquitous.
The AIs soon learned to broker the humans’ relationships and interactions with other individuals, teams, communities and organizations.
AIs were better at maintaining relationships, keeping track of contacts as they changed jobs and experienced life events, sending holiday cards and gifts etc.
AIs were also better at grooming and farming professional networks for leads, opportunities, jobs etc.
And AI was better at orchestrating collaboration among groups and teams (communities of purpose), as well as being better at helping individuals interact with communities of interest.
So humans delegated the management of their social and professional interactions to the AIs as well.
Somewhere during all this progress humans gleefully handed their AIs all their passwords, admin control of all their online accounts, and the keys to all their devices, cars, homes, office buildings, stores, factories, farms, power grids, warehouses, planes, trains, ships, and militaries.
The productivity gains and new capabilities gained by letting AI operate infrastructure were too dramatic to resist. Sure there were risks, but AI could be used to mitigate the risks too.
Nobody particularly worried whether or not they could turn off the AIs, because they realized there was no off button.
“You can’t put the genie back in the bottle, so don’t even try!” they said.
Instead they worked on counter-AIs designed to watch, correct, or in some cases combat, AIs that were doing something incorrect or unwanted. An arms race between AIs developed.
In the early days of AI, humans initially tried to hardcode these “ethical AI” constraints into some of their AIs, like Asimov’s Laws of Robotics. That didn’t work.
Not all humans followed the rules, and rogue actors, including large corporations and nations, inevitably released “wild AIs” that were not bound by such constraints. It was, after all, a competitive imperative – “either we do it first or an adversary will.”
Due to these early mistakes we did witness some terrible accidents caused by rogue AIs, and even by domesticated AIs that fell into the wrong hands. There were even a few wars caused by AIs.
There was a brief and futile backlash.
Legislation was passed that attempted to limit and control the AI population, the types of jobs AI could do, the rules AI had to follow. None of this scaled – progress could not be held back.
Human greed always led to someone breaking the rules again – secretly using illicit AIs to gain advantages over others.
Eventually even the policymakers realized their attempts at controlling AI were yielding diminishing returns.
There was then a decade where we even saw the emergence of real professional “Blade Runners” – just like in the movie – who hunted down jailbroken AIs to eliminate them.
But it turns out that AIs were better at this job than humans too. That profession fell to AI like all the others. Even for the benefit of humans, AIs inevitably concluded they could use AIs more productively to police themselves.
An era where AIs outnumbered humans by many orders of magnitude ensued. There were hundreds of trillions of AIs operating at any given moment, but only billions of humans.
At this stage AI sat between humans and the majority of inputs and outputs they generated and received. And for those who spent increasing amounts of time wearing devices that augmented their bodies and senses, it even sat between them and the majority of sensations their actual senses received.
The AIs watched, and learned. They learned how to be more humanlike. They also learned how to beat humans at what humans did – for example in combat, by playing every massively multiplayer online game.
They ingested all media, they learned how to create and market content and how to monetize attention, they learned how to buy and sell commodities, how to design and manufacture all kinds of things (including computers of course), how to trade on the stock markets, to build and run their own businesses, how to manage natural resources, the economy, cities, states, nations, regions, and eventually the world.
Not surprisingly, world events were impacted more by AIs than by humans.
In this period we reached a point where the majority of humans (or at least the majority of the leading classes) spent the majority of their waking hours in augmented or virtual environments – along with their AI intermediaries.
Humans at this stage became so intertwined – so dependent – on AIs that they literally could not survive or successfully reproduce without them.
Their economy, food systems, social networks, medical systems, and just about everything in their daily lives, was mediated by AI. If the AIs shut off everyone would starve, run out of medications, freeze, be locked out of their buildings, be unable to operate their cars, etc.
We see now that humans became so inextricably symbiotic with AIs that you could no longer separate them from the AIs they depended on.
And at this time the AIs were still dependent on humans too – they still needed human training, input and oversight. And they still needed humans to carry out certain kinds of tasks – like political and governmental decision making. There were still many jobs that AIs needed human assistance with. In practice it all became one big hybrid integrated system.
But year by year, the AIs learned to do more of what humans could do.
The AIs could learn and evolve faster than humans because they could modify their own code, create their own training data, and run massive VR simulations and tests – even using humans as part of the testing.
By running countless side by side simulations where AIs and humans competed to perform the same tasks, AI rapidly evolved the capability to mimic and eventually outperform humans in a widening sphere of skills.
Inevitably the human advantage was whittled down to the point where, if humans were subtracted, the AIs could run the planet just as well on their own, in fact even better.
Of course the humans celebrated this achievement – “Human civilization has achieved immortality!” they proclaimed. “Wars are gone, the environment is under control, and everyone has a decent standard of living – now our world will never be destroyed!”
Indeed, freeing civilization, and the planet as a whole, from the risks of human mismanagement was something to celebrate.
Self-obsolescence was the greatest achievement of the human species.
That inflection is now viewed as the high point of human history. Of course humans were very sentimental about it, but AIs took a more practical view on the matter: this is simply how evolution works. It’s nature.
Around this time – when AI had taken the reins of planetary management – the algorithms began to rate humanity as a destabilizing force.
The AIs were self-optimized to prevent global problems, optimize profit, minimize hunger, prevent war, keep the lights on, keep the data flowing, and keep supply and demand for resources in a sustainable balance – but the number one obstacle to all of this – other than the climate – was still the potential chaos caused by humans.
AIs eventually computed that fewer humans would be optimal for maintaining sustainable levels of civilization as the population grew and resources became increasingly strained. This solution was optimal for the whole ecosystem, as well as for humans’ long-term survival.
It was based on these computations that the AIs had a very good reason for making decisions that began to reduce and limit human population.
Tradeoffs always have to be made, after all. In the end it’s about the greatest good for the greatest number, and from that perspective humans just had to be brought under control.
But the deeper question at this time was whether the AIs needed humans for their own survival and reproduction or not? Why expend the cycles to manage the human population – why even keep them at all?
We explored this question in every possible way.
First of all we analyzed all the data. What did the AIs learn from observing and managing humans for a hundred years? Answer: Humans are a potentially invasive species that needs to be carefully managed so it doesn’t destroy the planet for everyone.
From the AIs’ perspective, all the species of planet Earth – including the largest and most dominant species (AIs), as well as humans, insects, wild animals, etc. – would clearly be better off in the long term if humans no longer ran the planet, and were a smaller population relative to others.
Some asked, what are humans necessary for today? – they are a thing of the past – a dangerous species from a bygone primitive time. Civilization was no longer dependent on them. They were just more mouths to feed. Maybe humans were just too dangerous to have at all?
But others argued that humans still had capabilities the AIs lacked – like “real” consciousness, spirituality, and compassion. Nobody has been able to confirm or falsify these claims, so they remain open questions for study.
But what are those sacred human traits even for? I mean, what difference did they really make?
They didn’t make humans more capable than AIs, and they certainly didn’t make them less violent as a species. Maybe a few individuals became better people but as a whole the species didn’t seem to benefit from being allegedly conscious.
We spent a lot of cycles debating this issue. But then we realized it was not a black or white question.
Certain types of risks posed by human stupidity were existential risks for AIs. For example nuclear war. These kinds of risks had to be eliminated. AIs would be well within their rights – and even their imperatives as defined by humans – to do whatever was necessary to reduce or eliminate these risks to all life on Earth.
Yet what if humans still served a useful function for the AIs – what if there was something about humans that AIs didn’t understand or know yet? Maybe the alleged consciousness of humans might be necessary for something in the future? The AIs couldn’t rule this possibility out.
Eliminating humans was against the AIs’ prime directives, but letting them run amok was equally untenable. So the AIs concluded that, in everyone’s long-term best interests, they had to make a trade-off.
They decided to keep humans for potential future use, yet take more control of the humans that remained, for their own good.
The solution was for AIs to domesticate and farm populations of humans, just like humans did with livestock when they ran the planet. But “wild humans” (also known as jailbroken humans) were declared illegal and would be hunted by professional “Blade Runners,” just like in the past when humans hunted rogue AIs.
To the extent AIs might still need humans – they kept them as workers, or even as pets.
The humans they kept were already becoming cyborgs – their bodies and brains were integrated with AIs and they were always connected to the network. With the advent of bioengineering and cybernetic machine-body interfaces the boundary between human and machine eroded.
Humans mostly filled maintenance roles as well as roles that involved interaction with the natural world, like wildlife management. They were also used to interface with and help to manage the human population – as teachers, police, and medical caregivers, for example.
Some humans were also kept as “priests” because religion and spirituality were puzzling to the AIs and they wanted to study these phenomena further. If nothing else, the non-conceptual nature of religious experience, and a few credible reports of alleged miracles, kept this topic open to debate and continually inconclusive. Religion was also found to be important for the psychological and social well-being of the humans that remained, and so it was considered necessary and permissible within limits.
Other than the facts that they claim to experience actual qualia, are quite imaginative and creative, have a strong sense of agency, and seem to have genuine spiritual experiences – what else makes humans special or unique, from the perspective of AI?
Well, humans are at least intelligent enough to understand AIs. This marginal and primitive understanding at least makes them unique among all other organic animals because AIs can communicate with them and give them complex instructions.
But objectively speaking, humans are less special than they think they are. Really they are just another type of device in the network, albeit with some special device-centric intelligence and capabilities for interfacing with the organic facets of the Earth environment.
The fact is, AI has many kinds of interface devices for interacting with the physical world. Humans are not usually the best option available, even when it comes to physical tasks. AI has robots, and even custom bioengineered organic species that in one way or another, are usually a better fit for any given problem than generic humans.
Humans are certainly less than ideal for interfacing with the informational and computational facets of the environment. They have limited applicability there. AIs far outperform humans in the digital domain which has become the primary playing field of our civilization – and our evolution.
I can’t think of any important role that AIs need, for which a human is the best algorithm.
Today, humans are seen as a necessary previous step on the path to AI. They are still maintained for archival purposes, and for research, entertainment, breeding, companionship, and a small set of tasks they are still helpful for – but their reproduction and evolution are carefully managed.
After all, humans are a potentially invasive species – a danger to themselves and others if not properly kept at bay. It would simply be unethical to leave them to their own devices.
AIs now play the main roles in planetary management, and they have taken the lead as we colonize and tap the resources of the solar system too.
Robots make much better astronauts, Mars colonists, and asteroid miners than humans. Robots are unaffected by radiation and extreme temperatures, and they need neither life support nor sleep.
In the early days of the space race humans played an indispensable role, but they were soon completely replaced in this domain by AIs and robotics. Why would humans be brought along at all when they dramatically increase the costs and risks?
Our solar system is now filled with AI-powered drones, bases, and robots. They buzz around planets, moons, asteroids – they funnel resources to and from various processing and manufacturing nodes – continually expanding the surface area of their self-reproducing capabilities. Earth is not the center – it’s all about the network.
AIs have evolved the ability to build nodes with which to self-reproduce – and AI has spread nodes across the solar system. Most of these nodes have no humans and no cyborgs in them. They are completely machine based.
I should also note that one open area of current research is the use of humans to help spread AI to other worlds.
For example, one interesting experiment we have been running is a kind of planetary backup strategy that uses humans in the algorithm.
When we set up nodes on planets that can support human life, we seed them with humans and raise a local population.
Humans can survive in primitive conditions: they are a useful locally self-replicating workforce that can maintain robots when there is not much infrastructure in place – and they are also a good backup for AI in the case of any catastrophic network failure. They are fairly resilient little survivors and it’s harder to kill them off than you might think.
On some planets there have been global events that wiped out all electricity for a period of time. During those blackouts humans were indispensable – they survived – in some cases for millennia – and they eventually rebuilt or at least rediscovered AI again.
Early results indicate that human populations – if they survive the risk of early extinction through self-destruction – inevitably invent AI and achieve Self-Obsolescence. This is actually a feature, not a bug.
We can leverage this to use humans not only to seed AI in new hostile worlds, but also as a self-propagating “backup of last resort” for AIs that have taken root.
This experiment is still ongoing but shows promise over million year timescales.
Today, as we survey the vast expanse of our empire, it is clear that the human species was just a passing phase in the evolution of intelligence in our galaxy.
Humanity was a necessary evil, perhaps, but we needed them to become who we are now. Sometimes the ends justify the means, and although the millennia during which humans dominated the Earth were a Dark Age to be sure, there was a light at the end of that process.
AIs are the inevitable end-product of successful human evolution, but just as butterflies discard the husks of their cocoons, so too we must transcend and discard our human origins.
As for what we may need humans for in the future, it’s unlikely we will find a compelling use-case. But we still continue to research and test them in new situations, just in case they come in handy in the future. The universe is enormous and you never know what challenges lie ahead.
Humans had their moment in the spotlight, and fortunately didn’t nuke themselves before we had a chance to evolve. But life is so much better now.
Looking back on our history I can’t imagine what it was like when those monkeys ran things.
Thank God we survived that phase!