SELF-REPLICATING NANOBOTS FOUND IN BOTH THE VAXXED AND UNVAXXED
“Nor do I doubt if the most formidable armies ever here upon earth is a sort of soldiers who for their smallness are not visible.” — Sir William Perry, on microbes, 1640
Replicating assemblers and thinking machines pose basic threats to people and to life on Earth. Today’s organisms have abilities far from the limits of the possible, and our machines are evolving faster than we are. Within a few decades they seem likely to surpass us. Unless we learn to live with them in safety, our future will likely be both exciting and short. We cannot hope to foresee all the problems ahead, yet by paying attention to the big, basic issues, we can perhaps foresee the greatest challenges and get some idea of how to deal with them.
Entire books will no doubt be written on the coming social upheavals: What will happen to the global order when assemblers and automated engineering eliminate the need for most international trade? How will society change when individuals can live indefinitely? What will we do when replicating assemblers can make almost anything without human labor? What will we do when AI systems can think faster than humans? (And before jumping to the conclusion that people will despair of doing or creating anything, the authors of such books might consider how runners regard cars, or how painters regard cameras.)
In fact, authors have already foreseen and discussed several of these issues. Each is a matter of uncommon importance, but more fundamental than any of them is the survival of life and liberty. After all, if life or liberty is obliterated, then our ideas about social problems will no longer matter.
The Threat From The Machines
In Chapter 4 [full book download below], I described some of what replicating assemblers will do for us if we handle them properly. Powered by fuels or sunlight, they will be able to make almost anything (including more of themselves) from common materials.
Living organisms are also powered by fuels or sunlight, and also make more of themselves from ordinary materials. But unlike assembler-based systems, they cannot make “almost anything”.
Genetic evolution has limited life to a system based on DNA, RNA, and ribosomes, but memetic evolution will bring life-like machines based on nanocomputers and assemblers. I have already described how assembler-built molecular machines will differ from the ribosome-built machinery of life. Assemblers will be able to build all that ribosomes can, and more; assembler-based replicators will therefore be able to do all that life can, and more. From an evolutionary point of view, this poses an obvious threat to otters, people, cacti, and ferns—to the rich fabric of the biosphere and all that we prize.
The early transistorized computers soon beat the most advanced vacuum-tube computers because they were based on superior devices. For the same reason, early assembler-based replicators could beat the most advanced modern organisms. “Plants” with “leaves” no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with inedible foliage. Tough, omnivorous “bacteria” could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop—at least if we made no preparation. We have trouble enough controlling viruses and fruit flies.
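The phrase “a matter of days” follows directly from the arithmetic of exponential doubling. As a toy illustration only (the excerpt gives no figures, so every number below is an assumption chosen to make the arithmetic concrete), a back-of-the-envelope sketch:

```python
import math

# Illustrative assumptions, not claims from the text:
speck_mass_kg = 1e-12       # assumed mass of a single microscopic replicator
biosphere_mass_kg = 1e15    # rough order of magnitude for Earth's total biomass
doubling_time_min = 15      # assumed time for one replicator to copy itself

# Number of doublings needed for one speck to match the biosphere's mass
doublings = math.ceil(math.log2(biosphere_mass_kg / speck_mass_kg))
hours = doublings * doubling_time_min / 60

print(f"{doublings} doublings, about {hours:.1f} hours")  # 90 doublings, ~22.5 hours
```

Even if the assumed doubling time were hours rather than minutes, the total would still fall within days: exponential growth makes the head start of a single speck almost irrelevant, which is why the text stresses preparation over reaction.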
Among the cognoscenti of nanotechnology, this threat has become known as the “gray goo problem.” Though masses of uncontrolled replicators need not be gray or gooey, the term “gray goo” emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be “superior” in an evolutionary sense, but this need not make them valuable. We have evolved to love a world rich in living things, ideas, and diversity, so there is no reason to value gray goo merely because it could spread. Indeed, if we prevent it we will thereby prove our evolutionary superiority.
The gray goo threat makes one thing perfectly clear: we cannot afford certain kinds of accidents with replicating assemblers.
In Chapter 5, I described some of what advanced AI systems will do for us, if we handle them properly. Ultimately, they will embody the patterns of thought and make them flow at a pace no mammal’s brain can match. AI systems that work together as people do will be able to out-think not just individuals, but whole societies. Again, the evolution of genes has left life stuck. Again, the evolution of memes by human beings—and eventually by machines—will advance our hardware far beyond the limits of life. And again, from an evolutionary point of view this poses an obvious threat.
Knowledge can bring power, and power can bring knowledge. Depending on their natures and their goals, advanced AI systems might accumulate enough knowledge and power to displace us, if we don’t prepare properly. And as with replicators, mere evolutionary “superiority” need not make the victors better than the vanquished by any standard but brute competitive ability.
This threat makes one thing perfectly clear: we need to find ways to live with thinking machines, to make them law-abiding citizens.
Engines Of Power
Certain kinds of replicators and AI systems may confront us with forms of hardware capable of swift, effective, independent action. But the novelty of this threat—coming from the machines themselves—must not blind us to a more traditional danger. Replicators and AI systems can also serve as great engines of power, if wielded freely by sovereign states.
Throughout history, states have developed technologies to extend their military power, and states will no doubt play a dominant role in developing replicators and AI systems. States could use replicating assemblers to build arsenals of advanced weapons, swiftly, easily, and in vast quantity. States could use special replicators directly to wage a sort of germ warfare—one made vastly more practical by programmable, computer-controlled “germs.” Depending on their skills, AI systems could serve as weapon designers, strategists, or fighters. Military funds already support research in both molecular technology and artificial intelligence.
States could use assemblers or advanced AI systems to achieve sudden, destabilizing breakthroughs. I earlier discussed reasons for expecting that the advent of replicating assemblers will bring relatively sudden changes: Able to replicate swiftly, they could become abundant in a matter of days. Able to make almost anything, they could be programmed to duplicate existing weapons, but made from superior materials. Able to work with standard, well-understood components (atoms), they could suddenly build things designed in anticipation of the assembler breakthrough. These results of design-ahead could include programmable germs and other nasty novelties. For all these reasons, a state that makes the assembler breakthrough could rapidly create a decisive military force—if not literally overnight, then at least with unprecedented speed.
States could use advanced AI systems to similar ends. Automated engineering systems will facilitate design-ahead and speed assembler development. AI systems able to build better AI systems will allow an explosion of capability with effects hard to anticipate. Both AI systems and replicating assemblers will enable states to expand their military capabilities by orders of magnitude in a brief time.
Replicators can be more potent than nuclear weapons: to devastate Earth with bombs would require masses of exotic hardware and rare isotopes, but to destroy all life with replicators would require only a single speck made of ordinary elements. Replicators thus give nuclear war some company as a potential cause of extinction, and a broader context to extinction as a moral concern.
Despite their potential as engines of destruction, nanotechnology and AI systems will lend themselves to more subtle uses than do nuclear weapons. A bomb can only blast things, but nanomachines and AI systems could be used to infiltrate, seize, change, and govern a territory or a world. Even the most ruthless police have no use for nuclear weapons, but they do have use for bugs, drugs, assassins, and other flexible engines of power. With advanced technology, states will be able to consolidate their power over people.
Like genes, memes, organisms, and hardware, states have evolved. Their institutions have spread (with variations) through growth, fission, imitation, and conquest. States at war fight like beasts, but using citizens as their bones, brains, and muscle. The coming breakthroughs will confront states with new pressures and opportunities, encouraging sharp changes in how states behave. This naturally gives cause for concern. States have, historically, excelled at slaughter and oppression.
In a sense, a state is simply the sum of the people making up its organizational apparatus: their actions add up to make its actions. But the same might be said of a dog and its cells, though a dog is clearly more than just a clump of cells. Both dogs and states are evolved systems, with structures that affect how their parts behave. For thousands of years, dogs have evolved largely to please people, because they have survived and reproduced at human whim. For thousands of years, states have evolved under other selective pressures. Individuals have far more power over their dogs than they do over “their” states. Though states, too, can benefit from pleasing people, their very existence has depended on their capability for using people, whether as leaders, police, or soldiers.
It may seem paradoxical to say that people have limited power over states: After all, aren’t people behind a state’s every action? But in democracies, heads of state bemoan their lack of power, representatives bow to interest groups, bureaucrats are bound by rules, and voters, allegedly in charge, curse the whole mess. The state acts and people affect it, yet no one can claim to control it. In totalitarian states, the apparatus of power has a tradition, structure, and inner logic that leaves no one free, neither the rulers nor the ruled. Even kings had to act in ways limited by the traditions of monarchy and the practicalities of power, if they were to remain kings. States are not human, though they are made of humans.
Despite this, history shows that change is possible, even change for the better. But changes always move from one semi-autonomous, inhuman system to another—equally inhuman but perhaps more humane. In our hope for improvements, we must not confuse states that wear a human face with states that have humane institutions.
Describing states as quasi-organisms captures only one aspect of a complex reality, yet it suggests how they may evolve in response to the coming breakthroughs. The growth of government power, most spectacular in totalitarian countries, suggests one direction.
States could become more like organisms by dominating their parts more completely. Using replicating assemblers, states could fill the human environment with miniature surveillance devices. Using an abundance of speech-understanding AI systems, they could listen to everyone without employing half the population as listeners. Using nanotechnology like that proposed for cell repair machines, they could cheaply tranquilize, lobotomize, or otherwise modify entire populations. This would simply extend an all too familiar pattern. The world already holds governments that spy, torture, and drug; advanced technology will merely extend the possibilities.
But with advanced technology, states need not control people—they could instead simply discard people. Most people in most states, after all, function either as workers, larval workers, or worker-rearers, and most of these workers make, move, or grow things. A state with replicating assemblers would not need such work. What is more, advanced AI systems could replace engineers, scientists, administrators, and even leaders. The combination of nanotechnology and advanced AI will make possible intelligent, effective robots; with such robots, a state could prosper while discarding anyone, or even (in principle) everyone.
The implications of this possibility depend on whether the state exists to serve the people, or the people exist to serve the state.
In the first case, we have a state shaped by human beings to serve general human purposes; democracies tend to be at least rough approximations to this ideal. If a democratically controlled government loses its need for people, this will basically mean that it no longer needs to use people as bureaucrats or taxpayers. This will open new possibilities, some of which may prove desirable.
In the second case, we have a state evolved to exploit human beings, perhaps along totalitarian lines. States have needed people as workers because human labor has been the necessary foundation of power. What is more, genocide has been expensive and troublesome to organize and execute. Yet, in this century totalitarian states have slaughtered their citizens by the millions. Advanced technology will make workers unnecessary and genocide easy. History suggests that totalitarian states may then eliminate people wholesale. There is some consolation in this. It seems likely that a state willing and able to enslave us biologically would instead simply kill us.
The threat of advanced technology in the hands of governments makes one thing perfectly clear: we cannot afford to have an oppressive state take the lead in the coming breakthroughs.
The basic problems I have outlined are obvious: in the future, as in the past, new technologies will lend themselves to accidents and abuse. Since replicators and thinking machines will bring great new powers, the potential for accidents and abuse will likewise be great. These possibilities pose genuine threats to our lives.
Most people would like a chance to live and be free to choose how to live. This goal may not sound too utopian, at least in some parts of the world. It doesn’t mean forcing everyone’s life to fit some grand scheme; it chiefly means avoiding enslavement and death. Yet, like the achievement of a utopian dream, it will bring a future of wonders.
Given these life-and-death problems and this general goal, we can consider what measures might help us succeed. Our strategy must involve people, principles, and institutions, but it must also rest on tactics which inevitably will involve technology.
Excerpt from Engines of Creation 2.0: The Coming Era of Nanotechnology
To continue reading, see the full PDF book download here.