After Life
© 1989 by Hans Moravec
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
(412) 268-3829
This article is adapted from the author's book Mind Children: The Future of Robot and Human Intelligence, Harvard University Press, October 1988.
We human beings are each constructed under control of a DNA tape containing about a billion carefully chosen bits of information. These bits were discovered by Darwinian evolution, a process driven by blind chance that nevertheless makes progress by burying its many mistakes and replicating its few successes. Operating for at least three billion years, it has lately crafted nervous systems that can learn. Learning is a faster kind of evolution, whose information resides in alterable connections between cells, and that does not require the growth of a new body for each new experiment. This process became a full partner to the old scheme in nervous systems that acquired the ability to communicate discoveries to others by example, and eventually through language. It transcended the old process when learning was augmented by foresight, the ability to anticipate the outcome of actions not yet taken. Cultural evolution accelerated itself further through the invention of information machinery, and in modern technical society this process is rediscovering in years techniques that took biological evolution millions to find. Our machines are coming to resemble living things. But biology had a long head start. As the twentieth century began, the mentality of machinery was no more complex than that of bacteria. As it draws to a close, machines exist with mental lives as rich as those of insects. Machine evolution has been proceeding steadily since the industrial revolution, and invites extrapolation to a time when the minds built by our hands will be a match for those that grow in our skulls. When that happens, our culture will be able to break free of its limiting Darwinian roots, to proceed, at a greatly accelerated pace, by purely cultural means. Our descendants will design their yet smarter progeny, and will undertake projects that are now completely inconceivable.
To gauge the gap remaining between machinery and the human mind, let's compare the best understood major part of the human nervous system, the retina of the eye, with computer operations that perform approximately the same function. The retina is a network of neurons at the back of the eyeball a half millimeter thick and less than a square centimeter in area. It receives the image projected by the lens, applies a variety of computations, and transmits the results along the optic nerve to sites deep in the brain. In a peculiar structural quirk, the light first passes through the processing layers before being detected by about 100 million photocells at the back of the network. The signals from large patches of photocells are averaged by tens of millions of horizontal cells, while bipolar cells compute averages over smaller areas. Amacrine cells combine the output of horizontal and bipolar cells and their own short term memories to detect spots, lines and motion. Finally a million ganglion cells transmit the results along their long axons, which form the optic nerve. The bottom line is that each of the million ganglion-cell axons reports on a specific function computed over a particular patch of photocells, at the rate of about ten results per second.
Spot, line and motion detectors have been a topic of research in computer vision for twenty years, and many efficient programs for computing them from television images have been devised. Each number produced by such programs, corresponding to a single output from one ganglion cell, requires the execution of at least 100 computer instructions. Since the retina produces a million such results ten times per second, it is doing the equivalent work of one billion computer instructions per second, about the power of the fastest commercial supercomputer, the Cray 2! The whole brain has about 1,000 times as many neurons as the retina, but occupies 100,000 times the volume, since the retinal neurons are smaller than average. Choosing a compromise factor of 10,000 lets us extrapolate the retinal numbers to the whole brain. By this calculation it would require 10 trillion computer instructions per second to do the brain's task. This is 10,000 times the power of a Cray 2, and about one million times the power of a typical desk computer.
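The arithmetic above can be laid out explicitly. This sketch simply multiplies out the article's own round numbers; none of the figures are measurements of mine:

```python
# Back-of-envelope estimate from the text (all figures are the
# article's round numbers, not measurements).
GANGLION_CELLS = 1_000_000        # retinal output channels
RESULTS_PER_SECOND = 10           # each axon reports ~10 results/s
INSTRUCTIONS_PER_RESULT = 100     # software cost of one detector output

retina_ips = GANGLION_CELLS * RESULTS_PER_SECOND * INSTRUCTIONS_PER_RESULT
print(f"retina: {retina_ips:.0e} instructions/second")   # 1e+09

# The brain has ~1,000x the retina's neurons but 100,000x its volume;
# the article splits the difference with a compromise factor of 10,000.
BRAIN_TO_RETINA = 10_000
brain_ips = retina_ips * BRAIN_TO_RETINA
print(f"brain:  {brain_ips:.0e} instructions/second")    # 1e+13
```

The estimate is only as good as its weakest factor, but it gives the 10-trillion-instructions-per-second figure used throughout the rest of the argument.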
Computers are improving steadily, but how long must we wait before they achieve this amount of power? Figure 2 plots the amount of calculating power per dollar of machine cost since the beginning of the century (the early mechanical machines, which were not automatic, include the cost of the necessary human operator). The slope is a remarkably steady thousandfold decrease in cost every twenty years. At that rate a supercomputer should achieve the human equivalence criterion in a little over twenty years, and the criterion should be reached in a personal computer in forty. There are enough new developments in research labs to sustain the rate of improvement for several more decades.
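The extrapolation is a simple logarithm. Under the article's trend (a thousandfold improvement per twenty years), a supercomputer 10,000 times short of the criterion needs a bit under twenty-seven years, and a personal computer a million times short needs forty:

```python
import math

# Trend read off Figure 2: computation per dollar improves a
# thousandfold every twenty years.
GROWTH_PER_PERIOD = 1000
PERIOD_YEARS = 20

def years_to_improve(factor):
    """Years until cost-effectiveness improves by the given factor,
    assuming the steady exponential trend."""
    return PERIOD_YEARS * math.log(factor, GROWTH_PER_PERIOD)

# Supercomputer: 10,000x short of 10 trillion instructions/second.
print(round(years_to_improve(10_000), 1))     # ~26.7 years
# Personal computer: 1,000,000x short.
print(round(years_to_improve(1_000_000)))     # 40 years
```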
==========================================================
Figure 1 - Think Power: Some natural and artificial organisms rated by their processing power and their storage capacity. Current laboratory computers are roughly equal in power to the nervous systems of insects. It is these machines that have hosted essentially all the research in robotics and artificial intelligence. The largest supercomputers of the late 1980s are a match for the 1-gram brain of a mouse, but at $10 million or more apiece they are reserved for serious work.
==========================================================
If the hardware for full intelligence is widely available in under fifty years, what about the software? During the last forty years attempts to build intelligent machines have taken two main approaches. One, going under such labels as cybernetics, perceptrons, neural nets and connectionism, attempts to imitate the structure of nervous systems. Such artificial nerve nets seem to be at their best in simple learning tasks that resemble the conditioned learning of simple animals like slugs. An entirely opposite approach, labeled artificial intelligence (AI), expert systems and semantic nets, imitates conscious human reasoning. AI programs have proven geometry theorems and solved other mathematical problems, played chess and diagnosed medical problems, with performance matching well-trained humans. These programs, however, uniformly have no real knowledge about their subject, and thus lack any common sense about it. For instance, a medical diagnosis program given the symptoms of a malfunctioning car is likely to prescribe an antibiotic to treat the problem. Both traditional AI and neural modeling have contributed insights to the enterprise, and no doubt each could solve the whole problem, given enough time. But with the present state of the art, I feel that the fastest progress on the hardest problems will be made under the banner of robotics, the construction of systems that must see and move in the real world. Robotics research is imitating the evolution of animal minds, adding capabilities to machines a few at a time, so that the resulting sequence of machine behaviors resembles the capabilities of animals with increasingly complex nervous systems. A key feature of this approach is that the complexity of these incremental advances can be tailored to make best use of the problem-solving abilities of the human researchers and the computers involved.
Our intelligence, as a tool, should allow us to follow the path to intelligence, as a goal, in bigger strides than those originally taken by the awesomely patient, but blind, processes of Darwinian evolution. By setting up experimental conditions analogous to those encountered by animals in the course of evolution, we hope to learn the same lessons. That animals started with small nervous systems gives confidence that today's small computers can emulate the first steps toward humanlike performance. Where possible, our efforts to simulate intelligence from the bottom up will be helped by biological peeks at the "back of the book"--at the neuronal, morphological, and behavioral features of animals and humans.
==========================================================
Figure 2 - A Century of Computing Evolution
==========================================================
A Robot for the Masses
Today robotics is still a struggling industry, whose products are so limited and expensive that they find only a few profitable niches in society. But the promise hinted at by those few applications is encouraging research that is slowly providing a base for huge future growth. Robot evolution in the direction of full intelligence will greatly accelerate, I believe, in about a decade, when general purpose robots become possible, opening new markets and creating a fast-growing industry.
The first generation of general purpose robots will need to move efficiently over large stretches of flat ground, but must also be able to cross rough ground and climb stairs. A good solution has been investigated in the labs of Hitachi of Japan. It consists of at least five steerable wheels, each on its own telescoping stalk that allows it to accommodate rises and dips in the terrain. With five wheels, the robot can lift one onto a stair while standing stably on the other four.
The robot would also need the ability to navigate reliably from place to place, avoiding obstacles and other dangers. Our laboratory at Carnegie Mellon University in Pittsburgh, supported in part by Denning Mobile Robotics, a small company in Boston, has developed a possible solution that allows a robot equipped with various sensors, including sonar range-measuring devices and a pair of television cameras, to build maps of its surroundings, to use them to plan routes, and to determine its location.
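The core of the map-building idea can be caricatured in a few lines. This is a deliberately simplified toy, not our lab's actual software (which accumulates probabilistic evidence from full sensor models); the grid size, update rate, and function names are all invented for illustration. Each sonar reading says that the cells along the beam are probably empty and the cell producing the echo is probably occupied:

```python
# Toy sonar grid-mapping sketch (hypothetical simplification; the
# real maps used probabilistic evidence grids and sensor models).
GRID = 16  # cells per side

def make_map():
    # Each cell holds an occupancy estimate in [0, 1]; 0.5 = unknown.
    return [[0.5] * GRID for _ in range(GRID)]

def sonar_update(grid, path_cells, hit_cell, rate=0.3):
    """Nudge cells along the beam toward 'empty' and the echoing
    cell toward 'occupied'."""
    for (x, y) in path_cells:
        grid[y][x] -= rate * grid[y][x]          # more likely empty
    hx, hy = hit_cell
    grid[hy][hx] += rate * (1.0 - grid[hy][hx])  # more likely occupied

m = make_map()
# One reading straight ahead: beam clears cells at ranges 1-3,
# echo returns from range 4.
sonar_update(m, [(8, 1), (8, 2), (8, 3)], (8, 4))
print(m[4][8] > 0.5, m[2][8] < 0.5)  # -> True True
```

Repeating such updates from many positions gradually converts the field of "unknown" cells into a usable map, over which routes can be planned and against which the robot can match new readings to locate itself.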
The robot will also require the ability to grasp many different shapes and sizes of objects. One of the best solutions so far for this operation, combining simplicity and dexterity, comes from a ten-year effort by Ken Salisbury, now at the Massachusetts Institute of Technology. Salisbury's three-fingered robot hand can hold and orient bolts and eggs and manipulate string in a humanlike fashion. The result of a computer search over joint lengths and configurations, the hand has three identical fingers that bend much like those of humans. However, because the fingers can bend outward as well as inward, the hand can grip hollow objects from the inside as well as from the outside.
Another major system that will be required is an object recognizer that can look at a cluttered scene and pick out a desired object. A system that comes close was developed several years ago at SRI International in Menlo Park, California. Called 3DPO (for three-dimensional parts orientation system), this program starts with a three-dimensional description of the object to be found, as well as a picture of a jumble of parts as seen by a special type of camera that measures the distance to each surface point. The program examines hypotheses that various edges in this picture represent particular boundaries of the object in question. The object is identified when the program finds a consistent set of such matchings.
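The hypothesize-and-test flavor of that search can be illustrated with a toy. This is not the 3DPO algorithm itself, which matched three-dimensional edge geometry from range images; here model and scene are reduced to bare edge lengths, and a match is any assignment of scene edges to model edges that is consistent within a tolerance:

```python
from itertools import permutations

# Toy hypothesize-and-test matcher (illustrative only; not the
# actual 3DPO program). Model and scene are just edge lengths.
model_edges = [3.0, 5.0, 7.0]

def find_object(scene_edges, tol=0.2):
    """Return indices of scene edges matching the model's edges as
    one consistent set, or None if no hypothesis survives."""
    for combo in permutations(range(len(scene_edges)), len(model_edges)):
        if all(abs(scene_edges[i] - m) <= tol
               for i, m in zip(combo, model_edges)):
            return combo
    return None

scene = [9.1, 5.1, 2.4, 7.05, 2.95]
print(find_object(scene))  # -> (4, 1, 3): edges 2.95, 5.1, 7.05
```

The real program prunes this combinatorial search aggressively, discarding hypotheses as soon as one pairing is geometrically inconsistent, rather than enumerating every assignment.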
A robot would be an economic, not to mention physical, impossibility if it required a supercomputer to control it. On today's small computers, programs take good fractions of an hour to identify objects, navigate across rooms or plan grasps for multifingered manipulators. A practical robot will require computers at least one thousand times as powerful. Although I don't anticipate fully general personal computers with this power for twenty years, many of the most intensive computations required for computer vision, navigation and motion planning can be accomplished by specialized computers that can give one hundred times the performance with the same amount of circuitry. Inexpensive combinations of general and special machines with the requisite abilities should be possible in ten years (let's say by the year 2000).
The first major market for general purpose robots will be in factories, where they will be somewhat cheaper and considerably more versatile than the older generation of robots they replace. Their improved cost-benefit ratio will allow them to be used in a much wider array of jobs, and thus in greater quantities, further lowering their cost. In time they will become cheaper than a small car, putting them within the reach of some households and creating a demand for a huge variety of new software. The robot control programs that actually get various jobs done will come from many different sources, as do programs for today's personal and business computers.
As with personal computers, many successful applications of the general-purpose robot will come as surprises to its makers. We can speculate about the videogame, word-processor, and spreadsheet equivalents of the mass robot era, but the reality will be stranger. To get the guessing game going, consider programs that do light mechanical assembly (from a factory automation company), clean bathrooms (from a small firm founded by former cleaning staff), assemble and cook gourmet meals from fresh ingredients (a collaboration of a computer type and a Paris chef), do tuneups on a certain year of Saturn cars (from the General Motors Saturn service department), hook patterned rugs (by a Massachusetts high school student), weed a lawn one weed at a time, participate in robot races (against other software--programs are assigned a certain physical robot chassis by lot just before the race begins), do detailed earthmoving and stonework (by an upstart construction company), investigate bomb threats (sold to police departments worldwide), deliver to and fetch from a warehoused inventory, help to assemble and test other robots (in several independent stages), and much more. Some of the applications will require optional hardware attachments for the robot, special tools and sensors (such as chemical sniffers), protective coverings, and so on.
==========================================================
Figure 3 - A General Purpose Robot: This caricature of a first-generation general-purpose robot shows the major systems: Locomotion with limited stair and rough-ground capability, general manipulation, stereoscopic vision, coarse 360° sensing for obstacle avoidance and navigation. Not shown is the computer hardware and software that will be required to animate this assembly.
==========================================================
Learning
A robot's safety and usefulness in a home would be greatly enhanced if it could learn to avoid idiosyncratic dangers and exploit opportunities. If a particular door on a certain route is often locked, it might be worthwhile if the robot could learn to favor a longer but more reliable path. A job might be done more effectively if the changing location of a needed ingredient could be learned or even anticipated from subtle clues. It is impossible to explicitly program the robot for every such eventuality, but much could be accomplished by a unified conditioning mechanism that increased the probability of decisions that had proven effective in the past under similar circumstances and decreased it for ones that had been followed by wasted activity or danger.
The conditioning software I have in mind would receive two kinds of messages from anywhere within the robot, one telling of success, the other of trouble. Some--for instance indications of full batteries, or imminent collisions--would be generated by the robot's basic operating system. Others, more specific to accomplishing particular tasks, could be initiated by applications programs for those tasks.
The messages also would provide input to a program that used statistical techniques to compactly "catalog" the time, position, activity, surroundings, and other properties known to the robot that preceded the signal. A "recognizer" would constantly monitor these variables and compare them with entries in the catalog. Whenever a set of conditions occurred that was similar to those that had often preceded trouble (or success) in the past, the recognizer would itself issue a somewhat weaker trouble (or success) message. In the case of trouble, this warning message might prevent the activity that had caused trouble before. In time the warning messages themselves would accumulate in the catalog, and the robot would begin to avoid the steps that led to the steps that caused the original problem. Eventually a long chain of associations like this could head off trouble at a very early stage. There are pitfalls, of course. If the strength of the secondary warnings does not weaken sufficiently as the chain lengthens, trouble messages could grow into an incapacitating phobia and success messages into an equally incapacitating addiction.
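The catalog-and-recognizer loop described above can be sketched in miniature. Everything here is a hypothetical design of my own: the class name, the set-overlap similarity measure, and the decay and threshold constants are all inventions, chosen only to show how secondary warnings weaken along the chain:

```python
# Sketch of the conditioning catalog (hypothetical design; the
# similarity measure and constants are invented for illustration).
class Conditioner:
    def __init__(self, decay=0.5, threshold=0.3):
        self.catalog = []        # (situation, signal, strength)
        self.decay = decay       # secondary messages are weaker
        self.threshold = threshold

    def record(self, situation, signal, strength=1.0):
        """Log a success/trouble message with the preceding situation."""
        self.catalog.append((frozenset(situation), signal, strength))

    def check(self, situation):
        """If the situation resembles a cataloged precursor, reissue
        a weakened copy of the associated message."""
        s = frozenset(situation)
        for past, signal, strength in self.catalog:
            overlap = len(s & past) / max(len(past), 1)
            weakened = strength * self.decay * overlap
            if weakened >= self.threshold:
                # The warning itself is cataloged, building the chain
                # of associations; each link is weaker than the last.
                self.record(situation, signal, weakened)
                return (signal, weakened)
        return None

c = Conditioner()
c.record({"hall", "stair-head", "moving"}, "trouble")  # a real tumble
print(c.check({"hall", "stair-head", "idle"}))         # weakened warning
print(c.check({"kitchen", "idle"}))                    # -> None
```

Because each reissued warning enters the catalog at reduced strength, the chain dies out after a few links; if `decay` were too close to 1, the phobia and addiction failure modes described above would appear.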
Besides allowing the robot to adapt opportunistically to its environment, a success-trouble mechanism could be exploited by applications programs in more directed ways. Suppose the robot has a spoken word recognizer. A module that simply generates a success signal on hearing the word "good" and a trouble message on hearing "bad" would allow a customer to easily modify the robot's behavior. If the robot was making a nuisance of itself by vacuuming while a room was in use, a few utterances of "bad!" might train it to desist until conditions changed, for instance at a different time of day or when the room was empty.
Imagery
Fast-learning robots would be able to handle programs that had a great many alternative actions at each stage of a task--such alternatives would give the robot a wide margin for creativity. But a robot with only a simple conditioning system would be a slow learner. Many repetitions would be required to elicit statistically significant correlations in the conditioning catalog. Some situations in the real world are unforgiving of such a leisurely approach. A robot that repeatedly wandered onto a public road, being slow to register the danger of that location, might suddenly be converted into scrap metal. A robot, or software, that was slow in adapting to changing conditions or opportunities in a house could lose the battle for economic survival against a swifter product from another manufacturer.
Learning could be greatly enhanced by the addition of another major module, a general world simulator. Now, even the bare-bones universal robot I outlined uses simulation to some extent. To safely reach its destination, a program in the universal robot consults its internal map of the surroundings and considers many alternative paths to find the best. These ponderings are simulations of hypothetical robot actions. Similar processes go on when the robot decides how to pick up an object or when it considers possible interpretations of what it sees with its cameras. But each of these procedures is specialized, models only one aspect of the world, and can be used for only one function. Suppose the robot had a much more powerful simulator that permitted complex hypothetical situations involving the robot and many aspects of its surroundings to be modeled. An application program might use such a simulator to check out a proposed action for safety and efficacy, without endangering the robot. Developing such a simulator would require a major effort by a large community of patient researchers. Such a program would allow a robot to enter a new room, scan it and build a mental model that included vast amounts of prior knowledge, such as the probable contents of kitchen counters and the effect of turning faucet knobs. It would incorporate instinctive knowledge about the world that took millions of years of evolution to install in us. After twenty years of growth, to a size perhaps larger than the automobile industry, the general purpose robot industry may be able to support the assembly of such knowledge, and to have it ready for use in another decade, perhaps by 2020.
Things get really interesting when events in the simulator are fed to the conditioning mechanism. Then, a disaster in the simulator (for instance a simulated tumble of the robot) would in real life condition the robot to avoid the simulated precursor event (let's say loitering at the head of a simulated stairwell). The robot could thus prepare for many future problems and opportunities by simulating possible scenarios in its idle time. Such scenarios might be simply variations on the day's real events. So equipped, the robot will have the capacity to remember, to imagine, and to dream.
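The idle-time rehearsal loop might look like this in caricature. Every name here is hypothetical: the simulator is a one-line stand-in that simply judges stair-heads risky, and the reduced conditioning strength for simulated (as opposed to real) outcomes is my own assumption:

```python
import random

# Sketch of 'dreaming': replay variations of the day's events through
# the conditioning update (all names and constants are hypothetical).
random.seed(1)

aversions = {}   # situation -> accumulated trouble strength

def condition(situation, signal, strength):
    if signal == "trouble":
        aversions[situation] = aversions.get(situation, 0.0) + strength

def simulate(situation):
    """Stand-in for the world simulator: stair-heads are judged risky."""
    return "trouble" if "stair-head" in situation else "ok"

days_events = [("hall",), ("hall", "stair-head"), ("kitchen",)]

# Idle-time rehearsal: perturb each real event and condition on the
# simulated outcome at reduced strength -- no real tumble required.
for event in days_events:
    for _ in range(3):
        variant = event + (random.choice(["carrying", "fast", "dark"]),)
        if simulate(variant) == "trouble":
            condition(event, "trouble", strength=0.2)

print(aversions)  # only the stair-head situation acquires an aversion
```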
The simulators will come from the factory loaded with generic knowledge, but they will also be required to learn the idiosyncrasies of each new location. Advanced robots may find themselves working with other robots and with people. Such interaction could be made more effective if the simulators on these machines could predict the behavior of others to some extent. Part of the prediction might involve roughly modeling the other's mental state, so that its reactions to alternative acts could be anticipated. New behaviors appear once there is an internal model of another being's state of mind. For instance, a module that generated trouble messages when it detected distress in a mental model in the simulator would condition the robot to act in a kindly manner. And a robot might find itself admonished for inappropriately ascribing "robotomorphic" feelings and intentions to other machines, or to humans!
We could carry this speculative evolution further, gradually endowing the feeling robots with intellectual capabilities similar to those of humans. I expect, however, that by the time the robots are ready for them, let's say by 2030, superb intellectual capabilities will be available for wholesale purchase from the traditional artificial intelligence industry, which will have been pursuing its top-down strategy in parallel with the bottom-up evolution of the robots. The marriage may take many years to consummate fully, raising issues such as how the reasoning system can best access the simulator to derive flashes of intuition, and how reasoning should influence the conditioning system so that it can override the robot's instincts in exceptional circumstances. The combination will create beings that in some ways resemble us, but in others are like nothing the world has seen before.
Transmigration
Some of us anticipate the discovery, within our lifetimes, of methods to extend human life, and we look forward to a few eons of exploring the universe. The thought of being grandly upstaged in this by our artificial progeny is disappointing. Long life loses much of its point if we are fated to spend it staring stupidly at our ultra-intelligent machines as they try to describe their ever more spectacular discoveries in baby-talk that we can understand. We want to become full, unfettered players in this new superintelligent game. What are the possibilities for doing that?
Genetic engineering may seem an easy option. Successive generations of human beings could be designed by mathematics, computer simulations, and experimentation, as airplanes, computers, and robots are now. They could have better brains and improved metabolisms that would allow them to live comfortably in space. But, presumably, they would still be made of protein, and their brains would be made of neurons. Away from earth, protein is not an ideal material. It is stable only in a narrow temperature and pressure range, is very sensitive to radiation, and rules out many construction techniques and components. And it is unlikely that neurons, which can now switch less than a thousand times per second, will ever be boosted to the billions-per-second speed of even today's computer components. Before long, conventional technologies, miniaturized down to the atomic scale, and biotechnology, its molecular interactions understood in detailed mechanical terms, will have merged into a seamless array of techniques encompassing all materials, sizes, and complexities. Robots will then be made of a mix of fabulous substances, including, where appropriate, living biological materials. At that time a genetically engineered superhuman would be just a second-rate kind of robot, designed under the handicap that its construction can only be by DNA-guided protein synthesis. Only in the eyes of human chauvinists would it have an advantage-- because it retains more of the original human limitations than other robots.
Robots, first or second rate, leave our question unanswered. Is there any chance that we-- you and I, personally-- can fully share in the magical world to come? This would call for a process that endows an individual with all the advantages of the machines, without loss of personal identity. Many people today are alive because of a growing arsenal of artificial organs and other body parts. In time, especially as robotic techniques improve, such replacement parts will be better than any originals. So what about replacing everything, that is, transplanting a human brain into a specially designed robot body? Unfortunately, while this solution might overcome most of our physical limitations, it would leave untouched our biggest handicap, the limited and fixed intelligence of the human brain. This transplant scenario gets our brain out of our body. Is there a way to get our mind out of our brain?
You've just been wheeled into the operating room. A robot brain surgeon is in attendance. By your side is a computer waiting to become a human equivalent, lacking only a program to run. Your skull, but not your brain, is anesthetized. You are fully conscious. The robot surgeon opens your brain case and places a hand on the brain's surface. This unusual hand bristles with microscopic machinery, and a cable connects it to the mobile computer at your side. Instruments in the hand scan the first few millimeters of brain surface. High-resolution magnetic resonance measurements build a three-dimensional chemical map, while arrays of magnetic and electric antennas collect signals that are rapidly unraveled to reveal, moment to moment, the pulses flashing among the neurons. These measurements, added to a comprehensive understanding of human neural architecture, allow the surgeon to write a program that models the behavior of the uppermost layer of the scanned brain tissue. This program is installed in a small portion of the waiting computer and activated. Measurements from the hand provide it with copies of the inputs that the original tissue is receiving. You and the surgeon check the accuracy of the simulation by comparing the signals it produces with the corresponding original ones. They flash by very fast, but any discrepancies are highlighted on a display screen. The surgeon fine-tunes the simulation until the correspondence is nearly perfect.
To further assure you of the simulation's correctness, you are given a pushbutton that allows you to momentarily "test drive" the simulation, to compare it with the functioning of the original tissue. When you press it, arrays of electrodes in the surgeon's hand are activated. By precise injections of current and electromagnetic pulses, the electrodes can override the normal signaling activity of nearby neurons. They are programmed to inject the output of the simulation into those places where the simulated tissue signals other sites. As long as you press the button, a small part of your nervous system is being replaced by a computer simulation of itself. You press the button, release it, and press it again. You should experience no difference. As soon as you are satisfied, the simulation connection is established permanently. The brain tissue is now impotent-- it receives inputs and reacts as before but its output is ignored. Microscopic manipulators on the hand's surface excise the cells in this superfluous tissue and pass them to an aspirator, where they are drawn away.
The surgeon's hand sinks a fraction of a millimeter deeper into your brain, instantly compensating its measurements and signals for the changed position. The process is repeated for the next layer, and soon a second simulation resides in the computer, communicating with the first and with the remaining original brain tissue. Layer after layer the brain is simulated, then excavated. Eventually your skull is empty, and the surgeon's hand rests deep in your brainstem. Though you have not lost consciousness, or even your train of thought, your mind has been removed from the brain and transferred to a machine. In a final, disorienting step the surgeon lifts out his hand. Your suddenly abandoned body goes into spasms and dies. For a moment you experience only quiet and dark. Then, once again, you can open your eyes. Your perspective has shifted. The computer simulation has been disconnected from the cable leading to the surgeon's hand and reconnected to a shiny new body of the style, color, and material of your choice. Your metamorphosis is complete.
Other, less gradual, ways to transfer a mind are conceivable. Whatever style of mind transfer you choose, as the process is completed many of your old limitations melt away. Your computer has a control labeled "speed." It had been set at "slow," to keep the simulations synchronized with the old brain, but now you change it to "fast," allowing you to communicate, react, and think a thousand times faster. The entire program can be copied into similar machines, resulting in two or more thinking, feeling versions of you. You may choose to move your mind from one computer to another that is more technically advanced or better suited to a new environment. The program can also be copied to a future equivalent of magnetic tape. Then, if the machine you inhabit is fatally clobbered, the tape can be read into a blank computer, resulting in another you minus your experiences since the copy. With enough widely dispersed copies, your permanent death would be highly unlikely.
As a computer program, your mind can travel over information channels, for instance encoded as a laser message beamed between planets. If you found life on a neutron star and wished to make a field trip, you might devise a way to build a robot there of neutron stuff, then transmit your mind to it. Since nuclear reactions are about a million times quicker than chemical ones, the neutron-you might be able to think a million times faster. You would explore, acquire new experiences and memories, and then beam your mind back home. Your original body could be kept dormant during the trip and reactivated with the new memories when the return message arrived-- perhaps a minute later but with a subjective year's worth of experiences. Alternatively, the original could be kept active. Then there would be two separate versions of you, with different memories for the trip interval.
Your new abilities will dictate changes in your personality. Many of the changes will result from your own deliberate tinkerings with your own program. Having turned up your speed control a thousandfold, you notice that you now have hours (subjectively speaking) to respond to situations that previously required instant reactions. You have time, during the fall of a dropped object, to research the advantages and disadvantages of trying to catch it, perhaps to solve its differential equations of motion. You will have time to read and ponder an entire on-line etiquette book when you find yourself in an awkward social situation. Faced with a broken machine, you will have time, before touching it, to learn its theory of operation and to consider, in detail, the various things that may be wrong with it. In general, you will have time to undertake what would today count as major research efforts to solve trivial everyday problems.
You will have the time, but will you have the patience? Or will a thousandfold mental speedup simply incapacitate you with boredom? Boredom is a mental mechanism that keeps you from wasting your time in profitless activity, but if it acts too soon or too aggressively it limits your attention span, and thus your intelligence. One of the first changes you will want to make in your own program is to retard the onset of boredom beyond the range found today in even the most extreme intellectuals. Having done that, you will find yourself comfortably working on long problems with sidetracks upon sidetracks. In fact, your thoughts will routinely become so involved that they will call for an increase in your short-term memory. Your long-term memory also will have to be boosted, since a month's worth of events will occupy a subjective span of a century! These are but the first of many changes.
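The time-dilation figures in this and the preceding passages are simple arithmetic, and can be checked in a few lines; the speedup factors are the essay's own round numbers, not measured quantities.

```python
# Subjective time elapsed = real time elapsed * speedup factor.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def subjective_years(real_seconds, speedup):
    """Convert a real-time interval to subjective years at a given speedup."""
    return real_seconds * speedup / SECONDS_PER_YEAR

# Nuclear-speed field trip: at a million-fold speedup, one subjective
# year fits into about half a real minute.
real_seconds_for_one_year = SECONDS_PER_YEAR / 1_000_000
print(round(real_seconds_for_one_year, 1))        # ~31.6 real seconds

# Thousand-fold speedup: a real month stretches to roughly a century.
month = 30 * 24 * 3600
print(round(subjective_years(month, 1000), 1))    # ~82.1 subjective years
```

Both of the essay's figures come out as order-of-magnitude round-ups: a bit over eighty subjective years per real month at a thousandfold speedup, and a subjective year in roughly half a real minute at a millionfold one.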
I have already mentioned the possibility of making copies of oneself, with each copy undergoing its own adventures. It should be possible to merge memories from disparate copies into a single one. To avoid confusion, memories of events would indicate in which body they happened, just as our memories today often have a context that establishes a time and place for the remembered event. Merging should be possible not only between two versions of the same individual but also between different persons. Selective mergings, involving some of another person's memories and not others, would be a superior form of communication, in which recollections, skills, attitudes, and personalities can be rapidly and effectively shared. Your new body will be able to carry more memories than your original biological one, but the accelerated information explosion will ensure the impossibility of lugging around all of civilization's knowledge. You will have to pick and choose what your mind contains at any one time. There will often be knowledge and skills available from others superior to your own, and the incentive to substitute those talents for yours will be overwhelming. In the long run you will remember mostly other people's experiences, while memories you originated will be incorporated into other minds. Concepts of life, death, and identity will lose their present meaning as your mental fragments and those of others are combined, shuffled, and recombined into temporary associations, sometimes large, sometimes small, sometimes long isolated and highly individual, at other times ephemeral, mere ripples on the rapids of civilization's torrent of knowledge. There are foretastes of this kind of fluidity around us. Culturally, individual humans acquire new skills and attitudes from others throughout life. Genetically, in sexual populations each individual organism is a temporary bundling of genes that are combined and recombined in different arrangements every generation.
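The merging scheme just described, with each memory tagged by the body in which it happened and merges made selectively, can be illustrated with a toy data structure. The record format and the merge rule here are invented for the sake of the sketch.

```python
# A "memory" is a record tagged with the body (copy) it happened in,
# so a merged mind can keep each recollection's context straight.
def merge_memories(*copies, keep=None):
    """Merge memory lists from several copies of a mind.

    Each memory is a (body_id, event) tuple. `keep` is an optional
    filter for selective merging: only records it approves are taken.
    Duplicates (shared pre-fork memories) appear once, in order.
    """
    merged, seen = [], set()
    for memories in copies:
        for record in memories:
            if keep is not None and not keep(record):
                continue
            if record not in seen:
                seen.add(record)
                merged.append(record)
    return merged

pre_fork = [("original", "childhood by the sea")]
copy_a = pre_fork + [("copy-a", "explored the neutron star")]
copy_b = pre_fork + [("copy-b", "pondered etiquette for a subjective decade")]

# Full merge: one mind holding both copies' adventures, with the
# shared pre-fork past deduplicated.
print(merge_memories(copy_a, copy_b))

# Selective merge: take only copy-b's new memories, the essay's
# "superior form of communication".
print(merge_memories(copy_b, keep=lambda r: r[0] == "copy-b"))
```

The body tag plays the role the essay assigns to context in ordinary memory: it lets a single mind hold two divergent histories without confusing which version of itself lived which one.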
Mind transferral need not be limited to human beings. Earth has other species with large brains, from dolphins, whose nervous systems are as large and complex as our own, to elephants, other whales, and perhaps giant squid, whose brains may range up to twenty times as big as ours. Just what kind of minds and cultures these animals possess is still a matter of controversy, but their evolutionary history is as long as ours, and there is surely much unique and hard-won information encoded genetically in their brain structures and their memories. The brain-to-computer transferral methods that work for humans should work as well for these large-brained animals, allowing their thoughts, skills, and motivations to be woven into our cultural tapestry. Slightly different methods, which focus more on genetics and physical makeup than on mental life, should allow the information contained in other living things with small or no nervous systems to be popped into the data banks. The simplest organisms might contribute little more than the information in their DNA. In this way our future selves will be able to benefit from and build on what the earth's biosphere has learned during its multibillion-year history. And this knowledge may be more secure if it is preserved in data banks spreading through the universe. In the present scheme of things genes and ideas are often lost when the conditions that gave rise to them change.
Our speculation ends in a supercivilization, the synthesis of all solar-system life, constantly improving and extending itself, spreading outward from the sun, converting nonlife into mind. Just possibly there are other such bubbles expanding from elsewhere. What happens if we meet one? A negotiated merger is a possibility, requiring only a translation scheme between the memory representations. This process, possibly occurring now elsewhere, might convert the entire universe into an extended thinking entity, a prelude to even greater things.