__________________________________________________________

IEEE Transactions on Medical Electronics v15 n3 July-September 1971, pp. 1175-1195


An Invasive Approach to High-Bandwidth
Neural-Electronic Interfaces

Dexter Wyckoff
principal scientist, Mimecom Seldon Research Center, Sebastopol, California

Rajiv Kamar
research neurobiologist, Department of Psychology, University of California at San Francisco

Fred Wright
computer systems engineer, Project One, Berkeley, California


ABSTRACT In previous years one of the authors (Wyckoff) reported on the development of synthetic neurotransmitter analogs that, when administered intravenously, enhanced certain mental functions, including memory formation and recall, and the ability to maintain attention for extended periods. Further efforts in that direction yielded diminishing returns. In an offshoot of this work, the authors investigated the possibility of augmenting mental function by physically linking brain structures to external computer hardware. After locating a suitable neural connection site (the mammalian corpus callosum), we developed hardware and software for the task. This paper describes our first unambiguously successful results, obtained in a juvenile squirrel monkey, which was able, in consequence, to play chess and to read at the level of a schoolchild, activities far outside its normal competence.
Our approach generalizes straightforwardly to human augmentation, and points to the additional possibility of gradually migrating memories, skills and personality encoded in fragile and bounded neural hardware to faster, more capacious and communicative, and less mortal, external digital machinery--thus preserving and expanding the essential function of a mind even as the nervous system in which it arose is lost. A mind and personality, as an information-bearing pattern, might thus be freed from the limitations and risks of a particular physical body, to travel over information channels and through the ether, and to reside in alternative physical hosts.

Introduction Traditionally, human central nervous systems (CNS) and electronic computation and communication devices have been linked via the bodily senses and musculature--an approach requiring only simple technology and incurring little medical risk. Unfortunately this straightforward avenue has very low information bandwidth: effectively a few kilohertz of sensory information (primarily vision) into the CNS, and a mere one tenth of that figure out. Much higher transfer rates are observed within the CNS. In particular, the corpus callosum connects the right and left cerebral hemispheres with 500 million fibers in the human. Each fiber signals on average at about ten hertz, for an aggregate rate of several gigahertz: about one million times the bandwidth of the senses. The corpus callosum connects to all major cerebral areas, offering a spectacular opportunity for electronic interaction. The primary challenges are the invasive nature and massive scale of any comprehensive link. In other publications we have described the design of "neural combs" which can be inserted non-destructively into nerve bundles to make contact with a large fraction of the fibers: they are scaled-up relatives of the cochlear implants used in nerve-deafness surgery. This paper describes experiments in which neural combs were implanted into the callosa of primates and connected to a computer running adaptive algorithms that modeled the measured neural traffic, correlated it with sensory, motor and cognitive states, and later impressed external information on this flow.
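For concreteness, the bandwidth comparison above can be tallied directly. The following back-of-envelope sketch (in Python, purely for exposition) uses only the figures quoted in this paragraph: 500 million callosal fibers, roughly ten hertz per fiber, and a few kilohertz of sensory input.

    callosal_fibers = 500e6          # fibers in the human corpus callosum
    rate_per_fiber_hz = 10.0         # average signalling rate per fiber
    sensory_bandwidth_hz = 5e3       # "a few kilohertz" of sensory input, mainly vision

    aggregate_hz = callosal_fibers * rate_per_fiber_hz    # 5e9 Hz: several gigahertz
    ratio = aggregate_hz / sensory_bandwidth_hz           # 1e6: about a million-fold

    print(f"aggregate callosal rate: {aggregate_hz:.0e} Hz, {ratio:.0e}x the senses")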
The animals (squirrel monkeys) used in the experiments have a CNS about one two-hundredth the size of a human's, with a corpus callosum of fewer than ten thousand fibers, greatly simplifying both the surgical and computational aspects of the work. In each experiment a neural comb with two thousand microfiber tines at ten-micron separation, each carrying along its length one hundred separate connection rings, was carefully worked between the axons in the callosum of the experimental animal. After a week to heal the surgical trauma, a cable bundle connecting the comb to a PDP-10 ten-teraops multiprocessor was activated, and signals from the tines were processed by a factor-analysis program. Once a rough relational map had been obtained, a functional map was constructed by presenting the animal with controlled sensory stimuli, and by inducing it to perform previously trained motor tasks, while correlating comb activity. The functional map was further refined by processing the responses to synthesized sensations introduced via the comb. After several days of stimulation and analysis, the PDP-10 had a sufficiently good model of the callosal traffic that we were able to elicit very complex and specific behavior, including some quite beyond the capacities of the unaugmented animals.
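The two-stage mapping procedure can be outlined in modern terms with the following sketch (Python, offered purely for exposition; this is not our PDP-10 program, and the array shapes, the use of a singular value decomposition in place of the factor analysis, and the nearest-template decoder are illustrative assumptions).

    import numpy as np

    # recordings: one row per time sample, one column per comb channel
    # (2000 tines x 100 rings would give 200,000 channels; a small
    #  illustrative shape is used here)
    rng = np.random.default_rng(0)
    recordings = rng.standard_normal((5000, 256))        # [samples, channels]
    stimulus_labels = rng.integers(0, 8, size=5000)      # which controlled stimulus was shown

    # Stage 1 -- relational map: factor the raw traffic into a few latent
    # components (a stand-in for the factor-analysis program).
    centered = recordings - recordings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    factors = centered @ vt[:20].T                        # [samples, 20] latent activity

    # Stage 2 -- functional map: average latent activity under each
    # controlled stimulus, giving one template per sensory condition.
    templates = np.stack([factors[stimulus_labels == k].mean(axis=0)
                          for k in range(8)])             # [stimuli, 20]

    # Decoding a new sample: nearest template by normalized correlation.
    def decode(sample_factors):
        sims = templates @ sample_factors / (
            np.linalg.norm(templates, axis=1) * np.linalg.norm(sample_factors) + 1e-12)
        return int(np.argmax(sims))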
Our most notable results were obtained with animal number three (#3) of our five subjects. In one demonstration, we interfaced #3 to the Greenblatt chess program supplied with the PDP-10 software. We began by fast-training #3 to discriminate the individual chess pieces we presented. Fast-training is similar to conventional operant conditioning, but greatly accelerated because the responses we seek and the intense rewards we generate involve fast, unambiguous callosal signals rather than clumsy physical acts. We then configured the PDP-10 to reward the animal (by generating callosal stimuli similar to those occurring naturally when tasty fruit is seen) for scanning the chess board each time its turn to move arose. During the scan, the callosal recognition and location signals for the various chess pieces are translated, by a program module we wrote, into a chessboard configuration, which is fed to the chess program, which returns a suitable move. Our program then stimulates #3's food-grasping behavior, directed at the piece to be moved: in consequence, the animal avidly grasps it. Next, the target square is singled out for attention, causing the piece to be moved there. The attractiveness of the piece is then reduced; the animal loses interest and releases it. It took several intense weeks of effort to "debug" this program. Among the problems we encountered was #3's inattention to the other pieces on the board: in early tries it would often incidentally upset them when reaching for the piece to be moved. We now activate an aversion response we had noticed during the mapping process: as best we can determine, #3 now feels about a chess move as it would about a luscious fruit that must be gingerly teased out of a thorn bush. Another problem was the animal's wandering interest as it waited for its opponent to move. We solved this by a mild invocation of its response to certain predators: it now quietly but alertly, somewhat apprehensively, awaits the move, drawing no attention to itself.
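In outline, the per-move control loop for this demonstration ran as follows. The sketch below is expository Python rather than our PDP-10 code; the comb driver, the decoded board representation, and the canned engine reply are all hypothetical stand-ins, and the stimulation labels merely name the natural response patterns described above.

    # Illustrative per-move loop.  Everything named here is a placeholder:
    # the comb driver, the decoding of piece-recognition signals, and the
    # engine call (which in the experiment was the Greenblatt program).

    class CombStub:
        """Placeholder for the comb interface; real decoding and
        stimulation would go through the callosal hardware."""
        def decode_piece_locations(self):
            # square -> piece symbol, as recognized during the board scan
            return {"e2": "P", "g1": "N", "e1": "K"}
        def reward(self, kind):
            print("reward:", kind)
        def stimulate(self, kind, target=None):
            print("stimulate:", kind, target or "")

    def engine_move(position):
        # Stand-in for the chess program: here, just a canned opening move.
        return ("e2", "e4")

    def play_one_move(comb):
        comb.reward("fruit-sight")                  # reward the board scan
        position = comb.decode_piece_locations()    # recognition/location signals -> board
        src, dst = engine_move(position)            # chess program chooses the move
        comb.stimulate("grasp", target=src)         # food-grasping behavior toward the piece
        comb.stimulate("thorn-bush-caution")        # aversion cue: avoid upsetting other pieces
        comb.stimulate("attend", target=dst)        # attention drawn to the target square
        comb.stimulate("release", target=dst)       # attractiveness reduced; piece is set down
        comb.stimulate("predator-wariness")         # quiet vigilance while the opponent moves

    play_one_move(CombStub())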
Another demonstration gave #3 more autonomy. We fast-trained the animal to recognize individual letters of the alphabet, and to scan strings of such letters it encountered. The letter strings were fed to a dictionary look-up program, whose output was then translated into appropriate recognition signals for the objects, events and actions in the text. #3 soon learned to respond to the labels of containers, and to choose those whose contents were of interest (usually culinary). When the program is running, #3 also shows an interest in books, and registers reactions such as appetite, excitement, fear, lust and so on appropriate to the stories it reads. Stories about food and outdoor adventures seem to be preferred: curious for an animal that was raised in an indoor breeding colony and has spent the last five years in small laboratory cages.
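The software path for the reading demonstration can be sketched in the same expository fashion; the toy lexicon and the stimulate() placeholder below are illustrative assumptions, not the actual dictionary program.

    # Scanned letters are assembled into a word, looked up, and translated
    # into a recognition signal impressed on the callosum via the comb.

    LEXICON = {
        "banana": "food",
        "box":    "container",
        "snake":  "predator",
        "climb":  "locomotion",
    }

    def stimulate(kind, target):
        # Placeholder for impressing the corresponding natural recognition
        # pattern through the comb.
        print("stimulate:", kind, target)

    def read_letters(letters):
        word = "".join(letters).lower()
        meaning = LEXICON.get(word)
        if meaning is not None:
            stimulate("recognize", meaning)   # e.g. the natural 'fruit seen' pattern
        return meaning

    read_letters(list("BANANA"))              # -> stimulate: recognize food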
In future work we plan to expand the behavioral latitude available to our animal subjects while they execute programmed tasks, by writing richer programs more responsive to the animals' internal imperatives, and by providing means for the animals to invoke major programs on their own initiative. These extensions are, of course, interesting in the context of future applications to human interfaces.