__________________________________________________________

IEEE Transactions on Medical Electronics v15 n3 July-September 1971, pp. 1175-1195


An Invasive Approach to High-Bandwidth
Neural-Electronic Interfaces

Dexter Wyckoff
principal scientist, Mimecom Seldon Research Center, Sebastopol, California

Rajiv Kamar
research neurobiologist, Department of Psychology, University of California at San Francisco

Fred Wright
computer systems engineer, Project One, Berkeley, California


ABSTRACT In previous years one of the authors (Wyckoff) reported on the development of synthetic neurotransmitter analogs that, administered intravenously, enhanced certain mental functions, including memory formation and recall, and the ability to maintain attention for extended periods. Further efforts in that direction yielded diminishing returns. In an offshoot of this work, the authors investigated the possibility of augmenting mental function by physically linking brain structures to external computer hardware. After locating a suitable neural connection site (the mammalian corpus callosum), we developed hardware and software for the task. This paper describes our first unambiguously successful results, obtained in a juvenile squirrel monkey, which was able, in consequence, to play chess and to read at the level of a schoolchild, activities far outside its normal competence.
Our approach generalizes straightforwardly to human augmentation, and points to the additional possibility of gradually migrating memories, skills and personality, encoded in fragile and bounded neural hardware, to faster, more capacious and communicative, and less mortal external digital machinery--thus preserving and expanding the essential function of a mind, even as the nervous system in which it arose is lost. A mind and personality, as an information-bearing pattern, might thus be freed from the limitations and risks of a particular physical body, to travel over information channels and through the ether, and to reside in alternative physical hosts.

Introduction Traditionally, human central nervous systems (CNS) and electronic computation and communication devices have been linked via the bodily senses and musculature--an approach requiring only simple technology and incurring little medical risk. Unfortunately this straightforward avenue has very low information bandwidth: effectively a few kilohertz of sensory information (primarily vision) into the CNS, and a mere one tenth of that figure out. Much higher transfer rates are observed within the CNS. In particular, the corpus callosum connects the right and left cerebral hemispheres with 500 million fibers in the human. Each fiber signals on average at about ten hertz, for an aggregate rate of several gigahertz: about one million times the bandwidth of the senses. The corpus callosum connects to all major cerebral areas, offering a spectacular opportunity for electronic interaction. The primary challenges are the invasive nature and massive scale of any comprehensive link. In other publications we have described the design of "neural combs," which can be inserted non-destructively into nerve bundles to make contact with a large fraction of the fibers: they are scaled-up relatives of the cochlear implants used in nerve-deafness surgery. This paper describes experiments in which neural combs were implanted into the callosa of primates and connected to computers running adaptive algorithms that modeled the measured neural traffic, correlated it with sensory, motor and cognitive states, and later impressed external information on this flow.
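In outline, the bandwidth comparison above is simple arithmetic; the following sketch (in modern notation, taking the round figures quoted here at face value) reproduces the aggregate callosal rate and the ratio to the senses.

    # Back-of-envelope comparison of sensory versus callosal bandwidth,
    # using the round figures quoted in the text (all values approximate).
    sensory_in_hz = 5e3                     # "a few kilohertz" of sensory information into the CNS
    sensory_out_hz = sensory_in_hz / 10     # "a mere one tenth of that figure out"

    callosal_fibers = 500e6                 # fibers in the human corpus callosum
    rate_per_fiber_hz = 10                  # average signalling rate per fiber
    callosal_hz = callosal_fibers * rate_per_fiber_hz   # ~5e9, "several gigahertz"

    print(f"aggregate callosal rate: {callosal_hz:.1e} Hz")
    print(f"ratio to sensory input : {callosal_hz / sensory_in_hz:.0e}")   # ~1e6, "about one million times"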
The animals (squirrel monkeys) used in the experiments have a CNS about one two-hundredth the size of a human's, with a corpus callosum of fewer than ten thousand fibers, greatly simplifying both the surgical and computational aspects of the work. In each experiment a neural comb with two thousand microfiber tines at ten-micron separation, each carrying along its length one hundred separate connection rings, was carefully worked between the axons in the callosum of the experimental animal. After a week allowed for surgical trauma to heal, a cable bundle from the comb to a PDP-10 ten-teraops multiprocessor was activated, and signals from the tines were processed by a factor-analysis program. Once a rough relational map had been obtained, a functional map was constructed by presenting the animal with controlled sensory stimuli, and by inducing it to perform previously trained motor tasks, while correlating comb activity. The functional map was further refined by processing the responses to synthesized sensations introduced via the comb. After several days of stimulation and analysis, the PDP-10 had a sufficiently good model of the callosal traffic that we were able to elicit very complex and specific behavior, including behaviors that seem quite beyond the capacities of the unaugmented animals.
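The two-stage analysis can be summarized as follows. The sketch below is a minimal modern restatement, not the original PDP-10 code: it assumes the comb output is available as a samples-by-channels array and that each sample is labeled with the stimulus or task in progress, and all names in it are illustrative.

    import numpy as np

    def relational_map(signals, n_factors=50):
        """Stage 1: factor-analyze raw comb traffic into a low-dimensional
        'relational map' (approximated here by a truncated SVD).
        signals: (n_samples, n_channels) array of comb recordings."""
        centered = signals - signals.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        components = vt[:n_factors]        # factor loadings over comb channels
        scores = centered @ components.T   # factor activity over time
        return components, scores

    def functional_map(scores, labels):
        """Stage 2: correlate factor activity with labeled sensory/motor/cognitive
        episodes to give the factors a functional interpretation.
        labels: length n_samples array naming the condition during each sample."""
        functional = {}
        for condition in np.unique(labels):
            mask = labels == condition
            # mean factor activity during this condition, relative to overall baseline
            functional[condition] = scores[mask].mean(axis=0) - scores.mean(axis=0)
        return functional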
Our most notable results were obtained with animal number three (#3) of our five subjects. In one demonstration, we interfaced #3 to the Greenblatt chess program supplied with the PDP-10 software. We began by fast-training #3 to discriminate the individual chess pieces we presented. Fast-training is similar to conventional operant conditioning, but greatly accelerated, because the responses we seek and the intense rewards we generate involve fast, unambiguous callosal signals rather than clumsy physical acts. We then configured the PDP-10 to reward the animal (by generating callosal stimuli similar to those occurring naturally when tasty fruit is seen) for scanning the chess board each time its turn to move arose. During the scan, the callosal recognition and location signals for the various chess pieces are translated, by a program module we wrote, into a chessboard configuration, which is fed to the chess program, which returns a suitable move. Our program then stimulates #3's food-grasping behavior, directed at the piece to be moved: in consequence, the animal avidly grasps it. Next, the target square is singled out for attention, causing the piece to be moved there. The attractiveness of the piece is then reduced; the animal loses interest and releases it. It took several intense weeks of effort to "debug" this program. Among the problems we encountered was #3's inattention to the other pieces on the board: in early tries it would often incidentally upset them when reaching for the piece to be moved. We now counter this by directing toward the other pieces an aversion response we had noticed during the mapping process: as best we can determine, #3 now feels about a chess move as it would about a luscious fruit that must be gingerly teased out of a thorn bush. Another problem was the animal's wandering interest as it waited for its opponent to move. We solved this by a mild invocation of its response to certain predators: it now quietly but alertly, somewhat apprehensively, awaits the move, drawing no attention to itself.
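In outline, the demonstration reduces to the loop sketched below; every interface routine named in it is a hypothetical placeholder for the comb-stimulation and chess-program modules described above, not actual code from the experiment.

    # Illustrative outline of the chess-playing loop (all interface calls are hypothetical).
    def play_one_game(comb, chess_program):
        """comb: object exposing the (hypothetical) callosal recording/stimulation calls;
        chess_program: object exposing game_over(), best_move(board) and wait_for_opponent()."""
        while not chess_program.game_over():
            comb.reward_board_scan()                  # callosal 'tasty fruit' reward for scanning the board
            board = comb.read_board_scan()            # piece recognition/location signals -> board configuration
            move = chess_program.best_move(board)     # external chess program chooses the reply
            comb.raise_aversion(exclude=move.source)  # gingerly avoid upsetting the other pieces
            comb.stimulate_grasp(move.source)         # food-grasping response aimed at the piece to be moved
            comb.direct_attention(move.target)        # attention drawn to the destination square
            comb.reduce_attraction(move.source)       # interest fades; the piece is released
            comb.invoke_predator_wariness(mild=True)  # quiet, alert waiting for the opponent's move
            chess_program.wait_for_opponent()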
Another demonstration gave #3 more autonomy. We fast-trained the animal to recognize individual letters of the alphabet, and to scan strings of such letters it encountered. The letter strings were fed to a dictionary look-up program, whose output was then translated into appropriate recognition signals for the objects, events and actions named in the text. #3 soon learned to respond to the labels on containers, and to choose those whose contents were of interest (usually culinary). When the program is running, #3 also shows an interest in books, and registers reactions such as appetite, excitement, fear, lust and so on, appropriate to the stories it reads. Stories about food and outdoor adventures seem to be preferred: curious for an animal that was raised in an indoor breeding colony and has spent the last five years in small laboratory cages.
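The reading demonstration follows the same pattern; again, the sketch below is illustrative only, with hypothetical placeholders standing in for the comb interface and the dictionary look-up module.

    # Illustrative outline of the reading loop (all interface calls are hypothetical).
    def read_labels(comb, dictionary):
        """comb: object exposing letter-recognition and stimulation calls;
        dictionary: look-up module mapping words to the things they name."""
        while True:
            letters = comb.scan_letter_string()     # fast-trained letter recognition via the comb
            if not letters:
                break
            entry = dictionary.look_up("".join(letters))
            if entry is None:
                continue
            # impress recognition signals for the objects, events and actions named
            for referent in entry.referents:
                comb.impress_recognition(referent)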
In future work we plan to expand the behavioral latitude available to our animal subjects while they execute programmed tasks, by writing richer programs more responsive to the animal's internal imperatives, and also by providing means for the animal to invoke major programs on its own initiative. These extensions are, of course, interesting in the context of future applications to human interfaces.




EMAIL text archive, Kyoto University datacenter, December 2010

Date: Tuesday, 9 February 1999, 3:27 UT
To: Chickie Levitt <chickie@neuro.usc.edu>
From: Ushio Kawabata <ushio@kyotou.jp>
Translation: jp1->am1
Encoding: text:rsa-pubkey

Your musings yesterday on a permanent broadband mental link to the worldnet were very thought-provoking. I think you are right, it would allow the human mind to bootstrap itself in an effective way into an entirely new, and much larger, arena of possibilities. In the early stages the effect would be of an expanded mind, with the contents of the world libraries as accessible as one's own memories, and the computational capacities of the world's computers as available as one's own skills. As integration proceeded, one might slowly download one's entire personality into the net, being thus freed from all limitations of the body. It is hard, from our present standpoint, to even imagine what might be seen and reached from that perspective.

Have you any ideas on how to proceed? There was an article yesterday in Comp.Par on Andrew Systems' Crystal 3. It is probably powerful and small enough to serve as a data compressor for a link: only 1/20 cubic meter for 10 TeraOps. Perhaps one could carry it in a backpack for a perpetual connection?

********************

Date: Tuesday, 9 February 1999, 8:16 UT
To: Ushio Kawabata <ushio@kyotou.jp>
From: Chickie Levitt <chickie@neuro.usc.edu>
Translation: am1->jp1
Encoding: text:rsa-pubkey

Ushio-sama!
Well, it would still be a pain to carry your brain. A backpack compressor might offer higher bandwidth to the net, but would be much less convenient than a straightforward Eye-glass optic nerve interface (and considerably more risky). I've been thinking of a way around having to put all the processing in electronics, and still get higher overall bandwidth in a vastly more compact form. *If* we could get the neural connections to cooperate--to crossbar and compress the calloflow--we could save 99% of the computation and external communication, making a callosum interface practical, with a data rate low enough for a sat-cell relay. So then, you would have to carry around only a standard multiplexer and sat-cell transceiver. The hard parts of the operation could be distributed anywhere over the worldnet!
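(Rough numbers, taking the fiber count and firing rate from the 1971 article at face value and treating the 99% saving as a straight reduction; how events map to transmitted bits is left open:)

    # Back-of-envelope budget for the compressed callosal link, using the fiber
    # count and firing rate quoted in the 1971 article, and treating the 99%
    # saving as a straight reduction (the actual coding scheme is unspecified).
    fibers = 500e6                            # human corpus callosum fibers
    rate_hz = 10                              # average events per fiber per second
    raw_rate = fibers * rate_hz               # ~5e9 events/s presented to the interface
    compressed_rate = raw_rate * (1 - 0.99)   # ~5e7 events/s left for the sat-cell relay
    print(f"raw: {raw_rate:.0e} events/s  compressed: {compressed_rate:.0e} events/s")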

********************

Date: Tuesday, 9 February 1999, 8:18 UT
To: Chickie Levitt <chickie@neuro.usc.edu>
From: Ushio Kawabata <ushio@kyotou.jp>
Translation: jp1->am1
Encoding: text:rsa-pubkey

That would be artful - a few chips at your end, giving access to the world's data and processing power. Not only images and sounds, as with Eye-glasses, but, with callosum access, feelings, motor sensations and more abstract mental concepts, since the connection is to your cortical areas for those functions. One could be in touch with almost anything in the web with an intimacy now possible only with one's own thoughts! (On the other hand, there is the danger of useless net blabber all day long: like mental tunes that will not cease.)

Small problem: The crux of your suggestion is to build biological neural structures to do most of the job we have been doing in electronics. How does one persuade the neurons to, so conveniently, arrange themselves to compress your callosum flow for satellite transmission?

********************

Date: Tuesday, 9 February 1999, 8:19 UT
To: Ushio Kawabata <ushio@kyotou.jp>
From: Chickie Levitt <chickie@neuro.usc.edu>
Translation: am1->jp1
Encoding: text:rsa-pubkey

Well, that's the hard part, all right. I have been reading in sci.bio.research about gene hacking by the nerve-repair crowd at Hopkins. They've managed to develop viral vectors that infect neurons and bugger their genetic initiator sequences, so that neural stem cells begin differentiating, in mid-growth, into the program of just about any structure they want. They can grow an isolated callosum! - though the ends come out tangled, since there is nothing for them to connect to.

********************

Date: Tuesday, 9 February 1999, 8:19 UT
To: Chickie Levitt <chickie@neuro.usc.edu>
From: Ushio Kawabata <ushio@kyotou.jp>
Translation: jp1->am1
Encoding: text:rsa-pubkey

There must be many difficulties there. My friend Toshi Okada, who does gene engineering at Tsukuba, tells me that in embryology almost half the information required to properly grow cell structures comes from the previously grown structure: expressing the DNA code alone is not sufficient to build working assemblies in most instances. Though perhaps additional coding could be added to substitute for the missing external framework? That would be rather like building scaffolding in preparation for the construction proper.

********************

Date: Tuesday, 9 February 1999, 8:20 UT
To: Ushio Kawabata <ushio@kyotou.jp>
From: Chickie Levitt <chickie@neuro.usc.edu>
Translation: am1->jp1
Encoding: text:rsa-pubkey

They've done some of that, but still get some distortion. It gets better if the growth is started in generally the right kind of preexisting tissue.

I'm thinking of growing a couple of square centimeters of cortical tissue with callosal fibers that seek out and merge with an existing callosum. The DNA hackery would be encoded into an RNA virus deposited on the same electronic chip that contains the digital data interface. The chip would have chemical target sites for one end of the new nerve growth, and would be powered by body metabolism via an integrated ATP fuel cell. Implant the chip somewhere on the edge of the corpus callosum on the brain midline, and the virus will cause the surrounding brain structure to grow a biological data-compressing interface between the chip and the callosum.

The chip would have to be connected to some kind of external antenna to communicate, maybe a thin wire through the skull, like a hair.

********************

Date: Tuesday, 9 February 1999, 8:20 UT
To: Chickie Levitt <chickie@neuro.usc.edu>
From: Ushio Kawabata <ushio@kyotou.jp>
Translation: jp1->am1
Encoding: text:rsa-pubkey

Most interesting proposal! I'll ask Toshi if you can use some of Tsukuba's gene-modeling and embryology software to help you with the design. They've become quite good in the last few years.
I will contact you then.

Best wishes - Ushio


MILESTONES IN MENTAL AUGMENTATION

(side-bar to article in New Scientist, Stepping Out - The Mind Unbounded, February 16, 2010)

1780
Luigi Galvani demonstrates a connection between nerves, muscles and electricity by animating frog legs with electricity applied to nerves leading to muscles, thus hinting at how the internal workings of a mind could be coupled to external artificial devices.
1906
Santiago Ramón y Cajal and Camillo Golgi receive the Nobel Prize for developing nerve-staining methods and elucidating the detailed structure of the cerebrum and cerebellum, so providing a rough roadmap for later intervention.
1929
Hans Berger invents the electroencephalogram (EEG) for recording electrical activity in the human brain: a first crude, one-way channel into the functioning of the mind.
1953
James Watson and Francis Crick determine the structure of DNA and its mode of replication, and suggest its role as the control code for biological growth, so laying the foundation for molecular biology, and eventually the engineering of biological structures, including neural assemblies for electronic interfaces.
1953
Wilder Penfield produces maps of the cortex by means of electrical probes of its surface during brain surgery--evoking specific memories, sensations and motor responses by stimulating specific locations, thus establishing the geographic nature of mental organization, and incidentally providing the first examples of artificial interaction with the internal workings of the mind.
1959
Robert Noyce and Jack Kilby invent the integrated circuit, a way of placing many electronic components on a single piece of crystal, initiating at least a half century of exponential growth in electronic complexity, the creation of mind-like machines, and eventually the merger of biological and artificial minds.
1960
Frank Rosenblatt develops and reports on learning experiments with the Perceptron, an artificial neural net: a way of organizing electronic components in a structure that loosely parallels the organization of biological brains.
1967
George Brindley and William Lewin implant an electrode array into the visual cortex of a congenitally blind subject, and generate visual phosphenes (spots) by camera-controlled computer activation of this array, restoring some sight to a nerve-blind volunteer, and providing an early major demonstration of a computer-nervous system symbiosis.
1969
Dexter Wyckoff and Rajiv Kamar demonstrate the neural comb, a low-noise, high-bandwidth external channel to the nervous system, providing for the first time potentially total external access to higher mental functions.
1971
Wyckoff, Kamar and Fred Wright use a neural comb with a PDP-10 computer to enable a squirrel monkey to play chess and to read, an early example of mental augmentation by electronic means.
1974
Walter House and Janet Urban install a cochlear implant driven by an external computer, restoring partial hearing to a nerve-deaf patient, and creating a successful medical niche for electronic substitution of lost sensory functions.
1982
William DeVries implants the first permanent artificial heart in a human subject, causing a major shift in the public perception of the relation of "natural" biological functions to "artificial" mechanical devices.
1987
Josephine Bogart and Paul Vogels install a neural comb in the corpus callosum of an epileptic patient, and program an external computer to interrupt seizures: the first human application of a neural comb.
1991
Carver Mead develops an artificial retina, integrating tens of thousands of artificial neurons on an integrated circuit, developing some of the analog techniques used in the electronic portions of future "neurochips."
1994
Ushio Kawabata develops a successful predictive model of human cortical behavior building on Edelman's "neural darwinism" formulation, an essential step in providing the engineering environment used to design the neural structures grown by neurochip viruses.
1997
Ushio Kawabata and Chickie Levitt develop an information-efficient method of deriving functional neural anatomy from dense observations of nerve signals, so laying the foundation for the mental mapping process used to adapt a neurochip to its host.
2000
Chickie Levitt and Toshi Okada develop a genetic design for a neural interface between the human callosum and a data transmission integrated circuit. This design is encoded into RNA viruses which are part of neurochip implants, and act by infecting nearby neural tissue, so causing the growth of connective and data-compressing neuron structures that connect the electronic portion of the neurochip with the brain.
2003
Chickie Levitt combines previous electronic, genetic and neural innovations to produce the first complete, functional, self-connecting neurochip.
2005
The first experiment with neurochips is a partial success. A neurochip-augmented chimpanzee demonstrates an equivalent human IQ of 190 for two months, before dying of a brain tumor.