The Human-Computer Race: Who’s Covering the Footage?

Like robots? This is sci-fi sans the “fi.” It’s a research paper that I wrote for an English class in 2005. As I post this, it’s now 2016, eleven years later, and I’m amazed at how NON-outdated this paper still is.

Anyway, here it is:

The Human-Computer Race:
Who’s Covering the Footage?
May 12, 2005

A monkey sat in a research laboratory, playing a simple computer game in which he chased a moving red dot on a monitor’s display. Remarkably, the primate was not using a joystick; this monkey had been trained to move the cursor using thoughts which fired signals to a brain-implanted computer chip (Shelly 264). Scientists have lofty and noble aspirations, such as enabling paralyzed people to regain muscle control, but integrating computer machinery into our bodies is a subject that raises a number of ethical concerns. The public needs to be involved in discussing the parameters of the potentially life-altering research being conducted by the scientific community. A sheep was cloned in 1996 and the world broke out in debate. A brain implant enables a man to play Pong using thought control, and who is talking? (Maney).

Inquisitive minds have been interested in the limits of computational science since before the invention of computers (Stanford). Until recently, there was a distinct and comforting gap between man and machine. A quick look at the two fields of computer science and brain science may help clarify how this gap is narrowing now.

Simplistically speaking, a computer consists of devices which operate electronically using a binary number system. The signals are binary because the fuel that computers operate on is electricity, and electricity has only two states, on and off. Capacitors are very small electrical units on an internal grid; each is either charged, represented by a one, or uncharged, represented by a zero. Each one or zero is a “bit” and eight bits make a “byte.” A byte represents one character, be that a letter, a punctuation mark, a symbol, or a space. While capacitors store bits, transistors help convey these bits of electrical signals along wired highways by acting as switches that open and close in response to electrical pulses. Obviously the mechanics are much more complex than described above, but for the purpose of this paper, what is important to understand is how dissimilar the structure of a computer is to the structure of a brain. The evolution of computer intelligence has followed its own pathway, not along a course modeled by how the brain works.
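
To make the bit-and-byte idea above concrete, here is a minimal sketch in Python (the language and the sample text are my own choices, and it assumes a simple ASCII-style encoding where one character occupies one byte):

```python
# Minimal illustration, assuming one character = one byte (eight bits),
# as in plain ASCII text. Each bit corresponds to a charged/uncharged state.
for ch in "Hi!":
    bits = format(ord(ch), "08b")  # the character's eight binary digits
    print(f"{ch!r} -> {bits} (8 bits = 1 byte)")
```

Running it prints the eight ones and zeros behind each character, which is all a capacitor grid ultimately stores.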

Knowing this, computer guru Jeff Hawkins, the inventor of the Palm Pilot, turned his focus to understanding the brain. But his search for explanatory literature “turned up empty.” He realized “that no one had any idea how the brain actually worked” and that Artificial Intelligence scholars weren’t interested (7). Hawkins, obsessed with discovering the mysteries of the brain, enrolled in a biophysics program (8). In his book, On Intelligence, he points out that, in the past, scientists tended to focus on the differences within the brain: which lobe did what, for example. Hawkins postulated that the range of possible skills that the brain can “learn” is too vast for every possible “skill” to have its own special area in the cortex. Therefore, it seemed more likely that there was some common mechanism, shared by the cells of the cortex, that was capable of learning. Hawkins recognized that solving the mystery of this common cortical algorithm was crucial. He describes and gives substantiation for a revolutionary model of how the brain operates with a dynamic set of spatial and temporal patterns (57). Our thought processes are a manifestation of predictions originating from these chemically driven patterns associated in our neocortex. Within the brain, there is a multi-level hierarchy of pattern recognition, with more input flowing from stored information than from our senses (Hawkins 107-116). As with computer science, there is a point of primary relevance to be extracted from Hawkins. As researchers learn more and more about how the brain works, and as nanotechnology improves, the applications of developing molecule-sized machinery grow. The roads of computer science and brain science have merged and we are on the interfaced highway.
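
Hawkins’ actual cortical model is far richer than anything a few lines can show, but a toy sketch may make the “prediction from memory” idea tangible. Everything below (the stored sequences, the function name) is invented for illustration and is not taken from On Intelligence:

```python
# Toy sketch of prediction from stored patterns: a "memory" of learned
# sequences supplies the next element, so recognition leans on stored
# information rather than on fresh sensory input alone.
learned_sequences = {
    ("do", "re"): "mi",          # hypothetical stored melodic pattern
    ("red", "amber"): "green",   # hypothetical stored traffic-light pattern
}

def predict_next(observed):
    """Return the remembered continuation of a partial pattern, if any."""
    return learned_sequences.get(tuple(observed), "<no prediction>")

print(predict_next(["do", "re"]))      # -> mi
print(predict_next(["red", "amber"]))  # -> green
```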

Many people use interfaces to interact with computers on a daily basis. For instance, Microsoft’s Windows program is an interface that converts input from a human operator into digital code usable by a computer. This program also translates the information that the computer has processed into a graphical format that the human operator can understand. What most of us don’t know is that scientists have now developed interfacing directly between our brains and computers. Over six years ago, doctors successfully implanted a device in a man’s brain that allowed him to “control a computer with his thoughts” (Herberman). And this was not their first successful interfacing operation in a human. The implantation involves the insertion of minuscule “hollow glass cones” which contain a chemical substance that encourages the neurons to grow inside the cones. Neurons communicate by firing electro-chemical impulses across their long axons to other brain cells. When the cultivated neurons develop inside these implanted cones, their axons grow to chemically connect with micro-thin gold wires inside the cones. These wires act as conductors of the electro-chemical impulses transmitted by the axons, and the current can be conveyed to a device just under the scalp. From there, the signal can be converted to a radio broadcast, picked up by a receiver, changed back into a digital signal, and processed by a computer (Herberman).
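
The signal path Herberman describes runs from neuron to wire to a device under the scalp, then over radio to a receiver and into a computer. The sketch below models those stages as plain functions; all names and numbers are illustrative assumptions, not details of the actual implant:

```python
# Schematic of the described pipeline: neural impulse -> gold-wire current ->
# under-scalp transmitter -> radio receiver -> digitized sample -> computer.

def neuron_fires(strength: float) -> float:
    """Electro-chemical impulse picked up by the wire inside the glass cone."""
    return strength

def transmit_over_radio(current: float) -> float:
    """The device under the scalp broadcasts the signal; assume no loss here."""
    return current

def digitize(received: float, levels: int = 256) -> int:
    """The receiver converts the analog signal back into a digital value."""
    return min(int(received * levels), levels - 1)

impulse = neuron_fires(0.72)  # hypothetical impulse strength on a 0..1 scale
sample = digitize(transmit_over_radio(impulse))
print("digital sample handed to the computer:", sample)
```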

Roy Bakay, a neurosurgeon performing these operations, says that “after some training, the patient is able to ‘will’ a cursor to move and then stop on a specific point on the computer screen” (qtd. in Herberman). Developers believe that patients will be able to move more than digital cursors. Electrical currents could be used to stimulate the muscles of paralyzed people, allowing them to walk again. Scientists at a laboratory in Canada have been working on a similar project for years. These researchers believe that a switch “that is activated by signals measured directly from an individual’s brain” will enable the use of “assistive appliances, computers, and neural prostheses.” The results of this research indicate that imagined finger and foot movements produce activation that can be detected with accuracy “comparable to accuracies using actual finger [and foot] movements” (Interface Project).
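
How an imagined movement becomes a usable command can be pictured as a simple “brain switch”: if the measured activation crosses a threshold, the system issues an action. The feature values and threshold below are made up for illustration and are not the Canadian group’s actual method:

```python
# Minimal "brain switch" sketch: a single feature extracted from the brain
# signal is compared against a threshold to produce an on/off command.
THRESHOLD = 0.6  # assumed decision boundary, purely illustrative

def brain_switch(feature_value: float) -> str:
    return "cursor: move" if feature_value > THRESHOLD else "cursor: hold"

# Hypothetical readings during rest vs. imagined finger/foot movement.
for value in (0.20, 0.75, 0.90):
    print(value, "->", brain_switch(value))
```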

The applications of brain-computer interface (BCI) technology are far-reaching. Kevin Maney, of USA Today, reports that “devices are already regularly implanted in brains to help people who have severe epilepsy, Parkinson’s disease, or other neurological disorders.” One of the first implantees was a woman with Lou Gehrig’s disease (Herberman). Neural bioelectronics are even used to control bladders (Maguire). And neurologist Richard Restak devotes a chapter of his book, The New Brain, to discussing sensory substitution, which is of great interest to blind people. Corroborating Hawkins’ research, Restak describes the plasticity of neural tissue, characterized by the brain’s “capacity for change” (8). It’s this plasticity theory that spurred Paul Bach-y-Rita, Kurt Kaczmarek, and others to develop a microelectronic system for displaying visual patterns on blind people’s tongues. The system uses eyeglasses embedded with miniature cameras and an electrotactile orthodontic device which “teaches” the brain to “see” (Bach-y-Rita). According to Kaczmarek’s website, kaz.med.wisc.edu (at the University of Wisconsin, where he and Bach-y-Rita work), the tongue unit could also be used as an output device. That is, the tongue could not only receive information, but, by adding switches, it “could be used to control other devices.” This type of research gives hope to more than just medical professionals.
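
The tongue-display idea boils down to remapping a camera image onto a small grid of electrodes. The sketch below shows that remapping in the crudest possible form; the grid size, brightness scale, and stimulation range are assumptions for illustration, not the Wisconsin group’s specification:

```python
# Rough sketch: downsample a square brightness image (rows of values 0..1)
# to a small grid of electrode stimulation levels (0..255) for the tongue unit.
def image_to_electrodes(image, grid=4):
    size = len(image)
    block = size // grid
    levels = []
    for r in range(grid):
        row = []
        for c in range(grid):
            cells = [image[r * block + i][c * block + j]
                     for i in range(block) for j in range(block)]
            row.append(int(255 * sum(cells) / len(cells)))
        levels.append(row)
    return levels

# A hypothetical 8x8 "camera frame": a bright vertical bar on a dark background.
frame = [[1.0 if 3 <= x <= 4 else 0.0 for x in range(8)] for _ in range(8)]
for row in image_to_electrodes(frame):
    print(row)
```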

One of the most interested groups is the military. It comes as no surprise that agencies such as the Defense Advanced Research Projects Agency (DARPA) are supporting Brain Machine Interface (BMI) research. According to Maney, “DARPA envisions a day when a fighter pilot, for instance, might operate some controls just by thinking.” Ultimately, if people were able to control the movement of objects with their thoughts, a soldier could fly an unmanned plane, fire ammunition, and run for cover at the same time. If one sensory organ is being used to capacity, another organ can be adapted to bring in the sensory input in an unconventional way. Also, the natural range of our human scope of perception will be increased. A soldier, for example, could be outfitted with a tongue interface system that perceived infrared (Restak 167). Clearly, it is important for a country’s military to be technologically savvy, and the general public can’t be informed of everything the government is doing, for the sake of national security, but people do need to know what’s been happening in the field of brain-computer interfacing.

So why is the information not being discussed? An excerpt taken from the meeting minutes of a top-secret federal interagency group called Perfect People 2020 (PP2020) should give readers something to talk about. The minutes were mistakenly provided to George Annas, a published bioethicist, by what is believed to be a disgruntled PP2020 doctor who is angry about information disclosed on a surveillance tape. On the tape, the Secretary of Health and Human Services and his speechwriter discuss plans for controlling doctors and patients, and they talk about George Orwell’s “Big Brother” and Aldous Huxley’s Brave New World. The discussion “reminds” the secretary about their “embryo development experiments and the Armed Forces Behavior Modification Program [...] which makes basic training look like a cub scout jamboree weekend.” The secretary also states, “Huxley’s got it right: a drugged citizen is a happy citizen” (qtd. in Annas 109). The embryo development experiments discussed in section L103B of the minutes involve implanting “monitoring devices at the base of the brain of neonates in three major teaching hospitals.” These implants would not only monitor the subjects but also “play subliminal messages directly to their brains” (Annas 104).

“The long-range plan” of this government experimentation is “to screen all human embryos, and perhaps enhance their characteristics” (Annas 104). The “Perfect People” agency had agreed to maintain facilities outside of the United States which would “house women in persistent vegetative states.” The women’s uteruses are needed because the research requires a large number of “children without having to worry about consent or unwanted publicity” (Annas 104). This information from the minutes of PP2020’s 1993 meeting is shocking, but not complete. The editor of the anthology in which Annas’ article is published notes that after U.S. Attorney General Janet Reno sued to have the minutes suppressed, “a federal judge reviewed the document and personally excised all the material that could affect national security” (Blank 99). That explains why published fragments, like those that discuss infant brain implants and embryo generating factories, are rife with the insertion: “[NATIONAL SECURITY DELETIONS]”.

While this information is hard to believe, it’s not hard to believe that the government would be interested in the progress of this very real technology. Paul Wolpe, an ethicist at the University of Pennsylvania, notes that “the U.S. military is pouring money into neurotechnology research” (qtd. in Butcher). So why aren’t people talking about the implications? The Hastings Center Report reveals that “unlike the scientific community at the advent of genetic technologies, the computer industry has not yet engaged in a public dialogue about these promising but risky technologies.” The Report insightfully adds that “avoiding discussion, simply relying on the principles of free scientific inquiry, is itself a moral stance” (Maguire). Neil Postman, in the anthology Computers, Ethics, and Society, publicizes the fact that he has heard many computer experts speak about the benefits of computer advancements, but he has only heard one person speak about the disadvantages. This biased reporting on the subject makes him “wonder if the profession is hiding something important” (102). And Bill Joy, chief scientist at Sun Microsystems, would probably tell Mr. Postman that he is right to wonder.

Joy is one scientist who felt compelled to speak up. He admits with horror the realization that he “may be working to create tools which will enable the construction of the technology that may replace our species” (116). Joy’s colleagues, Ray Kurzweil, inventor of the first device that could read aloud to the blind, and Hans Moravec, founder of the world’s largest robotics research program, contend that the computing power of the near future will enable a gradual replacement of “ourselves with our robotic technology” (117). The theory that our human species will become extinct at the hands of a robot species sounds fictitious, but some computer experts have come to terms with the idea. When Joy approached Danny Hillis, cofounder of Thinking Machines Corporation, with his concerns, Hillis surprised Joy by calmly stating “that the changes would come gradually, and that we would get used to them.” Hillis figures, “I’m as fond of my body as anyone, but if I can be 200 with a body of silicon, I’ll take it” (qtd. in Joy 114). Some scientists speculate that one day we’ll be able to download our brains to computer equipment and our bodies will basically be repairable cases. On Kurzweil’s website, kurzweilai.net, the robotics expert also suggests “replacing imperfect DNA with software.” It’s worthwhile taking a look through this website, which states “advances in genomics, biotechnology, and nanotechnology have brought the possibility of immortality within our grasp.” The hope of immortality can be a powerful force. But is immortality still immortality if we are not human beings?

How “human” can a computer get? Back in the 1930s, the “founder” of Computer Science and Artificial Intelligence programs, Alan Turing, wrote that it is possible to design machines which emulate the brain. Turing was a brilliant mathematician and an expert on the limits of computation. In the decades that followed, though, most scholars who knew of Turing’s work would try to discredit it (Stanford). Professor John Searle, who teaches at Berkeley (and goes to robotics conferences with Bill Joy), argues that intelligence is a product of intentionality, which computers don’t have. A mind works by conveying information purposely through symbols such as language, whereas a computer only manipulates the symbols without knowing what they mean. “They have only syntax, not semantics,” he explains in his famous paper “Minds, Brains, and Programs.” What Searle and others believe can’t be replicated by machinery is cognition, the mental process of knowing. More and more, though, scientists such as Jeff Hawkins and Ray Kurzweil are emerging with different philosophies. Hawkins predicts that we will be able to make “brainlike” machines. The sensory systems will be different, and the speed of silicon transmissions is a million times faster than neural transmissions, so these machines will exceed human capabilities (223). Kurzweil believes that human-computer interfacing and nanotechnology will enable us to mend ourselves on an atomic level. People are working on this technology; isn’t it time that pertinent issues be discussed?

Ethical concerns, for instance, need to be as public as they were in the debate about genetic alterations. Besides the creepy possibilities of totalitarian control or being succeeded by an “artificially intelligent” species, there are subjects that demand debate now because brain-computer interfacing is being used now. Carson Strong, in Medicine Unbound, writes “an intervention that is life-saving, rehabilitative, or otherwise therapeutic can be consistent with the principle that the physical integrity of the body should be preserved even if it involves a bodily ‘mutilation’ or intrusion, provided that it promotes the integrity of the whole” (21). In other words, it is considered okay to perform augmentative procedures on people as long as the motivation is a curative one. Given the history of human nature, though, innovations that enhance our capabilities will not be used strictly for therapeutic purposes. To presume such would be both naïve and dishonest. A glance at how performance enhancing drugs have been used demonstrates this. Methylphenidate, for instance, was marketed to help children with Attention Deficit Hyperactivity Disorder (ADHD). A 2003 article in The Lancet reports that “up to a third of boys are on the drug, even though many of them do not have ADHD” and that “there is also evidence that many wealthier persons are now choosing to give the drug to their well-behaved children” (Butcher). This trend to make our children better, stronger, and faster raises serious concerns about the use of brain-computer interfacing. Keeping up with the Joneses in a bionic era could be dangerous.

In support of this “inviolability” principle, Michael Dertouzos, director of the MIT Laboratory for Computer Science, believes that “unnecessarily tapping the brain is a violation of our bodies, of nature, and for many, of God’s design” (77). Some church leaders and many religious websites are warning that these implantable chips are the mark of the devil. And while most scientists avoid religious arguments in their articles, the name of Faust is brought up in numerous documents, implying that we may sell our soul for this new technology. To some, that may sound crazy, but to those who believe in the human spirit, converting ourselves into robotic immortals does sound like the plotline for a futuristic rendition of Faust. Regardless of spiritual beliefs, the sanctity of humanism is a subject that pertains to everyone.

What if brain-computer interfaces changed our awareness of who we are? Paul Wolpe notes that it is “the progressive loss of cognitive function that characterizes Alzheimer’s” and that this cognitive loss is described as a “loss of personality.” He believes that brain-computer interfaces that are meant to enhance our cognitive abilities will lead to the same kind of “loss of personality” (qtd. in Butcher). And the psychological impact goes beyond individual parameters. Maguire urges us to think about what will happen to the “boundaries between self and community” if our brains are connected with other brains. A wireless network incorporating our thoughts would add new meaning to the term “peer pressure.” And do we risk losing humanism by increasing the capabilities of our senses beyond their intended use?

“Wearable” computers that act as “visual memory prosthetics and perception enhancer[s]” have been invented (Maguire). This technology permits people to see things digitally, allowing for “freeze-frames” of images. Besides being able to identify the letters on a moving tire, or seeing “unseeable” wavelengths of light, hearing could be similarly enhanced (Maguire). And remembering a person’s name will no longer be a problem. The computer-enhanced brain will “smart tag” faces. Steve Mann, a Computer Engineering professor at the University of Toronto, discusses the applications of networking with this “BodyNet”: “When we purchase a new appliance, we will ‘remember’ the face behind the store counter. A week later, our spouse, taking the appliance back for a refund, will ‘remember’ the name and the face of the clerk she never met” (qtd. in Maguire). Nicholas Negroponte, director of the Media Lab at MIT, prophesies about the future of brain-computer interfacing and networking. With seemingly no qualms, he envisions a “collective consciousness” called “the hive mind” which is about “taking all these trillions of cells in our skulls that make individual consciousness and putting them together and arriving at a new kind of consciousness that transcends all the individuals” (qtd. in Maguire). Researchers such as these don’t seem worried that we may over-stretch our human aptitudes.

Why aren’t people concerned about where this technology may be taking us? In 2002, Washington Science printed an article about Kevin Warwick, a cybernetics professor, who would soon undergo an operation to have a chip interfaced with his nervous system. Warwick’s wife was also going to have a chip implanted, and the couple hoped to be able to “communicate through computer-mediated signals” (qtd. in Vogel). According to the website kevinwarwick.org, the experiments have been successful. Warwick finds this progression “tremendously exciting.” In response to the question “can we in the future link extra memory into our brains?” Warwick replies: “why shouldn’t we do something like that?” (qtd. in Vogel). Apparently, biochemist Peter Fromherz believes that “such questions are premature” (qtd. in Vogel). So it’s too soon to be questioning brain interfacing? Kaczmarek, via his website, answers a “frequently asked question” about future plans. His research team’s goal is to “aim for nothing less than transformation of what is meant by ‘human-machine interface.’” While attitudes among researchers vary, one thing is clear: the subject of brain-computer interface technology has somehow been kept out of mainstream discussions.

Bill Joy contemplates this lack of concern amongst his colleagues:

They know about the dangers [and] still seem strangely silent. When pressed, they trot out the ‘this is nothing new’ riposte – as if awareness of what could happen is response enough. They tell me, there are universities filled with bioethicists who study this stuff all day long. They say, All this has been written about before, and by experts. They complain, Your worries and your arguments are already old hat. (122)

Joy reveals that he doesn’t “know where these people hide their fear” while he seems wracked with anxiety about the repercussions that his work will have on society (122). Craig Summers and Eric Markusen explain how decent people can perform harmful work in their essay in Computers, Ethics, and Society. Scientists may “dissociate” their feelings from their work, shutting out cognizance of what it is that they are actually doing (219). They also may rationalize, or make justifications for, actions that would otherwise impinge on their moral code. And many scientists are “compartmentalized”; they feel that the small roles that they play are too inconsequential to make a significant difference, or they feel that the wave of technology is going to roll through, with or without their work (220). Sometimes “a preoccupation with procedural and technical aspects of work” anesthetizes a technician’s morals, particularly if the researcher has been trained to follow orders (221). Furthermore, one of the primary psychological mechanisms driving many researchers seems to be one of “technological curiosity” (222). People like Warwick and Hawkins appear to fall into this category. They seem to have a passion for solving puzzles, and with each solution, the excitement grows. People with that kind of passion about something find it hard to pull away because the pursuit seems to become their purpose in life. So is it these scientists, then, who are not instigating discussion?

Government agencies and researchers are not the only ones keeping quiet. Our acceptance of technological advances, as a society, may have a role in this “strange silence.” Postman describes how we have overloaded ourselves with information to a point where we, as the masses, can’t comprehend how things work (107-108). We turn on a light switch, and there is light. We use our phone to capture images which we can send wirelessly to another phone in another country. We enhance our brain-power with little chips? Postman cleverly refers to the computer as the “technological messiah” presented to us by the experts (108). At the center of Postman’s paper is the age-old notion that technological advancements produce both winners and losers. He writes: “It is to be expected that the winners – for example, most computer professionals – will encourage the losers to be enthusiastic about computer technology” (103). Postman goes on to say that if the losers “grow skeptical, the winners dazzle them with the wondrous feats of computers, many of which have only marginal relevance to the quality of the losers’ lives” (103). The societal complacency that Postman credits to “information overload” complements Hillis’s theory about how cyberkinetics will seep into our lives. Where electronics are concerned, we tend to be quietly accepting; it is simply too difficult to actually understand the complexities of the latest innovations.

Therein lies the problem. As engineers become incredibly adept at miniaturizing circuitry, and as the secrets of the human brain unfold, we edge closer and closer to a merging of mind and machine. Experimentation is advancing. Scientists seem to agree that a total human-machine conversion wouldn’t be possible until far into the future, but are we going to sit on the sidelines as this race progresses? Is Danny Hillis right? Will the metamorphosis to a computerized being happen so gradually that we will just accept our fate? The benefits of this technology could be extraordinary. The detriments could include the end of the human spirit as we know it. A global networking of our “wired” brain cells could enable doctors to assist in surgeries halfway across the world. But the same technology could also take away our individuality and freedom.

It is not too early to be asking questions. For whatever reason, the voices of experts haven’t been loud enough. Despite the fact that we may not understand cyberkinetics, we should be able to understand the life-altering potential of this technology. Foreign governments, perhaps with goals we don’t approve of, will have access to brain-computer interfacing and networking technologies. There is no stopping this research. And, given the advantages that human-machine interfacing could provide to people with disabilities, we shouldn’t want to stop the progress. But we should, at the very least, be discussing who will have control of such powerful mechanisms.

 

Works Cited