Is it in the best interest of mankind to build a human machine?
That's the topic of a paper I wrote for a "legal computing" class last semester. I've been asked by quite a few people for a copy and so I decided to just put it up here.
Advancement of the species is the ultimate goal of any human civilization. As the distinction between computers and man becomes increasingly blurred, we, as a society, will be left to decide whether human, or "conscious," machines can play a beneficial role in our lives. Machines with human-level intelligence will be possible in the future, and they will be made. Initially, human machines will be inherently beneficial to society: they will not only push the boundaries of science and imagination, a wholly human ambition, but will also allow us to extend our lives, and may eventually lead to a mechanical immortality. Up to now, the application of ethics to machines, including programs, has held that the actions of a machine are the responsibility of its designer and/or operator. In the future, however, it seems clear that we are going to have machines whose behavior is an emergent and to some extent unforeseeable result of design and operation decisions made by many people, and ultimately by other machines. It is these unforeseeable results that might very well put mankind in harm's way.
There are usually three stages in examining the impact of future technology: the sheer fascination of its potential to overcome age-old problems, then an acknowledgment of a new set of problems that will inevitably accompany these new technologies, followed by the realization that the only feasible and responsible path is one that can provide the promise while managing the peril. The best interests of mankind lie somewhere between the promise and the peril. The promises of a human machine are many. They would provide great opportunities for improving the material circumstances of human life. A machine with human-level intelligence can perhaps be viewed as the next step in evolution as it frees the human mind from its severe physical limitations of scope and duration.
As with most revolutionary promises, attached to it is the possibility of revolutionary peril. It has been said that artificial intelligence research makes possible the idea that humans are automata — an idea that results in a loss of autonomy or even of humanity. Some futurists suggest that once the human race brings into existence entities of higher, perhaps unlimited, intelligence, its own preservation may seem less important.
Arguments over the desirability of a technology must weigh the benefits against the risks, the promise against the peril. The peril associated with human machines could be the worst possible: extinction. As has always been the case, any given technology can be deliberately misused to the detriment of humanity, but unlike all previous technologies, machines with human (or better) intelligence might make that decision for mankind.
Discussion of Research
Artificial intelligence is broadly defined as anything that a computer does that would otherwise be considered a human trait. It is the part of computer science concerned with designing systems that exhibit the characteristics we associate with intelligence in human behavior: understanding language, learning, reasoning, solving problems, etc. While the study of artificial intelligence is one of the newest scientific and technological disciplines, the study of intelligence is one of the oldest. For more than 2000 years, philosophers have tried to understand how seeing, learning, remembering, and reasoning could, or should, be done (Russell and Norvig, 3). The study and creation of artificial intelligence directly relates to a better ability to understand humanity. The chance to learn more about mankind, to learn what it is to be human, could be one of the most rewarding benefits of a human machine.
There seems to be an agreement that there are definite short-term benefits and long-term risks associated with a human machine. In the short term, the benefits of increasing the intellectual power of machines will be seen as a great boon to humanity. There are already hundreds of contemporary examples of "narrow" artificial intelligence, that is, machines that can perform well-defined tasks that we regard as examples of intelligent behavior when performed by humans, including diagnosing blood cells and electrocardiograms, guiding cruise missiles, proving mathematical theorems, playing games such as chess at master level, and many others.
Technological progress in other fields will be accelerated by the arrival of human-level artificial intelligence — it is a true general-purpose technology. It enables applications in a very wide range of other fields. In particular, scientific and technological research (as well as philosophical thinking) will be done more effectively when conducted by machines that are smarter than humans. Overall, technological advancement will be increased. For at least the next 30 years, computers based on human brains will be far too useful to be suppressed. Military and economic forces alone will be enough to legitimize the advancement of the machines, not to mention the ability to relieve humans of many everyday chores. Among a slew of other things, they will become smart enough to teach children, clean up around the house, drive cars, provide sex, and help human experts in decision making. They will do most of the work that used to require humans and in doing so will create great wealth for the entire planet (de Garis).
Ray Kurzweil, a well-respected author and inventor, and perhaps the world's most widely credited futurist, says that one of the most exciting benefits of a human machine will be a virtual immortality. It will be possible to "upload" the brain into a computer: knowledge, memories, loves, goals, an entire existence. These machines will be able to convince us that they are conscious by mastering the delicate cues that humans now use to determine consciousness in other humans (Kurzweil, Live Forever).

It is clear that most, if not all, short-term effects are beneficial to mankind, but the long-term risks that can arise from human machines can be described as nothing less than catastrophic. Artificial intelligence is a truly revolutionary prospect because it can be expected to lead to the creation of machines with intellectual abilities that vastly surpass those of any human. It would be a mistake to conceptualize machine intelligence as a mere tool. The scenario in which machines with general-purpose intelligence are created needs to be given serious thought: such machines would have independent initiative, could make their own plans, and might be better viewed as persons than as machines. Many of those well-versed in the field of artificial intelligence share the sentiment that if mankind can indeed create machines that exceed humans in the moral and intellectual dimensions, then it is bound to do so. It is simply seen as the next step on the evolutionary ladder.

Most leaders of the artificial intelligence field agree that by 2020, a $1,000 personal computer will have the processing power of the human brain: 20 million billion calculations per second. By 2030, it will be possible to scan the human brain and recreate its design electronically. By 2050, $1,000 worth of computing power will equal the processing power of all human brains combined (Kurzweil, Live Forever).
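Taken together, Kurzweil's 2020 and 2050 claims imply a remarkably short doubling time for computing power per dollar. The sketch below checks that arithmetic; the 20-million-billion (2 × 10^16) calculations-per-second figure is the one cited above, while the roughly 10-billion-person population used to stand in for "all human brains" is an illustrative assumption, not a figure from the article:

```python
import math

# Kurzweil's cited figures (Live Forever):
#   2020: a $1,000 PC matches one human brain, ~2e16 calculations/sec
#   2050: $1,000 of computing matches all human brains combined
brain_cps = 2e16        # calculations per second, one brain (cited figure)
population = 1e10       # assumed ~10 billion humans by 2050 (illustrative)
years = 2050 - 2020     # interval between the two claims

# To go from one brain to all brains, capacity per dollar must grow
# roughly 10-billion-fold over those 30 years.
doublings = math.log2(population)      # doublings needed for that growth
doubling_time = years / doublings      # implied years per doubling

print(f"{doublings:.1f} doublings needed in {years} years")
print(f"implied doubling time: {doubling_time:.2f} years")
```

Roughly 33 doublings in 30 years, or one doubling every eleven months or so: faster than the 18-month pace usually quoted for hardware, which is part of why these projections are contested.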
The figures help to paint a very powerful picture of where humanity's place in the intellectual food chain will be, or won't be as it were. George Dyson, author of Darwin Among the Machines, writes, "In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines" (Joy).
Professor Hugo de Garis is caught, perhaps more than anyone else, between the promise and the peril of a human machine. He is leading a group that is designing and building the world's first "artificial brain." The "brain," he says, will consist of a billion neurons within four years. Human brains have roughly 100 billion neurons (de Garis). He notes that while massive computational speed and size do not automatically lead to massive intelligence, they are prerequisites. He not only believes that these machines could become smarter than human beings, but that they could "truly be trillions and trillions and trillions of times greater."
The future, as told by de Garis, will consist of humanity split between two major ideological groups. On one side will be those who think that the creation of these super-intelligent beings is the destiny of the human species: the ultimate goal of producing the next dominant species. The other side will belong to those who believe that building these human (or better) machines means accepting the risk that the machines may eventually decide the human species is inferior and annoying, and might call for its extinction. It is along these lines that de Garis commented, "I'm glad to be alive now. I fear for my grandchildren. They will see the horror, and they will be destroyed by it." He is not alone in his dire vision of mankind's future.
Bill Joy, Chief Scientist and Corporate Executive Officer of Sun Microsystems, has spoken at length about his worries for the future of mankind. He stresses that we need to proceed with great caution because we tend to overestimate our design abilities, which, in the case of human machines, could result in our extinction (Joy). Joy states that "We are creators of new technologies and stars of the imagined future, driven despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining."
Hans Moravec, the Principal Research Scientist in the Robotics Institute of Carnegie Mellon University, sees the future from a slightly more optimistic, but equally alarming, perspective. Like most others well-versed in the field, he believes that the development of intelligent machines is an inevitable truth close at hand, and that every technical step along the way has an evolutionary counterpart likely to benefit its creators, manufacturers, and users (Moravec). He says that each advance will provide intellectual rewards, competitive advantages, and increased wealth, and can make the world a better place to live. Humans will be relieved of essential roles and tasks because intelligent machines will be able to perform them better and more cheaply. The increasingly large displacement could eventually remove mankind from the equation altogether, something he claims does not alarm him because he considers the future machines mankind's children: mankind in a more compelling and powerful form. As Moravec explains, the machines will embody humanity's best chance for a long-term future but at the same time will also cause humanity's decline.
The speed of the descent could be slowed, because in the same way that some biological children care for their elderly parents, so too could machines be taught to care for humans until the time comes when humankind should "bow out." Moravec sees this as "a comfortable retirement before we fade away" (Moravec). As stated above, it is a slightly more optimistic, but equally alarming, perspective that still ultimately results in the extinction of the species. There are those who feel that if we can control the motivations of the artificial intellects we design, then they could come to constitute a class of highly capable "slaves." Pop culture is rife with such utopian views of the future. One needs to look no further than 1977's Star Wars, in which intelligent robots are not only a reality but refer to their human owners as "master." On the other hand, it must be noted that even if the case for slave-like, human-preserving, intelligent machines can be made, there is still the very real possibility that they could be turned against humanity by some "evil" person (Lanier). Again, a rather dystopian and scary outlook for the future; perhaps not unlike 1999's The Matrix, in which the world has been laid to waste and taken over by advanced intelligent machines.
Once an intelligent robot exists, it is only a small step to a robot species: an intelligent robot that can make evolved copies of itself (Joy). Stephen Hawking, the world-renowned British scientist and physicist, says, "In contrast with our intellect, computers double their performance every 18 months, so the danger is real that they could develop intelligence and take over the world" (McAuliffe). Hawking thinks that technologies allowing a direct connection between brain and computer need to be developed as quickly as possible, so that artificial brains contribute to human intelligence rather than oppose it.
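Hawking's premise, performance doubling every 18 months, compounds quickly. A minimal sketch of the arithmetic (the million-fold starting gap between machine and brain capability is an assumption chosen for illustration, not a figure from any of the sources):

```python
import math

# Hawking's premise: computer performance doubles every 18 months.
doubling_months = 18

# Compounded over a decade (120 months), that is roughly a 100-fold gain.
factor_per_decade = 2 ** (120 / doubling_months)

# Assumed million-fold gap between machines and brains (illustrative only).
gap = 1e6
years_to_close = math.log2(gap) * doubling_months / 12

print(f"growth per decade: ~{factor_per_decade:.0f}x")
print(f"years to close a million-fold gap: {years_to_close:.1f}")
```

At that rate even a million-fold gap closes in about three decades, which is roughly the horizon the essay's sources keep returning to.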
The interval during which humans and machines have roughly equal intelligence will be brief. The intelligence of a human machine will grow quickly and will surpass human intelligence because it will combine the advantages of non-biological intelligence with the powers of human intelligence. These advantages include the fact that electronic circuits are roughly 100 million times faster than the human brain and that virtually unlimited memory is available to computers. Human machines will also be capable of sharing knowledge among themselves far more easily and quickly than humans can (Kurzweil, Ray Kurzweil Speaks). The brief parity between machine intelligence and human intelligence, coupled with the assured rapid progress of the former, reveals that advance planning and diligent maintenance will be required to preserve mankind's existence, even if the effort is inherently futile. Despite all human diligence, Moravec contends that once human-level intelligence is achieved, "It is the 'wild' intelligences, those beyond our constraints, to whom the future belongs," and on this point most scientists and researchers agree (Moravec).
There is both promise and peril associated with revolutionary ideas, and the concept of a human machine is certainly not exempt from this dichotomy. In fact, it might hold truer for this idea than for any before it. The short-term promises of a human machine are many and exciting. The benefits are countless, and most agree that such machines will be able to relieve humans of many of the mundane duties of everyday life; in short, they will do most of the work that humans are now responsible for doing. There is also the very real possibility of a virtual immortality, in which "copies" of human brains, of entire existences, are placed into machines as people become their robotic selves. The greatest benefit of a human machine is that it, unlike anything before, can and will teach us about ourselves, about what it is to be human.
It is this entirely human desire for knowledge and advancement that will ultimately lead to mankind's demise. It is widely accepted that once human machines come into being, they will not only replicate themselves, but will also seek to make themselves smarter. They may very well devote their abilities to designing the next generation of intelligence, soon realizing that there is no practical use for their human progenitors and perhaps taking measures to get rid of them.
The best interests of mankind are certainly not found in its extinction; therefore a human machine, from which extinction is a very real possibility, if not an inevitable certainty, cannot be brought to fruition if humans wish to maintain their role as the dominant earth species. Though most experts agree with this assessment and do feel that machines will eventually reign over man, they press on with their research. Robert Oppenheimer, leader of the Manhattan Project, said the following three months after the atomic bombings of Hiroshima and Nagasaki:
It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge and are willing to take the consequences (Joy).
It is with this idea, this notion that science must advance at all costs, that many researchers and scientists can sleep at night knowing full well the possible consequences of their work. They feel, as do I, that it is the height of arrogance to assume that humans are the final word in goodness, and that if intellectually, and perhaps morally, superior beings can be brought into existence, then it is the responsibility of mankind to do just that, even when the chances are high that it will remove humans from existence. Ultimately, it is not in the best interest of mankind to build a human machine, but that is not to say that it isn't in the best interest of something, even if that something is an idea that humans might never be able to understand.
References

de Garis, Hugo. "Building Gods or Building Our Potential Exterminators?" http://www.kurzweilai.net/meme/frame.html?main=/articles/art0131.html (2 Feb. 2003).

Joy, Bill. "Why the Future Doesn't Need Us." Wired. http://www.wired.com/wired/archive/8.04/joy.html (5 Feb. 2003).

Kurzweil, Ray. Interview with Sari Kalin. "Ray Kurzweil Speaks His Mind." Darwin Online. http://www.darwinmag.com/read/120101/hal_sidebar2.html (9 Feb. 2003).

Kurzweil, Ray. "Live Forever." Psychology Today. http://www.psychologytoday.com/htdocs/prod/ptoarticle/pto-20000101-000037.asp (1 Feb. 2003).

Lanier, Jaron. "One Half of a Manifesto." Edge. http://www.edge.org/documents/archive/edge74.html (8 Feb. 2003).

McAuliffe, Wendy. "Hawking Warns of AI World Takeover." ZDNet. http://news.zdnet.co.uk/story/0,,t269-s2094424,00.html (8 Feb. 2003).

Moravec, Hans. "Robots, Re-Evolving Mind." http://www.frc.ri.cmu.edu/~hpm/project.archive/robot.papers/2000/Cerebrum.htm (1 Feb. 2003).

Russell, Stuart, and Peter Norvig. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall, 1995.