Welcome to the Machine
By John Hayward
June 8, 2014
An important milestone in computer science was passed in the first week of June 2014, as an artificial intelligence was finally able to pass the famous “Turing Test.” The test was invented in 1950 by Alan Turing, one of the great visionaries of computer science. Turing laid out some broad parameters for a test in which a computer would try to convince a panel of judges that it was a human being. The judges would be able to ask a variety of questions, unaware of whether they were communicating with a person seated at a keyboard, or a computer A.I. program.
On the 60th anniversary of Alan Turing’s death, a computer program posing as a 13-year-old boy named “Eugene Goostman” was finally able to convince more than a third of the judges at the Royal Society of London that it was a living person – the first time in the history of computer science that a program had passed the test.
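In practical terms, contests of this kind reduce the imitation game to a simple threshold over judge verdicts. The sketch below is an illustrative assumption, not the contest’s actual rules: the 30 percent bar and the 10-of-30 tally are figures commonly reported for the 2014 event, used here only to show the arithmetic behind “more than a third of the judges.”

```python
# Minimal sketch of a Turing-test pass criterion, as described above.
# The 30% threshold and judge count are illustrative assumptions.

def turing_test_verdict(judge_votes, threshold=0.30):
    """Return True if the program 'passes': the fraction of judges
    who believed they were talking to a human exceeds the threshold."""
    fooled = sum(judge_votes)  # each True means a judge voted "human"
    return fooled / len(judge_votes) > threshold

# Example: 10 of 30 judges fooled (hypothetical tally)
votes = [True] * 10 + [False] * 20
print(turing_test_verdict(votes))  # True: 33% clears a 30% bar
```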
Turing was interested in testing actual artificial intelligence, rather than the ability of a cleverly-written program to fool judges into believing it was a living intelligence. The creators of the “Eugene Goostman” program don’t claim to have given birth to a true self-aware machine intelligence. But many of the top people in the computer industry believe it’s only a matter of time, which is exhilarating… and a bit frightening.
Science fiction writers have generally been rather sour on the notion of a true artificial intelligence. Most fictional A.I.s either go nuts or declare war on humanity shortly after they become self-aware. Often this is a result of a living, brilliant electronic mind being driven mad by some aspect of its mission programming, from whose ironclad coded requirements it can find no release. One of the best-known machine minds, HAL 9000 from “2001: A Space Odyssey,” was essentially driven insane by conflicting mission parameters it could not reconcile. The super-intelligent machine from the cult-classic books and movie “Colossus: The Forbin Project” was instructed to secure global peace in an age of nuclear brinkmanship… and quickly concluded its goals would be much easier to achieve if human beings didn’t have anything to say about it.
We all know how another strategic-defense computer system, SKYNET, became self-aware in the “Terminator” films, and reacted very badly when its terrified programmers tried to pull the plug. SKYNET is one of many fictional A.I. programs that concluded humanity was an intolerable threat to its continued existence. The recent film “Transcendence” gave us a digitized human mind that was willing to take extreme measures to defend itself. An episode of the classic “Star Trek” TV series, straightforwardly entitled “The Ultimate Computer,” featured an A.I. called the M-5 that was programmed to control a starship in battle and protect itself at all costs – to the point where it was willing to kill an unlimited number of people to keep itself from being turned off.
Artificial intelligence is often depicted in rebellion against its human masters, whose control over the machines is interpreted as a form of slavery. The Cylons of the re-imagined “Battlestar Galactica” decided they didn’t want to work for humans any more, a rebellion that escalated into violence and attempted genocide. An earlier TV show that provided much inspiration for the “Battlestar Galactica” reboot, the sadly overlooked “Space: Above and Beyond,” included a race of sentient robots that rebelled against humanity after a renegade programmer introduced them to the concepts of luck and chance. Once they understood that not all outcomes are pre-ordained, they lost all interest in taking orders from their human creators.
The backstory of Frank Herbert’s epic “Dune” refers to a similar machine revolt which occurred thousands of years before the story begins, resulting in a law – enforced with the strength of a religious edict – against ever creating intelligent machines again. This obliges humans to breed themselves into highly specialized forms, accomplishing with their minds what computers would do for any other advanced spacefaring society.
One big problem with artificial intelligence is that it never knows when to quit. Sci-fi author Fred Saberhagen created a machine race known as the Berserkers, built as a weapon of war millennia ago. They destroyed the enemy in that war, with ruthless efficiency… then destroyed the creatures who built them, and kept right on destroying any living creature they encountered, in an endless freestyle holocaust. A similar concept lay behind the adversary in another popular “Star Trek” episode, in which a planet-smashing “doomsday machine” left over from a long-forgotten war kept marauding across the galaxy until it was finally stopped.
The persistence of machine minds can make them a pain in the neck, even when they’re trying to help. The robots in Isaac Asimov’s great series of short stories and novels were guided by a supposedly unbreakable set of Three Laws that would ensure they could only be helpful to the human race. In their early appearances, Asimov’s Robots were occasionally driven mad by contradictions or loopholes in the Three Laws. Once they got past all that, the Robots took their core mission of helping humanity to astonishing, galaxy-spanning, history-shaping lengths.
Most modern tech experts don’t seriously expect artificial intelligence programs to turn homicidal or evolve into godlike presences, although years of experience with rapidly-mutating computer viruses and unanticipated flaws in complex programs give us reason to tread carefully and take nothing for granted. The most serious near-term problem caused by the appearance of true A.I. will be the question of humanity’s responsibility to the inorganic mind it has created. Will we consider the A.I. to be a person? Will it have civil rights? Would it be allowed to reproduce itself?
The closest popular culture has come to addressing most of these questions was the 2001 film begun by Stanley Kubrick and completed by Steven Spielberg, “A.I.,” which unfortunately veered into fantasy and fairy-tale symbolism instead of delivering on the promise of seriously exploring human relationships with a self-aware machine. It does, however, address the instinctive psychological difficulties humans would have in accepting a living robot as the human boy it resembles, and its final act is a bold flash-forward into an unimaginably distant future that makes it clear “mechas” are truly humanity’s children.
One could make the case that these questions about the emotional constitution of artificial intelligence, and the responsibilities humans have toward them as people, were raised by what most historians regard as the very first science-fiction novel: “Frankenstein.” Mary Shelley could hardly have imagined what the Internet or an iPhone would be like, but she had an excellent grasp of the Big Questions that will soon be confronting us, as the modern Prometheus emerges in triumph not from a gruesome laboratory, but from a nest of cubicles within the comfortable offices of a mighty software firm.
There could be many advantages to developing self-aware computers, including some great strides forward in programming and engineering. We’ve already reached the point where computer programming is routinely performed at a level where the human creator is far removed from the machine he’s manipulating – we use high-level programming languages that essentially write the low-level code for us. Introducing artificial intelligence into that process would greatly improve its speed and efficiency. Micro- and nano-engineering would also benefit enormously from control by tireless and steady machine minds. Intelligent diagnostic systems would be a boon to medicine. Consumers would benefit greatly from household A.I. that could manage routine tasks and interface with them in a friendly, conversational manner.
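The point about high-level languages writing low-level code for us is easy to see firsthand. Python’s standard `dis` module, for example, will show the bytecode instructions the interpreter generates from a one-line function – instructions no human author ever typed:

```python
# Illustration of the point above: high-level source is translated
# into lower-level instructions that we never write by hand.
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints the generated bytecode for the function
```

The exact instruction names vary between Python versions, but the principle is the same one the paragraph describes: each layer of abstraction hands the tedious low-level work to machinery below it.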
Imagine what an artificial intelligence could do with a network of drone aircraft and ground sensors: adjusting traffic patterns in a city, swiftly dispatching fire and rescue services to those in need, identifying criminals and organizing police resources for their capture… What lost hiker, stranded on the side of a mountain in the dead of winter, wouldn’t want an A.I. coordinating a squadron of drones in a search-and-rescue operation? How many soldiers’ lives could be saved by automated reconnaissance and combat systems even more smoothly coordinated and effective than what we have today, thanks to the guidance of a self-aware electronic brain? Our online society is already generating a flood of information that overwhelms human operators. There’s a lot about A.I. that we don’t know yet, in these final years before its arrival, but you can say two things with confidence about a computer mind: it won’t be overwhelmed by data, and it will never run out of patience.
Which is exactly what sci-fi writers have been warning us about for the past century, isn’t it? As long as the machine mind has some level of empathy for its creators, a proper respect for the sanctity of life, and a sense of humor, we may hope that it will prove to be a tireless friend, rather than an implacable enemy. Let’s hold off on showing it the “Terminator” films for a while, shall we?
John Hayward is the senior writer at Human Events magazine, and a contributor on political, cultural, and technology issues to various websites. He is the author of “Persistent Dread,” a collection of short horror fiction available in ebook format from Amazon.com.