Few think of Bill Gates and the late pop artist Keith Haring as having anything in common. But Gates in his quiet way, and Haring with his frenetic art, have each expressed grave concerns about what may be the great bogeyman of the 21st century: the rise of the thinking machine. The inevitable rise of “sentient machines,” self-aware machines that possess human intelligence but are devoid of human emotion, has long dominated pop culture and has long been a serious topic of discussion among our scientific and technological thought leaders.
The fear of “thinking machines” is a subtext of our times. Gates recently voiced his concerns about a future he sees as both promising and terrifying, writing in a Reddit discussion, “I am in the camp that is concerned about super intelligence... I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” Musk, for his part, put it more starkly: “I think we should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it’s probably that... With artificial intelligence we are summoning the demon.”
Decades ago, an overarching theme of Keith Haring’s graffiti-inspired art was the existential threat posed to humanity by computers and relentless industrialization; he particularly feared a diminished role for the arts in human existence. Haring, who died in 1990 at the age of 31, feared that the “machine aesthetic” would influence how we see the world, and how we feel about it. “If humans are expendable,” he wrote in 1978, “then emotions, enjoyment, indulgence, creative aesthetic, and personality of human beings are expendable.”
Keith Haring, in contrast to Bill Gates and just about any of today’s technological gurus, lived a reckless life that nevertheless produced a bold and brilliant body of work. I was fortunate to see an exhibit titled Keith Haring: The Political Line at San Francisco’s de Young Museum a few months ago. In only a decade, Haring produced hundreds of highly original, thought-provoking works of art. Much of his work portrayed modern-age tyranny in its many forms: war, racism, bigotry, industrial slavery, and the rise of computers. Many of his most arresting works show human figures with a computer replacing the head or brain, with terrifying monsters rising up to enslave them. Haring’s brilliance at such a young age is remarkable; his thoughts about the subjugation of human creativity to machines were already forming when he was a twenty-year-old student at the School of Visual Arts. “Do computers have any sense of aesthetics?” he wrote in his journals at the time. “Can an aesthetic pattern be programmed and fed into a computer so that it reasons and makes decisions based on a given aesthetic?” That was in 1978, five years before Microsoft announced Windows, its graphical user interface (GUI) for the PC.
The film The Imitation Game is based on the ultimately tragic life of Alan Turing, often credited as the father of the computer age for his work some 65 years ago. The movie underlines Turing’s conviction that machines would someday be able to “think” like humans. Recognizing that the question needed an empirical test, he proposed what became known as the “Turing Test,” an adaptation of a Victorian parlor game called the “Imitation Game.” The original game secluded a man and a woman from an interrogator, who had to determine which was which by asking questions and analyzing the written responses. Turing replaced the man with a computer running a program designed to fool the interrogator; if the computer convinced the interrogator it was human, the computer was exhibiting artificial intelligence. Turing predicted that by the year 2000 an average interrogator would have no more than a 70 percent chance of making the correct identification after five minutes of questioning. That prediction has proven overly optimistic; computers have a long way to go before they consistently win Turing’s Imitation Game. Turing committed suicide in 1954, at age 42, after the dehumanizing punishment he suffered for being homosexual, his tormentors ignoring the fact that his breaking of the Nazi Enigma code had saved millions of lives and helped shorten WWII. Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield in Britain, has made the point that today’s devices vastly outstrip Turing-era machines in storage and speed, yet none can consistently pass his test.
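The protocol Turing described can be sketched in a few lines of code. The sketch below is purely illustrative: the participant stubs and the chance-level interrogator are invented for this example, not drawn from Turing’s paper.

```python
# A minimal sketch of the imitation-game protocol: a human and a
# machine are hidden behind written answers, and an interrogator
# must say which is which. All participants here are toy stand-ins.

import random

def human_reply(question: str) -> str:
    return "I'd rather talk about the weather."

def machine_reply(question: str) -> str:
    # A program "designed to fool the interrogator" would go here;
    # this stub simply parrots a human-sounding evasion.
    return "I'd rather talk about the weather."

def imitation_game(interrogator_guess) -> bool:
    """Run one round; return True if the interrogator correctly
    identifies which hidden player is the machine."""
    players = {"A": machine_reply, "B": human_reply}
    transcript = {label: reply("Are you a machine?")
                  for label, reply in players.items()}
    guess = interrogator_guess(transcript)  # "A" or "B"
    return players[guess] is machine_reply

# With indistinguishable answers, an interrogator guessing at random
# identifies the machine only about half the time, i.e. chance level.
rounds = 10_000
wins = sum(imitation_game(lambda t: random.choice(["A", "B"]))
           for _ in range(rounds))
print(f"correct identifications: {wins / rounds:.0%}")
```

Turing’s 70-percent prediction amounts to saying the machine would drive that identification rate down toward chance for at least three interrogations in ten.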
Alan Turing, Keith Haring, and Bill Gates each approached artificial intelligence from different angles. Turing embraced the possibilities with an evangelist’s enthusiasm; Haring feared and loathed the impact machines already had on the human condition, with an even darker view of the future; and Gates seems to fear a dystopian future without going into specifics as to what “demons” would be unleashed on us.
As an intellectual exercise in 1950, Turing acknowledged the theological objection that could challenge his theory (for the record, Turing was an atheist): that thinking is a function of the soul, and machines have no souls. His counter was that the objection arbitrarily restricts God’s power; an omnipotent God could surely confer a soul on a machine if He saw fit, which negates the basic premise of the argument.
Haring also posed a somewhat spiritual challenge. His art often captured machines waging war, with war representing the ultimate soulless and evil enterprise; he imagined bizarre anthropomorphic combinations of machine and man. Today’s warfare is carrying out Haring’s vision to a degree even he couldn’t have imagined. Unmanned drones are launched from trailers in the Nevada desert, killing men, women, and children in desert communities half a world away. A report published by Human Rights Watch and Harvard Law School’s International Human Rights Clinic would further repulse Haring. The report notes: “The U.S. Department of Defense envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure.” That is military-speak for letting autonomous machines decide who lives and who dies, detached from human consideration or emotion.
Popular culture has repeatedly addressed whether machines can ever feel genuine human emotions. Works from Star Trek to Blade Runner have taken a stab at answering this profound question. On a more intellectual level, however, ethical philosophy teaches that ethical decisions result from reason and principles, not emotions and passions. Since mathematical algorithms are, in effect, rules, perhaps the Golden Rule can be programmed as an overarching algorithm.
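To see both the appeal and the difficulty of that idea, consider a deliberately naive sketch. Everything here, from the action names to the stand-in preference check, is hypothetical and invented purely for illustration, not taken from any real system:

```python
# A toy illustration, not a real ethics engine: the Golden Rule
# ("treat others as you would have them treat you") reduced to a
# single programmable check over named actions.

def acceptable_to_self(action: str) -> bool:
    """Would the agent accept this action if directed at itself?
    Here, a hard-coded lookup stands in for what would really
    require a full model of preferences and consequences."""
    unacceptable = {"deceive", "harm", "surveil"}
    return action not in unacceptable

def golden_rule_permits(action: str) -> bool:
    """Permit an action toward others only if the agent would
    accept the same action toward itself."""
    return acceptable_to_self(action)

for action in ("assist", "harm"):
    verdict = "permitted" if golden_rule_permits(action) else "forbidden"
    print(f"{action}: {verdict}")
```

The rule itself programs easily; the hard part hides inside acceptable_to_self, where human preference has been flattened into a lookup table, which is precisely the reduction Haring feared.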
While there is considerable research into developing emotional intelligence for machines, the aim of that technology is to interpret the user’s emotions, not to develop emotions within the machine itself. James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, points out that it’s easy to give human values to machines, baking them into one algorithm or another, but by definition “they’re silicon versus carbon.” And there are some human characteristics that we’d rather not replicate. “Intelligent machines won’t love you any more than your toaster does,” Barrat says. “As for enhancing human intelligence, a percentage of our population is psychopathic. Giving people a device that enhances intelligence may not be a terrific idea.”
Which brings us back to what Bill Gates and Keith Haring have in common when it comes to Alan Turing’s long-ago dream of artificial intelligence: a most logical fear of letting machines do too much of our thinking and too little of our feeling.