Turing anticipated many of today’s worries about super-smart machines threatening mankind
DIANE PROUDFOOT
IEEE Spectrum
30/Jun/2015
Many people today are concerned by the prospect of out-of-control artificial intelligence. Some call it “killer AI,” “evil AI,” or “malevolent AI.” Billionaires throw money at the “existential risks” posed by ultraintelligent machines: In January, Elon Musk, a cofounder of PayPal and CEO of SpaceX and Tesla Motors, donated US $10 million to the Future of Life Institute, in Cambridge, Mass., which is “focusing on potential risks from the development of human-level artificial intelligence.” Other new research institutes with apocalyptic names explore the dangers of “singularity” scenarios, and Google has recently formed a hush-hush AI ethics board.
Actually, history is repeating itself. In the mid-1940s, public reaction to reports of the new “electronic brains” was fearful. Newspapers announced that “the controlled monster” (a room-size vacuum-tube computer) could rapidly become “the monster in control,” reducing people to “degenerate serfs.” Humans would “perish, victims of their own brain products.”
Alan Turing, the father of modern computing, added to the consternation. In his first BBC radio talk, he noted, “If a machine can think, it might think more intelligently than we do, and then where should we be?” In another talk, he said that “it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers.” But 21st-century pundits have usually overstated the progress AI has made since Turing’s day [Alan’s note: this article was written in 2015, five years before GPT-3], suggesting that ultraintelligent computers are just around the corner.
Turing thought the prospect of such machines “remote” but “not astronomically remote.” He noted, “If it comes at all it will almost certainly be within the next millennium.” And then, he said, “we should have to expect the machines to take control.”
Stephen Hawking united with Nobel laureate physicist Frank Wilczek, Wilczek’s MIT colleague Max Tegmark, and Berkeley computer scientist Stuart Russell to declare last year in The Huffington Post: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.” And this past December, Hawking warned in a BBC interview that humans would be “superseded” by AIs.
In his 1951 radio broadcast, Turing used that very word—as did Samuel Butler, who wrote in his 1872 novel Erewhon (a satire Turing mentioned approvingly) that we must put “an immediate stop to all further mechanical progress.” He was, of course, ridiculing opponents of the machine age.
Turing (following Butler) poked fun at the fear of out-of-control AI. When he predicted in the London Times that machines could “enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms,” the media protested at the “horrific” implication of these ideas—namely, “machines rising against their creator.” But Turing said drily, “A similar danger and humiliation threatens us from the possibility that we might be superseded by the pig or the rat.” He joked—but with a good pinch of his usual common sense—that we might be able to “keep the machines in a subservient position, for instance by turning off the power at strategic moments.”
Turing knew that developments in AI worried some scientists, as well as other folk. Pioneering cyberneticist William Grey Walter had said as early as 1948 that there was “something sinister” about the new “mechanical monsters.”
Alarm today spreads further and more quickly. Last year Musk told his 1.7 million Twitter followers that AI is potentially “more dangerous than nukes.” Even escaping to Mars—presumably on one of his own rockets—wouldn’t help, he said: “The AI will chase us there pretty quickly.” In January, Bill Gates told Reddit’s millions of visitors, “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
Turing’s response to AI panic was gentle mockery. All the same, there was a serious edge to his humor. If runaway AI comes, he said, “we should, as a species, feel greatly humbled.” He seemed almost to welcome the possibility of this humiliating lesson for the human race.
In Turing’s view, humans are not exceptional: We, too, are machines. He said matter-of-factly on the BBC, “It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine.” But, Turing ended chillingly, “I cannot offer any such comfort.”
Image generated by Alan D. Thompson on 24/Nov/2022 using Midjourney v4. Prompt: chicken little, running around in panic, hair/feathers on fire with flames, staring up, rainbow sky, simple children’s book illustration --v 4 --upbeta --ar 3:2