Headline: For artificial intelligence pioneer Marvin Minsky, computers have soul
Things are going to change, MIT professor and Dan David Prize winner says of the societal repercussions of AI.
First published in The Jerusalem Post.
By NIV ELIS.
MAY 13, 2014 05:24
The 86-year-old New York-born MIT professor has for decades pushed the idea that intelligence is nothing more than the sum of many non-intelligent parts, and sees no reason why computers could not eventually replicate even the most human aspects of humanity.
Ahead of his first trip to Israel, where, he says, “my ancestors were,” Minsky spoke with The Jerusalem Post about what souls are made of, what it was like advising Stanley Kubrick on 2001: A Space Odyssey, and how he had Google Glass, sort of, 40 years ago.
You’ve had an illustrious career. What do you think your greatest academic contributions have been? I started working at a point in history when digital computers were becoming mature, and before that, there were no such machines. There were analog computers. I was among the professors who had grown up in the early era of cybernetics and the younger students who were learning about computers when they were very young, so I played a nice role in educating the many wonderful young people. I think the most important thing was the idea of artificial intelligence, of making computers smarter, and I attracted a lot of young people who became very important in the development of that new field.
With technology progressing at hyper speed, it sometimes seems like science fiction is becoming reality. What’s been the most surprising tech development you’ve seen in your lifetime? The rapid development of modern computers between the time of Alan Turing, up to 1950, and John von Neumann and the next 10 or 20 years of the wonderfully rapid development of digital computers and how they would work. With the appearance of communications networks and interconnected computers, we got the world wide web, and it changed the lives of most people, I think. So that was quite a large change, but I’m not sure it was larger than the introduction of the telephone and telegraph in previous centuries.
I had a Google Glass in 1970! It clipped onto my glasses. Mine was a 1-inch (2.54 cm) ray tube, mirror and lens.
It’s been 40 years now and it’s nice to see it again. The one I had was almost the same thing, but not as good – mine had a wire going to the computer. I had a portable one too but somebody stole it because it had a TV set in it.
You were an advisor on 2001: A Space Odyssey, a science fiction classic. What do you think of more recent films in that genre such as “Her”? I didn’t see it. Computers have appeared in a few films, but none of them appeared realistic to me.
2001 was the first film in which the computer seemed convincing. [Stanley] Kubrick came around with Arthur Clarke and asked how to make HAL more lifelike.
I’m not responsible for the plot, though. Stanley Kubrick was incredibly sharp, and he’d show me something and I’d make a remark and he’d scrap the set and reset the computer and five minutes later everything was different. Working with him was an astonishing experience.
Do you think artificial intelligence can ever be more than pre-programmed humanness? Well, I think that at some point, in various areas, computers will suddenly become much smarter in one area or another, and one can’t really predict how long or far apart these major changes will come. Computers will keep advancing, and though it’s very hard to predict how fast the changes will come, I don’t see any reason to think there are any limits. Biology has limitations because we can’t expect the nerve cells to start working 1,000 times faster than they do now. A nerve synapse or fiber conducts impulses at maybe 1,000 per second at the most, while the wires and elements in computers are now working at many millions of operations per second, and we’re nowhere near the limit of the speed and power of computers.
Progress has been and will continue to be jerky.
Someone will make a slight change and suddenly the computer will work much faster or someone will make it much smaller, and so forth, so we might as well prepare to see several incredibly large changes in the development of computers in the next generation or two so I wouldn’t try to predict even roughly when these changes will come. Somewhere down the line, some computers will become more intelligent than most people.
Do you believe in the concept of the soul? I believe that everyone has to construct a mental model of what they are and where they came from and why they are as they are, and the word soul in each person is the name for that particular mish-mash of those fully formed ideas of one’s nature.
Every culture has people who are more influential in forming these beliefs and models. Soul is the word we use for each person’s idea of what they are and why.
I think every person either inherits or eventually makes up their own idea of what they are and who they are and what caused the world to be, and it seems to me that these stories of creation myth, adopted by different cultures – most of them are less insightful than the stories made up by individual poets and writers. If you see an entire culture with an entire set of beliefs, I would pay more attention to the 10 best poets of that culture, who are trying to change it or rationalize it.
Could a computer have a soul? Why not? What humans have is a more complex and larger brain than any other animal (maybe a whale’s brain is physically large, but it’s not structurally more complex than ours). If you left a computer by itself, or a community of them together, they would try to figure out where they came from and what they are. If they came across a book about computer science they would laugh and say “that can’t be right.” And then you’d have different groups of computers with different ideas.
Will it be a good thing or a bad thing when that happens? That’s a very big question, because there will always be some people who don’t like the idea of a limited lifespan, so you have the prospect of being able to take your brain and recode it and jump into a computer and live for 1,000 years instead of 100.
Making predictions today is very difficult because so many of the futures that have been depicted in fiction and science fiction could be possible options we could possibly face in the next generation or two, and our ordinary ways of thinking about the future are going to be out of date.
What can one say when all those new kinds of possibilities are possible options for us? I think some of the things we’ve been talking about sound like science fiction or fantasy or romance or whatever, but they’re going to happen, or some of them are going to happen, and it would be good for more people to think more seriously about what could happen in the next 100 years and what options we should be aiming toward.
I think right now most people would say these are just fantasies and nothing is going to change very much. But things are going to change alright, and the more people think about it, the better chance that the changes will be good.
I don’t know what good is, but it’s nice to have such a word.
This page last updated: 30/May/2022. https://lifearchitect.ai/marvin-minsky/