👋 Hi, I’m Alan. I advise government and enterprise on post-2020 AI like OpenAI GPT-n and Google PaLM. You definitely want to keep up with the AI revolution in 2023. Join thousands of my paid subscribers from places like MIT, RAND, Microsoft AI, and Google AI.
Get The Memo.
Aug/2023
Video: https://youtu.be/4GQrLjvudJ4
Meeting: AI for Good Summit: The future of intelligence: artificial, natural, and combined.
Speaker: Ray Kurzweil
Transcribed by: OpenAI Whisper via YouTube Transcription Python Notebook.
Edited by: Alan (without AI!)
Date: 22/Aug/2023
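[Alan: For the curious, here is a minimal sketch of the kind of transcription notebook used for these transcripts. The model size and file names are my illustrative choices, not the exact notebook; it assumes the yt-dlp and openai-whisper packages are installed.]

```python
# Sketch of a YouTube-to-transcript pipeline (illustrative file names).
import subprocess
import whisper

url = "https://youtu.be/4GQrLjvudJ4"

# Download audio only; yt-dlp writes talk.m4a
subprocess.run(["yt-dlp", "-x", "--audio-format", "m4a",
                "-o", "talk.%(ext)s", url], check=True)

model = whisper.load_model("small")   # larger models transcribe more accurately
result = model.transcribe("talk.m4a")
print(result["text"])
```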
Highlights
– Turing Test/AGI by 2029.
– Open source LLMs and advancement: ‘There’s no way out of it.’
– ‘Large language models are the best example of AI.’
Full edited transcript
Question: Good morning, Ray. Greetings from Geneva. Very happy to have you here.
Ray: I’m glad to be here.
Question: I’m one of the many people who read your books. I actually went to my paper library and found a book that came out in 1999 called The Age of Spiritual Machines. So let me quickly introduce you and then we move to our conversation. Ray Kurzweil is one of the world’s leading inventors, thinkers and futurists. He has a 30-year track record of accurate predictions. He was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition software. You received the Grammy Award for outstanding achievements in music technology. You are the recipient of the National Medal of Technology in the United States and you were inducted into the National Inventors Hall of Fame.
You hold 21 honorary doctorates and honors from three US presidents. You’ve written five national bestselling books, including The Singularity is Near. I think everyone associates the term singularity with you, Ray. And then in 2012 you wrote How to Create a Mind. Both were New York Times bestsellers. More recently you wrote Danielle: Chronicles of a Superheroine, winner of multiple young adult fiction awards. You are a principal researcher and AI visionary at Google, looking at the long-term implications of technology and society. And you will be coming out with a book, announced for next year, called The Singularity is Nearer. So your first book, in 2005, was The Singularity is Near, and now you’re coming out with The Singularity is Nearer. OK, so let’s start.
Question: Actually, my first question refers to your suspenders. They’re pretty colorful.
Ray: You don’t see very many people wearing hand-painted suspenders anymore. These are handmade.
So these are hand-painted suspenders?
Yeah. A girl named Mel makes them. I’m the only one who gets them.
Question: Were you surprised by the capabilities of the large language models that came out in the last seven or eight months or so?
Ray: Well, no. I mean, this activity is a prelude to passing the Turing Test. We can talk more about what that means. But in the book you showed, The Age of Spiritual Machines, which came out in 1999, I predicted that we would pass the Turing Test by 2029. And Stanford was so alarmed at this that they actually held an international conference to talk about my prediction. And 80% of the AI experts that came from around the world agreed with me that a computer would pass the Turing Test, but they didn’t agree with 30 years, with 2029. They thought it would take 100 years. This poll has actually been taken every year. So I’ve stayed with 2029. Still believe that. AI experts started at 100 years, it stayed pretty much set, and lately it’s come down. Now the consensus of AI experts around the world is also 2029. So people are agreeing that I was right.
But prior to passing the Turing Test, you’re going to have things like large language models which emulate human intelligence. There are a few ways in which they’re not correct. I mean, if you ask one of the popular large language models, ‘how many E’s does the following sentence have?’ and then you put some sentence in quotes, it actually doesn’t get that correct. And that’s something that humans can do quite easily. However, you can ask it anything about topics in philosophy or physics or any other field, and it’ll give you a very intelligent answer. So in many ways, they’re better.
I mean, even Einstein didn’t understand issues in philosophy and psychology and so on. So it really has a very broad base, and it can articulate very quickly. These models operate thousands of times faster than humans. So in many ways, it’s superior.
I mean, when a computer does something, it doesn’t just do it at the level of humans. Like when it played Go, it plays far better than any human can possibly play. In fact, Lee Sedol, who’s the best human player of Go, said he’s not going to play anymore because these machines are so fantastic. So one of the things in passing a Turing Test is that you actually have to dumb it down, because if it showed its fantastic knowledge of every different field, you’d know it’s a machine.
So that’s one of the things.
But there are also a few things that humans can do easily that these models can’t quite do. That’s going to be overcome in the next few years; the test will probably be passed prior to 2029.
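[Alan: The letter-counting task Ray mentions is deterministic and trivial for ordinary code, which is what makes the failure mode striking; LLMs see tokens rather than individual characters. A one-liner, with my own example sentence:]

```python
# Counting letters is trivial in code, yet early LLMs often got it wrong.
sentence = "Everyone expects these models to excel at everything."
print(sentence.lower().count("e"))  # -> 12
```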
Question: OK, can I ask my technical colleagues to crank up the volume a little bit? Because I have a bit of difficulty hearing. Ray, you invented something for the blind, but not yet for the hearing-impaired. I was asking my colleagues if they can make the volume a little bit louder.
Ray: OK. Well, we do have speech recognition. It’s quite accurate. So you can speak and you can actually get a transcription of what people are saying.
Question: So maybe half a year ago, or a year ago, you could say a machine wouldn’t have passed the Turing test. If I spoke to a machine and to a human, I could have easily recognized after, I don’t know, 15 minutes, maybe earlier or a bit later, that this is a machine. Now you could argue that I could still tell the difference, but it’s the human responses which are not as great as the answers of the machine. Is that a fair statement?
Ray: Well, once a computer passes something, it doesn’t just stop at a human level. It goes way past it. And that’s true for everything. So to pass the Turing test, it would have to be dumbed down. I think it’s an important test of whether it can actually do everything a human can do, particularly in language. There’s another version of the Turing test where you would actually have a virtual person who can speak and have facial expressions that match what it’s talking about. But that’s actually no more difficult than mastering human intelligence and language.
But these large language models, this is not an alien invasion of intelligent machines coming from Mars. I mean, we create these machines to make ourselves smarter. And if somebody actually uses GPT-4 to write something, that is, I think, what human beings do. We use tools to make us smarter. I mean, who here doesn’t have their smartphone? I actually did that five years ago: I would ask who here has a smartphone, and only a few hands went up. I did it recently, asking who here does not have a cell phone, and nobody raised their hand. So this is actually an extension. We’re actually smarter with this than we would otherwise be. Now it’s external. We might lose it. I think it’d be greater if we actually brought it into ourselves so you wouldn’t forget it at home.
But these things are going to make us smarter. And really, the rise of intelligence: if you look at the broad scope of evolution on this planet, we’re getting more and more intelligent. So 100,000 years ago we had Homo sapiens, but they were not as smart as we are. They didn’t have the tools that we have today. So we are able to create tools that make us smarter and more capable of doing the things we need.
Question: How do you see the future of large language models? So some people say, some scientists say, like in five years, they’re not that relevant anymore. How do you see that?
Ray: Well, large language models are actually going to go beyond just language. They’re already bringing in pictures, videos and so on. I’ll just give you one application. If we apply it to medicine, we can actually simulate biology. And the Moderna vaccine was actually done this way. The Moderna vaccine was created in two days. The computer considered billions of different combinations of things that would fight COVID, including mRNA sequences. It actually went through several billion of them and decided on the one that was the best. And that was the vaccine, the vaccine we use today. And it was done in two days. Now, they actually then went for 10 months to test it on humans. But that’s actually unnecessary.
Rather than testing on 500 biological humans, we could test on a million simulated humans. That would be just as good, and you’re much more likely to match one of those million than one of the 500 that they use. And that could also be done in a few days. So rather than taking many years, up to 10 years, to do these things with humans,
we could do it on a computer in a matter of days. And that’s where we’re going, simulated biology. And some of the techniques that are used in large language models are used for that. And this is actually now the coming wave in medicine. We’re going to make fantastic progress in the next few years.
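[Alan: A toy sketch of the kind of search Ray is describing: generate candidate sequences, score each with a model, keep the best. Everything here, the alphabet handling, the scoring stub, and the sizes, is illustrative only, not Moderna’s actual pipeline.]

```python
import random

NUCLEOTIDES = "ACGU"  # the mRNA alphabet

def score(seq: str) -> float:
    """Stand-in for a learned model that predicts candidate quality.
    Toy proxy only (GC-dinucleotide fraction), not real biology."""
    return seq.count("GC") / len(seq)

def best_candidate(n_candidates: int, length: int) -> str:
    """Generate candidates, score each, keep the best: brute-force selection."""
    best_seq, best_score = "", float("-inf")
    for _ in range(n_candidates):
        seq = "".join(random.choices(NUCLEOTIDES, k=length))
        s = score(seq)
        if s > best_score:
            best_seq, best_score = seq, s
    return best_seq

print(best_candidate(n_candidates=100_000, length=60))
```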
Question: Do you see anything where a computer will eventually not be better than a human being?
Ray: No, absolutely not. I mean, sometimes people say, oh, well, emotional intelligence, people have that. And that’s more sophisticated than logical intelligence, which is true. But we use the same kinds of connections to deal with emotional intelligence, to know how to react to someone else and get into their frame of mind.
That just takes more intelligence. It’s actually the best thing that we can do. But it is also using the same kinds of connections that we use to make any other decision. So they will have emotional intelligence and they’ll actually react to us. But just like humans, some humans are friendly to us, some humans are not friendly to us. So we have to be mindful of creating things that actually advance our goals. So that’s a whole other issue that we can talk about.
Question: I read a book review that you wrote in the New York Times a few years back, and I quote: ‘the superiority of human thinking lies in our ability to express a loving sentiment, to create and appreciate music, and to get a joke.’ These are all examples of emotional intelligence. So I thought humans have emotional intelligence but machines don’t, yet you’re saying machines will have emotional intelligence.
Ray: Yes, absolutely. Meaning, it will make us better humans. As I said, it’s not an alien invasion of intelligent machines coming to take us over. People constantly look at the machines versus us as if they’re two separate things. But it’s not. I mean, look at how these things are used today. Everybody has a cell phone. Everybody’s amplifying their intelligence already. And that’s just going to continue, and it’s going to become much closer to us.
Question: So my smartphone is an extension of my brain, that’s what you’re saying. What would be the next step in enhancement?
Ray: Well, the next step is to make machines even more intelligent. And that’s definitely happening. It’s on a very sharp increase right now in medicine and everything else. And also bringing it closer. So virtual reality is another way in which we’re making it closer. But ultimately, it’ll go inside our brains. And that’s a complicated issue. But it doesn’t have to connect to the entire brain. It can just connect to the top of the neocortex. The neocortex is kind of organized in a pyramid, and as you go up the pyramid, it’s dealing with more and more intelligent subjects. So you only really need to connect at the top level of this pyramid. And there are things being experimented with that can do that today. They’ll be useful to people who can’t communicate, but ultimately it will enhance our own intelligence. And people say, well, I wouldn’t want that. But who goes around without their cell phone today? People thought at the beginning of cell phones, ‘well, I don’t really need that.’ But now everybody has one. And it will ultimately make us smarter. These things are part of who human beings are. We see that already, and that’s going to continue.
Question: Could you explain or give us some idea what this would look like? There might be invasive technologies that connect the electrodes inside the brain, maybe. And there might be some noninvasive.
Ray: Well, I imagine we would take some kind of medication that would have nanobots that go through our bloodstream, find the top of the neocortex and attach themselves there, and communicate outside. So you could then have a computer on your body, but more likely really in the cloud, because an advantage of the cloud is that it’s duplicated. If you put something in the cloud, it’s actually multiplied many, many times, and not just in one building. It actually goes through many different buildings, so you could blow up an entire cloud facility and nothing would be lost, because the information is duplicated in other centers.
So this is an extension of our brains. I mean, our brain has several billion, actually several trillion connections, and that’s what gives us intelligence. They’re very slow, though, operating at about 200 calculations per second. Machines operate at billions of calculations per second. So it can be much, much faster and ultimately know a lot more, as we see with large language models even today. So ultimately, we’ll be able to connect to that, and it will be just part of our thinking, just another set of connections. But ultimately, the set of connections that we communicate with outside of our body will be much greater than what we can do in our brains. And that’ll just be part of human intelligence.
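[Alan: Taking Ray’s figures at face value, the arithmetic looks like this. The connection count and rates are his round numbers; the stand-in values and the calculation are mine.]

```python
# Back-of-envelope comparison using the figures Ray cites.
brain_connections = 3e12   # "several trillion connections" (3T is my stand-in)
brain_hz = 200             # calculations per second per connection (Ray's figure)
machine_hz = 1e9           # "billions of calculations per second"

print(f"brain aggregate: {brain_connections * brain_hz:.0e} calc/s")  # 6e+14
print(f"per-element speedup: {machine_hz / brain_hz:,.0f}x")          # 5,000,000x
```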
Question: Can you give a timeline for these scenarios?
Ray: Yeah, well, I said we’d pass the Turing test by 2029. I think it might be earlier, but I’m sticking with that prediction. So there’ll be large language models that know everything. They won’t make the sort of stupid mistakes like being unable to count the number of E’s in a sentence that you give them. And then we’ll be able to pass the Turing test by dumbing it down. But ultimately, that will actually give us knowledge of everything. No human being has that today, but it’ll be part of our own intelligence. And there’s still a whole veil of ignorance that we don’t know about.
So, I mean, just because if you know everything that humans know, it doesn’t mean you know everything because we haven’t explored other areas. In the 2030s, we’ll be actually merging with this intelligence. So rather than being outside our body, our brain will naturally just be extended. So rather than having a few trillion connections, we’ll have thousands of trillions of connections and ultimately be much smarter. But that will actually help us to explore knowledge that we don’t know. And there’s still a lot that we don’t know.
Question: OK, switching topics a little bit. A number, or maybe quite a number, of researchers and many people are afraid that AI might, let’s just call it, take over the world. Could you describe a scenario of how that might happen? I mean, today’s language models can code, and they can code pretty well. You could, I guess, give them access, maybe to your bank account, to do things on the Internet. So could you give a scenario of what it might look like if AI were really to control many things?
Ray: Well, I mean, large companies today, really all of them, are putting out products in the public domain, and they have large efforts to avoid hallucinations, which are inaccurate outputs, and to generally follow socially and morally appropriate ways of conduct. But there are lots of public-domain large language models out there that don’t necessarily have these controls. It’s just like people. Most people will actually advance human intelligence in a positive way, but there are a few that don’t. And we can’t necessarily predict that.
If you look back in history, take World War Two: the Nazis might have won if they had made different decisions. So it’s not absolute. We could have bad players use these things to advance their goals. And so the future of human history is not set. I’m optimistic about it. In The Singularity is Nearer, I show 50 different graphs for everything we care about, child labor and health and 50 different things, which have actually been improving every year, every decade. And I believe that will continue.
But we still have to be mindful of bad players using these kinds of technologies. And you can demonstrate that they could do things in the future that we wouldn’t be prepared for. So we actually have to beef up our defenses against that. But that’s happening.
Question: I’d like to dig a little bit deeper. Can you give a real-world scenario of how that might work? So we currently have the large language models. How would it work that AI would take over control?
Ray: Well, there are lots of scams right now where people use computers to create things that are inaccurate or socially inappropriate and put them out. And we have some defenses against that. But if you actually use a very intelligent large language model, it could do that much more sophisticatedly. And even with some of the smaller ones, which are not as good as the large ones that the big companies have, 50 percent of their output could actually not be distinguished from an innocent output. So we’re going to have to beef up our defenses against that. One way of doing that is to identify who’s putting things out. And if you notice that they’re putting out things that are inaccurate and so on, then you wouldn’t give them permission to influence other people. But that gets to be fairly complicated. I think that’s the direction we’re going to go in, though.
Question: OK. So how seriously should we take this problem that researchers call the alignment problem, you know, making sure that the robots don’t do things that we don’t want them to do? Researchers who think this is a very serious problem seem to get somewhat ridiculed, and other researchers say we should rather put our resources into known problems like misinformation, disinformation and bias. So how seriously should we take the problem that AI may go rogue?
Ray: Well, we should take it very seriously, because these machines are coming. There’s no way of avoiding that. We actually want responsible actors to have the most intelligence so they can combat this misuse of intelligence. I mean, there was a letter that went around saying we should stop AI development for six months so we can figure out what’s happening. That would be a very bad idea, because the bad uses of these technologies would still advance, and then the responsible people would not have ways of combating them. Of course, that’s not happening. But really, different people have different ideas about what they want to do in the world, and some of them are bad actors, and we have to continue to combat that.
And it’s going to be on a very sharp increase in intelligence, for both the good actors and the bad actors. I make the case in The Singularity is Nearer that despite all of these misuses of technology, everybody is better off in terms of wealth, in terms of health, in the 50 different things that I cite, despite the bad actors. So the history is actually pretty promising on this front.
Question: We had Yuval Harari on the program yesterday, and he said that every company should be required to invest, say, 20 percent in safety research. And then I mentioned this to Stuart Russell today, and he reminded me that nuclear plants invest pretty much everything, like 99 percent, into safety research. What do you think about making it a requirement for companies to put a certain amount of their money into safety research? And what amount would be a reasonable amount?
Ray: Well, the large companies that put out the best large language models are putting a very substantial amount, I’d say more than half, into safety: avoiding hallucinations, avoiding socially inappropriate comments and making sure that they’re used for proper purposes. So there’s a huge amount of effort being made really by every large company, because they’ll be liable if they don’t do that. So it’s not like we’re not doing anything. I mean, these large language models are very heavily tested. But there are some large language models, not quite as big, that are public domain and are being used by bad actors, and they can be used to manipulate facts and so on.
And that’s going to be a problem. It’s a problem today. But nonetheless, we’re making progress, I think, in the overall quality of life of most people.
Question: Yeah, the open source language models are pretty powerful. And I guess you could argue a good thing about open source language models is they help democratize AI, so you don’t depend on big tech’s large language models. The downside, as you say, is that they may fall into the wrong hands. It seems like one has to square a circle. Is there a way out of it?
Ray: There’s no way out of it. I mean, a lot of predictions of the future take today’s large language models and just assume nothing’s going to change. And that’s been true at every different point: people assume the current technology is just going to remain, and ignore the fact that it’s really on a very sharp increase. So large companies are already fighting abuse of large language models; abuse is feasible, which is not good, but it can be combated. And you know, three years from now, it’s going to be a whole different type of technology. You’re going to have to reinvent it. And that’s what we do with technology. It’s just a lot faster now. When the railroad came, it completely displaced lots of jobs and things were very dramatically different, but it took decades for that to happen.
Now it happens in a matter of months. So it’s very, very quick. But we also have tools to combat it. I’m optimistic and I make that case with the history of technology. But it’s going to be very, very fast.
Question: Yeah. I mean, if you say it’ll take time: I think it was maybe also Yuval Harari who made the point that the Industrial Revolution, you know, also led to the world wars. So humanity made quite a number of mistakes, and we can’t really afford that with AI. We should be… We have to get it right the first time.
Ray: But look at wars today. I mean, we get very upset, rightly so, at wars that kill hundreds, maybe thousands of people. But go back, you know, 80 years or so to World War Two: we had 50 million people die, in Europe and many other places as well. We don’t have wars like that anymore, because actually the weapons are more precise and don’t have all the kinds of collateral damage that we had 80 years ago. So you can see the scale of wars actually going down. On the other hand, we still have lots of nuclear weapons that could blow up everybody; that’s not really discussed very much, and it’s not an AI technology. So the future is not foretold. Things could go wrong. But I make the case that even during World War Two, things continued to get better.
I mean, if you look at my graph, if we can put that up, this is a graph I created of the power of AI. Is it up? It’s plotted on a logarithmic scale, so a straight line means exponential growth, and it shows 80 years of exponential growth.
For the first 40 years of this, nobody was following it. So it’s not like, well, we have to get to this level next year. Nobody knew it was happening, and yet it nonetheless grew exponentially. I started about 40 years ago: I created this graph, half of it, and then projected it out. And we’re right on that track. The first machine on there did 0.0007 calculations per second per dollar. The last one on here does 50 billion calculations per second per dollar, and that’s what allows large language models. Large language models didn’t exist until maybe a couple of years ago, because we didn’t have enough computation to do them. And this has a mind of its own. It’s going to continue. And that’s where all these advances in computer technology come from.
Actually, it started in World War II. The first two computers on there were done by a German, Konrad Zuse. He was not a Nazi, but his work was presented to the Nazi government. They turned it down; they didn’t see any advantage in computation.
The third one was Colossus, created by Turing and his colleagues [Alan: Colossus was designed by engineer Tommy Flowers at Bletchley Park; Turing’s earlier codebreaking machine was the Bombe]. And that was taken very seriously by Churchill, and they actually used it to decode the Nazi messages. So every Nazi message that was sent, Churchill and his colleagues were able to read. And that actually accounted for the British winning the Battle of Britain, which otherwise they would not have won because they were outgunned. But they knew what the other side was doing.
So there’s lots of stories on this line, but this is what’s driving computer technology. And this kind of exponential growth is not just true of computers. It’s true of every type of technology.
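[Alan: A quick check of the exponential claim using the two endpoints Ray cites, 0.0007 and 50 billion calculations per second per dollar, 80 years apart:]

```python
import math

start, end, years = 7e-4, 5e10, 80    # calc/s per dollar, the chart's endpoints

growth = end / start                   # ~7.1e13-fold improvement
doublings = math.log2(growth)          # ~46 doublings in 80 years
print(f"doubling time = {years / doublings:.1f} years")  # ~1.7 years
```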
Question: OK. What would be technological solutions to fight the disinformation problem?
Ray: Well, one thing is we have search engines. If you look at the major search engines, Bing and Google Search and so on, they’re really very accurate, and most people abide by them. You can actually check all the facts put out by a large language model in a search engine, and that’s actually being done. So that combats this type of misinformation. Now, again, people on their own can do something different: they can purposely put out claims that certain vaccines don’t work and that it would be terrible to use them. So we can’t control everything. And it’s actually a good thing that these are decentralized, so you’re not giving too much power to large companies. But they do use technology we already have, which is pretty good at telling falsehood from truth.
Question: Can you give some details?
Ray: Well, I don’t want to get into comparing one company against another, but all these major companies, Microsoft, Google and so on, use their own search engines to actually check what the large language models are saying. And I mean, I’ve used them many thousands of times now; I’ve never seen anything that’s inaccurate. However, I’m not the only person: hundreds of millions of people, and ultimately billions of people, will be using this. And if you go over the whole thing, sometimes it’s going to say something that’s inaccurate, and then that gets promoted virally and people get very upset about it. But it’s really actually pretty accurate.
So I think it’s a manageable problem, at least for the big companies’ large language models.
Question: Switching topics again, a big topic, not just at this conference but overall: the topic of governance, and perhaps international governance. What are your thoughts? I mean, there have been frameworks around for many years, hundreds or thousands or tens of thousands of them, and they all sound pretty similar. Let’s get a level deeper: what would be good regulatory requirements?
Ray: Well, the Asilomar conference on biotech, which was decades ago, has served pretty well at avoiding problems with biotech. You could certainly create something biological that could kill people, and yet we haven’t seen that, and most of the things that are coming out are positive for humanity. So we actually held an Asilomar conference at the same place on AI ethics, and we came up with some things there. It requires judgment to actually implement them.
But the fact is, it’s not the case that there’s no AI regulation. For example, one task is to re-create medical ethics so that we can actually make very rapid progress on overcoming cancer and other diseases using simulated biology. But you can’t say there’s no regulation in medicine. I mean, even if you use AI, and everything’s going to be using AI, there’s a tremendous amount of regulation. In fact, I think it’s too strong.
I know people who could have used a drug and weren’t able to get it; six months after they died, that drug became available and could have helped them. So I think, if anything, regulation is too strict. But there’s definitely regulation in that area, and in fact there’s regulation in every area. I mean, the regulations go on for hundreds of thousands of pages that people have to follow. So we don’t just put out a product; it’s subject to all kinds of liabilities and so on, and people really take that seriously. And that’s really where the regulation comes from. These products don’t just go out into the world; they’re used in areas which have a tremendous amount of regulation.
Question: What about requiring large language models to undergo a safety check before they are deployed? There was this open letter, or open letters maybe, to put a moratorium on the development of large language models. But as a practical matter, you don’t stop development; you could, though, put requirements on deployment. So we distinguish development from deployment. You could ask a company to do certain safety checks before they actually deploy a model.
Ray: Well, they absolutely do that. I mean, they don’t put these things out without checking them. Imagine if they put out a tremendous amount of false or inappropriate information; we don’t want to tell people who are considering suicide, ‘well, why don’t you try that?’ Which actually happened once, but didn’t happen again. So there’s a tremendous amount of checking of these things before they go out. At least that’s true of large companies. There are cases where people actually change the models to put out false information that they believe. So we see that today.
There’s a constant battle with people who misuse these things, and that’s not going to go away. But that’s why we don’t want to stop development: we actually want to use very intelligent large language models to combat the abuse from the smaller language models that common people might have and use to influence people in a negative way. If we stop development, we won’t have those larger language models to help us. And anyway, it’s useless. I mean, people in Iran or China will continue to develop them; nobody’s going to follow a rule about not developing these things. We actually need more intelligent weapons on the side of truth.
Question: OK, let me switch topics again. What would you recommend to young people who finish high school and would like to go to college? What would you recommend they study?
Ray: Right. Well, it’s actually kind of old-fashioned advice, which is to find something that you have a passion for. Whether it’s music or art or science, physics or psychology and so on, really get into that and appreciate what’s good and bad about it. And then, as we have more powerful tools, we’ll actually make more progress and you can appreciate that. But I wouldn’t say go for coding, for example, because coding will ultimately be overtaken by machines.
I mean, something like a third of the code that’s produced is already created by machines, and that’s only going to increase. But you have to appreciate what we can do with this, and we’ll ultimately have more tools to advance it. So find your passion. Some people have multiple passions. My father had one passion, which was music. My mother was a great artist. And that’s really what you should try to do.
Question: I’m trying to challenge you on this advice. I read an article, I think it was also in the New York Times a few weeks ago, an opinion piece where the authors said that following your heart is not necessarily the best advice, because the problem with following your passion or your heart is that you may fall back into culturally assigned roles. So, you know, women may not necessarily think of going into physics; they might go into fields like psychology or the social sciences, because that’s what the culture expects of them. So they said it’s not necessarily good advice to follow your heart.
Ray: Well, I don’t necessarily agree with that. If your heart is in it, you’re likely to do more creative work and appreciate it more. As for the idea of people choosing something they really don’t care about because they think society will reward efforts in that area: I don’t think that’s good advice.
Because the tools to create more intelligent technology in every single thing we do are going to increase. And there’s no such thing as taking a safe route.
We will have a lot more money. I show that in the book: there’s actually a straight line that shows the increase in the amount of money we have. So I think people will be OK even if they choose the wrong thing, because there’s going to be a lot of money. In fact, today there are already a lot of social safety programs, which we didn’t have a hundred years ago.
The very first safety program in the United States was Social Security, which happened in the 1930s, so 90 years ago. Before that, there was actually nothing from the government that would help you. Now there are a lot of programs. It’s not perfect, but I think by the time we get to the 2030s, we’ll have something like a residual income for people that need it, because we’ll be able to afford it.
Question: If you could design the curriculum for high school students, how would you design it? Would you redesign what’s currently being taught?
Ray: Yeah, I mean, education in general hasn’t changed much in a hundred years, and a lot of it is actually memorizing rote facts. If you study history and you actually get into the history of warfare and so on, people learn certain facts, but they don’t really care about them, because they just have to learn them to pass the test. Ultimately, if people really care about what they’re studying, they’ll advance it, because they have a passion for it.
So, I mean, there have been attempts. Montessori actually has a pretty good way of finding out what turns kids on and teaching them that. I think we need to really focus on the human brain. And even in childhood, human brains are already going to be amplified by technology. I mean, I come over and visit my grandchildren, three, nine and eleven years old, and they’re all on their computers and they’re all having a good time. It’s actually advancing their education. So I would try to find out what people care about and teach them that.
Question: OK, one last question. Could you give us a prediction that we could verify or falsify for the next, take your pick, two or three years? What is it that you foresee?
Ray: Well, if you follow, for example, large language models, which I think are the best example of AI, they’re already quite remarkable. You can have a very intelligent discussion with them about anything. Nobody on this planet, Einstein, Freud and so on, could do that; they might know something about their own field.
But it’s actually pretty remarkable. And this is being advanced such that every month it gets better and better. Three years from now, these things will be quite exciting. I’m saying 2029, that’s six years from now, but I think passing the Turing test will happen sooner. The Turing test really measures a kind of human level of activity.
These circuits actually operate at billions of calculations per second, whereas the connections in our brain, and we have trillions of these connections, which is what gives us our intelligence, operate at about 200 calculations per second. So obviously these machines can be much faster. But again, it’s not us versus the machines. We create these machines to make ourselves smarter, and we already connect with them. Generally the connection is outside of ourselves, but it’s still part of who we are.
Ultimately, it will go inside our brain and be part of who we are. Ultimately, most of the connections that we use will be outside ourselves. The cloud is useful because it’s backed up, so it’s not actually vulnerable to warfare: you could blow up an entire building and all the information would still be there, because it’s distributed among different places. And that’s only going to get greater in the years ahead.
Question: Thanks very much, Ray. We are at the end of our interview, so I will be looking forward, and I think we all will be looking forward, to the publication of your book. It’s called The Singularity is Nearer and it’s coming out next year. So maybe we’ll have you on our event again next year. Thank you very much. Bye bye. Thank you very much.
Apr/2023
Ray’s opinion on the ‘AI pause’ letter (‘too vague to be practical… tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields’).
Dec/2022
Video: https://youtu.be/KklEmSBlUcM
Meeting: CHIP Landmark Ideas: Ray Kurzweil.
Speaker: Ray Kurzweil
Transcribed by: OpenAI Whisper via YouTube Transcription Python Notebook (thanks to Andrew Mayne)
Edited by: Alan (without AI!)
Date: 5/Dec/2022
Highlights
– ChatGPT is a ‘sizeable advance’, but ‘not quite right’.
– Large language models (LLMs) are moving in the direction of sentience.
– LLMs carry risks, just as all tech does, including railroads.
Full edited transcript
Intro
Dr. Kurzweil is one of the world’s leading inventors, thinkers and futurists. He creates and predicts using tools and ideas from the field of pattern recognition. He invented many technologies familiar to us today, including flatbed scanning, optical character recognition and text-to-speech synthesis. He won a Grammy for creating a music synthesizer used by Stevie Wonder that was capable of recreating the grand piano and other orchestral instruments. He was awarded the National Medal of Technology. His bestselling books include the New York Times bestsellers The Singularity is Near and How to Create a Mind. Larry Page brought Kurzweil into Google as a principal researcher and AI visionary.
I’ll just mention one connection to CHIP: Ben Rice, a faculty member. When he was a student at MIT, he worked with Ray to develop a text-to-speech interface for that synthesizer so that Stevie Wonder and other non-sighted musicians could interact with its extensive visual navigation interface. The Singularity is a very important idea of Dr. Kurzweil’s: the point in time when artificial intelligence will surpass human intelligence, resulting in rapid technological growth that will fundamentally change civilization. In order to understand when machines surpass biology, Ray has delved deeply into an understanding of biology, and we’re immensely looking forward to hearing, learning and joining him in that understanding today…
Question: You’re joining us for the seminar five days after the release of OpenAI’s ChatGPT [released 30/Nov/2022; this recording 5/Dec/2022], which astounded many across the world with its ability to synthesize natural language responses to really complicated questions and assignments. If you’ve gotten to glimpse this technology, could you place it on the Kurzweil map toward the Singularity? Is it a step forward, is it a distraction, is it related in any way?
Ray: Well, large language models occurred three years ago [Alan: Google BERT, 2019] and they seemed quite compelling. They weren’t totally there: you could chat with one and sometimes it would kind of break down. The amount of new ideas going into large language models has been astounding. It’s like every other week there’s a new large language model [Alan: view the timeline and models for 2022-2023] and some new variation that’s more and more realistic. That’s going to continue to happen. This is just another step. There are some things that aren’t quite right with that particular model you mentioned [see: Alan’s illustrated guide to ChatGPT].
People have actually interacted with these things, and some people say they’re sentient. I don’t think they’re sentient yet, but I think they’re actually moving in that direction. And that’s actually not a scientific issue; it’s a philosophical issue, what you consider sentient or not. Although it’s a very important issue. I would chat with Marvin Minsky, who was my mentor for 50 years, and he said that sentience is not scientific, so therefore forget it, it’s an illusion. That’s not my opinion. If you have a world that has no sentience in it, it may as well not exist. But yes, that was a sizable advance, and there’s more to come.
5:40
Question: … What do you make of the criticism that there’s more to intelligence than brute processing speed and pattern recognition, and that if you want to pass the Turing test we need to learn more about how our own intelligence evolved? I’ll just paraphrase you: in The Singularity is Near, you compare cognition to chaotic computing models, where the unpredictable interaction of millions of processes, many of which contain random and unpredictable elements, provides unexpected and appropriate answers to subtle questions of recognition. Given this chaotic computing, how can you address Charlotte’s question about our own intelligence and the path forward for AI?
Ray: It is a good observation, but chaos and unpredictability can also be simulated in computers. Large language models do that: you can’t always predict how one is going to answer. With a lot of these models you can actually ask the same question multiple times and get different answers, so it depends on the mood of the large language model at that time. To make it more realistic, it does have to take that level of… into account when it answers. At first you could ask a question and it would give you a paragraph answering it. Now it can actually give you several pages. It can’t yet give you a whole novel that is coherent and answers your question, so it’s not able to do everything humans can do. Not many humans can do it, but some humans can write a whole novel that answers a question. That’s the challenge: it has to cover a large amount of material, have an unpredictable element, but also be coherent as one work. We’re seeing that happen gradually: each new large language model is able to cover a much broader array of material, and it can definitely produce output that is not totally predictable.
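[Alan: The run-to-run variation Ray describes, asking the same question multiple times and getting different answers, comes from sampled decoding: each token is drawn from a probability distribution rather than always taking the single most likely choice. A minimal, model-agnostic sketch; the logits are made up:]

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Draw one token index; higher temperature means less predictable output."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

logits = np.array([2.0, 1.0, 0.5, 0.1])  # made-up scores over a 4-token vocabulary
print([sample_token(logits, temperature=0.7) for _ in range(5)])  # varies per run
```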
9:18
Question: …What is your definition of intelligence?
Ray: I mean, intelligence is the ability to solve difficult problems with limited resources, including time. You can’t take a million years to solve a problem; if you can solve it quickly, then you’re showing intelligence. And that’s why somebody who is more intelligent might be able to solve problems more quickly.
But we’re seeing that in area after area. I mean, AlphaFold, for example, can do things that humans can’t do, and very quickly; or playing something like Go goes way beyond what humans can do. In fact, Lee Sedol, who’s the best human Go player in the world, says he’s not going to play Go anymore because machines can play it so much better than he can. But it’s actually not my view that it’s going to replace us. I think we can actually make ourselves smarter by merging with it, as I said.
10:55
Question: …With AI taking over physical and intellectual achievements, and individuals living longer, do you have thoughts on society, and whether individuals risk lacking a purpose?
Ray: Well, it’s good to hear from you, Sharon. That’s the whole point of our merging with this intelligence. I mean, if AI were something separate from us, it’s definitely going to go way beyond what humans can do. So we really have to merge with it to make ourselves smarter. But that’s why we create these things. I mean, we’re separate from other animals in that we can think of a solution, implement it, and make ourselves better.
Now take what human beings were doing for work 200 years ago: 80 percent of it had to do with creating food. That’s now down to 2 percent. If I had said back then, ‘oh well, you know, all these jobs are going to go away and machines are going to do them’, people would say, ‘oh well, there’s nothing for us to do’. But actually the percentage of people that are employed has gone way up, and the amount of money that we’re making per hour has gone way up. And if they had asked, ‘well, okay, but what are we going to be doing?’ and I had said, ‘well, you’re going to be doing IT engineering and protein folding’, no one would have any idea what I was talking about, because those ideas didn’t exist.
So we’re going to make ourselves smarter. That’s why we create these capabilities. And so it’s not going to be us versus AI. AI is going to go inside of us and make us much smarter than we were before. So yes I think if we did not do that then it would be very difficult to know what human beings would be doing, because machines would be doing everything better.
But we’re going to be doing it because the AI is going to work through us.
13:31
Question: …A question that relates to your idea of whether it’s a dystopian society or otherwise… people with various political and/or personal agendas harnessing the increasing power of AI for their own purposes… which will not necessarily be to the long-term benefit of humankind as a whole. So how does this balance out? …Individuals’ political and personal agendas may use AI for purposes that are not beneficial to mankind. How does that balance out?
Ray: Well I mean every new technology has positive and negative aspects. The railroad did tremendous destruction but it also benefited society. So it’s not that technology is always positive.
Social networks: I mean, there’s certainly a lot of commentary as to how they are negative, and that’s true. But no one would actually want to do without social networks completely.
And I make the case that if you measure the kinds of things we associate with positive social benefit, they are actually increasing as the technology gets better. And that’s actually not well known. I mean, if you run a poll as to whether these things are getting better or worse, people will say they’re getting worse, whereas they’re actually getting better. But it’s not that everything is positive; there are negative aspects, and that’s why we need to keep working on how we use these technologies.
15:50
Question: …The Singularity is Near. In that book you speculated that the risk of bioterrorism and engineered viruses would become an existential threat. Since then, do you think this risk to humanity has increased or decreased?
Ray: I don’t think it’s increased. I mean, I have a chapter in The Singularity is Near, and there’s also another one in The Singularity is Nearer, on risks. And all of these technologies have risks, and they could also do us in. I don’t think the likelihood of that has increased. But I remain optimistic, and if you look at the actual history of how we use technology, you could point to various things that should have gone wrong. Like, every single job that we had in 1900, a little over a century ago, is gone, and yet we’re still working and actually making more money. So the way we’ve used technology has been very beneficial to human beings so far.
17:40
Question: …AI comes with large energy demands and rare-mineral material needs to build the hardware. How do you see these international tensions, especially the interaction of pervasive AI and the climate?
Ray: I mean, computers don’t use that much energy. In fact, that’s the least of our energy needs. And that’s a whole other issue we didn’t get into: the creation of renewable energy sources is on an exponential. I have a very good chart that shows all of the renewable energies, and it’s on an exponential. If you follow that out, we’ll be able to provide all of our energy needs on a renewable basis in 10 years. At that point, we’ll be using one part out of 5,000 parts of the sunlight that hits the earth, so we have plenty of headroom. So we’ll actually be able to deal with climate change through renewable sources. In terms of what we’re using, computers are not that expensive.
19:15
Question: …Will the Singularity lead to a decrease in class conflict? Much of the gain in productivity and wealth in the last 50 years has been concentrated in the 1%, as inflation-adjusted earnings of the working class have stagnated. Are you concerned about gains in productivity due to AI being unevenly distributed? …This relates to a question about inequities that, for example, we saw exacerbated during the COVID pandemic.
Ray: I mean, my observation is that more and more people from more and more backgrounds are participating, which didn’t used to be the case. Third-world countries, in Africa, South America and so on, did not used to participate to the same extent, whereas they are participating far more dramatically today. Countries that were really struggling in terms of being able to participate in these types of advances are now participating to a very large extent. So, anyway, that’s my view on it.
Question: …A machine can easily beat the best human player at chess, but even a young child can move pieces on the physical board better than any general-purpose robot can. Do you imagine embodied machines will ever pass a physical Turing test in the real physical world? And if so, when?
Ray: Yeah, we’re making less progress with robotic machines, but that’s also coming along, and it can also use the same type of machine learning. We’re going to see, I think, a tremendous amount of advances in robotics over the next 10 years.
Question: …How do you envision society once individual brains can interface with a cloud? Will individuality still exist? It seems you imagine human intelligence coalescing into a singular consciousness.
Ray: Yes, definitely. I mean, one of the requirements of being able to connect to the cloud is that your portion of the cloud is yours and other people can’t access it. And we’re actually doing very well on that. All of our phones connect to the cloud, and we don’t see people complaining that other people are getting access to them. So we’re actually doing pretty well on that. But you’ll definitely be able to maintain your own personality and differences. I think we’ll actually be more different from each other than we are today, given the kinds of skills that we’ll develop.
Apr/2022
Video: https://youtu.be/5iFSz1orGUg
Meeting: Singularity University GISP Class of 2009 reunion/update.
Speaker: Ray Kurzweil
Transcribed by: Otter.ai
Edited by: Alan (without AI!)
Date: 16/Apr/2022
Highlights
– We’ll actually achieve human-like AI before 2029 (around six years from 2022).
– ‘The human brain still has a lot more computation than even the best language models [1 trillion parameter LLMs]… However, we’re advancing them very quickly’.
– Transformers/neural networks are ‘going in the right direction. I don’t think we need a massive new breakthrough’.
Full edited transcript
00:05
Question: What are your thoughts on the singularity now, in terms of timing?
Ray: I have a new book, The Singularity is Nearer (https://www.kurzweilai.net/essays-celebrating-15-year-anniversary-of-the-book-the-singularity-is-near) [due for release in Jan/2023]. It’s completely written; I’ve shared it with a few people. And it’s very much consistent with the views I expressed in The Singularity is Near, which was 17 years earlier!
But it has new perspectives. When that book came out, we didn’t have smartphones, we had very little of what we now take for granted. And the kinds of views that I’ve expressed with the singularity are much more acceptable. I mean, people had never heard of that kind of thing before. So, I discuss the singularity with the new perspectives of how we currently understand technology.
And it’s actually an optimistic book. I mean, people are very pessimistic about the future, because it’s all we hear on the news. And what we hear on the news is true, there is bad news. But the good news is actually better. I’ve got like 50 charts that show different indications of human wellbeing. And every year they get better. And that’s really not reported. Peter Diamandis has talked about this as well.
01:48
Question: What kind of things have surprised you about how technology has developed or how it’s affected our society?
02:01
Ray: It really hasn’t surprised me, because it’s really in line with what I’ve been expecting. There was, for example, a poll of 24,000 people in 23 countries, asking whether poverty has gotten better or worse over the last 20 years. Almost everybody said it’s gotten worse. The reality is, it’s fallen by 50%. And there’s one poll after another that shows that people’s views are quite negative, whereas the reality is quite positive. Not to say there isn’t bad news, we see that all the time on news programs. So that’s one issue that I talked about.
But we’re actually pretty close. I mean, I think we’ll actually pass the Turing test by 2029. That’s what I started saying in 1999. And my book The Singularity is Near (https://www.amazon.com/dp/0143037889) said the same thing in 2005.
We’re actually getting pretty close to that, and we already have things that come pretty close. In many ways, they’re better than humans. In some ways, they’re not quite there. But I think we have an understanding of how to solve those problems.
I think we’ll actually probably beat 2029.
And in the 2030s, we’ll actually begin to merge with that. It won’t just be us versus computers, we’ll actually put them really inside our minds. We’ll be able to connect to the cloud.
Consider your cell phone. It wouldn’t be very smart if it didn’t connect to the cloud; most of its intelligence it constantly gets from the cloud. And we’ll do the same thing with our brains. We’ll be able to think that much more deeply by amplifying our ability to do intelligent processing directly with the cloud. So that’s coming in the 2030s. Our thinking then will be a hybrid of our natural thinking and the thinking in the cloud. But the cloud part will amplify; our natural thinking doesn’t advance. So when you get to 2045, most of our thinking will be in the cloud.
Question: Have your thoughts on life extension changed?
Ray: No, in fact we’re now applying AI to life extension. We’re actually simulating biology, so we can do tests with simulated biology.
So the Moderna vaccine: they actually tested several billion different mRNA sequences, and found ones that could create a vaccine (https://sloanreview.mit.edu/audio/ai-and-the-covid-19-vaccine-modernas-dave-johnson/). And they did that in three days. And that was the vaccine. We then spent 10 months testing it on humans, but it never changed; it remained the same. And it’s the same today.
Ultimately we won’t need to test on humans. We’ll be able to test on a million simulated humans, which will be much better than testing on a few hundred real humans. And we can do that in a few days. So we’ll be able to simulate every possible antidote to any problem, and we’ll go through every single problem and come up with solutions very quickly, and test them very quickly. That’s something we’ll see by the end of this decade [2029].
We’re gonna be able to go through, very quickly, all the different problems that medicine has. The way we’ve been doing it, testing with humans, takes years; then you come up with another idea, and that takes years more. We could actually test them all, every single possible solution, very quickly. And that’s coming now. We saw some of that with the Moderna vaccine.
06:46
Question: But medicine doesn’t seem to adapt to that. The vaccine was developed before the lockdowns even began, but it wasn’t deployed until… probably 2 million people would still be alive if we had been able to deploy that vaccine immediately, which is quite a large number. And medicine doesn’t change its practices and styles nearly as quickly as technology changes.
07:09
Ray: Well, some people were skeptical because it was developed so quickly, and I think we’re going to have to get over that. But it is good that we had the vaccine; otherwise, a lot more people would have died. I’m not saying we’re there yet. But we are beginning to simulate biology, and ultimately we’ll find solutions to all the problems we have in medicine using simulated biology. We’ve just begun, and I think we’ll see that being very prominent by the end of this decade.
07:55
But people have to want to live forever. I mean, if they avoid solutions to problems, then they won’t take advantage of these advances.
We’ll have the opportunity, but that doesn’t mean everybody will do it. [For example] We have a very large anti-vax movement, possibly because the vaccine was created so quickly…
08:28
Question: What do you feel most optimistic about now, Ray, or most hopeful about?
08:34
Ray: AI is going faster and faster. I’ve been in this field since I was 12; I’ve been in it for 60 years. I got involved only six years after AI got its name, at the 1954 [19567https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth] conference at Dartmouth. And things were very, very slow. It would take many years before anything was adopted.
When I got to Google, which is about 10 years ago [Dec/20128https://www.kurzweilai.net/kurzweil-joins-google-to-work-on-new-projects-involving-machine-learning-and-language-processing], things were going faster. It would take maybe a year or two to develop something that would be adopted.
Now things are happening every month. So we can definitely see the acceleration of technology in general, and particularly in AI. Even really serious problems we seem to overcome very quickly. So we’re going to see tremendous progress by the end of this decade [2029].
09:40
Question: For AI, I’m curious, now that you’re in the middle of working on large language models, and you’ve always been around all different angles of AI: what do you think are going to be the most promising approaches to really get us to the full potential of AGI, pass the Turing test, [give us] superintelligent machines? Do you think, for example, it’s continued progression of large-scale language model type of things? Or do you see a fusion of neural and more traditional logic-based and symbolic approaches?
10:20
Ray: First of all, we need to continue to increase the amount of computation. We’re using an amount of computation now that is beyond what we can actually provide to everybody. But we can at least see what’s going to happen.
And as we increase the computation, these systems do a lot better. And they overcome problems they had before.
There are some algorithmic improvements we need. I mentioned inference before; these models don’t do inference very well. However, we have some ideas that we believe will fix that.
But it also requires more computation. The human brain still has a lot more computation than even the best language models. Right now we’re at about half a trillion parameters [for LLMs], and that’s still not what the human brain can do. When the human brain thinks about something very carefully, it can go beyond what these machines can do. However, we’re advancing them very quickly.
Every year, we’re multiplying the amount of computation by five or ten. So that’s one key thing that’s required.
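[Editor’s note: compounding that claimed rate gives a sense of scale. A back-of-envelope Python sketch, assuming the 5-10x yearly multiple were to hold from 2022 through Ray’s 2029 target; illustrative only, not a sourced forecast:]

```python
# Compound growth of compute at a claimed 5-10x per year, 2022-2029.
years = 2029 - 2022
for factor in (5, 10):
    print(f"{factor}x/year for {years} years -> ~{factor ** years:,}x total")
# 5x/year  -> ~78,125x
# 10x/year -> ~10,000,000x
```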
And we do need some changes in algorithms; I think we have some ideas of what to do. We’re exactly where I would expect to be in 2022 to meet the Turing test by 2029, which is what I’ve been saying since 1999.
And we also have to collect data, that’s a very important issue. I mentioned simulated biology, we have to actually collect the data that would allow us to simulate biology for different kinds of problems. We’ve done that in some areas, but actually collecting all that data is extremely important. And being able to organize it, and so on. So that’s happening. That’s another thing that’s required.
12:35
Question: I was talking to Geoffrey Hinton [Godfather of AI, developer of artificial neural networks, Fellow at Google] and he said that we really need a next-level breakthrough after deep learning to progress AI to the next level. I’m wondering if you agree with that, and whether you’ve seen anything on the horizon that would fit that criterion?
12:55
Ray: Yeah, well, he’s [Hinton] always been more conservative than I have. I think we have the correct methods: neural nets.
If we just amplified them with new technology, it wouldn’t quite get us there. So we do need some additional ideas. But we can actually change neural nets to do things like inference. And there was a recent paper on [explaining] jokes by a new, massive model [Google PaLM, Apr/2022, see Alan’s video: https://youtu.be/kea2ATUEHH8].
We’re actually getting there. I think we’re going in the right direction. I don’t think we need a massive new breakthrough. I think the kind of neural nets that he [Hinton] has advanced are good enough, with some of the additional changes that are now being experimented with. So yes, I do think we need some additional things, but they’re being worked on. So it’s maybe just a difference of emphasis. But I think we’re on the right path.
14:11
Question: How do you think the pandemic shifted the priorities of the future of medicine, as well as other fields you’d like to share with us?
14:24
Ray: Well, as I said, we did create one of the methods we’ve been using to fight COVID with simulated biology. I think the fact that it happened so quickly has actually partly fueled the anti-vax movement.
14:54
So I think we need to actually develop this idea. We’ve had vaccinations for over a century, but they generally take a while to develop. So the fact that we were able to do it so quickly by simulating biology is a surprise to people. And it’s actually going to go faster. The US government has a plan to create new vaccines within a few months, even three months. And it’s going to go even faster than that, because the Moderna vaccine was actually done in three days: they simulated every single candidate mRNA sequence and tested them, all in three days.
But people then wanted to test it on humans. I think eventually it will be much better to test on a million simulated humans than on 100 real humans. So we will get there. This was actually the very first time we used simulated biology to come up with something, and I think simulated biology will work for every single medical problem.
One of the key ideas is to collect the data. That’s going to be very key, because you need to collect data for each individual problem. If you have all the data, then you can run it and figure out an answer very quickly. So I’m very excited about that. That’s the type of change we needed to really break through lots of medical problems that have been an issue for many years, decades.
17:06
Question: What would you say is the most surprising thing to you over the last decade? You said most things were not surprising you, but surely something has surprised you?
17:24
Ray: I’m not surprised, but I’m quite amazed at large models. Many of you have actually talked to a large model [see Alan’s Leta AI videos]. It’s actually already become a major thing in academe. If you’re asked to write about something, you can just ask a large model.
17:58
“What do you think of this philosophical problem?” “If you substituted something else for the trolley in the trolley problem, how would that be?” And it will actually give you an answer, and a very well-thought-through answer.
And the new models that are coming out now, which are five times the size, half a trillion parameters rather than 100 billion, give you even better answers.
18:36
Question: Have you ever asked a model to predict the future of technology, and had it say something where you thought, “Oh, I didn’t think of that”? That’s a very tricky question for you, Ray…
18:50
Ray: Well, I’m not actually predicting the future. I’m just describing the capabilities that we’ll have to affect it.
18:57
Question: No, I specifically mean that your task in life has been to try and forecast the future of technology. So when a model impresses you, because it does something in that life’s task that you didn’t think of, that would be the particular bar I’m asking about.
19:14
Ray: Yeah, well, these large models don’t always give you the same answer. In fact, you can ask the same question a hundred times and get a hundred different answers. Not all of which you’ll agree with, but it’s like talking to a person.
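[Editor’s note: the run-to-run variation Ray describes is largely a product of sampled decoding. A minimal Python sketch of temperature sampling, one common mechanism behind it; the vocabulary and logits here are toy stand-ins, not from any real model:]

```python
import numpy as np

vocab = ["yes", "no", "maybe", "it", "depends"]
logits = np.array([2.0, 1.5, 1.2, 0.8, 0.5])  # toy next-token scores

def sample_token(logits, temperature=0.8):
    """Sample one token from temperature-scaled softmax probabilities."""
    scaled = logits / temperature           # <1 sharpens, >1 flattens
    probs = np.exp(scaled - scaled.max())   # subtract max for stability
    probs /= probs.sum()
    return np.random.default_rng().choice(len(logits), p=probs)

# "Ask the same question" five times: sampling yields varying answers.
for _ in range(5):
    print(vocab[sample_token(logits)])
```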
19:36
And this is actually now affecting academe. Because if somebody is asked to write about something, they can have the large model write it. You can ask it a question, and if you don’t really like that answer, just ask again; when you find something you like, you can submit it! There’s no way anybody could find out that you’ve done that, because if they ask the same question that you asked, they’ll get a different answer. Unless they can tell that it’s not your writing style; but since everyone will have these large models anyway, it’s really hard to say what your writing style is. So it’s really a writing tool.
20:24
So I wouldn’t say that surprised me, but I think it’s really quite delightful to have this kind of intelligence come from a model. This was never possible before, and it’s still not at human level. So we’re going to see even more surprises from these over the next several years.
20:52
Question: [Let me ask about] the nature of large models, especially transformer models and these kinds of universal models. First question: is it really all about computation? Is the future of innovation really all about how many nodes you can throw at it? And then, ultimately, does that become a question of how many dollars you can throw at it…
21:31
Ray: We are going beyond what’s affordable. Some of the largest models really can’t be made available to, say, a billion people. But we’re able to see what they’re capable of doing, so it actually gives us a direction.
But it’s not just the amount of computation. The amount of data is important [see Alan’s paper, What’s in my AI?]. Some of the first models were trained basically on the web, and not everything on the web is accurate. So they put out a lot of things that were basically at the level of the web, which was not totally accurate. We’re now finding ways to train on information that’s more accurate, more reliable.
And particularly if you’re trying to solve a particular problem, like “What mRNA sequences can you deploy to create a vaccine?”, that’s a very specific type of data; you have to collect that type of information. And so we’re doing that. Collecting data, as I said before, is very, very important.
And then neural nets by themselves are not adequate. They don’t do inference correctly; that’s something that’s not fully solved. I believe it will get solved, but it’s a question that’s not fully resolved.
Inference means understanding what a statement is saying and what its implications are, and being able to do multi-step reasoning. That’s a key issue, and that’s what’s being worked on now.
There are algorithmic issues and there are data issues. The amount of computation is very important, though: unless you have a certain amount of computation, you can’t really simulate human intelligence. And even with a half-a-trillion-parameter model, we’re still not at what human beings can deploy when they focus on a specific issue.
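[Editor’s note: one crude way to frame the gap Ray describes. Treating one model parameter as roughly one synapse is a major simplification, and the human synapse count is only a commonly cited rough estimate:]

```python
llm_params = 0.5e12    # ~half a trillion parameters, per Ray's figure
brain_synapses = 1e14  # ~100 trillion synapses, a commonly cited estimate
print(f"brain/LLM ratio: ~{brain_synapses / llm_params:.0f}x")  # ~200x
```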
24:15
Question: Across the need for lots of computation, there’s lots of money. Across lots of data, there’s lots of money. Today, we’re in a world where a couple of people with a laptop (or a couple of laptops) in a garage can build some tremendous innovation. But what do you think about the societal impacts if all the innovation is really built on these systems that are enormously expensive, and how does that change the economic disparity situation?
24:57
Ray: There’s still a lot you can do with a few laptops, which you couldn’t do in previous times. And lots of people are creating very successful companies without having to spend millions or hundreds of millions of dollars on these types of innovations.
And a few people are doing the training. It’s really the training that requires a lot of money [GPT-3 required the equivalent of 288 computing years of training, estimated at $5-10 million9https://lifearchitect.ai/models/]. Actually running these models is not nearly as expensive, so you can do that yourself if you use a model that’s trained by somebody else. A lot of people are doing that. Google and other companies, Microsoft and so on, are making these models available, and then you can use them. No one [single] company is controlling it, because there are multiple companies making this available. So yes, there are some things that require money, but then everybody can share that training. And that’s what we’re seeing.
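[Editor’s note: the economics here, one expensive training run shared by many users, can be made concrete with the footnoted GPT-3 estimate. Illustrative arithmetic only; the user count is a hypothetical assumption:]

```python
training_cost = 10_000_000  # upper end of the $5-10M GPT-3 estimate (footnote 9)
users = 1_000_000           # hypothetical number of downstream users
print(f"amortized training cost: ${training_cost / users:.2f} per user")  # $10.00
```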
26:22
Question: What do you think about the future of democracy? If technology is going to help somehow fix the problems [within] democracy?
26:35
Ray: I have a chart on democracy over the centuries, and it’s gone way up. We […] players in the world that are not democratic, but the number of democracies, and the number of people governed by democracies, has gone way up.
And if you go back, even to when the United States was formed, that was really the first democracy. And it wasn’t a perfect democracy; we had slavery and so on. If you look at it now, different countries have different levels of democracy, and there are problems with it, but the amount of democracy in the world has gone way up.
27:36
And I do believe that’s part of the good news for people. And we actually see democracies getting together today to fight those who are trying to oppose that. So yes, we’ve made tremendous progress. Lots of countries that are democratic today were not democratic even a couple of decades ago.
28:18
Question: I’ve always been fascinated and interested in the future of humanity in space. Do you have any predictions on when you think humanity will be a multiplanetary species?
29:01
Ray: Peter [Diamandis] is very concerned about this. I’ve been more concerned about amplifying the intelligence of life here on Earth.
I think it’s going to be a future era, beyond the singularity, when we have exhausted the ability of Earth to create more intelligence. At some point, our ability to create more computation will come to an end, because we really won’t have more materials to do it with.
I talk about computronium10https://en.wikipedia.org/wiki/Computronium, which was actually pioneered by Eric Drexler [MIT, supervised by Marvin Minsky]: how much computation could you create if you organized all the atoms in an optimal way? It’s pretty fantastic. You could basically match all of the intelligence of all humanity with about one liter of computronium.
And we’ll get to a point where we’ve used up all the materials on Earth to create computronium. Then we will have to move to another planet.
30:44
And that’s probably at least a century away [2122]. That might seem very long. On the other hand, this world has been around for billions of years, so it’s not that long.
But at that point, it really will become imperative that we explore other planets. We won’t want to send delicate creatures like humans; we’ll want to send something that’s very highly intelligent, and it will then organize the materials on other planets to become computronium. That’ll be something we’ll do for centuries after.
And then a key issue is whether or not we can go beyond the speed of light. If we’re really restricted by the speed of light, it will take a very long time to reach other places. If there’s some way of going beyond the speed of light, then it will happen faster.
Putting something on Mars, I think that’s interesting, [but] I don’t think that’ll affect humanity very much. I think it’ll be our ability to extend computation beyond Earth. And that’s really something that’s way beyond the singularity.
Listen to part of Ray’s presentation in my mid-2022 AI report…

This page last updated: 2/Sep/2023. https://lifearchitect.ai/kurzweil/