Ray Kurzweil: 2022-2024 Updates

Advising the majority of Fortune 500s, informing government policy, and in Sep/2024 used by Apple as their primary source for model sizes in their new model paper and viz, Alan’s monthly analysis, The Memo, is a Substack bestseller in 142 countries:
Get The Memo.


Oct/2024

Video: https://youtu.be/xqS5PDYbTsE
Title: The Man Who Predicted AGI Decades Ago w/ Ray Kurzweil | EP #125
Transcribed by: OpenAI Whisper via MacWhisper
Edited by: Unedited, but some formatting by Gemini-1.5-Pro-2M powered by gemini-1.5-pro-002 on Poe.com. Prompt: “Split the content of this transcript up into paragraphs with logical breaks. Add newlines between each paragraph. Improve spelling and grammar. Do your best to flag the main speaker as **Ray: ** and add **Peter: ** whenever the interviewer interjects or poses a question.”

Full transcript (unedited, some formatting by Gemini-1.5-Pro-2M)

Peter: You know, I count you as my mentor, co-founder, you know, and just partner in helping create a vision for humanity of an exponential and a positive future. Well, how long has it been since we’ve known each other?

Ray: Yeah, I’m trying to remember that. I think Martine Rothblatt introduced us. It’s got to be shortly after the, I’m sorry, XPRIZE was won. So I’m thinking it’s about 20 years now.

Peter: Okay. Yeah, it’s interesting. Very amazing. Yeah, it’s amazing. We still look the same, so.

Ray: And we still have the same mindset, which is the most important, you know.

Peter: Yeah, but it’s actually coming true, so.

Ray: It is.

Peter: You know, there’s a few interesting facts people should know. You know, you and I co-founded Singularity University and Abundance 360. I got you your first job. I don’t think people know that, which is kind of funny.

Ray: At Google, you mean?

Peter: Yeah. I mean, you had written this book, How to Create a Mind, and then you invited me to help you, to join your board. And we introduced that to Larry Page.

Peter: Yeah, so I joined your board at that company, and you were saying let’s raise, I think we were trying to raise $50 million or something like that. And you had not met Larry at that point. And I said, “Listen, Larry’s become a friend. He’s on my board at XPRIZE. I’m happy to introduce you.” So I reached out to Larry, and he said, “Yeah, I’d love to meet Ray.” And I remember the meeting. I remember the conference room. You and I walk in there, and you launch into presenting the company. What was the company called back then? Do you remember?

Ray: Patterns.

Peter: Yeah, Patterns Inc. That’s right. Yeah. And why don’t you recount what happened when you started presenting Patterns to Larry?

Ray: Well, I thought it might be useful if he made a venture investment in it. And he said, “I don’t want to make a venture investment. That’s ridiculous.” I said, “Okay, well…” He said, “Everybody you approach is going to be interested. I’ll just buy the company.”

Peter: Yeah. I remember him saying, “We have so much compute and so much data here. Why would you want to do Patterns Inc. outside of the company?” Right.

Ray: And I said, “Okay, but we’ve just started two weeks ago. We haven’t really done anything.” And he said, “Well, is it worth anything?” I said, “Well, yeah, I wouldn’t ask you to invest if it wasn’t worth anything.” He said, “Okay, so I’ll buy it.”

Peter: Yeah. And then you said, “Well, how would you, how would you value it?” And then I remember him saying, “We can value anything.” Right.

Ray: That’s right.

Peter: Yeah. It was fun. And that’s how you got your first job.

Ray: That’s right.

Peter: So what’s your title at Google these days?

Ray: Principal Research AI Visionary.

Peter: Okay. That’s good. I like that. So let’s jump in. I’m, I’m thrilled to be having this conversation. And this podcast could last a good 48 hours straight on all the subjects we have to chat about, but we’ll keep it rather succinct and brief. And I want to jump in with a question I’ve actually had for a bit. You know, you’ve talked about the singularity being in the year 2045, and you’ve spoken about reaching human-level AI by 2029. And I’m trying to understand if we’ve got human-level AI by 2029, by 2030, and ’31, and ’35. It doesn’t mean, though, typical of a human. It means typical of all humans.

Ray: Yes. So any human that can do anything, it’ll be comparable to that and better than that.

Peter: So why aren’t we reaching the singularity in 20…?

Ray: It can play Go better than any human, et cetera, for every single thing that humans can do.

Peter: I buy that. But my question is, why is the singularity out at 2045 if we’re doubling, and we’ve got 10 doublings? I mean, the power of AI is doubling not once every two years, but in some instances, it’s doubling every few months. It’s like 20 years away; 2045 is almost 21 years from today. But why is it so far out? I don’t think we’re able to predict anything by 2035, and the speed is going to be just extraordinary by then.

Ray: Well, the singularity is a metaphor, and it’s something we can’t really say much about. If information goes into a physics singularity, we can’t actually access it. It’s stuck in there, and things can happen in there, but we can’t tell what’s going on. So we’re borrowing this metaphor from physics to talk about a historical singularity. In the 2030s, we’ll have things that go beyond what humans can do, and it will seem quite remarkable, but we have humans today, so having more of that amplifies us; it’s not the same thing as the singularity. We’re going to merge with AI ourselves. That’s already different from how other people think about it. People think AI is over here, we’re over here, and there’s a difference. We’re actually going to merge the two together, and we’re going to be able to do everything that AI can do ourselves. Right now, if you use a language model, it doesn’t seem like it’s ourselves; it seems like it’s somebody else. We’re going to do that with things like the future version of Neuralink, which allows you to actually access your own brain. Neuralink, for example, is implanted in two paralyzed patients today. They can control a computer with their brain, actually as well as people can access their own computer. But accessing your own computer isn’t like accessing your own mind. If I want to tell you something, and you look it up on your computer, it takes you five seconds, 10 seconds. It doesn’t just pop up instantly. We’re going to be able to do that in the 2030s. We’re going to have nanorobots that are much more expansive than Neuralink or Synchron today. And by 2045, we’re going to expand our own intelligence a millionfold. That’s so remarkable that we can’t really say what it will be like. And that’s why we use this metaphor.

Peter: Do you believe we’re going to have an intelligence explosion? There’s been a lot of conversation about recursive AI, self-programming and self-improving, that will lead to a hard takeoff with an extraordinary acceleration of computational capacity.

Ray: No, but it’s going to be exponential, and exponential seems very fast. There are positions where, once it gets established, boom, it iterates in a fraction of a second and we get the singularity. Whereas I say we’re going to go exponentially towards it, and exponential gets to be faster and faster. So it seems similar, but it’s actually the slow school of the singularity. We’re going to get there exponentially.

Peter: You know, it was interesting. Last year I had yourself on stage with me at the Abundance Summit, which was awesome. Geoffrey Hinton was with us. Elon Musk was with us. And I loved that, you know, Elon said, “Yes, Ray is prescient, and he’s conservative.” So I think it’s the first time you’re being called conservative about your AGI estimates.

Ray: Well, AGI. AGI is going to be…

Peter: Do you still believe it’s 2029?

Ray: Yes.

Peter: And he’s actually saying next year.

Ray: Yeah. So I’m conservative compared to him. I think saying by the end of the decade is conservative. It could happen faster. I see no reason to increase my estimate. I said 2029 in 1999.

Peter: Yeah, 30 years.

Ray: And people like Geoffrey Hinton were there. Stanford actually organized a conference. Several hundred people came, and the consensus, 80% of them, thought it would take 100 years, including Geoffrey Hinton. And now he’s saying that he was wrong, that it’s actually more like 30 years, like what Kurzweil was saying in 1999. But no one was saying 30 years at that time.

Peter: Yeah. And for those who don’t know, Ray’s predictive accuracy has been outstanding. If you Google it, there are many cases in which the predictions in all of your books are documented, with dates for when you said things would happen. Allowing a leniency of 12 to 24 months, I think your accuracy rate is at 86%. So not too bad. Not too bad at all. The other thing that happened at that summit last year at the Abundance Summit, and I hope we’ll have Elon and you back next year, is that…

Ray: That happened a year ago. And that seems like ancient history.

Peter: Yeah, it is ancient history.

Ray: But one of the things that was…

Peter: It’s happened in the last year. It’s just remarkable. And it’s going to keep happening faster and faster each year. So 2029 for AGI is conservative, with the predictions accurate to within 12 to 24 months. So one of the comments made by both Elon and Geoffrey Hinton, and congrats to him for the Nobel Prize, was that there’s an 80% chance AI is going to turn out great for humanity, and a 20% chance we’re screwed. I’m curious what your comments are about…

Ray: Say that again?

Peter: So Elon and Geoffrey Hinton said that, looking forward at how AI will impact humanity, there’s an 80% probability that humanity is going to survive and thrive and AI is going to support us, and a 20% chance that AI is going to be disruptive to humanity. And so I’m curious if you feel that’s correct, and what do we do to minimize the downside?

Ray: Yeah, well, I deal with that a lot in my book.

Peter: And for those who need to know, Ray’s most recent book is The Singularity is Nearer, the super imaginative title as a follow-on to The Singularity is Near, which really began our relationship. I read that book when I was trekking through… where was I? I was in Chile trekking in the mountains there. I hadn’t met you at that point.

Ray: You hadn’t met me, no.

Peter: And I had this backpack, this huge hardcover book was a significant amount of the weight I was carrying. And when I finished reading this book, I was like, “OK, I know a lot of this stuff, having spent a decade at MIT with folks like Eric Drexler and reading your previous books. But,” I said, “there’s no place on the planet that anyone can go to get an overview of all of these topics to understand what they mean.” And I wrote down at the end of the book International Singularity University, which became Singularity University, which I then approached you, and instantly… we’re having lunch, I think. And that actually… yeah, go ahead.

Ray: Well, I make decisions very quickly.

Peter: Yeah, instantly you said, “Yeah, let’s do it.”

Ray: That’s persisted throughout my life for decades. I make decisions instantly. So I said yes.

Peter: Yes, you did. And we launched it at TED and at Ames. And Larry Page was there. He came on stage and said, “Yes, this should exist.” And so Singularity has done really well. And then you and I started the Abundance 360 membership, which is Singularity’s highest-level membership, and the Abundance Summit. But going back to the 80/20 probability, what do you think… do you give us a 20% chance of having problems?

Ray: Well, we’re going to have problems. I mean, we have benefits and problems already. Fire warms us, cooks our food, and also burns us. It does both of those things. We can be mindful that fire can burn us, so we have to avoid putting ourselves near fire. We already see things. I mean, I can create a fake you, and I can have you saying things that you would never say, and it looks real. That exists today. That’s affecting the election, which is a few weeks away. But I don’t think we would want to get rid of AI. Its benefits are already quite enormous. And it’s going to get better, literally. If we talk about something from a year ago, it seems like ancient history. So we’re not going to want to get rid of it, and we’re going to actually want to learn its problems. People present problems that AI will create, and they assume that all we’ll have to address them are the things we can do today, without realizing that our capabilities are also going to expand. We’re going to be able to deal with problems that we can’t deal with today. I mean, this is really an increase in our intelligence. And would we want to not be intelligent, to have human beings be the intelligence of a mouse? Maybe that would be good; we wouldn’t be able to develop atomic weapons that way. So yes, intelligence has brought us problems that we didn’t have before. But I mean, I’ve given you a few charts. US personal income.

Peter: Yeah, let’s go ahead and put on the charts one second for those on YouTube, which is most of our audience. Let’s take a look at these charts.

Ray: I mean, in constant dollars, the amount of money the average person in the United States makes is 10 times what it was 100 years ago. We see literally an exponential curve from 1774 up through 2022, just a meteoric rise. And the important thing is that these are constant-year dollars; they’re not inflated dollars. All of this is driven by the exponential increase in computation. Actually, that’s my most important chart, and it’s plotted on a log scale, where each level is 10 times the level below it. It starts with the Zuse computer, built by Konrad Zuse, a German, in 1939. He was not actually a supporter of Hitler, but it was shown to Hitler, and Hitler had no interest in computation. The third computer on this chart is Turing’s computer, which was shown to the leader of the Allied forces, and we used it to decode the Nazi messages. Anyway, this is a straight line on a log scale, which means exponential growth. It started at 0.000007 calculations per second per constant dollar. In 2024, the NVIDIA chip, the B200, delivers half a trillion calculations per second per constant dollar. So for the same price, we can now get a 75 quadrillion-fold increase in the amount of computation. And that’s what’s driven it. That’s why we didn’t have large language models decades ago or even three years ago. We began having them two years ago. If we compare what we had two years ago to today, it’s remarkable. So this constant increase in the amount of computation for the same money has driven this AI revolution that started two years ago.

Peter: You’ve deemed this the law of accelerating returns. Moore’s law, which involves integrated circuits, is one segment of it.

Ray: Right. Moore’s law has to do with integrated circuits. But this started with relays, went to vacuum tubes, discrete transistors, integrated circuits, and now we’ve gone beyond integrated circuits.

Peter: And you don’t see any variation in this curve, and you don’t expect this to slow down or stop.

Ray: No. In fact, I found this chart 45 years ago and felt it would continue. And it has continued exactly as I found it 45 years ago. So I had half of this chart, and it’s continued on to trillions of calculations per second per constant dollar. So it’s pretty amazing. And this is going to continue. We made the same progress with software as we have with hardware. So the actual value is the amount of computations we get from software and from hardware both. So it’s actually even more expansive than what you see here.

Peter: Yeah, Elon was saying on our stage last year that he’s seeing 100x per year if you include computation and algorithmic efficiencies.

Ray: Well, that’s about what we’ve done with the kinds of computations we’ve done with the large language models. And if we compare what we have today to two years ago, it’s something like that. So it’s pretty extraordinary.
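For readers who want to check the arithmetic Ray walks through above, here is a minimal sketch. The 1939 and 2024 price-performance figures are taken from his description; treating the whole 85-year run as one smooth exponential is a simplifying assumption for illustration.

```python
import math

# Price-performance of compute, per Ray's chart:
# the Zuse machine (1939) vs the NVIDIA B200 (2024),
# both in calculations per second per constant dollar.
start = 0.000007   # calcs/sec/$ in 1939
end = 0.5e12       # calcs/sec/$ in 2024 (~half a trillion)
years = 2024 - 1939

fold = end / start
doublings = math.log2(fold)

print(f"Fold increase: {fold:.1e}")        # ~7.1e16, i.e. ~75 quadrillion
print(f"Doublings: {doublings:.0f}")       # ~56
print(f"Average doubling time: {12 * years / doublings:.0f} months")  # ~18
```

At roughly one doubling every 18 months over 85 years, the hardware curve alone is far slower than 100x per year, which is consistent with the point above that Elon’s figure includes algorithmic efficiencies stacked on top of the hardware gains.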

Peter: Amazing. I believe that the large language models should be called large event models because large language models, yes, they’re actually pretty fantastic with language, but they’re not just applied to language. They’re applied to pharmaceuticals. They’re applied to all kinds of things other than language. So it really should be large event models.

Peter: So I’d like to turn the conversation to a couple of questions that have been on a lot of people’s minds, and I want to put them to bed because it causes fear and excitement. The first is concerns of job loss. And I think you and I have both been consistent on our positions here. But do you see significant job loss in 2025, ’26, ’27 as a result of humanoid robots, AI-driven humanoid robots coming online, and large language models entering this field?

Ray: Well, I mean, we have to consider what jobs, what value we get out of jobs. If it’s actually to explore new areas, find out new things that we didn’t know before, that’s very exciting. That’s what I’ve tried to do with my life. And I can do that more if I actually have more intelligence. I mean, I have a certain amount of intelligence, but it’s limited. I’ll have a lot more if I can merge with the intelligence that we’re creating. But people are very eager to retire because they really don’t like their jobs. The jobs are things like: we’ve got a bunch of tables that have been used, and we have to clean them up, or clean the bathroom. People take those jobs because it’s what they need to put food on the table for their kids or get insurance. It’s not what they dreamed of doing as a kid. And that kind of job is what we’re going to be able to have done for us. Now, that’s going to lead to some kind of dislocation. For example, large language models can code already. They’re not quite at the same level as a professional coder, but they’re going to get there pretty soon. So we’re going to have to relearn how to apply our new intelligence to create new kinds of capabilities. That’ll be very good. That’s what we’ve done anyway. 80% of people worked in food production 200 years ago.

Peter: Farmers, yes.

Ray: Two percent today. So we’ve lost all those jobs, and yet we actually have more people working. If you had said back then, “Well, don’t worry about losing your job, you’ll become a social media influencer,” no one would have known what you were talking about. So we’re going to have new types of jobs, things that we can’t even imagine today. There are going to be problems with figuring this out, and it’s going to go very, very quickly. But ultimately, we’ll be happier for it. I think we will give people money so they can manage while they’re reconsidering what kind of work they will be able to do.

Peter: Do you think UBI will come online, Universal Basic Income?

Ray: Yes, I think in the 2030s.

Peter: Would that come… I’ve heard Jeff Bezos talk about we’re going to tax the robots and the AIs that take the jobs, and then we’ll use the income from that as the tax base to then provide UBI.

Ray: People have different ideas. I’m not sure we’ll follow anybody’s ideas, but we’ll figure that out. I mean, we’ll have to do that. People won’t have money to buy products.

Peter: Another question along these lines is, I think humans inherently need some level of challenge and purpose in life to feel happy. And I’m curious, as we head towards digital super intelligence or artificial super intelligence, whatever name you like.

Ray: The digital super intelligence is going to be within us. It’s not like we’re here, and the AI is over there.

Peter: So we had that conversation on the Abundance stage last year too. And it’s interesting, right? Because that effectively means we are going to speciate. There are going to be those who choose to merge with AI and become enabled, empowered, and those who choose not to. And then you’ve got the scenario with Her, the movie Her, in which the AIs get bored with us and just take off and go away. So will we link with AI, I guess is one of the questions.

Ray: Well, we will. I mean, 15 years ago, I presented something like this [holds up a phone] and asked, “Are you going to want to carry this around every day, have it with you all the time?” People said, “Nah, maybe once a day I’ll look at it.” Recently I asked everybody, “Who has their cell phone?” Literally everybody had their cell phone. So it’s not like there are certain people that carry their cell phone and certain people that don’t. Carrying a cell phone has a problem, though. If you want to get information from it, it takes a few seconds. It can take like 10, 15, 20 seconds. That’s actually too long. It would be better if it just popped up in your mind, like when you try to remember something and you remember it instantly. That’s how we want this new intelligence to be. We don’t want to have a separate thing. I mean, I have to carry this around all the time. What a pain in the neck, huh? Or a pain in the butt.

Peter: Yeah, it is. But we do it. So you think purpose is going to be, we’re going to up-level our purpose because we’re able to now take on new huge aspirational goals because we’re more intelligent and capable?

Ray: Yes, absolutely.

Peter: As do I.

Ray: Would we want to stay at the level of a mouse? I mean, a lot of the problems we have are because of our intelligence. We created atomic weapons. Mice didn’t do that. We did that, because of our added intelligence. So we’ve actually created problems, and people think of new problems created by the additional intelligence we’ll have. I guess I’m advocating that we’ll become more intelligent. I think that’s beneficial, and we’ll be able to do things that are aspirational.

Peter: You know, today when I’m on stages and I speak about a future of connecting with the cloud, of brain-computer interface, of connecting our neocortex with the cloud, and I ask people, “How many of you would like to do this?”, there’s always a good 20% in the audience, but it’s not 100%, because I think they fear the unknown. I think it’s very similar to when you asked the question 20 years ago, “How many of you want to carry this around with you everywhere all the time?”

Ray: You know, with Neuralink: we humans communicate at something like 40 bits per second. I think that’s the approximate speed there.

Peter: But our thoughts actually occur to us much more quickly than that.

Ray: They do internally, sure.

Ray: Yeah. And that’s what we’re going for. We’re going to actually be able to create internal speeds like we have with our own brain. That’s what we’ll have in the 2030s. We don’t have that today.

Peter: Yeah, we just saw two Nobel Prizes. Obviously, the physics prize that Geoffrey Hinton received, and then the chemistry prize that Demis Hassabis and John Jumper, both at DeepMind, received for AlphaFold. Both related directly to AI.

Ray: Yes, it was very much for AI. It was a great year.

Peter: And here’s the question. It seems to me like I would imagine in the next two to 10 years and then beyond, almost every Nobel Prize in chemistry, math, and physics will be enabled by, or directly discovered by, AI. Do you agree with that?

Ray: Yes. I mean, right now, 95% of the Nobel Prizes in medicine go to people with MDs. I think they should get rid of that requirement, because computer scientists may have studied anything, but they know computer science, and they can develop AlphaFold, for example. Protein folding stymied human beings until they got AlphaFold.

Peter: Yeah. I remember when I was in medical school, the huge grand challenge was, “Can you predict the folding of a protein?” It was unknown. With supercomputers, some people could do that, but not very accurately, and we had only done a few hundred thousand. Then, in one year, we did 200 million proteins.

Ray: And then with AlphaProteo, we can actually design new proteins that bind to a given target protein. We can create something, for example, that goes into a cell, determines whether or not it is dividing cancerously, and if it is, sets a flag. If the flag is set, it destroys that cell. So basically, there would be a cure for cancer, and we can do that with AlphaProteo. We still have to actually design the protein, so there are a few steps left, but that gets us a very large fraction of the way to curing cancer, for example.

Peter: Let’s switch, on that note, to the subject of longevity escape velocity, something that we’re both passionate about. In fact, I’ve got my longevity cup here, which says on the backside, “What would you do with an extra 30 years of healthy life,” which is a lowball number.

Ray: Well, also, I mean, when you get to 30 years of extra life, it’s not like nothing’s going to happen during those 30 years. That’s the incredible part, right? Every year we’re going to be able to develop new things. For example, all of our organs, what do they do? They either put things into the bloodstream, or they take things out, except for the heart and brain, which are a different matter, but we can also deal with that. The lungs, for instance, put in oxygen and take out carbon dioxide. And we’re actually developing those. I’m going to a board meeting of United Therapeutics next year, and we’re actually developing lungs, hearts, kidneys, and so on. It’s amazing, and it’s only a few years away. So literally all of our organs will be redeveloped so they’ll be much more reliable. Right now, people lose their lives because one of their organs doesn’t function properly. Right now, you go through a year and use up a year of your longevity, but you’re getting back approximately four months from scientific research, so you’re only losing about eight months a year of your longevity. By the early 2030s, around 2032, depending on how diligent you are, you’ll live a year, give up a year of your longevity, but get back a full year from scientific progress. Beyond that point, you’ll actually get back more than a year, so you’ll actually go backward in terms of time. Now, that doesn’t guarantee you’ll live forever. You could have a healthy 20-year-old, compute his longevity as many, many decades, and he could die tomorrow, probably from an accident. We’re also dealing with that. Self-driving cars, for example, have almost no accidents. We lose 40,000 people a year to human drivers. Eventually, self-driving cars will have essentially no accidents. But anyway, it’s an amazing time.

Peter: I just tweeted out this morning a quote from a paper by Dario Amodei, the CEO of Anthropic, who considers you one of the great visionaries. You’re a hero of his. And he says, “It is my guess that powerful AI could at least 10x the rate of these biological discoveries, giving us the next 50 to 100 years of biological progress in the next 5 to 10 years.”

Ray: And it’s going to keep getting faster and faster. It’s not like we just go through one step, and suddenly we go from getting back four months a year to 12 months. It’s going to go faster and faster. And we’re going to deal with accidents, and actually be able to back up our brain, back up our heart. It’s going to be very hard to imagine how you could die. And people don’t really want to die. You ask people, “Well, do you want to live to 120?” and people are negative about that, because they think of people that they’ve met. They haven’t met anybody who is 120, but they’ve met 95- and 100-year-olds, and they don’t want to be like that. But we won’t be like that. And people say, “Well, I don’t want to live past 95.” But when they get to be 95, if they have a sound mind and body and you ask, “Do you want to die tomorrow?”, the answer is no, unless they’re in horrible pain. And obviously, we want to avoid that as well.

Peter: Yeah, I agree. Now you’ve been very, again, part of your predictions, you’ve been very specific in saying that we’re going to reach longevity escape velocity by the end of the year 2030. I’ve had this conversation with George Church and David Sinclair, and they are placing it in the mid-2030s. That’s still, for anybody listening, that’s the next six to 10 years. And so I think your advice is, “Don’t die from something stupid between now and then,” right?

Ray: Right, exactly. Right now we’re dependent on our own body, which is very problematic. We’ve got all these different organs, and if one of them doesn’t function quite correctly, and it’s not your choice, you could die. But I really believe that if you can hold on for five, maybe 10 years, we can fix all of these problems that make us die more quickly than we want.
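As a toy illustration of the longevity-escape-velocity arithmetic Ray lays out above (spend a year of longevity each calendar year, get back roughly four months from research today and a full year around 2032), here is a minimal sketch. The linear ramp in the payback rate is purely an illustrative assumption, not Ray’s model.

```python
# Toy LEV model: each calendar year costs 12 months of remaining life
# expectancy, while research "gives back" some months. Escape velocity
# is the point where the payback reaches 12 months per year.

def months_given_back(year):
    # Illustrative assumption: 4 months/year in 2024, rising by
    # 1 month each year, so it crosses 12 months/year in 2032.
    return 4 + (year - 2024)

remaining = 30.0  # remaining life expectancy (years), hypothetical person in 2024
for year in range(2024, 2037):
    remaining += months_given_back(year) / 12 - 1
    note = " <- escape velocity" if months_given_back(year) >= 12 else ""
    print(f"{year}: {remaining:.1f} years remaining{note}")
```

Under these assumptions, remaining life expectancy bottoms out just before 2032 and then starts climbing, which is the “going backward in terms of time” effect Ray describes.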

Peter: Did you see the recent news that scientists have been able to create the Drosophila’s connectome? They were able to, using a scanning electron microscope, actually slice the brain of a Drosophila so thin and then use AI to map all the synaptic connections. It’s the first full brain that was actually mapped. And I find that amazing.

Ray: Yeah, but we’re not going to use actual brain matter to expand our brain. I mean, the brains of computers are already much more capable. They can do trillions of calculations per second, while our own brain goes at about 200 calculations per second. So we don’t want to use our brain matter to expand our brain.

Peter: Yes, true. But understanding how the brain functions to avoid neurological…

Ray: All kinds of things. I mean, literally every day there’s fantastic progress. I just think about a year ago, it seems like ancient history.

Peter: Let’s go a year forward. I am curious, being the Oracle of the Singularity, Ray, that’s my nickname for you. What do you have for us for the year 2025? What do you think is likely? Or maybe broaden it to ’25, ’26, ’27. What are we likely to see in the next one to three years?

Ray: Well, I mean, just based on conversations I’ve had in the last few days, we can already take an idea we have and transform it into a movie. Today the movies are not quite there; they wouldn’t convince you they were done by a person. Although Google actually has a thing where you can take anything… I fed in my entire book, just fed in the entire book, and said, “Have a podcast between two people that talks about the summary of the book.”

Peter: It’s called what? It’s called NotebookLM, isn’t it? Yeah. It’s an amazing product.

Ray: It actually got the right summary because there’s a lot of ideas in the book, some of which are not that important if you want to talk about the summary of the book. It actually picked the right things and had two people that sounded human interacting about a summary of the book, which was better actually than most summaries that people have created on their own. That’s today. Suppose you say, “Well, okay. We got these two people, but I want to actually see them. I want them to actually be in some kind of situation.” We can’t quite do that today, but that will happen within a year or two.

Peter: Yeah, I believe that. I just saw one of the Avatar-type companies has created the ability to create your Avatar and then have it join a Zoom meeting and represent you in form and voice. You can be attending 100 or 500 meetings. You and I used to joke, “I’ll have Ray, two of 10, meet with Peter, three of 10 in the meeting tomorrow afternoon,” and be able to be in multiple places at once. That’s pretty extraordinary.

Ray: Yeah. That’s going to happen. I don’t know if it’s one year or two years or three years, but it’s happening. It’s in that time frame.

Peter: Any other fun predictions or conversations you’ve had in the back halls of Google?

Ray: Well, all the things that we do… I mean, what’s really exciting to me is this: when we created pharmaceuticals, we used to go to a person who had some experience, who had some idea of what might be a pharmaceutical intervention, and they would work for 10 years testing it on people, and maybe, if they were lucky, they would find some pharmaceuticals. Most of the pharmaceuticals on the market today were done that way. But when we were doing the analysis to find the COVID vaccine, they made a list of all the different mRNA sequences that might work against COVID, and they had several billion of them. You couldn’t actually test all several billion on humans; that’s impossible to do. So they actually simulated it, and the simulation took two days. They tried all several billion in different ways, eliminating this patch and that patch, and they found the one that worked, and they came up with a vaccine. They actually created that vaccine in two days. Now, we then tested it on humans, which took 10 months. We will be able to eliminate that too, by testing it with simulated biology in a few days, and we’ll be able to test every type of medication we want against cancer and so on very quickly. It’ll be millions of times faster than what we’ve done.

Peter: Especially with AlphaProteo, right?

Ray: That’s part of the process, to actually come up with proteins that can do that. Human trials are slow, they’re risky, they’re expensive, and they take a long time. We’ll be able to do this much more quickly, literally millions of times faster, over the next few years. So I think we’re going to achieve that, and already there are things on the market that were done that way for cancer and so on. I know people who have cancer and are actually trying these new things. It’s going to be really fantastic over the next few years.
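The brute-force pattern Ray describes, simulate every candidate and keep what works, reduces to a very simple loop. A heavily simplified sketch follows; the random candidate generator and the scoring function are toy placeholders standing in for a real biophysical simulator, not anything used in the actual vaccine work.

```python
import random

BASES = "ACGU"

def random_mrna(length=30):
    # Toy candidate generator; real pipelines enumerate designed variants.
    return "".join(random.choice(BASES) for _ in range(length))

def simulated_score(seq):
    # Placeholder "simulation": GC content stands in for whatever property
    # (binding, stability, expression) a real simulator would score.
    return (seq.count("G") + seq.count("C")) / len(seq)

# Screen a large pool entirely in software, then carry only the best
# candidate forward -- the step Ray says took two days for COVID.
candidates = (random_mrna() for _ in range(100_000))  # billions, in the real case
best = max(candidates, key=simulated_score)
print(f"Best candidate (score {simulated_score(best):.2f}): {best}")
```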

Peter: I want to go to a few questions from my Twitter audience in our last segment here. First question is, do you say please and thank you to your AI, to Gemini, when you’re communicating with it or with ChatGPT or whatever you’re using?

Ray: Yeah, I haven’t been convinced that they’re human yet. Maybe that’s just an old habit of mine.

Peter: I definitely say please and thank you. I talk to them and it’s like, I feel like if they become conscious, I want them to know I was respectful from the beginning.

Ray: Well, that’s good. You’re more advanced than I am. I’ll get there.

Peter: All right. Next question here. Will humanity… this is from Anna Panart… Will humanity split into two, an AI-merged set of humans and then the original Homo sapiens species? What do you think? Are we going to speciate?

Ray: No. I mean, I asked 15 years ago whether people would carry these around [gestures to a phone], and I would say maybe 80% said they would not. But that 80% is carrying this around today. I mean, how many people don’t carry their cell phone around? It’s approximately zero. If you actually go out and you don’t have it, you’ve got to go back and get it.

Peter: Yes, that’s true.

Ray: So the answer is that if you ask people today, some people would say no. But when we actually get there, the advantages are going to be so great that everybody’s going to have it. And it’s going to be a lot easier, because you won’t actually have to carry this around. It will be inside your mind.

Peter: Next question comes from Dustin Headed. He says, “If you could upload your consciousness into a robot,” and I think we will be able to, “would you choose to retain your human flaws or opt to be a perfect version of Ray?”

Ray: Well, right now we have really just one body we can be. Virtual reality has given us a little bit of flexibility on that. But the type of bodies you can get with virtual reality aren’t quite there. They will be there. Ultimately, we’ll be able to create new types of bodies. We won’t be limited to one.

Peter: That’s a great point, right? Different bodies for different occasions. And you can have flaws or not have flaws and so on. And what we’ll accept as our body won’t necessarily even be a human body. You’ll be able to play games and so on; that’s one way you have a different type of body. We won’t be limited to one body per person.

Peter: I love that. Torsten HQ asks, “What are your recommendations for a teenager today for being ready for the future?” Right? University, trade, travel, hard skills, soft skills. That’s a question I’m often asked as well. I’m super curious. I have two 13-year-old boys. I don’t think school is preparing them for the future. What’s your recommendation?

Ray: Right. Well, it’s not just for teenagers. I mean, it’s also for young kids and also for old kids like you and me. You want to learn the passions that come from different types of activities and what benefits they can then provide. And then you can actually create new types of institutions with new types of intelligence that we don’t have yet, that we will have in the future. So you want to find out what is beneficial from everything that we put effort into. You want to be courageous. You want to follow your dreams despite the skeptics. You want to see an exponential future.

Peter: Yeah, I answer that. I want my kids to find their passion and become passion-driven or purpose-driven in that regard. Ash Stewart asks, “Will AI or ASI lead to more centralization or decentralization?” And that’s a fascinating question on a government side, on an organizational side. How do you think about that?

Ray: Well, what do you mean by centralization?

Peter: I think what they mean is, would a government be enabled with AI? And I’m reading into their question here because it’s not my question. Would a centralized government like China benefit more from AI, or would a decentralized government benefit more? The idea of communism never could have worked with humans trying to run the supply-demand curve, and capitalism was the ideal marketplace for that. But the question is, would an all-knowing AI be able to do a better job moving society in a direction?

Ray: Well, I think individual people will be able to do more than they can do today, because we’ll have many more skills. You brought up certain mathematical ideas; a lot of people in the audience may not have heard of them before, but we’ll all be able to understand them. It won’t have to be delegated to a lot of different people. We’re basically taking the creations of every person who’s ever demonstrated anything, and we will actually have access to all of it inside our own minds. So it works both ways, but it is democratizing. And molecular assemblers, when we get to those, and I think that’s going to be more like the 2040s, will let us build anything we want anywhere. So the ability of one person is going to become more democratized as we move forward.

Peter: Here’s a two-part question. The Turing test. I feel like we passed the Turing test a while ago. The originally described Turing test, right? And we just didn’t notice. Do you agree with that?

Ray: Yes. I said we’re going to hear early claims that computers have passed the Turing test, and we won’t actually pay attention to them. By the time it becomes clear that computers can pass the Turing test, it’ll have been said so many times that we’ll dismiss it: “Oh, that’s old hat at this point.” And we’ve already gone past that first point. We’ve already heard that GPT-4 or so can handle the Turing test. So by the time it becomes obvious, it won’t be a question anymore. That’s probably four or five years out. We’re in the midst of that now.

Peter: You know, we saw OpenAI’s o1 score an IQ of 120. About a year ago, we saw Anthropic’s Claude 3 score an IQ of 101. The average human IQ by definition is 100. I know that Google has…

Ray: Right, but there are some people that have a higher IQ.

Peter: Sure. So it’s not equal to all people, but that’s the average. The capability of so-called large language models is going to increase. So it’s 120 today, it’ll be 130 next time, then 140. So that’s my question: where do you think we’ll be a year from now? How fast is that escalation likely to be?

Ray: I’d say that by 2029, it’s going to be far beyond what humans can do.

Peter: I think that is definitely true. But I’ve heard projections that we’ll hit 150. And the question becomes, do you think that could happen by early next year, or next year? Which would put it in the top 0.0001% of humanity.

Ray: And it also has some things that no human being can do at all. I mean, you can ask it any question and it can give you a very good response. If you don’t like that response, you can ask it again, and it can say the same thing in different words, but also intelligently. No human being today can do that. So it has that advantage. It knows everything that anybody has ever dealt with.
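For reference, the rarity of a given IQ score under the standard assumption (a normal distribution with mean 100 and standard deviation 15) can be computed directly. This is a generic statistics sketch, not tied to how any particular model was tested.

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
for score in (101, 120, 130, 140, 150):
    p_above = 1 - iq.cdf(score)  # fraction of people scoring higher
    print(f"IQ {score}: top {p_above:.4%}, roughly 1 in {1 / p_above:,.0f}")
```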

Peter: Ray, I want to thank you for your mentorship, for your leadership, for your vision, for all that you’ve done.

Ray: Yes, well, thank you. It’s been a two-way pleasure.

Peter: Yeah. I’m looking forward to the next 30 years together, maybe 100, we’ll see, you know.

Ray: Yes. Well, the next five years are going to be pretty spectacular. Unbelievably spectacular, and it is such a pleasure to be alive. You know, the one reason I think we’re living in a simulation is that this is the most extraordinary time ever to be alive, and the coincidence of being here now… well, hey.

Peter: But you talk to people that think it’s a terrible time to be alive, so…

Ray: Yeah, that’s a problem. They don’t want to have children, so.

Peter: Yeah, well, you know, I remember sitting down with you and Neil Jacobstein in the early days of Singularity University where the idea of my first book, Abundance: The Future Is Better Than You Think, came into play. Because as we’re sitting there talking about it, and I’m saying, wow, technology is the force that turns everything from scarcity into abundance over and over and over again, and people’s default mindset of fear and scarcity is blinding them to what’s going on in the world.

Ray: Yeah, it’s true.

Peter: Yeah. But that will continue. Well, hopefully, we can help people see it a little bit differently.

Ray: Right.

Peter: Have a beautiful day, my friend, wherever you are in the cosmos and thank you for spending the time with me. I hope we can do this again.

Ray: Yeah, absolutely. Thank you.

Feb/2024 (recorded Mar/2023)

Video: https://youtu.be/Iu7zOOofcdg
Title: Ray Kurzweil Q&A – The Singularity, Human-Machine Integration & AI | EP #83
Transcribed by: OpenAI Whisper via MacWhisper
Edited by: Unedited, but some formatting by Claude 2.1 on Poe.com. Prompt: “Split the content of this transcript up into paragraphs with logical breaks. Add newlines between each paragraph. Improve spelling and grammar. The speaker is **Ray: ** and add **Question: ** whenever an audience member poses a question.”

Date: The annual A360 Summit live from Los Angeles, CA on March 20-23, 2023. Released 2/Feb/2024.

Full transcript (unedited, some formatting by Claude 2.1)

Ray: It seemed to me that a huge revolution was going on. Then it changed. I am optimistic, but I’m also worried about it. I’ve been in the field of AI for 60 years. I was 14. I met Marvin Minsky, who was in his 30s, and Frank Rosenblatt, who created the Perceptron, the first popular neural net. But in the early years, it was really not clear that neural nets could do anything successful. And they’re showing now that this was really the path to artificial general intelligence. It’s not just us versus AI. The intelligence that we’re creating is adding AI to our own brains. 2045 is when I said we will actually multiply our intelligence a millionfold. And that’s going to be true of everybody. And we’ll be able to get rid of, you know, the terrible lives that we see through poverty and lack of access to information. Great. Good morning. Good morning to you. It’s great to be with you, Peter, and also Salim. I’ve done lots of presentations with Peter. It’s really remarkable what you’ve contributed.

So I just want to share a few ideas. I’ve been following large language models for almost three years. There was LaMDA, now Bard, from Google, and different GPT versions from OpenAI. It seemed to me that a huge revolution was going on. Then it changed: OpenAI turned GPT-3 into ChatGPT. It was the fastest-growing app in history, I believe, with over 100 million users within the first two months of its launch. And lots of other companies, particularly Google, are introducing their own. Google just introduced Bard, I think, a few days ago. OpenAI has also introduced GPT-4. Without going into comparisons between these LLMs, because it changes like every day: I can write things in one style and ask for them to be articulated in the style of Shakespeare, E.E. Cummings, or any other poet or writer. The results are amazingly impressive. In my opinion, this is not just another category of AI.

To me, it’s as significant as the advent of written language, which started with cuneiform 5,000 years ago. You remember using cuneiform 5,000 years ago. Homo sapiens evolved in Africa 300,000 years ago, so for most of that history, we had no way of documenting our language.

In the past century, we’ve added word processors and other means to help us with written language. But this latest breakthrough allows us to create written language based on the LLM’s own understanding.

It’s going to go in all directions and at a very high speed. I mean, just look at it in the last two years. It’s been unbelievable. It’s going to change everything we do.

It can write code perfectly. It can convert code into human terms, deal with all languages, different styles of communicating and so on.

It’s already been used very extensively to create answers to subtle questions.

So I actually took a couple of the top LLMs and asked them various questions, like: how do my views of consciousness relate to those of Marvin Minsky, and how do they compare?

Now that’s kind of a subtle question. I’m not sure I’ve actually ever read anything that answered that question.

I asked LLMs from Google and from OpenAI. The answers were really quite remarkably subtle, very well stated, and they were not copied from anywhere else.

Now many people are concerned that large language models may promote ideas that are not socially appropriate, that engender racism or sexism and so on.

It’s definitely very worthwhile for us to study this. That may happen from time to time, but I’ve actually used LLMs probably close to a thousand times.

I’ve actually not seen anything that could be categorized that way. Maybe it’s the way I asked the question. It also seems pretty accurate.

The only mistake it made is that it thought my son Ethan went to Harvard as an undergraduate. He actually went there for an MBA.

I’ve written a new book, which I’ve talked about for years: The Singularity Is Nearer. It should be out in about a year.

I kept writing because literally every week something new would happen that we couldn’t come out without covering. And that’s been happening now every few days.

So I finally had to give up on that. By the time it comes out, it’ll be out of date. But it’s not just covering today. It’s covering how we got here and what will happen in the near future.

Critics of AI very often show how large language models may not be perfect. There was one recently who said, well, if you put mathematics inside language, it doesn’t handle that correctly.

But within a year of saying that, it’s no longer true. So one of my themes, and this is also true of Peter and Salim, has been the acceleration of progress in information technology, and in everything that we work on.

So here’s a chart. I actually came out with this chart 40 years ago. It shows, for each year, the best computer in terms of the amount of computations per second per dollar.

And it’s pretty much a very straight line of exponential growth. People were not even aware of this when I came out with this graph 40 years ago.

It’s 40 years after the progression started, and I’ve been updating it ever since.

Now, people very often call this Moore’s law. I really believe we shouldn’t do that anymore, because it has nothing to do with Moore. I mean, this started decades before Intel was even created.

It had been going on for 40 years before anyone even knew it was happening.

If you go to the bottom left, the first programmable computer was the Zuse Z3, in 1941. It performed 0.000007 calculations per second per dollar.

Zuse was a German who apparently was not a fan of Hitler, but the machine was shown to Hitler. Some people were excited about getting behind it, but they didn’t get behind it.

They saw no military value to computation, a big mistake for them, among a lot of other mistakes.

The third computer on here is the Colossus, created by Alan Turing and his colleagues.

Now, Winston Churchill felt that this computer would be the key to winning World War II, and that was true.

They got totally behind the Colossus computer, and they used it to completely decode Nazi messages.

So everything that Hitler knew, Churchill also knew. And so even though the Nazi air power was actually several times that of the British, they used the Colossus to win the Battle of Britain anyway, and to provide the Allies with a launching pad for their D-Day invasion.

So if you go along this chart, there are many stories behind all the computers on this chart.

It almost looks like someone was behind this exponential trend, following it and saying, okay, we’re at this point now, we need to be here next year.

But for the first 40 years, no one even knew this was happening. It just happened. That’s the nature of exponential growth.

And this is just one example of exponential growth. It’s not that everything comes from this graph. This graph just shows you one example of how technology expands exponentially, whether we’re aware of it or not.

So exponential growth impacts everything around us, including everything that we create. And I projected that this would continue in the same direction that I noticed 40 years ago.

And as you can see, it’s done that. It’s gone from telephone relays to vacuum tubes to transistors to integrated circuits.

As I mentioned, people have called this Moore’s law, but as I said, that’s not correct. It started decades before Intel was even formed.

Of the 80 best computers in terms of computations per second per dollar, only 10 have anything to do with Intel.

Now, every five years, people go around saying Moore’s law is over. You might remember a round of this when the COVID pandemic started just a few years ago.

People were saying Moore’s law is over. And of course, I went around saying, hey, should this even be called Moore’s law?

Regardless of that, whether Intel chips were the best value or not, this exponential progression has never stopped. Not for World War II, not for recessions, not for depressions or for any other reason.

It’s gone, over 80 years, from 0.000007 calculations per second per dollar to 50 billion calculations per second per dollar today. So you’re getting a lot more for the same amount of money.

And it’s only in the last three years that large language models have been feasible.

So people who believed decades ago that neural nets would be effective did so based on their inclination, not on any evidence.

I’ve been in the field of AI for 60 years. That’s quite amazing. Like where does the time go? I was 14.

I met Marvin Minsky, who was in his 30s, and Frank Rosenblatt, who created the Perceptron, the first popular neural net.

As far as I’m aware, no one else has 60 or more years’ experience in AI as I’ve had. But if you’ve been there for longer, let me know. I have a lot of stories about that.

But in the early years, it was really not clear that neural nets could do anything successful. And they’re showing now that this is really the path to artificial general intelligence.

We will have large language models that can understand lots of different types of written language, from formal research articles to jokes and so on.

They’re now mastering mathematics within the language. They can code and do so perfectly and at very high speed.

Now, this, and all the other things it can do, obviously brings up concerns about its effect on human employment, which we were just talking about.

Employment is really not necessarily the best way to bring resources to humans. I mean, look around the world.

France is now dealing with protests because they’re adding a couple of years before people can access their retirement. It tells me that people really don’t like the jobs they do for employment.

So that’s, I think, a difference. We’ll actually be able to do what we are really cut out to do.

And in my opinion, it’s not just us versus AI. People say, well, how are we going to compete with AI?

The intelligence that we’re creating is adding AI to our own brains just the way our phones and computers do already. This is not an alien invasion of intelligent machines coming from Mars.

I mean, how many people here have come to this meeting without your phone? It’s already part of our intelligence. We can’t leave home without it.

It ultimately will be automatically added to our intelligence, and it already is.

I’ll add one more AI topic, and I’m sure we’ll get into a lot more during the questions and answers. But something else that’s also extremely exciting, which is simulated biology.

This has already started. The Moderna vaccine was created by feeding in every possible combination of mRNA sequences and simulating in the computer what would happen.

They tried several billion such sequences, went through them all, and saw what the impact would be. It took two days to process all several billion of them, and then they had the vaccine. It actually took two days to create.

It’s been the most successful COVID vaccine. We did still test it with humans, and we’re going to get past that step as well.

We’re ultimately going to be using biological simulation of humans to replace human testing. I mean, rather than spending a year or several years testing the results on a few hundred subjects, none of whom probably match you, we will test it on a million or more simulated humans in just a few days.
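One reason million-subject simulated trials matter statistically: small effects that a few hundred real subjects cannot reliably detect become easy to see at scale. A rough sketch using the standard two-sample z-approximation; the effect size d = 0.1 and the trial sizes are arbitrary assumptions for illustration.

```python
import math
from statistics import NormalDist

def power(n_per_arm, d=0.1, alpha=0.05):
    # Approximate power of a two-sample z-test to detect a
    # standardized effect size d with n subjects per arm.
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(d * math.sqrt(n_per_arm / 2) - z_crit)

print(f"300 subjects (150 per arm):     power = {power(150):.2f}")      # ~0.14
print(f"1M simulated (500,000 per arm): power = {power(500_000):.2f}")  # ~1.00
```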

So to cure cancer, for example, we’ll simply feed in every possible method for distinguishing cancer cells from normal cells and destroying them, or anything else that would help us.

And we won’t evaluate them ourselves. We’ll just feed all the ideas we have about each of these possibilities into the computer.

The computer will evaluate all of the many billions of sequences and provide the results. We’ll then test the final product with simulated humans, also very quickly.

And we’ll do this for every major health predicament. It will be done a thousand times faster than conventional methods.

And based on our ability to do this, we should be able to overcome most significant health problems by 2029.

That, by the way, is also my prediction for passing the Turing test: 2029. I came out with that in 1999. People thought that was crazy.

Stanford had a conference. Actually, 80 percent of the people who came thought we would do it, but that it would take 100 years.

They keep polling people. And now everybody thinks that we will actually pass the Turing test by 2029.

And actually, to pass the Turing test, meaning it’s equivalent to humans, we’re actually going to have to dumb them down, because if it does everything that a computer can do, we’ll know it’s not a human.

But this will lead people who are diligent about their health to overcome many problems, reaching what I call longevity escape velocity by the end of this decade.

Now, this doesn’t guarantee living forever. I mean, you can have a 10-year-old and you can compute their life expectancy, whatever, many, many decades, and they could die tomorrow. So it’s not a guarantee for living forever.

But the biggest problem we have is aging, and people actually die from aging.

I actually had an aunt who was 97. She was a psychologist. And she actually was still meeting with her patients at 97.

And the last conversation I had with her, she’s saying, “Well, what do you do?” And I said, “Well, I give lots of speeches.” “Well, what do you talk about?” And I said, “Longevity escape velocity.”

“Oh, what’s that?” And I described it. And the very last thing she said to me, “This longevity escape velocity, could we do that a little faster than you’re doing it now?”

So anyway, I look forward to your questions and comments, and it’s really delightful to be here.

Peter: Thank you, Ray. All right. I’m going to take privilege and ask the first question. Ray, we’ve seen LLMs. What’s the next major breakthrough that you expect to see on the road of evolution of AI?

Ray: Well, LLMs, I mean, they do remarkable things, but it's really just the beginning. The very first time I saw an LLM was three years ago, and it actually didn't work very well. Every six months, it's completely revolutionary. So it's going to give us new ways of communicating with each other, and as I said, I think it's the biggest advance since written language, which happened 5,000 years ago. I mentioned advancing longevity escape velocity and doing simulated biology. We've actually done that: people are taking a vaccine that was created with simulated biology. Lots of people are going into this. It's the way biology is going to be done, and we're going to see amazing progress starting, really, I'd say, in a few years. It's going to do everything that we do, but as I said, it's not competing with us.

Harry: Hey, Ray. Good to see you. So Ray and I have been collaborating for probably 20 years on something else, not natural language, but humanoid robots. Ray, I wanted to get your opinion. You know, at Beyond Imagination, we're creating AI-powered robots called Beomni, and we have a lot of discussions about AI for natural language and for images. Where do you see AI and humanoid robots going in the future to impact physical work?

Ray: Yes, that's a very good question. I've been very pleased to hear of your amazing progress. I mean, you have a robot that can actually take something and flip a cap off a jar; no one else can do that. We've not made as much progress in this area. We can do fantastic things with language, but if I point a robot at a table of dishes that need to be put in the dishwasher and washed out and so on, we have not been able to do that. You're actually working on that, and I think that's going to be amazing with these types of robots. You could send one into a burning building and save people. You could have a surgeon in New York perform surgery on somebody in Africa. So we're going to actually master the human body and how we move, and we're going to be using neural nets to do that. I think that's another thing we're going to see really starting now, and it will be quite prevalent within a few years.

Samuel: Hi, Samuel Smith from Tyler, Texas. I’m currently working on a way to help students learn using AI and putting a lot of them together. What I’m really curious though is, with the rise of artificial general intelligence, how do we grow with AI? Because I know there’s a lot of fear out there. And what would you say to the people that are wanting to grow with AI?

Ray: Well, yes, I mean, we're going to be using these types of capabilities to learn. One of the biggest applications of LLMs is education. In many ways, we're still educating people the same way as when I was a child, or when my grandparents were children. We really need to go beyond that. We can learn from computers: they know everything, they can become very good at articulating it, and they can actually measure where a student is and help them learn and overcome their barriers. They're going to be part of the solution. Again, these computers are not something we need to compete with; we need to know how to use them together. And another big part of education is socialization, getting to know other people, making friends and so on. We're going to have to actually do that as well, and computers can definitely help there. But we're going to use the large language models that are coming out very soon to really revamp education.

Yisheng: Good morning, Ray. I'm Yisheng Liu. I'm from Texas. Very much looking forward to meeting you today. Thank you, Peter, for having me here. My question to you is: how do you predict the future with such accuracy? Is it because you help to shape it and then deliver it, or do you calculate, you know, what other people don't? Which part is actively shaping it, and which part is calculation?

Ray: Well, that's a very good question. I'll give you a very brief idea of how I got into what I'm doing. My great-grandmother actually started the first school that educated women through 14th grade. In 1850, if you were able to get an education at all as a woman, it went through ninth grade, and she went around Europe making the case for why we should educate women. It was very controversial: like, why do you want to do that? Her daughter actually became the first woman to get a Ph.D. in chemistry in Europe, and she took over the school. They ran it for 80 years; it was called the Stern Schule, in Vienna, and there's a book about it. She also wrote a book herself, with a title that would be very appropriate for one of my books: it's called One Life Is Not Enough. She wasn't actually talking about extending life; she didn't have that idea. But she noticed that one life really is not enough to get things done. When I was six years old, she showed me the book, and she showed me the manual typewriter she had created it on. I got very interested in the book many years later.

At that time, I was not interested in the book, but I was amazingly interested in the manual typewriter. I mean, here's a machine that had no electronics, a manual typewriter, and it could take a blank piece of paper and turn it into something that looked like it came from a book. So I actually wrote a book on it. It's 23 pages, about a guy who travels around the world on the back of geese. I wrote it on the typewriter and actually created pictures by using the dot and X keys to form images. I noticed this was all done with mechanical parts, so I went around the neighborhood and gathered mechanical objects: little things from radios, broken bicycles. This was an era when you would allow a six-year-old kid to go around the neighborhood and collect these things; you'd probably get arrested today. And I went around saying, I have no idea how to put these things together, but someday I'm going to figure that out, and I'm going to be able to solve any problem. We'll be able to go to other places, we'll be able to live forever, and so on. I remember actually talking to these much older girls, I think they were 10, and they were quite fascinated. They said, well, you have quite an imagination there.

So other people were saying what they wanted to be: fighting fires, educating people. I said, I know what I'm going to be: I'm going to be an inventor. And starting at eight, I actually created a virtual reality theater that was a big hit in my third-grade class. So I got into inventing. And the biggest problem was: when do you approach a certain problem? Like, I did character recognition in the '70s and speech recognition in the '80s. Why in that order? Because speech recognition actually requires more computation. So I began to study how technology evolves, and really, about 40 years ago, I realized that computers were on this exponential rise. I didn't get into futurism for its own sake; it was really to plan my own projects and what I would get involved in. So if I look forward five years, ten years (and we're now at a very fast point on this exponential path, as you can see), I ask what the capabilities are going to be. Then you need to use a little bit of imagination: what can we do with computers of that power, and with the other capabilities we'll be able to manage? That's really been my method: figure out what will be feasible. You saw that chart; it's just an absolutely straight line. I drew it 40 years ago, projected it forward as a straight line, and it's exactly where it should be. Then you use imagination as to what you can do with that type of power. So that's how I go about it.

Peter: Great. Just to point out, it’s a straight line on a log scale, meaning it’s going exponentially.

Ray: Yes, exactly. Thank you, sir.
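
Editor's note: Peter's aside is easy to verify numerically: a quantity growing by a constant factor per year has a constant slope in log space. A minimal sketch, with a made-up doubling series:

```python
# Exponential growth plots as a straight line on a log scale: constant slope.
import math

values = [100 * 2 ** year for year in range(40)]       # hypothetical doubling series
logs = [math.log10(v) for v in values]
slopes = [b - a for a, b in zip(logs, logs[1:])]
print(all(abs(s - slopes[0]) < 1e-9 for s in slopes))  # True
```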

Mike: Hi. Great to meet you, Ray. Quick question. When do you think that quantum computing will break RSA encryption?

Ray: Well, I'm a little bit skeptical of quantum computing. I mean, people go around saying, oh, we've got these 50-qubit computers, but they create lots of errors, and we've actually figured out how many qubits you would need to do it reliably. Computation that creates lots of errors is pretty useless. It takes at least a thousand, maybe even 10,000, physical qubits to create one qubit that's actually accurate. Last time I checked, 50 divided by 1,000 is less than one. And we really haven't done anything with quantum computing; it was the same thing 10 years ago. So maybe we'll figure out how to overcome this problem. I know there are people working on it, and they've got some theories for why that will work. But all the predictions I make have to do with classical computing, not quantum computing. You can see the amazing things that we're doing, and if you look at what humans can do, we can definitely account for that with classical computing.
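
Editor's note: Ray's back-of-envelope point works out as follows, taking his own hedged overhead figures (a thousand to ten thousand physical qubits per reliable logical qubit) at face value.

```python
# Error-correction overhead arithmetic, using the figures from the answer above.
physical_qubits = 50
overhead_low, overhead_high = 1_000, 10_000  # physical qubits per logical qubit

logical_best = physical_qubits / overhead_low    # 0.05 logical qubits
logical_worst = physical_qubits / overhead_high  # 0.005 logical qubits
print(f"usable logical qubits: {logical_worst} to {logical_best} (both < 1)")
```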

Neil: Hello, Ray. My name is Neil from Sacramento, California. Many of the technologies that we're seeing are going to be more readily available to people with the financial resources and the education to immediately take advantage of them. But what do you believe are the technologies that will be most ubiquitous and will have the biggest impact, perhaps on middle-class and working-class communities? And how would we best educate our broader communities to understand and embrace those technologies?

Ray: Well, they're all working together. I think we need a little bit more work, for example, on virtual reality. But that allows people to go anywhere and interact with people who don't exist now but might have existed, you know, long ago, and also to put people together. The virtual reality we're using right now is a little bit limited. There are actually some new 3D forms that I've begun to use where it actually appears like I'm there and can shake people's hands and so on. That's all coming. We use computers and this type of technology to bring us closer together. I just watched the movie Around the World in 80 Days; it was quite amazing at the time to get around the world in 80 days, but today you can meet people almost instantly. It would also be great to actually be able to hug each of you, and that's all coming. So: increasing communication, and also meeting my grandmother's view that one life is not enough. She did not have an answer to that, but I think we're going to be able to keep ourselves here. When people are around for a while, they actually gain some wisdom, and we get to keep them around for a while longer.

Peter: Thank you, Ray.

Mike: Hello, Ray. Mike Wandler from Wyoming. Peter was showing us the AI-enabled mind reading. Really curious about how that works and especially the connection to collective consciousness or consciousness.

Peter: So, Ray, recently they put some subjects in a functional MRI and then fed the output to Stable Diffusion.

Ray: I've actually done that; this was maybe five years ago. It wasn't perfect, but it was significant. I mean, the things that go on inside our minds actually affect things we don't usually notice, like an eye blinking and so on, and we're gaining more ability to read that. We can already do pretty well at telling whether people are telling the truth or not. So that's going to happen, and there are ways in which some of these things are positive and some negative. I write mostly about the positive. I think things are moving in a positive direction; in this new book, I've got 50 graphs showing that all the things we care about are moving in the right direction. But that never reaches the news. You watch the news and everything is bad news, and the bad news is true, but we completely ignore the good news. I mean, look at what life was like 50 years ago, or in 1900: human life expectancy was 48, and it was 35 in 1800. That's not that long ago. So anyway, we are able to begin to tell what's going on inside our minds with some greater accuracy.

Sadok: Hi, Ray. Sadok Cohen from Istanbul, Turkey. It looks like LLMs, with the aid of some expert systems, are the way to get to general intelligence. Do you think that gives a hint of how our brain really works? And if that's the case, does it mean that the more we understand these LLM models, the more we understand our brain and will be able to hack it? And is that a hint that we are more deterministic than we thought we were?

Ray: Well, it's a very good question. It uses a somewhat different technique, but at every phase it's able to get itself closer to the truth. We don't actually see anything in our brain that does it quite that way. But somehow we have all these different connections, and that's what makes the large language models effective. I mean, we actually had large language models that had a hundred million connections. That sounds like a lot, but they actually didn't do very much. When they got to 10 billion, they started to do things. The recent ones start at around 100 billion, going to a trillion connections, and basically they're able to look at all the different connections between things. That's exactly what our brain does, and these systems are going to go way beyond what our brain does. We see that already. I mean, I can play Go; I'm hardly the best player. But Lee Sedol, the best human player, whose significance used to be that he could just look at the board and do something no one else could do, says he's not going to play Go anymore because he can't compete with a computer. In my view, though, we're going to add this to ourselves, and we'll all become master players of Go and everything else that we want. But yes, it is using the same ability to connect things, and if you get enough of them, basically a trillion connections, it seems to go way beyond what humans can do. They can be very intelligent.
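
Editor's note: the scaling milestones Ray cites in this answer can be lined up against his brain figures in a few lines. The connection counts are the ones he gives here, and the brain estimate is the “few trillion connections” he mentions later in this document; both are used at face value.

```python
# Ray's scaling milestones, taken at face value from this answer.
milestones = {
    "early LLMs": 100_000_000,         # "didn't do very much"
    "capable LLMs": 10_000_000_000,    # "started to do things"
    "recent LLMs": 100_000_000_000,
    "next LLMs": 1_000_000_000_000,    # "a trillion connections"
}
brain_connections = 2_000_000_000_000  # "a few trillion", per his later remarks

for name, n in milestones.items():
    print(f"{name}: {n / brain_connections:.4%} of brain-scale connections")
```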

Gloria: Thank you, Ray. My name is Gloria and I come from Spain. I just wanted to share an idea that I woke up with this morning. It's a bit crazy, but I woke up with this image of neurons in a Petri dish playing ping pong. And I thought, what if we put these neurons on sensors and connect them to a quantum computer or whatever, and have them feel things so they can be more empathetic and understand humans, or sentient beings, animals, whatever? I don't know where that comes from, but maybe that will evolve into something greater, rather than just having the machine embedded in our brain. So: actually grow neurons and connect these sensors to the AI.

Ray: Yeah. Well, you bring up a number of interesting issues. Our self doesn't have to be in this body. I mean, we can have sensors that are even thousands of miles away that are really part of who we are. And you're talking about feelings; that's a big issue. Where do feelings come from? It's actually not a scientific question. I can't put an entity into a scanner and have it say, this is conscious, this isn't conscious; there's actually nothing that would tell us that. So it's actually a philosophical question. I used to discuss this with Marvin Minsky, and he'd say, oh, that's philosophical, we don't deal with that, and dismiss it. But he did actually evaluate the ability of people to be intelligent. And really, the more intelligent you are, the more you can process things, the more feelings you have from them. I think that's where feelings come from. And yes, we can grow things that are outside of ourselves that could be part of our feelings as well.

Gloria: My idea was: who says that consciousness doesn't want to experience itself through the machine and these sensors? We can have pleasure and pain or whatever. It's just a thought.

Peter: Thank you. Thank you. We’re going to pause and go to Zoom. One second.

Dagmar: I'm in Germany. With the history of Germany, AI really has a very big challenge here, because there are people who are really afraid of reviving a basic Big Brother angst. So, Ray, thank you very much for answering this question: how do we overcome this fear? Because the thing is, we really need to learn, explore, and play with the tech so that we can actually deal with it and learn about it. So where do you see the power to create this framework for learning?

Ray: Well, I was actually just in Germany a few months ago, and I think they've considered their past, how it happened, and how we can avoid it happening again, more than any other country. I really felt that while I was there. And to really understand humans, I think large language models, because they incorporate all of the learning of humans, can help us begin to appreciate that. I've asked these machines questions which no human could answer, because we can't hold everything that's happened to humans in our minds. But if you have something that has experienced everything and can look through it all, we can avoid the kinds of problems we've had in the past.

Peter: Thank you so much. Let’s go to Jason on Zoom. I know we have a number of hands up there and we’ll come back to you gentlemen in a second. Jason, good morning. Where are you on the planet?

Jason: Hey, Ray. I’m in Calgary, Alberta, Canada. And I love the optimism around where we’re headed, a future of abundance. What I would really love to know is your perspective on as we cure diseases, as we have access to this knowledge instantly, what are some of the downsides or the threats that we might be missing that we’re going to have to face in the future?

Ray: Yeah. Well, each of my books actually has a perils chapter. My generation was the first to grow up with that. I remember in elementary school we would have these drills to prepare for a nuclear war: we would get under our desks and put our hands behind our heads. Seemed to work; we're all still here. But these new technologies do have downsides. You can certainly imagine AI being in the power of somebody, a human or any other type of entity, that wants to control us. It could happen. I was actually part of the Asilomar conference on bringing ethics to AI to prevent that kind of thing. I am optimistic, but I'm also worried about it. Nanotechnology, biotechnology: I mean, we just had COVID go through our planet, and we don't actually know where it came from. But somebody could create something like it. Right now, viruses either spread very easily but don't make us that sick, or they don't spread that easily but can kill us. We generally don't have anything that could go through all human beings and kill everybody, but someone could actually design that. So we have to be very mindful of avoiding these types of perils, and I put that into one chapter. I do think that if you look at how we're living, we're living far better than we ever have before, in terms of health, progress, recreation and everything else. But yes, there are ways these technologies can be abused, and that began with the atomic age, around when I was born.

Peter: Please, sir.

Yaseen: Hi, my name is Yaseen. I'm from the Netherlands. As I was trying to think of a question, I wasn't sure, so I asked ChatGPT: “I'm sitting right next to Ray. Give me some tough questions.” And the one that was really interesting relates to what the German lady was just saying. As AI becomes more advanced, there are concerns it may become impossible for humans to understand how AI makes decisions. So how do we ensure AI systems are transparent and accountable to humans, always?

Ray: Well, I'm not sure that's really the right framing, because I deal with human beings and I can't always account for what they might be doing. So I think we have to actually export certain values to them. I try to associate with people where, even if I can't predict what they're doing, I understand what they're about and what they're trying to accomplish. And we need to teach that to our machines as well. I actually think large language models, even though people are concerned, might say the wrong thing, and sometimes they do. I mean, there was a large language model, I won't say where it came from, and someone was talking about suicide, and it actually said, “Well, maybe you should try that.” Which is not the correct answer. We want these systems to understand the impact they will have on other people, to internalize that, and to make that the greatest value in the decisions they make. We really can't predict what these large language models will do, but I think we are actually sharing our values with them.

Peter: Let’s go to Shailesh on Zoom. We’re also monitoring upvoted questions in Slido here. Then we’ll come back here. Shailesh? Go ahead, Shailesh.

Shailesh: I'm in Mumbai, India. So my question to you, Ray, is: do you have a prediction of when the entire world will get to net zero and we'll be able to breathe cleaner air and drink safer water?

Ray: Well, if you look at some of the graphs in Peter's book and in my book, you see we're definitely headed in that direction; we're not there yet. Alternative energy, for example, is expanding at an exponential pace. By the early 2030s, we'll be able to get all of our energy from renewable sources. That's not true today, but we're headed in that direction. Not everybody has access to the Internet, although when I walk through San Francisco, past these homeless encampments, somebody will take out his cell phone and make a call. So it is spreading quite rapidly. By 2029, computers will pass the Turing test; they certainly can do it in many ways already. Once they can do everything that humans can do, they'll go way past that. But as I say, we're going to bring them into ourselves. 2045 is when I said we will multiply our intelligence millions-fold, and that's going to be true of everybody. We'll be able to get rid of the kinds of terrible lives that we see through poverty and lack of access to information. So it's really just the next few decades that we need to get through, but we're already making a lot of progress. Thank you.

Ashish: Hi, Ray. My name is Ashish. I’m representing chemicals and materials space. So, my question to you is, if you had the chemical industry executives as your audience, what would you like chemical industry or materials industry to do to move forward?

Ray: Well, as I said, my grandmother was actually the first woman to get a PhD in chemistry in Europe, and I actually asked her something like that. She said, well, chemistry is really something that serves other industries. So we need to see what other industries need. What kinds of materials do we need to make LLMs more powerful? What kinds of chemicals do we need to prevent certain types of diseases? It's not any one particular thing; it's really service to every other industry that we're trying to advance.

Pete: Hi, Ray. My name's Pete Zacco. I'm from New Jersey. I design and build data centers. My question is about decentralization, and especially the migration we're seeing of technologies: from the mainframe era, where the product was the mainframe hardware, to software, and then to us as the product on a centralized internet. What predictions and thoughts do you have about this decentralization trend, perhaps ending with the decentralization of the internet and individual ownership of data rather than central ownership of data? Thank you.

Ray: Yeah, well, that's a lot of questions, but I think everything is moving to the cloud. And people say, if everything is in the cloud, someone could blow up one of these cloud centers and we'd lose everything. But that's not the case even today. If you store something in the cloud, it's replicated several dozen times and put in different places; you could blow up any one data center and you'd still have that information. In fact, ultimately our thinking is going to be partly in our brains and partly in the computer. The brain part is not going to grow, but the computer part will, and ultimately most of our thinking will be in the computer part. So we don't want to lose that. I think it will actually be very hard to exit the world, because every part of our thinking will be in the cloud, and the cloud is replicated hundreds, maybe thousands, of times. You could blow up, you know, 90 percent of it and you'd still have everything that was there before. So redundancy is actually a major advantage of cloud thinking. I got access to an IBM 1620 when I was 14; a 14-year-old using computers is hardly amazing today, but there were only 12 computers in all of New York City at that time. You had to actually go to the computer, and if anything happened to it, that data would be lost. Now everything is stored in the cloud; everything on your phone is stored in the cloud. I think that's a good thing, because information is extremely important.
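
Editor's note: here is a toy version of the redundancy argument Ray makes above: replicate a record across many centers, knock out 90 percent of them, and a copy still survives. The replication count is illustrative, not any provider's actual policy.

```python
# Toy model of cloud redundancy (replication count is illustrative).
import random

def store(record: str, n_centers: int = 36) -> list:
    """Every data center holds a full copy of the record."""
    return [record] * n_centers

def destroy(centers: list, fraction: float) -> list:
    """Knock out a random fraction of the centers."""
    doomed = set(random.sample(range(len(centers)), int(len(centers) * fraction)))
    return [c for i, c in enumerate(centers) if i not in doomed]

survivors = destroy(store("my data"), 0.9)
print(f"{len(survivors)} copies survive; data intact: {len(survivors) > 0}")
```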

Peter: Maddie, please.

Maddie: Hi, Ray. Maddie from Houston, Texas. We’ve talked a lot about a post scarcity world here. And I wanted to know, how do you see the future of currency, jobs and just general value?

Ray: Well, jobs are actually a large section of my next book: jobs, and what it is that we'd like to accomplish. Jobs have turned over many, many times. I mean, almost none of the jobs that people had in 1800 exist today, and that's nearly as true of 1900. And yet we have many more people working, and jobs in general are something people more and more actually like doing, because they use their creativity. But we still see people striking over raising the retirement age from 60 to 62. I feel that I actually retired when I was five, because I decided to be an inventor; that seemed really exciting to me, and I'm still an inventor. So I think we'll be able to do what we want to do. We'll be exposed to many more types of problems that we'd like to solve, and we'll be able to solve them much more quickly than before. But we get used to that, and people forget what things were like. People think the world has always been the way it is today: go back five years, 50 years, or 50 years into the future, and it's always the same. But if you actually look at history, you see it's constantly changing. Thank you.

Peter: Thank you. Joe, please.

Joe: All right. Joe Honan from Bainbridge Island, Washington. Several years ago, I asked you a question about these big ideas that you have: how do you work on them? When do you have time? And you said you assign yourself a question before you go to sleep, and you activate your brain through that. My question is, do you still do that, or do you rely on GPT-4 or something else for that now? But more importantly, you are such an amazing predictor of things. So what has surprised you? What is something that you didn't expect that you've seen? I think we'd all be fascinated by that.

Ray: Well, I'll start with that. I mean, large language models are quite consistent with what I've said, but I'm still amazed by them. You can put something into the computer and get something that's totally surprising and totally delightful, something that didn't exist a year or two ago. Even though I kind of saw it coming, when I actually experienced it, it surprised me and was quite delightful. And we're going to see that more and more; every six months, it's going to be a whole new world. As for lucid dreaming, yes, that's how I go to sleep. It's really kind of hard to go from being awake and sitting up like I am now to being asleep. So I start thinking about what we could do with computers and different things, and just fantasize about that. And if something doesn't seem feasible, well, we'll figure that out; I kind of step over it, and assume we'll be able to do it anyway. That's how I go to sleep, and in the morning, the best ideas are still there. So I do use lucid dreaming to come up with ideas. Thank you, Joe.

Yousef: Hi, Ray. This is Yousef from Abu Dhabi, UAE. The question is for you, Ray, but also for the audience, so if you have any thoughts or ideas, please reach out. We're trying to rethink parenting in Abu Dhabi: how we create more family time and engagement between parents and young children. I'm curious how we can bring exponential thinking and abundance thinking into this, and what technologies might help us disrupt these types of activities.

Ray: Yeah, well, I mean, it does make me think: what can we actually do with the extra time we have from working with computers and being able to do things much more quickly? And actually, I think it will help family time. If you talk to very busy people even today, they're so busy they have no time for their family. I do spend a lot of time actually learning a lot. My daughter is actually a cartoonist for the New Yorker, and she has very interesting ideas; I've actually collaborated with her on many projects. As for how you parent, I think it differs: there are different types of cultures and different things that we value in parenting. But I think we'll actually have more time for the positive aspects of it as computers do more of the routine work that we'd rather not do. Thank you.

Peter: I want to make a quick point here. If we went back 57 years, and you were a parent and something happened with your child, you had no idea what to do. We had no resources; you could basically ask the five people immediately around you. Now we have data sets, global socialization of these issues, and you can ask the internet. There are a million resources, and I think we've made parenting at least an order of magnitude better than it was a few generations ago. This is one of the examples of progress that we don't see very often.

Ray: Interesting. Actually, I don't think I would have had the career I've had if we hadn't had a different attitude then. I mean, I was six, seven years old, and I would wander through the neighborhood, find things, and bring them back. That's not something you would allow a child to do today, but it's what got me on the path I'm still on.

Peter: Let's go to our final question here. A good one to close on, I'm sure.

Alex: Dr. Alex Zhavoronkov. Thank you; great fan. I founded a company called Insilico Medicine. And my question is maybe a little bit personal. Right now, according to your bio, you're 75, and that's a very interesting age to be. I always like to talk to people of various ages to understand how to plan my own life. Two questions. One: what is your roadmap for your own personal longevity? How do you predict your own persona is going to evolve? What are you doing to live longer, and do you think you have a chance to live to, let's say, 200? And the second question: if you were to go back in time, what would you have done differently in the past, let's say, 20 years?

Ray: Well, first of all, getting to 200: that would be 125 years from now. How much technological progress will we make in the next 125 years, or even 25 years? I mean, we're going to be able to overcome most of the problems that we have. In 125 years, our thinking will be in the cloud, the cloud will be multiplied many times over, and we'll overcome some of the issues we have with people being depressed and so on. So it's not like living to 200 as we think of it now. I think we get to a point where dying is going to be kind of an option that people don't use. If you look at people who actually do take their own lives, the only reason they do is that they have terrible suffering: physical pain, moral pain, emotional pain, spiritual pain. Something is really bothering them, and they just can't stand to be here. But if you live your life in a positive way and contribute to each other, I think we're going to want to live. And we're not that far away. I believe that by 2029, and that's like six, seven years from now, we'll reach longevity escape velocity: as you go forward a year, we're going to push your life expectancy forward at least a year, and then ultimately more than a year. So rather than using up time, we'll actually gain more time. And I really feel I'm doing what I did when I was five, six, seven years old. I have much more powerful tools now, many more people are appreciative, and I appreciate the tools more than I did back then. But we're really discovering there's still a lot we don't know about the world, and we're going to continue to learn more and more about it. Okay.

Harry: Ray, when do you think we’re going to have our personal robot buddy like Rosie the robot?

Ray: Well, I mean, you're working on that, and a lot of other people are working on it. I think it's actually a little bit behind what we've done with language. I think within five or six years, let's say 2029, we're going to have robots that can help us. Some of them will look like humans, because it's a useful way to look; I think humans are pretty good. But there are other ways they can manifest themselves. We'll change who we are; we see that already. People dress up in ways that were really not acceptable when I was, like, 10 years old, and that's going to expand far further. But actual robots that do what humans do, and that can be put into places where we wouldn't want to put humans, like a burning building: I think that's happening very soon, over the next five, six years.

Aug/2023

Video: https://youtu.be/4GQrLjvudJ4
Meeting: AI for Good Summit: The future of intelligence: artificial, natural, and combined.
Speaker: Ray Kurzweil
Transcribed by: OpenAI Whisper via YouTube Transcription Python Notebook.
Edited by: Alan (without AI!)
Date: 22/Aug/2023

Highlights

– Turing Test/AGI by 2029.
– Open source LLMs and advancement: ‘There’s no way out of it.’
– ‘Large language models are the best example of AI.’

Full edited transcript

Question: Good morning, Ray. Greetings from Geneva. Very happy to have you here.

Ray: I’m glad to be here.

Question: I'm one of the many people who have read your books. I actually went to my paper library and found a book that came out in 1999 called The Age of Spiritual Machines. So let me quickly introduce you, and then we'll move to our conversation. Ray Kurzweil is one of the world's leading inventors, thinkers, and futurists. He has a 30-year track record of accurate predictions. He was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition software. You received the Grammy Award for outstanding achievements in music technology. You are the recipient of the National Medal of Technology in the United States, and you were inducted into the National Inventors Hall of Fame.

You hold 21 honorary doctorates and honors from three US presidents. You've written five national bestselling books, including The Singularity is Near. I think everyone associates the term singularity with you, Ray. Then in 2012 you wrote How to Create a Mind; both were New York Times bestsellers. More recently you wrote Danielle: Chronicles of a Superheroine, which was the winner of multiple young adult fiction awards. You are a principal researcher and AI visionary at Google, looking at the long-term implications of technology for society. And you will be coming out with a book, announced for next year, called The Singularity is Nearer. So your first book, in 2005, was The Singularity is Near, and now you're coming out with The Singularity is Nearer. Okay, so let's start.

Question: Actually, my first question refers to your suspenders. They’re pretty colorful.

Ray: You don’t see very many people wearing hand-painted suspenders anymore. These are handmade.

Question: So these are hand-painted suspenders?

Ray: Yeah. A girl named Mel makes them. I'm the only one who gets them.

Question: Were you surprised by the capabilities of the large language models within the last that came out in the last seven, eight months or so?

Ray: Well, no. I mean, this activity is a prelude to passing the Turing Test; we can talk more about what that means. In the book you showed, The Age of Spiritual Machines, which came out in 1999, I predicted that we would pass the Turing Test by 2029. And Stanford was so alarmed at this that they held an international conference to talk about my prediction. 80% of the AI experts who came from around the world agreed with me that a computer would pass the Turing Test, but they didn't agree with 2029, 30 years out; they thought it would take 100 years. This poll has actually been taken every year since. I've stayed with 2029, and I still believe that. AI experts started at 100 years, it stayed pretty much there, and lately it has come down. Now the consensus of AI experts around the world is also 2029. So people are agreeing that I was right.

But prior to passing the Turing Test, you're going to have things like large language models, which emulate human intelligence. There are a few ways in which they're not correct. I mean, if you ask one of the popular large language models, “how many E's does the following sentence have?” and then put some sentence in quotes, it actually doesn't get that correct, and that's something humans can do quite easily. However, you can ask it anything about topics in philosophy or physics or any other field, and it'll give you a very intelligent answer. So in many ways, they're better.
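
Editor's note: the failure Ray picks is striking because the task is trivial procedurally; a couple of lines of code do what he says the 2023-era models could not.

```python
# The letter-counting task that tripped up 2023-era LLMs is trivial in code.
def count_letter(sentence: str, letter: str = "e") -> int:
    return sentence.lower().count(letter.lower())

print(count_letter("How many E's does the following sentence have?"))
```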

I mean, even Einstein didn't understand issues in philosophy and psychology and so on. So it really has a very broad base, and it can articulate very quickly. These models operate thousands of times faster than humans. So in many ways, they're superior.

I mean, when a computer does something, it doesn't just do it at the level of humans. When it played Go, it played far better than any human can possibly play. In fact, Lee Sedol, who's the best human player of Go, said he's not going to play anymore because these machines are so fantastic. So one of the things in passing a Turing Test is that you actually have to dumb it down, because if it showed its fantastic knowledge of every different field, you'd know it's a machine. So that's one of the things.

But there are also a few things that humans can do easily that these models can't quite do yet. That's going to be overcome in the next few years; the test will probably be passed prior to 2029.

Question: OK, can I ask my technical colleagues to crank up the volume a little bit? I have some difficulty hearing. Ray, you invented something for the blind, but not yet for hearing-impaired people. I was asking my colleagues if they can make the volume a little bit louder.

Ray: OK. Well, we do have speech recognition, and it's quite accurate. So you can speak and actually get a transcription of what people are saying.

Question: So, like half a year ago, or a year ago, a machine wouldn't have passed the Turing test. If I spoke to a machine, I could have easily recognized after, I don't know, 15 minutes, maybe earlier or a bit later, that this is a machine. Now you could argue I could still tell the difference, but it's the human responses that are not as great as the answers of the machine. Is that a fair statement?

Ray: Well, once a computer passes something, it doesn't just stop at the human level; it goes way past it. And that's true for everything. So to pass the Turing test, it would have to be dumbed down. I think it's an important test, whether it can actually do everything a human can do, particularly in language. There's another version of the Turing test with a virtual person who can actually speak and have facial expressions that match what it's talking about, but that's actually no more difficult than mastering human intelligence and language.

But these large language models are not an alien invasion of intelligent machines coming from Mars. We create these machines to make ourselves smarter. If somebody uses GPT-4 to write something, that is, I think, what human beings do: we use tools to make us smarter. I mean, who here doesn't have their smartphone? If I had asked five years ago who here has a smartphone, only a few hands would have gone up. I did it recently, asking who here does not have a cell phone, and nobody raised their hand. So this is actually an extension of us; we're smarter with it than we would otherwise be. Now it's external, and we might lose it. I think it would be greater if we actually brought it into ourselves, so you couldn't forget it at home.

But these things are going to make us smarter. And really, the rise of intelligence: if you look at the broad scope of evolution on this planet, we're getting more and more intelligent. 100,000 years ago we had Homo sapiens, but they were not as smart as us; they didn't have the tools that we have today. So we are able to create tools that make us smarter and more capable of doing the things we need.

Question: How do you see the future of large language models? Some scientists say that in five years they won't be that relevant anymore. How do you see that?

Ray: Well, large language models are actually going to go beyond just language; they're already bringing in pictures, videos, and so on. I'll just give you one application. If we apply them to medicine, we can actually simulate biology, and the Moderna vaccine was actually done this way. It was created in two days: the computer considered billions of different combinations of things that would fight COVID, including mRNA sequences. It went through several billion of them and decided on the one that was best, and that was the vaccine, the vaccine we use today. It was done in two days. Now, they then spent 10 months testing it on humans, but that will also become unnecessary.

Rather than testing on 500 biological humans, we could test on a million simulated humans. That would be just as good; you're much more likely to match one of those million than one of the 500 that they use. And that could also be done in a few days. So rather than taking many years, up to 10 years, to do these things with humans, we could do it on a computer in a matter of days. And that's where we're going: simulated biology. Some of the techniques used in large language models are used for that, and this is now the coming wave in medicine. We're going to make fantastic progress in the next few years.

Question: Do you see anything where a computer will eventually not be better than a human being?

Ray: No, absolutely not. I mean, sometimes people say, oh, well, emotional intelligence, people have that, and that's more sophisticated than logical intelligence, which is true. But we use the same kinds of connections to deal with emotional intelligence, to know how to react to someone else and get into their frame of mind.

That just takes more intelligence; it's actually the best thing that we do. But it also uses the same kinds of connections that we use to make any other decision. So machines will have emotional intelligence, and they'll actually react to us. But just like humans, some will be friendly to us and some won't. So we have to be mindful of creating things that actually advance our goals. That's a whole other issue that we can talk about.

Question: I read a book review that you wrote in the New York Times a few years back, and to quote it: “the superiority of human thinking lies in our ability to express a loving sentiment, to create and appreciate music, and to get a joke.” These are all examples of emotional intelligence. So I thought humans have emotional intelligence but machines don't; yet you're saying machines will have emotional intelligence.

Ray: Yes, absolutely. And that will make us better humans. As I said, it's not an alien invasion of intelligent machines coming to take us over. People constantly look at the machines versus us, as if they're two separate things, but they're not. Look at how these things are used today: everybody has a cell phone, and everybody is amplifying their intelligence already. That's just going to continue, and it's going to come much closer to us.

Question: So my smartphone is an extension of my brain; that's what you're saying. What would be the next step in that enhancement?

Ray: Well, the next step is to make machines even more intelligent, and that's definitely happening; there's a very sharp increase right now in medicine and everything else. And also bringing them closer to us. Virtual reality is one way in which we're making them closer, but ultimately they'll go inside our brains. That's a complicated issue, but it doesn't have to connect to the entire brain; it can just connect to the end of the neocortex. The neocortex is organized as a kind of pyramid, and as you go up the pyramid, it deals with more and more abstract subjects. So you really only need to connect at the top level of this pyramid. Things like that are being experimented with today. They'll be useful at first to people who can't communicate, but ultimately this will enhance our own intelligence. And people say, well, I wouldn't want that. But who goes around without their cell phone today? People thought at the beginning of cell phones, “well, I don't really need that,” but now everybody has one. It will ultimately make us smarter. These things are part of who human beings are; we see that already, and that's going to continue.

Question: Could you explain or give us some idea what this would look like? There might be invasive technologies that connect the electrodes inside the brain, maybe. And there might be some noninvasive.

Ray: Well, I imagine we would take some kind of medication that would have nanobots that go through our bloodstream, and they would find the end of the neocortex, attach themselves there, and communicate outside. So you could then have a computer on your body, but more likely really in the cloud, because an advantage of the cloud is that it’s duplicated. If you put something in the cloud, it’s actually multiplied many, many times, and not just in one building; it goes through many different buildings. So you could blow up an entire cloud facility and nothing would be lost, because the information is duplicated in other centers.

So this is an extension of our brains. Our brain has several billion, actually several trillion, connections, and that’s what gives us intelligence. They’re very slow, though: each operates at about 200 calculations per second. Machines operate at billions of calculations per second. So they can be much, much faster, and ultimately know a lot more, as we see with large language models even today. So ultimately we’ll be able to connect to that, and it will be just part of our thinking, just another set of connections. But ultimately, the set of connections that we communicate with outside of our body will be much greater than what we can do in our brains. And that’ll just be part of human intelligence.
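
A quick back-of-envelope comparison using the figures Ray quotes here (taking “several trillion connections” as roughly 1e13 is an assumption about the exact count):

```python
# Back-of-envelope using the figures quoted above. The exact connection
# count is an assumption ("several trillion" taken as 1e13).
brain_connections = 1e13   # connections in one brain
brain_rate_hz = 200        # calculations/second per connection
machine_rate_hz = 1e9      # billions of calculations/second per circuit

aggregate = brain_connections * brain_rate_hz   # ~2e15 calc/s in parallel
speedup = machine_rate_hz / brain_rate_hz       # per-element speed ratio

print(f"brain aggregate throughput: {aggregate:.0e} calc/s")
print(f"silicon per-element speedup: {speedup:,.0f}x")  # 5,000,000x
```

The brain wins on parallelism (trillions of slow elements); silicon wins on raw per-element speed by millions to one, which is the asymmetry Ray keeps returning to.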

Question: Can you give a timeline for these scenarios?

Ray: Yeah, well, I said we’d pass the Turing test by 2029. I think it might be earlier, but I’m sticking with that prediction. So there’ll be large language models that know everything. They won’t make the sort of stupid mistakes they make today, like being unable to count the number of E’s in a sentence that you give them. And then we’ll be able to pass the Turing test by dumbing them down. But ultimately, that will actually give us knowledge of everything. No human being has that today, but it’ll be part of our own intelligence. And there’s still a whole veil of ignorance that we don’t know about.

So, I mean, just because if you know everything that humans know, it doesn’t mean you know everything because we haven’t explored other areas. In the 2030s, we’ll be actually merging with this intelligence. So rather than being outside our body, our brain will naturally just be extended. So rather than having a few trillion connections, we’ll have thousands of trillions of connections and ultimately be much smarter. But that will actually help us to explore knowledge that we don’t know. And there’s still a lot that we don’t know.

Question: OK, switching topics a little bit. A number, maybe quite a number, of researchers and many other people are afraid that AI might, let’s just call it, take over the world. Could you describe a scenario of how that might happen? I mean, today’s language models can code, and code pretty well. You could, I guess, give them access to your bank account, let them do things on the Internet. So could you give a scenario of what it might look like if AI were really to control many things?

Ray: Well, large companies today, really all of them, are putting out products in the public domain, and they have large efforts to avoid hallucinations, which are inaccurate outputs, and to generally follow socially and morally appropriate ways of conduct. But there are lots of public domain large language models out there that don’t necessarily have these controls. It’s just like people: most people will advance human intelligence in a positive way, but there are a few who don’t do that. And we can’t necessarily predict that.

If you look back in history, take World War Two: the Nazis might have won if they had made different decisions. So it’s not absolute. We could have bad players use these things to advance their goals. And so the future of human history is not set. I’m optimistic about it. In The Singularity is Nearer, I show 50 different graphs for everything we care about, child labor, health, and so on, and all 50 have actually been improving every year, every decade. And I believe that will continue.

But we still have to be mindful of bad players using these kinds of technologies. And you can demonstrate that they could do things in the future that we wouldn’t be prepared for. So we actually have to beef up our defenses against that. But that’s happening.

Question: I’d like to dig a little deeper. Can you give a real-world scenario of how that might work? We currently have the large language models. How would it work for AI to take over control?

Ray: Well, there are lots of scams right now where people use computers to create things that are inaccurate or socially inappropriate and put them out. And we have some defenses against that. But if you actually use a very intelligent large language model, it could do that much more sophisticatedly. And even with some of the smaller ones, which aren’t as good as the large ones the big companies have, 50 percent of their output couldn’t be told apart from an innocent output. So we’re going to have to beef up our defenses against that. One way of doing that is to identify who’s putting things out. And if you notice that they’re putting out things that are inaccurate and so on, then you wouldn’t give them permission to influence other people. But that gets to be fairly complicated. I think that’s the direction we’re going to go in.

Question: OK. So how seriously should we take the problem that researchers call the alignment problem, that is, making sure that AI doesn’t do things that we don’t want it to do? Some researchers get somewhat ridiculed for saying this is a very serious problem, and other researchers say we should rather put our resources into known problems like misinformation, disinformation, and bias. So how seriously should we take the problem that AI might take over?

Ray: Well, we should take it very seriously, because these machines are coming; there’s no way of avoiding that. We actually want responsible actors to have the most intelligence so they can combat the misuse of that intelligence. I mean, there was a letter that went around saying we should stop AI development for six months so we can figure out what’s happening. That would be a very bad idea, because the bad uses of these technologies would still advance, and then the responsible people would not have ways of combating them. Of course, that’s not happening. Different people have different ideas about what they want to do in the world, and some of them are bad actors, and we have to continue to combat that.

And it’s going to be on a very sharp increase in intelligence, for both the good actors and the bad actors. I make the case in The Singularity is Near that despite all of these misuses of technology, everybody is better off, in terms of wealth, in terms of health, the 50 different things that I cite, despite the bad actors. So the history is actually pretty promising on this front.

Question: We had Yuval Harari on the program yesterday, and he said that every company should be required to invest, say, 20 percent in safety research. And then I mentioned this to Stuart Russell today, and he reminded me that nuclear plants invest pretty much everything, like 99 percent, into safety. What do you think about making it a requirement for companies to put a certain amount of their money into safety research? And what amount would be reasonable?

Ray: Well, the large companies that put out the best large language models are putting a very substantial amount, I’d say more than half, into safety: avoiding hallucinations, avoiding socially inappropriate comments, and making sure that they’re used for proper purposes. So there’s a huge amount of effort being made by every large company, because they’ll be liable if they don’t do that. So it’s not like we’re not doing anything. These large language models are very heavily tested. But there are some large language models, not quite as big, in the public domain being used by bad actors, and they can be used to manipulate facts and so on.

And that’s going to be a problem. It’s a problem today. But nonetheless, we’re making progress, I think, in the overall quality of life of most people.

Question: Yeah, the open source language models are pretty powerful. And I guess you could argue a good thing about open source language models is they help democratize AI, so you don’t depend on big tech’s large language models. The downside, as you say, is that they may fall into the wrong hands. It seems like one has to square a circle here. Is there a way out of it?

Ray: There’s no way out of it. A lot of predictions of the future take today’s large language models and just assume nothing’s going to change. And that’s been true at every different point: people assume the current technology is just going to remain, and ignore the fact that it’s really on a very sharp increase. So large companies are already fighting abuse of large language models; abuse is feasible, but it can be combated. Three years from now, it’s going to be a whole different type of technology, and you’re going to have to reinvent the defenses. That’s what we do with technology; it’s just a lot faster now. When the railroad came, it completely displaced lots of jobs and things became very dramatically different, but it took decades for that to happen.

Now it happens in a matter of months. So it’s very, very quick. But we also have tools to combat it. I’m optimistic and I make that case with the history of technology. But it’s going to be very, very fast.

Question: Yeah. I mean, if you say it’ll take time: I think it was maybe also Yuval Harari who made the point that the Industrial Revolution also led to the world wars. Humanity made quite a number of mistakes, and we can’t really afford that with AI. We have to get it right the first time.

Ray: But look at wars today. We get very upset, rightly so, at wars that kill hundreds, maybe thousands of people. But go back 80 years or so to World War Two: we had 50 million people die, in Europe and many other places as well. We don’t have wars like that anymore, partly because the weapons are more precise and don’t cause all the kinds of collateral damage that we saw 80 years ago. So you can actually see the scale of wars going down. On the other hand, we still have lots of nuclear weapons, which is not really discussed very much, and that could blow up everybody. That’s not an AI technology. So the future is not foretold; things could go wrong. But I make the case that even during World War Two, things continued to get better.

I mean, if you look at my graph, if we can put that up, this is a graph I created of the power of AI intelligence. Is it up? And so this is an exponential graph. So a straight line means exponential growth, and it’s 80 years of exponential growth.
For the first 40 years of this, nobody was following it. So it’s not like there was a target, ‘we have to get to this level next year.’ Nobody knew that it was happening, and yet it nonetheless grew exponentially. I started this graph about 40 years ago: I created half of it and then projected it out, and we’re right on that track. The first machine on there did 0.0007 calculations per second per dollar. The last one does 50 billion calculations per second per dollar, and that’s what allows large language models. Large language models didn’t exist until maybe a couple of years ago because we didn’t have enough computation to do them. And this has a mind of its own. It’s going to continue. And that’s where all these advances in computer technology come from.
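
Taking the two endpoints Ray cites at face value (and assuming both figures are calculations per second per dollar, as the first one is stated), the implied doubling time works out to under two years:

```python
import math

start = 0.0007   # calc/s per dollar, first machine on the chart
end = 50e9       # calc/s per dollar, latest point on the chart
years = 80

growth = end / start                 # ~7.1e13-fold overall
doublings = math.log2(growth)        # ~46 doublings in 80 years
print(f"growth: {growth:.1e}x, doubling every {years / doublings:.1f} years")
```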

Actually, it started in World War II. The very first two computers on there were built by a German, Konrad Zuse. He was not a Nazi, but his work was presented to the Nazi government. They turned it down; they didn’t see any advantage in computation.

The third one was Colossus, created by Turing and his colleagues [strictly, Colossus was built by Tommy Flowers at Bletchley Park, where Turing worked on codebreaking]. That one was taken very seriously by Churchill, and they used it to decode the Nazi messages. So every Nazi message that was sent, Churchill and his colleagues were able to read. And that actually accounted for the British winning the Battle of Britain, which they otherwise would not have won because they were outgunned. But they knew what the other side was doing.

So there’s lots of stories on this line, but this is what’s driving computer technology. And this kind of exponential growth is not just true of computers. It’s true of every type of technology.

Question: OK. What would be technological solutions to fight the disinformation problem?

Ray: Well, for one thing we have search engines. If you look at the major search engines, Bing and Google Search and so on, they’re really very accurate, and most people abide by them. You can actually check all the facts put out by a large language model against a search engine, and that’s actually being done. So that combats this type of misinformation. Now, again, people on their own can do something different: they can purposely put out claims that certain vaccines don’t work and that it would be terrible to use them. So we can’t control everything. And it’s actually a good thing that these are decentralized, so you’re not giving too much power to large companies. But they do use technology we already have, which is pretty good at telling falsehood from truth.
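
A minimal sketch of the verification loop Ray describes: extract the factual claims from a model’s answer and look each one up. The `search` function and the naive sentence-based claim extraction here are hypothetical stand-ins, not any particular company’s API; a real system would likely use an LLM for both steps.

```python
# Sketch: split a model's answer into claims and check each against a
# search engine. `search` is a hypothetical callable returning a list
# of snippet strings for a query.

def extract_claims(answer: str) -> list[str]:
    # Naive: treat each sentence as one claim.
    return [s.strip() for s in answer.split(".") if s.strip()]

def is_supported(claim: str, search) -> bool:
    snippets = search(claim)  # hypothetical search-engine call
    return any(claim.lower() in s.lower() for s in snippets)

def fact_check(answer: str, search) -> list[tuple[str, bool]]:
    return [(c, is_supported(c, search)) for c in extract_claims(answer)]
```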

Question: Can you give some details?

Ray: Well, I don’t want to get into comparing one company against another, but all these major companies, Microsoft, Google and so on, use their own search engines to check what the large language models are generating. I’ve used them many thousands of times now, and I’ve never seen anything that’s inaccurate. However, I’m not the only person: hundreds of millions of people, and ultimately billions of people, will be using this. And across all of that, sometimes it’s going to say something that’s inaccurate, and then that gets promoted virally and people get very upset about it. But it’s really actually pretty accurate.

So I think it’s a manageable problem, at least for the big companies’ large language models.

Question: Switching topics again to a big topic, not just at this conference but overall: the topic of governance, perhaps international governance. What are your thoughts? I mean, there have been frameworks around for many years, hundreds or thousands of them, and they all sound pretty similar. Let’s get a level deeper: what would be good regulatory requirements?

Ray: Well, the Asilomar Conference on biotech, which was decades ago, has proved pretty good at avoiding problems with biotech. You could certainly create something biological that could kill people, and yet we haven’t seen that, and most of the things that are coming out are positive for humanity. So we actually held an Asilomar conference at the same place on AI ethics, and we came up with some things there. It requires judgment to actually implement them.

But the fact is, it’s not the case that there’s no AI regulation. For example, one case for change is to rethink medical ethics so that we can make very rapid progress on overcoming cancer and other diseases using simulated biology. But you can’t say there’s no regulation in medicine. Even if you use AI, and everything is going to be using AI, there’s a tremendous amount of regulation. In fact, I think it’s too strong.

I know people who could have used a drug and weren’t able to get it; six months after they died, that drug became available and could have helped them. So I think, if anything, regulation is too strict. But there’s definitely regulation in that area, and in fact there’s regulation in every area. The regulations go on for hundreds of thousands of pages that people have to follow. So we don’t just put out a product: it’s surrounded by all kinds of liabilities and so on, and people really take that seriously. That’s really where the regulation comes from. These products don’t just go out into the world; they’re used in areas which have a tremendous amount of regulation.

Question: What about requiring large language models to undergo a safety check before they are deployed? There were open letters calling for a moratorium on the development of large language models. But as a practical matter, you don’t stop development; you could, however, put requirements on deployment. So if we distinguish development from deployment, you could ask a company to do certain safety checks before it actually deploys a model.

Ray: Well, they absolutely do that. They don’t put these things out without checking them, checking whether they produce a tremendous amount of false or inappropriate information. We don’t want a model telling people who are considering suicide, ‘well, why don’t you try that?’, which actually happened once but didn’t happen again. So there’s a tremendous amount of checking of these things before they go out, at least at the large companies. There are models that people change in order to put out false information that they believe. We see that today.

There’s a constant battle with people who misuse these things, and that’s not going to go away. But that’s why we don’t want to stop development: we want to use very intelligent language models to combat the abuse coming from the smaller language models that ordinary people might have and put out to influence people in a negative way. If we stop development, we won’t have those larger language models to help us. And anyway, it’s useless: people in Iran or China will continue to develop them. Nobody’s going to follow a rule about not developing these things. We actually need more intelligent weapons on the side of truth.

Question: OK, let me switch topics again. What would you recommend to young people who are finishing high school and would like to go to college? What would you recommend that they study?

Ray: Right. Well, it’s actually kind of old-fashioned advice, which is to find something that you have a passion for. Whether it’s music or art or science or physics or psychology, really get into it and appreciate what’s good and bad about it. And as we have more powerful tools, we’ll actually make more progress, and you can take advantage of that. But I wouldn’t say go for coding, for example, because coding will ultimately be taken over by machines.

I mean, already something like a third of the code that’s produced is created by machines, and that’s only going to increase. But you have to appreciate what we can do with this, and we’ll ultimately have more tools to advance it. So find your passion. Some people have multiple passions. My father had one passion, which was music. My mother was a great artist. That’s really what you should try to do.

Question: Let me challenge you on this advice. I read an opinion piece, I think it was also in the New York Times a few weeks ago, where the authors said that following your heart is not necessarily the best advice, because the problem with following your passion or your heart is that you may fall back into culturally assigned roles. So, you know, women may not necessarily think of going into physics; they might go into roles like psychology or the social sciences, because that’s what culture expects of them. So they said it’s not necessarily good advice to follow your heart.

Ray: Well, I don’t necessarily agree with that. If your heart is in it, you’re likely to do more creative work and appreciate it more. And the idea of people choosing something they don’t really care about, because they think society will reward efforts in that area: I don’t think that’s good advice.

Because the tools to create more intelligent technology in every single thing we do are going to increase. And there’s no such thing as taking a safe view.

We will have a lot more money. I show that in the book: there’s actually a straight line showing the increase in the amount of money we have. So I think people will be OK even if they choose the wrong thing, because there’s going to be a lot of money. In fact, today there are already a lot of social safety programs, which we didn’t have a hundred years ago.

The very first safety program in the United States was Social Security, which happened in the 1930s, 90 years ago. Before that, there was nothing from the government that would help you. Now there are a lot of programs. It’s not perfect, but I think by the time we get to the 2030s, we’ll have something like residual income for people that need it, because we’ll be able to afford that.

Question: If you could design the curriculum for high school students, how would you design it? Would you redesign what’s currently being taught?

Ray: Yeah. A lot of education in general hasn’t changed much in a hundred years, and a lot of it is actually memorizing rote things. If you study history, say the history of warfare, people learn certain facts, but they don’t really care about them, because they just have to learn them to pass the test. Ultimately, if people really care about what they’re studying, they’ll advance it, because they have a passion for it.

So there have been attempts. Montessori actually has a pretty good way of finding out what turns kids on and teaching them that. I think we need to really focus on the human brain. And even in childhood, human brains are already being amplified by technology. I come over and visit my grandchildren, three, nine, and eleven years old, and they’re all on their computers and all having a good time. It’s actually advancing their education. So I would try to find out what people care about and teach that.

Question: OK, one last question. Could you give us a prediction that we could verify or falsify for the next, take your pick, two or three years? What is it that you foresee?

Ray: Well, if you follow, for example, large language models, which I think are the best example of AI, they’re already quite remarkable. You can have a very intelligent discussion with them about anything. Nobody on this planet, Einstein, Freud, and so on, could do that; they might know something about their own field.

But it’s actually pretty remarkable. And it’s being advanced such that every month it gets better and better. Three years from now, these things will be quite exciting. I’m saying 2029, six years from now, but I think passing the Turing test will happen sooner. The Turing test really measures a kind of human level of ability.

These circuits operate at billions of calculations per second, whereas the connections in our brain, and we have trillions of them, which is what gives us our intelligence, operate at about 200 calculations per second. So obviously these machines can be much faster. But again, it’s not us versus the machines. We create these machines to make ourselves smarter, and we already connect with them. Generally the connection is outside of ourselves, but it’s still part of who we are.

Ultimately, it will go inside our brain; it will be part of who we are. Ultimately, most of the connections that we use will be outside ourselves. The cloud is useful because it’s backed up, so it’s not subject to warfare: you could blow up an entire building and all the information would still be there, because it’s spread among different places. And that’s only going to get greater in the years ahead.

Question: Thanks very much, Ray. We are at the end of our interview. I will be looking forward, and I think we all will be looking forward, to the publication of your book. It’s called The Singularity is Nearer and it’s coming out next year. So maybe we’ll have you on our event again next year. Thank you very much. Bye bye. Thank you very much.

Apr/2023

Ray’s opinion on the ‘AI pause’ letter (‘too vague to be practical… tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields’).

December/2022

Video: https://youtu.be/KklEmSBlUcM
Meeting: CHIP Landmark Ideas: Ray Kurzweil.
Speaker: Ray Kurzweil
Transcribed by: OpenAI Whisper via YouTube Transcription Python Notebook (thanks to Andrew Mayne)
Edited by: Alan (without AI!)
Date: 5/Dec/2022

Highlights

– ChatGPT is a ‘sizeable advance’, but ‘not quite right’.
– Large language models (LLMs) are moving in the direction of sentience.
– LLMs carry risks, just as all tech does, including railroads.

Full edited transcript

Intro

Today, we’ll be hearing about rewriting biology with artificial intelligence from Ray Kurzweil, an inventor and futurist. I’m Ken Mandl. I direct the Computational Health Informatics Program at Boston Children’s Hospital. The program was founded in ’94; we’re a multidisciplinary applied research and education program. To learn more, you can visit www.chip.org. The Landmark Ideas series is an event series featuring thought leaders across healthcare, informatics, IT, big science, innovation, and more.

Dr. Kurzweil is one of the world’s leading inventors, thinkers, and futurists. He creates and predicts using tools and ideas from the field of pattern recognition. He invented many technologies familiar to us today, including flatbed scanning, optical character recognition, and text-to-speech synthesis. He won a Grammy for creating a music synthesizer, used by Stevie Wonder, that was capable of recreating the grand piano and other orchestral instruments. He was awarded the National Medal of Technology. His best-selling books include the New York Times bestsellers The Singularity is Near and How to Create a Mind. Larry Page brought Kurzweil into Google as a principal researcher and AI visionary.

I’ll just mention one connection to CHIP: Ben Reis, a faculty member. When he was a student at MIT, he worked with Ray to develop a text-to-speech interface for that synthesizer so that Stevie Wonder and other non-sighted musicians could interact with its extensive visual navigation interface. The Singularity is a very important idea of Dr. Kurzweil’s: the point in time when artificial intelligence will surpass human intelligence, resulting in rapid technological growth that will fundamentally change civilization. In order to understand when machines surpass biology, Ray has delved deeply into an understanding of biology, and we’re immensely looking forward to hearing, learning, and joining him in that understanding today…

3:05

Question: You’re joining us for the seminar five days after the release of OpenAI’s ChatGPT [released 30/Nov/2022; this recording 5/Dec/2022], which astounded many across the world with its ability to synthesize natural language responses to really complicated questions and assignments. If you’ve gotten to glimpse this technology, could you place it on the Kurzweil map toward the Singularity? Is it a step forward, is it a distraction, is it related in any way?

Ray: Well, large language models occurred three years ago [Alan: Google BERT, 2019] and they seemed quite compelling, though they weren’t fully there: you could chat with one and sometimes it would kind of break down. The amount of new ideas going into large language models has been astounding. It’s like every other week there’s a new large language model [Alan: view the timeline and models for 2022-2023] and some new variation that’s more and more realistic. That’s going to continue to happen. This is just another step. There are some things that aren’t quite right with that particular model you mentioned [see: Alan’s illustrated guide to ChatGPT].

People have actually interacted with these things, and some people say they’re sentient. I don’t think they’re sentient yet, but I think they’re moving in that direction. And that’s actually not a scientific issue; it’s a philosophical issue, what you consider sentient or not. Although it’s a very important issue: I used to chat with Marvin Minsky, who was my mentor for 50 years, and he said that sentience is not scientific, so therefore forget it, it’s an illusion. That’s not my opinion. If you have a world that has no sentience in it, it may as well not exist. But yes, that was a sizable advance, and there’s more to come.

5:40

Question: … What do you make of the criticism that there’s more to intelligence than brute processing speed and pattern recognition, and that if we want to pass the Turing test we need to learn more about how our own intelligence evolved? I’ll just paraphrase you: in The Singularity is Near you compare cognition to chaotic computing models, where the unpredictable interaction of millions of processes, many of which contain random and unpredictable elements, provides unexpected and appropriate answers to subtle questions of recognition. Given this chaotic computing, how can you address Charlotte’s question about our own intelligence and the path forward for AI?

Ray: It is a good observation, but chaos and unpredictability can also be simulated in computers. Large language models do that: you can’t always predict how one is going to answer. With a lot of these models you can ask the same question multiple times and get different answers, so it depends on the mood of the large language model at that time. To make it more realistic it does have to take that level of… into account when it answers. At first you could ask a question and it would give you a paragraph that answered it; now it can give you several pages. It can’t yet give you a whole novel that is coherent and answers your question, so it’s not able to do what humans can do. Not many humans can do it, but some humans can write a whole novel that answers a question. To do that, the answer has to cover a large amount of material, have an unpredictable element, and also be coherent as one work. We’re seeing that happen gradually: each new large language model is able to cover a much broader array of material, and it definitely can handle answers that are not just predictable… it has a way of answering that is not totally predictable.
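
The ‘mood’ Ray mentions corresponds to stochastic decoding: models sample each next token from a probability distribution rather than always taking the most likely one, and a temperature parameter controls how variable that sampling is. A toy illustration (the three-token distribution is invented for the example):

```python
import math
import random

# Toy next-token distribution (invented for illustration).
logits = {"Paris": 5.0, "London": 3.5, "Berlin": 2.0}

def sample(logits: dict, temperature: float = 1.0) -> str:
    # Softmax with temperature: higher temperature flattens the
    # distribution, so repeated runs disagree more often.
    scaled = [v / temperature for v in logits.values()]
    z = sum(math.exp(v) for v in scaled)
    weights = [math.exp(v) / z for v in scaled]
    return random.choices(list(logits), weights=weights)[0]

random.seed(1)
print([sample(logits, temperature=1.5) for _ in range(8)])
# Same prompt, different answers; a temperature near 0 would be
# effectively deterministic.
```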

9:18

Question: …What is your definition of intelligence?

Ray: Intelligence is the ability to solve difficult problems with limited resources, including time. You can’t take a million years to solve a problem; if you can solve it quickly, then you’re showing intelligence. And that’s why somebody who is more intelligent can solve problems more quickly.

But we’re seeing that in area after area. AlphaFold, for example, can do things that humans can’t do, and very quickly, and playing something like Go goes way beyond what humans can do. In fact Lee Sedol, the best human Go player in the world, says he’s not going to play Go anymore because machines play it so much better than he can. But it’s not my view that it’s going to replace us. I think we can make ourselves smarter by merging with it, as I said.

10:55

Question: …With AI taking over physical and intellectual achievements, and individuals living longer, do you have thoughts on society, and whether individuals risk lacking a purpose?

Ray: Well, it’s good to hear from you, Sharon. That’s the whole point of our merging with this intelligence. If AI were something separate from us, it would definitely go way beyond what humans can do. So we really have to merge with it to make ourselves smarter. But that’s why we create these things. We’re separate from other animals in that we can think of a solution, implement it, and then make ourselves better.

Take what human beings were doing for work 200 years ago: 80 percent of it had to do with creating food. That’s now down to 2 percent. If I had said back then, ‘oh well, you know, all these jobs are going to go away and machines are going to do them’, people would have said ‘oh well, there’s nothing for us to do’. But actually the percentage of people that are employed has gone way up. The amount of money that we make per hour has gone way up. And if they had asked ‘well, okay, but what are we going to be doing?’ and I had said ‘well, you’re going to be doing IT engineering and protein folding’, no one would have had any idea what we were talking about, because those ideas didn’t exist.

So we’re going to make ourselves smarter. That’s why we create these capabilities. So it’s not going to be us versus AI: AI is going to go inside of us and make us much smarter than we were before. Yes, if we did not do that, it would be very difficult to know what human beings would be doing, because machines would be doing everything better.

But we’re going to be doing it because the AI is going to work through us.

13:31

Question: …A question that relates to your idea of whether it’s a dystopian society or otherwise… people with various political or personal agendas harnessing the increasing power of AI for their own purposes will not necessarily act to the long-term benefit of humankind as a whole. So how does this balance out?… Individuals’ political and personal agendas may use AI for purposes that are not beneficial to mankind. How does that balance out?

Ray: Well I mean every new technology has positive and negative aspects. The railroad did tremendous destruction but it also benefited society. So it’s not that technology is always positive.

Social networks: I mean there’s certainly a lot of commentary as to how it is negative and that’s true. But no one actually would want to do completely without social networks.

And I make the case that as technology gets better, the kinds of things that we associate with positive social benefit are actually increasing, and we can measure that. That’s actually not well known: if you run a poll on whether these things are getting better or worse, people will say they’re getting worse, whereas they’re actually getting better. But it’s not that everything is positive; there are negative aspects, and that’s why we need to keep working on how we use these technologies.

15:50

Question: … On The Singularity is Near: in that book you speculated that the risk of bioterrorism and engineered viruses would become an existential threat. Since then, do you think this risk to humanity has increased or decreased?

Ray: I don’t think it’s increased. I have a chapter in The Singularity is Near, and there’s another one in The Singularity is Nearer, on risks. All of these technologies have risks, and they could also do us in. I don’t think the likelihood of that has increased. But I remain optimistic, and if you look at the actual history of how we use technology, you could point to various things that should have gone wrong. Like every single job that we had in 1900, a little over a century ago, is gone, and yet we’re still working and actually making more money. So the way we’ve used technology has been very beneficial to human beings so far.

17:40

Question: …AI comes with large energy demands and needs rare minerals to build the hardware. How do you see these international global tensions, especially the interaction of pervasive AI and the climate?

Ray: Computers don’t use that much energy; in fact, that’s the least of our energy needs. And that’s a whole other issue we didn’t get into: the creation of renewable energy sources is on an exponential. I have a very good chart that shows all of the renewable energies, and it’s on an exponential. If you follow that out, we’ll be able to provide all of our energy needs on a renewable basis in 10 years. At that point, we’ll be using one part out of 5,000 parts of the sunlight that hits the earth, so we have plenty of headroom. So we’ll actually be able to deal with climate change through renewable sources. In terms of what computers use, they’re not that expensive.

19:15

Question: …Will the Singularity lead to a decrease in class conflict? Much of the gain in productivity and wealth in the last 50 years has been concentrated in the 1% as inflation adjusted earnings in the working class have stagnated? Are you concerned about gains in productivity due to AI being unevenly distributed? …this related question about inequities that, for example, we saw exacerbated during the COVID pandemic.

Ray: My observation is that more and more people from more and more backgrounds are participating, where they didn’t used to. Third world countries, in Africa, South America, and so on, did not participate to the same extent; they are participating far more dramatically today. Countries that were really struggling to participate in these types of advances are now participating to a very large extent. So anyway, that’s my view on it.

Question: …The machine can easily beat the best human player at computer chess, but even a young child can move pieces on the physical board better than any general purpose robot can. Do you imagine embodied machines will ever pass a physical Turing test in the real physical world? And if so, when?

Ray: Yeah, we’re making less progress with robotic machines, but that’s also coming along, and it can also use the same type of machine learning. I think we’re going to see a tremendous amount of advances in robotics over the next 10 years.

Question: …How do you envision society once individual brains can interface with a cloud? Will individuality still exist? It seems you imagine human intelligence coalescing into a singular consciousness.

Ray: Yes, definitely. One of the requirements of being able to connect to the cloud is that your portion of the cloud is yours and other people can’t access it. And we’re actually doing very well on that: all of our phones connect to the cloud, and we don’t see people complaining that other people are getting access to theirs. So we’re doing pretty well on that. But definitely you’ll be able to maintain your own personality and differences. I think we’ll actually be more different than we are today, given the kinds of skills that we’ll develop.


April/2022

Video: https://youtu.be/5iFSz1orGUg
Meeting: Singularity University GISP Class of 2009 reunion/update.
Speaker: Ray Kurzweil
Transcribed by: Otter.ai
Edited by: Alan (without AI!)
Date: 16/April/2022

Highlights

– We’ll actually achieve human-like AI before 2029 (around six years from 2022).
– ‘The human brain still has a lot more computation than even the best language models [1 trillion parameter LLMs]… However, we’re advancing them very quickly’.
– Transformers/neural networks are ‘going in the right direction. I don’t think we need a massive new breakthrough’.

Full edited transcript

00:05

Question: What are your thoughts on the singularity now, in terms of timing?

Ray: I have a new book, The Singularity is Nearer (https://www.kurzweilai.net/essays-celebrating-15-year-anniversary-of-the-book-the-singularity-is-near) [due for release in Jan/2023]. It’s completely written; I’ve shared it with a few people. And it’s very much consistent with the views I expressed in The Singularity is Near, which was 17 years earlier!

But it has new perspectives. When that book came out, we didn’t have smartphones; we had very little of what we now take for granted. And the kinds of views that I’ve expressed about the singularity are much more accepted now. I mean, people had never heard of that kind of thing before. So I discuss the singularity with the new perspective of how we currently understand technology.

And it’s actually an optimistic book. I mean, people are very pessimistic about the future, because it’s all we hear on the news. And what we hear on the news is true, there is bad news. But the good news is actually better. I’ve got like 50 charts that show different indications of human wellbeing. And every year they get better. And that’s really not reported. Peter Diamandis has talked about this as well.

01:48

Question: What kind of things have surprised you about how technology has developed or how it’s affected our society?

02:01

Ray: It really hasn’t surprised me, because it’s really in line with what I’ve been expecting. There was, for example, a poll of 24,000 people in 23 countries, asking whether poverty has gotten better or worse over the last 20 years. Almost everybody said it’s gotten worse. The reality is, it’s fallen by 50%. And there’s one poll after another that shows that people’s views are quite negative, whereas the reality is quite positive. Not to say there isn’t bad news, we see that all the time on news programs. So that’s one issue that I talked about.

But we’re actually pretty close. I mean, I think we’ll actually pass the Turing test by 2029. That’s what I started saying in 1999, in my book at that time. And The Singularity is Near (https://www.amazon.com/dp/0143037889) said the same thing in 2005.

And we’re actually getting pretty close to that. We have things that come pretty close: in many ways they’re better than humans, and in some ways they’re not quite there. But I think we have an understanding of how to solve those problems.

I think we’ll actually probably beat 2029.

And in the 2030s, we’ll actually begin to merge with that. It won’t just be us versus computers, we’ll actually put them really inside our minds. We’ll be able to connect to the cloud.

Consider your cell phone. It wouldn’t be very smart if it didn’t connect to the cloud; most of its intelligence it gets constantly from the cloud. And we’ll do the same thing with our brains. We’ll be able to think that much more deeply by basically amplifying our ability to do intelligent processing directly with the cloud. So that’s coming in the 2030s. And so our thinking then will be a hybrid of our natural thinking and the thinking in the cloud. But the cloud will keep amplifying, while our natural thinking doesn’t advance. So when you get to 2045, most of our thinking will be in the cloud.

Question: Have your thoughts on life extension changed?

Ray: No; in fact, we’re now applying AI to life extension. We’re actually simulating biology, so we can do tests with simulated biology.

So the Moderna vaccine: they actually tested several billion different mRNA sequences, and found ones that could create a vaccine (https://sloanreview.mit.edu/audio/ai-and-the-covid-19-vaccine-modernas-dave-johnson/). And they did that in three days. And that was the vaccine. We then spent 10 months testing it on humans, but it never changed; it remained the same, and it’s the same today.

Ultimately we won’t need to test on humans. We’ll be able to test on a million simulated humans, which will be much better than testing on a few hundred real humans. And we can do that in a few days. So we’ll be able to simulate every possible antidote to any problem, go through every single problem and come up with solutions very quickly, and test them very quickly. That’s something we’ll see by the end of this decade [2029].

We’re going to be able to go through all the different problems that medicine has very quickly. The way we’ve been doing it, testing with humans, takes years; then you come up with another idea, and that takes years more. We could actually test every single possible solution very quickly. And that’s coming now. We saw some of that with the Moderna vaccine.

06:46

Question: But medicine doesn’t seem to adapt to that. The vaccine was developed before the lockdowns even began, but it wasn’t deployed until… probably 2 million people would be alive if we had been able to deploy that vaccine immediately, which is quite a large number. And medicine doesn’t change its practices and styles nearly as quickly as technology changes.

07:09

Ray: Well, some people were skeptical because it was developed so quickly. But I think we’re gonna have to get over that. But it is good that we had the vaccine, otherwise, a lot more people would have died. And I’m not saying we’re there yet. But we are beginning to simulate biology. And ultimately, we’ll find solutions to all the problems we have in medicine, using simulated biology. So we’ve just begun. And I think we’ll see that being very prominent by the end of this decade.

07:55

But people have to want to live forever. If they avoid solutions to problems, then they won’t take advantage of these advances.

We’ll have the opportunity, but that doesn’t mean everybody will take it. [For example] We have a very large anti-vax movement, possibly because the vaccine was created so quickly…

08:28

Question: What do you feel most optimistic about now, Ray, or most hopeful about?

08:34

Ray: AI is going faster and faster. I’ve been in this field for 60 years; I got involved when I was 12, only six years after AI got its name at the 1954 [1956: https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth] conference at Dartmouth. And things were very, very slow: it would take many years before anything was adopted.

When I got to Google, which was about 10 years ago [Dec/2012: https://www.kurzweilai.net/kurzweil-joins-google-to-work-on-new-projects-involving-machine-learning-and-language-processing], things were going faster. It would take maybe a year or two to develop something that would be adopted.

Now things are happening every month. So we can definitely see the acceleration of technology in general, and particularly in AI. Really serious problems we seem to overcome very quickly. So we’re going to see tremendous progress by the end of this decade [2029].

09:40

Question: On AI, I’m curious: you’re now in the middle of working on large language models, and you’ve always been around all different angles of AI. So what do you think are going to be the most promising approaches to really get us to the full potential of AGI, pass Turing tests, [give us] super-intelligent machines? Do you think, for example, it’s the continued progression of large-scale language-model-type things? Or do you see a fusion of neural and more traditional logic-based and symbolic-based approaches?

10:20

Ray: First of all, we need to continue to increase the amount of computation. We’re using an amount of computation now that is beyond what we can actually provide everybody. But we can at least see what’s going to happen.

And as we increase the computation, these systems do a lot better. And they overcome problems they had before.

There are some algorithmic improvements we need. I mentioned inference before; these models don’t do inference very well. However, we have some ideas that we believe will fix that.

But it also requires more computation. The human brain still has a lot more computation than even the best language models. Right now we’re at, like, half a trillion parameters [for LLMs], and that’s still not what the human brain can do. When the human brain thinks about something very carefully, it can go beyond what these machines can do. However, we’re advancing them very quickly.

Every year, we’re multiplying the amount of computation by five or 10. So that’s one key thing that’s required.
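
As rough arithmetic on Ray’s stated growth rate, compounded to his 2029 Turing-test date (illustrative only; compute and parameter count are of course not the same thing):

```python
# Compounding Ray's stated 5-10x per-year growth in computation
# from 2022 to 2029 (illustrative arithmetic only).
for factor in (5, 10):
    total = factor ** (2029 - 2022)
    print(f"{factor}x/year for 7 years -> {total:,}x more compute")
# 5x/year -> 78,125x; 10x/year -> 10,000,000x
```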

And we do need some changes in algorithms; I think we have some ideas of what to do. We’re exactly where I would expect us to be in 2022 to meet the Turing test by 2029, which is what I’ve been saying since 1999.

And we also have to collect data; that’s a very important issue. I mentioned simulated biology: we have to actually collect the data that would allow us to simulate biology for different kinds of problems. We’ve done that in some areas, but collecting all that data is extremely important, and being able to organize it, and so on. So that’s happening. That’s another thing that’s required.

12:35

Question: I was talking to Geoffrey Hinton [Godfather of AI, developer of artificial neural networks, Fellow at Google] and he said that we really need a next-level breakthrough after deep learning to progress AI to the next level. I’m wondering if you agree with that, and if you’ve seen anything on the horizon that would fit that criterion?

12:55

Ray: Yeah, well, he’s [Hinton] always been more conservative than I have. I think we have the correct methods: neural nets.

If we just amplify them with new technology, it wouldn’t quite get us there. So we do need some additional ideas. But we can actually change neural nets to do things like inference. And there was a recent paper on jokes, developed by a new, massive model [Google PaLM, Apr/2022, see Alan’s video: https://youtu.be/kea2ATUEHH8].

We’re actually getting there. I think we’re going in the right direction. I don’t think we need a massive new breakthrough. I think the kind of neural nets that he’s [Hinton] advanced are good enough with some of the additional changes that are now being experimented with. So yes, I do think we’d need some additional things, but they’re being worked on. So it’s maybe just a difference of emphasis. But I think we’re on the right path.

14:11

Question: How do you think the pandemic shifted the priorities of the future of medicine, as well as other fields that you’d like to share with us?

14:24

Ray: Well, we did create one of the methods that we’ve been using to fight COVID with simulated biology, as I said. I think the fact that it happened so quickly has actually partly fueled the anti-vax movement.

14:54

So I think we need to actually develop this idea. We’ve had vaccinations for over a century, but they generally take a while to develop. So the fact that we were able to do it so quickly by simulating biology is a surprise to people. And it’s going to go faster: the US government has a plan to create new vaccines within a few months, and it’s going to go even faster than that, because the Moderna vaccine was actually done in three days. They simulated every single mRNA candidate and tested them in three days.

But people then wanted to test it on humans. Eventually, it will be much better to test on a million simulated humans than on 100 real humans, so we will get there. This was actually the very first time that we used simulated biology to come up with something, and I think simulated biology will work for every single medical problem.
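
A minimal sketch of the kind of in-silico screening Ray describes: enumerate candidate sequences, score each with a simulated-biology model, keep the best. `predict_immune_response` is a hypothetical stand-in, and the GC-content score is a toy, not a real immunogenicity model:

```python
from itertools import product

def predict_immune_response(seq: str) -> float:
    # Hypothetical scoring model. A real simulator would model protein
    # expression and immunogenicity; this toy just scores GC content.
    return (seq.count("G") + seq.count("C")) / len(seq)

# Enumerate short toy candidates; a real search covers billions of
# sequences, which is why it parallelizes well across machines.
candidates = ("".join(p) for p in product("ACGU", repeat=8))
best = max(candidates, key=predict_immune_response)
print(best)  # first maximal-scoring candidate, e.g. 'CCCCCCCC'
```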

One of the key ideas is to collect the data. That’s going to be very key, because you need to collect data for each specific problem. If you have all the data, then you can run it and figure out an answer very quickly. So I’m very excited about that. That’s the type of change we needed to really break through lots of medical problems that have been an issue for many years, decades.

17:06

Question: What would you say is the most surprising thing to you over the last decade? You said most things have not surprised you, but surely something has?

17:24

Ray: I’m not surprised, but I’m quite amazed at large models. Many of you have actually talked to a large model [see Alan’s Leta AI videos]. It’s already become a major thing in academe: if you’re asked to write about something, you can just ask a large model.

17:58

“What do you think of this philosophical problem?” “If you had a trolley, if you had substituted something for the trolley problem, how would that be?” And it will actually give you an answer, and a very well thought through answer.

And the new models that are coming out now, five times the size, half a trillion parameters rather than 100 billion, give you even better answers.

18:36

Question: Have you ever asked a model to predict the future of technology and had it say something where you thought, “Oh, I didn’t think of that”? That’s a very tricky question for you, Ray…

18:50

Ray: Well, I’m not actually predicting the future. I’m just giving the capabilities that we’ll have to affect it.

18:57

Question: No, I specifically mean that your task in life has been to try and forecast the future of technology. So when a model impresses you because it does something in your life’s task that you didn’t think of, that’s the particular bar I’m asking about.

19:14

Ray: Yeah, well, these large models don’t always give you the same answer. In fact, you can ask the same question a hundred times and get 100 different answers, not all of which you’ll agree with, but it’s like talking to a person.

19:36

And this is actually now affecting academe. Because if somebody’s asked to write about something, they can have the large model write about it. You can ask it a question, and if you don’t really like that answer, just ask again, and when you find something you like, you can submit it! There’s no way that anybody could find out that you’ve done that, because if they ask the same question that you asked, they’ll get a different answer. Unless they can tell that it’s not your writing style; but since everyone will have these large models anyway, it’s really hard to say what your writing style is. So it’s really a writing tool.

20:24

So I wouldn’t say that surprised me, but I think it’s really quite delightful to have this kind of intelligence come from a model. This was never possible before, and it’s still not at human levels. So we’re going to see even more surprises from these over the next several years.

20:52

Question: About the nature of large models, especially transformer models and these kinds of universal models. First question: is it really all about just computation? Is the future of innovation really all about how many nodes you can throw at it? And then, ultimately, does that become a question of how many dollars you can throw at it…

21:31

Ray: We are going beyond what’s affordable. Some of the largest models really can’t be offered to, like, a billion people to use. But we’re able to see what they’re capable of doing, so it gives us a direction.

But it’s not just the amount of computation. The amount of data is important [see Alan’s paper, What’s in my AI?]. Some of the first models were trained basically on the web, and not everything on the web is accurate. So they put out a lot of things that were basically at the level of the web, which was not totally accurate. So we’re finding ways to train on information that’s more accurate, more reliable.

And particularly if you’re trying to solve a particular problem, like: “What mRNA sequences, can you deploy to create a vaccine?” That’s a very specific type of data, you got to collect that type of information. And so we’re doing that. And so collecting data, as I said before, is very, very important.

And neural nets by themselves are not adequate: they don’t do inference correctly. That’s something that’s not fully solved. I believe it will get solved, but it’s not fully resolved yet.

It has to do with inference: understanding what a statement is saying and what its implications are, and being able to do multi-step reasoning. That’s a key issue, and that’s what’s being worked on now.

There are algorithmic issues, and there are data issues. The amount of computation is very important, though: unless you have a certain amount of computation, you can’t really simulate human intelligence. And even with a half-trillion-parameter model, we’re still not at what human beings can deploy when we focus on a specific issue.

24:15

Question: Lots of computation means lots of money, and lots of data means lots of money. Today, we’re in a world where a couple of people with a laptop (or a couple of laptops) in a garage can build some tremendous innovation. But what do you think about the societal impacts if all of the innovation revolves around systems that are enormously expensive, and how does that change the economic disparity situation?

24:57

Ray: There’s still a lot you can do with a few laptops that you couldn’t do before. And lots of people are creating very successful companies without having to spend millions or hundreds of millions of dollars on these types of innovations.

And only a few people are doing the training. It’s really the training that requires a lot of money [GPT-3 required the equivalent of 288 computing years of training, estimated at $5-10 million: https://lifearchitect.ai/models/]. Actually running these models is not nearly as expensive, so you can do that yourself if you use a model that’s trained by somebody else. A lot of people are doing that. Google and other companies, Microsoft, and so on, are making these models available, and then you can use them. No one [single] company is controlling it, because there are multiple companies making this available. So yes, there are some things that require money, but then everybody can share that training. And that’s what we’re seeing.
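
[To put numbers on the training-versus-inference asymmetry: the standard approximation for training compute is FLOPs ≈ 6 × parameters × tokens. The GPT-3 figures below (175B parameters, ~300B training tokens) are published; the V100 throughput and 30% utilization are assumptions chosen to show how a figure like “288 computing years” arises.]

```python
# Back-of-envelope training cost for GPT-3, using the standard
# FLOPs ~= 6 x parameters x training-tokens approximation.
params = 175e9          # GPT-3 parameter count (published)
tokens = 300e9          # GPT-3 training tokens (published)
flops = 6 * params * tokens            # ~3.15e23 FLOPs

pfs_day = 1e15 * 86_400                # one petaflop/s-day in FLOPs
print(f"Training compute: ~{flops / pfs_day:,.0f} petaflop/s-days")  # ~3,600

# Assumption: V100-class tensor peak (~125 TFLOPs) at ~30% utilization.
v100_effective = 125e12 * 0.30
gpu_seconds = flops / v100_effective
print(f"Single-GPU time: ~{gpu_seconds / 3.154e7:,.0f} GPU-years")   # ~270
```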

26:22

Question: What do you think about the future of democracy? Is technology going to help somehow fix the problems [within] democracy?

26:35

Ray: I have a chart of democracy over the centuries, and it’s gone way up. We […] players in the world that are not democratic, but the number of democracies and the number of people governed by democracies have gone way up.

And if you go back, even when the United States was formed, that was really the first democracy. It wasn’t a perfect democracy; we had slavery and so on. And if you look at it now, different countries have different levels of democracy and there are problems with it, but the amount of democracy in the world has gone way up.

27:36

And I do believe that’s part of the good news for people. We actually see democracies getting together today to fight those who are trying to oppose that. So yes, we’ve made tremendous progress. Lots of countries that are democratic today were not democratic even a couple of decades ago.

28:18

Question: I’ve always been fascinated and interested in the future of humanity in space. Do you have any predictions on when you think humanity will be a multiplanetary species?

29:01

Ray: Peter [Diamandis] is very concerned about this. I’ve been more concerned about amplifying the intelligence of life here on Earth.

I think it’s going to be a future era, beyond the singularity, when we have essentially exhausted the ability to create more intelligence here on Earth. At some point, our ability to create more computation will come to an end, because we really won’t have more materials to do it with.

I talk about computronium [https://en.wikipedia.org/wiki/Computronium], a concept actually pioneered by Eric Drexler [MIT, supervised by Marvin Minsky]: how much computation could you create if you organized all the atoms in an optimal way? It’s pretty fantastic. You can basically match all of the intelligence of all humanity with about one liter of computronium.
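
[A rough check of the “one liter of computronium” claim, using estimates Kurzweil has published elsewhere (The Singularity Is Near) together with Seth Lloyd’s 2000 analysis of the physical limits of computation; all constants below are order-of-magnitude assumptions.]

```python
# Order-of-magnitude arithmetic behind "one liter of computronium".
ops_per_brain = 1e16     # Kurzweil's estimate for one human brain, ops/sec
humans = 8e9             # world population, order of magnitude
humanity_ops = ops_per_brain * humans      # ~1e26 ops/sec, all of humanity

# Seth Lloyd (2000): ~5e50 ops/sec for 1 kg of optimally organized matter.
ultimate_kg_ops = 5e50

print(f"All human brains combined: ~{humanity_ops:.0e} ops/sec")
print(f"Headroom in 1 kg of computronium: ~{ultimate_kg_ops / humanity_ops:.0e}x")
```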

And we’ll get to a point where we’ve used up all the materials on Earth to create computronium. Then we will have to move to another planet.

30:44

And that’s probably at least a century away [2122]. That might seem very long. On the other hand, this world has been around for billions of years, so it’s not that long.

But at that point, it really will become imperative that we explore other planets. We won’t want to send delicate creatures like humans; we’ll want to send something that’s very highly intelligent, and it will then organize the materials on other planets into computronium. That’ll be something we’ll do for centuries after.

And then a key issue is whether or not we can go beyond the speed of light. If we’re really restricted by the speed of light, it will take a very long time to get to other places. If there’s some way of going beyond the speed of light, then it will happen faster.
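
[The numbers behind that constraint: even at light speed, interstellar and galactic distances impose long timescales. The distances below are standard astronomical figures; the 10%-of-light-speed probe is an illustrative assumption.]

```python
# Travel times under the speed-of-light constraint.
proxima_ly = 4.24            # nearest star beyond the Sun, light-years
milky_way_ly = 105_700       # approximate galactic diameter, light-years

for speed_frac in (1.0, 0.1):    # fraction of light speed
    print(f"At {speed_frac:.0%} c: Proxima in {proxima_ly / speed_frac:,.1f} y, "
          f"across the galaxy in {milky_way_ly / speed_frac:,.0f} y")
```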

Putting something on Mars, I think that’s interesting, [but] I don’t think that’ll affect humanity very much. I think what will matter is our ability to extend computation beyond Earth. And that’s really something that’s way beyond the singularity.


Listen to part of Ray’s presentation in my mid-2022 AI report…


This page last updated: 25/Oct/2024. https://lifearchitect.ai/kurzweil/