Dr Ray Kurzweil: Update April/2022

Video: https://youtu.be/5iFSz1orGUg
Meeting: Singularity University GISP Class of 2009 reunion/update.
Speaker: Dr Ray Kurzweil
Transcribed by: Otter.ai
Edited by: Alan (without AI!)
Date: 16/April/2022


– We’ll likely pass the Turing test before 2029 (around seven years from 2022).
– ‘The human brain still has a lot more computation than even the best language models [half-trillion-parameter LLMs]… However, we’re advancing them very quickly’.
– Transformers/neural networks are ‘going in the right direction. I don’t think we need a massive new breakthrough’.

Full edited transcript


Question: What are your thoughts on the singularity now, in terms of timing?

Ray: I have a new book, The Singularity is Nearer [https://www.kurzweilai.net/essays-celebrating-15-year-anniversary-of-the-book-the-singularity-is-near] [due for release in Jan/2023]. It’s completely written, and I’ve shared it with a few people. And it’s very much consistent with the views I expressed in The Singularity is Near, which came out 17 years earlier!

But it has new perspectives. When that book came out, we didn’t have smartphones, we had very little of what we now take for granted. And the kinds of views that I’ve expressed with the singularity are much more acceptable. I mean, people had never heard of that kind of thing before. So, I discuss the singularity with the new perspectives of how we currently understand technology.

And it’s actually an optimistic book. I mean, people are very pessimistic about the future, because it’s all we hear on the news. And what we hear on the news is true, there is bad news. But the good news is actually better. I’ve got like 50 charts that show different indications of human wellbeing. And every year they get better. And that’s really not reported. Peter Diamandis has talked about this as well.


Question: What kind of things have surprised you about how technology has developed or how it’s affected our society?


Ray: It really hasn’t surprised me, because it’s really in line with what I’ve been expecting. There was, for example, a poll of 24,000 people in 23 countries, asking whether poverty has gotten better or worse over the last 20 years. Almost everybody said it’s gotten worse. The reality is, it’s fallen by 50%. And there’s one poll after another that shows that people’s views are quite negative, whereas the reality is quite positive. Not to say there isn’t bad news, we see that all the time on news programs. So that’s one issue that I talked about.

But we’re actually pretty close. I mean, I think we’ll actually pass the Turing test by 2029. That’s what I started saying in 1999. And my book at that time, The Singularity is Near [https://www.amazon.com/dp/0143037889], said the same thing in 2005.

We have things that come pretty close already. In many ways, they’re better than humans. In some ways, they’re not quite there. But I think we have an understanding of how to solve those problems.

I think we’ll actually probably beat 2029 [to pass the Turing test].

And in the 2030s, we’ll actually begin to merge with that. It won’t just be us versus computers, we’ll actually put them really inside our minds. We’ll be able to connect to the cloud.

Consider your cell phone. It wouldn’t be very smart if it didn’t connect to the cloud. Most of its intelligence is constantly coming from the cloud. And we’ll do the same thing with our brains. We’ll be able to think that much more deeply by basically amplifying our ability to do intelligent processing directly with the cloud. So that’s coming in the 2030s. And so our thinking then will be a hybrid of our natural thinking and the thinking in the cloud. But the cloud will keep amplifying, whereas our natural thinking doesn’t advance. So when you get to 2045, most of our thinking will be in the cloud.

Question: Have your thoughts on life extension changed?

Ray: No, in fact we’re now applying AI to life extension. We’re actually simulating biology. So we can actually do tests with a simulated biology.

So the Moderna vaccine: they actually tested several billion different mRNA sequences, and found ones that could create a vaccine [https://sloanreview.mit.edu/audio/ai-and-the-covid-19-vaccine-modernas-dave-johnson/]. And they did that in three days. And that was the vaccine. We then spent 10 months testing it on humans, but it never changed, it remained the same. And it’s the same today.

Ultimately we won’t need to test on humans. We’ll be able to test on a million simulated humans, which will be much better than testing on a few hundred real humans. And we can do that in a few days. So we’ll be able to simulate every possible antidote to any problem, and we’ll go through every single problem and come up with solutions very quickly, and test them very quickly. That’s something we’ll see by the end of this decade [2029].

We’re gonna be able to go through all the different problems that medicine has, very quickly. The way that we’ve been doing it, testing with humans, takes years. Then you come up with another idea, and that takes years more. We could actually test them all, every single possible solution, very quickly. And that’s coming now. And we saw some of that with the Moderna vaccine.
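[Editor’s note] The “screen millions of simulated candidates” idea above can be sketched in a few lines. This is a toy illustration only: the scoring function is a random stand-in for a real biology simulator, and nothing here reflects Moderna’s actual pipeline.

```python
import random

def simulated_efficacy(candidate: int, rng: random.Random) -> float:
    """Hypothetical score for one candidate.

    Placeholder only: a real system would run a biology simulation here.
    """
    return rng.random()

def screen(n_candidates: int, seed: int = 0) -> int:
    """Return the index of the highest-scoring simulated candidate."""
    rng = random.Random(seed)
    best_idx, best_score = -1, -1.0
    for i in range(n_candidates):
        score = simulated_efficacy(i, rng)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# Screening a million simulated candidates is cheap compared with
# running even one physical trial:
best = screen(1_000_000)
```

The point of the sketch is the shape of the loop, not the scoring: once a simulator exists, exhaustively scoring candidates in silico is trivially parallel and takes days rather than years.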


Question: But medicine doesn’t seem to adapt to that. The vaccine was developed before the lockdowns even began, yet it wasn’t deployed until… Probably 2 million people would be alive if we had had the knowledge that we could deploy that vaccine immediately, which is quite a large number. And medicine doesn’t change its practices and styles nearly as quickly as technology changes.


Ray: Well, some people were skeptical because it was developed so quickly. But I think we’re gonna have to get over that. But it is good that we had the vaccine, otherwise, a lot more people would have died. And I’m not saying we’re there yet. But we are beginning to simulate biology. And ultimately, we’ll find solutions to all the problems we have in medicine, using simulated biology. So we’ve just begun. And I think we’ll see that being very prominent by the end of this decade.


But people have to want to live forever. I mean, if they avoid solutions to problems, then they won’t take advantage of these advances.

We’ll have the opportunity, but that doesn’t mean everybody will do it. [For example] We have a very large anti-vax movement. Possibly because it was created so quickly…


Question: What do you feel most optimistic about now, Ray, or most hopeful about?


Ray: AI is going faster and faster. I’ve been in this field since I was 12; I’ve been in it for 60 years. I got involved only six years after AI got its name, at the 1954 [1956: https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth] conference at Dartmouth. And things were very, very slow. It would take many years before anything was adopted.

When I got to Google, which is about 10 years ago [Dec/2012: https://www.kurzweilai.net/kurzweil-joins-google-to-work-on-new-projects-involving-machine-learning-and-language-processing], things were going faster. It would take maybe a year or two to develop something that would be adopted.

Now things are happening every month. So we can definitely see the acceleration of technology in general, and particularly in AI. Even a really serious problem, we seem to overcome very quickly. So we’re gonna see tremendous progress by the end of this decade [2029].


Question: For AI, I’m curious, now that you’re in the middle of working on large language models. You’ve always been around all different angles of AI, so what do you think are going to be the most promising approaches of AI to really get us to the full potential of AGI, pass Turing tests, [give us] superintelligent machines? Do you think, for example, it’s continued progression of large-scale language model type of things? Or do you see a fusion of neural and more traditional logic-based and symbolic-based approaches?


Ray: First of all, we need to continue to increase the amount of computation. We’re using an amount of computation now that is beyond what we can actually provide everybody. But we can at least see what’s going to happen.

And as we increase the computation, these systems do a lot better. And they overcome problems they had before.

There are some algorithmic improvements we need. I mentioned inference before, it doesn’t do inference very well. However, we have some ideas that we believe will fix that.

But it also requires more computation. The human brain still has a lot more computation than even the best language models. I mean, right now we’re at, like, half a trillion parameters [for LLMs]. And it’s still not what the human brain can do. So when the human brain thinks about something very carefully, it can go beyond what these machines can do. However, we’re advancing them very quickly.

Every year, we’re multiplying the amount of computation by five or ten. So that’s one key thing that’s required.
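[Editor’s note] For a sense of scale, a 5-10x annual multiplier compounds dramatically over the seven years from 2022 to 2029. A minimal sketch (the starting value of 1.0 is arbitrary; only the ratios matter):

```python
def projected_compute(start: float, multiplier: float, years: int) -> float:
    """Compute available after `years` of annual growth by `multiplier`."""
    return start * multiplier ** years

# From 2022 to 2029 (7 years):
low = projected_compute(1.0, 5, 7)    # 5^7  = 78,125x today's compute
high = projected_compute(1.0, 10, 7)  # 10^7 = 10,000,000x today's compute
```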

And we do need some changes in algorithms; I think we have some ideas of what to do. We’re exactly where I would expect to be in 2022 to meet the Turing test by 2029, which is what I’ve been saying since 1999.

And we also have to collect data; that’s a very important issue. I mentioned simulated biology: we have to actually collect the data that would allow us to simulate biology for different kinds of problems. We’ve done that in some areas, but actually collecting all that data is extremely important. And being able to organize it, and so on. So that’s happening. That’s another thing that’s required.


Question: I was talking to Geoffrey Hinton [godfather of AI, developer of artificial neural networks, Fellow at Google] and he said that we really need a next-level breakthrough after deep learning to progress AI to the next level. And I’m wondering if you agree with that assessment, and if you’ve seen anything on the horizon that would fit that description?


Ray: Yeah, well, he’s [Hinton] always been more conservative than I have. I think we have the correct methods: neural nets.

If we just amplified them with new technology, it wouldn’t quite get us there. So we do need some additional ideas. But we can actually change neural nets to do things like inference. And there was a recent paper on explaining jokes, using a new, massive model [Google PaLM, Apr/2022, see Alan’s video: https://youtu.be/kea2ATUEHH8].

We’re actually getting there. I think we’re going in the right direction. I don’t think we need a massive new breakthrough. I think the kind of neural nets that he’s [Hinton] advanced are good enough, with some of the additional changes that are now being experimented with. So yes, I do think we need some additional things, but they’re being worked on. So it’s maybe just a difference of emphasis. But I think we’re on the right path.


Question: How do you think the pandemic shifted the priorities of the future of medicine, as well as other fields that you’d like to share with us?


Ray: Well, I mean, we did create one of the methods that we’ve been using to fight COVID with simulated biology, as I said. I think the fact that it happened so quickly has actually partly fueled the anti-vax movement.


So I think we need to actually develop this idea. We’ve had vaccinations for over a century. But they generally take a while to develop. So the fact that we were able to do it so quickly by simulating biology is a surprise to people. And it’s actually going to go faster. The US government has a plan to create new vaccines within three months. And it’s gonna go even faster than that. Because the Moderna vaccine was actually done in three days: they started it, they simulated every single mRNA vaccine, and tested it in three days.

But people then wanted to test it on humans. I think eventually it will be much better to test on a million simulated humans than a hundred real humans. So we will get there. This was actually the very first time that we used simulated biology to come up with something. And I think simulated biology will work for every single medical problem.

One of the key ideas is to collect the data. That’s going to be very key, because you need to collect data for each single problem. If you have all the data, then you can run it and figure out an answer very quickly. So I’m very excited about that. That’s the type of change we needed to really break through lots of medical problems that have been an issue for many years, decades.


Question: What would you say is the most surprising thing to you over the last decade? You said most things were not surprising you, but surely something has surprised you?


Ray: I’m not surprised, but I’m quite amazed at large models. Many of you have actually talked to a large model [see Alan’s Leta AI videos]. It’s already become a major thing in academe. If you’re asked to write about something, you can just ask a large model.


“What do you think of this philosophical problem?” “If you substituted something for the trolley in the trolley problem, how would that be?” And it will actually give you an answer, and a very well-thought-through answer.

And the new models that are coming out now that are five times the size, half a trillion parameters rather than 100 billion, they give you even better answers.


Question: Have you ever asked a model to predict the future of technology, and it said something where you thought, “Oh, I didn’t think of that”? That’s a very tricky question for you, Ray…


Ray: Well, I’m not actually predicting the future. I’m just giving the capabilities that we’ll have to affect it.


Question: No, I specifically mean that your task in life has been to try and forecast features of technology. And so when a model impresses you, because it does something in that task that you didn’t think of, that would be the particular bar that I’m asking you about.


Ray: Yeah, well, I mean, these large models don’t always give you the same answer. In fact, you can ask the same question a hundred times and get a hundred different answers. Not all of which you’ll agree with, but it’s like talking to a person.


And this is actually now affecting academe. Because if somebody’s asked to write about something, you can have the large model write about it. You can ask it a question, and if you don’t really like that answer, just ask it again; when you find something you like, you can submit it! There’s no way that anybody could tell that you’ve done that, because if they ask the same question that you’ve asked, they’ll get a different answer. They might be able to tell that it’s not your writing style, but since everyone will have these large models anyway, it’s really hard to say what your writing style is. So it’s really a writing tool.


So I wouldn’t say that surprised me, but I think it’s really quite delightful to have this kind of intelligence come from a model. This was never possible before, and it’s still not at human levels. So we’re gonna see even more surprises from these over the next several years.


Question: About the nature of large models, especially transformer models and these kinds of universal models. First question: is it really all about computation? Is the future of innovation really all about how many nodes you can throw at it? Because ultimately, that becomes a question of how many dollars you can throw at it…


Ray: We are going beyond what’s affordable. So some of the largest models really can’t be made available for, say, a billion people to use. But we’re able to see what they’re capable of doing. So it actually gives us a direction.

But it’s not just the amount of computation. I mean, the amount of data is important [see Alan’s paper, What’s in my AI?]. Some of the first models were trained basically on the web. Not everything on the web is accurate. And so they put out a lot of things that were basically at the level of the web, which was not totally accurate. So we’re actually finding ways to train on information that’s more accurate, more reliable.

And particularly if you’re trying to solve a particular problem, like: “What mRNA sequences can you deploy to create a vaccine?” That’s a very specific type of data; you’ve got to collect that type of information. And so we’re doing that. As I said before, collecting data is very, very important.

And then, neural nets by themselves are not adequate. They don’t do inference correctly. That’s something that’s not fully solved; I believe it will get solved, but it’s not fully resolved yet.

It has to do with inference: understanding what a statement is saying and what the implications are, being able to do multi-step reasoning. That’s a key issue, and that’s what’s being worked on now.

There are algorithmic issues, and there are data issues. The amount of computation is very important, though. Unless you have a certain amount of computation, you can’t really simulate human intelligence. And even with, like, a half-trillion-parameter model, we’re still not at what human beings can deploy when we apply ourselves to a specific issue.


Question: Across lots of computation, there’s lots of money. Across lots of data, there’s lots of money. Today, we’re in a world where a couple of people with a laptop (or a couple of laptops) in a garage can build some tremendous innovation. But what do you think about the societal impacts if all of the innovation is really around these systems that are just enormously expensive, and how does that change the economic-disparity situation?


Ray: There’s still a lot you can do with a few laptops, which you couldn’t do in previous times. And lots of people are creating very successful companies without having to spend millions or hundreds of millions of dollars on these types of innovations.

And a few people are doing the training. It’s really the training that requires a lot of money [GPT-3 required the equivalent of 288 computing years of training, estimated at $5-10 million: https://lifearchitect.ai/models/]. Actually running these models is not nearly as expensive. And so you can do that yourself, if you can use a model that’s trained by somebody else. So a lot of people are doing that. Google and other companies (Microsoft, and so on) are making these models available. And then you can use them. No one [single] company is controlling it, because there are multiple companies making this available. So yes, there are some things that require money, but then everybody can share that training. And that’s what we’re seeing.
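[Editor’s note] A back-of-envelope check on the figures quoted above (288 computing years, $5-10 million). Assuming those numbers, the implied cost per computing year of training works out as follows:

```python
# Back-of-envelope arithmetic only, using the figures in the note above:
# GPT-3 training ~ 288 computing years, estimated at $5-10 million total.
TRAINING_COMPUTE_YEARS = 288
COST_LOW_USD = 5_000_000
COST_HIGH_USD = 10_000_000

per_year_low = COST_LOW_USD / TRAINING_COMPUTE_YEARS    # ~ $17,361 per computing year
per_year_high = COST_HIGH_USD / TRAINING_COMPUTE_YEARS  # ~ $34,722 per computing year
```

This is the asymmetry Kurzweil is describing: the one-off training bill is in the millions, while running (inference on) an already-trained model costs a tiny fraction of that, which is why shared trained models make the technology broadly accessible.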


Question: What do you think about the future of democracy? If technology is going to help somehow fix the problems [within] democracy?


Ray: I have a chart on democracy over the centuries, and it’s gone way up. There are […] players in the world that are not democratic, but the number of democracies, and the number of people governed by democracies, has gone way up.

And if you go back, even when the United States was formed, that was really the first democracy. And if you look back at that, it wasn’t a perfect democracy; we had slavery and so on. And if you look at it now, different countries have different levels of democracy, and there are problems with it, but the amount of democracy in the world has gone way up.


And I do believe that’s part of the good news for people. And we actually see democracies getting together today to fight those who are trying to oppose that. So yes, we’ve made tremendous progress. Lots of countries that are democratic today were not democratic even a couple of decades ago.


Question: I’ve always been fascinated and interested in the future of humanity in space. Do you have any predictions on when you think humanity will be a multiplanetary species?


Ray: Peter [Diamandis] is very concerned about this. I’ve been more concerned about amplifying the intelligence of life here on Earth.

I think it’s going to be a future era, beyond the singularity, when we’ve actually kind of exhausted the ability of Earth to create more intelligence. At some point, our ability to create more computation here will come to an end, because we really won’t have more materials to do that with.

I talk about computronium [https://en.wikipedia.org/wiki/Computronium], which was actually pioneered by Eric Drexler [MIT, supervised by Marvin Minsky]: how much computation can you create if you were to organize all the atoms in an optimal way? It’s pretty fantastic. You can basically match all of the intelligence of all humanity with sort of one liter of computronium.

And we’ll get to a point where we’ve used up all the materials on Earth to create computronium. Then we will have to move to another planet.


And that’s probably at least a century away [2122]. That might seem very long. On the other hand, this world has been around for billions of years, so it’s not that long.

But at that point, it really will become imperative that we explore other planets. We won’t want to send delicate creatures like humans; we’ll want to send something that’s very highly intelligent. And it will then organize the materials on other planets to become computronium. That’ll be something we’ll do for centuries after.

And then a key issue is whether or not we can go beyond the speed of light. If we’re really restricted by the speed of light, it will take a very long time to get to places. If there’s some way of going beyond the speed of light, then it will happen faster.

Putting something on Mars, I think that’s interesting, [but] I don’t think that’ll affect humanity very much. I think it’ll be our ability to extend computation beyond Earth. And that’s really something that’s way beyond the singularity.


Dr Alan D. Thompson is an AI expert and consultant. With Leta (an AI powered by GPT-3), Alan co-presented a seminar called ‘The new irrelevance of intelligence’ at the World Gifted Conference in August 2021. His applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021. He has held positions as chairman for Mensa International, consultant to GE and Warner Bros, and memberships with the IEEE and IET. He is open to consulting and advisory on major AI projects with intergovernmental organisations and enterprise.

This page last updated: 30/May/2022. https://lifearchitect.ai/kurzweil/