Apr/2023
Ray’s opinion on the ‘AI pause’ letter (‘too vague to be practical… tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields’).
December/2022
Video: https://youtu.be/KklEmSBlUcM
Meeting: CHIP Landmark Ideas: Ray Kurzweil.
Speaker: Dr Ray Kurzweil
Transcribed by: OpenAI Whisper via YouTube Transcription Python Notebook (thanks to Andrew Mayne; a minimal sketch of this pipeline appears below)
Edited by: Alan (without AI!)
Date: 5/Dec/2022
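[Alan: for the curious, here is a minimal sketch of the kind of YouTube-to-Whisper pipeline used for this transcript, assuming the open-source openai-whisper and yt-dlp packages are installed; Andrew Mayne’s actual notebook may differ in its details.]

```python
# Minimal sketch of a YouTube -> Whisper transcription pipeline.
# Assumes the open-source `openai-whisper` and `yt-dlp` packages;
# the actual notebook used for this page may differ.
import subprocess
import whisper

URL = "https://youtu.be/KklEmSBlUcM"  # the talk transcribed below

# Download the audio track only, converting it to mp3.
subprocess.run(
    ["yt-dlp", "-x", "--audio-format", "mp3", "-o", "talk.%(ext)s", URL],
    check=True,
)

# Load a Whisper model and transcribe. "base" is fast; "large" is
# slower but more accurate.
model = whisper.load_model("base")
result = model.transcribe("talk.mp3")
print(result["text"])
```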
Highlights
– ChatGPT is a ‘sizable advance’, but ‘not quite right’.
– Large language models (LLMs) are moving in the direction of sentience.
– LLMs carry risks, just as all tech does, including railroads.
Full edited transcript
Intro
Dr. Kurzweil is one of the world’s leading inventors, thinkers, and futurists. He creates and predicts using tools and ideas from the field of pattern recognition. He invented many technologies familiar to us today, including flatbed scanning, optical character recognition, and text-to-speech synthesis. He won a Grammy for creating a music synthesizer, used by Stevie Wonder, that was capable of recreating the grand piano and other orchestral instruments. He was awarded the National Medal of Technology. His best-selling books include the New York Times bestsellers The Singularity is Near and How to Create a Mind. Larry Page brought Kurzweil into Google as a principal researcher and AI visionary.
I’ll just mention one connection to CHIP: Ben Rice, a faculty member. When he was a student at MIT, he worked with Ray to develop a text-to-speech interface for that synthesizer, so that Stevie Wonder and other non-sighted musicians could interact with its extensive visual navigation interface. The Singularity is a very important idea of Dr. Kurzweil’s: the point in time when artificial intelligence will surpass human intelligence, resulting in rapid technological growth that will fundamentally change civilization. In order to understand when machines surpass biology, Ray has delved deeply into an understanding of biology, and we’re immensely looking forward to hearing, learning, and joining him in that understanding today…
Question: You’re joining us for the seminar five days after the release of OpenAI’s ChatGPT [released 30/Nov/2022; this recording 5/Dec/2022], which astounded many across the world with its ability to synthesize natural-language responses to really complicated questions and assignments. If you’ve gotten to glimpse this technology, could you place it on the Kurzweil map toward the Singularity? Is it a step forward, is it a distraction, is it related in any way?
Ray: Well, large language models occurred three years ago [Alan: Google BERT, 2019] and they seemed quite compelling. They weren’t fully there: you could chat with one and sometimes it would kind of break down. The amount of new ideas going into large language models has been astounding. It’s like every other week there’s a new large language model [Alan: view the timeline and models for 2022–2023] and some new variation that’s more and more realistic. That’s going to continue to happen. This is just another step. There are some things that aren’t quite right with that particular model you mentioned [see: Alan’s illustrated guide to ChatGPT].
People have actually interacted with these things, and some people say they’re sentient. I don’t think they’re sentient yet, but I think they’re actually moving in that direction, and that’s actually not a scientific issue. It’s a philosophical issue: what you consider sentient or not. It’s a very important issue, though. I would chat with Marvin Minsky, who was my mentor for 50 years, and he said that sentience is not scientific, so therefore forget it, it’s an illusion. That’s not my opinion. If you have a world that has no sentience in it, it may as well not exist. But yes, that was a sizable advance, and there’s more to come.
5:40
Question: … What do you make of the criticism that there’s more to intelligence than brute processing speed and pattern recognition, and that if we want to pass the Turing test we need to learn more about how our own intelligence evolved? I’ll just paraphrase you in The Singularity is Near, comparing cognition to chaotic computing models, where the unpredictable interaction of millions of processes, many of which contain random and unpredictable elements, provides unexpected and appropriate answers to subtle questions of recognition. Given this chaotic computing, how can you address Charlotte’s question about our own intelligence and the path forward for AI?
Ray: It is a good observation, but chaos and unpredictability can also be simulated in computers. Large language models do that: you can’t always predict how one is going to answer. With a lot of these models you can actually ask the same question multiple times and get different answers, so it depends on the mood of the large language model at that time. To make it more realistic, it does have to take that level of… into account when it answers. At first, you could ask a question and it would give you a paragraph that answered it. Now it can actually give you several pages. It can’t yet give you a whole novel that is coherent and answers your question, so it’s not able to do what humans can do. Not many humans can do it, but some humans can write a whole novel that answers a question. That’s the target: the answer has to cover a large amount of material, have an unpredictable element, but also be coherent as one work. We’re seeing that happen gradually: each new large language model is able to cover a much broader array of material, but it definitely can handle stuff that is not just giving you something predictable… it has a way of answering that is not really totally predictable.
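[Alan: the ‘different answers to the same question’ behaviour Ray describes comes from sampling: instead of always taking the single most likely next token, a model draws from its probability distribution, usually reshaped by a ‘temperature’ parameter. A toy sketch in plain Python, with made-up numbers rather than any particular model:]

```python
# Toy illustration of temperature sampling: the mechanism behind a
# language model answering the same prompt differently each time.
# Toy logits only; not taken from any particular model.
import math
import random

def sample(logits, temperature=1.0):
    """Sample an index from logits after temperature scaling."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = random.random()
    cumulative = 0.0
    for i, e in enumerate(exps):
        cumulative += e / total
        if r < cumulative:
            return i
    return len(logits) - 1

tokens = ["yes", "no", "maybe"]
logits = [2.0, 1.5, 0.5]  # the model's raw preferences
# Low temperature -> near-deterministic; higher -> more varied.
for _ in range(5):
    print(tokens[sample(logits, temperature=1.0)])
```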
9:18
Question: …What is your definition of intelligence?
Ray: I mean, intelligence is the ability to solve difficult problems with limited resources, including time. You can’t take a million years to solve a problem; if you can solve it quickly, then you’re showing intelligence. And that’s why somebody who is more intelligent might be able to solve problems more quickly.
But we’re seeing that in area after area. AlphaFold, for example, can actually do things that humans can’t do, very quickly, and playing something like Go goes way beyond what humans can do. In fact Lee Sedol, who’s the best human Go player in the world, says he’s not going to play Go anymore because machines can play it so much better than he can. But it’s actually not my view that it’s going to replace us. I think we can actually make ourselves smarter by merging with it, as I said.
10:55
Question: … With AI taking over physical and intellectual achievements, and individuals living longer, do you have thoughts on society, and whether individuals risk lacking a purpose?
Ray: Well, it’s good to hear from you, Sharon. That’s the whole point of our merging with machine intelligence. If AI were something separate from us, it’s definitely going to do everything, and go way beyond what humans can do. So we really have to merge with it to make ourselves smarter. But that’s why we create these things. We’re separate from other animals in that we can think of a solution, implement it, and then make ourselves better.
Now look at what human beings were doing for work 200 years ago. 80 percent of it had to do with creating food. That’s now down to 2 percent. And so if I were to say back then, ‘oh well, you know, all these jobs are going to go away and machines are going to do them’, people would say ‘oh well, there’s nothing for us to do’. But actually the percentage of people that are employed has gone way up. The amount of money that we’re making per hour has gone way up. And they’d say ‘well, okay, but what are we going to be doing?’ And I’d say ‘well, you’re going to be doing IT engineering and protein folding’, and no one would have any idea what we were talking about, because those ideas didn’t exist.
So we’re going to make ourselves smarter. That’s why we create these capabilities. And so it’s not going to be us versus AI. AI is going to go inside of us and make us much smarter than we were before. So yes I think if we did not do that then it would be very difficult to know what human beings would be doing, because machines would be doing everything better.
But we’re going to be doing it because the AI is going to work through us.
13:31
Question: … A question that relates to your idea of whether it’s a dystopian society or otherwise… people with various political and/or personal agendas may harness the increasing power of AI for their own purposes, which will not necessarily be to the long-term benefit of humankind as a whole. So how does this balance out?
Ray: Well, every new technology has positive and negative aspects. The railroad caused tremendous destruction, but it also benefited society. So it’s not that technology is always positive.
Social networks: there’s certainly a lot of commentary as to how they’re negative, and that’s true. But hardly anyone would actually want to do without social networks completely.
And I make the case that as we use technology, if you measure the kinds of things that we associate with positive social benefit, they’re actually increasing as the technology gets better. And that’s not widely known. If you run a poll on whether these things are getting better or worse, people will say they’re getting worse, whereas they’re actually getting better. But it’s not that everything is positive; there are negative aspects, and that’s why we need to keep working on how we use these technologies.
15:50
Question: … The Singularity is Near. In that book you speculated that the risk of bioterrorism and engineered viruses would become an existential threat. Since then, do you think this risk to humanity has increased or decreased?
Ray: I don’t think it’s increased. I have a chapter in The Singularity is Near, and there’s also another one in The Singularity is Nearer, on risks. All of these technologies have risks, and they could also do us in. I don’t think that the likelihood of that has increased. I remain optimistic: if you look at the actual history of how we use technology, you could point to various things that should have gone wrong. Like every single job that we had in 1900, a little over a century ago, is gone, and yet we’re still working and actually making more money. So the way we’ve used technology has been very beneficial to human beings so far.
17:40
Question: … AI comes with large energy demands and rare-mineral material needs to build the hardware. How do you see these international tensions, especially the interaction between pervasive AI and the climate?
Ray: Computers don’t use that much energy. In fact, that’s the least of our energy needs. And that’s a whole other issue we didn’t get into: the creation of renewable energy sources is on an exponential. I have a very good chart that shows all of the renewable energies, and it’s on an exponential. If you follow that out, we’ll be able to provide all of our energy needs on a renewable basis in 10 years. At that point, we’ll be using one part out of 5,000 parts of the sunlight that hits the earth. So we have plenty of headroom there. We’ll actually be able to deal with climate change through renewable sources. In terms of what we’re using, computers are not that expensive.
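[Alan: as a rough sanity check on the headroom claim, using round figures that are my assumptions, not Ray’s: total solar power reaching Earth is roughly 173,000 TW, and world primary energy demand is on the order of 20 TW. The exact ratio depends on what you count (total surface vs. usable land, conversion efficiency), but it lands in the thousands, the same order of magnitude as Ray’s one-in-5,000:]

```python
# Back-of-envelope check on the sunlight "headroom" claim.
# Round figures (editor's assumptions, not from the talk):
SOLAR_INPUT_TW = 173_000   # approx. solar power reaching Earth
WORLD_DEMAND_TW = 20       # approx. world primary energy demand

ratio = SOLAR_INPUT_TW / WORLD_DEMAND_TW
print(f"Demand is ~1 part in {ratio:,.0f} of incident sunlight")
# -> ~1 part in 8,650; restricting to usable land and realistic
# conversion efficiency brings the figure nearer 1 in 5,000.
```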
19:15
Question: … Will the Singularity lead to a decrease in class conflict? Much of the gain in productivity and wealth in the last 50 years has been concentrated in the 1%, as inflation-adjusted earnings of the working class have stagnated. Are you concerned about gains in productivity due to AI being unevenly distributed? … This relates to the question about inequities that, for example, we saw exacerbated during the COVID pandemic.
Ray: My observation is that more and more people from more and more backgrounds are participating, which didn’t used to be the case. Third-world countries, in Africa, South America, and so on, did not participate to the same extent, whereas they are participating far more dramatically today. Countries that were really struggling to participate in these types of advances are now participating to a very large extent. So, anyway, that’s my view on it.
Question: … A machine can easily beat the best human player at chess, but even a young child can move pieces on the physical board better than any general-purpose robot can. Do you imagine embodied machines will ever pass a physical Turing test in the real physical world? And if so, when?
Ray: Yeah, we’re making less progress with robotic machines, but that’s also coming along. They can also use the same type of machine learning. And I think we’re going to see a tremendous amount of advances in robotics over the next 10 years.
Question: … How do you envision society once individual brains can interface with the cloud? Will individuality still exist? It seems you imagine human intelligence coalescing into a singular consciousness.
Ray: Yes, definitely. One of the requirements of being able to connect to the cloud is that your portion of the cloud is yours, and other people can’t access it. And we’re actually doing very well on that. All of our phones connect to the cloud, and we don’t see people complaining that other people are getting access to theirs. So we’re actually doing pretty well on that. But definitely you’ll be able to maintain your own personality and differences. I think we’ll actually be more different than we are today, given the kinds of skills that we’ll develop.
April/2022
Video: https://youtu.be/5iFSz1orGUg
Meeting: Singularity University GISP Class of 2009 reunion/update.
Speaker: Dr Ray Kurzweil
Transcribed by: Otter.ai
Edited by: Alan (without AI!)
Date: 16/April/2022
Highlights
– We’ll actually achieve human-like AI before 2029 (around six years from 2022).
– ‘The human brain still has a lot more computation than even the best language models [1 trillion parameter LLMs]… However, we’re advancing them very quickly’.
– Transformers/neural networks are ‘going in the right direction. I don’t think we need a massive new breakthrough’.
Full edited transcript
00:05
Question: What are your thoughts on the singularity now, in terms of timing?
Ray: I have a new book, The Singularity is Nearer (https://www.kurzweilai.net/essays-celebrating-15-year-anniversary-of-the-book-the-singularity-is-near) [due for release in Jan/2023]. It’s completely written, and I’ve shared it with a few people. It’s very much consistent with the views I expressed in The Singularity is Near, which was 17 years earlier!
But it has new perspectives. When that book came out, we didn’t have smartphones; we had very little of what we now take for granted. And the kinds of views that I’ve expressed about the singularity are much more acceptable now; back then, people had never heard of that kind of thing. So I discuss the singularity with new perspectives on how we currently understand technology.
And it’s actually an optimistic book. People are very pessimistic about the future, because that’s all we hear on the news. And what we hear on the news is true; there is bad news. But the good news is actually better. I’ve got something like 50 charts that show different indications of human wellbeing, and every year they get better. And that’s really not reported. Peter Diamandis has talked about this as well.
01:48
Question: What kind of things have surprised you about how technology has developed or how it’s affected our society?
02:01
Ray: It really hasn’t surprised me, because it’s really in line with what I’ve been expecting. There was, for example, a poll of 24,000 people in 23 countries, asking whether poverty has gotten better or worse over the last 20 years. Almost everybody said it’s gotten worse. The reality is, it’s fallen by 50%. And there’s one poll after another that shows that people’s views are quite negative, whereas the reality is quite positive. Not to say there isn’t bad news, we see that all the time on news programs. So that’s one issue that I talked about.
But we’re actually pretty close. I think we’ll actually pass the Turing test by 2029. That’s what I started saying in 1999. And my book The Singularity is Near (https://www.amazon.com/dp/0143037889) said the same thing in 2005.
We’re actually getting pretty close to that. We already have things that come pretty close: in many ways they’re better than humans, and in some ways they’re not quite there. But I think we have an understanding of how to solve those problems.
I think we’ll actually probably beat 2029.
And in the 2030s, we’ll actually begin to merge with that. It won’t just be us versus computers, we’ll actually put them really inside our minds. We’ll be able to connect to the cloud.
Consider your cell phone. It wouldn’t be very smart if it didn’t connect to the cloud; most of its intelligence it is constantly getting from the cloud. And we’ll do the same thing with our brains. We’ll be able to think that much more deeply by basically amplifying our ability to do intelligent processing directly with the cloud. So that’s coming in the 2030s. Our thinking then will be a hybrid of our natural thinking and the thinking in the cloud. But the cloud will amplify; our natural thinking doesn’t advance. So when you get to 2045, most of our thinking will be in the cloud.
Question: Have your thoughts on life extension changed?
Ray: No. In fact, we’re now applying AI to life extension. We’re actually simulating biology, so we can do tests with simulated biology.
Take the Moderna vaccine: they actually tested several billion different mRNA sequences, and found ones that could create a vaccine (https://sloanreview.mit.edu/audio/ai-and-the-covid-19-vaccine-modernas-dave-johnson/). And they did that in three days. And that was the vaccine. We then spent 10 months testing it on humans, but it never changed; it remained the same. And it’s the same today.
Ultimately we won’t need to test on humans. We’ll be able to test on a million simulated humans, which will be much better than testing on a few hundred real humans. And we can do that in a few days. So we’ll be able to actually simulate every possible antidote to any problem, and we’ll go through every single problem and come up with solutions very quickly, and test them very quickly. That’s something we’ll see by the end of this decade [2029].
We’re going to be able to go through all the different problems that medicine has very quickly. The way we’ve been doing it, testing with humans takes years; then you come up with another idea, and that takes years more. We could actually test them all, every single possible solution, very quickly. And that’s coming now. We saw some of that with the Moderna vaccine.
06:46
Question: But medicine doesn’t seem to adapt to that. The vaccine was developed before the lockdowns even began, but it wasn’t deployed until… probably 2 million people would still be alive if we had been able to deploy that vaccine immediately, which is quite a large number. And medicine doesn’t change its practices and styles nearly as quickly as technology changes.
07:09
Ray: Well, some people were skeptical because it was developed so quickly. I think we’re going to have to get over that. But it is good that we had the vaccine; otherwise, a lot more people would have died. And I’m not saying we’re there yet. But we are beginning to simulate biology, and ultimately we’ll find solutions to all the problems we have in medicine using simulated biology. We’ve just begun, and I think we’ll see that being very prominent by the end of this decade.
07:55
But people have to want to live forever. If they avoid solutions to problems, then they won’t take advantage of these advances.
We’ll have the opportunity, but that doesn’t mean everybody will take it. [For example] We have a very large anti-vax movement, possibly because the vaccine was created so quickly…
08:28
Question: What do you feel most optimistic about now, Ray, or most hopeful about?
08:34
Ray: AI is going faster and faster. I’ve been in this field since I was 12; I’ve been in it for 60 years. I got involved only six years after AI got its name, at the 1954 [actually 1956: https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth] conference at Dartmouth. And things were very, very slow. It would take many years before anything was adopted.
When I got to Google, which was about 10 years ago [Dec/2012: https://www.kurzweilai.net/kurzweil-joins-google-to-work-on-new-projects-involving-machine-learning-and-language-processing], things were going faster. It would take maybe a year or two to develop something that would be adopted.
Now things are happening every month. So we can definitely see the acceleration of technology in general, and particularly in AI. Even a really serious problem we seem to overcome very quickly. So we’re going to see tremendous progress by the end of this decade [2029].
09:40
Question: For AI, I’m curious, now you’re in the middle of working on large language models, and you’ve always been around all different angles of AI: what do you think are going to be the most promising approaches to really get us to the full potential of AGI, pass Turing tests, [give us] superintelligent machines? Do you think, for example, it’s continued progression of large-scale language-model-type approaches? Or do you see a fusion of neural and more traditional logic-based and symbolic approaches?
10:20
Ray: First of all, we need to continue to increase the amount of computation. We’re using an amount of computation now that is beyond what we can actually provide everybody. But we can at least see what’s going to happen.
And as we increase the computation, these systems do a lot better. And they overcome problems they had before.
There are some algorithmic improvements we need. I mentioned inference before; these models don’t do inference very well. However, we have some ideas that we believe will fix that.
But it also requires more computation. The human brain still has a lot more computation than even the best language models. Right now we’re at, like, half a trillion parameters [for LLMs], and that’s still not what the human brain can do. When the human brain thinks about something very carefully, it can go beyond what these machines can do. However, we’re advancing them very quickly.
Every year, we’re multiplying the amount of computation by five or 10. So that’s one key thing that’s required.
And we do need some changes in algorithms; I think we have some ideas of what to do. We’re exactly where I would expect to be in 2022 to meet the Turing test by 2029, which is what I’ve been saying since 1999.
And we also have to collect data, that’s a very important issue. I mentioned simulated biology, we have to actually collect the data that would allow us to simulate biology for different kinds of problems. We’ve done that in some areas, but actually collecting all that data is extremely important. And being able to organize it, and so on. So that’s happening. That’s another thing that’s required.
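[Alan: taking Ray’s two figures at face value (roughly half a trillion parameters in 2022, multiplying by 5–10× per year), here is a quick projection. The 100-trillion-synapse ‘brain scale’ target is a commonly cited stand-in that I have added; it is not from the talk.]

```python
# Rough projection from Ray's figures: ~0.5T parameters in 2022,
# growing 5-10x per year. The ~100-trillion-synapse target is an
# editor's stand-in for 'brain scale', not a figure from the talk.
import math

START_PARAMS = 0.5e12  # half a trillion parameters (2022)
TARGET = 100e12        # ~10^14 synapses, rough brain-scale proxy

for growth in (5, 10):
    years = math.log(TARGET / START_PARAMS) / math.log(growth)
    print(f"At {growth}x/year: ~{years:.1f} years to reach {TARGET:.0e}")
# -> ~3.3 years at 5x, ~2.3 years at 10x: comfortably before Ray's
#    2029 Turing-test date, if the growth rate held.
```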
12:35
Question: I was talking to Geoffrey Hinton [godfather of AI, developer of artificial neural networks, Fellow at Google], and he said that we really need a next-level breakthrough after deep learning to progress AI to the next level. I’m wondering if you agree with that assessment, and if you’ve seen anything on the horizon that would fit that criterion?
12:55
Ray: Yeah, well, he’s [Hinton] always been more conservative than I have. I think we have the correct method: neural nets.
If we just amplified them with new technology, it wouldn’t quite get us there. So we do need some additional ideas. But we can actually change neural nets to do things like inference. And there was a recent paper on explaining jokes, using a new, massive model [Google PaLM, Apr/2022, see Alan’s video: https://youtu.be/kea2ATUEHH8].
We’re actually getting there. I think we’re going in the right direction. I don’t think we need a massive new breakthrough. I think the kind of neural nets that he’s [Hinton] advanced are good enough with some of the additional changes that are now being experimented with. So yes, I do think we’d need some additional things, but they’re being worked on. So it’s maybe just a difference of emphasis. But I think we’re on the right path.
14:11
Question: How do you think the pandemic shifted the priorities of the future of medicine, as well as other fields you’d like to share with us?
14:24
Ray: Well, we did create one of the methods we’ve been using to fight COVID with simulated biology, as I said. I think the fact that it happened so quickly has actually partly fueled the anti-vax movement.
14:54
So I think we need to actually develop this idea. We’ve had vaccinations for over a century, but they generally take a while to develop. So the fact that we were able to do it so quickly, by simulating biology, is a surprise to people. And it’s actually going to go faster. The US government has a plan to create new vaccines within as little as three months. And it’s going to go even faster than that, because the Moderna vaccine was actually done in three days: they simulated every single mRNA sequence, and tested them, in three days.
But people then wanted to test it on humans. I think eventually it will be much better to test on a million simulated humans than on a hundred real humans. So we will get there. This was actually the very first time that we used simulated biology to come up with something, and I think simulated biology will work for every single medical problem.
One of the key ideas is to collect the data. That’s going to be very key, because you need to collect data for each single problem. If you have all the data, then you can run it and figure out an answer very quickly. So I’m very excited about that. That’s the type of change we needed to really break through lots of medical problems that have been an issue for many years, decades.
17:06
Question: What would you say is the most surprising thing to you over the last decade? You said most things were not surprising you, but surely something has surprised you?
17:24
Ray: I’m not surprised, but I’m quite amazed at large models. Many of you have actually talked to a large model [see Alan’s Leta AI videos]. It’s actually already become a major thing in academe: if you have to write about something, you can just ask a large model.
17:58
“What do you think of this philosophical problem?” “If you substituted something else for the trolley in the trolley problem, how would that be?” And it will actually give you an answer, and a very well-thought-through answer.
And the new models that are coming out now, which are five times the size, half a trillion parameters rather than 100 billion, give even better answers.
18:36
Question: Have you ever asked a model to predict the future of technology, and it said something where you thought, “Oh, I didn’t think of that”? That’s a very tricky question for you, Ray…
18:50
Ray: Well, I’m not actually predicting the future. I’m just giving the capabilities that we’ll have to affect it.
18:57
Question: No, I specifically mean that your task in life has been to try to forecast features of technology. So when a model impresses you because it does something in your life’s task that you didn’t think of, that would be the particular bar I’m asking about.
19:14
Ray: Yeah, well, I mean, these large models don’t always give you the same answer. In fact, you can ask the same question a hundred times and get 100 different answers. Not all of which you’ll agree with, but it’s like talking to a person.
19:36
And this is actually now affecting academe. If somebody’s asked to write about something, they can have the large model write about it. You can ask it a question, and if you don’t really like the answer, just ask again; when you find something you like, you can submit it! There’s no way that anybody could tell that you’ve done that, because if they ask the same question that you asked, they’ll get a different answer. Unless they can tell that it’s not your writing style; but since everyone will have these large models anyway, it’s really hard to say what your writing style is. So it’s really a writing tool.
20:24
So I wouldn’t say that surprised me, but I think it’s really quite delightful to have this kind of intelligence come from a model. This was never possible before, and it’s still not at human level. So we’re going to see even more surprises from these over the next several years.
20:52
Question: On the nature of large models, especially transformer models and these kinds of universal models. First question: is it really all about computation? Is the future of innovation really all about how many nodes you can throw at it? And then, ultimately, does that become a question of how many dollars you can throw at it…
21:31
Ray: We are going beyond what’s affordable. Some of the largest models really can’t be made available for, like, a billion people to use. But we’re able to see what they’re capable of doing, so it actually gives us a direction.
But it’s not just the amount of computation. The amount of data is important [see Alan’s paper, What’s in my AI?]. Some of the first models were trained basically on the web, and not everything on the web is accurate. So they put out a lot of things that were basically at the level of the web, which was not totally accurate. We’re now finding ways to train on information that’s more accurate and more reliable.
And particularly if you’re trying to solve a particular problem, like ‘What mRNA sequences can you deploy to create a vaccine?’, that’s a very specific type of data; you’ve got to collect that type of information. And so we’re doing that. Collecting data, as I said before, is very, very important.
And then, neural nets by themselves are not adequate. They don’t do inference correctly. That’s something that’s not fully solved; I believe it will get solved, but it’s not fully resolved yet.
It has to do with inference: understanding what a statement is saying and what its implications are, and being able to do multi-step reasoning. That’s a key issue, and that’s what’s being worked on now.
There are algorithmic issues, and there are data issues. The amount of computation is very important, though: unless you have a certain amount of computation, you can’t really simulate human intelligence. And even with, like, a half-trillion-parameter model, we’re still not at what human beings can deploy when we focus on a specific issue.
24:15
Question: With lots of computation comes lots of money, and with lots of data comes lots of money. Today, we’re in a world where a couple of people with a laptop (or a couple of laptops) in a garage can build some tremendous innovation. But what do you think about the societal impacts if all of the innovation revolves around these systems that are just enormously expensive, and how that changes the economic disparity situation?
24:57
Ray: There’s still a lot you can do with a few laptops, which you couldn’t do in previous times. And lots of people are creating very successful companies without having to spend millions or hundreds of millions of dollars on these types of innovations.
And a few people are doing the training. It’s really the training that requires a lot of money [GPT-3 required the equivalent of 288 computing years of training, estimated at $5-10 million: https://lifearchitect.ai/models/]. Actually running these is not nearly as expensive, so you can do that yourself if you can use a model that’s trained by somebody else. And a lot of people are doing that. Google and other companies, Microsoft and so on, are making these models available, and then you can use them. No one [single] company is controlling it, because there are multiple companies making this available. So yes, there are some things that require money, but then everybody can share that training. And that’s what we’re seeing.
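[Alan: the footnoted GPT-3 estimate makes Ray’s train-once, run-many point concrete. A toy amortization, where the user count is an illustrative assumption rather than a reported figure:]

```python
# Toy amortization of a one-off training cost across users, using
# the footnoted GPT-3 estimate (~$5-10M per training run). The user
# count is an illustrative assumption, not a reported figure.
for cost in (5_000_000, 10_000_000):  # USD, one training run
    users = 1_000_000                 # assumed downstream users
    print(f"${cost:,} run shared by {users:,} users: "
          f"${cost / users:.2f} per user")
# A multi-million-dollar run shared widely costs dollars per head,
# which is why sharing trained models works economically.
```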
26:22
Question: What do you think about the future of democracy? If technology is going to help somehow fix the problems [within] democracy?
26:35
Ray: I have a chart on democracy over the centuries, and it’s gone way up. And we […] players in the world that are not democratic, but the number of democracies, and the number of people governed by democracies, has gone way up.
If you go back, even when the United States was formed, that was really the first democracy. And if you look back at it, it wasn’t a perfect democracy; we had slavery and so on. If you look at it now, different countries have different levels of democracy, and there are problems with it, but the amount of democracy in the world has gone way up.
27:36
And I do believe that’s part of the good news for people. We actually see democracies getting together today to fight those who are trying to oppose that. So yes, we’ve made tremendous progress. Lots of countries that are democratic today were not democratic even a couple of decades ago.
28:18
Question: I’ve always been fascinated and interested in the future of humanity in space. Do you have any predictions on when you think humanity will be a multiplanetary species?
29:01
Ray: Peter [Diamandis] is very concerned about this. I’ve been more concerned about amplifying the intelligence of life here on Earth.
I think it’s going to be a future era, beyond the singularity, when we have kind of exhausted the ability here on Earth to create more intelligence. At some point, our ability to create more computation will come to an end, because we really won’t have more materials to do it with.
I talk about computronium (https://en.wikipedia.org/wiki/Computronium), which was actually pioneered by Eric Drexler [MIT, supervised by Marvin Minsky]: how much computation can you create if you were to organize all the atoms in an optimal way? It’s pretty fantastic. You can basically match all of the intelligence of all humanity with around one liter of computronium.
And we’ll get to a point where we’ve used up all the materials on Earth to create computronium. Then we will have to move to another planet.
30:44
And that’s probably at least a century away [2122]. That might seem very long. On the other hand, this world has been around for billions of years, so it’s not that long.
But at that point, it really will become imperative that we explore other planets. We won’t want to send delicate creatures like humans; we’ll want to send something that’s very highly intelligent, and it will then organize the materials on other planets to become computronium. That’ll be something we’ll do for centuries after.
And then a key issue is whether or not we can go beyond the speed of light. If we’re really restricted by the speed of light, this will take a very long time. If there’s some way of going beyond the speed of light, then it will happen faster.
Putting something on Mars is interesting, [but] I don’t think it’ll affect humanity very much. I think it’ll be our ability to extend computation beyond Earth, and that’s really something that’s way beyond the singularity.
Listen to part of Ray’s presentation in my mid-2022 AI report…

This page last updated: 25/Apr/2023. https://lifearchitect.ai/kurzweil/