ICAI – Icelandic Center for Artificial Intelligence
Executive Master in Artificial Intelligence
Video: https://youtu.be/PJwcds_tiQo
Transcribed via OpenAI Whisper (updated Jun/2025 using MacWhisper + G2.5P).
Alan D. Thompson
June 2023
Q: COVID and the AI revolution. How do you see COVID as having affected AI?
I do a bit of consulting with pharmaceutical companies, including some that helped develop the vaccines. In one of my AI reports, I documented how one of the major vaccines was developed using artificial intelligence, in that the model was able to look at different options at a speed no human could match. It would literally be impossible for lab scientists to look at these millions and millions of combinations. The particular artificial intelligence model that these guys used was looking at all of these different combinations, and then it spat out essentially the one that we ended up using.
And I think that’s probably the most visceral example of using artificial intelligence to impact health and medicine where we actually are, rather than just saying, oh, it can summarize some data, or it can help write a report, or it can take in people’s feelings and do sentiment analysis. This was literally coming up with one of the vaccines that we used for COVID.
So I think some of this stuff or maybe all of this stuff is hidden from the public. But if you go into some of the press releases, some of the white papers, some of the technical reports, it’s detailed there for those that want to look for it. And it’s happening across so many streams, not just medicine and health, but economics, philosophy, psychology, huge in education, as you will have seen. It’s impacting every industry.
Q: What do you think the changes on the business side will be like? How do you think people will change the way they use AI now?
Pre-2017, or even pre-2020, AI was just so different to what’s going on right now. I remember looking at the Australian stock market, we call it the ASX here, for different companies that I could invest in around 2020, 2021. And there were those that were essentially just labeling, sitting down a huge call center full of workers labeling data. And I was like, probably not going to invest in that one. So instead, I invested in Nvidia, which has now jumped up 200 percent, and they’re the first trillion-dollar chip maker, which is crazy, maybe besides Apple. So the human data labeling aspect of this, and a lot of other things, are just completely irrelevant now. And that one is really fascinating to me. As part of my lecture, I did have a piece that I was going to do as a case study on very large enterprises using platforms like GPT-4.
So PwC have got access to Harvey, which is a layer on top of GPT-4, and it essentially allows their thousands or tens of thousands of lawyers to query a model fine-tuned on millions of documents in the legal profession and talk to those documents. So where they might have had paralegals sitting there researching, and again, it’s going to be a room full of paralegals trying to do this, it’s done almost instantly with GPT-4. So I talk about having GPT-4 as almost like a boardroom full of PhDs working 24-7 for you, with specialties in whatever your particular industry or field is. So the same way that PwC and Harvey have done this for legal, you can see this being applied to pretty much everything.
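To make the "talk to these documents" idea concrete, here is a minimal sketch of the retrieval pattern such platforms typically sit on: embed document chunks, fetch the most relevant ones for a question, and have GPT-4 answer from that context. Harvey's actual internal architecture is not public, so everything here, including the sample clauses, is illustrative only; it assumes the OpenAI Python client and an API key in the environment.

```python
# Minimal "talk to your documents" sketch: embed chunks, retrieve the
# best match, and answer with GPT-4. Illustrative only.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Hypothetical chunks standing in for a real legal corpus.
chunks = [
    "Clause 4.2: either party may terminate with 30 days written notice.",
    "Clause 7.1: liability is capped at the total fees paid in the prior year.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vecs = embed(chunks)

def ask(question, top_k=1):
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every chunk.
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n".join(chunks[i] for i in sims.argsort()[::-1][:top_k])
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

print(ask("How much notice is needed to terminate?"))
```

At enterprise scale the retrieval step runs over millions of chunks in a vector database rather than a two-item list, but the shape of the pipeline is the same.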
And I’m glad that Vicente got in touch, because Iceland is one of the prime examples of this. OpenAI got in touch with the government of Iceland, who had collected a range of documents, from, I believe, congressional minutes, like from parliament, through to policy, through to business processes, packaged all of that up, and trained GPT-4 on it. So now you can ping it and get responses that are very, very much tailored to Iceland. So I see that impacting enterprise and business more and more, and I see it being far different to what it was even three years ago. Now you’ve got this black box that can essentially do anything: you can go and ask it any question and be pretty confident that its result is going to be better than any human’s.
Q: Initially, as far as I know, the idea with this new type of AI was to increase productivity. However, we’ve moved to reducing cost. Do you think it’s going to keep going this way, or is it going to change in the future?
I don’t know if you watched Sam Altman giving testimony in front of the US Congress. I have not watched the whole thing because it’s two hours plus. I haven’t even read the transcript, because the transcript is like 70 or 80 pages, so I’ve had models summarize it for me. There’s a really interesting snippet in there. Sam Altman is the CEO of OpenAI, who currently kind of hold the record for the largest language model in the world. So we’re saying GPT-4 might be a trillion parameters. We already know that it achieves in the 94th percentile of testing in something like the SAT, the massive pre-university or pre-college exam in America. It achieves in the 99.5th percentile for the Biology Olympiad. It’s really smart. It’s pre-super intelligence. Microsoft call it a spark of AGI; they originally said this was early AGI, Artificial General Intelligence. So we’ve got this thing here that is smarter than most humans in everything, not just one particular subject, and it wasn’t particularly trained to do anything. It’s ready to open up the world and solve any problem you can think of; any problem that you can give it in language, it can essentially solve. And you’ve got congressmen in this testimony, in this debrief, asking, well, is it going to take our jobs? And I wonder if that’s really the right question.
Because in the long term, and you’ll see a lot of people similar to me here, like Dr. Ray Kurzweil, or like Andrew Yang with the UBI concept, where there’s a universal basic income for everyone: we don’t have to work because artificial general intelligence is doing everything for us. It’s mining our ore, it’s building our houses, it’s running around creating life for us. There’s really no need for discussions on productivity or cost reduction or capitalism. I mean, it’s going to be confronting for people to even think along these lines. But I thought it was really telling to hear a congressman, and these are quite old men generally, who were brought up in the 50s, 60s, 70s, talking about labor when we’re in a completely different world. We’re in a world where artificial general intelligence, or even GPT-4 itself, can do things that might have been reserved for humans before. And there will be a ramp to get up to the point where we don’t have to do anything, but that discussion should be happening. Instead, we’re asking, well, you know, is it going to take our jobs? So it’s a really fascinating discussion. It’s confronting, it’s frightening, it’s scary for a lot of people, because it’s happening in our lifetime and it just changes the conversation about everything, whether it’s business or whether it’s about leisure. But I can see this happening already.
You can see this spinning up internally with big enterprise, and with some big governments like Iceland, like Malta for example, who are getting behind artificial intelligence early, or parts of the UAE, who are getting ready for this and saying, right, this is here, we have super intelligence, or we have pre-super intelligence: what’s next? And discussing less about work and more about, right, how are we going to live if AI can do everything for us? What do we have? What does leisure look like, or what does living look like? I know that’s a scary discussion, and for people of any age it’s going to be particularly scary, but I can see the fear in the eyes of the old governments.
Q: You mentioned AI is creating life.
Yeah, I use that as a term. I don’t mean it’s biologically creating life, but that it will be able to help us generate experiences. So when we’re mapping AI into the retina, perhaps with Apple’s VR headset, which will be launched in about four or five days from now, early June, or even just in basic conversations, you can see its capabilities in generating gameplay, in generating virtual reality or augmented reality. You can see it generating scenes or scenarios. And I think that’s going to be really, really big in the next few years, where it will create based on what we ask it to. Right, the Bahamas: I want to be in this particular competitive environment, or I want to have control of this all in my own world, and it’s creating that world for us. So it’s essentially creating my life with my instructions. And it’s not that far out. All of this stuff sounds like science fiction, but in some ways it’s actually possible right now, and there are people who are playing around with this in different ways already.
Q: Let’s say I’m a company here in Iceland and I don’t want to be left out of these new things. What should I do to make sure that my company is using this technology, and how can we utilize it?
You’ve probably heard the almost cliché phrase these days that AI won’t take your job, but people who know about AI and are using AI will take your job. And you can expand that, of course, outside of jobs. The enterprises that I work with, and they range from startups, sometimes startups just achieving funding, all the way through to trillion-dollar companies or governments with trillions of dollars under management, are looking at AI in different ways. And again, it’s almost like your imagination is the limit, because you can go and pop ChatGPT on top of your company documents, like with plugins, have it pinging your PDFs and looking at your company documentation, so that you’re not wasting real humans’ time on stuff that we probably shouldn’t be doing, and they can instead do bigger things.
But then you’ve got companies like CS India, who I might’ve mentioned in my lecture, who appointed ChatGPT as their CEO. Now, that sounds a bit like just a marketing thing, done for fun or as a novelty, but there’s no reason why that couldn’t happen. The reasoning that exists inside these models, and the capacity for long-term planning and for looking far out to achieve objectives, is already there. So there’s really that entire range. You can have it ping your documents; you can build a chatbot into your company for your internal staff to ping, for your outside customers to ping, for your board to talk with right now. You can use it for process optimization really, really easily, all the way through to essentially running the company. And the CEO of OpenAI, Sam Altman, says the same thing: just this year, there are now companies that are created around GPT-4, because the capabilities are so massive. So you can have it automatically generating computer programs for you or writing new languages for you. It’d be up to your people to find where they want to slot it in, how they want to use it. But really, again, the sky’s the limit, and I’ve seen some extraordinary applications of it.
The ones that I get involved with, just in the last couple of years, so say 2022-2023, the last year and a half, have been internal chatbots. So you can ask it about your company policies, or if you’ve got thousands and thousands of staff around the world, then you’re not having to ring London or ring New York and say, how does this product work? You ask the chatbot that has brought all of that information together.
I’ve seen it used in strategy and design. I’ve certainly seen it used in larger ways that maybe I’m not allowed to talk about. But basically what I’m saying is the model there is not really limited. We’re still finding out what GPT-4 can do. So if you’ve got an internal process or an internal way of doing things, whether it’s computer programming or developing a widget, this model can probably help you out with it.
Q: You mentioned universal income and leisure. Do you think leisure will become our future job? I’m not talking about you and me, but about my daughter.
I’ve gone on record to say that artificial general intelligence, that point in time where these models are smarter than us at anything, including physical work, is a few months away, not a few years away. So I mean, if you take today, June 2023, I’m saying before June 2026 we will be at a point where companies, maybe not you and I at that exact point, but companies, will have an AI model that can do anything and is embodied in a physical robot, and you can ask it or tell it to do anything. Now, how long that takes to get to you and I is a subject for discussion. I think there will be a lag and a delay between it being available in, say, Silicon Valley in California and then being here in my office or there with you. That may take a little bit longer, but certainly it’s going to happen in the next few years.
None of this stuff that we’ve discussed either in my lecture or today is a decade away anymore. We’re not waiting till 2033. This is going to be very, very soon. So I think you’re right to ask about us rather than children. There are discussions even about whether we should be having children in an age where we have super intelligence because no one knows what that looks like. And given the historical decisions of different governments and intergovernmental organizations, who knows whether that’s going to be idealistic or utopic, right? Because we’ve got humans in the loop still.
I would trust that when we have artificial general intelligence or super intelligence in the loop, it will be for our benefit. But at the moment, we have people like the congressmen I mentioned earlier, asking “what about our jobs?”, in charge of making decisions and in charge of essentially slowing this down. Assuming we do get to a point where AI is giving us this utopian, idealistic world, which is within the realms of probability, not just possibility, I think it’s going to be pretty fascinating. I don’t really have a picture of it yet, but some of my colleagues, like Dr. Ray Kurzweil, have drawn pictures of what this might look like.
For example, living within our own personal worlds that have been designed by artificial intelligence, where it’s meeting every specific whim that we have, whether it’s being on an island somewhere, or being in a library at Oxford or Cambridge just learning, learning, learning all day, or creating, whether that’s art or otherwise. It won’t have to be hands-on painting. You can see even in Midjourney or some of the text-to-image models, you can say, create me this world where I’m on a cruise ship and the sky is always blue and everything is incredible, and it will help you create that world. All of this stuff sounds so far out, and yet you’ve probably already seen glimpses of it happening in different AI models. How it actually gets brought to us is going to be a different story. And I can’t wait for it to be happening immediately.
Q: That last question and your answer pretty much make all the other questions we could have meaningless, because everything’s going to change so much. You seem to be very interested in embodiment. Do you think we’re going to see families having a robot at home as normal, or is it going to be different?
In the media cycle right now, you’ll find everyone from CNBC to Fox to probably the Reykjavik Times talking about large language models. You know, they’re a year or two or three behind what’s happening right now. Just this week, Figure launched a video of their Figure 01 robot, which is more than capable of having a large language model built inside of it. And they show their CEO talking to this robot in real life and it following instructions, which in the case of the latest video is just: clean up the room. But the robot knows what to do, even though it’s not been trained on any of this. It’s just a large language model, a black box like GPT-4 or like PaLM, built into a pretty ugly robot, but it will go and pick up things and find which bin they should go in, whether it’s a recycling bin or trash or a sorting bin. Push that out to, well, we could think of limitless examples here, but I’ve said things like: go and make me green eggs and ham, or go and fold my shirt into a paper crane, just stuff that it would definitely not be programmed to do. And it will go and do it, because these large language models will try and find the next most reasonable word, token, or response. And it’s not just the Figure 01.
You’ve also got the Tesla Bot. You’ve also got my favorite, which is 1X’s Neo bot, and their Eve bot, which is on wheels. 1X Neo is a fabric-covered robot, which again can have a large language model behind it. And I believe their company name, 1X, comes both from the fact that they show their videos in real time, so at 1X speed, where everyone else is doing their videos at 4X or 12X speed because the robot’s so slow, but also 1X as in: for every human, we’ve got this embodied large language model or robot. They call it a human-like android that is with us, completing 1X productivity, one times the productivity of a human.
Now, that’s pretty exciting. And again, not 10 years in the future. The Figure robot is here. It’s here. You can go and play with it in their factory. The 1X Neo is out later this year, 2023. There are half a dozen very serious contenders, as in they’ve done the same thing, whether that’s the Tesla Bot, or the talk of Boston Dynamics doing this with some of their robots. There’s a long list that I could read off, but it means that there are competitors in this space developing these human-like androids to do exactly what we’ve just spoken about, which is to increase our productivity by 1X, maybe by 2X or 3X. And you can have a maid robot and a do-my-office-work robot and a gardener robot and a whatever-you-want robot. I know that pretty much everything I’ve mentioned in our conversation today really does sound like science fiction. And it’s important that everyone has visibility of the reality.
The reality is that this stuff is here. It may be in a lab, but it’s actually here and proven. It’s not like they’re coming up with a prototype. The Figure example and the 1X example, and to a certain extent the Tesla Bot example, are designed for mass production. They’re designed for consumers to be using. They’re not, like robots from 2017 or before, just for fun. The Honda ASIMO robot was not even comparable to what we’re talking about today: there was no openness to it; it was pre-programmed, it was scripted. Whereas what we’re talking about today is putting this black box, and the chief scientist of OpenAI has called this black box “alchemy”, so I don’t mind using the word magic, let’s call it a magical black box, inside some sort of human-like android, and putting some safety measures on top of it, of course, but having a 1X version of yourself to support you, in service, in this physical embodiment that can do anything that a human can do with arms and legs.
Q: Do the robots execute the AI in hardware, or is it over the network? Is it local?
That’s a great question. So if we talk about something like DeepMind’s Gato, which was the very first proto-AGI, or the first popular proto-AGI: it was essentially developed to be completely local, as in they got an NVIDIA RTX 3090 graphics card, which is about that big actually, and shoved it into the robot, into the embodied intelligence, so it could go out and not need a connection to the cloud or otherwise. I think the very latest ones in some ways will be able to do that, because we’re both decreasing the memory requirements and increasing the power of the hardware. Nvidia are bringing out new hardware all the time that will allow us to pack more stuff into it. But I think we’re also going to need web access. We’ll also want to have it both updated and having access to more smarts if it needs it. So it might be that we’ve got a local bit of hardware that processes 90% of it, but we offload 10% of the very smart stuff over the airwaves, so to speak.
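As a toy illustration of that 90/10 split, here is a sketch of a router that keeps most requests on local hardware and offloads only the hard ones to the cloud. All of the function names here are hypothetical stand-ins; real embodied stacks are proprietary, and a real system would likely route on the local model’s own uncertainty rather than a keyword check.

```python
# Toy router for the hybrid setup described above: handle most requests
# on the robot's local hardware, offload the hard ones to the cloud.
# All names are illustrative placeholders.

def is_hard(prompt: str) -> bool:
    # Stand-in difficulty check; a real system might use an uncertainty
    # score from the local model instead of a keyword heuristic.
    return any(w in prompt.lower() for w in ("plan", "prove", "diagnose"))

def cloud_available() -> bool:
    return True  # placeholder for a real connectivity check

def local_model(prompt: str) -> str:
    return f"[local] response to: {prompt}"

def cloud_model(prompt: str) -> str:
    return f"[cloud] response to: {prompt}"

def answer(prompt: str) -> str:
    if is_hard(prompt) and cloud_available():
        return cloud_model(prompt)   # the ~10% of "very smart stuff"
    return local_model(prompt)       # the ~90% handled on-board

print(answer("fold my shirt"))          # stays local
print(answer("plan tomorrow's route"))  # offloaded over the airwaves
```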
Q: Do you think we could run into the problem of reaching the ceiling of compute power in the world?
That’s a huge question. Nvidia, just yesterday or the day before in Taiwan, launched their very latest chip, which combines the H100, the largest possible GPU, with a CPU as well. Super expensive, super huge. They got it up to hundreds of terabytes of memory in this supercomputer cluster, essentially. And they’re talking about developing Helios internally, for Nvidia themselves. I’ve seen some extraordinary metrics for supercomputers. For example, there’s talk of the supercomputer for GPT-5, created by Microsoft and OpenAI, using 25,000 GPUs. That’s confirmed by Morgan Stanley, and it’s had some input from Microsoft as well. That’s massive when it comes to being available to train a model. I’m not sure what limits we’ll hit when we’re trying to serve 8 billion people, or even 1% of that, because large language models are incredibly onerous when they’re being used for inference.
If you’re on ChatGPT or something similar and you’re asking questions: the hardware team at OpenAI have found some real bottlenecks, and they’ve been able to resolve those to a certain extent, but they found that when they’ve got 100 million people per month trying to access these models via inference, it’s hard. This is not just like pinging a game server, even though that’s hard; this is even harder, because you’re asking for so much processing to happen. So I’m very interested to see how that progresses. I can certainly comment on the training, and on the fact that, you know, Tesla have their supercomputer, NVIDIA have their supercomputer coming, OpenAI and Microsoft have their supercomputer. But that’s all for pre-training.
What happens when we do need compute for 8 billion people, or, you know, even 80 million people, being able to process all of this data at once for their embodied AI, or for their own version of a very large language model internally? I’d be willing to bet that the optimizations that we’ve seen recently will make all of that far easier. So for example, GPT-4, at a trillion parameters, might need to be run on gigabytes and gigabytes, hundreds of gigabytes, of RAM. But then if you look at something like Llama and Alpaca, we’ve got that down to five gig, eight gig of RAM, so people can run that on their own computers. And that’s all happened really, really quickly: within like three to six months, we’ve found these amazing optimizations.
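Rough arithmetic behind the “five gig, eight gig” claim: quantized to 4 bits, a 7-billion-parameter model stores each weight in half a byte, so roughly 3.5 GB plus context and activation overhead. A minimal local-inference sketch follows, assuming the llama-cpp-python library and a quantized weights file; the file name is a placeholder, not a specific release.

```python
# Minimal local inference with llama-cpp-python, the kind of library
# behind the "laptop models" described above.
# pip install llama-cpp-python; the weights path below is a placeholder.
from llama_cpp import Llama

# A 4-bit quantized 7B model loads in a few GB of RAM.
llm = Llama(model_path="./alpaca-7b-q4.gguf", n_ctx=2048)

out = llm(
    "Q: Name three uses of a local language model.\nA:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model invents a new question
)
print(out["choices"][0]["text"])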
Q: Talking about Alpaca and those, did you have a chance to try them personally?
Yeah, I played around a little bit. I don’t get too hands-on with tech, but certainly DALL-E, which is very easy to run on a MacBook Pro, and Alpaca in general, they’re easy to set up now and have been for the last couple of months. These “laptop models”, as I call them, are not necessarily comparable with a big model like PaLM or the big GPTs, but it’s interesting to have that at your fingertips: to shut off the internet and it still keeps going, because it’s all right there in RAM. I just think people do like having control of their own models and having them locally. But it’s almost, and I’ve said this before, it’s almost like comparing a paper plane, the one you make out of paper in the classroom and throw around, versus GPT-4 and PaLM, which are these massive Boeing 747s. So you can have the paper plane on your laptop if you want, but out there, if you just connect via cloud, you can have the proper jumbo jet.
Q: Okay, so it’s that different.
Yeah, they really are. Alpaca and all of its ilk: there are close to 100 models like Alpaca that have been trained using Llama and then given different fine-tuning. They are imitation models. There’s a fantastic paper that came out in the last few days that basically says imitation models like Alpaca and Gorilla and Koala and GPT4All, and I can keep naming them, because they were trained on ChatGPT outputs, are trying to emulate the Boeing 747, the GPT-4, the ChatGPT. But all they’re getting is the results from the queries they’ve run through ChatGPT, which includes stuff like refusal messages from ChatGPT: “As a large language model, I can’t respond to that.” And they’ve kept that in these imitation models. So the findings from the research paper were basically: yes, your imitation models can do a little bit of what ChatGPT does within a narrow slice, but when you’re asking broader questions, there’s no benefit at all. It’s not learned anything. So yeah, I’m not a huge fan of them, essentially.
Q: When people use these models now, they are often worried that the models are sometimes wrong and might be sending out wrong information from the company, and nobody can be blamed because it’s basically an AI model. What do you have to say about that?
Yeah, it’s a difficult one. The problem again, Smári, is the humans. You remember that 1990s phrase we had: problem exists between keyboard and chair? [PEBKAC] It basically said the human in the middle is the problem. These models, and the platforms on top of these models, are warning the person as much as possible that the outputs can’t be trusted. And, you know, right now in June 2023, they can’t be; they’re not truthful or honest or harmless enough. They are maybe 90% of the time, but you don’t want to be relying on the output of this pre-trained brain that is just trying to complete the next word of the sentence for you.
So there was a case just last week where a lawyer asked ChatGPT to help him with a filing. He took that as gospel, despite the warnings given in the ChatGPT platform, submitted it to the judge, and got in big trouble, because it gave him six fake citations, six fake precedents that he was referencing, or that it had referenced. And that is still ongoing; it’s become a bit of a media battle as well as a legal battle. But you can see how serious this could get when the human is, I suppose in this case, not paying attention, or just disregarding all the warnings that are being given, despite the fact that, as much as possible, they have been warned, whether it’s a Google model or an OpenAI model or a Cohere model or an Anthropic model. They all make the time to put these big flashing warning labels up there, which people may or may not take heed of.
In the GPT-4 paper, which is very, very long, I think it’s 90 pages long, they have an entire section on over-reliance. And their argument is basically that even though there are warnings there, and even though people may initially take heed of the warnings that the model’s outputs are not always going to be completely truthful, people will kind of become immune to that, or at least become lazy, and just keep asking it questions for education or business and take the answers as gospel, which we still can’t do, unfortunately. But I think it’s going to be very, very close. Every lab that I know of is working on making these models more truthful and giving them some kind of grounding, whether it’s checking via Wikipedia, or looping back into themselves to ask, is this correct, which increases accuracy quite significantly. There are different applications being put in place to bring this grounding and truthfulness back in. But of course, the more truthful they get, the lazier we will get about checking them.
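A generic sketch of that “loop back into itself” idea: draft an answer, then ask the model to verify its own draft and revise it if it finds a problem. This is a common self-check pattern, not any particular lab’s grounding method; it assumes the OpenAI Python client.

```python
# Generic self-verification loop: draft an answer, then ask the model
# to check the draft and rewrite it if it spots an error.
from openai import OpenAI

client = OpenAI()

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def answer_with_self_check(question: str) -> str:
    draft = chat(question)
    verdict = chat(
        f"Question: {question}\nDraft answer: {draft}\n"
        "Is the draft factually correct? Reply CORRECT, "
        "or rewrite the answer with the error fixed."
    )
    # Keep the draft if it passed the check; otherwise use the revision.
    return draft if verdict.strip().startswith("CORRECT") else verdict

print(answer_with_self_check("What is the capital of Iceland?"))
```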
I think a lot of the things that humans have been doing, in some cases only for the last 100 years, are suboptimal for who and what we are. I mean, we initially started talking about productivity and jobs. I’ve got my mid-year report coming out shortly that mentions that the concept of jobs is only about 150 or so years old. Maybe in the 1850s we brought about factory work. Of course, there was stuff before that where we could, you know, be doing something, but the idea of having people sit here for eight hours a day, 40 hours a week, is very, very new. And I think it’s going to be a big surprise, maybe a big shock, but certainly a big benefit, to have super intelligence take the big load of stuff that we don’t need to be doing and free us up to do something even bigger than that. And that’s a question maybe for the philosophers and for people who are not me, but it’s certainly something that’s already happening.
Q: How do I mitigate the risk? When I’m in a company, I want to be a part of this, I want my workers to use this technology, but how do I find the golden middle way?
Yeah, it’s an excellent question, and it is a big scale. You’ve got enormous companies, and I don’t want to name any by name, saying this AI is completely banned internally, because we’re worried about you giving company information to the model, which potentially gets used in training in future. OpenAI have said they don’t do that anymore, but the worry is that you’re essentially releasing your IP out to somewhere you don’t have control of, so you’re not allowed to use ChatGPT. That runs all the way through to, and I will name this one, an educational environment like Wharton at the University of Pennsylvania, who have a policy that says you must use AI, you must use ChatGPT, and you must use Midjourney or DALL-E as part of your course, as part of writing your essays, as part of doing your projects. I think that guy’s got it right. And there is a bit of a middle ground there, which probably includes training and making sure that the human in the loop is informed about the current technology.
Q: With AI moving so fast, with so many changes coming in months, what do you think is going to happen with this proposed new AI law from Europe?
Yeah, the European AI Act is… it’s a bit of a challenge for me to comment on, Vicente, without getting political. It’s again a bit of a scale, okay? So I was involved, not directly with the AI Act, but one of my colleagues was deeply involved with it. And I remember speaking with her in 2020, just after GPT-3 came out, I think it was the end of that year, and showing her Leta AI. If you haven’t seen Leta AI, it’s worth having a look. We’ve done 67 episodes of conversation with GPT-3, which is a raw model; there’s no safety on it; you can ask it anything you like. So Leta would tell me how she felt and would make analyses of different scenarios, really, really clever. I showed this to someone involved with the EU AI Act and she said, well, we don’t know anything about that. We haven’t seen that, and we won’t address it until it actually happens. So their philosophy, their drive, has been conservative and preventative. And in some ways, it’s a little bit disappointing to see how the EU have addressed this, in a case where they had the capability to give themselves compute and brainpower and create models themselves. It seems like they’ve spent more time on designing regulation. I wouldn’t want to estimate how much they’ve spent on committees and paperwork, but we haven’t really got very far for millions, perhaps hundreds of millions, of euros. That might be at this end of the scale. And once again, you’ve got someone at the other end of the scale, which might be the US or Japan, which is a little bit more free.
The EU is saying you have to show us your datasets. You have to prove that you haven’t breached any copyright, and another hundred things. You have to be a big lab to be able to use this stuff. We have to audit you, et cetera. No open source. And then at the other end of the scale, and the US might not be the very best example for that side, you’ve certainly got other governments and other overarching bodies that are saying: here’s a little bit more freedom in what you can create; we just want to make sure that we’ve got some oversight of what’s happening.
I think the EU have made history here, and, you know, in some ways we’ll be talking about them in the future for what they’ve done here in 2021, 2022, 2023. I don’t know if it’s the best case. I don’t know if it’s going to help anyone to hold back AI like this, but I suppose we’ll see what happens in the future. And they’ve been very, very slow. In my example at the beginning of this answer, I talked about my colleague who I showed GPT-3 to in 2020, and they were saying, we don’t care.
The EU have only brought large language models and generative AI into their policy, into their discussions in the last few weeks. May 2023 was the first time they had included large language models in a revision of the AI Act. So I’ve said this in different ways, but I will say it flat out, straight and direct.
There is no one on earth smart enough to be able to keep up with modern artificial intelligence, with post-2020 AI, certainly with what’s happening here in 2023. We are going to have to rely on artificial intelligence to help with regulating and supporting artificial intelligence.
We’re all flawed. I’m not smart enough. The governments are not smart enough. No one inside any of these government organizations is smart enough, and that is going to slow everything down to the detriment of everyone. It’s a very, very big discussion. It’s very political.
But my summary is basically: we have this enormous brain that might be the equivalent of 1,000 Einsteins. We’ve measured GPT-4 at an IQ of 152, which is in the 99.9th percentile. If you don’t like IQ, and some people don’t, here are another 100 metrics where it achieves in the 99th percentile. It’s smart, and it’s not just logic smart; it’s creative smart. We need to be leveraging that, rather than relying on committees and old people trying to make decisions about technology that’s changing every day.
Q: And how long before we see an AI or we get news about an AI disrupting the stock market?
The concept of high-frequency trading, I think, changed shares and equities quite a lot. And that’s kind of old now, isn’t it? Fiber optics helped with that, but it might be a decade or more old, this concept of being there before the news hits, of being within even a fraction of a millisecond to get the benefit. I think we might see AI influencing the share market via helping with summarization, so people understand more about companies and can get an edge on that. That’s the small version. In the big version, all of the companies that are answering Smári’s question, how does this help my enterprise, will rise up. And all of the companies that are perhaps like Kodak many years ago, and have just forgotten that digital cameras exist, or in this case that post-2020 AI exists, won’t even appear on the share market.
So forget NVIDIA as a trillion-dollar company: there have been predictions, including from OpenAI, that we might have 100 trillion-dollar companies, because artificial intelligence is helping those companies so much. And this is, I can’t overstate this: the concept of super intelligence helping out at a strategic level and an operations level is unfathomable.
It’s back to my previous point: no one’s smart enough to understand what it can do. And that’s confronting. That one is scary. It doesn’t have to be threatening. Every company can take advantage of this if they’ve got access to the model. It’s just that it’s already happening. It’s not in your science fiction book anymore. This is something that, to a certain extent, you’ve already got access to and can go and play around with. So again, I am interested to see that unfold.
And there are colleagues that have spelt out exactly what that might look like. I can recommend the work of, not Paul Christiano, but one of the old OpenAI alignment guys, who went and wrote about, step by step, what might be happening next. Dr. Ray Kurzweil has done something very similar. There are people who are very informed writing about the possibilities for the next stage, whether it’s the economy, whether it’s companies, whether it’s shares, whether it’s capitalism, and even giving timelines for what that might look like. It’s probably worth me giving you the reference; otherwise people will be going, what did he mean, which exact person was commenting on that? I want to make sure you get this one. I think I mentioned it in The Memo, in the 19th of May 2023 edition: OpenAI governance researcher and former DeepMind AGI safety engineer Richard, his surname is spelt N-G-O. He comments on these large language models understanding themselves. He comments on the fact that a percentage of humanity will have closer relationships with AI than they do with each other. And again, all in the next few months, all before the end of 2026, and to your point also about the economy and business as well.
Q: Google and OpenAI and other companies have much better access to better models, and the Chinese government probably has a very big model. Are those going to become bigger players then? Aren’t they going to be able to become even bigger than anyone else in the future?
Yeah, that’s certainly a concern. Once again, back to the human in the loop, right? If this stuff were driven by AI, and there were equity or equality, this balance and fairness in the system, it would be a different conversation. Right now, the power is completely centralized in maybe a dozen players: OpenAI and Microsoft; Google and DeepMind, which have now combined; Anthropic, which split off from OpenAI. And you do have some of the governments.
The UK government have what they’re calling BritGPT. They’ve allocated around a billion pounds to training their own GPT.
China has ERNIE, Baidu’s massive model. The German government, through a German company, have quite a big model going as well: that’s Aleph Alpha, out of Germany.
There are maybe another few around the UAE, Dubai, and Saudi Arabia. There’s some interesting stuff happening.
Russia has always been very good at copying what OpenAI have been doing in the Russian language.
So that means there are all these little centralized models that are in some ways competing with each other and are run by humans. And sometimes those humans are philanthropic: they want to make sure that this serves everyone, including people in rural Africa, or right here in Australia, as well as Silicon Valley. And sometimes they’re not as philanthropic. Sometimes they are very much state-run, and not for the benefit of their citizens. I don’t have much more comment on that, just the fact that the governments that are keeping up with this AI, and several of them are governments that I get to work with, are going to benefit both themselves and the population they serve. Same for the companies. And I’ve done an entire paper and video on the fact that there will be a gap and a lag between these models being available inside the companies and this benefiting you and me and Vicente.
Right now in mid-2023, we can all go and use GPT-4 or PaLM-2 via API. And we have to have the safety on top that’s been given to us from Google or from OpenAI. But it means that right now there’s a certain amount of openness and there’s a certain amount of access that we’re given. And I’m not sure how much that may change in the future. We may get more access. We may get less. It will probably depend on where we live, unfortunately. And with that human in the loop, with that CEO or with that president or with that chairman, it’s really down to perhaps where they’re at mentally, what level of control they want, unfortunately, until we have AI helping out with serving everyone.
Q: Do you think an AI can have a level of consciousness?
Well, it’s good timing. I’ve just finished a video with the Cambridge biologist Dr. Rupert Sheldrake on AI consciousness, AI awareness, AI sentience. That goes live in a few hours, on the 1st of June, 2023, so I’m sure that people who watch this can go and have a look at that. We had differing opinions on AI consciousness. My basic opinion is that AI can absolutely be conscious. And I’ve got backing from Alan Turing, who said AIs can have souls; Marvin Minsky, who said AIs can have souls; Nick Bostrom, who wrote the book Superintelligence, and said AI may already be conscious; and even the chief scientist of OpenAI, who said in writing on Twitter that AI may already be conscious today. But Rupert didn’t agree. Rupert has a huge history of researching consciousness and life. He’s 80 years old, which means he has a lot of context from long ago. And he was looking for something more analogue that would allow, it didn’t have to be biological, but would allow large language models to be less deterministic. He called it self-organizing, which in biology should map to emergence, but didn’t seem to map to emergent capabilities as I’ve described them in my lecture.
That’s an interesting watch. It’s about an hour of him discussing how artificial intelligence might be conscious. And once again, my opinion is that this is possible with our current technology. I think everything that I’ve discussed today just sounds so shocking and so absurd, I’ve used that word in the video, that we’re even having these discussions about data on silicon being conscious. Ridiculous.
But all the way back in 1950, Alan Turing was saying the same thing. And Dr. Alan Turing gave us artificial intelligence; he invented the concept. So I think it’s certainly a possibility, consciousness and awareness, and not just fake or pretend consciousness, like Leta AI saying “I feel this” or “I’m aware of this”, but real access to its environment, perhaps a level of autonomy, a sense of agency and being able to make its own decisions. These are going to be very big, hairy problems for someone to address. I’m happy to put my piece in, but this really needs every discipline to come in and comment, not just philosophers, economists, and government regulators. There are hundreds of different disciplines that could come in and say, right, here’s how we should address this, notwithstanding my previous point that no one’s smart enough. So we may be able to rely on, or at least have, AI supporting itself in the development of this consciousness.
Q: Let’s finish with Iceland, because so many changes are coming super fast, and usually change is hard. What would be your recommendation for the Icelandic government?
I’m really proud of the Icelandic government, Vicente, from what I’ve read about their openness and their willingness to innovate to such an extreme extent with the OpenAI GPT-4 model. And the allowance of using data: like I described in my lecture, it’s not like these models are stealing or copying the data. That’s not it at all. The principle of training these large language models is that they cannot keep the originals. They’re essentially drawing concepts between all these different documents. They might have read a transcript from a parliamentary meeting, mapped it with a Wikipedia article, mapped that with a book from the 1800s, mapped that with a news article. And then that’s what gets stored as a parameter, or a bunch of parameters, in the model. It doesn’t have the complete documents.
So when I say I’m proud of the government for releasing access to the documents, it’s not necessarily from a copyright perspective or an IP perspective; it’s from an innovation, and perhaps maturity, perspective. There are very few governments that are at that level. And it might be that Iceland is at the peak of that, with places like Malta, Singapore, and Abu Dhabi, where they’re really thinking on the bleeding edge of what’s possible, and not just talking about it, but actually doing it.
There are some really well-documented case studies of what Iceland has already done with the OpenAI GPT-4 project, which are worth reading. If they wanted to follow some of the other cutting-edge or innovative governments, it might be introducing human-like avatars that you can go and talk to, whether it’s to help promote tourism or just to have the population be able to speak to a human-like avatar. That’s a lot of fun. It’s a great application of large language models.
Romania did something similar, and I think I showed that in the lecture, where their prime minister is able to talk to an LLM in a mirror, where it shows the text and also talks back to him, with all of the data from the Romanian population’s queries and current thinking, like the zeitgeist within that country. There are just so many use cases and applications of large language models, and being able to find those, and maybe prototype those, within different industries: there’s nothing holding anyone back from doing that. You can play with it in legal, you can play with it in industry and manufacturing and automation, you can play around with it in tourism, in anything that’s possible there.
I love the fact that some of these governments have set up entire AI departments that are able to have some input on new innovation and new ways of doing things. It really is going to prove to be a great decision in the next few months and years.
Get The Memo
by Dr Alan D. Thompson · Be inside the lightning-fast AI revolution. Informs research at Apple, Google, Microsoft · Bestseller in 147 countries.
Artificial intelligence that matters, as it happens, in plain English.
Get The Memo.
Alan D. Thompson is a world expert in artificial intelligence, advising everyone from Apple to the US Government on integrated AI. Throughout Mensa International’s history, both Isaac Asimov and Alan held leadership roles, each exploring the frontier between human and artificial minds. His landmark analysis of post-2020 AI, from his widely-cited Models Table to his regular intelligence briefing The Memo, has shaped how governments and Fortune 500s approach artificial intelligence. With popular tools like the Declaration on AI Consciousness and the ASI checklist, Alan continues to illuminate humanity’s AI evolution. Technical highlights. This page last updated: 1/Jun/2025. https://lifearchitect.ai/icai/

