👋 Hi, I’m Alan. I’m a former Chairman of Mensa International (gifted families), and I’ve been revealing the IQ of post-2020 AI models since my groundbreaking 2020 Mensa article and presentation to the World Gifted Conference. Join thousands of my paid subscribers from places like Harvard, RAND, Microsoft AI, Google AI, and Pearson (Wechsler).
Get The Memo.
Viz
Download source (PDF)
Tests: View the data (Google sheets)
Trivia test (by WCT): Human: 52%, GPT-3: 73%, J1: 55.4%
Older viz
Note: While I would love to facilitate cognitive testing of current language models, nearly all popular IQ instruments preclude testing of a ‘written word only’ language model; the most common instruments from Wechsler and Stanford-Binet require a test candidate who can respond through verbal, auditory, visual, and even kinaesthetic channels. For this reason, AI labs generally use customised testing suites focused on written text only. A selection of these benchmarks is visualised below. Please see the full data for context and references. (Update Dec/2022: This note is now outdated; we can test AI models on some IQ tests like Raven’s, though specialised AI benchmarks are still standard.)
Highlights
On SAT questions, GPT-3 scored 15% higher than an average college applicant.
On trivia questions, models like GPT-3 and J1 score up to 40% higher than the average human.
Notable events in IQ testing AI models
Date | Summary | Notes |
---|---|---|
Jan/2023 | ChatGPT = IQ 147 | ChatGPT had an IQ of 147 on a Verbal-Linguistic IQ Test. This would place it in the 99.9th percentile. |
19/Dec/2022 | GPT-3.5 outperforms humans on some tasks in symbolic Raven’s IQ tests | UCLA psychology researchers tested GPT-3.5 [28/Nov/2022 release] using a symbolic model of Raven’s Progressive Matrices (RPM): ‘We found that GPT3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings’ |
4/Nov/2022 | Anthropic’s testing on MMLU benchmarks finds that human + model outperforms either alone | …we find that human participants who interact with an unreliable large-language-model dialog assistant through chat—a trivial baseline strategy for scalable oversight—substantially outperform both the model alone and their own unaided performance… large language models can productively assist humans with difficult tasks… present large language models can help humans achieve difficult tasks in settings that are relevant to scalable oversight. — Anthropic, Nov/2022. |
10/Feb/2022 | GPT-3 has intelligence | GPT-3 has its own form of fluid and crystalline intelligence. The crystalline part is all of the facts it has accumulated and the fluid part is its ability to make logical deductions from learning the relationships between things. – OpenAI, Feb/2022 |
8/Dec/2021 | DeepMind Gopher on par with students for SAT reading questions | NYT reported “In December 2021, DeepMind announced that its L.L.M. Gopher scored results on the RACE-h benchmark — a data set with exam questions comparable to those in the reading sections of the SAT — that suggested its comprehension skills were equivalent to that of an average high school student.” RACE-h has complex reading comprehension questions for high-school students: Average human (Amazon Mechanical Turk worker) = only 69.4%; ceiling = 94.2% (link). PaLM 540B = 54.6% (few-shot). Gopher 280B = 71.6% |
2/Nov/2020 | GPT-3 is Artificial General Intelligence. | In November 2020, Connor Leahy, co-founder of EleutherAI, re-creator of GPT-2, creator of GPT-J & GPT-NeoX-20B, CEO of Conjecture, said about OpenAI GPT-3: “I think GPT-3 is artificial general intelligence, AGI. I think GPT-3 is as intelligent as a human. And I think that it is probably more intelligent than a human in a restricted way… in many ways it is more purely intelligent than humans are. I think humans are approximating what GPT-3 is doing, not vice versa.” — Connor Leahy (November 2020) |
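The IQ-to-percentile conversions above assume the standard Wechsler scaling: IQ scores normally distributed with mean 100 and standard deviation 15. A minimal sketch to check the 99.9th-percentile figure for an IQ of 147 (the function name is my own, for illustration):

```python
from statistics import NormalDist

def iq_to_percentile(iq: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Convert an IQ score to a population percentile, assuming IQ
    is normally distributed (Wechsler scale: mean 100, SD 15)."""
    return NormalDist(mean, sd).cdf(iq) * 100

# An IQ of 147 is just over 3 standard deviations above the mean:
print(round(iq_to_percentile(147), 1))  # → 99.9
```

The same formula puts Raven’s-style cutoffs in context: an IQ of 130 (2 SD above the mean) corresponds to roughly the 97.7th percentile, the usual gifted threshold.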
ChatGPT achievements
ChatGPT achievements: View the full data (Google sheets)
Binet assessment with Leta AI
An early version of the Binet (1905) was run with Leta AI in May 2021. It is very informal, only a small selection of questions were used, and it should be considered as a fun experiment only.
https://youtu.be/BDTm9lrx8Uw?list=PLqJbCeNOfEK88QyAkBe-U0zxCgbHrGa4V
Get The Memo
by Dr Alan D. Thompson · Be inside the lightning-fast AI revolution. Thousands of paid subscribers. Readers from Microsoft, Tesla, Google AI...
Artificial intelligence that matters, as it happens, in plain English.

This page last updated: 24/Jan/2023. https://lifearchitect.ai/iq-testing-ai/