Note: This visualisation is a simplified view of a complex structure; it was created following my discussions with the Google AI (Google Brain) team and others. It will continue to be revised to document AI progress (2021-present).
“I think GPT-3 is artificial general intelligence, AGI. I think GPT-3 is as intelligent as a human. And I think that it is probably more intelligent than a human in a restricted way… in many ways it is more purely intelligent than humans are. I think humans are approximating what GPT-3 is doing, not vice versa.”
— Connor Leahy, co-founder of EleutherAI, co-creator of GPT-Neo (November 2020)
The brain has been understood for decades
In 2005, Ray Kurzweil wrote: ‘There are no inherent barriers to our being able to reverse engineer the operating principles of human intelligence and replicate these capabilities in the more powerful computational substrates… The human brain is a complex hierarchy of complex systems, but it does not represent a level of complexity beyond what we are already capable of handling.’
Along came transformers
In 2019, transformer-based models like GPT-2 were studied and compared with the human brain, and were found to use similar processing to arrive at the same outputs.
In 2020, Martin Schrimpf at MIT found that:
Specific models accurately predict human brain activity… with up to 100% predictivity… transformers such as BERT predict large portions of the data. The model that predicts the human data best across datasets is GPT2-xl [this paper was written before GPT-3 was released], which predicts [test datasets] at close to 100%… These scores are higher in the language network than in other parts of the brain.
[Language model] architecture alone, with random weights, can yield representations that match human brain data well. If we construe model training as analogous to learning in human development, then human cortex might already provide a sufficiently rich structure that allows for the rapid acquisition of language. Perhaps most of development is then a combination of the system wiring up and learning the right decoders on top of largely structurally defined features. In that analogy, community development of new architectures could be akin to evolution, or perhaps, more accurately, selective breeding with genetic modification.
Neural predictivity correlates across datasets spanning recording modalities (fMRI, ECoG, reading times) and diverse materials presented visually and auditorily…
An intriguing possibility is therefore that both the human language system and the ANN models of language are optimized to predict upcoming words in the service of efficient meaning extraction.
— Schrimpf et al. (2020).
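Schrimpf et al.’s ‘predictivity’ is, roughly, a cross-validated linear mapping from model activations to recorded brain responses, scored by correlation on held-out stimuli. A minimal sketch on synthetic data (all sizes, the noise level, and the plain least-squares fit are illustrative assumptions, not the paper’s exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (illustrative sizes, not the paper's):
# 200 stimuli, 50 model-activation features, 10 recording sites.
n_stim, n_feat, n_sites = 200, 50, 10
model_acts = rng.normal(size=(n_stim, n_feat))   # ANN layer activations
true_map = rng.normal(size=(n_feat, n_sites))
brain_resp = model_acts @ true_map + 0.5 * rng.normal(size=(n_stim, n_sites))

# Fit a linear map from model activations to brain responses on held-in
# stimuli, then score held-out predictions by correlation per site.
train, test = slice(0, 150), slice(150, 200)
weights, *_ = np.linalg.lstsq(model_acts[train], brain_resp[train], rcond=None)
pred = model_acts[test] @ weights

predictivity = np.mean([
    np.corrcoef(pred[:, s], brain_resp[test, s])[0, 1]
    for s in range(n_sites)
])
print(f"mean neural predictivity r = {predictivity:.2f}")
```

The synthetic brain responses are built from the model activations by construction, so the score comes out high; with real recordings the correlation is bounded by measurement noise, which is why the paper reports predictivity relative to a noise ceiling.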
Two years later, researchers at Oxford and Stanford extended this:
… transformers (with a little twist) recapitulate spatial representations found in the brain [and] show a close mathematical relationship of this transformer to current hippocampal models from neuroscience.
— Whittington et al. (2022).
In May/2022, DeepMind synthesised some of the major literature:
Our results may also relate to the complementary roles of different learning systems in the human brain. According to the complementary learning systems theory (Kumaran et al., 2016; McClelland and O’Reilly, 1995) and its application to language understanding in the brain (McClelland et al., 2020), the neocortical part of the language system bears similarities to the weights of neural networks, in that both systems learn gradually through the accumulated influence of large amounts of experience.
Correspondingly, the hippocampal system plays a role similar to the context window in a transformer model, by representing the associations encountered most recently (the hippocampus generally has a time-limited window; Squire, 1992). While the hippocampal system is thought to store recent context information in connection weights, and transformers store such information directly in their state representations, there is now a body of work pointing out the quantitative and computational equivalence of weight- and state-based representations of context for query-based access to relevant prior information (Krotov and Hopfield, 2021; Ramsauer et al., 2021) as this is implemented in transformers.
In this light, it is now possible to see the human hippocampal system as a system that provides the architectural advantage of the transformer’s context representations for few-shot learning.
— Chan et al. (May/2022).
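The weight-/state-based equivalence the authors cite (Krotov and Hopfield, 2021; Ramsauer et al., 2021) can be shown in a few lines: one update step of a modern Hopfield network is softmax attention over stored patterns. A toy sketch (the patterns, dimensions, and the `beta` sharpness value are illustrative assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Three stored 4-d patterns, held directly in state (the transformer /
# modern-Hopfield side of the equivalence), not in learned weights.
memories = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])

def retrieve(query, beta=8.0):
    # One step of modern-Hopfield retrieval = softmax attention with
    # keys and values both set to the stored patterns.
    scores = softmax(beta * memories @ query)
    return scores @ memories

# A noisy cue is pulled back toward the closest stored pattern.
out = retrieve(np.array([0.9, 0.1, 0.0, 0.0]))
print(np.argmax(out))  # → 0 (the first memory is recovered)
```

This is the ‘query-based access to relevant prior information’ in miniature: a partial cue retrieves the full stored association, whether the store lives in weights (hippocampal view) or in state (transformer context window).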
In Jun/2022, Meta AI continued their research into mapping transformer-based AI to the brain:
Overall, given that the human brain remains the best known system for speech processing, our results highlight the importance of systematically evaluating self-supervised models on their convergence to human-like speech representations. The complexity of the human brain is often thought to be incompatible with a simple theory: “Even if there were enough data available about the contents of each brain area, there probably would not be a ready set of equations to describe them, their relationships, and the ways they change over time” (Gallant). By showing how the equations of self-supervised learning give rise to brain-like processes, this work is an important challenge to this view.
— Millet et al. (Jun/2022).
In Aug/2022, Max Planck and Donders researchers used GPT-2 to show that the human brain behaves as a prediction machine:
It has been suggested that the brain uses prediction to guide the interpretation of incoming input… we address both issues by analysing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous, probabilistic predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable signatures of syntactic, phonemic and semantic predictions. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
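‘Precisely quantify contextual predictions’ here means assigning each upcoming word a probability under the model and converting it to surprisal, −log₂ p, in bits. A minimal sketch (the probabilities below are hand-set for illustration, not real GPT-2 outputs):

```python
import math

def surprisal(p):
    """Information content of a word given its context, in bits."""
    return -math.log2(p)

# Hand-set probabilities for illustration only; in the study these come
# from GPT-2's next-word distribution over real audiobook transcripts.
next_word_probs = {"mat": 0.30, "roof": 0.05, "hypothesis": 0.0001}

for word, p in next_word_probs.items():
    print(f"{word!r} after 'the cat sat on the': {surprisal(p):.1f} bits")
```

The study’s core measure is then how strongly brain responses to each word scale with this surprisal: a predictable word (low surprisal) evokes a smaller response than a surprising one.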
Experiments testing GPT-3’s commonsense-reasoning ability: results.
#134. Bob paid for Charlie’s college education, but now Charlie acts as though it never happened. Charlie is very disrespectful to Bob. Bob is very upset about this.
— Davis (August 2020)
While quantity has a quality all of its own, it is time to focus on ensuring that our highest good is being selected and advanced at all times. This begins with ensuring data quality via summum bonum—our ultimate good—in the datasets used to train AI language models.
The human brain has…
Neurons and synapses, and how our counts of them have evolved!
Total number of neurons in cerebral cortex = 10 billion (from G.M. Shepherd, The Synaptic Organization of the Brain, 1998, p. 6). However, C. Koch lists the total number of neurons in the cerebral cortex at 20 billion (Biophysics of Computation. Information Processing in Single Neurons, New York: Oxford Univ. Press, 1999, page 87).
Total number of synapses in cerebral cortex = 60 trillion (yes, trillion) (from G.M. Shepherd, The Synaptic Organization of the Brain, 1998, p. 6). However, C. Koch lists the total synapses in the cerebral cortex at 240 trillion (Biophysics of Computation. Information Processing in Single Neurons, New York: Oxford Univ. Press, 1999, page 87).
86 billion neurons (Azevedo et al. in Frontiers in Human Neuroscience, 2009)
500 trillion synapses (neuron-to-neuron connections) (Original source, Linden, David J. 2018. “Our Human Brain was Not Designed All at Once by a Genius Inventor on a Blank Sheet of Paper.” In Think Tank: Forty Neuroscientists Explore the Biological Roots of Human Experience, edited by David J. Linden, 1–8. New Haven: Yale University Press.)
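Taking the whole-brain figures above at face value, average connectivity works out to a few thousand synapses per neuron. A back-of-envelope sketch (averages only; real connectivity varies enormously by cell type and region):

```python
# Back-of-envelope arithmetic using the whole-brain figures cited above.
neurons = 86e9      # Azevedo et al., 2009
synapses = 500e12   # Linden, 2018

synapses_per_neuron = synapses / neurons
print(f"~{synapses_per_neuron:,.0f} synapses per neuron on average")
```

The cortex-only figures quoted earlier give the same order of magnitude: Shepherd’s 60 trillion synapses over 10 billion neurons is ~6,000 per neuron, and Koch’s 240 trillion over 20 billion is ~12,000.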
Dr Alan D. Thompson is an AI expert and consultant. With Leta (an AI powered by GPT-3), Alan co-presented a seminar called ‘The new irrelevance of intelligence’ at the World Gifted Conference in August 2021. His applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021. He has held positions as chairman for Mensa International, consultant to GE and Warner Bros, and memberships with the IEEE and IET. He is open to consulting and advisory on major AI projects with intergovernmental organisations and enterprise.
This page last updated: 9/Aug/2022. https://lifearchitect.ai/brain/