Connor Leahy (EleutherAI/GPT-J) interview transcripts

Due to its length, this page was broken by WordPress. You can read the originals in this Google Doc.

Connor Leahy is a co-founder of EleutherAI, the research collective behind GPT-J. These transcripts are archived here for interest. They were generated by AI (using Otter.ai) and are >90% accurate.

Connor/Christoph interview 2021

YouTube video: Connor Leahy (AI researcher at Aleph Alpha & Eleuther.ai) – The Ultimate Interview
Published: 21/Jan/2021
By: Christoph Schuhmann
Featuring: Connor Leahy
Length: 3:24:50 (3h24m50s)

SUMMARY KEYWORDS

ai, humans, good, people, problems, gpt, smarter, rationality, years, model, expect, emotions, build, computers, world, hard, company, research, brain, thinking

0:00 Introduction
3:26 The role of AI (Artificial Intelligence) in our world today
6:32 What will AI look like in 1 or 2 decades?
9:17 What could human-level AI look like?
10:59 Connor explains the reasons he bases his forecast on
14:10 On the progress of AI – text generation abilities in the past few years
18:22 How Connor replicated GPT-2, the OpenAI language model deemed “too dangerous to release”
25:08 Connor’s educational background
29:18 Do you need a degree to work in software engineering or machine learning?
30:34 What would you advise someone who is just starting out and wants to work in AI or software engineering?
32:52 Connor talks about the company he currently works for, Aleph Alpha ( https://aleph-alpha.de/company )
38:12 Connor talks about what he does at Aleph Alpha (part 1)
38:56 Where do AI startups in Europe get funding from?
40:18 How much money do machine learning /software engineers make in Europe?
42:18 How much do you work?
45:06 What are the steps for a startup to become a huge “AI player”?
47:05 Connor talks about what he does at Aleph Alpha (part 2)
49:38 What’s the role of connecting people and project management in AI research compared to programming?
54:35 The importance of trusting in your abilities and selling yourself confidently in the IT-industry
56:57 Are 10 mediocre programmers better than 1 really good programmer?
59:25 What are the most important skills that you apply in your work?
1:01:45 What is the role of learning new things in your job?
1:05:52 What would you advise aspiring programmers who are not very good at social interactions?
1:08:59 What is Eleuther AI?
1:14:50 Which milestones have you already achieved with Eleuther, what is next, and what will you do then?
1:17:40 What is the biggest GPT model Eleuther has trained so far?
1:19:02 When will the new GPT version (with 175 billion parameters) be ready?
1:25:32 GPT-3’s attention mechanism
1:29:20 Couldn’t we just make the model bigger and then use only sparse attention?
1:30:19 How will the release of GPT-3-like language models affect the IT-world & society?
1:32:35 What will be the implications of DALL-E? ( https://openai.com/blog/dall-e/ )
1:37:43 On the ambition to replicate “Learning to Summarize with Human Feedback” ( https://openai.com/blog/learning-to-s… )
1:39:54 On the pace of progress in AI
1:42:20 Teaching AI emotional intelligence
1:47:48 Connor’s predictions for the future
2:01:07 On “Paper-Clip-Maximizers” & Goodhart’s Law
2:09:41 Thoughts on “Human Compatible AI” from Stuart Russell
2:12:44 How AI could manipulate humans to do what it wants
2:14:04 Ideal superintelligent AI would look after humans like loving adults would look after their elderly parents
2:17:55 Christoph on positive views of human nature & the impacts of scarcity and abundance on it
2:24:45 Connor on the importance of deliberately implementing positive human values into AI
2:25:18 DON’T BUILD PAPER-CLIP-MAXIMIZERS!!!
2:25:46 It is possible to build AI that loves humans
2:28:03 What sci-fi gets wrong about AI
2:31:24 AI could easily take over the world by being nice, useful and pleasant :)
2:36:26 We already have a superintelligence: The Economy
2:38:38 Connor talks about his past & his personal life
2:46:19 Finding meaning in life
2:49:08 AI Utopia
2:52:42 Really important questions in life
2:56:37 Rationality
3:02:37 How emotional programs influence our decisions
3:08:30 Mindfulness
3:14:50 Why don’t more people think rigorously about huge topics like happiness, meaning, and mortality?
3:18:30 Personal development
3:21:06 If you want to be a hero, don’t let anyone tell you you can’t



Connor panel interview 2020

YouTube video: AI Alignment & AGI Fire Alarm – Connor Leahy
Published: 2/Nov/2020
By: Machine Learning Street Talk
Featuring: Connor Leahy and discussion panel
Length: 2:04:49 (2h04m49s)

SUMMARY KEYWORDS

intelligence, ai, gpt, humans, utility function, problem, intelligent, alignment, theory, argument, talk, rationality, system, alphago, decision, concept, question, good, function, algorithm

00:00:00 Introduction to AI alignment and AGI fire alarm
00:15:16 Main Show Intro
00:18:38 Different schools of thought on AI safety
00:24:03 What is intelligence?
00:25:48 AI Alignment
00:27:39 Humans don’t have a coherent utility function
00:28:13 Newcomb’s paradox and advanced decision problems
00:34:01 Incentives and behavioural economics
00:37:19 Prisoner’s dilemma
00:40:24 Ayn Rand and game theory in politics and business
00:44:04 Instrumental convergence and orthogonality thesis
00:46:14 Utility functions and the Stop button problem
00:55:24 AI corrigibility – self-alignment
00:56:16 Decision theory and stability / wireheading / robust delegation
00:59:30 Stop button problem
01:00:40 Making the world a better place
01:03:43 Is intelligence a search problem?
01:04:39 Mesa optimisation / humans are misaligned AI
01:06:04 Inner vs outer alignment / faulty reward functions
01:07:31 Large corporations are intelligent and have no stop function
01:10:21 Dutch booking / what is rationality / decision theory
01:16:32 Understanding very powerful AIs
01:18:03 Kolmogorov complexity
01:19:52 GPT-3 – is it intelligent, are humans even intelligent?
01:28:40 Scaling hypothesis
01:29:30 Connor thought DL was dead in 2017
01:37:54 Why is GPT-3 as intelligent as a human
01:44:43 Jeff Hawkins on intelligence as compression and the great lookup table
01:50:28 How is AI ethics related to AI alignment?
01:53:26 Interpretability
01:56:27 Regulation
01:57:54 Intelligence explosion




This page last updated: 17/Apr/2023. https://lifearchitect.ai/connor-transcripts/