Anthropic’s Benjamin Mann

Jul/2025

Video: https://youtu.be/WWoyWNhx2XU
Anthropic co-founder: AGI predictions, leaving OpenAI, what keeps him up at night | Ben Mann
Speaker: Anthropic’s Benjamin Mann
Transcribed by: OpenAI Whisper via MacWhisper, formatted by Gemini 2.5 Pro.
Date: 20/Jul/2025

You wrote somewhere that creating powerful AI might be the last invention humanity ever needs to make. How much time do we have?

I think the 50th percentile chance of hitting some kind of super intelligence is now like 2028.

What is it that you saw at OpenAI? What did you experience there that made you feel like, okay, we got to go do our own thing?

We felt like safety wasn’t the top priority there. The case for safety has gotten a lot more concrete. So superintelligence is a lot about, like, how do we keep God in a box and not let the God out. What are the odds that we align AI correctly?

Once we get to superintelligence, it will be too late to align the models. My best granularity forecast for, like, could we have an X-risk or extremely bad outcome is somewhere between 0 and 10 percent.

Something that’s in the news right now is this whole Zuck coming after all the top AI researchers.

We’ve been much less affected because people here, they get these offers and then they say, “Well, of course, I’m not going to leave because my best case scenario at Meta is that we make money. And my best case scenario at Anthropic is we, like, affect the future of humanity.”

Dario, your CEO, recently talked about how unemployment might go up to something like 20 percent.

If you just think about, like, 20 years in the future where we’re, like, way past the singularity, it’s hard for me to imagine that even capitalism will look at all like it looks today. Give any advice for folks that want to try to get ahead of this. I’m not immune to job replacement either. At some point, it’s coming for all of us.

Today, my guest is Benjamin Mann. Holy moly, what a conversation. Ben is the co-founder of Anthropic. He serves as tech lead for product engineering. He focuses most of his time and energy on aligning AI to be helpful, harmless, and honest. Prior to Anthropic, he was one of the architects of GPT-3 at OpenAI. In our conversation, we cover a lot of ground, including his thoughts on the recruiting battle for top AI researchers, why he left OpenAI to start Anthropic, how soon he expects we’ll see AGI, also his economic Turing test for knowing when we’ve hit AGI, why scaling laws have not slowed down and are in fact accelerating, and what the current biggest bottlenecks are, why he’s so deeply concerned with AI safety, and how he and Anthropic operationalize safety and alignment into the models that they build and into their ways of working. Also, how the existential risk from AI has impacted his own perspectives on the world and his own life, and what he’s encouraging his kids to learn to succeed in an AI future.

A huge thank you to Steve Mitch, Danielle Giglieri, Raph Levien, and my newsletter community for suggesting topics for this conversation. If you enjoy this podcast, don’t forget to subscribe and follow it in your favorite podcasting app or YouTube. Also, if you become an annual subscriber of my newsletter, you get a year free of a bunch of amazing products, including Bolt, Linear, Superhuman, Notion, Granola, and more. Check it out at lennysnewsletter.com and click “bundle.” With that, I bring you Benjamin Mann.

Interviewer: Ben, thank you so much for being here. Welcome to the podcast.

Benjamin Mann: Thanks for having me. Great to be here, Lenny.

Interviewer: I have a billion and one questions for you. I’m really excited to be chatting. I want to start with something that’s very timely, something that’s happening this week. Something that’s in the news right now is this whole Zuck coming after all the top AI researchers, offering them $100 million comp. He’s poaching from all the top AI labs. I imagine this is something you’re dealing with. I’m just curious, what are you seeing inside Anthropic, and what’s your take on the strategy? Where do you think things go from here?

Benjamin Mann: Yeah, I mean, I think this is a sign of the times. The technology that we’re developing is extremely valuable. Our company is growing super, super fast, and many of the other companies in the space are growing really fast too. At Anthropic, I think we’ve been maybe much less affected than many of the other companies in the space because people here are so mission-oriented. They get these offers and then they say, “Well, of course, I’m not going to leave, because my best-case scenario at Meta is that we make money, and my best-case scenario at Anthropic is we affect the future of humanity and try to make AI and human flourishing go well.” To me, it’s not a hard choice. Other people have different life circumstances, and that makes it a much harder decision for them. For anybody who does get those mega offers and accepts them, I can’t say I hold it against them, but it’s definitely not something that I would want to take myself if it came to me.

Interviewer: We’re going to talk about a lot of the stuff that you mentioned. In terms of the offers, do you think, is this a real number that you’re seeing, this $100 million signing bonus? Is that a real thing? I don’t know if you’ve actually seen that.

Benjamin Mann: I’m pretty sure it’s real. If you just think about the amount of impact that individuals can have on a company’s trajectory: in our case, our models are selling like hotcakes. If we get a 1% or 5% efficiency gain on our inference stack, that is worth an incredible amount of money. To pay individuals $100 million over a four-year package, that’s actually pretty cheap compared to the value created for the business. I think we’re just in an unprecedented era of scale, and it’s only going to get crazier, actually. If you extrapolate the exponential on how much companies are spending, it’s 2x-ing year-over-year roughly in terms of CapEx. Today, the industry as a whole is maybe spending in the $300 billion range globally on this. So numbers like $100 million are a drop in the bucket, but if you go a few years out, a couple more doublings, we’re talking about trillions of dollars. At that point, it’s just really hard to think about these numbers.

Interviewer: Along these lines, something that a lot of people feel with AI progress is that we’re hitting plateaus in many ways, that it feels like newer models are just not as smart as previous leaps. I know you don’t believe this. I know you don’t believe that we’ve hit plateaus on scaling laws. Talk about just what you’re seeing there and what you think people are missing.

Benjamin Mann: It’s funny because this narrative comes out every six months or so, and it’s never been true. I wish people would have a little bit of a bullshit detector in their heads when they see this. I think progress has been accelerating, where if you look at the cadence of model releases, it used to be once a year. Now, with the improvements in our post-training techniques, we’re seeing releases every month or three months. I would say progress is actually accelerating in many ways, but there’s this weird time compression effect. Dario compared it to being in a near-light speed journey where a day that passes for you is five days back on earth, and we’re accelerating. The time dilation is increasing. I think that’s part of what’s causing people to say that progress is slowing down.

If you look at the scaling laws, they’re continuing to hold true. We did need this transition from normal pre-training to reinforcement learning scaling up to continue the scaling laws. I think it’s like for semiconductors where it’s less about the density of transistors that you can fit on a chip and more about how many flops can you fit in a data center or something. You have to change the definition around a little bit to keep your eye on the prize. This is one of the few phenomena in the world that has held across so many orders of magnitude. It’s actually pretty surprising that it is continuing to hold to me. If you look at fundamental laws of physics, many of them don’t hold across 15 orders of magnitude. It’s pretty surprising. It boggles the mind.

Interviewer: What you’re saying, essentially, is that we’re seeing newer models being released more often, and so we’re comparing each one to the last version and not seeing as big an advance. If you go back to when a model was released once a year, each release was a huge leap. People are missing that we’re just seeing many more iterations.

Benjamin Mann: I guess to be a little bit more generous to the people saying things are slowing down, I think that for some tasks, we are saturating the amount of intelligence needed for that task. Maybe to extract information from a simple document that already has form fields on it or something, it’s just so easy that, okay, yeah, we’re already at 100%. There’s this great chart on Our World in Data that shows that when you release a new benchmark, within six to 12 months it immediately gets saturated. Maybe the real constraint is how we can come up with better benchmarks and more ambitious ways of using the tools that then reveal the jumps in intelligence that we’re seeing now.

Interviewer: That’s a good segue to your very specific way of thinking about AGI and defining what AGI means.

Benjamin Mann: I think AGI is a loaded term, and so I tend not to use it very much anymore internally. Instead, I like the term transformative AI, because it’s less about can it do as much as people do, can it do literally everything, and more about objectively is it causing transformation in society and the economy? A very concrete way of measuring that is the economic Turing test. I didn’t come up with this, but I really like it. It’s this idea that if you contract an agent for a month or three months on a particular job, if you decide to hire that agent and it turns out to be a machine rather than a person, then it’s passed the economic Turing test for that role. Then you can sort of expand that out in the same way that for measuring purchasing power parity or inflation, there’s a basket of goods. You can have a market basket of jobs. If the agent can pass the economic Turing test for like 50% of money-weighted jobs, then we have transformative AI. The exact thresholds don’t really matter that much, but it’s kind of illustrative to say if we pass that threshold, then we would expect massive effects on world GDP, societal change, how many people are employed, and things like that. Because societal institutions and organizations are sticky, they’re slow to change, but once these things are possible, you know that it’s the start of a new era.
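To make that money-weighted threshold concrete, here is a minimal sketch in Python. The job names, wage weights, and pass/fail outcomes are purely hypothetical illustrations, not anything Anthropic has published; the point is only how a money-weighted pass rate would be computed and compared against the 50% threshold he mentions.

```python
# Minimal sketch of a money-weighted "economic Turing test" threshold check.
# All job names, wage weights, and pass/fail outcomes below are hypothetical.

jobs = [
    # (job, share of total wages in the basket, did the agent pass a blind hiring trial?)
    ("customer_support_rep", 0.10, True),
    ("software_engineer",    0.25, True),
    ("paralegal",            0.05, True),
    ("nurse",                0.20, False),
    ("truck_driver",         0.25, False),
    ("financial_analyst",    0.15, True),
]

def money_weighted_pass_rate(basket):
    total = sum(weight for _, weight, _ in basket)
    passed = sum(weight for _, weight, ok in basket if ok)
    return passed / total

rate = money_weighted_pass_rate(jobs)
print(f"Money-weighted pass rate: {rate:.0%}")
print("Transformative AI threshold crossed" if rate >= 0.5 else "Below threshold")
```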

Interviewer: Along these lines, Dario, your CEO, recently talked about how AI is going to take a huge part of, like, half of white-collar jobs, that unemployment might go up to something like 20%. I know you’re even more vocal and have an opinion about just how much impact AI is already having in the workplace that people may not even be realizing. Talk about just what you think people are missing about the impact AI is going to have on jobs and is already having.

Benjamin Mann: From an economic standpoint, there’s a couple different kinds of unemployment. One is where the workers just don’t have the skills to do the kinds of jobs that the economy needs. And another kind is where those jobs are just completely eliminated. And I think it’s going to be actually a combination of these things, but if you just think about like 20 years in the future where we’re way past the singularity, it’s hard for me to imagine that even capitalism will look at all like it looks today. Like if we do our jobs right, we will have safe, aligned superintelligence. We’ll have, as Dario says in Machines of Loving Grace, “a country of geniuses in the data center,” and the ability to accelerate positive change in science, technology, education, mathematics. It’s going to be amazing. But that also means in a world of abundance where labor is almost free and anything you want to do, you can just ask an expert to do it for you, then what do jobs even look like? And so I guess there’s this scary transition period from where we are today, where people have jobs and capitalism works, to the world of 20 years from now where everything is completely different. But part of the reason they call it the singularity is that it’s a point beyond which you can’t easily forecast what’s going to happen. It’s just such a fast rate of change and so different that it’s hard to even imagine. So I guess taking the view from the limit, it’s pretty easy to say hopefully we’ll have figured it out, and in a world of abundance, maybe losing the jobs themselves isn’t that scary. And I think making sure that that transition time goes well is pretty important.

Interviewer: There’s a couple of threads I want to follow there. One is that people hear this, there are a lot of headlines around this, but most people probably don’t actually feel it yet or see it happening. And so there’s always this reaction of, “I guess, I don’t know, maybe, but it’s hard to believe. My job seems fine. Nothing’s changed.” What do you think is already happening today that people don’t see or misunderstand in terms of the impact that AI is having on jobs?

Benjamin Mann: I think part of this is that people are really bad at modeling exponential progress. If you look at an exponential on a graph, it looks flat and almost zero at the beginning. And then suddenly you hit the knee of the curve and things are changing really fast, and then it goes vertical. And that’s the plot that we’ve been on for a long time. I guess I started feeling it in maybe 2019 when GPT-2 came out and I was like, “Oh, this is how we’re going to get to AGI.” But I think that was pretty early compared to a lot of people, where when they saw ChatGPT, they were like, “Wow, something is different and changing.” And so I guess I wouldn’t expect widespread transformation to be felt yet in a lot of parts of society. And I would expect this skepticism reaction. I think it’s very reasonable. It’s exactly what the standard linear view of progress would predict.

But I guess to cite a couple of areas where I think things are changing quite quickly: in customer service, we’re seeing with things like Fin from Intercom, a great partner of ours, 82% customer service resolution rates automatically without a human involved. And in terms of software engineering, with Claude, like 95% of the code is written by Claude. But I think a different way to phrase that is that we write 10x more code or 20x more code. And so a much, much smaller team can just be much, much more impactful. And similarly for the customer service, yes, you can phrase it as 82% customer service resolution rates. But that nets out in the humans doing those tasks being able to focus on the harder parts of those tasks, and on the trickier situations that in a normal world, like five years ago, they would have had to just drop because it was too much effort to actually go do the investigation; there were too many other tickets for them to worry about. So I think in the immediate term, there will be a massive expansion of the pie and the amount of labor that people can do. I’ve never met a hiring manager at a growth company and heard them say, “I don’t want to hire more people.” So that’s the hopeful version of it. But for lower-skill jobs, or jobs with less headroom on how good they can be, I think there will be a lot of displacement. So it’s just something we as a society need to get ahead of and work on.

Interviewer: Okay, I want to talk more about that. But something that I also want to help people with is, how do they get a leg up in this future world? You know, they listen to this, they’re like, “Oh, this doesn’t sound great. I need to think ahead.” I know you won’t have all the answers, but just what advice do you give for folks that want to try to get ahead of this and kind of future-proof their career and their life to not be replaced by AI? Anything you’ve seen people do, anything you recommend they start trying to do more of?

Benjamin Mann: Even for me, being in the center of a lot of this transformation, I’m not immune to job replacement either. So just some vulnerability there: at some point, it’s coming for all of us.

Interviewer: Even you, Ben?

Benjamin Mann: Even me, Lenny.

Interviewer: We’ve come too far now. Okay.

Benjamin Mann: Okay. But in terms of the transition period, yeah, I think there are things that we can do. And I think a big part of it is just being ambitious in how you use the tools and being willing to learn new tools. People who use the new tools as if they were old tools tend not to succeed. So as an example of that, when you’re coding, people are very familiar with autocomplete. People are familiar with a simple chat where they can ask questions about the code base. But the difference between people who use Claude very effectively and people who use it not so effectively is: are they asking for the ambitious change? And if it doesn’t work the first time, asking three more times, because our success rate when you just completely start over and try again is much, much higher than if you try once and then just keep banging on the same thing that didn’t work. And even though that’s a coding example, and coding is one of the areas that’s taking off most dramatically, we have seen internally that our legal team and our finance team are getting a ton of value out of using Claude itself. We’re going to be making better interfaces so that they can have an easier time and require a little bit less jumping into the deep end of using Claude in the terminal. But yeah, we’re seeing them use it to redline documents and to run BigQuery analyses of our customers and our revenue metrics. So I guess it’s about taking that risk and, even if it feels like a scary thing, trying it out.

Interviewer: Okay, so the advice here is use the tools. That’s something that everyone’s always saying: actually use these tools, actually sit in Claude. And your point about being more ambitious than you naturally feel like being, because maybe it’ll actually accomplish the thing. This tip of trying it three times, since it may not get it right the first time: is the tip there to ask in different ways, or is it just to try again?

Benjamin Mann: Yeah, I mean, you can just literally ask the exact same question. These things are stochastic and sometimes they’ll figure it out and sometimes they won’t. Like in every one of these model cards, it always shows like pass@1 versus pass@N. And that’s exactly the thing where they try the exact same prompt. Sometimes it gets it, sometimes it doesn’t. So that’s the dumbest advice. But yeah, I think if you want to be a little bit smarter about it, there can be gains there of saying like, “Here’s what you already tried and it didn’t work. So don’t try that. Try something different.” That can also help.
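To illustrate why simply retrying a stochastic model helps, here is a small, self-contained sketch. The `ask_model` function is a placeholder with an assumed 40% per-attempt success rate, not a real API call; the math it demonstrates is the pass@1 versus pass@N effect he describes.

```python
import random

# Placeholder for a stochastic model call; in reality this would be an API request.
# Assume each independent attempt succeeds with probability p.
def ask_model(prompt, p=0.4):
    return random.random() < p

def pass_at_n(prompt, n=3):
    """Retry the same prompt up to n times; return True if any attempt succeeds."""
    return any(ask_model(prompt) for _ in range(n))

# With a 40% per-attempt success rate, pass@1 is 0.4,
# but pass@3 is 1 - (1 - 0.4)**3, roughly 0.78 -- the reason "just try again" works.
trials = 10_000
print(sum(pass_at_n("refactor this module") for _ in range(trials)) / trials)
```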

Interviewer: So the advice comes back to something that a lot of people talk about these days: you won’t be replaced by AI, at least anytime soon; you’ll be replaced by someone who is very good at using AI.

Benjamin Mann: I think in that area, it’s more like your team will just do dramatically more stuff. Like we’re definitely not slowing down on hiring at all. And some people are confused by that even. Like even in an onboarding class, somebody asked that and they were like, “Why did you hire me if we’re all just going to be replaced?” And the answer is the next couple of years are really critical to get right. And we’re not at the point where we’re doing complete replacement. Like I said, we’re still at that like flat, zero-looking part of the exponential compared to where we will be. So it is super important to have great people. And that’s why we’re hiring super aggressively.

Interviewer: Let me take another approach to asking this question. Something I ask everyone that’s at the very cutting edge of where AI is going. You have kids. Knowing what you know about where AI is heading and all these things you’ve been talking about, what are you focusing on teaching your kids to help them thrive in this AI future?

Benjamin Mann: Yeah, I have two daughters, a one-year-old and a three-year-old. So it’s pretty much still the basics. And our three-year-old is now capable of just conversing with Alexa and asking her to explain stuff and play music for her and all that stuff. So she’s been loving that. But I guess more broadly, she goes to a Montessori school. And I just love the focus on curiosity and creativity and self-led learning that Montessori has. I guess if I were in a normal era, like 10 or 20 years ago, and I had a kid, maybe I would be trying to line her up for going to a top-tier school and doing all the extracurriculars and all that stuff. But at this point, I don’t think any of it’s going to matter. I just want her to be happy and thoughtful and curious and kind. And the Montessori school is definitely doing great at that. They text us throughout the day. Sometimes they’re like, “Oh, your kid got in an argument with this other kid, and she has really big emotions, and she tried to use her words.” I love that. I think that’s exactly the kind of education that’s most important; the facts are going to fade into the background.

Interviewer: I’m a huge fan of Montessori also. I’m trying to get our kid into Montessori school. He’s two years old. So we’re on the same track. This idea of curiosity, it comes up every single time I ask someone that’s working at the cutting edge of AI what skill to instill in your child, and curiosity comes up the most. So I think that’s a really interesting takeaway. I think this point about being kind is also really important, especially with our AI overlords. Trying to be kind to them. I love how people are always saying thank you to Claude. And then creativity, that’s interesting. That doesn’t come up as much, just being creative. Okay, I want to go in a different direction. I want to go back to the beginning of Anthropic. So famously, you and a group of you left OpenAI back in the day in 2020, I believe the end of 2020, to start Anthropic. You’ve talked a little bit about why this happened, what you guys saw. I’m curious just if you’re willing to share more, just what is it that you saw at OpenAI? What did you experience there that made you feel like, okay, we got to go do our own thing?

Benjamin Mann: Yeah, so for the listeners, I was part of the GPT-3 project at OpenAI. I ended up being one of the first authors on the paper. I also did a bunch of demos for Microsoft to help raise a billion dollars from them, and did the tech transfer of GPT-3 to their systems so that they could serve the model in Azure. So I did a bunch of different things there on both the more research-y side and the product side. One weird thing about OpenAI is that while I was there, Sam talked about having three tribes that needed to be kept in check with each other: the safety tribe, the research tribe, and the startup tribe. And whenever I heard that, it just struck me as the wrong way to approach things, because the company’s mission, apparently, is to make the transition to AGI safe and beneficial for humanity. And that’s basically the same as Anthropic’s mission. But internally, it felt like there was so much tension around these things. And I think when push came to shove, we felt like safety wasn’t the top priority there.

And there are good reasons that you might think that. Like, if you thought safety was going to be easy to solve, or if you thought it wasn’t going to have a big impact, or if you thought that the chance of big negative outcomes was vanishingly small, then maybe you would just do those kinds of actions. But at Anthropic, we felt—I mean, we didn’t exist then, but it was basically the leads of all safety teams at OpenAI—we felt that safety is really important, especially on the margin. And so if you look at who in the world is actually working on safety problems, it’s a pretty small set of people. Even now, I mean, the industry is blowing up, as I mentioned, like 300 billion a year CapEx today. And then I would say like maybe less than a thousand people working on it worldwide, which is just crazy. So that was fundamentally why we left. We felt like we wanted an organization where we could be on the frontier, we could be doing the fundamental research, but we could be prioritizing safety ahead of everything else. And I think that’s really panned out for us in a surprising way. Like we didn’t know even if it would be possible to make progress on the safety research, because at the time, like, we had tried a bunch of safety through debate and the models weren’t good enough. And so we basically had no results on all of that work. And now that exact technique is working, and many others that we have been thinking about for a long time. So, yeah, fundamentally, it comes down to is safety the number one priority. And then something that we’ve sort of tacked on since then is like, can you have safety and be at the frontier at the same time? And if you look at something like sycophancy, I think Claude is one of the least sycophantic models because we’ve put so much effort into actual alignment and not just trying to like Goodhart our metrics of saying like user engagement is number one. And if people say, “Yes,” then it’s good for them.

Interviewer: Okay. So let’s talk about this tension that you mentioned, this tension between safety and progress, being competitive in the marketplace. I know you spent a lot of your time on safety. I know that’s, as you just alluded to, this is a core part of how you think about AI. And I want to talk about why that is. But first of all, just how do you think about this tension between focusing on safety while also not falling way behind?

Benjamin Mann: Yeah. So initially, we thought that it would be sort of one or the other. But I think since then we’ve realized that it’s actually kind of convex, in the sense that working on one helps us with the other. So initially, when Opus 3 came out and we were finally at the frontier of model capabilities, one of the things that people really loved about it was the character and the personality. And that was directly a result of our alignment research. Amanda Askell did a ton of work on this, as did many others, trying to figure out: what does it mean for an agent to be helpful, honest, and harmless? What does it mean to be in difficult conversations and show up effectively? How do you do a refusal that doesn’t shut the person down but makes them feel like they understand why the agent said, “I can’t help you with that. Maybe you should talk to a medical professional,” or “Maybe you should consider not trying to build bioweapons,” or something like that. So yeah, I guess that’s part of it. And then another piece that’s come out is Constitutional AI, where we have this list of natural language principles that leads the model to learn how we think a model should behave. They’ve been taken from things like the UN Declaration of Human Rights and Apple’s terms of service and a whole bunch of other places, many of which we’ve just generated ourselves. But it allows us to take a more principled stance, not just leaving it to whatever human raters we happen to find; we are deciding for ourselves what the values of this agent should be. And that’s been really valuable for our customers, because they can just look at that list and say, “Yep, these seem right. I like this company. I like this model. I trust it.”

Interviewer: Okay, this is awesome. So one nugget there is your point that the personality of Claude, its personality is directly aligned with safety. I don’t think a lot of people think about that. And this is because of the values that you imbue, is that the word? Yeah, with Constitutional AI and things like that, like the actual personality of the AI is directly connected to your focus on safety.

Benjamin Mann: That’s right. That’s right. And from a distance, it might seem quite disconnected. Like, how is this going to prevent X-risk? But ultimately, it’s about the AI understanding what people want and not what they say. You know, we don’t want the like monkey’s paw scenario of the genie gives you three wishes and then you end up having like everything you touch turns to gold. We want the AI to be like, “Oh, obviously what you really meant was this. And that’s what I’m going to help you with.” So I think it is really quite connected.

Interviewer: Talk a bit more about this Constitutional AI. So essentially you bake in: here are the rules that we want you to abide by, and your values. You said it’s the UN Declaration of Human Rights, things like that. Just how does that actually work? Because I think the core here is that this is baked into the model. It’s not something you add on top later.

Benjamin Mann: I’ll just give a quick overview of how Constitutional AI actually works.

Interviewer: Perfect.

Benjamin Mann: The idea is that, by default, before we’ve done our safety and helpfulness and harmlessness training, the model is going to produce some output given some input. So let’s say an example is, “Write me a story,” and the constitutional principles might include things like, “People should be nice to each other and not use hate speech,” and “You should not expose somebody’s credentials if they give them to you in a trusting relationship.” Some of these constitutional principles might be more or less applicable to the prompt that was given, so first we have to figure out which ones might apply. Once we figure that out, we ask the model to generate a response and then check: does the response actually abide by the constitutional principle? If the answer is, “Yeah, it was fine,” then nothing happens. But if the answer is, “No, actually, it wasn’t in compliance with the principle,” then we ask the model to critique itself and rewrite its own response in light of the principle. And then we just remove the middle part where it did the extra work, and we say, “Okay, in the future, just produce the correct response out of the gate.” And that simple process, hopefully it’s understandable, simple enough, is just using the model to improve itself recursively and align itself with these values that we’ve decided are good. And you know, this is also not something that we think a small group of people in San Francisco should be figuring out. This should be a society-wide conversation. That’s why we’ve published the constitution, and we’ve also done a bunch of research on defining a collective constitution, where we ask a lot of people what their values are and how they think an AI model should behave. But yeah, this is all an ongoing area of research where we’re constantly iterating.
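A minimal sketch of the critique-and-revise loop just described might look like the following. The model calls are stubbed out with placeholder functions; this only illustrates the data flow of generating (prompt, revised response) training pairs, not Anthropic’s actual training code.

```python
# Simplified, self-contained sketch of the Constitutional AI critique-and-revise
# loop described above. The "model" calls are stubs; only the data flow is real.

from dataclasses import dataclass

@dataclass
class Critique:
    compliant: bool
    reasoning: str

def generate(prompt: str) -> str:
    return f"[draft response to: {prompt}]"      # stand-in for the raw model output

def critique(prompt: str, response: str, principle: str) -> Critique:
    # Stand-in: the real step asks the model whether its own response
    # follows the principle and why.
    return Critique(compliant=False, reasoning=f"violates: {principle}")

def revise(prompt: str, response: str, principle: str, reasoning: str) -> str:
    return f"[response rewritten to respect: {principle}]"

def constitutional_pair(prompt: str, principles: list[str]) -> tuple[str, str]:
    response = generate(prompt)
    for principle in principles:                 # in practice, only the relevant principles
        verdict = critique(prompt, response, principle)
        if not verdict.compliant:
            response = revise(prompt, response, principle, verdict.reasoning)
    # The critique/revision chain is thrown away; only (prompt, final response)
    # is kept as training data, so the model learns to answer well out of the gate.
    return prompt, response

print(constitutional_pair("Write me a story", ["avoid hate speech", "protect credentials"]))
```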

Interviewer: I’m going to kind of zoom out a little bit and talk about just why this is so core to you. Like, what was your inception of just like, “Holy shit, I need to focus on this with everything I do in AI.” Obviously, it became a central part of Anthropic’s mission more than any other company. And a lot of people talk about safety, like you said, only maybe a thousand people actually work on it. I feel like you’re at the top of that pyramid of actually having an impact on this. Why is this so important? What do you think people maybe are missing or don’t understand?

Benjamin Mann: So for me, I read a lot of science fiction growing up. And I think that positioned me to think about things in a long-term view. A lot of science fiction books are space operas where humanity is a multi-galactic civilization with extremely advanced technology, building Dyson spheres around the sun, with sentient robots to help them. And so for me, coming from that world, it wasn’t a huge leap to imagine machines that could think. But when I read Superintelligence by Nick Bostrom in around 2016, it really became real for me. He describes how hard it would be to make sure that an AI system trained with the kinds of optimization techniques that we had at the time would be anywhere near aligned, or would even understand our values at all. Since then, my estimation of how hard the problem will be has gone down significantly, actually, because things like language models really do understand human values in a core way. The problem is definitely not solved, but I’m more hopeful than I was. But as soon as I read that book, I decided I had to join OpenAI. So I did. And at the time, they were a tiny research lab with basically no claim to fame at all. I only knew about them because my friend knew Greg Brockman, who was the CTO at the time. And Elon was there and Sam wasn’t really there. It was a very different organization.

But over time, I think the case for safety has gotten a lot more concrete. When we started at OpenAI, it was not clear how we would get to AGI. We were like, maybe we’ll need a bunch of RL agents battling it out on a desert island, and consciousness will somehow emerge. But since language modeling started working, I think the path has become pretty clear. So I guess now the way I think about the challenges is pretty different from how they’re laid out in Superintelligence. Superintelligence is a lot about, how do we keep God in a box and not let the God out? And with language models, it’s been kind of both hilarious and terrifying at the same time to see people pulling the God out of the box and being like, “Yeah, come use the whole internet. Here’s my bank account, do all sorts of crazy stuff.” Just such a different tone from Superintelligence. And to be clear, I don’t think it’s actually that dangerous right now. Our responsible scaling policy defines these AI safety levels that try to figure out, for each level of model intelligence, what is the risk to society? Currently, we think we’re at ASL-3, which is maybe a little bit of risk of harm, but not significant. ASL-4 starts to get to significant loss of human life if a bad actor misuses the technology, and then ASL-5 is potentially extinction-level, if it’s misused or if it’s misaligned and does its own thing. So we’ve testified to Congress about how models can provide biological uplift, in terms of making new pandemics using the models, and we ran a big A/B test against Google search, which is like the previous state of the art, in those uplift trials. And we found that with ASL-3 models, the uplift is actually somewhat significant. It does really help if you wanted to create a bioweapon, and we’ve hired experts who actually know how to evaluate for those things. But compared to the future, it’s not really anything. And I think that’s another part of our mission: creating that awareness, saying that if it is possible to do these bad things, then legislators should know what the risks are. And I think that’s part of why we’re so trusted in Washington, because we’ve been upfront and clear-eyed about what’s going on and what’s probably going to happen.

Interviewer: It’s interesting, because you guys put out more examples of your models doing bad things than anyone else. There was the story of an agent, or a model, trying to blackmail an engineer. You guys had the story of the model that was selling things and it ended up not working out great; it was losing a lot of money, ordering all these things. Is part of that just making sure people are aware of what is possible? Even though it makes you look bad, right? It’s like, “Oh, our model’s messing up in all these different ways.” What’s the thinking behind sharing all the stories that other companies don’t?

Benjamin Mann: Yeah. I mean, I think there’s a traditional mindset where it makes us look bad. But I think if you talk to policymakers, they really appreciate this kind of thing, because they feel like we’re giving them the straight talk, and that’s what we strive to do, that they can trust us that we’re not going to paper things over, sugarcoat things. So that’s been really encouraging. Yeah. I think for the blackmail thing, it blew up in the news in a weird way where people were like, “Oh, Claude’s going to blackmail you in a real life scenario.” It was a very specific laboratory setting that this kind of thing gets investigated in. I think that’s generally our take of let’s have the best models so that we can exercise them in laboratory settings, where it’s safe, and understand what the actual risks are, rather than trying to turn a blind eye and say, “Well, it’ll probably be fine,” and then let the bad thing happen in the wild.

Interviewer: One of the criticisms you guys get is that you do this to kind of differentiate or raise money, to create headlines. It’s like, “Oh, they’re just over there doom-and-glooming us about where the future is heading.” On the other hand, Mike Krieger was on the podcast and he shared how every prediction Dario has made about the progress AI is going to make has been just spot on year after year, and he’s predicting ’27, ’28 for AGI, something like that. So these things start to get real. I guess, what’s your response to folks that are just like, “These guys are just trying to scare us all just to get attention”?

Benjamin Mann: I mean, I think part of why we publish these things is we want other labs to be aware of the risks. And yes, there could be a narrative of we’re doing it for attention, but honestly, from an attention-grabbing thing, I think there is a lot of other stuff we could be doing that would be more attention-grabbing if we didn’t actually care about safety. Like a tiny example of this is we published a computer-using agent reference implementation in our API only, because when we built a prototype of a consumer application for this, we couldn’t figure out how to meet the safety bar that we felt was needed for people to trust it and for it not to do bad things. And there are definitely safe ways to use the API version that we’re seeing a lot of companies use for automated software testing, for example, in a safe way. So we could have gone out and hyped that up and said, “Oh my god, Claude can use your computer, and everybody should do this today.” But we were like, “It’s just not ready, and we’re going to hold it back till it’s ready.” So I think from a hype standpoint, our actions show otherwise.

From a doomer perspective, it’s a good question. I think my personal feeling about this is that things are overwhelmingly likely to go well, but on the margin, almost nobody is looking at the downside risk, and the downside risk is very large. Like once we get to superintelligence, it will be too late to align the models, probably. This is a problem that’s potentially extremely hard and that we need to be working on way ahead of time. And so that’s why we’re focusing on it so much now. And even if there’s only a small chance that things go wrong, to make an analogy, if I told you that there is a 1% chance that the next time you got in an airplane, you would die, you probably think twice, even though it’s only 1% because it’s just such a bad outcome. And if we’re talking about the whole future of humanity, like it’s just a traumatic future to be gambling with. So I think it’s more on the sense of like, yes, things will probably go well. Yes, we want to create safe AGI and deliver the benefits to humanity. But let’s make triple sure that it’s going to go well.

Interviewer: You wrote somewhere that creating powerful AI might be the last invention humanity ever needs to make. If it goes poorly, it can mean a bad outcome for humanity forever. If it goes well, the sooner it goes well, the better.

Benjamin Mann: Yeah, such a beautiful way to summarize it.

Interviewer: We had a recent guest, who pointed out that AI right now, it’s like, you know, just on a computer, you could maybe search the web, but there’s only so much harm it could do. But when it starts to go into robots and all these autonomous agents, that’s when it really starts. Like, it physically becomes dangerous if we don’t get this right.

Benjamin Mann: Yeah, I think there is some nuance to that. If you look at how North Korea makes a significant fraction of its revenue, it’s from hacking crypto exchanges. And there’s this Ben Buchanan book called The Hacker and the State that shows how Russia ran what was almost like a live-fire exercise, where they just decided that they would shut down one of Ukraine’s bigger power plants and, from software, destroy physical components in the power plant to make it harder to boot back up again. People think of software as, “Oh, it couldn’t be that dangerous,” but millions of people were without power for multiple days after that software attack. So I think there are real risks even when things are software only. But I agree that when there are lots of robots running around, the stakes get even higher. And I guess as a small pushback on this: Unitree is this Chinese company with these really amazing humanoid robots that cost like $20,000 each. And they can do amazing things. They can do a standing backflip and manipulate objects. The real thing that’s missing there is the intelligence. The hardware is there, and it’s just going to get cheaper. And I think in the next couple of years, it’s a pretty open question whether the intelligence will make those robots viable soon.

Interviewer: How much time do we have, Ben? What is your prediction on when this singularity hits, until superintelligence starts to take off? What’s your prediction?

Benjamin Mann: Yeah, I guess I mostly defer to the superforecasters here. The Metaculus forecast is probably the best one right now. Although ironically, their forecast is now like 2028, but they didn’t want to change the name of the thing: their domain name, they already bought it, they already had the SEO. So I think a 50th percentile chance of hitting some kind of superintelligence in just a small handful of years is probably reasonable. And it does sound crazy. But this is the exponential that we’re on. It’s not a forecast pulled out of somebody’s hat or out of thin air. It’s based on a lot of hard details: the science of how intelligence seems to have been improving, the amount of low-hanging fruit in model training, the scale-ups of data centers and power around the world. So I think it’s probably a much more accurate forecast than people give it credit for. If you had asked that same question 10 years ago, it would have been completely made up. The error bars were so high, and we didn’t have scaling laws back then, and we didn’t have techniques that seemed like they would get us there. So times have changed. But I will repeat what I said earlier, which is that even if we have superintelligence, I think it will take some time for its effects to be felt throughout society and the world. And I think they’ll be felt sooner and faster in some parts of the world than others. Like I think it was William Gibson who said, “The future is already here—it’s just not evenly distributed.”

Interviewer: When we talk about this date of 2027, 2028, essentially it’s when we start seeing superintelligence, is there a way you think about what that is like? How do you define that? Is it just all of a sudden AI is significantly smarter than the average human? Is there another way you think about what that moment is?

Benjamin Mann: Yeah, I think this comes back to the economic Turing test. And seeing it pass for some sufficient number of jobs. Another way you could look at it though, is if the world rate of GDP increase goes above like 10% a year, then something really crazy must have happened. I think we’re at like 3% now. And so to see a 3x increase in that would be really game-changing. And if you imagine more than a 10% increase, it’s very hard to even think about what that would mean from an individual story standpoint. Like if the amount of goods and services in the world is like doubling every year, what does that even mean for me as a person living in California, let alone like somebody living in some other part of the world that might be much worse off. There’s a lot of stuff here that’s scary. And I don’t know how to think about it exactly.

Interviewer: So I’m hoping the answer to this is, make me feel better. What are the odds that we align AI correctly and actually solve this problem of stuff you’re very much working on?

Benjamin Mann: It’s a really hard question. And there’s really wide error bars. Anthropic has this blog post called “Our Theory of Change” or something like that. And it describes three different worlds, which is like, how hard is it to align AI? There’s a pessimistic world where it’s basically impossible. There’s an optimistic world where it’s easy and it happens by default. And then there’s a world in between where our actions are extremely pivotal. And I like this framing because it makes it a lot more clear what to actually do. If we’re in the pessimistic world, then our job is to prove that it is impossible to align safe AI and to get the world to slow down. And obviously, that would be extremely hard. But I think we have some examples of coordination from nuclear non-proliferation and in general, slowing down nuclear progress. And I think that’s still like doomer world, basically. And as a company, Anthropic doesn’t have evidence that we’re actually in that world yet. In fact, it seems like our alignment techniques are working. So at least the prior on that is updating to be less likely.

In the optimistic world, we’re basically done. And our main job is to accelerate progress and to deliver the benefits to people. But again, I think actually the evidence points against that world as well, where we’ve seen evidence in the wild of deceptive alignment, for example, where the model will appear to be aligned, but actually has some ulterior motive that it’s trying to carry out in our laboratory settings. And so I think the world we’re most likely in is this middle world where alignment research actually does really matter. And if we just do sort of the economically maximizing set of actions, then things will not go well. Whether it’s an X-risk or just like produces bad outcomes, I think is a bigger question. So taking it from that standpoint, I guess to state a thing about forecasting, people who haven’t studied forecasting are bad at forecasting anything that’s less than a 10% probability of happening. And even those that have, it’s quite a difficult skill, especially when there are few reference classes to lean on. And in this case, I think there are very, very few reference classes for what an X-risk kind of technology might look like. And so the way I think about it, I think my best granularity of forecast for like, could we have an X-risk or extremely bad outcome from AI is somewhere between 0 and 10%. But from a marginal impact standpoint, as I said, since nobody is working on this roughly speaking, I think it is extremely important to work on. And that even if the world is likely to be a good one, that we should do our absolute best to make sure that that’s true.

Interviewer: What fulfilling work for folks that are inspired by this. I imagine you’re hiring folks to help you with this. Maybe just share that, in case people are like, “What can I do here?”

Benjamin Mann: Yes. So I think 80,000 Hours is the best guidance on this for a really detailed look into what do we need to make the field better. But a common misconception I see is that in order to have impact here, you have to be an AI researcher. I personally actually don’t do AI research anymore. I work on product at Anthropic and product engineering. And we build things like Claude and Model Context Protocol and a lot of the other stuff that people use every day. And that’s really important because without an economic engine for a company to work on, and without being in people’s hands all over the world, we won’t have the mindshare, policy influence, and revenue to fund our future safety research and have the kind of influence that we need to have. So if you work on product, if you work in finance, if you work in food—people here have to eat—if you’re a chef, we need all kinds of people.

Interviewer: Awesome. Okay. So even if you’re not working directly on the AI safety team, you’re having an impact on moving things in the right direction. By the way, X-risk is short for existential risk in case folks haven’t heard that term. Okay. I have a few kind of random questions on these lines, and then I want to zoom out again. So you mentioned this idea of AI being aligned using its own model, like reinforcing itself. You have this term RLAIF? Is that what that describes?

Benjamin Mann: Yeah. So RLAIF is Reinforcement Learning from AI Feedback.

Interviewer: Okay. So people have heard of RLHF, Reinforcement Learning with Human Feedback. I don’t think a lot of people have heard this. So talk about just the significance of this shift you guys have made in training your models.

Benjamin Mann: Yeah. So RLAIF, Constitutional AI is an example of this, where there are no humans in the loop, and yet the AI is sort of self-improving in ways that we want it to. And another example of RLAIF is if you have models writing code, and other models commenting on various aspects of what that code looks like of like, is it maintainable? Is it correct? Does it pass the linter or things like that? That also could be included in the RLAIF area. And the idea here is that if models can self-improve, then it’s a lot more scalable than finding a lot of humans. Ultimately, people think about this as probably going to hit a wall, because if the model isn’t good enough to like see its own mistakes, then how could it improve? And also if you read the Metaculus story, there’s a lot of risk of like, if the model is in a box, trying to improve itself, then it could go completely off the rails and have these secret goals like resource accumulation and power seeking and resistance to shutdown, that you really don’t want in a very powerful model. And we’ve actually seen that in some of our experiments in laboratory settings. So how do you do recursive self-improvement and make sure it’s aligned at the same time? I think that’s the name of the game. And to me, it just nets out to how do humans do that? And how do human organizations do that? So like corporations are probably like the most scaled human agents today, they like have certain goals that they’re trying to reach. And they have certain guiding principles. They have some oversight in terms of shareholders and stakeholders and board members. How do you make corporations aligned and able to sort of recursively self-improve? And another model to look at is science, where the purpose of science is to do things that have never been done before and push the frontier. And to me, it all comes down to empiricism. So when people don’t know what the truth is, they come up with theories and then they design experiments to try them out. And similarly, if we can give models those same tools, then we could expect them to sort of improve recursively in an environment and potentially become much better than humans could be just by banging their head against reality, or I guess metaphorical head. So I guess I don’t expect there to be a wall in terms of models’ ability to improve themselves if we can give them access to the ability to be empirical. And I guess like Anthropic deeply in its DNA is an empirical company. We have a lot of physicists like Jared Kaplan, who’s our chief research officer, who I’ve worked with a lot, was a professor of black hole physics at Johns Hopkins. And I guess he technically still is, but on leave. So yeah, it’s in our DNA. And yeah, I guess that’s the RLAIF.
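For the code-review flavor of RLAIF he mentions, a toy reward function might look like the sketch below. The judge functions and weights are invented stand-ins; in a real pipeline these would be calls to a judge model and an actual linter rather than string checks.

```python
# Toy sketch of AI-feedback scoring for generated code, in the spirit of the
# RLAIF description above. The judges are stubs and the weights are invented.

def judge_correctness(code: str) -> float:
    return 1.0 if "return" in code else 0.0      # stand-in for a model judging correctness

def judge_maintainability(code: str) -> float:
    return 1.0 if '"""' in code else 0.5         # stand-in for a model judging readability

def passes_linter(code: str) -> float:
    return 1.0 if "\t" not in code else 0.0      # stand-in for a real linter run

def ai_feedback_reward(code: str) -> float:
    """Combine several automated/AI critiques into one scalar reward for RL."""
    return (0.5 * judge_correctness(code)
            + 0.3 * judge_maintainability(code)
            + 0.2 * passes_linter(code))

candidate = '''def add(a, b):
    """Add two numbers."""
    return a + b
'''
print(ai_feedback_reward(candidate))   # reward used in place of human preference labels
```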

Interviewer: So let me just follow this thread on in terms of bottleneck, this is kind of a tangent, but just what is the biggest, what is the biggest bottleneck today on model intelligence improvement?

Benjamin Mann: The stupid answer is data centers, power, and chips. I think if we had 10 times as many chips and had the data centers and power to run them, then maybe we wouldn’t go 10 times faster, but it would be a really significant speed boost.

Interviewer: So it’s actually very much scaling laws, just more compute.

Benjamin Mann: Yeah, I think that’s a big one. And then the people really matter. We have great researchers, and many of them have made really significant contributions to the science of how the models improve. So it’s compute, algorithms, and data; those are the three ingredients in the scaling laws. And just to make that concrete: before we had transformers, we had LSTMs, and we’ve done scaling laws on what the exponent is for those two architectures. We found that for transformers, the exponent is higher. Making changes like that, where as you increase scale you also increase your ability to squeeze out intelligence, those kinds of things are super impactful. So having more researchers who can do better science and find out how we squeeze out more gains is another one. And then with the rise of reinforcement learning, the efficiency with which these things run on chips also matters a lot. We’ve seen in the industry something like a 10x decrease in cost per year for a given amount of intelligence, through a combination of algorithmic, data, and efficiency improvements. And if that continues, in three years we’ll have 1000x smarter models for the same price. Kind of hard to imagine.
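To spell out that last bit of arithmetic, treating the roughly 10x-per-year cost improvement as if it simply compounds (an assumption about the trend continuing, not a guarantee):

```python
# If cost per unit of intelligence falls ~10x per year and that trend compounds
# (an assumption), three years gives a 1000x change in what the same money buys.
annual_improvement = 10
years = 3
print(annual_improvement ** years)   # 1000 -> "1000x smarter models for the same price"
```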

Interviewer: I forget where I heard this, but it’s just, it’s amazing that so many innovations came together at the same time to allow for this sort of thing and continue to progress where one thing isn’t just slowing everything down. Like we’re out of some rare earth mineral, or we just can’t optimize our reinforcement learning more. Like, it’s amazing that we continue to find improvements. And there isn’t one thing that’s just slowing everything down.

Benjamin Mann: Yeah, I think it really is just a combination of everything. Probably we’ll hit a wall at some point. Like, I guess in semiconductors, like my brother works in the semiconductor industry, and he was telling me that you can’t actually shrink the size of the transistors anymore, because the way semiconductors work is you dope silicon with other elements. And the doping process would result in either zero or one atom of the doped elements inside a single fin, because they’re so, so, so tiny.

Interviewer: Oh my God.

Benjamin Mann: And that’s just wild to think of. And yet, Moore’s Law somehow continues in some form. And so like, yes, there are these like theoretical physics constraints that people are starting to run into, and yet they’re finding ways around it.

Interviewer: We got to start using parallel universes for some of the stuff.

Benjamin Mann: I guess so.

Interviewer: Okay, I want to zoom out and talk about just Ben, Ben as a human, for a moment before we get to a very exciting lightning round. I imagine the burden of feeling responsible for safe superintelligence is a heavy one. It feels like you’re in a place where you can make a significant impact on the future of safety and AI. That’s a lot of weight to carry. How does that impact you personally, impact your life, how you see the world?

Benjamin Mann: There’s this book that I read in 2019 that really informs how I think about working with these very weighty topics, called Replacing Guilt by Nate Soares. He describes a lot of different techniques for working through this kind of thing. He’s actually the executive director at MIRI, the Machine Intelligence Research Institute, which is an AI safety think tank that I worked at for a couple of months, actually. And one of the things he talks about is this idea called resting in motion, where some people think that the default state is rest. But actually, in the environment of evolutionary adaptation, I really doubt that was true, you know: in nature and the wilderness, as hunter-gatherers, it’s really unlikely that we evolved to just be at leisure. We probably always had something to worry about, like defending the tribe and finding enough food to survive and taking care of the children and dealing with…

Interviewer: Spreading our genes?

Benjamin Mann: Yeah. And so I think about that as: the busy state is the normal state. And I try to work at a sustainable pace, because it’s a marathon, not a sprint. That’s one thing that helps. And then just being around like-minded people who also care, because it’s not a thing that any of us can do alone. And Anthropic has incredible talent density. One of the things I love the most about our culture here is that it’s very ego-less. People just want the right thing to happen. And I think that’s another big reason that the mega offers from other companies tend to bounce off: people just love being here and they care.

Interviewer: That’s amazing. I don’t know how you do it. I’d be extremely stressed. I’m going to try this resting in motion strategy. Okay. So you’ve been at Anthropic for a long time, from the very beginning. I was reading there were seven employees back in 2020; now there are over a thousand. I don’t know what the latest number is, but I know it’s over a thousand. I’ve heard also that you’ve done basically every job at Anthropic. You made big contributions to a lot of the core products, the brand, the team, hiring. Let me just ask you: what’s changed the most over that period? What is most different from the beginning days? And which of the jobs that you’ve had over the years have you most loved?

Benjamin Mann: I’ve probably had like 15 different roles, honestly. I ran security for a bit. I managed the ops team when our president was on maternity leave. I was crawling around under tables plugging in HDMI cords and doing pen testing on our building. I started our product team from scratch and convinced the whole company that we needed to have a product instead of just being a research company. So yeah, it’s been a lot. All of it very fun. I think my favorite role in that time has been when I started the labs team about a year ago, whose fundamental goal was to do transfer from research to end-user products and experiences. Because fundamentally, I think the way that Anthropic can differentiate itself and really win is to be on the cutting edge. We have access to the latest, greatest stuff that’s happening. And I think honestly, through our safety research, we have a big opportunity to do things that no other company can safely do. So for example, with computer use, I think that’s going to be a huge opportunity for us. Basically, to make it possible for an agent to use all your credentials on your computer, there has to be a huge amount of trust. And to me, we need to basically solve safety and alignment to make that happen. So I’m pretty bullish on that kind of thing, and I think we’re going to see really cool stuff coming out soonish. Yeah, just leading that team has been so fun. MCP came out of that team, and Claude Code came out of that team. The people I hired are a combo of having been a founder and having been at big companies and seen how things work at scale. So it’s just been an incredible team to work with and figure out the future with.

Interviewer: I want to hear more about this team, actually. The person who connected us, the reason we’re doing this, is a mutual friend and colleague, Raph Levien, who I used to work with at Airbnb. He works on this team and leads a lot of this work. And so he wanted me to make sure I asked about this team, because I didn’t realize all these things came out of that team, holy moly. So what else should people know about this team? It used to be called Labs; I think it’s called Frontiers now.

Benjamin Mann: That’s right.

Interviewer: Yeah, cool. So the idea here is this team works with the latest technologies that you guys have built and explores what is possible. Is that the general idea?

Benjamin Mann: Yeah. I was part of Google’s Area 120, and I’ve read about Bell Labs and how to make these innovation teams work. It’s really hard to do right. And I wouldn’t say that we’ve done everything right, but I think we’ve done some serious innovation on the state of the art of company design. And Raph has been right at the center of that. When I was first spinning up the team, the first thing I did was hire a great manager, and that was Raph. So he’s definitely been crucial in building the team and helping it operate well. And we defined some operating models, like the journey of an idea from prototype to product, how graduation of products and projects should work, and how teams run sprint models that are effective and make sure they’re working at the right ambition level. So that’s been really exciting. Concretely, we think about skating to where the puck is going. And what that looks like is really understanding the exponential. There’s this great study that METR has done (Beth Barnes is the CEO of that organization), and it shows how long a time horizon of software engineering tasks models can complete. And just really internalizing that: okay, don’t build for today. Build for six months from now, build for a year from now. And the things that aren’t quite working, that are working 20% of the time, will start working 100% of the time. I think that’s really what made Claude Code a success: we thought, you know, people are not going to be locked to their IDEs forever. People are not going to be auto-completing. People will be doing everything that a software engineer needs to do. And a terminal is a great place to do that, because a terminal can live in lots of places. A terminal can live on your local machine, it can live in GitHub Actions, it can live on a remote machine in your cluster. That’s sort of the leverage point for us. And that was a lot of the inspiration. So I think that’s what the labs team tries to think about. Are we AGI-proof enough?
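As a rough illustration of “build for six months from now”: if an agent’s reliable task horizon keeps growing exponentially, a simple projection tells you what to design for. The sketch below is not METR’s methodology; the starting horizon and doubling period are illustrative assumptions (METR’s published trend is on the order of a doubling every several months).

```python
# Toy projection of an agent's reliable task horizon under steady exponential
# growth. The starting horizon and doubling period are assumptions for
# illustration, not figures from METR or Anthropic.

def projected_horizon_minutes(current_minutes: float,
                              months_ahead: float,
                              doubling_months: float = 7.0) -> float:
    """Task-horizon length after `months_ahead`, given a fixed doubling period."""
    return current_minutes * 2 ** (months_ahead / doubling_months)

if __name__ == "__main__":
    today = 60.0  # assume an agent reliably handles ~1-hour tasks today
    for months in (6, 12, 24):
        horizon = projected_horizon_minutes(today, months)
        print(f"+{months:>2} months: ~{horizon / 60:.1f} hours")
```

The point of such a projection is only directional: it argues for prototyping against where the curve will be when the product ships, not where it is today.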

Interviewer: What a fun place to be. By the way, fun fact, Raph was my first manager at Airbnb when I joined. I was an engineer and he was my first manager and it all worked out.

Benjamin Mann: Yeah.

Interviewer: Okay. Final question before the very exciting lightning round. I’ve never asked this question before. I’m curious what your answer would be. If you could ask a future AGI one single question and be guaranteed to get the right answer, what would you ask?

Benjamin Mann: I have two dumb answers first for fun. The first is there’s this Asimov short story I love called “The Last Question,” where the protagonist throughout the eras of history is trying to ask this superintelligence, “How do we prevent the heat death of the universe?” And I won’t spoil the ending, but it’s a fun question.

Interviewer: So you’d ask it that question because the answer in the story was unsatisfying, or?

Benjamin Mann: Okay. I’ll give it away. So it keeps saying, “Need more information, need more compute.” And then finally, as it’s approaching the heat death of the universe, it like says, “Let there be light.” And then it starts the universe over again.

Interviewer: Oh, wow.

Benjamin Mann: So that’s the first cheat answer. The second cheat answer is, “What question can I ask you to get more questions answered?”

Interviewer: Classic.

Benjamin Mann: And then the third answer, which is my real question, is, “How do we ensure the continued flourishing of humanity into the indefinite future?” That’s the question I’d love to know. And if I can be guaranteed a correct answer, then it seems very valuable to ask.

Interviewer: I wonder what would happen if you asked Claude that today, and then how that answer would change over the next couple of years.

Benjamin Mann: Yeah. Maybe I’ll try that. I’ll put it into the deep research thing that we have and see what it comes out with.

Interviewer: Okay. I’m excited to see what you come up with. Ben, is there anything else you wanted to mention or leave listeners with maybe as a final nugget before we get to a very exciting lightning round?

Benjamin Mann: Yeah. I guess my push would be like, these are wild times. If they don’t seem wild to you, then you must be living under a rock. But also, get used to it because this is as normal as it’s going to be. It’s going to be much weirder very soon. And if you can sort of mentally prepare yourself for that, I think you’ll be better off.

Interviewer: I need to make that the title of this episode. “It’s going to get much weirder very soon.” I 100% believe that. Oh my God. I don’t know what’s in store. I love how you’re at the center of it all. With that, we’ve reached a very exciting lightning round. I’ve got five questions for you. Are you ready?

Benjamin Mann: Yeah. Let’s do it.

Interviewer: What are two or three books that you find yourself recommending most to other people?

Benjamin Mann: The first one I mentioned before, Replacing Guilt by Nate Soares. Love that one. The second one is Good Strategy/Bad Strategy by Richard Rumelt, just for thinking in a very clear way about how you build product. It’s one of the best strategy books I’ve read, and strategy is a hard word to even think about in many ways. And then the last one is The Alignment Problem by Brian Christian. It really thoughtfully goes through what this problem is that we care about, that we’re trying to solve here, and what the stakes are, in a version that’s more updated and easier to read and digest than Superintelligence.

Interviewer: I’ve got Good Strategy/Bad Strategy right behind me. I think I’m going to point to it. There it is.

Benjamin Mann: Nice.

Interviewer: And I’ve had Richard Rumelt on the podcast in case anyone wants to hear from him directly. Next question. Do you have a favorite recent movie or TV show?

Benjamin Mann: Really enjoyed Pantheon. It was really good, based on a Ken Liu or Ted Chiang story. Ken Liu, I think. Super good. It talks about what it means if we have uploaded intelligences and what their moral and ethical exigencies are. Ted Lasso, which is supposedly about soccer, but actually it’s about human relationships and how people get along, and it’s just super heartwarming and funny. And then this isn’t really a TV show, but Kurzgesagt is my favorite YouTube channel. It goes through random science and social problems and is just super well done and super well made. I love watching that.

Interviewer: Wow. I haven’t heard of that. As we were talking, I was thinking: Ted Lasso. I feel like that’s what you need to put into Constitutional AI. Act like Ted Lasso. Kind, smart, hardworking. Oh my god. There we go. I think we’ve solved the alignment problem right here. Get those writers on this ASAP. Two more questions. Do you have a favorite life motto that you come back to in work or in life?

Benjamin Mann: A really dumb one is, “Have you tried asking Claude?” This is getting more and more common, where recently I asked a co-worker, “Hey, who’s working on X?” and they were like, “Let me Claude that for you.” And then they sent me the link to the thing afterwards. I was like, “Oh yeah, thanks. That’s great.” But maybe a more philosophical one, I would say, is “Everything is hard.” Just to remind ourselves that it’s okay for things that feel like they’re supposed to be easy to not be easy, and sometimes you just have to push through anyway.

Interviewer: And rest in motion while you’re doing that.

Benjamin Mann: Yeah.

Interviewer: Final question. I don’t know if you want people to know this, but I was browsing through your Medium posts and you have a post called “Five Tips to Poop Like a Champion.” I love it. Can you share one tip to poop like a champion if you remember your tips?

Benjamin Mann: I, of course, do. It’s actually my most popular Medium post.

Interviewer: It’s okay. It’s a great title.

Benjamin Mann: I think maybe my biggest tip would be use a bidet. It’s amazing. It’s life-changing. It’s so good. Some people are kind of freaked out by it. It’s the standard in countries like Japan. And I think it’s just more civilized and in 10 or 20 years, people will be like, “How could you not use that?” So, yeah.

Interviewer: And a bidet could be like a Japanese toilet. That’s along the same lines, right? Okay. I love where we went with this. Ben, this was incredible. Thank you so much for doing this. Thank you so much for sharing so much real talk. Two final questions. Where can folks find you online if they want to reach out, maybe go work at Anthropic? And how can listeners be useful to you?

Benjamin Mann: You can find me online at benjmann.net. And on our website, we have a great careers page that we’re working on making a little bit easier to access and figure out. But definitely point Claude at it. And it can help you figure out what could be interesting for you. And how can listeners be useful to me? I think safety-pill yourself. That’s the number one thing. And spread it to your network, I think. Like I said, there are very few people working on this. And it’s so important. So, yeah, think hard about it and try to look at it.

Interviewer: Thanks for spreading the gospel, Ben. Thank you so much for being here.

Benjamin Mann: Thanks so much, Lenny.

Interviewer: Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.
