Monday AI Brief
Keeping up with AI is a full-time job (trust me on this). To make it a little easier, I publish a newsletter with a carefully curated selection of the most interesting news and analysis from the past week. I focus on:
- Capabilities: what can AI do now?
- Trajectory: where will we be in five years?
- Alignment: will superhuman AI do what we want?
- Interpretability: how will we know if it doesn’t?
- Societal impacts: will we have jobs in five years?
- Existential risk: what about AI-engineered pandemics?
- Strategy: how do we prepare for superhuman AI?
- Using AI: how do you get the most from your AI?
Monday AI Brief is short and mostly non-technical. If you’d like to go deeper, Monday AI Radar is longer and more technical.
I’m pleased to report that I have no new AI-related crises for you this week. Instead we get to focus on the fun parts, starting with AI consciousness. We’ll ask two leading neuroscientists whether AI is likely to become conscious (conclusion: probably yes, or almost certainly not).
AI is doing fascinating things to programmers: for many of us, this moment is simultaneously exhilarating and slightly heartbreaking. We’ll look at one high-level overview of how AI is affecting programming, and one deeply personal reflection on that same topic. Programmers aren’t the only ones being disrupted: prinz joins us to argue that while the legal profession will survive AI, the big law firms will not.
The conflict between the Department of War and Anthropic has quieted somewhat, but nothing has been resolved and a catastrophic outcome is still entirely possible. Regardless of what happens next, two things are very clear.
This is the least political that AI will ever be. Politicians are finally waking up to the fact that AI is a big deal. Even though most of them don’t understand why it’s a big deal, you can safely assume they will have an increasing appetite for government intervention. The DoW incident is a preview, not an aberration.
This is the least stressful that AI will ever be. The last two weeks have been brutal: I notice several of the writers and thinkers that I most respect have been publicly struggling and in some cases decompensating. I’m afraid the pace is only going to get faster, and the stakes are only going to get higher. Pace yourselves.
In the spirit of pacing ourselves, we’ll cover what we need to cover about DoW, then put it down and move on to happier topics.
Last week’s conflict between the Department of War and Anthropic marked a turning point for AI. I’m cautiously hopeful that the parties involved will find some kind of de-escalation from the current nuclear option, but immense, irreparable damage has already been done: to Anthropic, to the entire AI industry, and to America’s pre-eminence in AI.
This is a complex, fast-moving situation that is outside my usual beat. Rather than trying to cover it in detail myself, I’m going to link to some of the most useful analysis. But I want to be extremely clear: this is the most important thing that’s happened in AI for a long time, and it’s gravely concerning. These are dark times and the road ahead just got much more difficult.
I’m on vacation, so this week’s newsletter is a bit lighter than usual. I wish I could say that the torrent of AI news was also lighter, but… yeah, not so much.
Our focus this week is on politics and strategy. We explore populist anger about AI, check in with Dean Ball on the Global South’s (lack of) readiness for AGI, and discuss using AI to help us navigate the transition to superintelligence. And for a change of pace, we’ll talk about AI video and what it means for Hollywood.
We’ve got lots to say about the future this week. Matt Shumer says what happened to coding last year will happen to everything else this year, and Dario Amodei still expects a country of geniuses in a data center by 2028. Steve Newman thinks the frenzy over OpenClaw gives us a peek into the future. And on the topic of predictions, AI is close to beating the best humans in forecasting tournaments.
This is what takeoff feels like. Anthropic and OpenAI have been explicit about their intention to create an intelligence explosion, and employees at both companies have recently confirmed that their models are significantly accelerating their own development.
This week we’ll talk about what that means, considering the trajectory of future progress, our increasing inability to measure the capabilities and risks of the frontier models, and some ideas for how humanity can successfully navigate what is coming.
First, an administrative note: I’m starting to write longer pieces on specific topics. I’ll link to them in each week’s newsletter, but you can subscribe to them directly if you like.
We have so much to talk about this week. The internet is taking a break from losing its mind over agents to instead lose its mind over Moltbook. Dario Amodei has an important new piece about the dangers of AI. Boaz Barak considers Claude’s Constitution. And more. There’s always more.
Anthropic has published Claude’s constitution (formerly known as the soul document). We’ll also visit with Dario and Demis at Davos, learn some surprising things about how LLMs think, worry about the children, and have fun with images.
We start this week’s brief with two pieces about the impact of AI on the economy, and in particular on employment. My money is on major AI-related unemployment, fairly soon. I don’t think that’s certain, but the alternatives look increasingly unlikely.
In related news, prinz discusses an AI takeoff, we assess how well the AI forecasters are doing, and Nathan Lambert shares some guidance on how to pick the right model for the job. To finish up on a gruesome note, Dean Ball shares some of the worst ideas for AI legislation currently under consideration in various states.
People continue to lose their minds about Claude Code. We’ll begin this week’s newsletter with a look at what people are using it for and where they think it’s headed. Here’s my short take: Claude Code’s present usefulness is 30% overhyped. Much of what people are reporting is genuinely amazing, but it amounts to quick working prototypes of fairly simple tools. But…
Sometime in the past couple of months, AI crossed a really important capability threshold. By the end of 2025, it was clear to any programmer who was paying attention that our profession had completely changed. By the end of 2026, I think the same will be true for many professions. Most people won’t realize it right away, and it may (or may not) take a few years for the changes to really take hold, but the writing is now very clearly on the wall.
Happy New Year! It would be silly for me to wish you an uneventful year, but I hope most of your surprises are good ones.
This week we’re talking about the challenges of benchmarking advanced AI, looking at new (and slightly longer) timelines from the AI-2027 team, worrying about AI-related job loss, and asking Claude whether it’s a person.
On paper, this was a quiet week: there were no major releases, and no big headlines. Online, though, there’s been a big shift in the vibe since the release of Opus 4.5 a month ago. It’s now undeniable that AI is transforming programming, and it feels increasingly likely that the same will happen to all other knowledge work before too long.
But that’s not all—we review the latest evidence of accelerating progress, discuss whether AI might increase the demand for knowledge workers, and look at how Claude handles mental health crises. And shoes! If you’ve been wanting more fashion reporting in these pages, today is your lucky day.
Welcome to the shorter and less technical version of Monday AI Radar.
This week brought some great 2025 retrospectives and some brave predictions for 2026. It was hard to pick just one, but I think prinz did a great job summarizing what is likely to be one of humanity’s last “normal” years. We’ll also try to get our heads around the “jagged frontier” of AI capabilities, look at some impressive new accomplishments, and end on a meditative note as we contemplate life in a world that doesn’t need human labor.
Welcome to the shorter and less technical version of Monday AI Radar. We’re focusing on model psychology this week with pieces on training “character” at Anthropic, what we do and don’t know about the possibility of AI consciousness, and some hard questions about whether AI should prioritize obedience or virtue. Plus: grading the big labs on their safety practices, copywriters talking about losing their jobs to AI, and a lighthearted look at a recent big AI conference.
Welcome to the shorter and less technical version of Monday AI Radar. Each week I’ll pick a couple of the most interesting and important pieces from the full update.
