Monday AI Radar
Every Monday I publish a newsletter with the most interesting AI news and writing that caught my eye over the past week. It’s available as an email newsletter. For something shorter and less technical, check out Monday AI Brief.
People continue to lose their minds about Claude Code. We’ll begin this week’s newsletter with a look at what people are using it for and where they think it’s headed. Here’s my short take: Claude Code’s present usefulness is 30% overhyped. A lot of the amazing things people are reporting are genuinely amazing, but they’re quick working prototypes of fairly simple tools. But…
Sometime in the past couple of months, AI crossed a really important capability threshold. By the end of 2025, it was clear to any programmer who was paying attention that our profession had completely changed. By the end of 2026, I think the same will be true for many professions. Most people won’t realize it right away, and it may (or may not) take a few years for the changes to really take hold, but the writing is now very clearly on the wall.
Happy New Year! It would be silly for me to wish you an uneventful year, but I hope most of your surprises are good ones.
We begin this week’s update with our final roundup of year-end retrospectives. After that we’ll get to a new (and somewhat lengthened) timeline from the AI-2027 team, gaze in wonder at the state of the art in image generation, hear a beautiful but heartbreaking story about AI-related job loss, and contemplate the possibility of a war over Taiwan.
On paper, this was a quiet week: there were no major releases, and no big headlines. Online, though, there’s been a big shift in the vibe since the release of Opus 4.5 a month ago. It’s now undeniable that AI is transforming programming, and it feels increasingly likely that the same will happen to all other knowledge work before too long. We’ll check in with some industry leaders to see how it feels in the trenches.
But that’s not all—we review the latest evidence of accelerating progress, gaze upon the wreckage of once-proud benchmarks, and try to figure out what to do about AI-related job loss. And shoes! If you’ve been wanting more fashion reporting in these pages, today is your lucky day.
As 2025 draws to a close, we look back on one of humanity’s last “normal” years with Dean Ball, Andrej Karpathy, and prinz. We have lots of AI-assisted science news including a big new benchmark, a look at AI in the wet lab, and a new startup working on emulating fruit fly brains.
Lest we get too carried away with holiday cheer, UK AISI reports on rapid growth in dangerous capabilities, Windfall Trust notes early signs of labor market impacts, and Harvey Lederman meditates on automation, meaning, and loss. Plus lots of political news, a few new models, and much more.
It’s the time of year when people start publishing retrospectives—we have a great review of Chinese AI in 2025, an in-depth review of technical developments, and a report on the state of enterprise AI deployment. Stand by for more of these over the next few weeks.
If you’re looking for data, we have overviews of when prediction markets think AGI might arrive (hint: soon) and safety practices at the big labs (hint: not great). Plus AI crushes another major math contest, some guidance on integrating AI into education, and lots more. But let’s ease into it with a fun conversation about model psychology.
First, some housekeeping: I’ve started Monday Brief, which is a shorter and less technical version of Monday Radar. You can get the email newsletter here if you’re interested.
There was only one big new release last week, but there’s still lots to catch up on. We’ll look at a couple of new metrics from CAIS and Epoch as well as progress reports on AI-powered science, coding productivity, and autonomous cars. Plus some great pieces on cyberwarfare, the vibecession, alignment, and AI companions.
This week’s most interesting news is Claude’s “soul document”, which Anthropic used to train Claude on ethical behavior. There are so many facets to this story, including how the document was discovered, what this tells us about Claude’s ability to introspect, and the complexities of codifying ethical behavior in the real world.
We also have a deeper look at Opus 4.5, plenty of political developments, some fascinating but troubling papers on safety and alignment, and a guide to giving money to support AI safety.
Welcome to the first issue of Monday Radar. It’s been a busy week, with significant releases from all three of the big labs. We also have deep dives on the bleeding edge of AI productivity, AI scientists, challenges with controlling even well-aligned AI, and much more.
