Monday Brief #3
Welcome to the shorter and less technical version of Monday Radar. Each week I’ll pick a couple of the most interesting and important pieces from the full update.
How AI-driven feedback loops could make things very crazy, very fast
Benjamin Todd has a great piece on how AI might get weird in a hurry:
But there are other feedback loops that could still make things very crazy – even without superintelligence – it’s just that they take five to twenty years rather than a few months. The case for an acceleration is more robust than most people realise.
This article will outline three ways a true AI worker could transform the world, and the three feedback loops that produce these transformations, summarising research from the last five years.
DeepSeek-V3.2
DeepSeek just released DeepSeek-V3.2, an extremely capable open-weights model. It isn’t at the frontier, but it’s probably less than a year behind. As always, Zvi has a full analysis of the release. I have three questions, only one of which is rhetorical:
- Chinese open-weights models continue to fast-follow the big labs, with DeepSeek and MoonshotAI both within a year of the frontier. Will they catch up? Fall behind? Continue to fast-follow?
- DeepSeek’s models seem to be significantly behind the frontier in some important but intangible ways. How much does that matter, and how hard will it be to close that gap?
- DeepSeek has provided almost no safety documentation for this release, and it seems easy to get dangerous output from the model. If the frontier labs achieve truly dangerous capabilities within a year AND the open models stay less than a year behind them AND the open models continue to have almost no meaningful safeguards, how do we think that’s going to go?
The CAIS AI Dashboard
The Center for AI Safety has a new AI Dashboard, which does a great job of summarizing capabilities and safety metrics for the leading models. This is now my top pick for a single place to keep an eye on capabilities.
AIs are getting pretty good at science
Some of you are old enough to remember September of 2025, when Scott Aaronson reported that ChatGPT had provided significant help with his most recent paper. Upping the ante, Steven Hsu reports that, for his paper in Physics Letters B, “the main idea in the paper originated de novo from GPT-5.”
The Medical Case for Self-Driving Cars
Jonathan Slotkin has an opinion piece about autonomous cars in The New York Times. Short version: Waymos are so much safer than human-driven vehicles that accelerating their deployment is a public health imperative. He argues that if this were a clinical trial, medical ethics would require ending it immediately and canceling the human-driver arm.
What if AI ends loneliness?
I really enjoyed this long but excellent piece by Tom Rachman on AI companions and loneliness. Obvious prediction: AI will give us the option of getting exactly what we really want from a companion, without the reciprocity that human relationships require. Cover your eyes: it’s gonna be gruesome.
