Monday AI Radar #14
I’m on vacation, so this week’s newsletter is a bit lighter than usual. I wish I could say that the torrent of AI news was also lighter, but… yeah, not so much.
Our focus this week is on politics and strategy. We have two pieces on populist anger about AI, a report by Dean Ball on the Global South’s (lack of) readiness for AGI, and a couple of semi-technical pieces on using AI to help us navigate the transition to superintelligence. And yeah, we’ll talk about the photo op debacle in India.
Top pick
My week with the AI populists
Jasmine Sun spent a week in DC and considers the role of populism in AI politics:
And my reductive two-line summary is as follows: All the money is on one side and all the people are on the other. We aren’t ready for how much people hate AI.
It’s a great piece that calls attention to something that’s likely to be a major factor in AI governance over the next year or two. Be sure to check out her recommended reading at the end.
New releases
Sonnet 4.6
Anthropic just released Sonnet 4.6, a substantial improvement over Sonnet 4.5. Early indications are that it’s very capable and for many tasks can replace Opus at lower cost.
Seedance 2.0
ByteDance’s Seedance 2.0 AI video generator just dropped and it’s really good. Perhaps you’ve seen the flood of videos on social media.
M.G. Siegler contemplates the legal and business implications for Hollywood, ending with a great quote from Rhett Reese ($):
In next to no time, one person is going to be able to sit at a computer and create a movie indistinguishable from what Hollywood now releases. True, if that person is no good, it will suck. But if that person possesses Christopher Nolan’s talent and taste (and someone like that will rapidly come along), it will be tremendous.
Gemini 3.1 Pro
Gemini 3.1 Pro is here, with significant benchmark improvements.
Using AI
Which AI to Use in the Agentic Era
Ethan Mollick presents the eighth version of his guide to choosing the right AI. If you’re already a power user you won’t find much new here, but it’s a great guide for anyone who wants to get started with agentic AI.
Alignment and interpretability
Evaluating moral competence in large language models
I enjoyed this Nature article about evaluating moral competence in large language models, although I’m not sure I fully agree with their distinction between “mere moral performance” (the ability to make good moral decisions) and “moral competence” (making good moral decisions based on morally relevant considerations).
They also place a high priority on “moral pluralism”, which sounds great on paper but has important limitations in practice. Moral agents have to actually make decisions, not simply observe that different value systems would dictate different choices.
Politics
Americans Hate AI
Politico reports on how much the average American hates AI and speculates about how the politics of that will settle out. As far as anyone can tell, the field is still wide open: Republicans and Democrats are both all over the map, and it’s anyone’s guess where the battle lines will ultimately be drawn.
I’m gonna make three bold predictions here:
- Factional positions on AI will be determined as much by chance and transitory tactical advantage as by deeply held moral principle.
- Unfocused (and largely fact-free) populist anger will drive much of the conversation.
- It’s gonna get ugly. Expect a lot of poorly considered and counterproductive legislation, and a lot of deeply dishonest campaigning.
The Spectre haunting the “AI Safety” Community
ControlAI has been running a carefully planned campaign to build awareness of AI existential risk among UK lawmakers. I’m impressed by the amount of thought they’ve put into what they’re trying to achieve and how best to go about it. I’m skeptical about their ultimate success once they transition from trying to raise awareness to trying to get useful, coordinated action from a broad coalition of countries and companies, but they are executing well on this part.
In the UK, in little more than a year, we have briefed +150 lawmakers, and so far, 112 have supported our campaign about binding regulation, extinction risks and superintelligence.
The Moving and the Still
Dean Ball went to India for the AI Impact Summit, worried about whether India and the Global South are ready for advanced AI.
I regret to inform you that I came away even more worried than I went in. […]
The perils and hopes that we discuss here in this newsletter—the ones that come from transformative AI, powerful AI, AGI, superintelligence, or whatever other moniker you wish—were not really on display at the Summit, not so much because of any failing of the Indians but because these topics are not part of polite global conversation. This is a domestic failing, too: as I have frequently pointed out, the implications of powerful AI are only kind of a part of the conversation in America.
Strategy
We’re in Triage Mode for AI Policy
Miles Brundage argues that we’ve missed the best window for AI governance and need to make the best of a bad situation:
We are running well behind on that goal, after losing a lot of valuable time in 2025. So we have a lot of work to do, but we also need to focus, and recognize that we aren’t going to totally nail this AI policy thing. At best, we’ll 80/20 it — mitigating 80% of the risks with 20% of the effort that we would have applied in a world with slower AI progress and an earlier start on serious governance.
How do we (more) safely defer to AIs?
Ryan Greenblatt and Julian Stastny explore a “deference” strategy:
Broadly speaking, when I say “deferring to AIs” I mean having these AIs do virtually all of the work to develop more capable and aligned successor AIs, managing exogenous risks, and making strategic decisions.
They discuss in detail what that strategy would look like, how stable it might be, and how much of a “deference tax” one might pay for pursuing deference as opposed to full-speed capability development.
Sam Altman and Dario Amodei can’t even get along for a photo op
Hilarious, but it also doesn’t bode well for any kind of meaningful cooperation between Anthropic and OpenAI. At a photo shoot during the recent India AI Impact Summit, a group of leaders posed on stage holding hands. Except for Dario Amodei (Anthropic) and Sam Altman (OpenAI), who awkwardly refused to hold hands with each other.
Rob Wiblin interviews Ajeya Cotra
80,000 Hours’ Rob Wiblin interviews Ajeya Cotra about timelines, early warning systems, effective altruism, and especially the idea of using transformative AI to help solve the risks of transformative AI. I greatly appreciate that they provide a video, a transcript, and a detailed summary of what was covered—that’s super helpful for people who want the content but don’t have time to watch the full interview.
The Foundation Layer
The Foundation Layer calls itself “a philanthropic strategy for the AGI transition”, which probably doesn’t sound relevant to you.
But it turns out to be a really well-written, thoughtful guide to what’s currently going on with AI and what key issues we need to navigate in the next few years. I think this is my new go-to piece for people who want to understand the situation and are willing to read a long-form piece. Unless you’re interested in the philanthropy part, you can just read from the Overview through section III.
Nick Bostrom on timing the transition to superintelligence
Nick Bostrom’s latest paper is very strange. It’s meticulously produced and carefully argued, but it starts from a premise that even he doesn’t actually endorse. Briefly, the paper argues that if your only concern is the well-being of people who are presently alive, it makes sense to move forward quickly with superintelligent AI development even if that is likely to cause the extinction of humanity.
Coding
Chris Lattner on the Claude C Compiler
Chris Lattner (a giant in the compiler world) takes a close look at the C compiler that was recently built by a swarm of Claude agents:
My basic take is simple: this is real progress, a milestone for the industry. We’re not in the end of times, but this also isn’t just hype, so take a deep breath, everyone. […] AI has moved beyond writing small snippets of code and is beginning to participate in engineering large systems.
Agentic Engineering Patterns
Worth bookmarking: Simon Willison has started collecting best practices for agentic coding.
Industry news
Anthropic could surpass OpenAI in annualized revenue by mid-2026
Epoch reports that, based on current revenue trends, Anthropic’s annualized revenue might surpass OpenAI’s by mid-2026.
OpenAI might be working on a smart speaker
This makes way more sense: The Information reports that OpenAI’s first dedicated AI device will be a smart speaker with a built-in camera, arriving in 2027 or later.
Elon Musk on Dwarkesh
Dwarkesh recently interviewed Elon Musk. There are interesting moments, but overall it wasn’t Dwarkesh’s finest work. For most people, I recommend skipping the interview and maybe reading Zvi’s analysis:
Elon Musk also has a lot of what seem to be sincerely held beliefs, both normative and positive, and both political and apolitical, that I feel are very wrong. In some cases they’re just kind of nuts.
Open models
Open models in perpetual catch-up
Nathan Lambert reviews the current state of open models (partly $). My best guess is that open models will never matter very much, although I see two possible futures where they become very important:
- Frontier progress slows enough that open models, even if they continue to lag by 6-12 months, end up close enough to closed models in capability.
- Open models become good enough to be genuinely dangerous and are used to cause massive harm because of their lack of guardrails.
Pliny the Liberator “liberates” open models at scale
Pliny the Liberator has a legendary skill for jailbreaking. Here, he reports on a new tool he’s built for removing guardrails from open models.
Ran it on Qwen 2.5 and the resulting railless model was spitting out drug and weapon recipes instantly––no jailbreak needed! A few clicks plus a GPU and any model turns into Chappie. […]
AI policymakers need to be aware of the arcane art of Master Ablation and internalize the implications of this truth: every open-weight model release is also an uncensored model release.
There are no surprises here for anyone who’s been paying attention, but this is an elegant illustration of why open models are so potentially dangerous.
Robots
Robots are getting very agile
If you haven’t been keeping up with recent progress in robotics, take a look: state-of-the-art robots are getting very impressive indeed. Make sure to scroll down and check out the comparison to last year’s show.
