Against Moloch
March 02, 2026

Monday AI Brief #15

Last week’s conflict between the Department of War and Anthropic marked a turning point for AI. I’m cautiously hopeful that the parties involved will find some kind of de-escalation from the current nuclear option, but immense, irreparable damage has already been done: to Anthropic, to the entire AI industry, and to America’s pre-eminence in AI.

This is a complex, fast-moving situation that is outside my usual beat. Rather than trying to cover it in detail myself, I’m going to link to some of the most useful analysis. But I want to be extremely clear: this is the most important thing that’s happened in AI for a long time, and it’s gravely concerning. These are dark times and the road ahead just got much more difficult.

Clawed

Dean Ball’s latest is grim but essential reading.

This strikes at a core principle of the American republic, one that has traditionally been especially dear to conservatives: private property. […]

This threat will now hover over anyone who does business with the government, not just in the sense that you may be deemed a supply chain risk but also in the sense that any piece of technology you use could be as well. […]

Zvi reviews the situation

Zvi’s post from this morning is the most comprehensive review of the situation. I highly recommend reading at least the first two sections.

45 Thoughts About Agents

Everything changed in November, with Opus 4.5 + Claude Code. Since then, we’ve all been frantically trying to figure out what it all means (when we weren’t preoccupied with building cool things). Steve Newman shares 45 characteristically insightful thoughts about AI agents—some of these will be obvious to you if you already use agents extensively, but I found multiple new ideas here.

39: Agents use vastly more compute than chatbots. Compute usage for chatbots is basically limited by how much output people want to read. An agent can spend virtually unlimited time doing intermediate work that no one will review directly. If 100M desk workers start using AI agents at the level of intensity which requires Anthropic’s current “Max 20x” plan, that would translate into $240 billion in revenue per year. It will be years before there are enough GPU chips to support that level of usage.
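The $240 billion figure checks out as a back-of-envelope calculation, assuming the Max 20x plan at $200/month (its price as of this writing; the plan name is from the quote, the price is my assumption):

```python
# Back-of-envelope check of the revenue figure in thought 39.
# Assumption: "Max 20x" plan priced at $200 per worker per month.
workers = 100_000_000      # desk workers using agents at Max 20x intensity
monthly_price = 200        # USD per worker per month (assumed)

annual_revenue = workers * monthly_price * 12
print(f"${annual_revenue / 1e9:.0f}B per year")  # → $240B per year
```

For scale, that would be several times the entire AI industry’s current annual revenue, which is why the chip-supply bottleneck follows directly from the arithmetic.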

An overview of AI and programming

Timothy Lee talks to professional programmers to assess how AI is changing the programming profession. His analysis of current capabilities and impacts is solid, but I expect much faster near-term progress than he does. Recent progress has been incredibly fast (and accelerating), and there’s a huge gap between what the models are already capable of and what most people are using them for. I’m pretty sure 2026 will bring even more change and disruption to programming than 2025 did.

What Only You Can Say

This is the most useful “how to use AI” piece I’ve run across in a while: Luke Bechtel has AI interview him about his ideas as a way to organize his thoughts and prepare for a new piece of writing.

How much should we worry about AI biorisk?

The risk of bad actors (terrorists, perhaps, or extortionists) using AI to create a bioweapon is one of the most serious risks of advanced AI. Transformer explores why biorisk is so concerning, how dangerous current AIs are, and why it’s so hard to assess the danger level.