Ads, Incentives, and Destiny
There’s been some recent unpleasantness regarding Anthropic's Super Bowl ads. To recap:
- OpenAI started showing ads in some tiers of ChatGPT.
- Anthropic made some Super Bowl ads making fun of ads in AI.
- Sam Altman got mad about Anthropic’s ads.
If you haven't already, you should watch one of the ads—they’re very good. Even Sam laughed, right before he got mad about it.
Anthropic’s ads are a lot of fun, but they aren’t completely fair: they implicitly target OpenAI, yet depict ads far worse than anything OpenAI is actually doing. But fair or not, they raise a valid concern.
Death, taxes, and enshittification
Let me be clear: OpenAI’s ad policy is thoughtful and ethical, and I have no problem with it. If OpenAI rigorously adheres to this policy in the long run, I’ll be surprised, delighted, and contrite.
Did I mention that I’d be surprised if OpenAI holds the line? Because I would be quite surprised. The tech industry is littered with companies that began with clear, ethical boundaries about ads, but slowly evolved into user-hostile rent-taking machines. The problem is not that ads are intrinsically bad, but that in certain tech products, the nature of the advertising business creates almost irresistible perverse incentives.
Google was once the canonical example of an ethical tech company. Their motto in those days was “don’t be evil”, and they weren’t. They had a great product that was a delight to use, and their ads were clearly marked as ads, in accordance with a thoughtful and ethical policy much like OpenAI’s new policy. Google was one of the best things about the internet, and they were committed to doing the right thing. But the Ring of Power has a will of its own…
Slowly but inexorably, Google began to change. It turned out that it was possible to make more money per search by showing more ads, and so there were more ads. And people clicked on ads more often if the ads looked more like organic search results, so it became harder and harder to tell them apart. And ads were more valuable if you knew more about the person you were showing them to, so the internet was carpet bombed with increasingly aggressive user-tracking technology.
Google’s downfall wasn’t a lack of good intentions—it was their business model. An ad-supported search engine will inevitably face a million opportunities to become a tiny bit worse and more profitable. And as the years go by? Incentives eat values for breakfast.
Cory Doctorow calls this process “enshittification” and once you know what to look for, it’s everywhere. Google, Facebook, Instagram, Amazon, Instacart… If the business model encourages enshittification, it’s just a matter of time before once-laudable ethical standards begin to bend, and an ad-supported product mutates into a product-supported ad-delivery machine.
Incentives as destiny
The New York Times has maintained ethical boundaries around ads for 175 years, while Google gave in to the dark side within 15 years. Google was once as idealistic as they come, so what went wrong? Why did NYT succeed where Google failed?
It’s complicated, and I don’t pretend to have a single master theory that explains everything. But three factors seem critical for whether a business enshittifies:
- Are there strong incentives to blur the line between content and advertising?
- Are there strong incentives to support ads via unethical behavior?
- Do strong lock-in effects make it hard for customers to leave?
Blurring the line between content and ads
Anthropic’s ads beautifully pointed out the toxicity of presenting advertising as content. Some business models simply offer more opportunity to cross that line than others.
For a newspaper, there’s relatively little money to be made by crossing that line: it’s cheap for the Times to maintain a strict separation between the newsroom and the advertising department. Google, on the other hand, can profit very handsomely by blurring the line between actual search results and “sponsored” results.
Incentives for unethical behavior
Enshittification often spreads beyond how ads are presented. Google, for example, can charge more for ads that are well targeted. It’s no surprise, then, that they have a long history of using very questionable techniques to track user activity across the internet. The Times, on the other hand, simply doesn’t have as many opportunities to profit from questionable behavior.
Lock-in
It’s a lot easier to exploit your customers if they’re locked into your platform. NYT is arguably the best newspaper, but it’s hardly the only one: if the experience of reading the Times becomes too unpleasant, readers will simply leave. Google, on the other hand, has immense lock-in: individuals have to use Google because it’s by far the best and easiest way to find things, and businesses have to advertise on it because Google is where people find things. Google has enormous headroom for extractive behavior, because the cost of leaving is so high for both users and advertisers.
Where does that leave OpenAI?
Viewed from an incentives perspective, OpenAI looks more like Google than the New York Times:
- There is considerable incentive to blur the line between advertising and AI responses. It would be so easy to reduce the visual separation between response and advertisement, or to steer conversations toward topics that support more lucrative ads (in Pulse, for example).
- OpenAI has strong incentives to pursue the same kind of toxic engagement-maxing that Facebook does: more time in the product means more ad impressions.
- Chatbots currently have limited lock-in, but that is changing quickly. Features like memory, personalization, and continual learning are very valuable, but they make it much harder to switch platforms.
So that’s hardly ideal: OpenAI has strong incentives to enshittify. I believe they don’t intend to do that, but history suggests that good intentions rarely overcome perverse incentives. Sam says:
we would obviously never run ads in the way Anthropic depicts them. We are not stupid and we know our users would reject that.
I trust that he’s sincere, but he’s wrong about users rejecting it: Google’s success is proof that when the conditions are right, enshittification is a profitable strategy, and users will tolerate it.
Ads and accessibility
Sam makes a really good point: AI is quickly becoming a vital tool. Just as it’s important that the internet be accessible to everyone, it’s important that everyone be able to access AI. Frontier models are expensive to run, and ads are potentially one of our few tools for ensuring that everyone has access to capable AI. But accessibility considerations just underscore the dangers of enshittification.
Enshittified products are worse than paid products because the advertising model drives user-hostile product design. Google search isn’t just bad because of all the ads; it’s bad because Google relentlessly tracks you across the internet in order to target those ads. Facebook is toxic because it serves ragebait to keep you “engaged” and watching ads.
If OpenAI ensures that everyone has access to AI by serving an ethical ad-supported product, that’s great. But if that devolves into “if you can’t afford to pay for good AI, you get toxic, manipulative AI for free”—I’m not sure that actually helps.
Now we wait
Again: what OpenAI is doing today is absolutely fine. The question is whether they will continue to uphold their current standards, or whether they will follow so many others down the path of enshittification.
If their ads become increasingly difficult to distinguish from their content, and if they start finding reasons why it’s OK to include sponsored content in AI responses, then we’ll have our answer. And we’ll have new information about OpenAI’s ability to ethically manage superintelligence.
And conversely: if they hold the line, and succeed where so many others have failed, I will be delighted to admit that my concerns were unfounded. And I will update positively about how much I trust them with other, bigger decisions.
