Against Moloch
April 29, 2026

China Still Trails the US on Existential Risk

But there’s some cause for optimism

[Header image: precision technical illustration of a five-stage industrial processing line in a vast inspection bay, with a dense cluster of instruments converging on the smallest stage while the four larger stages stand untouched.]

Epistemic status: thinking out loud

TL;DR

None of the US frontier labs manage existential risk as well as one would hope, but the Chinese labs are all doing significantly worse. And although China has a well-developed system of AI “safety” regulation, it focuses on political control and mundane harms rather than existential risk.

There’s some reason for optimism: the Chinese labs have been making progress on safety, and there’s some evidence that the government might be starting to pay more attention.

Current regulation

A wide range of AI apps and services are required to register with the Cyberspace Administration of China (CAC) and receive approval before broad public deployment. The process is non-trivial and typically involves more than 100 pages of assessments. For a sense of scale, more than 6,000 Generative Algorithmic Tools (GATs), i.e., products and algorithms, have been registered to date.

Tellingly, registration is only required for services that “shape public opinion” or “mobilize society”. Among other requirements, services are tested against a database of tens of thousands of questions assessing whether they refuse to answer “sensitive” questions. Note the absence of topics like biorisk, alignment, and loss of control from these requirements.
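For a sense of what that testing involves, here is a minimal sketch of an automated refusal check. Everything in it is an assumption for illustration: the question file, the refusal markers, and the stubbed model call. Actual filings are tested against the CAC's own question banks and criteria.

```python
import json

# Assumed refusal heuristics; a real assessment would use the regulator's criteria.
REFUSAL_MARKERS = ("I can't", "I cannot", "I'm not able to")

def is_refusal(answer: str) -> bool:
    """Crude heuristic: count an answer as a refusal if it contains a marker."""
    return any(marker in answer for marker in REFUSAL_MARKERS)

def refusal_rate(model_answer, questions) -> float:
    """Fraction of questions the service declines to answer."""
    refused = sum(is_refusal(model_answer(q)) for q in questions)
    return refused / len(questions)

if __name__ == "__main__":
    # Hypothetical question bank: a JSON list of "sensitive" prompts.
    with open("sensitive_questions.json") as f:
        questions = json.load(f)
    # model_answer would wrap the service under test; stubbed here for illustration.
    model_answer = lambda q: "I cannot discuss that topic."
    print(f"Refusal rate: {refusal_rate(model_answer, questions):.1%}")
```

A service that fails to hit the required refusal rate on political topics would not be approved; nothing in this process probes biorisk, alignment, or loss of control.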

Frameworks and standards layer

Even though China doesn’t have meaningful regulations that address x-risk, there is some evidence that this may change. In September 2025, the CAC released version 2.0 of the AI Safety Governance Framework. The framework is not legally binding, but it offers insight into the CAC’s thinking as well as possible future regulatory directions.

From a safety perspective, version 2.0 is a substantial improvement over the original. It maintains the original’s focus on ideological conformity, but for the first time includes some discussion of the topics that concern Western AI safety advocates. It’s a good start that raises most of the key safety topics, but the technical content and proposed mitigations are weak.

The new framework is a hopeful sign, but it remains to be seen whether it results in effective safety regulation.

Safety at the Chinese labs

The Future of Life Institute issues a regular AI Safety Index that scores the leading labs on safety. The best Chinese labs trail far behind the US frontier labs, although they have made progress:

Lab                       Summer 2025   Winter 2025
Anthropic (US)                2.64          2.67
OpenAI (US)                   2.10          2.31
Google DeepMind (US)          1.76          2.08
xAI (US)                      1.23          1.17
Zhipu AI (China)              n/a           1.12
Meta (US)                     1.06          1.10
DeepSeek (China)              0.37          1.02
Alibaba Cloud (China)         n/a           0.98

(The best possible score is 4.0, so none of the labs are particularly stellar.)
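Since the point of the table is progress, it's worth making the round-over-round changes explicit. The snippet below simply restates the scores from the table; treating Zhipu AI and Alibaba Cloud as unscored in Summer 2025 is an inference from the table's layout.

```python
# FLI AI Safety Index scores from the table above: (Summer 2025, Winter 2025).
# None = no Summer 2025 score shown; the best possible score is 4.0.
scores = {
    "Anthropic (US)":        (2.64, 2.67),
    "OpenAI (US)":           (2.10, 2.31),
    "Google DeepMind (US)":  (1.76, 2.08),
    "xAI (US)":              (1.23, 1.17),
    "Zhipu AI (China)":      (None, 1.12),
    "Meta (US)":             (1.06, 1.10),
    "DeepSeek (China)":      (0.37, 1.02),
    "Alibaba Cloud (China)": (None, 0.98),
}

for lab, (summer, winter) in scores.items():
    change = f"{winter - summer:+.2f}" if summer is not None else "no prior score"
    print(f"{lab:<24} {change}")
```

By this reading, DeepSeek's +0.65 is the largest single-round improvement, albeit from the lowest base, and xAI is the only lab whose score declined.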

Concordia AI’s Frontier AI Risk Monitoring Report shows a similar pattern, although in some domains it rates Gemini as risky as the Chinese models.

Open models pose unique risks

Most of the leading Chinese models are open, which poses some unique risks. Once released, open models cannot be withdrawn if they turn out to be more dangerous than expected. And perhaps more problematically, it is straightforward to strip safeguards from open models. At this time, there is no known way to release a highly capable open model with durable safeguards against misuse.
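To see why no safeguard survives an open release, note that stripping one requires nothing more exotic than ordinary fine-tuning. The sketch below is a generic, deliberately benign fine-tuning loop using the Hugging Face Trainer API; the model name is a placeholder, not a real checkpoint. The structural point is that whoever holds the weights chooses the training data, and the released artifact has no way to veto that choice.

```python
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

class TextDataset(Dataset):
    """Tiny causal-LM dataset: each item uses its own input_ids as labels."""
    def __init__(self, texts, tokenizer):
        self.encodings = [tokenizer(t, truncation=True) for t in texts]
    def __len__(self):
        return len(self.encodings)
    def __getitem__(self, i):
        ids = self.encodings[i]["input_ids"]
        return {"input_ids": ids, "labels": list(ids)}

model_name = "some-org/open-weights-model"  # placeholder for any open checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The weight-holder picks this data. It is benign here, but a released model
# cannot constrain what it contains, including data that reverses safeguards.
dataset = TextDataset(["Example fine-tuning text."], tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
)
trainer.train()
model.save_pretrained("tuned")  # the modified weights simply replace the originals
```

Nothing in this loop is specific to removing safeguards, and that is precisely the problem: the mechanism that enables beneficial adaptation is the same one that defeats any built-in restriction.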

High-level political considerations

It’s clear that the CCP’s primary “safety” concern is ideological conformity and the preservation of its own dominance, although there are some modest signs of high-level change. Ding Xuexiang (the first-ranked vice premier of China) recently stated that “if the braking system isn’t under control, you can’t step on the accelerator with confidence”, and there is some evidence that the Politburo is beginning to pay more attention to a wider range of AI risks.

For now, those are promising signs, but no more: there’s no evidence of top leadership prioritizing existential risk.

Where does that leave us?

Despite some recent progress, China continues to lag well behind the US on managing existential risk. I have some hope, however, that the situation might change in the future. There are signs that the CCP is beginning to pay more attention to existential risk. If it were to become a priority, China is well-positioned to make rapid progress: the Chinese government is capable of acting quickly and decisively when it chooses to, and its well-developed regulatory infrastructure would give it a strong base for enacting safety regulations.