Monday AI Brief #5
Welcome to the shorter and less technical version of Monday AI Radar.
This week brought some great 2025 retrospectives and some brave predictions for 2026. It was hard to pick just one, but I think Prinz did a great job summarizing what is likely to be one of humanity’s last “normal” years. We’ll also try to get our heads around the “jagged frontier” of AI capabilities, look at some impressive new accomplishments, and end on a meditative note as we contemplate life in a world that doesn’t need human labor.
Predictions for 2026
Prinz reviews how fast capabilities advanced in 2025 and makes some strong predictions for 2026. If I had to pick one “what’s gonna happen in 2026?” piece, it would be this one.
Understanding the Jagged Frontier
AI capabilities form a jagged frontier: the models are superhumanly good at some things, but strangely incompetent at others. Ethan Mollick (who helped coin the term) presents several frameworks for making sense of this. He suggests that jaggedness is often caused by specific capability bottlenecks; as companies focus on solving those bottlenecks, expect to see rapid advances in previously jagged parts of the frontier.
ChatGPT Images
OpenAI continues its frenetic release schedule with a new version of ChatGPT Images. This is a very strong update that largely catches up to Google’s Nano Banana Pro. Google still seems to be better at complex infographics, though ChatGPT Images is way ahead of anything that was available just a few months ago.
AI for Systematic Reviews
I missed this when it came out in June, but I think it’s one of the most impressive achievements this year. Cochrane Reviews is the gold standard for systematic review in medicine. Here’s a paper on otto-SR, a framework that uses GPT-4.1 and o3-mini-high to conduct systematic reviews:
Using otto-SR, we reproduced and updated an entire issue of Cochrane reviews (n=12) in two days, representing approximately 12 work-years of traditional systematic review work. … These findings demonstrate that LLMs can autonomously conduct and update systematic reviews with superhuman performance, laying the foundation for automated, scalable, and reliable evidence synthesis.
Bernie Sanders proposes a moratorium on AI data center construction
Every complex problem has a solution that is simple, obvious, and wrong. Daniel Kokotajlo nails it:
I agree with your concerns and your goals, but disagree that this is a good means to achieve them. We need actual AI regulation, not NIMBYism about datacenters. The companies will just build them elsewhere.
ChatGPT and the Meaning of Life
Harvey Lederman has a long but lovely meditation on work, meaning, and loss:
And this round of automation could also lead to unemployment unlike any our grandparents saw. Worse, those of us working now might be especially vulnerable to this loss. Our culture, or anyway mine—professional America of the early 21st century—has apotheosized work, turning it into a central part of who we are. Where others have a sense of place—their particular mountains and trees—we’ve come to locate ourselves with professional attainment, with particular degrees and jobs. For us, ‘workists’ that so many of us have become, technological displacement wouldn’t just be the loss of our jobs. It would be the loss of a central way we have of making sense of our lives.
