/var/log/canartuc
Ilya Sutskever Just Told Us the Scaling Era Is Over

The “age of scaling” is ending. Models are overfitted competitive programmers who ace benchmarks but make mistakes humans never would.

Can Artuc
Nov 27, 2025

The solution isn’t more compute or data. It’s better generalization, value functions like human emotions, and superintelligent learners that actually learn like us. Timeline: 5 to 20 years. If you’re still betting on pure scaling, you’re already behind.

I just watched Ilya Sutskever explain why everything we thought we knew about AI progress is wrong.

Let me be perfectly honest. When Ilya says “we’re back in the age of research,” this isn’t academic theory. This is the co-founder of OpenAI, the architect behind GPT-3, telling us that throwing more compute at pre-training is done. The scaling laws that drove billions in investment? They’re hitting a wall.

What's left is the gap between benchmark performance and real-world economic impact.

The Competitive Programmer Paradox

Ilya uses an analogy that hit me like a freight train.

Imagine two students learning competitive programming. The first practices for 10,000 hours, memorizing every proof technique and solving every problem. They become elite at competitions.

