Ilya Sutskever Just Told Us the Scaling Era Is Over

The “age of scaling” is ending. Models are overfitted competitive programmers who ace benchmarks but make mistakes humans never would.


The solution isn’t more compute or data. It’s better generalization, value functions like human emotions, and superintelligent learners that actually learn like us. Timeline: 5 to 20 years. If you’re still betting on pure scaling, you’re already behind.

I just watched Ilya Sutskever explain why everything we thought we knew about AI progress is wrong.

Let me be perfectly honest. When Ilya says “we’re back in the age of research,” this isn’t academic theory. This is the co-founder of OpenAI, the architect behind GPT-3, telling us that throwing more compute at pre-training is done. The scaling laws that drove billions in investment? They’re hitting a wall.

The core problem is the gap between benchmark performance and real-world economic impact.

The Competitive Programmer Paradox

Ilya uses an analogy that hit me like a freight train.

Imagine two students learning competitive programming. The first practices for 10,000 hours, memorizing every proof technique and solving every past problem. They become elite at competitions.
