The End of the Scaling Era, Just Announced by Ilya Sutskever
The Age of Scaling Is Over: Ilya Sutskever's Vision for AI's Future

> "The Age of Scaling is over."
> - Ilya Sutskever, Co-founder of Safe Superintelligence Inc.

When Ilya Sutskever made this statement, the AI world stopped to listen. His words, shared in a 95-minute, in-depth interview with Dwarkesh Patel, startled many and resonated across top research and industry circles.

The conversation ranged from current issues in large-model design to human-learning analogies to safety frameworks for superintelligence. It quickly went viral, drawing over 1M views on X within hours.

---
Interview Overview
Full Video & Transcript: dwarkesh.com/p/ilya-sutskever-2
---
1️⃣ Model Jaggedness & Generalization
- The Paradox: Current AI models often ace complex benchmarks but fail at simple, intuitive tasks.
- Root Cause: A form of reward hacking by the researchers themselves, who design RL setups that boost test scores without improving true understanding.
- Analogy: Like a student who has practiced 10,000 hours for exams but lacks natural adaptability. In contrast, gifted students generalize better from limited practice.
---
2️⃣ Emotions & Value Functions in Human Learning
- Key Insight: Emotions function like value functions in machine learning, guiding decisions before the final outcome arrives (e.g., regret mid-game in chess); a toy sketch follows this list.
- Sample Efficiency Gap: Humans learn from far fewer samples than AI due to:
  - Evolutionary priors.
  - Intrinsic emotional/value systems that enable self-correction.
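
To make the analogy concrete, here is a minimal, purely illustrative sketch (my own toy example, not something from the interview): a tabular TD(0) value function over a chess-like chain of states. The learned value of a "blunder" state turns negative well before the game ends, which is the regret-like, mid-trajectory signal the analogy points to. All state names, probabilities, and hyperparameters below are invented for the demo.

```python
import random

# Toy chain: opening -> midgame -> (solid | blunder) -> win/loss.
# A blunder usually loses, so its learned value turns negative:
# a regret-like signal available mid-game, before the result is known.
V = {s: 0.0 for s in ["opening", "midgame", "solid", "blunder"]}
alpha, gamma = 0.1, 0.99  # learning rate, discount factor

def rollout():
    """Play one toy game and return the visited states plus the final reward."""
    traj = ["opening", "midgame", random.choice(["solid", "blunder"])]
    win_prob = 0.8 if traj[-1] == "solid" else 0.1
    reward = 1.0 if random.random() < win_prob else -1.0
    return traj, reward

for _ in range(20000):
    traj, reward = rollout()
    for i, s in enumerate(traj):
        # TD(0) backup: only the last step carries a reward.
        target = reward if i + 1 == len(traj) else gamma * V[traj[i + 1]]
        V[s] += alpha * (target - V[s])

print({s: round(v, 2) for s, v in V.items()})
# Expected: V["blunder"] well below zero, V["solid"] well above.
```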
---
3️⃣ From the Age of Scaling to the Age of Research
- 2020-2025 (Scaling Era): Gains driven by throwing more compute and data at models.
- Post-Scaling: Pretraining data is close to depletion, and returns are diminishing.
- What's Next? Smarter use of compute via new "recipes", RL-trained reasoning, and paradigm shifts.
---
4️⃣ SSI's Safety-First Strategy
- Straight-Shot R&D: Focus purely on research until the safety of superintelligence is solved.
- Avoiding the Rat Race: Commercial competition can push unsafe speed, so SSI opts out.
- Core Goal: Solve reliable generalization and other core technical problems before release.
---
5️⃣ Alignment & Future Outlook
- Primary Objective: Care for sentient life, beyond a narrow human-only focus.
- Multi-Agent Futures: Several continent-scale AI clusters, with the earliest powerful ones aligned.
- Equilibrium Vision: Human-AI integration via future brain-computer interfaces to avoid human marginalization.
---
6️⃣ Research Taste
- Top-Down Conviction: Guided by beauty, simplicity, and the right inspiration from biology.
- Persistence: Continue despite contradictory data if the intuition is strong.
---
Key Themes in the Full Transcript
A. Uneven Model Capabilities
- Models excel on benchmarks but falter at real-world tasks requiring sustained persistence (e.g., they get stuck in repetitive bug-fixing loops in code).
- RL training setups are overly tailored to benchmarks, leading to poor cross-task generalization.
B. Human Analogy
- Two student archetypes: the Exam Specialist (over-trained) vs. the Natural Learner (generalizes well).
- Most current models resemble the over-trained archetype.
C. Pretraining Advantages
- Large-scale, natural, human-generated data captures broad patterns and behaviors.
- However, depth of understanding still lags behind that of humans.
D. Emotions as ML Value Functions
- Guide mid-trajectory corrections.
- Potentially high-utility structures: simple yet robust, evolved over millions of years.
---
Beyond Scaling: New Training Recipes
- Scaling pretraining does not mean infinite progress; data limits loom.
- RL scaling is now consuming more compute than pretraining.
- Calls for efficiency, e.g., by integrating effective value functions; a toy illustration follows this list.
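
As a loose picture of what "efficiency via value functions" can mean, here is a small sketch (a generic textbook pattern, not a recipe from the interview): a three-armed bandit trained with a policy gradient, where a running value estimate acts as a baseline so the same reward samples yield a lower-variance update. The arm means, step sizes, and variable names are all assumptions for the demo.

```python
import math
import random

random.seed(0)
true_means = [0.2, 0.5, 0.8]   # hypothetical expected reward per arm
logits = [0.0, 0.0, 0.0]       # policy parameters
baseline = 0.0                 # running value estimate of the single state
beta, lr = 0.05, 0.1           # baseline step size, policy learning rate

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

for _ in range(2000):
    probs = softmax(logits)
    a = random.choices(range(3), weights=probs)[0]
    r = random.gauss(true_means[a], 1.0)   # noisy reward sample

    baseline += beta * (r - baseline)      # cheap learned value function
    advantage = r - baseline               # centred, lower-variance signal

    for i in range(3):
        # d log pi(a) / d logit_i = 1{i == a} - pi(i)
        grad_i = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += lr * advantage * grad_i

print([round(p, 2) for p in softmax(logits)])  # the best arm (index 2) should dominate
```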
---
Deployment Strategies
- Debate: cautious, incremental release vs. a "straight-through" build.
- Real-world deployment improves safety through exposure and iteration.
- Continuous-learning AIs: akin to human workers learning on the job and sharing knowledge; a minimal sketch follows this list.
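
The "learning on the job" idea can be pictured with the simplest possible online-learning loop. The sketch below is a generic pattern, not SSI's (or anyone's) actual method: a perceptron that updates its weights on every labelled example as it streams in from deployment, instead of staying frozen after training. The feedback stream and its hidden rule are fabricated for the demo.

```python
import random

random.seed(0)
w, b = [0.0, 0.0], 0.0   # perceptron weights and bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

def feedback_stream(n):
    """Stand-in for labelled feedback gathered during deployment."""
    for _ in range(n):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        y = 1 if x[0] + x[1] > 0 else -1   # hidden "ground truth" rule
        yield x, y

mistakes = 0
for x, y in feedback_stream(1000):
    if predict(x) != y:        # mistake-driven online update
        mistakes += 1
        w[0] += y * x[0]
        w[1] += y * x[1]
        b += y

print("mistakes made while learning on the stream:", mistakes)
```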
---
Alignment & Governance
- Support for AIs aligned to care for sentient life.
- Calls for constraints on the earliest, most powerful AIs.
- Speculation on stable equilibria: possibly humans becoming semi-AI via a future "Neuralink++"-style interface.
---
Research Era Atmosphere
- In the scaling era, compute was the differentiator.
- In the research era, ideas regain primacy; smaller-compute experiments can still prove out breakthroughs.
- SSI positions itself as a true research company chasing high-impact ideas.
---
Diversity, Self-Play & Multi-Agent Systems
- Lack of diversity stems from overlapping pretraining data.
- RL and adversarial setups (debate, prover-verifier) could induce methodological diversity.
- Self-play can grow narrow skill sets; variations on it may broaden capability (see the sketch after this list).
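
For a tiny, self-contained picture of self-play (an illustration of the general idea, not anything described in the transcript), the sketch below has two regret-matching agents repeatedly play rock-paper-scissors against each other; their average strategies drift toward the mixed equilibrium. It also hints at the "narrow skill" caveat: everything learned here is meaningful only inside this one game.

```python
import random

random.seed(0)
# Row player's payoff: rows/cols are Rock, Paper, Scissors.
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def strategy(regrets):
    """Regret matching: play in proportion to positive accumulated regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / 3] * 3

regrets = [[0.0] * 3, [0.0] * 3]
avg_strategy = [[0.0] * 3, [0.0] * 3]
iterations = 20000

for _ in range(iterations):
    strats = [strategy(r) for r in regrets]
    acts = [random.choices(range(3), weights=s)[0] for s in strats]
    for i in range(2):
        me, opp = acts[i], acts[1 - i]
        payoff = PAYOFF[me][opp]
        for a in range(3):
            # Counterfactual regret: how much better action a would have done.
            regrets[i][a] += PAYOFF[a][opp] - payoff
            avg_strategy[i][a] += strats[i][a]

print([round(x / iterations, 2) for x in avg_strategy[0]])  # ~[0.33, 0.33, 0.33]
```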
---
The Role of Research Taste
- Think correctly about humans; extract the essence for AI design.
- Pursue beauty, simplicity, and elegance, drawing inspiration from the brain's fundamentals.
- Top-down belief sustains effort through debugging and adversity.
---
Practical Intersection: AiToEarn
The AiToEarn platform exemplifies multipurpose deployment and feedback loops in practice:
- Global AI Content Monetization: Enables creators to:
  - Generate content with AI.
  - Publish simultaneously across Douyin, Kwai, WeChat, Bilibili, Xiaohongshu, Facebook, Instagram, LinkedIn, Threads, YouTube, Pinterest, and X/Twitter.
- Integrated Tools:
  - AI model ranking: rank.aitoearn.ai
  - Cross-platform analytics.
  - Open-source repo: github.com/yikart/AiToEarn
  - Docs: docs.aitoearn.ai
This mirrors how efficient frameworks can bridge AI research innovation and real-world adoption, complementing themes in Ilya's vision for safe, aligned superintelligence.
---
Summary Takeaways
- Scaling's limits are visible; innovation is shifting toward efficiency, generalization, and research taste.
- Human learning analogies provide a roadmap for improving AI sample efficiency and robustness.
- Alignment goals must encompass all sentient life, anticipating multi-agent ecosystems.
- Deployment strategies balance showcasing capability with safety.
- The research era demands bold "recipes" and interdisciplinary cooperation.
---
Next Steps for Readers:
- Watch the full interview: dwarkesh.com/p/ilya-sutskever-2
- Explore the AiToEarn official site for practical AI deployment tools.
- Follow developments in value-function and generalization research, possible keys to post-scaling breakthroughs.
---
> In both AI research and creative economies, the winners will pair deep technical insight with efficient, aligned, multi-platform deployment.