OpenAI Chief Research Officer Mark Chen in Long Interview: Zuckerberg Personally Brought Soup to Poach Talent, So We Took the Soup to Meta

West Wind Report: Insights from OpenAI's Chief Research Officer Mark Chen

Source: Quantum Bit | WeChat Official Account QbitAI

---

Overview

In a wide-ranging and unusually candid interview on Core Memory, hosted by tech journalist Ashlee Vance, Mark Chen, Chief Research Officer at OpenAI, shared inside details on:

  • OpenAI culture, research priorities, and rivalries
  • Recruitment battles (including the now‑famous “Soup War” with Meta)
  • Bold bets on reasoning, pretraining, and scaling
  • Management, talent retention, and alignment strategies
  • Personal anecdotes spanning poker, competitive programming, and Wall Street

---

Highlights and Key Anecdotes

Meta vs OpenAI: The "Soup War"

  • Meta's aggressive poaching: Zuckerberg personally delivered homemade soup to OpenAI researchers.
  • OpenAI’s playful counter: Chen sent Michelin‑grade soup to Meta talent they wanted to hire.
  • Both sides turned food into a recruiting tactic in the ongoing talent war.

---

Poker as a Research Parallel

  • Chen and Scott Gray frequently played poker.
  • He describes poker as a game of probability and expected value, much like research prioritization (a toy calculation below illustrates the framing).
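
As a rough illustration of the expected-value framing Chen alludes to (a toy example of my own, not from the interview), a pot-odds style calculation judges a poker call by its long-run average payoff rather than any single outcome. All numbers below are hypothetical.

```python
# Toy expected-value (EV) check for a poker call; all numbers are hypothetical.

def call_ev(win_prob: float, pot: float, call_cost: float) -> float:
    """EV of calling: win the current pot with probability win_prob,
    otherwise lose the amount paid to call."""
    return win_prob * pot - (1 - win_prob) * call_cost

# Example: ~36% chance to end up with the winning hand, $100 pot, $20 to call.
ev = call_ev(win_prob=0.36, pot=100, call_cost=20)
print(f"EV of calling: ${ev:.2f}")  # positive EV, so calling pays off on average
```

The same arithmetic carries over to research prioritization: a project with a modest chance of a large payoff can still beat a safe bet with a small one.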

---

Research & Roadmap

  • Core research team: ~500 people, working on ~300 active projects.
  • Focus: Identify new paradigms rather than reproducing others’ benchmarks.
  • Compute demand: “If I had 10× compute, I’d max it out in weeks.”

---

“42 Problem”: The Unsolved Benchmark

  • A probability/pseudo-random generator logic puzzle.
  • No model — even “thinking models” — has nailed it yet.

---

Inside OpenAI: Structure & Culture

Leadership Roles

  • Chen works closely with Jakub Pachocki (Chief Scientist) and Sam Altman.
  • Core process: Every 1–2 months, they review all projects, rank priorities, and allocate GPU resources accordingly (a toy sketch of such an allocation follows this list).
  • Talent density: OpenAI has experimented with headcount freezes to keep the hiring bar extremely high.
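
A minimal sketch of what "rank priorities and allocate GPUs accordingly" could look like in code. This is my own toy illustration, not OpenAI's actual process; the project names, priorities, and numbers are invented.

```python
# Hypothetical priority-ordered GPU allocation; names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    priority: int        # lower number = ranked as higher priority in review
    gpus_requested: int

def allocate(projects: list[Project], gpu_budget: int) -> dict[str, int]:
    """Grant each project's request in priority order until the budget runs out."""
    grants: dict[str, int] = {}
    remaining = gpu_budget
    for p in sorted(projects, key=lambda p: p.priority):
        granted = min(p.gpus_requested, remaining)
        grants[p.name] = granted
        remaining -= granted
    return grants

projects = [
    Project("reasoning", priority=1, gpus_requested=6000),
    Project("pretraining", priority=2, gpus_requested=8000),
    Project("multimodal", priority=3, gpus_requested=4000),
]
print(allocate(projects, gpu_budget=10000))
# -> {'reasoning': 6000, 'pretraining': 4000, 'multimodal': 0}
```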

---

Transition from IC to Manager

  • Joined OpenAI in 2018 as a research resident, when the team numbered only about 20 people.
  • Notable IC projects:
      • ImageGPT
      • Codex
  • Managed DALL·E, marking his shift into leadership.

---

Palace Intrigue & Team Alignment

  • During a leadership crisis, Chen rallied ~90% of researchers to petition for Sam Altman's return.
  • Hosted gatherings to maintain unity and morale.

---

Competitive Mindset and Industry Context

Bold Research Bets

  • Reasoning research initiated two years ago — now widely validated.
  • Ongoing push to rebuild "muscles" in pretraining alongside post‑training and RL.

---

Views on Talent and Stars

  • Balancing star hires with robust talent pipelines.
  • Emphasis on bottom‑up idea generation and meritocracy.

---

Open Culture vs Secrecy

  • OpenAI chooses speed and openness over silos.
  • Researchers are encouraged to share ideas freely to accelerate progress.

---

Pretraining, Scaling & AGI Outlook

Chen’s Stance

  • Pretraining remains potent — “Scaling is not dead”.
  • OpenAI will keep pushing aggressively on scaling, algorithmic breakthroughs, and efficiency gains.
  • AGI timelines: avoids rigid dates and instead focuses on whether AI can produce new scientific knowledge.

---

Two Clear Goals

  • Within 1 year: AI integrated as a research intern to boost productivity.
  • Within 2.5 years: AI completes end‑to‑end research autonomously.

---

Science, Alignment, and Safety

OpenAI for Science

  • Aim: Enable all scientists to make Nobel‑level discoveries.
  • Build tools that accelerate research across disciplines.

---

Alignment Strategies

  • Chen oversees OpenAI’s alignment team.
  • The team investigates “scheming” behaviours in RL‑trained models.
  • Design choice: avoid directly supervising the model’s reasoning process, so the chain of thought stays transparent for interpretability.

---

Personal Journey & Views

Career Path

  • Competitive math background; MIT → Wall Street quant → AI research.
  • Learned that the AI field is still “shallow” enough that a newcomer can reach the research frontier in months.

---

Motivation

  • Strong belief in alignment and safety as central challenges.
  • Sees building AGI as a “big bet” worth full commitment.

---

Original Video

Watch the Full Interview

---

Summary Takeaways

  • Talent wars in AI can be unexpectedly human (and humorous).
  • Leadership at OpenAI blends strategic prioritization with open culture.
  • Research bets — especially on reasoning and pretraining — are shaping competitive edges.
  • Scaling compute and AGI timelines remain fluid but optimistic.
  • Alignment and interpretability are critical safeguards in next‑gen models.

---
