Product Manager Insights: How Can Internet Organizations Upgrade Their AI Capabilities?

Facing the AI Wave: Turning “Repeated Defeat” into “Certain Victory”

> Summary: Through three rounds of internal AI capability upgrades, this article reveals the pitfalls of formalism and proposes a “competition-as-training” model to help enterprises truly transform their thinking, moving from traditional engineering logic to AI-first workflows.

---

Background: The AI Wave and the CEO's Vision

Since early 2024, the AI boom has impacted nearly every industry. To avoid falling behind, our organization initiated three phases of AI capability upgrades.

From the CEO’s perspective, the mission was clear:

  • Push product and R&D teams from engineering thinking to AI thinking.
  • Replace the “broadsword” with an “AK-47” — dramatically increasing AI firepower across development and delivery.
  • Achieve lower costs and higher efficiency by embedding AI into every stage.

The vision inspired many — but our actions failed to meet the goal.

---

Act One: Formalism in Practice

Despite apparent progress, all three “AI offensives” fell into the trap of superficial engagement.

Round 1 (Early 2024): Broad Net — Spark Interest

  • Everyone in product/R&D shared one AI-related topic.
  • Topics varied: FastGPT research, image recognition principles, LangChain framework, GitHub Copilot, Tongyi Lingma experiences.
  • Atmosphere: lively, but mostly information-only.

Round 2 (Late 2024): Narrow Focus — Work Applications

  • Shares had to be tied to actual work.
  • Content: AI-driven agents for workflow automation, DingTalk bots for KB maintenance, AI for log tagging & troubleshooting.
  • Closer to real application, but usage remained shallow.

Round 3 (Late 2025): Tool Adoption — Mandatory Use

  • Purchased Trae (¥649/person) for the whole team.
  • Everyone shared their Trae use cases in daily workflows.
  • Outcome still predictable: formal compliance, minimal depth.

Reality Check: Over 95% of content was “polished filler” — meeting requirements without deep hands-on engagement.

---

Act Two: Root Cause Analysis (Fogg Behavior Model)

The Fogg Behavior Model states that a behavior occurs only when three elements converge at the same moment:

> Behavior = Motivation + Ability + Prompt (B = MAP)

Applying it to our case:

  • Motivation Deficit
      • AI not solving core problems.
      • Our decade-old system's complexity means AI setup costs > manual coding.
      • Result: AI feels like a “dragon-slaying skill” with no real battlefield.
  • Ability Gap
      • Coding alone = deep comfort zone.
      • AI-assisted coding = entirely new skill set, far outside familiarity.
      • Gap from “know” → “use” → “use well” remains large.
  • Weak Prompts
      • One-off admin orders; employees choose trivial cases to share.
      • Infrequent prompts → no sustained habit formation.
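The model's logic can be sketched as a simple threshold check. This is purely an illustration of why our prompts failed: the numeric scores and the threshold are hypothetical, not part of Fogg's model.

```python
def behavior_occurs(motivation: float, ability: float, prompted: bool,
                    threshold: float = 0.5) -> bool:
    """Illustrative B = MAP check: a behavior fires only when a prompt
    arrives while motivation x ability sits above the action threshold."""
    return prompted and (motivation * ability) > threshold

# Our situation: prompts existed (admin orders), but motivation and
# ability were both low, so the behavior never fired.
behavior_occurs(motivation=0.3, ability=0.3, prompted=True)  # False
```

In other words, issuing more prompts alone cannot work; motivation and ability must rise at the same time.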

Dominant mindset became:

  • “Explaining context to AI takes longer than coding myself.”
  • “AI is useless for the real battles.”
  • “I’m too busy for AI.”

---

Act Three: Decision Point — Four Strategic Paths

Possible actions:

  • Force Use — Enforce sharing & track AI usage, regardless of willingness.
  • Pilot First — Run in small team, scale after proven success.
  • Competition-as-Training — Allocate 20% work time for real AI projects in team-based contests, with rewards.
  • Full Renewal — Remove non-adopters and hire AI-first talent.

Systems Thinking Insight:

  • Move from intermittent “sharing” → ongoing cycles of application, feedback, skill-building.
  • Replace formality with competitive, gamified environments where AI use impacts real outcomes.

---

My Choice: Option 3 — Competition-as-Training

Why?

  • Systemic Solution: Only options 3 & 4 change the structure; option 3 carries lower cost and risk.
  • Deliberate Practice:
      • Clear goals (“build a valuable AI project” beats “complete a learning session”).
      • Sufficient focus (dedicated 20% time).
      • Real feedback (market results vs. a silent meeting room).
      • Comfort-zone push (team collaboration that forces skill growth).
  • Proven Model: Tencent's “horse racing,” ByteDance's App Factory, and Dedao's AI contests all show its power.

---

Act Four: Execution Blueprint for an “AI Practice Competition”

Step 1 — Define the Goal

Enable product & R&D to complete a true AI-thinking project:

  • Traditional: specs → design → code → testing → launch
  • AI-thinking: interact via natural language → auto-generate specs, design, code, test, release — humans guide & refine.

---

Step 2 — Set the Battlefield

  • Self-selected projects, must meet the ≥50% AI-generated work rule.
  • Can be a plugin for SaaS or a standalone Agent.

Plugin Examples:

  • Factory Compliance Plugin: Dual reporting for client audits.
  • Attendance Compliance Plugin: Strict clock-in rules.
  • Attendance Audit Plugin: Auto-generate signed compliance reports.

Agent Examples:

  • Onboarding Assistant: Guides new hires start-to-finish.
  • HRBP Advisor: Detects attrition risks & advises managers.
  • Compensation Consultant: AI data analysis for pay decisions.
  • Human Cost Analyst: Fast NL-driven cost reports.
  • Performance Coach: SMART OKR/KPI setting support.

---

Step 3 — Set Rewards

Tie incentives to real impact:

  • Sales Commission: 30% commission if the project is sold during the competition.
  • Cash Award: the top 1–3 projects each receive ¥2000.
  • Resource Support: 1–3 projects get full company backing for product launch.

---

Step 4 — Define Schedule

  • Time Allocation: Up to 20% work hours.
  • Duration: ~3 months.
  • Evaluation: Public vote (one vote per employee) + expert vote (ten votes per judge).
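As a sketch, a combined tally under these voting rules could look like the following. The data shapes and the example numbers are assumptions for illustration; the article does not specify how public and expert votes are aggregated.

```python
from collections import Counter

def tally(public_ballots, expert_ballots):
    """Sum votes per project: each employee casts one public vote;
    each judge distributes ten expert votes across projects."""
    scores = Counter()
    for project in public_ballots:        # one project name per employee
        scores[project] += 1
    for allocation in expert_ballots:     # dict {project: votes}, summing to 10
        for project, votes in allocation.items():
            scores[project] += votes
    return scores.most_common()           # ranked (project, score) pairs

# Hypothetical example: three employees, one judge.
tally(["A", "B", "A"], [{"A": 6, "B": 4}])  # [('A', 8), ('B', 5)]
```

A simple unweighted sum like this keeps the rules transparent; a real rollout might weight expert votes differently.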

---

Finale: Build Systems, Don't Rely on Individuals

To truly upgrade AI capability:

  • Design Mechanisms, Not Just Commands: Give time, platforms, rules.
  • Create Battlefields, Not Only Issue Weapons: Real business scenarios where AI proves value.
  • Seek Feedback, Not Form: Market & customer validation > quiet internal applause.

> Core Belief: AI transformation succeeds by reshaping the system environment, not by forcing individual will.

---

———— / E N D / ————
