Latest

In Math, Chinese Models Never Lose: DeepSeek Dominates Overnight, Math V2 Ends the 'Strongest Math Model' Debate

AI news

DeepSeek-Math V2: Self-Verifiable Mathematical Reasoning Breakthrough

On November 27, without prior announcement, DeepSeek open-sourced its new mathematical reasoning model DeepSeek‑Math V2 (685B parameters) on Hugging Face and GitHub.

> Key milestone: the first openly available math model to reach International Mathematical Olympiad (IMO) gold-medal level.

---

Background & Progress …

Video Understanding Leaderboard Dominator: Kuaishou Keye-VL Flagship Model Open-Sourced, Leading Multimodal Video Perception

AI news

# 🚀 Keye-VL-671B-A37B Official Release

**Kwai's next-generation flagship multimodal LLM**, **Keye-VL-671B-A37B**, marks a significant leap in **visual perception**, **cross-modal alignment**, and **complex reasoning chains**, while retaining the robust general capabilities of its base model. It **sees better**, **thinks deeper**, and **answers more accurately**, ensuring **precise, reliable responses** across everyday and …

ROCK&ROLL: Alibaba's Dual-Framework Collaboration Drives Scalable Agentic RL Applications

Production AI

# **ROLL + ROCK: End-to-End Agentic AI Training Infrastructure**

**Article #131 of 2025** *(Estimated Reading Time: 15 minutes)*

---

## **01: Preface**

**ROLL** is an open-source reinforcement learning (RL) framework for large-scale models, developed by **Alibaba's Future Life Lab** and **Intelligent Engine team**. It provides a **complete RL training pipeline**, enabling models …

Huawei Tackles Near-Trillion-Parameter MoE Inference, Open-Sources Two Key Optimization Technologies

AI news

Machine Heart Report: Ultra-Large MoE Inference Breakthroughs

---

2025 Landscape: Inference Efficiency Takes Center Stage

As 2025 concludes, large AI models have evolved from niche tools into foundational infrastructure powering enterprise systems. In this shift, inference efficiency has become the critical factor for scalable deployment. For ultra-large-scale MoE (Mixture-of-Experts) models, …