LLM optimization

Today’s Open Source (2025-11-3): Kuaishou and Nanjing University Lab Co-Develop HiPO for Hybrid Policy Optimization in LLM Dynamic Reasoning, Dual-Mode Switching Balances Accuracy and Efficiency

🏆 Foundational Models

① Project: HiPO

HiPO-8B is a novel reinforcement learning framework based on Hybrid Policy Optimization, enabling dynamic reasoning capabilities in large language models (LLMs).

Key Highlights:

* Developed by the KwaiKAT team at Kuaishou in collaboration with NJU-LINK Laboratory (Nanjing University) and ARiSE Laboratory.
* Features “think-on” and “think-off” mode switching to
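To make the dual-mode idea concrete, here is a minimal Python sketch of the reward shaping such a framework could use. Everything in it (`llm_generate`, `verify`, the `1e-3` penalty) is an illustrative assumption, not the released HiPO-8B code; the point is only the shape of the objective: reward accuracy, penalize token cost, and let the policy learn when a reasoning trace is worth paying for.

```python
import random

def llm_generate(prompt: str, think: bool) -> tuple[str, int]:
    """Stand-in for a model call; returns (answer_text, tokens_used)."""
    if think:
        # "think-on": emit a reasoning trace before the answer (more tokens).
        return "<think>step by step...</think> 42", 400
    # "think-off": answer directly (cheap).
    return "42", 60

def verify(answer: str) -> bool:
    """Task-specific correctness check (stubbed for the sketch)."""
    return answer.strip().endswith("42")

def hybrid_reward(correct: bool, tokens: int, lam: float = 1e-3) -> float:
    # Accuracy term minus a token-cost penalty: the signal that lets an
    # RL policy learn to switch thinking off when it is not needed.
    return (1.0 if correct else 0.0) - lam * tokens

def sample_rollout(prompt: str, p_think: float) -> float:
    """One on-policy sample: pick a mode, generate, score."""
    think = random.random() < p_think  # mode decision (the policy's output)
    answer, tokens = llm_generate(prompt, think)
    return hybrid_reward(verify(answer), tokens)

print(sample_rollout("What is 6 * 7?", p_think=0.5))
```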

By Honghao Wang

AI testing

New AI Development Rules: Testing to Ensure Every Deployment

# Bridging AI Evaluation with Real-World Business Impact

*Magdalena Picariello reframes AI conversations around measurable business value, iterative development, and feedback-driven optimization.*

---

## Introduction

Magdalena shifts focus from **algorithms and metrics** to **tangible business outcomes**, advocating for evaluation systems that go beyond accuracy. Her approach:

- **Continuous feedback loops**
- **Iterative

By Honghao Wang

Fast-dLLM

NVIDIA, HKU, and MIT Launch Fast-dLLM v2: 2.5× End-to-End Throughput Boost

Autoregressive (AR) LLMs vs. Diffusion LLMs (dLLMs)

Autoregressive (AR) large language models generate output sequentially, token by token, which limits inference efficiency. Diffusion-type LLMs (dLLMs) allow parallel generation, but traditionally struggle with:

* KV cache reuse
* Variable-length generation
* Consistently outperforming AR in quality

---

Fast-dLLM v2 — Pragmatic Parallel Decoding

Fast-dLLM v2 adapts a
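The excerpt cuts off before describing v2's mechanism, so the sketch below shows one generic pattern for parallel decoding in dLLMs: predict all masked positions in a block at once, then commit only the tokens whose confidence clears a threshold. This is a hypothetical illustration of the decoding style, not Fast-dLLM v2's actual algorithm; `predict_block`, `MASK`, and `tau` are assumed names.

```python
import numpy as np

MASK = -1  # sentinel for a not-yet-decoded position

def predict_block(block: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Stand-in for a denoiser forward pass over one block.
    Returns (predicted_token_ids, per-position confidences)."""
    rng = np.random.default_rng()
    return rng.integers(0, 50_000, size=block.shape), rng.random(block.shape)

def decode_block(block_len: int, tau: float = 0.9) -> np.ndarray:
    """Iteratively fill one block, committing confident tokens in parallel."""
    block = np.full(block_len, MASK)
    # At least one token commits per step, so block_len steps always suffice.
    for _ in range(block_len):
        masked = block == MASK
        if not masked.any():
            break  # block fully decoded
        tokens, conf = predict_block(block)
        # Commit every masked position whose confidence clears the threshold;
        # if none do, commit the single most confident one to keep progressing.
        commit = masked & (conf >= tau)
        if not commit.any():
            commit[np.flatnonzero(masked)[conf[masked].argmax()]] = True
        block[commit] = tokens[commit]
    return block

print(decode_block(16))
```

Because completed positions never change, the prefix of committed tokens behaves like ordinary generated text, which is what makes AR-style KV cache reuse plausible in this setting.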

By Honghao Wang

context engineering

A Brief Discussion on Context Engineering: From Claude Code, Manus, and Kiro — The Shift from Prompt Engineering to Context Engineering

# 2025-10-24 · Zhejiang

---

## Introduction

With the rapid growth of AI Agents, a new term — **Context Engineering** — has emerged. Many are asking:

- How does it differ

By Honghao Wang

Tencent AI

Tencent Releases Ultra-Low-Cost AI Training Method: $17 Beats a $9,700 Fine-Tuning Approach

Training-Free GRPO: A Cost-Effective Breakthrough in LLM Optimization

Only 120 RMB — outperforming fine-tuning that costs 70,000 RMB! Tencent has introduced a new method for upgrading large-model agents: Training-Free Group Relative Policy Optimization (Training-Free GRPO).

Key idea: No parameter adjustment required — the method leverages brief experiential learning within prompts to
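A rough sketch of what group-relative optimization without parameter updates could look like, following the excerpt's description: sample a group of rollouts, compare them, and keep the distilled lesson as prompt context. Every name here (`call_llm`, `score`, `experience_library`) is a hypothetical stand-in, not Tencent's implementation.

```python
experience_library: list[str] = []  # textual "policy", updated instead of weights

def call_llm(prompt: str) -> str:
    """Stand-in for a frozen-LLM API call."""
    return "candidate answer"

def score(answer: str, task: dict) -> float:
    """Task reward, e.g. exact match against a reference (stubbed)."""
    return float(answer.strip() == task["reference"])

def training_free_grpo_step(task: dict, group_size: int = 4) -> None:
    context = "\n".join(experience_library)
    # Sample a group of rollouts under the current experience context.
    rollouts = [call_llm(f"{context}\n\n{task['prompt']}") for _ in range(group_size)]
    rewards = [score(r, task) for r in rollouts]
    best = max(range(group_size), key=rewards.__getitem__)
    worst = min(range(group_size), key=rewards.__getitem__)
    if rewards[best] > rewards[worst]:
        # Group-relative signal: distill why the best rollout beat the worst
        # into a one-line lesson, kept as reusable prompt context.
        lesson = call_llm(
            f"Better answer:\n{rollouts[best]}\n\nWorse answer:\n{rollouts[worst]}\n\n"
            "In one sentence, state what the better answer did right."
        )
        experience_library.append(lesson)

training_free_grpo_step({"prompt": "Solve: 6 * 7", "reference": "42"})
```

The "policy update" is just appending a sentence to `experience_library`; the model weights never change, which is where the cost savings come from.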

By Honghao Wang

AI research

Stanford’s New Paper: Fine-Tuning is Dead, Long Live Autonomous In-Context Learning

Farewell to Traditional Fine-Tuning: Introducing ACE

A groundbreaking study from Stanford University, SambaNova Systems, and the University of California, Berkeley has demonstrated a transformative approach to improving AI models — without adjusting a single weight. The method, called Agentic Context Engineering (ACE), relies on context engineering rather than retraining. It autonomously
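Based on that description, here is a hypothetical sketch of a weight-free, self-improving context loop: a generator attempts the task, a reflector turns the outcome into a lesson, and a curator folds the lesson into a persistent playbook that future prompts reuse. The role split and all function names are illustrative assumptions, not the paper's code.

```python
playbook: list[str] = []  # the evolving context; the only thing that "learns"

def call_llm(prompt: str) -> str:
    """Stand-in for a frozen-LLM API call."""
    return "response"

def ace_iteration(task: str, feedback_fn) -> str:
    """One self-improvement cycle: generate, reflect, curate."""
    rules = "\n".join(f"- {r}" for r in playbook)
    answer = call_llm(f"Playbook:\n{rules}\n\nTask: {task}")  # generator
    feedback = feedback_fn(answer)                            # environment signal
    lesson = call_llm(                                        # reflector
        f"Task: {task}\nAnswer: {answer}\nFeedback: {feedback}\n"
        "Write one concise, reusable rule for similar tasks."
    )
    if lesson not in playbook:   # curator: append a small delta
        playbook.append(lesson)  # rather than rewriting the context
    return answer

ace_iteration("Summarize a bug report", lambda ans: "too vague")
```

Since only the playbook grows between iterations, improvement is fully autonomous and requires no gradient steps on the underlying model.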

By Honghao Wang