Today’s Open Source (2025-10-22): EditScore Released — 7B–72B Parameter Coverage for Accurate Instruction-Guided Image Editing Quality Evaluation

open-source AI

Today’s Open Source (2025-10-22): EditScore Released — 7B–72B Parameter Coverage for Accurate Instruction-Guided Image Editing Quality Evaluation

Daily Discovery of Latest LLMs — 2025-10-22
Location: Hong Kong, China

---

📢 Overview

Today’s highlights include:

* EditScore (Reward Model)
* HumanSense (Comprehensive Benchmark)
* CamCloneMaster (Framework)
* AttnRL (Reinforcement Learning Project)
* Reasoning with Sampling (PyTorch Implementation)
* RewardMap (Toolbox)

---

🏆 Foundation Models

1. EditScore

   Description: A series of state-of-the-art open-source reward models (7B–72B)
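Reward models like EditScore are commonly used for best-of-N selection: generate several candidate edits, score each against the instruction, and keep the highest-scoring one. The sketch below illustrates that loop with a hypothetical placeholder `score` function; EditScore's actual API and scoring behavior are not shown here.

```python
# Toy best-of-N selection driven by a reward model. The `score`
# function is a hypothetical stand-in: a real image-editing reward
# model (e.g. EditScore) would judge visual fidelity and instruction
# adherence, not word overlap.

def score(instruction: str, edited: str) -> float:
    # Placeholder reward: count instruction words echoed in the
    # candidate's description.
    words = set(instruction.lower().split())
    return sum(w in words for w in edited.lower().split())

def best_of_n(instruction: str, candidates: list[str]) -> str:
    # Keep the candidate the reward model scores highest.
    return max(candidates, key=lambda c: score(instruction, c))

candidates = [
    "photo with a red hat added",
    "photo unchanged",
    "photo with a blue scarf added",
]
print(best_of_n("add a red hat", candidates))
```

The same pattern scales from N candidates at inference time to providing the reward signal for reinforcement-learning fine-tuning of an editing model.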

By Honghao Wang
Tsinghua & Giant Network Pioneer MoE Multi-Dialect TTS Framework with Fully Open-Source Data, Code, and Methods

dialect TTS

Tsinghua & Giant Network Pioneer MoE Multi-Dialect TTS Framework with Fully Open-Source Data, Code, and Methods

🌍 Preserving Dialects with Open-Source Speech Synthesis

Dialects — whether Cantonese, Minnan, and Wu in Chinese, Dutch Bildts, French Occitan, or local languages in Africa and South America — are rich in phonetic systems and cultural heritage. Sadly, many are disappearing quickly. If speech technologies fail to support these dialects, the digital divide will

By Honghao Wang
Ant releases and open-sources trillion-parameter reasoning model Ring-1T, approaching GPT-5 capabilities

Ring-1T

Ant releases and open-sources trillion-parameter reasoning model Ring-1T, approaching GPT-5 capabilities

Ring-1T: Ant Group’s Trillion-Parameter Open-Source Reasoning Model

In the early hours of October 14, Ant Group officially released Ring-1T, a trillion-parameter reasoning model with fully open-sourced model weights and training recipes. This release builds upon the Ring-1T-preview version from September 30, extending large-scale reinforcement learning with verifiable rewards (RLVR) to

By Honghao Wang
Qwen3 Turns into a Diffusion Language Model? Runs Without Training from Scratch, at a Record-Breaking 30B Parameters

diffusion models

Qwen3 Turns into a Diffusion Language Model? Runs Without Training from Scratch, at a Record-Breaking 30B Parameters

# RND1: The Largest Open-Source Diffusion Language Model

**Date:** 2025-10-12 · **Location:** Beijing

![image](https://blog.aitoearn.ai/content/images/2025/10/img_001-88.jpg)

---

## Introduction

Diffusion Language Models (DLMs) have fascinated researchers because—unlike **autoregressive (AR) models**, which must generate text left-to-right—**DLMs enable parallel generation**.

**Advantages:**

- Potential for
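The AR-versus-DLM contrast above can be sketched in a few lines of toy code. This is not real model code: the "denoiser" is faked by copying tokens from a fixed target, and the step schedule is simplified. It only shows the shape of the two decoding loops — strictly left-to-right for AR, versus iteratively unmasking several positions at once for a diffusion-style model.

```python
import random

random.seed(0)
TARGET = "diffusion models decode in parallel".split()

def ar_generate(n: int) -> list[str]:
    # Autoregressive decoding: exactly one token per step,
    # strictly left to right.
    out = []
    for i in range(n):
        out.append(TARGET[i])  # stand-in for sampling the next token
    return out

def dlm_generate(n: int, steps: int = 2) -> list[str]:
    # Diffusion-style decoding: start fully masked, then unmask
    # several positions per step, in any order.
    out = ["<mask>"] * n
    masked = list(range(n))
    per_step = max(1, n // steps)
    while masked:
        chosen = random.sample(masked, min(per_step, len(masked)))
        for i in chosen:           # these positions fill in parallel
            out[i] = TARGET[i]     # stand-in for the denoiser's prediction
        masked = [i for i in masked if i not in chosen]
    return out

print(ar_generate(5))
print(dlm_generate(5))
```

The practical appeal is the step count: the AR loop always takes `n` steps for `n` tokens, while the diffusion loop takes roughly `steps` passes regardless of length, which is where the parallel-generation speedup comes from.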

By Honghao Wang