The Rise of AI Agents: Is AI+Software Development at a New Turning Point?

# 🚀 The LLM Native Development Era  

Large Language Models (LLMs) are **profoundly transforming R&D**, evolving from mere auxiliary assistants to **core productivity drivers**. From assisted programming to autonomous *Coding Agents*, we’re witnessing AI’s shift toward being native to the software development process.  

Has the **native AI-based development era** truly arrived? Let’s explore insights from the *InfoQ “Geek Meets”* livestream in collaboration with the AICon Global AI Development & Application Conference, featuring:  

- **Wu Chaoxiong** – Senior Product Manager, Ping An Technology  
- **Yan Zhijie** – Senior Architect, Baidu  
- **Du Pei** – Client-side Architect, Autohome  

> 🗓 AICon Beijing 2025: [Full conference schedule](https://aicon.infoq.cn/202512/beijing/schedule)  
> 🎥 Replay: [Livestream video](https://www.infoq.cn/video/KGWqbzH6IKGhgoUiKDEi)  

---

## 📌 Key Highlights

- **AI in Testing** → More of an *efficiency booster* than a replacement; *native development* is not fully here yet.  
- **Prompting as Role-Play** → Effective prompts set clear roles (e.g., “as a domain expert”), aligning output to business logic.  
- **Coding Agents** → Towards *general intelligent agents* capable of independently completing dev tasks.  
- **Quality Over Quantity** → AI code must be **maintainable** and meet product needs, not just “run.”  
- **Tool Choice ≠ Outcome Quality** → Evaluation should be based on *impact and final output*, not whether AI was used.

---

## 1️⃣ Is the LLM Native Development Era Here?

### Yan Zhijie: *"Half Fire, Half Sea"*
- **Fire**: AI shines in small, well-structured tasks and 0 → 1 innovation.  
- **Sea**: Huge challenges in integrating AI into large, complex codebases.  

**Current Trends**:
- Growth in AI-powered dev tools beyond IDEs:  
  - **Devin**, **SWE Agent** (web-based DevOps integration)  
  - **Claude Code** (CLI-based deep workflow integration)  
- Rising disclosure of *AI-generated code percentage* by companies — some surpass **50%**, reshaping development culture.  

**For Non-Programmers**: LLMs make previously out-of-reach tasks achievable (e.g., editing images with Doubao without any Photoshop skills).  
**For Programmers**: Still at a **critical pre-shift stage**.

---

### Wu Chaoxiong: AI in Testing
- **Strengths**: Data generation/analysis, monitoring, requirements-based test case creation.  
- **Weaknesses**: Complex microservices, data topology — requires human judgment.  
- **Conclusion**: AI is an **efficiency tool**, not a replacement. Native AI is *midway up the slope*.

---

## 2️⃣ Human-Driven → AI-Driven Development Steps

### Du Pei: Design-to-Code
- Early experiments (2023) suffered from high hallucinations.  
- **Breakthrough**: Multimodal models with image understanding improved UI intent recognition.  
- **Results**: 80–90% usable rate in generated code; still requires manual pixel-level review.  

Optimized **multi-end code conversion** (H5 ↔ Mini programs ↔ Frameworks):  
- AI quality: ~70%  
- Overall efficiency boost: **1.5×**  

**Extra Gains**: Faster large-scale engineering analysis, bug reduction (30–40% in testing stage).

---

### Yan Zhijie: Filling Automation Gaps
- Example: Wenxin Fast Code — automatic bilingual code version maintenance.  
- **Best Fit**: Repetitive, mechanical tasks; 0 → 1 prototyping; Figma → Code pipelines.  

---

### Wu Chaoxiong: Test Automation
- AI auto-generates runnable test scripts considering **DB constraints, API logic, parameters**.  
- Complex API testing time reduced from hours to minutes.  
- **Goal**: Full-process automation (generation → execution → reporting) by next year.
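As a hedged illustration of the kind of runnable test script described above, the sketch below encodes a database-style constraint and checks an API-style handler against boundary parameters derived from it. The endpoint name, the constraint (0 < amount ≤ 10000), and all values are invented for this example, not taken from Ping An's actual systems.

```python
def create_refund(amount: float) -> dict:
    """Toy stand-in for an API under test, enforcing a DB CHECK-style constraint."""
    if not (0 < amount <= 10000):
        return {"status": 422, "error": "amount violates DB constraint"}
    return {"status": 201, "refund_amount": amount}

# Boundary cases an AI might derive from the constraint 0 < amount <= 10000
cases = [
    (-1,    422),  # below lower bound
    (0,     422),  # lower bound is exclusive
    (0.01,  201),  # just inside the range
    (10000, 201),  # upper bound is inclusive
    (10001, 422),  # above upper bound
]

for amount, expected_status in cases:
    result = create_refund(amount)
    assert result["status"] == expected_status, (amount, result)
print("all boundary cases passed")
```

The point of auto-generation is that the boundary list is derived mechanically from schema constraints and API parameter definitions, which is what shrinks hours of manual case design down to minutes.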

---

## 3️⃣ First Wall in AI R&D Adoption

**Challenges**:
1. **Stability Issues** → AI misinterpreting specifications and making illogical modifications.  
2. **Trust Gaps** → Early failures cause hesitation among users.  
3. **Prompt Quality** → Simple/vague prompts lead to poor or incorrect outputs.  

**Skills Needed**:
- **Prompt Engineering** → Role + Scenario + Objective + Task  
- **Knowledge Engineering** → Making implicit team standards explicit for AI learning.
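The Role + Scenario + Objective + Task structure above can be sketched as a small prompt builder. This is a minimal illustration of the pattern, not any panelist's actual tooling; the field names and example values are assumptions.

```python
def build_prompt(role: str, scenario: str, objective: str, task: str) -> str:
    """Assemble a structured prompt from the four components named above."""
    return (
        f"You are {role}.\n"
        f"Scenario: {scenario}\n"
        f"Objective: {objective}\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    role="a senior payments-domain test engineer",
    scenario="we are testing a REST API that creates refund orders",
    objective="cover valid inputs as well as database-constraint violations",
    task="list boundary test cases with expected results",
)
print(prompt)
```

Filling the role slot first is what the panel calls role-play prompting: it anchors the model to a domain expert's perspective before the concrete task is stated, so outputs align with business logic rather than generic answers.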

---

## 4️⃣ From Tool to Collaborator

**Agents vs Assistants**:
- Assistants = single-point help.  
- Agents = chain tasks into a **closed-loop** (code → test → review).  

**Requirements**:
- Maintainable, logically sound code.  
- Dynamic runtime testing for mobile/complex apps.  
- Gradual integration via plugin-based ecosystems before “grand platforms.”
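The code → test → review closed loop that separates agents from single-point assistants can be sketched as below. The generate/test/review steps are stubs standing in for LLM and toolchain calls; the retry budget, function names, and failure simulation are all invented for illustration.

```python
def generate(spec, feedback):
    # Stub: a real agent would call an LLM with the spec and prior feedback.
    return f"code for: {spec}" + (" (revised)" if feedback else "")

def run_tests(code):
    # Stub: a real agent would execute generated tests against the code.
    # Here we pretend the first draft fails and the revision passes.
    return "(revised)" in code

def review(code):
    # Stub: return feedback if problems are found, else None to approve.
    return None if "(revised)" in code else "first draft needs rework"

def agent_loop(spec, max_iters=3):
    """Chain generate -> test -> review into a closed loop with a retry budget."""
    feedback = None
    for _ in range(max_iters):
        code = generate(spec, feedback)
        if not run_tests(code):
            feedback = "tests failed"
            continue
        feedback = review(code)
        if feedback is None:
            return code  # loop closes: code passed both tests and review
    raise RuntimeError("retry budget exhausted")

print(agent_loop("add a refund endpoint"))
```

An assistant stops after the first `generate` call and hands the draft back to a human; the agent feeds test and review results back into the next generation until the loop closes or the budget runs out.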

---

## 5️⃣ Value & People in the AI Era

Roles with **amplified value in next 2 years**:
- **Holistic product managers** (business + architecture + testing).  
- **Full-stack engineers** expanding capability through AI.  
- **Architects** orchestrating system-level design & exception handling.  
- **Clear communicators** who can delegate tasks to AI precisely, unlocking 5–10× efficiency gains.  

**Core Career Advice**:
- Be more than an executor → Become an **evaluator** or **decision-maker**.  
- Maintain strong foundational skills; AI is an enhancer, not a substitute.  

---

> 💡 **Takeaway**: True AI-native development demands *stable AI outputs*, *deep human oversight*, and *structured workflows*. The winners will be those combining **technical breadth, architectural thinking, and communication skill** — all while aligning AI with business goals.

---


By Honghao Wang