2025-11-08 Hacker News Top Stories
You Should Write an Agent
Original article, by security expert Thomas Ptacek
Ptacek argues that building an LLM (Large Language Model) Agent is now an essential skill for anyone in computing — whether you're an enthusiast or a skeptic. Understanding LLM agents means grasping how modern AI systems can manage context, role-play, and call tools in the real world.
---
Why Building an LLM Agent Is Easy
Ptacek demonstrates that an LLM agent can be implemented in as little as 15 lines of code using OpenAI’s Responses API:
- Context management: A simple list of strings recording user input and AI output enables multi‑turn conversations.
- Key insight: The “context window” isn’t magic; it’s just structured state storage you manage yourself (see the sketch below).
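A rough sketch of what that core looks like, assuming the official `openai` Python SDK and its Responses API; the model name and prompt wording are placeholders, not Ptacek’s exact code:

```python
# Minimal multi-turn agent core: the "context" is just a growing list of
# messages that gets sent back in full on every call.
# Sketch only; assumes the official `openai` Python SDK and its Responses API.
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment
context = []        # the entire "context window" lives in this list

while True:
    user_input = input("> ")
    context.append({"role": "user", "content": user_input})
    response = client.responses.create(model="gpt-4.1", input=context)
    context.append({"role": "assistant", "content": response.output_text})
    print(response.output_text)
```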
---
Dual-Persona Agents
Example setup:
- Alph — always tells the truth.
- Ralph — always lies.
- The agent randomly switches between personas by selecting different context sets.
Purpose: Show how LLMs can combine role‑play with logical reasoning.
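One minimal way to sketch that setup is to keep a separate context list per persona and pick one at random per question; the prompts below are illustrative, not taken from the article:

```python
# Two personas, two contexts: the "role-play" is nothing more than which
# message list gets sent. Illustrative sketch; the prompts are made up here.
import random
from openai import OpenAI

client = OpenAI()

contexts = {
    "Alph":  [{"role": "system", "content": "You are Alph. You always tell the truth."}],
    "Ralph": [{"role": "system", "content": "You are Ralph. You always lie."}],
}

def ask(question: str) -> str:
    name = random.choice(list(contexts))   # randomly switch persona
    history = contexts[name]
    history.append({"role": "user", "content": question})
    response = client.responses.create(model="gpt-4.1", input=history)
    history.append({"role": "assistant", "content": response.output_text})
    return f"{name}: {response.output_text}"

print(ask("Is the left door the safe one?"))
```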
---
Tool Calling: Giving Agents Real-World Power
- Implemented via a custom `ping` tool.
- The LLM decides on its own which hosts to ping and when to invoke the tool; your code never hard-codes that decision (see the sketch after this list).
- Demonstrates how tool invocation decisions can be driven entirely by the model.
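A sketch of how such a tool might be declared and executed, assuming the Responses API’s function-tool schema; the `run_ping` helper and its flags are illustrative, not the article’s code:

```python
# Declaring a `ping` tool the model can request, plus the helper that
# actually runs it. Sketch under assumptions: the schema follows the
# Responses API function-tool shape; `run_ping` and its flags are illustrative.
import subprocess
from openai import OpenAI

client = OpenAI()

PING_TOOL = {
    "type": "function",
    "name": "ping",
    "description": "Ping a host and return the raw output.",
    "parameters": {
        "type": "object",
        "properties": {
            "host": {"type": "string", "description": "Hostname or IP address to ping"},
        },
        "required": ["host"],
    },
}

def run_ping(host: str) -> str:
    # -c 4: send four probes (Linux/macOS flag)
    result = subprocess.run(["ping", "-c", "4", host], capture_output=True, text=True)
    return result.stdout or result.stderr
```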
---
Core Agent Logic – Step-by-Step
- Provide available tools to the LLM at each call.
- Wait for a tool‑call request from the LLM.
- Execute the requested tool.
- Return the output to the LLM.
- Repeat until done.
Note: your code only executes tools and relays their output; the flow itself stays LLM-driven, as the sketch below shows.
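Wired together with the `PING_TOOL` and `run_ping` sketch above, that loop might look roughly like this (again assuming the Responses API’s function-call output items):

```python
# The whole agent loop: offer the tools, run whatever the model requests,
# feed the output back, and stop once it replies in plain text.
# Continues the tool-calling sketch above (PING_TOOL, run_ping, client).
import json

context = [{"role": "user", "content": "Which responds faster, 1.1.1.1 or 8.8.8.8?"}]

while True:
    response = client.responses.create(model="gpt-4.1", input=context, tools=[PING_TOOL])
    tool_calls = [item for item in response.output if item.type == "function_call"]
    if not tool_calls:
        print(response.output_text)   # no tool requested: the model is done
        break
    context += response.output        # keep the model's tool-call items in context
    for call in tool_calls:
        args = json.loads(call.arguments)
        context.append({
            "type": "function_call_output",
            "call_id": call.call_id,
            "output": run_ping(args["host"]),
        })
```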
---
From Toy Example to Practical Agent
Real-world upgrades might include:
- Persistent storage (e.g., SQLite; see the sketch after this list)
- Expanded toolset (e.g., `traceroute`)
- Porting to another language (e.g., Go)
Even so, the core workflow stays the same.
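For example, persisting the context list could be as simple as serializing it to SQLite between runs. This is a generic sketch, not code from the article; the table name and schema are arbitrary choices:

```python
# Persisting the context list between runs: store it as JSON in SQLite,
# keyed by a conversation id. Generic sketch; the schema is an arbitrary choice.
import json
import sqlite3

db = sqlite3.connect("agent.db")
db.execute("CREATE TABLE IF NOT EXISTS conversations (id TEXT PRIMARY KEY, context TEXT)")

def save_context(conversation_id: str, context: list) -> None:
    db.execute(
        "INSERT OR REPLACE INTO conversations VALUES (?, ?)",
        (conversation_id, json.dumps(context)),
    )
    db.commit()

def load_context(conversation_id: str) -> list:
    row = db.execute(
        "SELECT context FROM conversations WHERE id = ?", (conversation_id,)
    ).fetchone()
    return json.loads(row[0]) if row else []
```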
---
Core Takeaway
LLM agents are not black boxes — they’re simple loops with tool calls.
Grasping their mechanics is the first step toward critically evaluating their capabilities and limitations.
---
Related Application – AiToEarn
AiToEarn (official site) offers an open-source AI content monetization framework that can integrate LLM agent logic into automated multi-platform workflows.
Features:
- Connect AI‑generated logic to cross‑platform publishing.
- Analytics + monetization across Douyin, Kwai, WeChat, Bilibili, Xiaohongshu, Facebook, Instagram, LinkedIn, Threads, YouTube, Pinterest, and X.
- Links: AiToEarn blog | AI model rankings | GitHub
---
Hacker News Response
968 points | 384 comments | 1 day ago
Highlights:
- LLMs behave like composable Unix text processors (`sed`, `awk`).
- Lightweight agents are possible in as little as 25 lines of PHP, and people were building them even before native tool-calling APIs existed.
- Dedicated local models enhance privacy & offline usability.
- Small models can outperform large ones in narrow domains.
- Useful in home automation (context‑aware controls).
- Multi‑agent systems can simplify complex automation.
- Local agents avoid “long context” slowdowns.
- Risks: malicious instructions & injection attacks.
- Agents have no will of their own; they simply execute their inputs and their designers’ choices.
- Integration with Home Assistant can optimize energy usage.
---