Research Agent, Writing Agent, Editing Agent
Session 9.3 · ~5 min read
The Most Common Chain
The Research-Writing-Editing chain is the workhorse of content production. It is the most tested, most refined, and most broadly applicable agent chain you will build. This session walks through a complete implementation: system prompts, handoff formats, and end-to-end execution.
Agent 1: The Research Agent
The Research Agent takes a topic and a set of research questions, queries search APIs (Tavily, Google Search Grounding), filters results, and produces a structured research brief.
System prompt core:
You are a research assistant. Your job is to find, verify, and organize factual information. You do not write prose. You do not generate opinions. You report what you find with source citations for every claim. If you cannot verify a claim, mark it as unverified. Output as structured JSON following the provided schema.
Input: Topic string + 3-5 specific research questions + optional API search results as context.
Output: JSON research brief with topic_summary, key_findings, sources, data_points, and gaps.
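To make the handoff concrete, here is a minimal sketch of what a brief with those five fields might look like. The top-level field names come from the spec above; everything inside them (sub-field names like `claim`, `verified`, `source_ids`, and all example values) is an illustrative assumption, not part of the session's schema.

```python
import json

# A minimal research brief matching the five output fields named above.
# Sub-field names and values are illustrative assumptions.
research_brief = {
    "topic_summary": "One-paragraph neutral summary of the topic.",
    "key_findings": [
        {
            "claim": "A finding stated as a single factual sentence.",
            "verified": True,        # False -> marked unverified, per the prompt
            "source_ids": ["s1"],    # every claim cites at least one source
        }
    ],
    "sources": [
        {"id": "s1", "title": "Example Source", "url": "https://example.com"}
    ],
    "data_points": [
        {"value": "42%", "context": "What the number measures", "source_ids": ["s1"]}
    ],
    "gaps": ["Questions the search results could not answer."],
}

print(json.dumps(research_brief, indent=2))
```

Keeping claims, sources, and data points in separate keyed lists is what lets the downstream agents (and your validation step) check that every claim actually points at a source.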
Agent 2: The Writing Agent
The Writing Agent takes the research brief, your outline, and your voice fingerprint, and produces a first draft. It does not search for information. It does not verify facts. It writes prose from the materials provided.
System prompt core:
You are a ghostwriter. Write in the voice described below. Follow the provided outline exactly. Use only the facts and data from the provided research brief. Do not add information from your training data. If the research brief does not cover something the outline requires, write "[NEEDS RESEARCH]" in that section. Never hedge. Never use: "it's important to note," "in today's world," or any of the forbidden phrases listed below.
The voice fingerprint (from Module 6) is appended to this system prompt. The research brief and outline go in the user message.
Input: Research brief (filtered: topic_summary, key_findings, data_points only) + outline + content spec.
Output: Markdown draft with H1, H2 sections matching the outline, within specified word count.
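The filtering step in the input spec above can be sketched as a small helper that strips the brief to the three allowed fields before assembling the Writer's user message. The function names, section labels, and message layout here are assumptions; only the field selection (`topic_summary`, `key_findings`, `data_points`) comes from the session text.

```python
import json

# Fields the Writing Agent is allowed to see, per the input spec above.
WRITER_FIELDS = ("topic_summary", "key_findings", "data_points")

def filter_brief(brief: dict) -> dict:
    """Drop sources, gaps, and anything else the Writer should not see."""
    return {k: brief[k] for k in WRITER_FIELDS}

def build_writer_message(brief: dict, outline: str, spec: str) -> str:
    """Assemble the Writer's user message from the filtered brief."""
    return (
        "RESEARCH BRIEF:\n" + json.dumps(filter_brief(brief), indent=2)
        + "\n\nOUTLINE:\n" + outline
        + "\n\nCONTENT SPEC:\n" + spec
    )
```

Filtering in code, rather than trusting the prompt, guarantees the Writer physically cannot cite sources it was never shown.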
Agent 3: The Editing Agent
The Editing Agent reads the draft and flags issues. It does not rewrite. It does not fix. It identifies problems and scores the output against your quality rubric.
System prompt core:
You are a content quality reviewer. Read the provided draft and evaluate it against five dimensions: factual accuracy, voice consistency, structural clarity, originality of insight, and AI artifact absence. Score each dimension 0-10. For any score below 7, provide specific examples from the text. Flag every instance of the following AI artifacts: hedging phrases, tricolons, false bridges, enthusiasm spikes, empty superlatives. Do not rewrite any part of the draft. Your job is diagnosis, not treatment.
Input: The complete draft from Agent 2.
Output: Structured review with dimension scores, flagged issues with line references, and an overall verdict (PASS/REWORK/FAIL).
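One way to turn the five dimension scores into the overall verdict is a simple threshold rule. The dimension names and the PASS/REWORK/FAIL verdict come from the spec above; the exact thresholds (below 4 fails, below 7 triggers rework) are assumptions you should tune to your own rubric.

```python
# Map the Editor's 0-10 dimension scores to an overall verdict.
# Thresholds are assumptions; adjust them to your quality bar.
DIMENSIONS = (
    "factual_accuracy", "voice_consistency", "structural_clarity",
    "originality", "ai_artifact_absence",
)

def verdict(scores: dict) -> str:
    """Any score below 4 fails outright; any below 7 requires rework."""
    values = [scores[d] for d in DIMENSIONS]
    if any(v < 4 for v in values):
        return "FAIL"
    if any(v < 7 for v in values):
        return "REWORK"   # sub-7 scores must come with specific examples
    return "PASS"
```

Computing the verdict deterministically in code, from scores the model emits, is more reliable than asking the model to apply the thresholds itself.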
Performance Comparison
The table below shows typical results from running the same content task through a single-agent approach versus the three-agent chain.
| Metric | Single Agent | Three-Agent Chain |
|---|---|---|
| Factual accuracy score | 5-6 / 10 | 7-8 / 10 |
| Voice consistency | 4-5 / 10 | 7-8 / 10 |
| AI artifacts per 1000 words | 8-12 | 3-5 |
| Human review time needed | 25-35 min | 10-20 min |
| API cost per piece | $0.05 - $0.15 | $0.15 - $0.45 |
| Total time (generation + review) | 35-50 min | 25-35 min |
The chain costs more in API fees but saves time in human review. At scale, human review is the bottleneck, not API costs. The economics favor the chain for any production volume above a few pieces per week.
Implementation Options
You can implement the chain three ways, depending on your technical comfort:
- Manual execution: Run each agent separately, copy-paste the output to the next. No code required. Good for learning the chain dynamics.
- Three scripts: One Python script per agent. Run them in sequence. Intermediate outputs saved as files. Your AI coding assistant can write all three scripts.
- Single orchestration script: One script that calls all three agents in sequence, validates handoffs, and saves the final output. This is the production-ready version.
Start with manual execution. Understand where the handoffs work and where they break. Then automate. Automating a broken chain just produces broken output faster.
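Once the manual chain works, the orchestration script (option 3) can follow this skeleton. Passing `call_agent` in as a parameter is a design choice, not part of the session's spec, and every name here is an assumption; it keeps the chain logic independent of whichever LLM client you use, and makes the script testable with a stub.

```python
import json

REQUIRED_BRIEF_FIELDS = {"topic_summary", "key_findings", "sources",
                         "data_points", "gaps"}

def validate_brief(brief: dict) -> dict:
    """Fail fast if the Research Agent's handoff is malformed."""
    missing = REQUIRED_BRIEF_FIELDS - brief.keys()
    if missing:
        raise ValueError(f"research brief missing fields: {sorted(missing)}")
    return brief

def run_chain(call_agent, topic: str, questions: list, outline: str,
              voice: str) -> dict:
    """Run Research -> Write -> Edit. call_agent(role, message) -> str
    is supplied by the caller (your LLM client wrapper)."""
    # 1. Research: validate the handoff before any writing happens.
    brief = validate_brief(json.loads(
        call_agent("research", f"Topic: {topic}\nQuestions: {questions}")))
    # 2. Write: filtered brief + outline + voice fingerprint.
    filtered = {k: brief[k] for k in
                ("topic_summary", "key_findings", "data_points")}
    draft = call_agent("write", json.dumps(filtered, indent=2)
                       + "\n\nOUTLINE:\n" + outline + "\n\nVOICE:\n" + voice)
    # 3. Edit: diagnosis only, saved alongside the draft.
    review = call_agent("edit", draft)
    return {"brief": brief, "draft": draft, "review": review}
```

Saving all three intermediate outputs, not just the final draft, is what lets you see which handoff broke when the chain produces a bad result.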
Further Reading
- Multi-Agent AI Content Generation: Complete Guide 2026, Sight AI
- Building Autonomous Systems with Agentic AI, DigitalOcean
- Prompt Engineering Overview, Anthropic
Assignment
Implement the three-agent chain for a real piece of content:
- Run Agent 1 (Research) with a topic and 3-5 research questions. Save the output.
- Validate the research brief against the schema from Session 9.2.
- Run Agent 2 (Writer) with the research brief, your outline, and voice fingerprint. Save the output.
- Run Agent 3 (Editor) with the draft. Save the scored review.
Compare the final output (after human review of Agent 3's flags) to a single-prompt generation on the same topic. Which is more accurate? Which sounds more like you? Which requires less human editing? Document the comparison.