Course → Module 5: Prompt Engineering
Session 4 of 10

Think Before You Write

Chain-of-thought prompting asks the AI to reason step-by-step before producing its final output. Instead of jumping directly from prompt to prose, the AI first outlines its reasoning: what angle it will take, what evidence supports that angle, what structure it will use, and what counterarguments exist. Then it writes.

This mirrors how competent writers actually work. A good writer does not sit down and produce finished prose from the first word. A good writer thinks about the argument, considers the audience, decides on the structure, and then writes. Chain-of-thought forces the AI to simulate that process.

Chain-of-thought does not make the AI smarter. It makes the AI more structured. The reasoning may contain errors. But structured reasoning with errors is easier to catch and fix than unstructured generation that sounds good but has no logical skeleton.

Direct Prompting vs Chain-of-Thought

The difference is visible in the output structure, not just the quality.

| Aspect | Direct Prompt | Chain-of-Thought Prompt |
| --- | --- | --- |
| Process | Prompt goes in, prose comes out | Prompt goes in, reasoning comes out, then prose |
| Logical structure | Implicit (may be absent) | Explicit (visible in reasoning step) |
| Error detection | Errors hidden in fluent prose | Errors visible in reasoning chain |
| Token cost | Lower (output only) | Higher (reasoning + output) |
| Best for | Simple, factual content | Analysis, argument, complex topics |

How to Prompt for Chain-of-Thought

The simplest approach is explicit instruction. Add a section to your prompt that says: "Before writing the article, outline your reasoning: (1) What is the central argument? (2) What evidence supports it? (3) What counterarguments exist? (4) What structure will best communicate this argument? Then write the article based on your reasoning."

The AI produces two outputs: the reasoning chain and the final content. You read the reasoning chain first. If the reasoning is flawed, you correct it before the AI writes the content. This saves time because correcting a 100-word reasoning chain is faster than rewriting a 1000-word article.
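As a minimal sketch, the explicit-instruction approach is just a prompt transformation: take the task and append the reasoning checklist. The helper name and wrapper text below are illustrative, not part of any specific SDK.

```python
# Sketch: wrap a content task with chain-of-thought instructions.
# The instruction text mirrors the checklist described above.

COT_INSTRUCTIONS = (
    "Before writing the article, outline your reasoning: "
    "(1) What is the central argument? "
    "(2) What evidence supports it? "
    "(3) What counterarguments exist? "
    "(4) What structure will best communicate this argument? "
    "Then write the article based on your reasoning."
)

def build_cot_prompt(task: str) -> str:
    """Return the task followed by the chain-of-thought checklist."""
    return f"{task}\n\n{COT_INSTRUCTIONS}"

prompt = build_cot_prompt(
    "Write an analysis of why remote work productivity claims are overstated."
)
print(prompt)
```

Whatever model client you use, you send `prompt` as the user message and read the reasoning section of the response before the prose.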

```mermaid
graph TD
    A["User prompt:<br/>'Analyze why remote work<br/>productivity claims are overstated'"] --> B["Step 1: AI reasoning"]
    B --> C["Central argument:<br/>Studies conflate output<br/>with productivity"]
    B --> D["Evidence: 3 studies<br/>with methodological issues"]
    B --> E["Counterargument:<br/>Some industries genuinely benefit"]
    B --> F["Structure: problem,<br/>evidence, nuance, conclusion"]
    C --> G["Step 2: AI writes<br/>article based on reasoning"]
    D --> G
    E --> G
    F --> G
    G --> H["Output: Structured article<br/>with logical foundation"]
    style A fill:#222221,stroke:#c8a882,color:#ede9e3
    style B fill:#222221,stroke:#6b8f71,color:#ede9e3
    style G fill:#222221,stroke:#c47a5a,color:#ede9e3
    style H fill:#222221,stroke:#6b8f71,color:#ede9e3
```

When Chain-of-Thought Helps (and When It Does Not)

Chain-of-thought adds value for content that requires an argument, an analysis, or a position. If the content is making a case for something, reasoning first produces a stronger case. If the content is comparing options, reasoning first ensures all options are fairly represented.

Chain-of-thought adds less value for purely descriptive content, formatting tasks, or simple factual summaries. If you need a product description or a reformatted table, direct prompting is faster and cheaper.

The Two-Pass Technique

An advanced application splits chain-of-thought into two separate API calls. The first call produces only the reasoning. You review and correct the reasoning. The second call takes the corrected reasoning as input and produces the content. This gives you a human review gate between the thinking and the writing.

The two-pass technique costs twice the API tokens but catches structural errors before they propagate into the final output. For high-stakes content (published articles, client deliverables, course material), the additional cost is justified by the quality improvement.
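The two-pass flow can be sketched as two prompt-building functions with a review gate between them. The model call is injected as a plain function here so the structure is clear without assuming any particular SDK; `call_model` and the wrapper wording are illustrative assumptions.

```python
# Sketch of the two-pass chain-of-thought technique.
# `call_model` stands in for your actual API client (any provider).

from typing import Callable

def reasoning_pass(task: str, call_model: Callable[[str], str]) -> str:
    """Pass 1: request only the reasoning chain, not the final content."""
    prompt = (
        f"{task}\n\n"
        "Do NOT write the article yet. Output only your reasoning: "
        "central argument, supporting evidence, counterarguments, and structure."
    )
    return call_model(prompt)

def writing_pass(task: str, corrected_reasoning: str,
                 call_model: Callable[[str], str]) -> str:
    """Pass 2: produce the content from the human-reviewed reasoning."""
    prompt = (
        f"{task}\n\n"
        "Write the article based strictly on this reviewed reasoning:\n"
        f"{corrected_reasoning}"
    )
    return call_model(prompt)

# Usage: the human review gate sits between the two calls.
# reasoning = reasoning_pass(task, call_model)
# reasoning = edit_by_hand(reasoning)   # correct flaws before any prose exists
# article = writing_pass(task, reasoning, call_model)
```

The key design point is that pass 2 receives the corrected reasoning as input, so structural fixes made at the gate cannot be silently dropped by the model.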

| Technique | API Calls | Human Review Points | Best For |
| --- | --- | --- | --- |
| Direct prompting | 1 | After generation | Simple, low-stakes content |
| Single-pass chain-of-thought | 1 | After generation (can check reasoning) | Analytical content, moderate stakes |
| Two-pass chain-of-thought | 2 | After reasoning AND after generation | High-stakes, complex arguments |

Assignment

Take a complex content task (e.g., "Write an analysis of why remote work productivity claims are overstated"). Run it once as a direct request and once with chain-of-thought instructions ("First, outline your reasoning process: central argument, supporting evidence, counterarguments, and structure. Then draft the analysis based on your reasoning."). Compare the logical structure of both outputs. Is the chain-of-thought version more coherent? Where does the reasoning chain contain errors?