Module 1: What Makes Slop, Slop
Session 10 of 10

Ira Glass, the host of This American Life, described a gap that haunts creative people. "Your taste is good enough to tell that what you're making isn't as good as you want it to be." You can tell the work is not right. You cannot tell why it is not right. You feel the wrongness but lack the vocabulary to diagnose it.

With AI content, there is a version of this gap that affects everyone who produces or evaluates AI-assisted work. You read the output. Something feels off. You cannot point to the specific failure. So you either publish it anyway ("it's close enough") or rewrite it from scratch ("I'll just do it myself"), neither of which is a sustainable production strategy.

Taste Without Diagnosis

Taste is the ability to recognize quality. Diagnosis is the ability to explain quality. Most people have more taste than diagnostic ability. They can rank five pieces of writing from best to worst but cannot articulate what makes the best one best and the worst one worst.

In AI content production, this gap is expensive. Every time you look at AI output and think "something is off" without being able to specify what, you face a choice: accept substandard work, spend time on unfocused editing, or discard the output entirely. All three waste resources.

Taste without diagnosis is frustration. Taste with diagnosis is craft. The gap between them is vocabulary.

The Diagnostic Vocabulary

Module 1 has been building your diagnostic vocabulary. Each session added specific, nameable categories of failure. The table below consolidates them into a diagnostic reference.

| Category | What You Feel | Diagnostic Term | Session Reference |
| --- | --- | --- | --- |
| Sounds generic | "This could be about anything" | Default voice, RLHF smoothing | 1.1 |
| Says nothing | "A lot of words, no content" | Hedging, filler, restatement | 1.2 |
| Too excited | "Nobody is this enthusiastic about databases" | False enthusiasm, superlative stacking | 1.3 |
| Looks fake | "Too much decoration, not enough substance" | Emoji pollution, performative formatting | 1.4 |
| Feels robotic | "Every paragraph follows the same pattern" | Parallel structure overuse, synonym cycling | 1.5, 1.6 |
| Almost human | "I can't tell what's wrong but something is" | Uncanny valley, specificity gap | 1.7 |
| Untrustworthy | "Where did these claims come from?" | Context-free confidence, over-attribution | 1.6, 1.8 |
| Wrong structure | "The pieces are fine but the whole is off" | Architectural anchoring, AI-driven framing | 1.9 |

From Feeling to Diagnosis to Fix

The diagnostic process follows three steps. First, register the feeling. Second, name the failure. Third, apply the corresponding fix.

```mermaid
graph TD
    A["Read AI output"] --> B["Register feeling: 'Something is off'"]
    B --> C["Name the failure: Which diagnostic category?"]
    C --> D{"Match to fix"}
    D --> E["Generic voice → Add specifics, inject experience"]
    D --> F["Hedging/filler → Compress, remove qualifiers"]
    D --> G["False enthusiasm → Strip superlatives, add evidence"]
    D --> H["Wrong structure → Rewrite outline, regenerate"]
    D --> I["Uncanny valley → Add real details, name sources"]
```

The fix depends on the diagnosis. Hedging is fixed by deletion (remove the qualifiers). False enthusiasm is fixed by substitution (replace superlatives with specific evidence). Wrong structure requires regeneration (new outline, new generation). Trying to fix the wrong problem wastes time and produces a result that still feels off, just in a different way.
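
If you track review decisions in an editorial pipeline, the diagnosis-to-fix mapping can also be kept machine-readable. The sketch below is a hypothetical Python lookup table; the category keys and fix wording are illustrative choices, not part of the course material.

```python
# Minimal, hypothetical sketch of the diagnosis-to-fix mapping as a lookup table.
# Category keys and fix wording are illustrative, not a prescribed schema.
FIXES = {
    "default_voice": "Add specifics; inject first-hand experience.",
    "hedging_filler": "Compress: delete qualifiers, filler, and restatement.",
    "false_enthusiasm": "Strip superlatives; substitute specific evidence.",
    "wrong_structure": "Rewrite the outline, then regenerate.",
    "uncanny_valley": "Add real details and name sources.",
}

def recommend_fix(diagnosis: str) -> str:
    """Return the fix for a named diagnosis, or ask for a real diagnosis."""
    return FIXES.get(diagnosis, "Not a named diagnosis yet: re-read and pick a category.")

# A reviewer logs a named failure rather than "something is off".
print(recommend_fix("false_enthusiasm"))
```

The useful constraint is structural: a reviewer cannot request a fix without first committing to a named diagnosis.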

Developing Diagnostic Speed

Like any diagnostic skill, this one improves with deliberate practice. The process starts slowly: read the text, consult the checklist, identify the category, look up the fix. With practice, diagnosis gets faster. Eventually you will read an AI-generated paragraph and think "synonym cycling, hollow metaphor, false bridge" in the time it takes to scan the text.

| Practice Stage | Diagnostic Speed | Typical Time to Diagnose |
| --- | --- | --- |
| Beginner | Conscious, checklist-dependent | 5-10 minutes per 500 words |
| Intermediate | Pattern recognition emerging | 2-3 minutes per 500 words |
| Advanced | Intuitive, can name failures on first read | 30-60 seconds per 500 words |
| Expert | Instant recognition, prevents failures at prompt stage | Near-zero (quality controlled at input) |

The expert stage is the goal of this entire course. At the expert level, you rarely diagnose problems in AI output, because your prompts, system instructions, and pipeline design prevent most of them from occurring. You move from reactive editing to proactive quality control. The diagnostic skill does not become unnecessary. It becomes the foundation that informs how you design your production system.
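
As a sketch of what prompt-stage prevention might look like (the wording below is an assumption for illustration; Module 2 covers this properly), the same failure categories can be restated as standing constraints in a system instruction:

```python
# Hypothetical example: Module 1 failure categories rephrased as preventive
# constraints, to be prepended to generation requests as a system instruction.
SYSTEM_INSTRUCTION = """\
Write in a specific, grounded voice; no generic framing.
Do not hedge, pad, or restate points already made.
No superlatives or enthusiasm without concrete evidence.
No decorative emoji or performative formatting.
Vary sentence and paragraph structure; avoid synonym cycling.
Name a source or basis for every factual claim.
"""
```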

Closing Module 1

Module 1 gave you the vocabulary to diagnose what makes AI content bad. You can now identify hedging, filler, false enthusiasm, performative formatting, 15 specific forensic markers, the uncanny valley, and architectural failures. You know why editing alone does not fix AI output. You know the difference between taste and diagnosis.

Module 2 shifts from diagnosis to architecture. You have the tools to identify problems. Now you build the systems that prevent them.

Assignment

  1. Collect 3 pieces of AI writing that feel "off" to you.
  2. For each one, write a detailed diagnosis using the vocabulary from this module. Name the specific failure categories. Point to specific sentences or paragraphs that exhibit each failure.
  3. For each diagnosed failure, write the corresponding fix: what would you change, add, or remove to resolve it?
  4. "It sounds weird" is not a diagnosis. "The opening uses a false bridge (Marker 3) that promises an insight, then delivers a restatement of the previous paragraph's point, while the second section exhibits synonym cycling (Marker 8) across three consecutive sentences" is a diagnosis.