Course → Module 2: How Google Recognizes Companies
Session 7 of 7

Google AI Overviews, Perplexity, ChatGPT with browsing, and Gemini do not work like traditional search. They do not return a list of links for you to click. They synthesize answers from multiple sources into a single response. For your business to appear in these answers, the AI must know you exist as an entity, and it must have structured evidence to draw from.

AI search tools rely on entity recognition even more heavily than traditional search. Traditional search can fall back on keyword matching and backlinks. AI search needs to understand what things are and how they relate to each other. If Google's Knowledge Graph barely registers your entity, AI search has nothing to work with.

How AI Search Tools Find Information

AI search operates through three layers, each dependent on entity infrastructure:

```mermaid
graph TD
    AI["AI Search Query"] --> L1["Layer 1: Training Data<br>(Static, from model training)"]
    AI --> L2["Layer 2: Retrieved Sources<br>(Real-time web search)"]
    AI --> L3["Layer 3: Knowledge Graph<br>(Structured entity data)"]
    L1 --> R["Synthesized Answer"]
    L2 --> R
    L3 --> R
    style AI fill:#2a2a28,stroke:#c8a882,color:#ede9e3
    style L1 fill:#2a2a28,stroke:#6b8f71,color:#ede9e3
    style L2 fill:#2a2a28,stroke:#6b8f71,color:#ede9e3
    style L3 fill:#2a2a28,stroke:#6b8f71,color:#ede9e3
    style R fill:#2a2a28,stroke:#c8a882,color:#ede9e3
```
| Layer | Source | What It Contains | Entity Infrastructure Required |
| --- | --- | --- | --- |
| Training Data | Wikipedia, Wikidata, Common Crawl, news, books | Facts the model learned during training | Presence in Wikipedia/Wikidata, news coverage, published content |
| Retrieved Sources | Real-time web search results | Current web pages matching the query | Traditional SEO + structured data for clean extraction |
| Knowledge Graph | Google's entity database | Verified entity properties and relationships | Schema, GBP, citations, sameAs chain |

AI search draws from training data, retrieved sources, and knowledge graphs. Entity infrastructure feeds all three layers. Without it, you are invisible to AI.

Layer 1: Training Data

Large language models are trained on massive text corpora that include Wikipedia, Wikidata, Common Crawl (a snapshot of the web), news archives, and books. If your entity appears in these sources at the time of training, the model "knows" about you.

This has a critical implication: getting into training data is a long-term play. Models are retrained periodically, but the training data cutoff is always in the past. What you build today in Wikipedia, Wikidata, and on authoritative news sites will enter future model training data. What you do not build today cannot enter any model.

Layer 2: Retrieved Sources

When AI tools browse the web in real time (Perplexity, ChatGPT with browsing, Google AI Overviews), they use traditional search to find relevant pages and then extract information from them. This means your pages must:

  1. Rank in traditional search (otherwise the AI cannot find them).
  2. Be structured for extraction (schema markup, clear headings, factual statements).

A page of flowing prose is harder for AI to extract facts from than a page with structured sections, bullet points, tables, and schema markup. Entity infrastructure makes your content extractable.
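As a concrete illustration of "structured for extraction," here is a minimal sketch of Organization schema markup with a sameAs chain, generated as JSON-LD. The business name, URL, and profile links are placeholders, not real identifiers; substitute your own.

```python
import json

# Hypothetical example values -- replace with your own business details.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Pump Supplies",            # placeholder business name
    "url": "https://www.example.com",
    "description": "Industrial pump supplier serving Jakarta.",
    # sameAs links tie this entity to profiles search engines already trust.
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",  # placeholder Wikidata ID
        "https://www.linkedin.com/company/example-pump-supplies",
    ],
}

# Emit the payload to place in a <script type="application/ld+json"> tag
# in the page <head>.
print(json.dumps(organization, indent=2))
```

The same entity facts stated once in JSON-LD are unambiguous to a machine, whereas the identical facts buried in prose must be inferred.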

Layer 3: Knowledge Graph

Google's AI Overviews draw directly from the Knowledge Graph. If your entity has a Knowledge Graph entry with verified properties, those properties can appear in AI-generated answers without any page needing to rank.

This is the most powerful implication: a Knowledge Graph entry can make you visible in AI search even if your individual pages do not rank for the query. The entity itself becomes the answer source.
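You can check whether the Knowledge Graph already registers your entity via Google's public Knowledge Graph Search API. A hedged sketch follows: the API key and company name are placeholders, and the exact response shape should be confirmed against the API documentation.

```python
from urllib.parse import urlencode

API_KEY = "YOUR_API_KEY"  # placeholder -- obtain one from Google Cloud Console

def kg_search_url(query: str, limit: int = 3) -> str:
    """Build a Knowledge Graph Search API request URL for an entity query."""
    params = {"query": query, "key": API_KEY, "limit": limit}
    return "https://kgsearch.googleapis.com/v1/entities:search?" + urlencode(params)

# Fetch this URL with any HTTP client; an empty itemListElement in the
# JSON response means Google has no Knowledge Graph entry for you yet.
print(kg_search_url("Example Pump Supplies"))
```

Running this check before and after building entity infrastructure gives you a direct measure of whether Google recognizes the entity.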

Platform-Specific Differences

Each AI platform has different source preferences:

Google AI Overviews lean heavily on the Knowledge Graph and structured data. Perplexity relies more on real-time web retrieval. ChatGPT draws more from training data and Wikipedia. Optimizing for all three platforms means building entity infrastructure across all layers: structured data for Google, strong web presence for Perplexity, and Wikipedia/Wikidata entries for ChatGPT.

The Compounding Effect

Businesses that build entity infrastructure now will compound their AI visibility as these platforms grow. Those who wait will find the gap increasingly difficult to close, because established entities accumulate training data references, Knowledge Graph entries, and citation networks that new entrants cannot replicate quickly.

This is not speculation. It is the same compounding dynamic that has always existed in search, accelerated by AI's reliance on structured, verified entity data.

Assignment

Ask three different AI tools (ChatGPT, Perplexity, Google Gemini) about your company by name. Then ask them about your industry plus your location (e.g., "best pump suppliers in Jakarta"). Document whether you are mentioned at all, what information they provide, and whether it is accurate. This is your AI visibility baseline. Keep it for comparison after building your entity infrastructure.
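To keep the baseline comparable over time, record each result in a consistent format. One way to do this is a small CSV log; the tools, queries, and notes below are illustrative placeholders, not real results.

```python
import csv

# Illustrative baseline entries -- replace with your own observations.
rows = [
    # (tool, query, mentioned, accurate, notes)
    ("ChatGPT",    "Example Pump Supplies",          "no",  "n/a", "not recognized"),
    ("Perplexity", "best pump suppliers in Jakarta", "yes", "yes", "cited our site"),
    ("Gemini",     "Example Pump Supplies",          "yes", "no",  "wrong address"),
]

with open("ai_visibility_baseline.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["tool", "query", "mentioned", "accurate", "notes"])
    writer.writerows(rows)
```

Re-run the same queries after building your entity infrastructure and append the new rows, so each tool-and-query pair can be compared directly against the baseline.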