Search has changed. LLMs no longer rank pages the old way; they generate answers, and that rewrites how visibility works.
People are arguing over whether AI search is just old SEO under a new name. This article makes the case that the real issue is different: the math behind search has changed so much that old SEO rules no longer map cleanly onto how LLMs produce answers.
For 25 years, search worked the same way. You typed a query, and the engine ranked pages. SEO lived inside that system: match intent, rank high, get clicks. Even when Google added things like knowledge panels or instant answers, the math behind ranking did not change. It was still one goal: pick the best set of documents and place them in order.
LLM search is different. It does not return a ranked list; it builds an answer from pieces of text pulled from many sources. The system now has two steps: retrieve chunks, then generate an answer from them. Because of this, the math behind what gets chosen has shifted: content must first be retrieved, and then it must actually shape the final answer. That is a brand new visibility problem.
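The two-step pipeline can be sketched in a few lines. Everything here is an illustrative assumption: the function names, the word-overlap scoring, and the stand-in `generate` step are toy stand-ins for a real retriever and a real LLM, not any vendor's implementation.

```python
# Toy sketch of retrieve-then-generate. The scoring rule (word overlap)
# and the generate() stub are assumptions for illustration only.

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query (stand-in for a real retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_terms & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for the LLM step: a real model conditions on the retrieved context."""
    return f"Answer to '{query}' grounded in {len(context)} retrieved chunk(s)."

chunks = [
    "Pillar pages cover many subtopics in one long document.",
    "Dense retrieval embeds queries and chunks into vectors.",
    "Sparse retrieval scores chunks by term overlap.",
]
top = retrieve("how does sparse retrieval score chunks", chunks)
print(generate("how does sparse retrieval score chunks", top))
```

The point of the sketch is the gate it creates: a page that never survives the `retrieve` step can never influence `generate`, no matter how well it would have ranked in a classic ten-blue-links list.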
The article compares two real SEO strategies. Before LLMs, big pillar pages tended to rank best. With LLMs, short pages with one clear focus tend to get pulled into answers more often, because long pages mix topics and lose clean matches under both sparse (term-based) and dense (embedding-based) scoring. The result is simple: what wins in old SEO does not always win in AI answers.
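The dilution effect can be shown with a toy, length-normalized sparse score (an assumed scoring rule, not a production retriever like BM25): a pillar page can contain the exact same matching terms as a focused page, yet score lower because its extra topics inflate its vocabulary.

```python
# Toy illustration of score dilution. The scoring function is an assumption:
# cosine similarity over binary term sets, i.e. matches / sqrt(|q| * |d|).
import math

def normalized_overlap(query: str, doc: str) -> float:
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / math.sqrt(len(q) * len(d))

# One short, single-focus page vs. one long, mixed-topic pillar page.
focused = "chunking strategies for retrieval augmented generation"
pillar = ("chunking strategies for retrieval augmented generation "
          "plus keyword research link building site speed schema markup "
          "local listings and conversion rate optimization")

query = "chunking strategies for retrieval"
# Both documents match all four query terms, but the pillar page's larger
# vocabulary drags its normalized score down.
print(normalized_overlap(query, focused) > normalized_overlap(query, pillar))  # True
```

Dense retrieval shows an analogous effect for a different reason: embedding a page that spans many topics tends to average away any single topic's signal, so no one query lands a clean vector match.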
This is the biggest change in search because the output type, the scoring goal, and even the behavior of the system (now probabilistic, not deterministic) have all shifted. No past Google update changed the math this much.
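The deterministic-versus-probabilistic contrast above can be made concrete with a toy example (all scores and probabilities here are made up for illustration): classic ranking is a deterministic sort over scores, while generation samples from a distribution, so identical inputs can yield different outputs.

```python
# Toy contrast: deterministic ranking vs. probabilistic generation.
# The scores and weights below are hypothetical values for illustration.
import random

# Old regime: a sort over fixed scores always produces the same ordering.
scores = {"page_a": 0.9, "page_b": 0.7, "page_c": 0.4}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # always ['page_a', 'page_b', 'page_c']

# New regime: the answer is sampled, so repeated runs of the same query
# can cite different sources.
tokens = ["cites page_a", "cites page_b", "cites page_c"]
weights = [0.6, 0.3, 0.1]  # hypothetical model probabilities
draws = {random.choices(tokens, weights)[0] for _ in range(200)}
print(draws)  # typically contains more than one distinct outcome
```

This is why AI-answer visibility is better thought of as a probability of being cited across many runs, not a fixed position on a results page.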