Google’s AI Overviews often cite other AI-generated pages rather than human-written ones. This can create a loop in which AI learns from AI rather than from real experts.
People rely on AI Overviews for quick answers, but if those summaries draw mostly on AI-generated content, they risk spreading errors or fabricated facts.
Researchers examined one million search results that featured Google’s AI Overviews and analyzed the top three links each summary cited. Only 8.6% of those links were fully human-written, 3.6% were purely AI-generated, and the rest mixed both. Compared with general web pages, AI Overviews cite AI-generated content more often than chance would predict.
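The size of the "mixed" category follows directly from the two reported figures; a quick calculation makes the imbalance concrete:

```python
# Category shares reported by the study for the top-three links
# cited across one million AI Overviews.
fully_human = 8.6   # % of cited links that were fully human-written
pure_ai = 3.6       # % that were purely AI-generated
mixed = round(100 - fully_human - pure_ai, 1)  # remainder mixes both
print(mixed)  # 87.8
```

In other words, nearly nine out of ten cited pages contain at least some AI-generated material.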
This indicates that Google’s summaries do not favor human sources over AI ones. Because most new pages already contain some AI-generated content, AI Overviews end up feeding on output like their own: as machine-written content spreads across the internet, machine-written answers increasingly cite other machine-written pages, forming a feedback loop.
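The feedback dynamic can be sketched with a toy model. This is purely illustrative and not from the study: the parameter values (`initial_share`, `new_ai_rate`) are arbitrary assumptions, and the update rule simply assumes that at each step a fixed fraction of the remaining human-only corpus is displaced by AI-touched pages.

```python
# Toy model (illustrative assumption, not from the study): if a fixed
# fraction of newly indexed pages contains AI-generated content, the
# AI-touched share of the corpus drifts upward toward saturation.
def ai_share_over_time(initial_share=0.1, new_ai_rate=0.3, steps=5):
    """Return the AI-touched share of the corpus after each step."""
    share = initial_share
    history = [share]
    for _ in range(steps):
        # New AI-touched pages displace part of the human-only remainder.
        share = share + new_ai_rate * (1 - share)
        history.append(round(share, 3))
    return history

print(ai_share_over_time())
```

Whatever the starting point, the share only grows under this rule, which is the "feedback circle" in miniature: AI answers drawing on an ever more AI-saturated pool of sources.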
Even though Google grounds its summaries in retrieved web pages (retrieval-augmented generation, or RAG), that grounding cannot help when the retrieved pages are themselves machine-written, so this trend may spread errors and bias as machines talk mostly to machines. Users should be aware that AI Overviews may not cite the best or most reliable sources.