This article explains how AI can test product activation by acting like a real user and finding friction teams often miss.
Most activation problems are hard to notice because teams are too close to the product. Traditional user tests miss small but critical friction. AI can catch these issues early by behaving like a confused first-time user.
The author uses an AI browser called Comet to test product activation flows. The premise is simple: if an AI struggles to complete a task, a real user probably will too.
The author tested several products by asking the AI to complete one core action. In travel booking, the AI failed due to slow load times and confusing defaults. In a design tool, it got stuck on paywalls, unlabeled icons, export formats, and hidden UI elements. These are the same issues many real users face.
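The method described above is essentially a scripted pass/fail check: give an agent one core action, walk it step by step, and record the first point of friction. As a minimal illustrative sketch (the step names, `Step`, and `ActivationReport` types are invented here for illustration; Comet itself exposes no such API), it could look like:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Step:
    """One step of the core action; `blocker` notes any friction hit."""
    name: str
    blocker: Optional[str] = None  # e.g. "paywall", "hidden export button"


@dataclass
class ActivationReport:
    completed: bool
    friction: list = field(default_factory=list)


def run_activation_test(steps):
    """Walk the core action step by step; stop at the first hard blocker."""
    friction = []
    for step in steps:
        if step.blocker is not None:
            friction.append(f"{step.name}: {step.blocker}")
            return ActivationReport(completed=False, friction=friction)
    return ActivationReport(completed=True, friction=friction)


# Hypothetical design-tool flow, loosely modeled on the article's example.
flow = [
    Step("open editor"),
    Step("find export option", blocker="icon-only menu, no label"),
    Step("download file"),
]
report = run_activation_test(flow)
```

The value of framing it this way is that every failed run yields a concrete, named friction point rather than a vague "users drop off here".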
Some tools performed much better. A scheduling tool worked almost perfectly, with only small issues around time zones and popups. A public railway site failed completely: the AI could not even start the task. The takeaway is that AI can surface real UX and activation problems quickly and cheaply.