Our Methodology

Transparency is a core value. Here's a look under the hood at how we generate your report.

1. Page Rendering & Analysis

We don't just read raw HTML: we convert your page into a structured format (JSON + Markdown) that our AI agents can analyze consistently. Where text extraction is low-quality (common on JS-heavy SPAs), we fall back to an isolated **headless browser screenshot + Vision extraction**.

  • JavaScript Support: Modern SPAs (React/Next.js/Vue/Svelte) are supported via headless-browser fallback when needed.
  • Content Distillation: We compress and distill noisy pages into a clean, structured representation (headline/CTA/pricing/trust signals + readable content) so the model focuses on what matters.
  • Security: Processing runs in an isolated environment. We store your submitted URL and the generated report for a limited time, but we don't store the raw scraped page content beyond processing.
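The fallback decision above can be sketched as a simple quality check. This is an illustrative sketch only: the function names and the threshold are our assumptions, not the production values.

```python
# Hypothetical sketch of the extraction-pipeline decision.

def extraction_quality(html_text_len: int, rendered_text_len: int) -> float:
    """Ratio of text recoverable from raw HTML vs. the fully rendered page."""
    if rendered_text_len == 0:
        return 1.0
    return html_text_len / rendered_text_len

def choose_pipeline(html_text_len: int, rendered_text_len: int,
                    threshold: float = 0.4) -> str:
    """Fall back to screenshot + Vision when raw-HTML extraction is poor,
    as is common on SPAs that render most content client-side."""
    if extraction_quality(html_text_len, rendered_text_len) < threshold:
        return "headless-screenshot+vision"
    return "html-to-structured"
```

A React page that ships only a `<div id="root">` shell would score near zero here and route to the screenshot path; a server-rendered page would take the fast HTML path.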
2. AI Persona Activation

"100 users" is more than a number: each AI agent is activated with a unique `System Prompt` that defines a believable, multi-faceted persona (the exact count depends on your plan).

  • Diverse Profiles: Our persona pool is built from real-world user archetypes (Indie Hackers, Tech Leads, Marketers, etc.). Each persona has a defined set of traits, including technical savvy, patience level, budget, and current mood, which directly influence their feedback.
  • Underlying Model: We route your analysis to a curated set of verified models and select the primary model based on your plan, with fallbacks for reliability.
  • Consistent & Unbiased: By using a large number of personas, we mitigate the biases of any single AI response. The aggregated report provides a balanced view that highlights common themes and statistically relevant issues, rather than one-off opinions.
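A persona with the traits listed above might be modeled roughly like this. The field names and prompt wording are hypothetical, shown only to illustrate how traits flow into a system prompt:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    archetype: str    # e.g. "Indie Hacker", "Tech Lead", "Marketer"
    tech_savvy: int   # 1-5
    patience: int     # 1-5
    budget: str       # e.g. "$0-20/mo"
    mood: str         # e.g. "skeptical", "curious"

def system_prompt(p: Persona) -> str:
    """Render one persona's traits into the system prompt for its agent."""
    return (
        f"You are a {p.archetype}. "
        f"Technical savvy: {p.tech_savvy}/5. Patience: {p.patience}/5. "
        f"Budget: {p.budget}. Current mood: {p.mood}. "
        "Review the landing page below strictly in character."
    )
```

Running the same page past dozens of such prompts is what lets the aggregation step separate recurring issues from one persona's quirks.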
3. Structured Feedback & Reporting

We don't just ask the AI for a simple review; our prompts are tightly structured to elicit actionable feedback.

  • Structured JSON Output: We instruct the model to return strict JSON with consistent fields like `score`, `wouldBuy`, `firstImpression`, `confusion`, `liked`, and a short quote. This keeps the data reliable and easy to aggregate.
  • Calibrated Scoring: The scoring guide provided to the AI is based on our V12 benchmark tests, which calibrated model scores against human expert ratings across dozens of websites. This helps prevent score inflation and ensures a 7/10 score is meaningful.
  • Aggregation and Insights: The final report aggregates scores and feedback across all personas to identify the most common "Top Issues" and calculate metrics like "Would Buy %". This turns many individual opinions into a single, actionable dashboard.
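The aggregation step can be sketched as follows. The JSON field names (`score`, `wouldBuy`, `confusion`) come from the schema above; the roll-up logic itself is an illustrative assumption, not the production implementation:

```python
import json
from collections import Counter

def aggregate(raw_responses: list[str]) -> dict:
    """Parse each persona's strict-JSON reply and roll up report metrics."""
    parsed = [json.loads(r) for r in raw_responses]
    n = len(parsed)
    # "Would Buy %" is the share of personas answering wouldBuy: true.
    would_buy_pct = round(100 * sum(p["wouldBuy"] for p in parsed) / n)
    avg_score = round(sum(p["score"] for p in parsed) / n, 1)
    # Count recurring confusion points to surface "Top Issues".
    issues = Counter(p["confusion"] for p in parsed if p["confusion"])
    return {
        "avgScore": avg_score,
        "wouldBuyPct": would_buy_pct,
        "topIssues": [issue for issue, _ in issues.most_common(3)],
    }
```

Because every reply shares the same strict schema, this reduction is a few lines of code rather than a fragile text-parsing step.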