$ cat problem.md
The Problem
Competitive research is one of those “important but always pushed off” tasks because it’s painfully manual:
- Click around a bunch of sites
- Screenshot everything “just in case”
- Try to compare pricing/features/messaging across sites with different information architectures
- Lose your notes or never turn them into something shareable
I wanted a workflow that makes competitor research repeatable, fast, and organized (so the output is usable, not just a pile of tabs).
$ cat solution.md
The Solution
I built a template repo that turns competitor research into a structured project:
- A research template that guides what to look for (pricing, positioning, UX, GTM, proof, etc.)
- Automated browser capture with Playwright MCP
- Faster synthesis and report writing with Claude Code
- A consistent folder structure for screenshots, reports, and status tracking
Repo: ai-competitor-toolkit
$ cat stack.md
Technical Stack
This is intentionally lightweight — the “product” is the workflow:
- Claude Code (CLI): drives the analysis and report generation
- Playwright MCP: navigates pages and captures screenshots reliably
- Node scripts (optional): research one competitor or batch through a list
- Markdown + JSON: simple, portable artifacts that work in any repo
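The optional Node layer can be as small as a loop over the competitor list. A minimal sketch (file and function names here are assumptions for illustration, not the repo's actual script API):

```javascript
// batch-research.js — hypothetical sketch of the "batch through a list" idea.
// For each competitor, build the URLs of the key pages to capture.
const PAGES = ["", "pricing", "features", "docs", "customers"];

function pagesToCapture(competitor) {
  // Resolve each key page relative to the competitor's homepage.
  return PAGES.map((p) => new URL(p, competitor.url).href);
}

// Placeholder list — in the real workflow this would come from competitors.json.
const competitors = [{ name: "ExampleCo", url: "https://example.com/" }];

for (const c of competitors) {
  console.log(c.name, pagesToCapture(c));
}
```

The page capture itself is handed to Playwright MCP during the Claude Code session; the script's job is just to make the list of targets explicit and repeatable.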
$ cat impact.md
Impact
Research on Rails
The biggest win is consistency:
- Every competitor gets the same baseline review (so comparisons are real)
- Screenshots are organized and easy to reference later
- Reports follow a predictable structure, so you can actually use them
It also makes the task easy to delegate or hand off — you’re not relying on one person’s personal research habits.
Directory Structure
.
├── competitors.json # your competitor list (create from template)
├── PROJECT-CONTEXT.md # project briefing (create from template)
├── research-template.md # the research checklist
├── screenshots/ # auto-captured during research
├── reports/ # generated writeups per competitor
└── scripts/ # optional automation
It’s designed to be copied into any product repo (or used as a standalone workspace).
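For reference, competitors.json might look like this — a hypothetical shape, since the actual template's fields may differ:

```json
{
  "competitors": [
    {
      "name": "ExampleCo",
      "url": "https://example.com/",
      "status": "pending",
      "lastResearched": null
    }
  ]
}
```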
Typical Run
- Define your baseline (your product + what you care about)
- Add competitors to competitors.json
- For each competitor:
- capture key pages (home, pricing, features, docs, customers)
- extract claims + differentiators
- write a structured report
- Mark status + date so you can track progress over time
The goal is to make “competitive research” feel like running a checklist, not doing archaeology.
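The status-and-date step can be sketched as a tiny helper — names and status values below are assumptions, not the repo's actual schema:

```javascript
// mark-status.js — hypothetical helper for the "mark status + date" step.
// Returns a new competitor entry stamped with progress info, so a batch
// can be paused and resumed without losing your place.
function markStatus(competitor, status, date = new Date()) {
  return {
    ...competitor,
    status, // e.g. "pending" | "in-progress" | "done" (assumed values)
    lastResearched: date.toISOString().slice(0, 10), // YYYY-MM-DD
  };
}

const updated = markStatus({ name: "ExampleCo" }, "done", new Date("2024-05-01"));
console.log(updated);
```

Writing the result back into competitors.json keeps the tracking in the same portable JSON artifact as the list itself.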
What You Get Per Competitor
- Report: a single markdown writeup you can share internally
- Screenshots: evidence for claims, pricing tables, feature lists, etc.
- Notes/status: progress tracking so you can batch research without losing your place
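A report skeleton, sketched from the checklist areas mentioned above (the exact headings are an assumption, not the template's actual structure):

```markdown
# Competitor: ExampleCo

## Positioning & Messaging
## Pricing
## Features & UX
## GTM & Proof (customers, case studies)
## Differentiators vs. our baseline
## Screenshots (links into screenshots/)
## Status
- Researched: YYYY-MM-DD
```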
If you want the template: ai-competitor-toolkit