Crawlfix isn't just a scanner anymore
Tonight we shipped a full fix loop, eight pricing tiers, six new modules, and MCP tools for agent-driven workflows. Here is everything that changed.
The scanner has always been free. You paste a URL, Crawlfix fetches it twice, diffs the raw HTML against the rendered DOM, and hands you a list of things that are broken. That has been true since day one. What was missing was the second half: a clear path from "here is what is broken" to "here is the patch, applied."
Tonight that second half exists. This post covers what shipped, what it means for teams building with AI agents, and what is still on the roadmap.
Three things changed
1. A new report format
The old report tried to organize findings by category with tabs and nested views. The feedback we heard consistently was that it added navigation overhead without adding clarity. People wanted to know what to fix first, not how to browse the results.
The V3 report is a single-column action plan. Every finding is a row, ranked by launch-blocking severity. Next to each finding is a collapsible evidence section: the raw signal, the rendered signal, and the exact diff that produced the flag. The fix recipe sits directly beside the finding. You do not scroll to a separate page or switch tabs to connect a problem to its solution.
The short version: stop hunting for the fix. The fix sits next to the finding.
The V3 report at a glance: action bar at top, findings ranked by severity, evidence and fix recipe inline per row.
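The evidence trio in each row (raw signal, rendered signal, diff) is simple enough to sketch. Here is a minimal, hypothetical example of the kind of check that produces a flag; the real scanner renders with Chromium, while this sketch takes both versions as plain strings, and the function and finding names are illustrative only:

```javascript
// Hypothetical sketch: flag a <title> that only exists after JS rendering.
// In the real product the rendered HTML comes from a full Chromium render;
// here both inputs are plain strings so the logic stays self-contained.
function extractTitle(html) {
  const match = html.match(/<title[^>]*>([^<]*)<\/title>/i);
  return match ? match[1].trim() : null;
}

function diffTitle(rawHtml, renderedHtml) {
  const raw = extractTitle(rawHtml);
  const rendered = extractTitle(renderedHtml);
  if (raw === rendered) return null; // signals agree: no finding
  return {
    id: "title-client-rendered",     // illustrative finding id
    severity: "launch-blocking",
    evidence: { raw, rendered },     // both signals, shown inline in the report
  };
}
```

A page that ships no title over raw HTTP but injects one client-side yields a null raw signal against a populated rendered signal, which is exactly the diff the evidence section displays.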
2. Eight pricing tiers
We went from a single free tier to eight tiers covering the full range from solo developer to enterprise. The tiers are Free, Starter at $19, Growth at $59, Pro at $149, Business at $299, Agency at $599, Agency Scale at $1,199, and Enterprise with custom pricing.
The reason for the spread is not complexity for its own sake. A solo developer building a personal project has different needs than a digital agency running audits on fifty client domains every month. The lower tiers get full access to the core scan and report. Higher tiers unlock higher crawl limits, team seats, API access, MCP embedding rights, and the new modules described below. And one guarantee holds at every tier: the free scan never silently runs LLM inference on your page content with us quietly absorbing the cost. The free tier is rule-based only, and that is permanent.
Eight tiers from Free to Enterprise, each adding crawl volume, seats, and module access.
3. Six new modules
These are the areas where the product grew most in the past cycle.
Compliance readiness scanner. Flags common GDPR, CCPA, and cookie-consent issues based on page content and network behavior. Not a legal opinion. A structured checklist of the technical signals that compliance reviews look for.
Privacy and cookie policy generator. Answer a short form about your site's data practices and Crawlfix generates a starting-point policy document. Edit it before you publish. We are not lawyers.
Per-engine AI visibility. The existing AI citability score was aggregate. The new module breaks it out by engine: Perplexity, ChatGPT Search, Claude Web, and Gemini each have different retrieval behaviors. The report now shows which engines your content is positioned for and which ones it is likely invisible to.
The AI engines Crawlfix queries for per-engine visibility scoring.
Verified mention scanner. Early foundation for tracking where your brand and product actually appear in AI-generated answers. The underlying data pipeline is in place. Full scan depth depends on a Brave Search API key, which we cover in the "not yet live" section below.
Analytics snippet and consent SDK. A thin layer that drops your analytics tag and a consent banner in a way that does not block Core Web Vitals. If you are currently loading Google Tag Manager synchronously in the <head>, this swaps it for a non-blocking pattern with one copy-paste.
GitHub PR check and Google Search Console integration. Scan against a pull request before it merges, so a deploy that would swap your <title> to a client-side render gets caught in review rather than after the fact. Search Console integration pulls your real crawl data and coverage errors into the same report as the Crawlfix scan, so you see both together.
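The non-blocking swap the snippet module performs can be sketched in plain JavaScript. This is a hypothetical illustration of the pattern (load the tag asynchronously, and only after consent), not the SDK's actual output; the event name and function are made up:

```javascript
// Hypothetical sketch of the non-blocking pattern: instead of a synchronous
// <script> in <head>, the tag is injected with async set, and only once a
// (made-up) consent event fires, so it never blocks first paint.
function buildAnalyticsSnippet(tagUrl) {
  // Returned as a string so it can be pasted into a page. The key details
  // are the async attribute and the load-after-consent guard.
  return [
    "<script>",
    "window.addEventListener('consent-granted', function () {",
    "  var s = document.createElement('script');",
    `  s.src = ${JSON.stringify(tagUrl)};`,
    "  s.async = true;",
    "  document.head.appendChild(s);",
    "}, { once: true });",
    "</script>",
  ].join("\n");
}
```

The design choice worth noting: an injected script element is async by default and never blocks the parser, which is what keeps Core Web Vitals intact.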
The fix loop
Here is the workflow the new report is designed around.
You run a scan. The report surfaces ranked launch blockers. You click "Fix All" at the top and Crawlfix assembles a structured patch bundle: one instruction set per finding, ordered by priority. You paste that bundle into Claude, Cursor, or any coding agent. The agent applies the patches to your codebase. You re-run the scan and verify the issues are resolved.
That cycle is the core thesis. A scanner that just reports problems forces you to translate findings into fixes manually. The V3 report does that translation for you and hands the output to whatever tool is applying the code changes.
The full fix loop: scan, ranked findings, Fix All, agent applies patches, re-scan.
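As a rough sketch, a patch bundle of the shape described above could look like this in code. The field names and severity labels here are assumptions for illustration, not the actual bundle format:

```javascript
// Hypothetical sketch of a "Fix All" patch bundle: one instruction per
// finding, ordered so launch blockers come first. Severity labels and
// field names are illustrative, not the real schema.
const SEVERITY_RANK = { "launch-blocking": 0, "warning": 1, "info": 2 };

function buildPatchBundle(findings) {
  return [...findings]
    .sort((a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity])
    .map((f, i) => ({
      order: i + 1,               // the priority the agent should follow
      finding: f.id,
      instruction: f.fixRecipe,   // the recipe shown inline in the report
    }));
}
```

Keeping the bundle as an ordered, self-describing list is what lets a coding agent consume it without any extra translation step.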
For teams using agent-driven workflows, there are two new MCP tools: crawlfix_fix_all_next_step and crawlfix_fix_all_mark_done. The first retrieves the next ranked fix from the queue. The second marks it resolved so the agent can advance to the next item. This means you can wire Crawlfix into an agentic loop that runs scans, pulls fixes, applies them, and marks them done without a human in the middle of each iteration.
The six-step MCP handshake between your coding agent and the Crawlfix fix queue.
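The two tools compose into a simple agent-side loop. A minimal sketch, assuming a generic callTool function supplied by your MCP client and a hypothetical response shape; the tool names are real, the response fields are illustrative:

```javascript
// Hypothetical sketch of the agent-side loop over the two MCP tools.
// `callTool` stands in for whatever MCP client your agent framework
// provides; the `done`, `instruction`, and `findingId` fields are
// assumed for illustration.
async function runFixLoop(callTool, applyPatch) {
  for (;;) {
    const step = await callTool("crawlfix_fix_all_next_step", {});
    if (!step || step.done) break;           // queue drained
    await applyPatch(step.instruction);      // agent edits the codebase
    await callTool("crawlfix_fix_all_mark_done", { findingId: step.findingId });
  }
}
```

The loop has no human checkpoint by design: next-step, apply, mark-done repeats until the queue reports it is empty, matching the agentic workflow described above.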
What is still free
The free tier is the same core product it has always been. Full scan, rule-based audit, top issues list, overall grade, and pass/fail verdict. Every scan does both the raw HTTP fetch and the full Chromium render, so the diff is always real.
The one thing that does not change at any tier: the free scan never runs LLM inference on your page content. Free is rule-based. The model-backed analysis, the fix recipes, and the AI visibility breakdown are paid features. That line does not move.
What is not yet live
An honest accounting of the gaps.
The GitHub PR check and the Google Search Console integration require their OAuth applications to be registered with GitHub and Google before they can go live in production. The code is written and tested; we are waiting on the OAuth app approvals and expect to flip both on within the week.
The mention scanner at full depth needs a Brave Search API key. The pipeline exists. Until the key is configured, the scanner runs on a reduced data set. If you are on a tier that includes mention scanning and see partial results, that is why.
Postgres and ClickHouse migration for the audit log and analytics data is on the roadmap. The current data layer works but does not scale to the crawl volumes the higher tiers are designed for. This is a Q3 project.
Better Auth migration is also Q3. Current auth is functional. The magic-link flow works end to end. The migration is about long-term maintainability, not a user-visible gap today.
None of these gaps block using the product. They are engineering projects on a timeline, not missing features you would hit during a normal scan.
Try it
Go to crawlfix.ai and point it at a URL. Signup is magic-link only, so there is no password to create or forget. If you want to upgrade, it is one click to Stripe and you are on the new tier immediately.
If you find a bug, email [email protected]. We are a small team and we read everything.