How fastCRW frames internal and third-party benchmark claims, including metric definitions, source provenance, and interpretation rules.
The benchmark center exists for one purpose: to make fastCRW's claims auditable.
Every claim on the marketing site should fall into one of two buckets; if a claim cannot be placed into either bucket, it should not appear as a benchmark claim.
This discipline matters because scraper benchmarks are easy to abuse. A useful benchmark page should tell you what was tested, how it was framed, and where the result stops being reliable.
Coverage refers to whether the system extracts a useful primary content result from the benchmark URL set. It is not a claim about every possible target site on the internet.
Average latency refers to the mean response time observed in the benchmark setup. A mean on its own is easy to misread, so it should always be paired with the context needed to interpret it: the sample size, the shape of the distribution (at minimum a median and a high percentile), and the environment it was measured in.
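One way to make that pairing concrete is to report the mean only as part of a small summary record. This is a sketch using Python's standard library, not the site's actual reporting schema:

```python
from statistics import mean, quantiles

def latency_summary(samples_ms: list[float]) -> dict[str, float]:
    """Mean latency plus the minimum context needed to interpret it.

    Returns sample size, mean, median, and a tail percentile so the
    mean is never read in isolation. Field names are illustrative.
    """
    # quantiles(n=100) returns the 99 percentile cut points;
    # index 49 is the 50th percentile, index 94 the 95th.
    p = quantiles(samples_ms, n=100)
    return {
        "n": len(samples_ms),
        "mean_ms": mean(samples_ms),
        "p50_ms": p[49],
        "p95_ms": p[94],
    }
```

A reader who sees `mean_ms` far below `p95_ms` immediately knows the distribution is tail-heavy, which the mean alone would hide.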
Idle RAM is especially important for fastCRW because the product differentiates on operational weight. On this site, the canonical fastCRW idle-RAM figure is a product metric for this deployment framing, not a universal guarantee across every deployment shape.
The goal is not to simulate every possible internet page or every deployment configuration. The goal is to produce claims that are scoped, reproducible, and explicit about where they stop applying.
A smaller honest benchmark is more useful than a bigger benchmark that quietly mixes workloads, environments, and unsupported inferences.
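The discipline described above can be made mechanical: every published number travels with the context needed to audit it. A minimal illustrative sketch (the field names and values are assumptions for illustration, not the site's schema or fastCRW's real results):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BenchmarkClaim:
    """A benchmark number that carries its own audit context."""
    metric: str       # e.g. "coverage" or "average latency"
    value: float
    unit: str
    dataset: str      # what was actually tested
    environment: str  # the deployment shape the number applies to
    caveat: str       # where the result stops being reliable

# Dummy values for illustration only.
claim = BenchmarkClaim("coverage", 0.0, "fraction",
                       "Firecrawl scrape-content dataset",
                       "single-node reference deployment",
                       "not a claim about every site on the internet")
```

A claim that cannot fill in all of these fields is, in the terms used above, a claim that should not appear on a benchmark page.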
The main internal dataset used in this release is the Firecrawl scrape-content dataset. That matters because it grounds the comparison in a public source rather than a hand-picked marketing demo.
The market-context section also references third-party reporting, especially where that reporting corroborates the framing used on this site.
To keep the site honest, a consistent set of rules applies across all benchmark and comparison pages.
Third-party benchmark material is useful, but it should be handled carefully: kept in its original scope, clearly attributed, and never blended silently into internal results.
This benchmark center is designed to support decision-making, not absolutist claims. That means every number ships with its scope, its source, and its limits. That discipline is what makes the benchmark center useful instead of noisy.