Search Benchmark: fastCRW vs Tavily vs Firecrawl
100-query concurrent search benchmark comparing fastCRW, Tavily, and Firecrawl on latency, win rate, and reliability across 10 query categories.
fastCRW won 73 out of 100 search latency races — 2.3x faster than Tavily on average, and faster than Firecrawl on both average and median latency.
Summary
We benchmarked fastCRW, Firecrawl, and Tavily head-to-head on 100 search queries — all run concurrently against all three providers. Same queries, same conditions.
fastCRW won 73 out of 100 latency races. More than Firecrawl and Tavily combined.
Search Latency Results
| Metric | fastCRW | Firecrawl | Tavily |
|---|---|---|---|
| Average latency | 880 ms | 954 ms | 2,000 ms |
| Median latency | 785 ms | 932 ms | 1,724 ms |
| P95 latency | 1,433 ms | 1,343 ms | 3,534 ms |
| Latency wins | 73/100 | 25/100 | 2/100 |
| Success rate | 100% | 100% | 100% |
fastCRW is 2.3x faster than Tavily on average and consistently faster than Firecrawl across all percentiles except P95 (where Firecrawl edges ahead by 90 ms). Tavily managed only 2 wins out of 100 — both on outlier queries where all providers were slow.
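For readers reproducing the numbers: a P95 like the one in the table is typically computed with the nearest-rank method. Whether the benchmark uses nearest-rank or an interpolated variant is an assumption — the report does not say — but a minimal sketch looks like this:

```typescript
// Nearest-rank percentile: the smallest sample value with at least
// p% of all samples at or below it. This is an illustration of how
// a P95 is commonly computed, not the benchmark's confirmed method.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}
```

With 100 latency samples, `percentile(latencies, 95)` returns the 95th-smallest value, so a single slow outlier cannot dominate the metric the way it dominates an average.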
Why These Numbers Matter
For AI agent developers, search latency compounds fast:
- 10 searches per agent run: fastCRW saves 11.2 seconds vs Tavily
- 100 agent runs per day: 18+ minutes of wall-clock time saved on search alone
- P95 matters: fastCRW's worst-case is still faster than Tavily's average — this means fewer timeouts and more predictable agent behavior in production
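The arithmetic behind those bullets, using the average latencies from the results table (a back-of-envelope sketch, not benchmark code):

```typescript
// Back-of-envelope latency compounding, using the average latencies
// (ms) from the results table above.
const avgMs = { fastCRW: 880, tavily: 2000 };

const searchesPerRun = 10;
const runsPerDay = 100;

// 1,120 ms saved per search x 10 searches = 11.2 s per agent run.
const savedPerRunMs = (avgMs.tavily - avgMs.fastCRW) * searchesPerRun;

// Across 100 runs: roughly 18.7 minutes of wall-clock time per day.
const savedPerDayMin = (savedPerRunMs * runsPerDay) / 60_000;

console.log(`${savedPerRunMs / 1000} s/run, ${savedPerDayMin.toFixed(1)} min/day`);
```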
What We Tested
100 Queries Across 10 Categories
The dataset spans real-world AI agent usage patterns:
| Category | Queries | Example |
|---|---|---|
| Programming | 15 | "Next.js 15 server actions best practices" |
| AI / Machine Learning | 15 | "fine tuning LLM with LoRA QLoRA guide" |
| DevOps / Cloud | 12 | "kubernetes horizontal pod autoscaler custom metrics" |
| Current Events | 10 | "SpaceX Starship latest launch update" |
| Product Research | 10 | "Supabase vs Firebase vs PocketBase comparison" |
| Security | 8 | "post-quantum cryptography NIST standards" |
| Scientific | 8 | "CRISPR gene editing clinical trials results" |
| Niche / Long-tail | 12 | "eBPF XDP packet processing Linux kernel" |
| Business / Startup | 5 | "SaaS pricing strategies freemium vs usage based" |
| Multilingual | 5 | "yapay zeka ile web kazıma otomasyonu" (Turkish: "web scraping automation with AI") |
The categories are deliberately diverse to test how each provider handles different query types — from technical programming queries to current events to non-English queries.
Methodology
Full transparency on how we ran this:
- Concurrent execution: All 3 providers tested simultaneously per query via `Promise.all` — no sequential advantage for any provider
- 5 results per query for all providers
- Tavily advanced search depth — we used Tavily's best mode, not basic
- Single run from the same network location, same time of day
- No retries — if a provider failed, that failure was recorded
- Full results: 124KB JSON report with per-query data
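The concurrent, no-retry timing loop described above can be sketched as follows. The `SearchFn` signature, provider names, and result shape are stand-ins for illustration, not the actual benchmark code:

```typescript
// Sketch of a concurrent per-query timing race. Each provider is
// wrapped in a timer; all fire at once via Promise.all.
type SearchFn = (query: string) => Promise<unknown>;

async function timeProvider(name: string, search: SearchFn, query: string) {
  const start = performance.now();
  try {
    await search(query);
    return { name, ms: performance.now() - start, ok: true };
  } catch {
    // No retries: a failure is recorded and counted as-is.
    return { name, ms: performance.now() - start, ok: false };
  }
}

async function raceQuery(query: string, providers: Record<string, SearchFn>) {
  // Promise.all starts every provider simultaneously, so none gains
  // a sequential advantage from being called first.
  const results = await Promise.all(
    Object.entries(providers).map(([name, fn]) => timeProvider(name, fn, query)),
  );
  const finished = results.filter((r) => r.ok).sort((a, b) => a.ms - b.ms);
  return { query, results, winner: finished[0]?.name };
}
```

Running this once per query and tallying `winner` across all 100 queries yields the win counts in the table above.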
What we are NOT claiming
- This is not a search quality benchmark. We did not evaluate result relevance, freshness, or accuracy. All three providers returned reasonable results for most queries.
- This is a single point-in-time measurement. Provider performance can vary by region, time of day, and query complexity.
- This benchmark was run by the fastCRW team. We encourage independent reproduction — the benchmark script and dataset are open source.
Why fastCRW Is Faster
The speed advantage comes from architectural decisions:
- Multi-engine aggregation: fastCRW queries multiple search engines simultaneously — the fastest response wins
- Rust-native processing: No JVM, no Python runtime, no Node.js. Just compiled code handling requests with minimal overhead
- Edge normalization: Results are normalized and scored without AI post-processing on the hot path
- Connection pooling: Persistent connections to upstream providers reduce handshake overhead
Scrape Benchmark (Bonus)
The same benchmark also tested scrape performance on 101 URLs. At the median, fastCRW was 2.2x faster than Firecrawl (255 ms vs 557 ms):
| Metric | fastCRW | Firecrawl |
|---|---|---|
| Average latency | 595 ms | 866 ms |
| Median latency | 255 ms | 557 ms |
| P95 latency | 1,236 ms | 1,999 ms |
Tavily does not offer a scrape endpoint, so it was not included in this portion of the benchmark.
Reproduce It Yourself
The benchmark script, query dataset, and full JSON results are open source:
```shell
git clone https://github.com/us/crw
cd crw
bun benchmarks/triple-bench.ts
```
Add your own API keys for all three providers and verify independently.
Related
- Full benchmark blog post — detailed analysis with additional context
- 1,000-URL scrape benchmark — CRW vs Firecrawl on Firecrawl's own dataset
- Benchmark methodology — how we approach benchmarking and source our claims
- fastCRW vs Tavily comparison — feature, pricing, and migration comparison