
Search Benchmark: fastCRW vs Tavily vs Firecrawl

100-query concurrent search benchmark comparing fastCRW, Tavily, and Firecrawl on latency, win rate, and reliability across 10 query categories.

Published
April 5, 2026
Updated
April 5, 2026
Category
benchmarks
Verdict

fastCRW won 73 out of 100 search latency races: 2.3x faster than Tavily on average, and faster than Firecrawl at the mean and median (Firecrawl edges ahead only at P95).

  • 880 ms average latency vs Tavily's 2,000 ms
  • 73/100 latency wins vs Firecrawl's 25 and Tavily's 2
  • 100% success rate across all 100 queries

Summary

We benchmarked fastCRW, Firecrawl, and Tavily head-to-head on 100 search queries — all run concurrently against all three providers. Same queries, same conditions.

fastCRW won 73 out of 100 latency races. More than Firecrawl and Tavily combined.

Search Latency Results

| Metric | fastCRW | Firecrawl | Tavily |
| --- | --- | --- | --- |
| Average latency | 880 ms | 954 ms | 2,000 ms |
| Median latency | 785 ms | 932 ms | 1,724 ms |
| P95 latency | 1,433 ms | 1,343 ms | 3,534 ms |
| Latency wins | 73/100 | 25/100 | 2/100 |
| Success rate | 100% | 100% | 100% |

fastCRW is 2.3x faster than Tavily on average and faster than Firecrawl at every percentile except P95, where Firecrawl edges ahead by 90 ms. Tavily managed only 2 wins out of 100, both on outlier queries where all providers were slow.
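For readers reproducing the numbers: summary statistics like the ones in the table can be computed from the per-query latency samples in the JSON report. The report's exact percentile method isn't stated, so the sketch below uses nearest-rank P95, one common convention:

```typescript
// Summary statistics over raw latency samples (milliseconds).

function average(samples: number[]): number {
  return samples.reduce((sum, x) => sum + x, 0) / samples.length;
}

function median(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// Nearest-rank P95: the smallest sample at or above the 95th percentile.
function p95(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil(0.95 * sorted.length) - 1;
  return sorted[rank];
}
```

Different percentile conventions (nearest-rank vs linear interpolation) can shift P95 by a few milliseconds, which matters when the gap between providers is as small as 90 ms.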

Why These Numbers Matter

For AI agent developers, search latency compounds fast:

  • 10 searches per agent run: fastCRW saves 11.2 seconds vs Tavily
  • 100 agent runs per day: 18+ minutes of wall-clock time saved on search alone
  • P95 matters: fastCRW's worst-case is still faster than Tavily's average — this means fewer timeouts and more predictable agent behavior in production
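The arithmetic behind those bullets follows directly from the measured averages:

```typescript
// Back-of-envelope check of the compounding numbers above,
// using the measured average latencies from the table.
const fastcrwAvgMs = 880;
const tavilyAvgMs = 2000;
const savedPerSearchMs = tavilyAvgMs - fastcrwAvgMs; // 1,120 ms per search

const searchesPerRun = 10;
const runsPerDay = 100;

const savedPerRunSec = (savedPerSearchMs * searchesPerRun) / 1000; // 11.2 s per agent run
const savedPerDayMin = (savedPerRunSec * runsPerDay) / 60; // ~18.7 min per day
```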

What We Tested

100 Queries Across 10 Categories

The dataset spans real-world AI agent usage patterns:

| Category | Queries | Example |
| --- | --- | --- |
| Programming | 15 | "Next.js 15 server actions best practices" |
| AI / Machine Learning | 15 | "fine tuning LLM with LoRA QLoRA guide" |
| DevOps / Cloud | 12 | "kubernetes horizontal pod autoscaler custom metrics" |
| Current Events | 10 | "SpaceX Starship latest launch update" |
| Product Research | 10 | "Supabase vs Firebase vs PocketBase comparison" |
| Security | 8 | "post-quantum cryptography NIST standards" |
| Scientific | 8 | "CRISPR gene editing clinical trials results" |
| Niche / Long-tail | 12 | "eBPF XDP packet processing Linux kernel" |
| Business / Startup | 5 | "SaaS pricing strategies freemium vs usage based" |
| Multilingual | 5 | "yapay zeka ile web kazıma otomasyonu" (Turkish: "web scraping automation with artificial intelligence") |

The categories are deliberately diverse to test how each provider handles different query types — from technical programming queries to current events to non-English queries.

Methodology

Full transparency on how we ran this:

  • Concurrent execution: All 3 providers tested simultaneously per query via Promise.all — no sequential advantage for any provider
  • 5 results per query for all providers
  • Tavily advanced search depth — we used Tavily's best mode, not basic
  • Single run from the same network location, same time of day
  • No retries — if a provider failed, that failure was recorded
  • Full results: 124KB JSON report with per-query data
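The concurrent-measurement pattern described above can be sketched as follows. The provider interface and names here are placeholders for illustration, not the actual code from `triple-bench.ts`:

```typescript
// Minimal sketch of per-query concurrent timing with Promise.all.
type SearchFn = (query: string) => Promise<unknown>;

interface TimedResult {
  provider: string;
  ms: number;
  ok: boolean;
}

async function timeOne(
  provider: string,
  search: SearchFn,
  query: string,
): Promise<TimedResult> {
  const start = performance.now();
  try {
    await search(query);
    return { provider, ms: performance.now() - start, ok: true };
  } catch {
    // No retries: a failure is recorded as-is.
    return { provider, ms: performance.now() - start, ok: false };
  }
}

// All providers fire simultaneously for the same query, so no
// provider gets a sequential advantage.
async function raceQuery(
  query: string,
  providers: Record<string, SearchFn>,
): Promise<TimedResult[]> {
  return Promise.all(
    Object.entries(providers).map(([name, fn]) => timeOne(name, fn, query)),
  );
}
```

Because each timer starts inside its own `timeOne` call and all calls are dispatched before any is awaited, the measured latencies reflect genuinely parallel requests.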

What we are NOT claiming

  • This is not a search quality benchmark. We did not evaluate result relevance, freshness, or accuracy. All three providers returned reasonable results for most queries.
  • This is a single point-in-time measurement. Provider performance can vary by region, time of day, and query complexity.
  • This benchmark was run by the fastCRW team. We encourage independent reproduction — the benchmark script and dataset are open source.

Why fastCRW Is Faster

The speed advantage comes from architectural decisions:

  1. Multi-engine aggregation: fastCRW queries multiple search engines simultaneously — the fastest response wins
  2. Rust-native processing: No JVM, no Python runtime, no Node.js. Just compiled code handling requests with minimal overhead
  3. Edge normalization: Results are normalized and scored without AI post-processing on the hot path
  4. Connection pooling: Persistent connections to upstream providers reduce handshake overhead

Scrape Benchmark (Bonus)

The same benchmark run also tested scrape performance on 101 URLs. fastCRW's median scrape latency was 2.2x lower than Firecrawl's (255 ms vs 557 ms):

| Metric | fastCRW | Firecrawl |
| --- | --- | --- |
| Average latency | 595 ms | 866 ms |
| Median latency | 255 ms | 557 ms |
| P95 latency | 1,236 ms | 1,999 ms |

Tavily does not offer a scrape endpoint, so it was not included in this portion of the benchmark.

Reproduce It Yourself

The benchmark script, query dataset, and full JSON results are open source:

```shell
git clone https://github.com/us/crw
cd crw
bun benchmarks/triple-bench.ts
```

Add your own API keys for all three providers and verify independently.
