Google ADK Web Scraping Integration — fastCRW [Firecrawl-Compatible]
Wire fastCRW into Google's Agent Development Kit as a FunctionTool. Firecrawl-compatible scrape and search, 6.6 MB RAM runtime, 92% coverage on the 1,000-URL benchmark.
Wrap fastCRW as a Google ADK FunctionTool so Gemini-powered agents can scrape, search, and crawl the live web.
Why Google ADK + fastCRW
Google's Agent Development Kit (ADK) is the production-oriented framework for building agents on Gemini. ADK ships first-class agent abstractions — LlmAgent, SequentialAgent, ParallelAgent — but it does not ship a built-in scraper. fastCRW is the right scraping primitive for ADK because the API is Firecrawl-compatible and the runtime is 6.6 MB of RAM, which keeps tool latency low. Wrap fastCRW once with a FunctionTool and every Google ADK agent — including multi-agent compositions — can reach the live web without bespoke plumbing.
Setup
- Install google-adk and requests.
- Provision a fastCRW API key from the dashboard.
- Export FASTCRW_API_KEY and your Google API credentials.
- Define Python functions that wrap fastCRW endpoints and register them as FunctionTool instances.
pip install -U google-adk requests
export FASTCRW_API_KEY="fcrw_..."
export GOOGLE_API_KEY="..."
There is no Google ADK-specific fastCRW package. The FunctionTool wrapper is enough.
Code Example
import os

import requests
from google.adk.agents import LlmAgent
from google.adk.tools import FunctionTool

FASTCRW_BASE = "https://api.fastcrw.com"


def fastcrw_scrape(url: str) -> dict:
    """Scrape a URL via fastCRW and return Markdown.

    Args:
        url: The URL to scrape.

    Returns:
        A dict with the scraped Markdown under the 'markdown' key.
    """
    r = requests.post(
        f"{FASTCRW_BASE}/v1/scrape",
        headers={"Authorization": f"Bearer {os.environ['FASTCRW_API_KEY']}"},
        json={"url": url, "formats": ["markdown"]},
        timeout=60,
    )
    r.raise_for_status()
    return {"markdown": r.json()["data"]["markdown"]}


def fastcrw_search(query: str, limit: int = 5) -> dict:
    """Web search via fastCRW. Returns ranked results.

    Args:
        query: The search query.
        limit: Maximum number of results.

    Returns:
        A dict with a 'results' list of search hits.
    """
    r = requests.post(
        f"{FASTCRW_BASE}/v1/search",
        headers={"Authorization": f"Bearer {os.environ['FASTCRW_API_KEY']}"},
        json={"query": query, "limit": limit},
        timeout=60,
    )
    r.raise_for_status()
    return {"results": r.json()["data"]}


scrape_tool = FunctionTool(func=fastcrw_scrape)
search_tool = FunctionTool(func=fastcrw_search)

researcher = LlmAgent(
    name="researcher",
    model="gemini-2.0-flash",
    description="Researches topics using live web sources via fastCRW.",
    instruction=(
        "Use fastcrw_search to find sources, then fastcrw_scrape "
        "to read the top results before answering the user."
    ),
    tools=[search_tool, scrape_tool],
)
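Before wiring the wrappers into an agent, the scrape function can be exercised offline by stubbing the HTTP layer with unittest.mock. This is a minimal sketch that assumes the response shape used above (Markdown under data.markdown); no API key or network access is needed:

```python
import os
from unittest import mock

import requests

FASTCRW_BASE = "https://api.fastcrw.com"


def fastcrw_scrape(url: str) -> dict:
    """Same wrapper as in the example above, reproduced so this snippet is self-contained."""
    r = requests.post(
        f"{FASTCRW_BASE}/v1/scrape",
        headers={"Authorization": f"Bearer {os.environ['FASTCRW_API_KEY']}"},
        json={"url": url, "formats": ["markdown"]},
        timeout=60,
    )
    r.raise_for_status()
    return {"markdown": r.json()["data"]["markdown"]}


# Stub the HTTP response so the wrapper runs without network or credentials.
fake = mock.Mock()
fake.raise_for_status.return_value = None
fake.json.return_value = {"data": {"markdown": "# Example"}}

with mock.patch.dict(os.environ, {"FASTCRW_API_KEY": "fcrw_test"}):
    with mock.patch("requests.post", return_value=fake):
        result = fastcrw_scrape("https://example.com")

assert result == {"markdown": "# Example"}
```

A check like this also confirms the wrapper returns a plain dict, which is what ADK's automatic function-calling schema generation expects.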
For a SequentialAgent pipeline that first searches via fastCRW, then scrapes the top hit, then summarizes — define one LlmAgent per step and chain them together. ADK passes data through session state so each step can read what the previous one wrote.
When to Use This
- Gemini-powered research agents — search and scrape the live web with fastCRW from inside Google ADK.
- Vertex AI deployments — package an ADK agent with fastCRW tools and deploy to Vertex AI Agent Engine.
- Sequential research pipelines — fastCRW search step → fastCRW scrape step → analysis step.
- Multi-agent ADK systems — give a researcher agent fastCRW tools and hand off to a writer agent for synthesis.
Limits + Gotchas
- ADK FunctionTool arguments must use JSON-serializable types. Stick to primitives and basic dicts.
- Function docstrings drive tool selection. Be precise about when each fastCRW tool should be used.
- Long fastCRW crawls may exceed ADK step budgets. Use scrape per URL inside loops and run crawls as standalone jobs.
- ADK on Vertex AI imposes per-call timeouts. Match the fastCRW request timeout to the deployment's invocation budget.
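The per-URL pattern from the last two bullets can be sketched with a simple deadline guard. Here scrape is a hypothetical stand-in for fastcrw_scrape, and budget_s would be set below the deployment's invocation timeout:

```python
import time


def scrape(url: str) -> dict:
    """Stand-in for fastcrw_scrape; the real version calls the fastCRW API."""
    return {"markdown": f"# {url}"}


def scrape_all(urls, budget_s: float = 30.0) -> dict:
    """Scrape each URL until the time budget is spent; return what we got."""
    deadline = time.monotonic() + budget_s
    results, skipped = {}, []
    for url in urls:
        if time.monotonic() >= deadline:
            skipped.append(url)  # stay inside the invocation budget
            continue
        results[url] = scrape(url)
    return {"scraped": results, "skipped": skipped}


out = scrape_all(["https://a.example", "https://b.example"], budget_s=5.0)
```

Returning the skipped URLs alongside the results lets the agent report partial coverage instead of timing out mid-call.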