Make Web Scraping Integration — fastCRW [Firecrawl-Compatible]
Add fastCRW to Make scenarios with the HTTP module. Firecrawl-compatible scrape and search, 6.6 MB RAM runtime, 92% coverage on the 1,000-URL benchmark.
Use Make's HTTP module to call fastCRW scrape and search endpoints from any scenario — no custom app required.
Why Make + fastCRW
Make is the visual automation tool ops and growth teams reach for when Zapier feels too constrained. Make scenarios are great at branching and aggregating, but the platform does not ship a first-party scraper. fastCRW closes that gap with a Firecrawl-compatible API that Make can call through its standard HTTP module. The fastCRW runtime is 6.6 MB of RAM and the API returns clean Markdown, so Make scenarios stay readable and the downstream LLM modules do not waste tokens on HTML noise. Existing Make scenarios that already call Firecrawl swap to fastCRW with a single base URL change.
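The base-URL swap can be sketched in a few lines. The Firecrawl host below reflects Firecrawl's public API; treat the exact fastCRW host as the one this page documents:

```python
# Sketch: pointing a Firecrawl-style integration at fastCRW.
# Only the base URL changes; endpoint paths stay the same.
FIRECRAWL_BASE = "https://api.firecrawl.dev/v1"
FASTCRW_BASE = "https://api.fastcrw.com/v1"

def scrape_endpoint(base_url: str) -> str:
    """Build the scrape endpoint from whichever base URL is configured."""
    return f"{base_url}/scrape"
```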
Setup
- Open Make and create a new scenario.
- Add a Connection of type HTTP with a custom header Authorization: Bearer fcrw_....
- Add an HTTP > Make a request module to the scenario.
- Set the URL to https://api.fastcrw.com/v1/scrape and the body type to JSON.
No Make custom app is required. The built-in HTTP module covers every fastCRW endpoint.
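The request the HTTP module ends up sending can be sketched in Python. Building (not sending) the request here; the placeholder key stands in for a real fcrw_... token:

```python
import json
import urllib.request

# Sketch of the HTTP request Make's HTTP module issues for a fastCRW scrape.
def build_scrape_request(url_to_scrape: str, api_key: str) -> urllib.request.Request:
    body = json.dumps({"url": url_to_scrape, "formats": ["markdown"]}).encode()
    return urllib.request.Request(
        "https://api.fastcrw.com/v1/scrape",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_scrape_request("https://example.com", "fcrw_placeholder")
```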
Code Example
Make HTTP module configuration for a fastCRW scrape:
- URL: https://api.fastcrw.com/v1/scrape
- Method: POST
- Headers: Authorization: Bearer fcrw_..., Content-Type: application/json
- Body type: Raw, Content type: JSON
- Request content:
{
"url": "{{ 1.url }}",
"formats": ["markdown"]
}
A complete Make scenario for batch scraping:
- Google Sheets — Search Rows module emits a list of URLs.
- Iterator module unpacks the list.
- HTTP > Make a request module calls fastCRW per URL.
- Set Variable extracts data.markdown from the fastCRW response.
- Aggregator assembles the Markdown into one bundle.
- OpenAI module summarizes the aggregated content.
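The six-module flow above can be sketched as plain Python, with a stub standing in for the HTTP module (a real call would POST to /v1/scrape as configured in Setup):

```python
# Sketch of the batch scenario's data flow. scrape() is a stubbed fastCRW
# response; real responses carry the page under data.markdown.
def scrape(url: str) -> dict:
    return {"data": {"markdown": f"# Page at {url}"}}

urls = ["https://example.com/a", "https://example.com/b"]  # Google Sheets rows

# Iterator + HTTP module: one scrape per URL; Set Variable: extract data.markdown.
chunks = [scrape(u)["data"]["markdown"] for u in urls]

# Aggregator: one bundle for the downstream OpenAI module.
bundle = "\n\n---\n\n".join(chunks)
```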
For fastCRW search inside Make:
{
"query": "{{ 1.searchQuery }}",
"limit": 5
}
POST that body to https://api.fastcrw.com/v1/search and Make's downstream modules can iterate the ranked results. The flat JSON shape that fastCRW returns plays well with Make's mapping panel — no manual JSON parsing module required.
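A sketch of how downstream modules consume a search response. The results field and item keys below are assumptions chosen to illustrate the flat shape described above, not a documented schema:

```python
import json

# Hypothetical fastCRW search response body; field names are assumptions.
raw = json.dumps({
    "results": [
        {"url": "https://example.com/a", "title": "A", "rank": 1},
        {"url": "https://example.com/b", "title": "B", "rank": 2},
    ]
})

# With Parse response on, Make exposes this structure directly to mapping.
results = json.loads(raw)["results"]
top_urls = [r["url"] for r in sorted(results, key=lambda r: r["rank"])]
```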
When to Use This
- Lead enrichment scenarios — scrape company URLs after a CRM trigger and write structured fields back to HubSpot or Salesforce via Make.
- Content monitoring — schedule a Make scenario to scrape competitor pages and post diffs to Slack.
- No-code RAG ingestion — pull pages into a Pinecone or Supabase vector store from inside Make.
- AI workflows — chain fastCRW with the Make OpenAI or Anthropic modules to run summarization or extraction on every scrape.
Limits + Gotchas
- Make scenarios have a per-execution timeout. For long fastCRW crawls, prefer scrape-per-URL inside an Iterator over a single deep crawl call.
- The HTTP module parses JSON automatically when Parse response is on — leave it on for fastCRW responses to avoid manual parseJSON calls.
- Make operations are billed per module execution. Filter early in the scenario so fastCRW only runs on URLs that survive the filter.
- The HTTP module cannot stream responses. For very large pages, paginate at the source or use fastCRW's crawl with limit set explicitly.
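One way to stay under the per-execution timeout is to cap how many scrape calls run per scenario execution. A minimal sketch, assuming a tunable batch size rather than any fastCRW- or Make-provided setting:

```python
# Sketch: split a URL list into batches so each scenario execution only
# scrapes a bounded number of pages. per_run is an assumption to tune.
def batches(urls: list[str], per_run: int = 25) -> list[list[str]]:
    return [urls[i:i + per_run] for i in range(0, len(urls), per_run)]

runs = batches([f"https://example.com/{i}" for i in range(60)], per_run=25)
```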