n8n

This guide explains how to connect your MrScraper scrapers to n8n to automate workflows.

n8n Integration

n8n is an open-source workflow automation tool that lets teams connect apps, services, and APIs using a visual, node-based interface. Similar to Zapier or Make, n8n automates repetitive tasks and builds workflows without custom code.

Workflows in n8n run automatically based on triggers such as schedules, webhooks, or events from connected tools.

Overview

The MrScraper n8n integration enables you to:

  • Create Scrapers - Set up AI-powered scrapers directly from your workflow
  • Rerun Scrapers - Trigger existing scrapers manually, on schedule, or from workflow events
  • Get Results - Retrieve scraped data including latest results, paginated results, or specific results by ID

Why Use This Integration?

Integrating MrScraper with n8n enables fully automated data pipelines:

  • Automatically create and rerun scrapers on a schedule or trigger
  • Fetch and process scraping results programmatically
  • Send scraped data to other tools (Google Sheets, databases, APIs, webhooks, notification systems)
  • Build end-to-end workflows by connecting MrScraper with hundreds of n8n-supported services

This transforms scraping from a standalone task into a seamless part of broader automation workflows.

Prerequisites

Before you start, ensure you have:

Understanding MrScraper Resources

The MrScraper node in n8n provides different Resources for various scraping operations. Understanding these resources will help you choose the right one for your workflow.

Account

Retrieve your MrScraper account information, including account type, usage limits, and token consumption.

Use Case: Monitor account status and usage in automated workflows.
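As a sketch of such a monitoring step, a Code node could check the remaining token budget before launching further scrapes. The response field names used here (`token_limit`, `tokens_used`) are assumptions — inspect the Account node's actual output in your workflow:

```javascript
// Hypothetical sketch: gate a workflow branch on remaining token budget.
// Field names `token_limit` and `tokens_used` are assumptions — check the
// JSON the Account resource actually returns in n8n.
function hasTokenBudget(account, reserve = 100) {
  const remaining = account.token_limit - account.tokens_used;
  return remaining > reserve;
}

// Example: with only 50 tokens left, the check fails, so the workflow
// can branch to a notification node instead of running another scrape.
const lowAccount = { token_limit: 10000, tokens_used: 9950 };
const canScrape = hasTokenBudget(lowAccount);
```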

Agent

Create a new AI-powered scraping agent. This resource creates agents that run and return results immediately.

Available Agent Types:

General Agent

Extract structured data from single pages or listings

Best for:

  • Product detail pages
  • Article pages
  • Profile pages
  • Single-item data extraction

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | The target URL to scrape |
| Prompt | Yes | Instructions for the agent on what data to extract |
| Mode | No | Cheap for sites with weak protection, Super for sites with stronger protection |
| Proxy Country | No | Country code for the proxy (e.g., us, uk, sg) |
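As an illustration of these parameters, a small pre-flight check in an n8n Code node might validate a configuration before the MrScraper node runs. The key names mirror the parameter table; the exact payload format the node sends is an assumption:

```javascript
// Sketch: validate a General Agent configuration before execution.
// Key names mirror the parameter table; the exact payload the node
// sends is an assumption.
function validateGeneralAgent(params) {
  const errors = [];
  if (!params.url) errors.push('URL is required');
  if (!params.prompt) errors.push('Prompt is required');
  if (params.mode && !['cheap', 'super'].includes(params.mode)) {
    errors.push('Mode must be "cheap" or "super"');
  }
  return errors;
}

// A configuration missing its prompt yields exactly one error.
const errors = validateGeneralAgent({ url: 'https://example.com/product/1' });
```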

Listing Agent

Extract data from paginated listings

Best for:

  • Product category pages
  • Search results
  • Multi-page listings
  • Directory pages

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | The target URL to scrape |
| Prompt | Yes | Instructions for data extraction |
| Proxy Country | No | Country code for proxy |

Map Agent

Discover and extract all URLs from a website

Best for:

  • Site mapping
  • URL discovery
  • Website crawling preparation
  • Building link inventories

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | Starting URL for crawling |
| Max Depth | No | How many levels deep to follow links |
| Max Pages | No | Maximum number of pages to crawl |
| Limit | No | Maximum number of results to return |
| Include Patterns | No | Regex patterns for URLs to include |
| Exclude Patterns | No | Regex patterns for URLs to exclude |
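Include and exclude patterns typically work as follows: a URL is kept when it matches at least one include pattern (if any are set) and matches no exclude pattern. A sketch of that filtering logic — MrScraper applies this server-side, and its exact semantics may differ:

```javascript
// Illustration of include/exclude pattern filtering: keep a URL if it
// matches some include pattern (when any are given) and no exclude
// pattern. MrScraper's server-side behavior may differ in detail.
function filterUrls(urls, includePatterns = [], excludePatterns = []) {
  const inc = includePatterns.map((p) => new RegExp(p));
  const exc = excludePatterns.map((p) => new RegExp(p));
  return urls.filter(
    (url) =>
      (inc.length === 0 || inc.some((re) => re.test(url))) &&
      !exc.some((re) => re.test(url))
  );
}

// Keep product pages, drop the cart page.
const kept = filterUrls(
  ['https://shop.example/p/1', 'https://shop.example/cart', 'https://shop.example/p/2'],
  ['/p/\\d+'],
  ['/cart']
);
```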

Batch Operation

Run multiple URLs against an existing AI or manual scraper in a single operation.

Use Case: Scrape multiple product pages, profiles, or articles using the same scraper configuration without creating separate workflow nodes.

Tips

Retrieve batch operation results by passing the batch operation ID as a parameter to the Get Result Details action node.

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| Mode | Yes | The scraper mode |
| Scraper ID | Yes | ID of your AI or Manual scraper (found in the scraper URL or settings) |
| URLs | Yes | List of URLs to scrape |
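To illustrate, a Code node could assemble the URL list from incoming workflow items before the batch runs. The payload key names here are assumptions, and `scr_123` is a hypothetical scraper ID:

```javascript
// Sketch: build a batch-operation payload from incoming n8n items.
// The keys `mode`, `scraper_id`, and `urls` are assumptions for
// illustration — check the node's actual parameter names.
function buildBatchPayload(scraperId, items, mode = 'ai') {
  // Deduplicate URLs so the same page is not scraped twice in one batch.
  const urls = [...new Set(items.map((item) => item.json.url))];
  return { mode, scraper_id: scraperId, urls };
}

// 'scr_123' is a hypothetical scraper ID.
const payload = buildBatchPayload('scr_123', [
  { json: { url: 'https://example.com/a' } },
  { json: { url: 'https://example.com/b' } },
  { json: { url: 'https://example.com/a' } },
]);
```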

Create Scraper

Create a persistent scraper in your MrScraper account that can be reused and triggered multiple times.

Use Case: When you need a reusable scraper configuration that you'll run multiple times with different URLs.

Available Scraper Types:

Create a reusable general agent scraper.

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | The target URL to scrape |
| Prompt | Yes | Instructions for data extraction |
| Mode | No | Cheap or Super scraping mode |
| Proxy Country | No | Country code for proxy |

Create a reusable listing agent scraper.

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | The target URL to scrape |
| Prompt | Yes | Instructions for data extraction |
| Proxy Country | No | Country code for proxy |

Create a reusable map agent scraper.

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | Starting URL for crawling |
| Max Depth | No | Link depth to follow |
| Max Pages | No | Maximum pages to crawl |
| Limit | No | Maximum results to return |
| Include Patterns | No | Regex patterns to include |
| Exclude Patterns | No | Regex patterns to exclude |

Rerun Scraper

Trigger an existing scraper to run again with new parameters. This requires an existing scraper created through the Create Scraper resource or in the MrScraper dashboard.

Use Case: Run the same scraper configuration on different URLs or schedules without recreating the scraper.

Available for all agent types:

General Agent

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| Scraper ID | Yes | ID of the scraper to run |
| URL | Yes | Target URL (overrides default) |
| Max Retry | No | Number of retry attempts on failure |

Listing Agent

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| Scraper ID | Yes | ID of the scraper to run |
| URL | Yes | Target URL (overrides default) |
| Max Retry | No | Number of retry attempts on failure |
| Max Pages | No | Maximum number of pages to scrape |
| Timeout | No | Request timeout in seconds |

Map Agent

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| Scraper ID | Yes | ID of the scraper to run |
| URL | Yes | Target URL (overrides default) |
| Max Retry | No | Number of retry attempts on failure |
| Max Depth | No | Link depth to follow |
| Max Pages | No | Maximum pages to scrape |
| Limit | No | Maximum results to return |
| Include Patterns | No | Regex patterns to include |
| Exclude Patterns | No | Regex patterns to exclude |

Important

The rerun action must match the agent type used by the scraper. For example, if your scraper uses General Agent, select "Run General Agent Scraper" in the rerun operation.
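The Max Retry parameter follows the usual retry semantics: one initial attempt plus up to that many retries on failure. A minimal sketch of the idea — illustrative only, since MrScraper applies this server-side when you set the parameter:

```javascript
// Illustration of Max Retry semantics: one initial attempt plus up to
// `maxRetry` retries on failure. Not MrScraper's implementation.
function runWithRetry(run, maxRetry = 2) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetry; attempt++) {
    try {
      return run();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError; // all attempts exhausted
}

// A run that fails twice, then succeeds on the third attempt.
let calls = 0;
const flakyRun = () => {
  calls += 1;
  if (calls < 3) throw new Error('target blocked the request');
  return 'scraped';
};
const outcome = runWithRetry(flakyRun, 2);
```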

Results

Retrieve data produced by your scrapers. This is typically the final step in a scraping workflow, where you fetch the data to send to other systems.

Available Operations:

Retrieve paginated results with filtering and sorting

Best for: Large result sets that need pagination or specific sorting

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| Scraper ID | Yes | ID of the scraper |
| Page | No | Page number to retrieve |
| Page Size | No | Number of results per page |
| Sort By | No | Field to sort by |
| Sort Order | No | Ascending or descending |
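For large result sets, a loop can page through until a short page signals the end. A sketch under the assumption that the operation returns its items under a `data` key — here `fetchPage` stands in for the paginated-results call:

```javascript
// Sketch: drain every page of a paginated result set. `fetchPage`
// stands in for the "get paginated results" operation; the `data`
// response key is an assumption.
function fetchAllResults(fetchPage, pageSize = 50) {
  const all = [];
  let page = 1;
  for (;;) {
    const { data } = fetchPage(page, pageSize);
    all.push(...data);
    if (data.length < pageSize) break; // a short page means the last page
    page += 1;
  }
  return all;
}

// Mock backend with 120 results, served 50 per page.
const items = Array.from({ length: 120 }, (_, i) => ({ id: i + 1 }));
const mockFetchPage = (page, size) => ({
  data: items.slice((page - 1) * size, page * size),
});
const allResults = fetchAllResults(mockFetchPage, 50);
```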

Retrieve the most recent results

Best for: Monitoring workflows where you only need the latest data

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| Scraper ID | Yes | ID of the scraper |
| Limit (N) | No | Number of latest results to fetch |

Retrieve a specific result by ID

Best for: Fetching a known result or following up on a specific scraping operation

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| Scraper ID | Yes | ID of the scraper |
| Result ID | Yes | Unique result ID to retrieve |

Common Use Case

This action is commonly used to pass scraped data to other n8n nodes like Google Sheets, databases, webhooks, or notifications.
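For example, a Code node placed between Get Results and a Google Sheets node might reshape items into n8n's `{ json: ... }` item format. The `name` and `price` fields are hypothetical — use whatever your prompt actually extracts:

```javascript
// Sketch for a Code node between "Get Results" and a downstream node:
// reshape scraped items into n8n's item format. The `name` and `price`
// fields are hypothetical placeholders.
function toRows(scrapedItems) {
  return scrapedItems.map((item) => ({
    json: { name: item.name, price: item.price },
  }));
}

const rows = toRows([
  { name: 'Widget', price: 9.99, extra: 'ignored' },
  { name: 'Gadget', price: 19.5 },
]);
```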

Scraping

Quick scraping operations for specific scenarios without creating persistent scrapers. These are pre-built actions that run and return results immediately.

Available Operations:

Extract all URLs from a website

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | Starting URL for crawling |
| Max Depth | No | Link depth to follow |
| Max Pages | No | Maximum pages to crawl |
| Limit | No | Maximum results to return |
| Include Patterns | No | Regex patterns to include |
| Exclude Patterns | No | Regex patterns to exclude |

Extract data using AI with custom prompts

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | Target URL to scrape |
| Prompt | Yes | Instructions for data extraction |
| Mode | No | Cheap or Super scraping mode |
| Proxy Country | No | Country code for proxy |

Extract data from multi-page listings

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | Target URL to scrape |
| Prompt | Yes | Instructions for data extraction |
| Max Pages | No | Maximum pages to scrape |

Extract data using preset schemas

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | Target URL to scrape |
| Structured Data Category | Yes | Preset schema (article, product, hotel, etc.) |
| Mode | No | Cheap or Super scraping mode |
| Proxy Country | No | Country code for proxy |

Available Categories: Article, Product, Hotel, Event, Recipe, Job Posting, and more.

Fetch rendered HTML via stealth browser

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | Target URL to fetch |
| Timeout | No | Maximum seconds to wait for page load |
| Geo Code | No | ISO country code for proxy |
| Block Resources | No | Block images, CSS, and fonts for faster loading |

Web Unblocker

Get raw HTML content from a page, bypassing anti-scraping measures.

Use Case: Troubleshooting scraping issues, retrieving page source, or when you need raw HTML without structured data extraction.

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| URL | Yes | Target URL to fetch |
| Timeout | No | Maximum seconds to wait for page load |
| Geo Code | No | ISO country code for proxy |
| Block Resources | No | Block images, CSS, and fonts for faster loading |

Setup Guide

Now that you understand the available resources, let's set up your first MrScraper workflow.

Step 1: Add the MrScraper Node

  1. Open the n8n workflow editor
  2. Click the + button to add a new node
  3. Search for MrScraper
  4. Select the MrScraper node

Step 2: Configure Credentials

  1. In Credential to connect with, click Create new credential
  2. Paste your MrScraper API key
  3. Click Save

Step 3: Choose Your Resource and Configure

  1. Select the Resource that matches your use case (see Understanding MrScraper Resources above)
  2. Fill in the required parameters for your selected resource
  3. Configure any optional parameters as needed

Step 4: Test and Execute

  1. Click Test step to verify your configuration
  2. Review the returned data
  3. Connect the output to other nodes in your workflow

Quick Tip

Start with the Agent resource for testing and one-off scraping. Once you have a working configuration, use Create Scraper to save it for reuse with the Rerun Scraper resource.

Example Workflows

Prebuilt Workflow Templates

MrScraper provides ready-to-deploy n8n workflow templates for common automation use cases. Each template is built around real-world scenarios and can be deployed in minutes.

Quick Start

Select the template that fits your use case, follow the setup guide, and you'll have a working automation running in minutes.
