n8n
This guide explains how to connect your MrScraper scrapers to n8n and automate your scraping workflows.
n8n is an open-source workflow automation tool that lets teams connect apps, services, and APIs using a visual, node-based interface. Similar to tools like Zapier or Make, n8n helps automate repetitive tasks and build workflows without writing custom code.
Workflows in n8n run automatically based on triggers such as schedules, webhooks, or events from connected tools.
What is the MrScraper n8n Integration?
The MrScraper integration lets n8n workflows perform the following actions:
- Create Scraper: Set up an AI-powered scraper directly from a workflow.
- Rerun Scraper: Trigger an existing scraper manually, on a schedule, or from another workflow event.
- Get Scraper Results: Retrieve scraped data, including latest results, paginated results, or a specific result by ID.
Why Use MrScraper with n8n?
Integrating MrScraper with n8n enables fully automated data pipelines without manual intervention. For example:
- Automatically create and rerun scrapers on a schedule or trigger.
- Fetch and process scraping results programmatically.
- Send scraped data to other tools such as Google Sheets, databases, APIs, webhooks, or notification systems.
- Build end-to-end workflows by connecting MrScraper with hundreds of services supported by n8n.
This integration allows scraping to become a seamless part of broader automation workflows rather than a standalone task.
Prerequisites
Before you start, make sure you have:
- A MrScraper API key.
- A MrScraper scraper with API access enabled.
- Access to an n8n instance (self-hosted or cloud).
Usage Steps
Step 1: Add the MrScraper Node to n8n
- Open the n8n workflow editor.
- Click the + button to add a new node.
- Search for MrScraper.
- Select the MrScraper node.
- Choose the action you want to run (for example, Create Scraper, Rerun Scraper, or Get Result).
Step 2: Configure the MrScraper Node
Set Up Credentials
- In Credential to connect with, click Create new credential.
- Paste your MrScraper API key.
- Save the credential.
Choose the Resource
In the Resource field, select what you want to do with MrScraper:
- Create Scraper: Create a new scraper.
- Rerun: Trigger a scraper rerun.
- Result: Fetch scraper results.
Create Scraper
Use this resource to create a new scraper in MrScraper. Choose the operation that matches your scraping needs.
Each agent takes different parameters; see below for details:
Create a general agent scraper by providing a URL and a prompt.
| Parameter | Required | Description |
|---|---|---|
| URL | Yes | The URL to be scraped. |
| Prompt | Yes | The message to instruct the agent on what data to extract. |
| Mode | No | Choose the scraping mode. Select Cheap if the website has little anti-bot protection; select Super otherwise. Read here for more information. |
| Proxy Country | No | Choose the proxy country (e.g., us, uk, sg); match it to the website's country domain. |
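If you prefer to build these inputs in an earlier node and map them with expressions, the sketch below shows a minimal n8n Code node that emits one item carrying the same parameters as the table above. The property names (url, prompt, mode, proxyCountry) are illustrative, not the MrScraper node's own field names; map each one into the node with an expression such as {{ $json.url }}.

```typescript
// Minimal n8n Code node ("Run Once for All Items") that prepares the
// inputs for a Create General Agent operation. Property names are
// illustrative; reference them in the MrScraper node with expressions
// like {{ $json.url }} and {{ $json.prompt }}.
return [
  {
    json: {
      url: 'https://example.com/pricing',                        // URL to be scraped
      prompt: 'Extract every plan name and its monthly price.',  // extraction instruction
      mode: 'cheap',                                             // "cheap" or "super" (see the Mode row above)
      proxyCountry: 'us',                                        // optional; match the site's country domain
    },
  },
];
```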
Create a listing agent scraper by providing a URL and a prompt.
| Parameter | Required | Description |
|---|---|---|
| URL | Yes | The URL to be scraped. |
| Prompt | Yes | The message to instruct the agent on what data to extract. |
| Proxy Country | No | Choose the proxy country (e.g., us, uk, sg); match it to the website's country domain. |
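A listing prompt tends to work best when it names the repeated record and the fields to extract from each one. The sketch below (all values are made up) prepares such inputs in a Code node so they can be mapped into the listing operation with expressions.

```typescript
// Minimal n8n Code node that feeds a listing-style URL and prompt to the
// MrScraper node. The prompt names the repeated record ("product card")
// and the fields to pull from each one. Property names are illustrative.
return [
  {
    json: {
      url: 'https://example.com/search?q=laptops',
      prompt: 'For every product card, extract the name, price, rating, and link to the product page.',
      proxyCountry: 'us',
    },
  },
];
```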
Create a map agent scraper by providing a starting URL and crawl settings such as depth, page limits, and URL patterns.
| Parameter | Required | Description |
|---|---|---|
| URL | Yes | The starting URL the scraper crawls from. |
| Max Depth | No | How deep the scraper should follow links from the starting page. |
| Max Pages | No | Maximum number of pages to scrape. Useful for listings or paginated pages. |
| Limit | No | Maximum number of results to return. |
| Include Patterns | No | Regular Expressions to include when scraping. |
| Exclude Patterns | No | Regular Expressions to exclude when scraping. |
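The include and exclude patterns are regular expressions; exactly what they are matched against is defined by MrScraper, but they are typically used to keep a crawl inside one section of a site. The values below are only examples, shown as a small sketch.

```typescript
// Example include/exclude patterns for a Map Agent crawl (values are
// illustrative; adapt them to the site you are mapping).
const includePatterns = [
  '^https://example\\.com/blog/.*',   // only follow links under /blog/
];
const excludePatterns = [
  '.*\\?page=\\d+',                   // skip paginated query-string duplicates
  '.*\\.(png|jpe?g|pdf)$',            // skip links to images and PDFs
];
```

Keeping Max Depth and Max Pages low on a first run makes it easier to confirm that the patterns behave as expected before scaling up.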
Rerun Scraper
Use this resource to trigger an existing scraper to run again. Choose the operation that matches your scraping needs.
Each agent takes different parameters; see below for details:
Re-run a manual scraper using the scraper ID and retrieve the results.
| Parameter | Required | Description |
|---|---|---|
| Scraper ID | Yes | The ID of the scraper you want to run. You can find this on the scraper page in MrScraper. |
| URL | Yes | Override the default target URL for this run. |
| Max Retry | No | Number of retry attempts if the scraping fails. |
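Because the URL field overrides the default target for each run, a common pattern is to fan a list of URLs out into one item each and let the rerun operation execute once per item. The Code node below is a minimal sketch of that step; the URL list is made up, and the downstream MrScraper node's URL field would be set to the expression {{ $json.url }}.

```typescript
// Minimal n8n Code node that emits one item per target URL so the
// downstream rerun operation runs once for each of them.
// Set the MrScraper node's URL field to {{ $json.url }}.
const urls = [
  'https://example.com/products/1',   // illustrative targets
  'https://example.com/products/2',
];

return urls.map((url) => ({ json: { url } }));
```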
Re-run a general agent scraper using the scraper ID and retrieve the results.
| Parameter | Required | Description |
|---|---|---|
| Scraper ID | Yes | The ID of the scraper you want to run. You can find this on the scraper page in MrScraper. |
| URL | Yes | Override the default target URL for this run. |
| Max Retry | No | Number of retry attempts if the scraping fails. |
Re-run a listing agent scraper using the scraper ID and retrieve the results.
| Parameter | Required | Description |
|---|---|---|
| Scraper ID | Yes | The ID of the scraper you want to run. You can find this on the scraper page in MrScraper. |
| URL | Yes | Override the default target URL for this run. |
| Max Retry | No | Number of retry attempts if the scraping fails. |
| Max Pages | No | Maximum number of pages to scrape. Useful for listings or paginated pages. |
| Timeout | No | Request timeout in seconds. Increase this for large or complex websites. |
Re-run a map agent scraper using the scraper ID and retrieve the results.
| Parameter | Required | Description |
|---|---|---|
| Scraper ID | Yes | The ID of the scraper you want to run. You can find this on the scraper page in MrScraper. |
| URL | Yes | Override the default target URL for this run. |
| Max Retry | No | Number of retry attempts if the scraping fails. |
| Max Depth | No | How deep the scraper should follow links from the starting page. |
| Max Pages | No | Maximum number of pages to scrape. Useful for listings or paginated pages. |
| Limit | No | Maximum number of results to return. |
| Include Patterns | No | Regular Expressions to include when scraping. |
| Exclude Patterns | No | Regular Expressions to exclude when scraping. |
Note
The rerun operation must match the agent type of the scraper ID you provide (e.g., a General Agent scraper must be rerun with Run General Agent Scraper).
Get Scraper Results
Use this resource to retrieve data produced by a scraper. Choose the operation that matches your needs:
Get results based on a given page number, page size, filters, and sorting.
| Parameter | Required | Description |
|---|---|---|
| Scraper ID | Yes | The ID of the scraper. Found in the URL or settings of your scraper in MrScraper. |
| Page | No | Page number to retrieve. |
| Page Size | No | Number of results per page. |
| Sort By | No | Field to sort the results by. |
| Sort Order | No | Sort direction: ascending or descending. |
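When there are more results than fit on one page, you can loop over pages with standard n8n building blocks (for example a Code node plus an IF node that loops back while more pages remain). The sketch below assumes the node returns the page's records in a data array and echoes the requested page number; check the actual output of the Get Scraper Results operation and adjust the property names.

```typescript
// Minimal n8n Code node used inside a pagination loop. It collects the
// current page's records and decides whether another page should be
// requested. The response shape ($json.data, $json.page) is an
// assumption; inspect the real node output and adjust.
const pageSize = 50;                                   // keep in sync with the node's Page Size
const current = $input.first().json;
const records = Array.isArray(current.data) ? current.data : [];

return [
  {
    json: {
      records,                                         // this page's results
      hasMore: records.length === pageSize,            // a short page usually means the last page
      nextPage: (Number(current.page) || 1) + 1,       // page number for the next iteration
    },
  },
];
```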
Get a scraper's latest results.
| Parameter | Required | Description |
|---|---|---|
| Scraper ID | Yes | The ID of the scraper. Found in the URL or settings of your scraper in MrScraper. |
| Limit (N) | No | Number of latest results to fetch. |
Get the details of a single result by its ID.
| Parameter | Required | Description |
|---|---|---|
| Scraper ID | Yes | The ID of the scraper. Found in the URL or settings of your scraper in MrScraper. |
| Result ID | Yes | Unique result ID to retrieve a single record. |
Tips
This resource is commonly used to pass scraped data to other n8n nodes, such as Google Sheets, databases, webhooks, or notifications.
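As a minimal sketch of that hand-off, the Code node below flattens the scraper's output into one flat row per record so a Google Sheets or database node can append them directly. The input shape (a data array with title, price, and url on each record) is an assumption; match it to what your scraper actually returns.

```typescript
// Minimal n8n Code node ("Run Once for All Items") that turns scraper
// output into flat rows for a downstream Google Sheets or database node.
// The field names (data, title, price, url) are assumptions.
const rows = $input.all().flatMap((item) => {
  const records = Array.isArray(item.json.data) ? item.json.data : [];
  return records.map((record) => ({
    json: {
      title: record.title ?? '',
      price: record.price ?? '',
      url: record.url ?? '',
      scrapedAt: new Date().toISOString(),   // when this row passed through the workflow
    },
  }));
});

return rows;
```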
Example Workflows
Create a Scraper
Create a scraper with the MrScraper n8n node and export results to Google Sheets. Use the generated outputs as reusable inputs for building end-to-end scraping workflows.
Listing Agent - General Agent
This workflow automates data extraction from real estate listing websites using a two-agent approach (Listing & General Agents).
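The glue between the two agents is typically a small transformation step: pull the detail-page links out of the Listing Agent's results and emit one item per link, then point a Rerun General Agent operation at {{ $json.url }}. The sketch below assumes each listing record exposes its link in a link field; check the real output and adjust the property path.

```typescript
// Minimal n8n Code node placed between the Listing Agent and the General
// Agent nodes. It extracts detail-page links from the listing results and
// emits one item per unique URL. The property path json.data[].link is an
// assumption about the listing output shape.
const links = $input.all().flatMap((item) => {
  const records = Array.isArray(item.json.data) ? item.json.data : [];
  return records
    .map((record) => record.link)
    .filter((link) => typeof link === 'string' && link.length > 0);
});

return [...new Set(links)].map((url) => ({ json: { url } }));
```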
Map Agent - General Agent
This workflow uses a two-agent approach to scrape entire websites (Map & General Agents).
Map Agent - Listing Agent - General Agent
This workflow combines three powerful agents for comprehensive website scraping (Map & Listing & General Agents).