MCP Server

Use MrScraper’s web scraping API through the Model Context Protocol (MCP) to let AI agents discover, fetch, and extract web data in real time. This cloud-hosted MCP server integrates directly with MrScraper, enabling secure, scalable scraping workflows without local setup or infrastructure management.

Features

AI Scraping

Extract structured data using natural language instructions powered by AI.

Geo Unblocker

Access geo-restricted and protected websites with built-in unblocker support.

Scraper Execution

Run scrapers manually or trigger AI-driven executions based on your workflow.

Bulk Scraping

Perform large-scale scraping operations across multiple URLs in one run.

Results Management

Retrieve, organize, and manage scraping results from a centralized interface.

Rotating Proxies

Improve reliability with rotating residential proxy support.

Cloud MCP Server

Use a fully managed, cloud-hosted MCP server with no local installation required.

Installation

Prerequisites

To use the MrScraper MCP server, you'll need:

  1. A MrScraper account
  2. A MrScraper API key

Remote Hosted URL via HTTP Transport Protocol

The simplest way to use MrScraper MCP is through our remotely hosted server. No local installation required!

{
  "mcpServers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}

Note

This configuration uses the HTTP transport protocol to connect to our hosted MCP server at https://mcp.mrscraper.com/mcp.
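Under the hood, Streamable HTTP clients exchange JSON-RPC 2.0 messages with the server. The following Python sketch builds the first two messages a client typically sends (initialize, then tools/list); the message shapes follow the MCP specification, the protocol version string is illustrative, and most users will rely on their MCP client rather than constructing these by hand:

```python
import json

MCP_URL = "https://mcp.mrscraper.com/mcp"  # hosted endpoint from this page

def jsonrpc(method, params=None, msg_id=1):
    """Build a JSON-RPC 2.0 request envelope as used by MCP."""
    msg = {"jsonrpc": "2.0", "id": msg_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# First message: the client introduces itself and negotiates a protocol version.
init = jsonrpc("initialize", {
    "protocolVersion": "2025-03-26",  # illustrative version string
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
})

# After initialization, the client can ask the server for its tool list.
list_tools = jsonrpc("tools/list", msg_id=2)

print(json.dumps(init))
print(json.dumps(list_tools))
```

Each message is POSTed to the endpoint above; the server replies with a JSON-RPC response (or an event stream) carrying the result.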

Running on Different Platforms

Running on Claude Desktop

Add this to your Claude config file (claude_desktop_config.json):

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}

Running on Claude Code

Add the MrScraper MCP server using the Claude Code CLI:

claude mcp add mrscraper --transport http https://mcp.mrscraper.com/mcp

Running on Cursor

Note

Requires Cursor version 0.45.6+.

  1. Open Cursor Settings.
  2. Go to Features > MCP Servers.
  3. Click + Add new global MCP server.
  4. Enter the following configuration:
{
  "mcpServers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}

Note

After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use MrScraper MCP when appropriate for web scraping tasks.

Running on Windsurf

Add this to your ~/.codeium/windsurf/mcp_config.json:

{
  "mcpServers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}

Running on VS Code

Add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing Ctrl + Shift + P and searching for Preferences: Open User Settings (JSON).

{
  "mcp": {
    "servers": {
      "mrscraper": {
        "type": "http",
        "url": "https://mcp.mrscraper.com/mcp"
      }
    }
  }
}

Optionally, you can add it to .vscode/mcp.json in your workspace to share the configuration with your team.

{
  "servers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}

Running on Google Antigravity

Google Antigravity allows you to configure MCP servers directly through its Agent interface.

  1. Open the Agent sidebar in the Editor or the Agent Manager view.
  2. Click the "…" (More Actions) menu and select MCP Servers.
  3. Select View raw config to open your local mcp_config.json file.
  4. Add the following configuration:
{
  "mcpServers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}
  5. Save the file and click Refresh in the Antigravity MCP interface.

Running on n8n

To connect the MrScraper MCP server in n8n:

  1. In your n8n workflow, add an AI Agent node.
  2. In the AI Agent configuration, add a new Tool.
  3. Select MCP Client Tool as the tool type.
  4. Enter the MCP server Endpoint:
https://mcp.mrscraper.com/mcp
  5. Set Server Transport to HTTP Streamable.
  6. Set Authentication to None (authentication is handled via API keys in tool calls).
  7. For Tools to include, select All, Selected, or All Except.

Available Tools

The MrScraper MCP server exposes the following tools:

1. create_scraper

Creates an AI-powered scraper that intelligently extracts structured data from a website.

Listing/General Agent

{
  "token": "your_api_token",
  "url": "https://example.com/page",
  "message": "Extract product names and prices",
  "agent": "general",
  "proxy_country": "US"
}
Parameters

Parameter | Required | Description
token | Yes | MrScraper API token.
url | Yes | Target URL to scrape.
message | Yes | Natural language instructions for data extraction.
agent | No | Scraper agent type: general (default), listing, or map.
proxy_country | No | ISO country code for proxy routing (e.g., US, GB).
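When an MCP client invokes this tool, the call travels as a tools/call request whose arguments carry the parameters above. A hedged Python sketch that assembles such a request and checks the required fields client-side (the envelope shape follows the MCP spec; the helper name and token value are placeholders, not part of the MrScraper API):

```python
import json

REQUIRED = {"token", "url", "message"}  # per the parameter table above

def create_scraper_call(arguments, msg_id=1):
    """Wrap create_scraper arguments in an MCP tools/call request."""
    missing = REQUIRED - arguments.keys()
    if missing:
        raise ValueError(f"missing required parameters: {sorted(missing)}")
    return {
        "jsonrpc": "2.0",
        "id": msg_id,
        "method": "tools/call",
        "params": {"name": "create_scraper", "arguments": arguments},
    }

call = create_scraper_call({
    "token": "your_api_token",
    "url": "https://example.com/page",
    "message": "Extract product names and prices",
    "agent": "general",        # optional: general (default), listing, or map
    "proxy_country": "US",     # optional ISO country code
})
print(json.dumps(call, indent=2))
```

Validating required parameters before sending saves a round trip when a token or URL is accidentally omitted.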

Map Agent

{
  "token": "your_api_token",
  "url": "https://example.com/page",
  "agent": "map",
  "max_depth": 2,
  "max_pages": 50,
  "limit": 1000,
  "include_patterns": "",
  "exclude_patterns": ""
}
Parameters

Parameter | Required | Description
token | Yes | MrScraper API token.
url | Yes | Target URL to scrape.
agent | Yes | Must be set to map.
max_depth | No | Maximum crawl depth (default: 2).
max_pages | No | Maximum number of pages to scrape (default: 50).
limit | No | Maximum number of records to extract (default: 1000).
include_patterns | No | URL patterns to include (regex).
exclude_patterns | No | URL patterns to exclude (regex).

2. rerun_scraper

Reruns an existing AI-powered scraper on a new URL with crawling parameters.

{
  "token": "your_api_token",
  "scraper_id": "scraper_12345",
  "url": "https://example.com/category",
  "max_depth": 2,
  "max_pages": 50,
  "limit": 1000,
  "include_patterns": "*/products/*",
  "exclude_patterns": "*/cart/*||*/checkout/*"
}

Parameters

Parameter | Required | Description
token | Yes | MrScraper API token.
scraper_id | Yes | Scraper ID returned from create_scraper.
url | Yes | Target URL to scrape.
max_depth | No | Maximum crawl depth (default: 2).
max_pages | No | Maximum number of pages to scrape (default: 50).
limit | No | Maximum number of records to extract (default: 1000).
include_patterns | No | URL patterns to include (regex).
exclude_patterns | No | URL patterns to exclude (regex).

3. bulk_rerun_scraper

Reruns an existing AI scraper on multiple URLs in a single batch.

{
  "token": "your_api_token",
  "scraper_id": "scraper_12345",
  "urls": [
    "https://example.com/page1",
    "https://example.com/page2",
    "https://example.com/page3"
  ]
}

Parameters

Parameter | Required | Description
token | Yes | MrScraper API token.
scraper_id | Yes | Scraper ID returned from create_scraper.
urls | Yes | Array of target URLs to scrape.

4. rerun_manual_scraper

Reruns a manually configured scraper (with custom selectors/rules) on a new URL.

{
  "token": "your_api_token",
  "scraper_id": "manual_scraper_67890",
  "url": "https://example.com/new-item"
}

Parameters

Parameter | Required | Description
token | Yes | MrScraper API token.
scraper_id | Yes | Manual scraper ID (configured via the web dashboard).
url | Yes | Target URL to scrape.

5. fetch_html

Fetches HTML content from a URL with stealth, unblocking, and rendering capabilities.

{
  "token": "your_api_token",
  "url": "https://example.com/page",
  "timeout": 120,
  "geo_code": "US",
  "block_resources": false
}

Parameters

Parameter | Required | Description
token | Yes | MrScraper API token.
url | Yes | Target URL to fetch.
timeout | No | Maximum wait time in seconds (default: 120).
geo_code | No | ISO country code for geolocation (default: US).
block_resources | No | Whether to block images, CSS, and fonts (default: false).
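The defaults in the table can be made explicit on the client side before the call is sent. A small Python sketch (the default values come from the table above; the helper name is ours, not part of the MrScraper API):

```python
# Documented fetch_html defaults, per the parameter table above.
FETCH_HTML_DEFAULTS = {"timeout": 120, "geo_code": "US", "block_resources": False}

def fetch_html_arguments(token, url, **overrides):
    """Merge the documented fetch_html defaults with caller overrides."""
    unknown = set(overrides) - set(FETCH_HTML_DEFAULTS)
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    return {"token": token, "url": url, **FETCH_HTML_DEFAULTS, **overrides}

# Fetch a UK-localized page with static resources blocked for speed.
args = fetch_html_arguments("your_api_token", "https://example.com/page",
                            geo_code="GB", block_resources=True)
```

Blocking images, CSS, and fonts is useful when only the HTML matters, since it reduces transfer time on heavy pages.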

6. get_all_results

Retrieves a paginated list of all scraping results with filtering, sorting, and search.

{
  "token": "your_api_token",
  "sort_field": "updatedAt",
  "sort_order": "DESC",
  "page_size": 10,
  "page": 1,
  "search": "product",
  "date_range_column": "updatedAt",
  "start_at": "2024-01-01",
  "end_at": "2024-01-08"
}

Parameters

Parameter | Required | Description
token | Yes | MrScraper API token.
sort_field | No | Sort field (createdAt, updatedAt, id, type, url, status, etc.).
sort_order | No | Sort order: ASC or DESC (default: DESC).
page_size | No | Number of results per page (default: 10).
page | No | Page number (default: 1).
search | No | Free-text search query.
date_range_column | No | Date field for filtering (updatedAt, createdAt, scrapedAt).
start_at | No | Start date (ISO 8601: YYYY-MM-DD).
end_at | No | End date (ISO 8601: YYYY-MM-DD).
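Because results are paginated, a client typically advances page until a page comes back short. A Python sketch of that loop using the documented parameters (fetch_page is a hypothetical stand-in for the real tools/call round trip; the demo backend is fabricated for illustration):

```python
def page_params(token, page, page_size=10):
    """Arguments for one get_all_results page, using the documented defaults."""
    return {
        "token": token,
        "sort_field": "updatedAt",
        "sort_order": "DESC",
        "page_size": page_size,
        "page": page,
    }

def iter_all_results(token, fetch_page, page_size=10):
    """Yield every result, advancing `page` until a short page is returned.

    `fetch_page(params) -> list` stands in for the actual tool call.
    """
    page = 1
    while True:
        batch = fetch_page(page_params(token, page, page_size))
        yield from batch
        if len(batch) < page_size:
            break
        page += 1

# Demo against a fake backend holding 7 results, paged 3 at a time.
fake = [{"id": f"result_{i}"} for i in range(7)]
results = list(iter_all_results(
    "your_api_token",
    lambda p: fake[(p["page"] - 1) * p["page_size"]: p["page"] * p["page_size"]],
    page_size=3,
))
```

The short-page check avoids one extra empty request per listing when the final page is partially filled.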

7. get_result_by_id

Retrieves detailed information for a specific scraping result by its result ID.

{
  "token": "your_api_token",
  "result_id": "result_12345"
}

Parameters

ParameterRequiredDescription
tokenYesMrScraper API token.
result_idYesUnique identifier of the scraping result.

Error Handling

The MrScraper MCP server reports errors with standard HTTP status codes:

  • 401 Unauthorized: Invalid or missing API token.
  • 500 Internal Server Error: Unexpected server error (contact support@mrscraper.com if the issue persists).

Example error response:

{
  "error": "Unauthorized or invalid token. Please go to https://app.mrscraper.com to get your token.",
  "status_code": 401
}
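On the client side, these payloads can be surfaced as exceptions instead of being inspected ad hoc. A hedged Python sketch (the exception class and helper are ours; the status codes and the message format come from this section):

```python
class MrScraperError(Exception):
    """Raised when a tool response carries an error payload."""
    def __init__(self, status_code, message):
        super().__init__(f"{status_code}: {message}")
        self.status_code = status_code

def raise_for_tool_error(payload):
    """Raise if the response carries an error; otherwise return it unchanged."""
    if "error" in payload:
        raise MrScraperError(payload.get("status_code", 500), payload["error"])
    return payload

# The 401 example response from above, routed through the helper.
resp = {
    "error": "Unauthorized or invalid token. Please go to https://app.mrscraper.com to get your token.",
    "status_code": 401,
}
try:
    raise_for_tool_error(resp)
except MrScraperError as exc:
    caught = exc.status_code
```

Keeping the status code on the exception lets callers retry 500s while treating 401s as a configuration problem.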
