# MCP Server
Use MrScraper’s web scraping API through the Model Context Protocol (MCP) to let AI agents discover, fetch, and extract web data in real time. This cloud-hosted MCP server integrates directly with MrScraper, enabling secure, scalable scraping workflows without local setup or infrastructure management.
## Features
- **AI Scraping**: Extract structured data using natural language instructions powered by AI.
- **Geo Unblocker**: Access geo-restricted and protected websites with built-in unblocker support.
- **Scraper Execution**: Run scrapers manually or trigger AI-driven executions based on your workflow.
- **Bulk Scraping**: Perform large-scale scraping operations across multiple URLs in one run.
- **Results Management**: Retrieve, organize, and manage scraping results from a centralized interface.
- **Rotating Proxies**: Improve reliability with rotating residential proxy support.
- **Cloud MCP Server**: Use a fully managed, cloud-hosted MCP server with no local installation required.
## Installation
### Prerequisites

To use the MrScraper MCP server, you'll need:

- A MrScraper API token (available at https://app.mrscraper.com)
- An MCP-compatible client (Claude Desktop, Claude Code, Cursor, Windsurf, VS Code, etc.)
### Remote Hosted URL via HTTP Transport Protocol

The simplest way to use MrScraper MCP is through our remotely hosted server. No local installation required!

```json
{
  "mcpServers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}
```

> **Note:** This configuration uses the HTTP transport protocol to connect to our hosted MCP server at `https://mcp.mrscraper.com/mcp`.
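If you are wiring up a custom client rather than one of the editors below, the hosted endpoint speaks MCP's JSON-RPC 2.0 wire format over HTTP. A minimal Python sketch of building a request envelope — the helper and headers follow the general MCP Streamable HTTP convention, nothing here is MrScraper-specific:

```python
import json

MCP_URL = "https://mcp.mrscraper.com/mcp"

def jsonrpc(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 envelope, the wire format MCP uses over HTTP."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# "tools/list" asks the server which tools it exposes.
list_tools = jsonrpc("tools/list")

# To actually send it, POST the JSON body to MCP_URL with
#   Content-Type: application/json
#   Accept: application/json, text/event-stream
# as expected by the MCP Streamable HTTP transport.
print(json.dumps(list_tools))
```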
## Running on Different Platforms
### Running on Claude Desktop

Add this to your Claude config file (`claude_desktop_config.json`):

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}
```

### Running on Claude Code

Add the MrScraper MCP server using the Claude Code CLI:

```shell
claude mcp add mrscraper --transport http https://mcp.mrscraper.com/mcp
```

### Running on Cursor
> **Note:** Requires Cursor version 0.45.6+.
- Open Cursor Settings.
- Go to Features > MCP Servers.
- Click + Add new global MCP server.
- Enter the following configuration:
```json
{
  "mcpServers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}
```

> **Note:** After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use MrScraper MCP when appropriate for web scraping tasks.
### Running on Windsurf

Add this to your `./codeium/windsurf/model_config.json`:
```json
{
  "mcpServers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}
```

### Running on VS Code

Add the following JSON block to your User Settings (JSON) file in VS Code. You can open it by pressing `Ctrl + Shift + P` and searching for **Preferences: Open User Settings (JSON)**.
```json
{
  "mcp": {
    "servers": {
      "mrscraper": {
        "type": "http",
        "url": "https://mcp.mrscraper.com/mcp"
      }
    }
  }
}
```

Optionally, you can add it to `.vscode/mcp.json` in your workspace to share the configuration with your team.
```json
{
  "servers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}
```

### Running on Google Antigravity

Google Antigravity allows you to configure MCP servers directly through its Agent interface.

- Open the Agent sidebar in the Editor or the Agent Manager view.
- Click the "…" (More Actions) menu and select MCP Servers.
- Select View raw config to open your local `mcp_config.json` file.
- Add the following configuration:
```json
{
  "mcpServers": {
    "mrscraper": {
      "type": "http",
      "url": "https://mcp.mrscraper.com/mcp"
    }
  }
}
```

- Save the file and click Refresh in the Antigravity MCP interface.
### Running on n8n
To connect the MrScraper MCP server in n8n:
- In your n8n workflow, add an AI Agent node.
- In the AI Agent configuration, add a new Tool.
- Select MCP Client Tool as the tool type.
- Enter the MCP server Endpoint: `https://mcp.mrscraper.com/mcp`
- Set Server Transport to HTTP Streamable.
- Set Authentication to None (authentication is handled via API keys in tool calls).
- For Tools to include, you can select All, Selected, or All Except.
## Available Tools
The MrScraper MCP server provides the following scraping tools:
### 1. `create_scraper`
Creates an AI-powered scraper that intelligently extracts structured data from a website.
#### Listing/General Agent

```json
{
  "token": "your_api_token",
  "url": "https://example.com/page",
  "message": "Extract product names and prices",
  "agent": "general",
  "proxy_country": "US"
}
```

#### Parameters
| Parameter | Required | Description |
|---|---|---|
| `token` | Yes | MrScraper API token. |
| `url` | Yes | Target URL to scrape. |
| `message` | Yes | Natural language instructions for data extraction. |
| `agent` | No | Scraper agent type: `general` (default), `listing`, or `map`. |
| `proxy_country` | No | ISO country code for proxy routing (e.g., `US`, `GB`). |
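As a concrete illustration, a client invokes `create_scraper` through MCP's `tools/call` method, passing the fields above as arguments. A sketch of building that payload — the envelope shape comes from the MCP specification, and the token value is a placeholder:

```python
import json

def call_tool(name, arguments, req_id=1):
    """Wrap a tool name and its arguments in an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

payload = call_tool("create_scraper", {
    "token": "your_api_token",                      # placeholder token
    "url": "https://example.com/page",
    "message": "Extract product names and prices",
    "agent": "general",                             # optional, the default
    "proxy_country": "US",                          # optional ISO country code
})
print(json.dumps(payload, indent=2))
```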
#### Map Agent

```json
{
  "token": "your_api_token",
  "url": "https://example.com/page",
  "agent": "map",
  "max_depth": 2,
  "max_pages": 50,
  "limit": 1000,
  "include_patterns": "",
  "exclude_patterns": ""
}
```

#### Parameters
| Parameter | Required | Description |
|---|---|---|
| `token` | Yes | MrScraper API token. |
| `url` | Yes | Target URL to scrape. |
| `agent` | Yes | The `map` agent type. |
| `max_depth` | No | Maximum crawl depth (default: 2). |
| `max_pages` | No | Maximum number of pages to scrape (default: 50). |
| `limit` | No | Maximum number of records to extract (default: 1000). |
| `include_patterns` | No | URL patterns to include (regex). |
| `exclude_patterns` | No | URL patterns to exclude (regex). |
### 2. `rerun_scraper`
Reruns an existing AI-powered scraper on a new URL with crawling parameters.
```json
{
  "token": "your_api_token",
  "scraper_id": "scraper_12345",
  "url": "https://example.com/category",
  "max_depth": 2,
  "max_pages": 50,
  "limit": 1000,
  "include_patterns": "*/products/*",
  "exclude_patterns": "*/cart/*||*/checkout/*"
}
```

#### Parameters
| Parameter | Required | Description |
|---|---|---|
| `token` | Yes | MrScraper API token. |
| `scraper_id` | Yes | Scraper ID returned from `create_scraper`. |
| `url` | Yes | Target URL to scrape. |
| `max_depth` | No | Maximum crawl depth (default: 2). |
| `max_pages` | No | Maximum number of pages to scrape (default: 50). |
| `limit` | No | Maximum number of records to extract (default: 1000). |
| `include_patterns` | No | URL patterns to include (regex). |
| `exclude_patterns` | No | URL patterns to exclude (regex). |
### 3. `bulk_rerun_scraper`
Reruns an existing AI scraper on multiple URLs in a single batch.
```json
{
  "token": "your_api_token",
  "scraper_id": "scraper_12345",
  "urls": [
    "https://example.com/page1",
    "https://example.com/page2",
    "https://example.com/page3"
  ]
}
```

#### Parameters
| Parameter | Required | Description |
|---|---|---|
| `token` | Yes | MrScraper API token. |
| `scraper_id` | Yes | Scraper ID returned from `create_scraper`. |
| `urls` | Yes | Array of target URLs to scrape. |
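Because one malformed entry can waste a whole batch, it is worth validating the URL list client-side before calling the tool. A small sketch — `bulk_arguments` is a hypothetical helper, not part of the MrScraper API:

```python
from urllib.parse import urlparse

def bulk_arguments(token, scraper_id, urls):
    """Assemble bulk_rerun_scraper arguments, rejecting obviously bad input."""
    if not urls:
        raise ValueError("urls must be a non-empty list")
    for u in urls:
        parts = urlparse(u)
        if parts.scheme not in ("http", "https") or not parts.netloc:
            raise ValueError(f"not an absolute http(s) URL: {u!r}")
    return {"token": token, "scraper_id": scraper_id, "urls": list(urls)}

args = bulk_arguments(
    "your_api_token",        # placeholder token
    "scraper_12345",         # placeholder scraper ID
    [f"https://example.com/page{i}" for i in (1, 2, 3)],
)
print(len(args["urls"]))  # 3
```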
### 4. `rerun_manual_scraper`
Reruns a manually configured scraper (with custom selectors/rules) on a new URL.
```json
{
  "token": "your_api_token",
  "scraper_id": "manual_scraper_67890",
  "url": "https://example.com/new-item"
}
```

#### Parameters
| Parameter | Required | Description |
|---|---|---|
| `token` | Yes | MrScraper API token. |
| `scraper_id` | Yes | Manual scraper ID (configured via web dashboard). |
| `url` | Yes | Target URL to scrape. |
### 5. `fetch_html`
Fetches HTML content from a URL with stealth, unblocking, and rendering capabilities.
```json
{
  "token": "your_api_token",
  "url": "https://example.com/page",
  "timeout": 120,
  "geo_code": "US",
  "block_resources": false
}
```

#### Parameters
| Parameter | Required | Description |
|---|---|---|
| `token` | Yes | MrScraper API token. |
| `url` | Yes | Target URL to fetch. |
| `timeout` | No | Maximum wait time in seconds (default: 120). |
| `geo_code` | No | ISO country code for geolocation (default: `US`). |
| `block_resources` | No | Whether to block images, CSS, and fonts (default: `false`). |
### 6. `get_all_results`
Retrieves a paginated list of all scraping results with filtering, sorting, and search.
```json
{
  "token": "your_api_token",
  "sort_field": "updatedAt",
  "sort_order": "DESC",
  "page_size": 10,
  "page": 1,
  "search": "product",
  "date_range_column": "updatedAt",
  "start_at": "2024-01-01",
  "end_at": "2024-01-08"
}
```

#### Parameters
| Parameter | Required | Description |
|---|---|---|
| `token` | Yes | MrScraper API token. |
| `sort_field` | No | Sort field (`createdAt`, `updatedAt`, `id`, `type`, `url`, `status`, etc.). |
| `sort_order` | No | Sort order: `ASC` or `DESC` (default: `DESC`). |
| `page_size` | No | Number of results per page (default: 10). |
| `page` | No | Page number (default: 1). |
| `search` | No | Free-text search query. |
| `date_range_column` | No | Date field for filtering (`updatedAt`, `createdAt`, `scrapedAt`). |
| `start_at` | No | Start date (ISO 8601: `YYYY-MM-DD`). |
| `end_at` | No | End date (ISO 8601: `YYYY-MM-DD`). |
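Since results are paginated, fetching a date window usually means stepping `page` while holding the filters fixed. A sketch of assembling successive argument dicts — `results_query` is a hypothetical helper; only the parameter names come from the table above:

```python
def results_query(token, page=1, page_size=10, **filters):
    """Build a get_all_results argument dict; extra filters pass through as-is."""
    args = {"token": token, "page": page, "page_size": page_size}
    args.update(filters)
    return args

# Argument dicts for pages 1-3 of results updated in one ISO 8601 date window.
pages = [
    results_query(
        "your_api_token",            # placeholder token
        page=p,
        date_range_column="updatedAt",
        start_at="2024-01-01",
        end_at="2024-01-08",
    )
    for p in (1, 2, 3)
]
print([q["page"] for q in pages])  # [1, 2, 3]
```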
### 7. `get_result_by_id`
Retrieves detailed information for a specific scraping result by its result ID.
```json
{
  "token": "your_api_token",
  "result_id": "result_12345"
}
```

#### Parameters
| Parameter | Required | Description |
|---|---|---|
| `token` | Yes | MrScraper API token. |
| `result_id` | Yes | Unique identifier of the scraping result. |
## Error Handling
The MrScraper MCP server returns structured errors:

- **401 Unauthorized**: Invalid or missing API token.
- **500 Internal Server Error**: Unexpected server error (contact support@mrscraper.com if the issue persists).
Example error response:

```json
{
  "error": "Unauthorized or invalid token. Please go to https://app.mrscraper.com to get your token.",
  "status_code": 401
}
```
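A client can branch on this documented error shape before trusting a response body. A minimal sketch — `check_response` is a hypothetical helper; it relies only on the `error` and `status_code` fields shown above:

```python
def check_response(body):
    """Raise on the documented error shape; otherwise return the body unchanged."""
    if isinstance(body, dict) and "error" in body:
        code = body.get("status_code")
        if code == 401:
            # Invalid or missing API token.
            raise PermissionError(body["error"])
        raise RuntimeError(f"server error {code}: {body['error']}")
    return body

try:
    check_response({"error": "Unauthorized or invalid token.", "status_code": 401})
except PermissionError as exc:
    print("auth failed:", exc)
```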