# Node.js

Scrape, crawl, and extract structured data from websites using the MrScraper Node SDK.

Use the official `@mrscraper/sdk` package to fetch HTML, run AI scrapers, rerun existing scrapers, and retrieve results from your Node.js apps.
## Installation

```bash
npm install @mrscraper/sdk
```

Requirements:

- Node.js >= 18
- ES Modules enabled (`"type": "module"` in your `package.json`)

```json
{
  "type": "module"
}
```

## Authentication
Set your API token as an environment variable. Get your API key from the MrScraper app; the Generate Token guide explains how to generate an API token.
```bash
export MRSCRAPER_API_TOKEN=your_token_here
```

**Tip:** Every SDK method also accepts an optional `token` parameter if you want to override `MRSCRAPER_API_TOKEN` per request.
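For example, a process that serves multiple workspaces could pick a credential per request and pass it through that parameter. The `MRSCRAPER_TOKEN_<NAME>` naming below is a hypothetical convention for this sketch, not part of the SDK:

```js
// Pick a per-workspace token, falling back to the global one.
// Pass the result to any SDK call, e.g.
//   fetchHtml({ url, token: tokenForWorkspace("acme") })
function tokenForWorkspace(name, env = process.env) {
  return env[`MRSCRAPER_TOKEN_${name.toUpperCase()}`] ?? env.MRSCRAPER_API_TOKEN;
}
```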
## Quick Start
```js
import {
  fetchHtml,
  createAiScraper,
  MrScraperError,
} from "@mrscraper/sdk";

try {
  // 1. Fetch raw HTML (or JSON string) from a URL
  const html = await fetchHtml({
    url: "https://example.com",
  });
  console.log(html);

  // 2. Create an AI scraper (listing agent)
  const scraper = await createAiScraper({
    url: "https://example.com/products",
    message: "Extract all product names and prices",
    agent: "listing",
  });
  console.log(scraper);
} catch (err) {
  if (err instanceof MrScraperError) {
    console.error(`[${err.status ?? "network"}] ${err.message}`);
  } else {
    throw err;
  }
}
```

## Usage

### Fetch raw HTML (stealth browser)
Fetch raw HTML (or JSON text) from a target URL.
```js
import { fetchHtml } from "@mrscraper/sdk";

const html = await fetchHtml({
  url: "https://example.com",
  timeout: 120,
  geoCode: "US",
  blockResources: false,
  // token: "optional_override_token"
});
```

Options:
| Option | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | - | URL to fetch |
| timeout | number | No | 120 | Timeout in seconds (1-600) |
| geoCode | string | No | "US" | Two-letter ISO country code |
| blockResources | boolean | No | false | Block images/fonts/CSS for faster fetches |
| token | string | No | - | Per-request token override |
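A small client-side guard can mirror these constraints before a request leaves your app. The helper below is a sketch based only on the table above; it is not part of the SDK, which does its own validation:

```js
// Validate fetchHtml options against the documented constraints:
// url required, timeout an integer from 1 to 600 seconds,
// geoCode a two-letter ISO country code.
function validateFetchOptions({ url, timeout = 120, geoCode = "US", blockResources = false }) {
  if (typeof url !== "string" || !/^https?:\/\//.test(url)) {
    throw new Error("url must be an http(s) URL");
  }
  if (!Number.isInteger(timeout) || timeout < 1 || timeout > 600) {
    throw new Error("timeout must be an integer between 1 and 600 seconds");
  }
  if (!/^[A-Z]{2}$/.test(geoCode)) {
    throw new Error("geoCode must be a two-letter ISO country code");
  }
  return { url, timeout, geoCode, blockResources };
}
```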
### Create AI scraper
Create a new AI scraper.
Supported agents:
- `general`: Extract structured data from natural-language instructions.
- `listing`: Optimized for list pages (products, jobs, articles, etc.).
- `map`: Crawl a site and collect URLs.
General/listing example:
```js
import { createAiScraper } from "@mrscraper/sdk";

const scraper = await createAiScraper({
  url: "https://example.com/products",
  message: "Extract all product names and prices",
  agent: "listing",
  proxyCountry: "US",
  // token: "optional_override_token"
});
```

Map example:
```js
import { createAiScraper } from "@mrscraper/sdk";

const mapResult = await createAiScraper({
  url: "https://example.com",
  agent: "map",
  maxDepth: 2,
  maxPages: 50,
  limit: 1000,
  includePatterns: "/blog",
  excludePatterns: "/admin",
  // token: "optional_override_token"
});
```

Key options:
| Option | Type | Required | Default |
|---|---|---|---|
| url | string | Yes | - |
| message | string | No | "" |
| agent | "general" \| "listing" \| "map" | No | "general" |
| proxyCountry | string \| null | No | - |
| maxDepth | number | No | 2 |
| maxPages | number | No | 50 |
| limit | number | No | 1000 |
| includePatterns | string | No | "" |
| excludePatterns | string | No | "" |
| token | string | No | - |
### Rerun AI scraper
Rerun an existing AI scraper on a new URL.
```js
import { rerunAiScraper } from "@mrscraper/sdk";

const result = await rerunAiScraper({
  scraperId: "your-scraper-id",
  url: "https://example.com/new-page",
  maxDepth: 2,
  maxPages: 50,
  limit: 1000,
  includePatterns: "",
  excludePatterns: "",
  // token: "optional_override_token"
});
```

### Bulk rerun (AI scraper)
Rerun an AI scraper in bulk.
```js
import { bulkRerunAiScraper } from "@mrscraper/sdk";

const bulkResult = await bulkRerunAiScraper({
  scraperId: "your-scraper-id",
  urls: ["https://example.com/page1", "https://example.com/page2"],
  // token: "optional_override_token"
});
```

### Rerun manual scraper
Rerun a dashboard-configured manual scraper on a URL.
```js
import { rerunManualScraper } from "@mrscraper/sdk";

const single = await rerunManualScraper({
  scraperId: "your-manual-scraper-id",
  url: "https://example.com/target",
  // token: "optional_override_token"
});
```

### Bulk rerun (manual scraper)
Rerun a manual scraper across multiple URLs.
```js
import { bulkRerunManualScraper } from "@mrscraper/sdk";

const bulk = await bulkRerunManualScraper({
  scraperId: "your-manual-scraper-id",
  urls: ["https://example.com/a", "https://example.com/b"],
  // token: "optional_override_token"
});
```

### Get all results in range
List results with sorting, pagination, search, and date filters.
```js
import { getAllResults } from "@mrscraper/sdk";

const results = await getAllResults({
  sortField: "updatedAt",
  sortOrder: "DESC",
  pageSize: 10,
  page: 1,
  search: "example.com",
  dateRangeColumn: "createdAt",
  startAt: "2024-01-01T00:00:00Z",
  endAt: "2024-12-31T23:59:59Z",
  // token: "optional_override_token"
});
```

Supported `sortField` values: "createdAt", "updatedAt", "id", "type", "url", "status", "error", "tokenUsage", "runtime".
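To walk every page of results, a loop can keep requesting an increasing page number until a short page comes back. The helper below takes the page-fetching function as an argument, e.g. `(page) => getAllResults({ page, pageSize: 50 })`, so the assumption that each page resolves to an array of items is explicit (the exact response shape is not documented above):

```js
// Collect all items from a paginated source. `fetchPage` is any async
// function of a 1-based page number that resolves to an array.
// Stops when a page comes back shorter than pageSize (an empty final
// page also stops the loop).
async function collectAllPages(fetchPage, pageSize) {
  const items = [];
  for (let page = 1; ; page++) {
    const batch = await fetchPage(page);
    items.push(...batch);
    if (batch.length < pageSize) return items;
  }
}
```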
### Get result by ID
Fetch a single result by ID.
```js
import { getResultById } from "@mrscraper/sdk";

const result = await getResultById({
  resultId: "your-result-id",
  // token: "optional_override_token"
});
```

## Error Handling
All SDK methods throw `MrScraperError` for API, network, or timeout failures.
```js
import { fetchHtml, MrScraperError } from "@mrscraper/sdk";

try {
  await fetchHtml({ url: "https://example.com" });
} catch (err) {
  if (err instanceof MrScraperError) {
    console.error(err.message);
    console.error(err.status);
  }
}
```

`MrScraperError` properties:
| Property | Type | Description |
|---|---|---|
| message | string | Human-readable error message |
| status | number \| undefined | HTTP status (401, 429, 500, etc.) or undefined for network errors |
| name | string | Always "MrScraperError" |
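Because status is undefined for network errors and carries the HTTP code otherwise, a retry wrapper can key off it. The sketch below retries 429, 5xx, and network failures with exponential backoff; it checks err.name rather than instanceof so it also runs where the SDK class is not imported, and the attempt count and delays are arbitrary choices, not SDK defaults:

```js
// Retry a call on transient MrScraperError failures: HTTP 429,
// any 5xx, or a network error (status === undefined).
// Backs off exponentially: baseDelayMs, then 2x, 4x, ...
async function withRetries(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const transient =
        err?.name === "MrScraperError" &&
        (err.status === undefined || err.status === 429 || err.status >= 500);
      if (!transient || attempt >= attempts) throw err;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

Usage: `const html = await withRetries(() => fetchHtml({ url: "https://example.com" }));`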
## Mapping from Firecrawl
If you are coming from the Firecrawl Node SDK, the following table maps common Firecrawl flows to MrScraper.
| Firecrawl (Node) | MrScraper |
|---|---|
| Scrape a URL with HTML output (formats including html) | fetchHtml |
| Scrape a URL with structured JSON (e.g. formats with JSON / extraction) | createAiScraper with agent: "general", or rerunAiScraper on an existing general scraper; see General Agent |
| Crawl a website, sitemap-only crawl, or map URLs | createAiScraper with agent: "map", or rerunAiScraper with map-oriented options; see Map Agent |
| Batch scrape | bulkRerunAiScraper or bulkRerunManualScraper |
| Search or category listing pages where you need as many items as possible across pagination | createAiScraper with agent: "listing"; when rerunning, raise maxPages so the run follows more listing pages; see Listing Agent and Activating API (maxPages controls listing depth) |
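As a starting point for a migration shim, the table can be encoded as a small dispatcher that reports which MrScraper call to make. The Firecrawl-style input shape below ({ url, formats, crawl }) is a simplification for illustration, not Firecrawl's actual API:

```js
// Map a simplified Firecrawl-style request to the MrScraper call
// named in the table above. Returns a { call, params } description
// instead of executing, so no SDK import is needed here.
function mapFirecrawlRequest({ url, formats = [], crawl = false }) {
  if (crawl) {
    return { call: "createAiScraper", params: { url, agent: "map" } };
  }
  if (formats.includes("json")) {
    return { call: "createAiScraper", params: { url, agent: "general" } };
  }
  return { call: "fetchHtml", params: { url } };
}
```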