Node.js

Scrape, crawl, and extract structured data from websites using the MrScraper Node SDK.

Use the official @mrscraper/sdk package to fetch HTML, run AI scrapers, rerun existing scrapers, and retrieve results from your Node.js apps.

Installation

npm install @mrscraper/sdk

Requirements

  • Node.js >= 18
  • ES Modules enabled ("type": "module" in your package.json)
package.json
{
	"type": "module"
}

Authentication

Set your API token as an environment variable. Get your API key from the MrScraper app; the Generate Token guide explains how to generate one.

export MRSCRAPER_API_TOKEN=your_token_here

Tip

Every SDK method also accepts an optional token parameter if you want to override MRSCRAPER_API_TOKEN per request.
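If you prefer to fail fast when the token is missing, a small startup guard like the one below can help. This helper is not part of the SDK; it simply reads the same MRSCRAPER_API_TOKEN variable the SDK uses and throws before any API call is attempted.

```javascript
// Hypothetical startup guard (not part of the SDK): throws immediately
// if MRSCRAPER_API_TOKEN is unset, instead of failing on the first API call.
function requireToken() {
	const token = process.env.MRSCRAPER_API_TOKEN;
	if (!token) {
		throw new Error(
			"MRSCRAPER_API_TOKEN is not set. See the Generate Token guide."
		);
	}
	return token;
}
```

The returned value can also be passed as the per-request token override described above.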

Quick Start

import {
	fetchHtml,
	createAiScraper,
	MrScraperError,
} from "@mrscraper/sdk";

try {
	// 1. Fetch raw HTML (or JSON string) from a URL
	const html = await fetchHtml({
		url: "https://example.com",
	});

	console.log(html);

	// 2. Create an AI scraper (listing agent)
	const scraper = await createAiScraper({
		url: "https://example.com/products",
		message: "Extract all product names and prices",
		agent: "listing",
	});

	console.log(scraper);
} catch (err) {
	if (err instanceof MrScraperError) {
		console.error(`[${err.status ?? "network"}] ${err.message}`);
	} else {
		throw err;
	}
}

Usage

Fetch raw HTML (stealth browser)

Fetch raw HTML (or JSON text) from a target URL using a stealth browser.

import { fetchHtml } from "@mrscraper/sdk";

const html = await fetchHtml({
	url: "https://example.com",
	timeout: 120,
	geoCode: "US",
	blockResources: false,
	// token: "optional_override_token"
});

Options:

| Option | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | - | URL to fetch |
| timeout | number | No | 120 | Timeout in seconds (1-600) |
| geoCode | string | No | "US" | Two-letter ISO country code |
| blockResources | boolean | No | false | Block images/fonts/CSS for faster fetches |
| token | string | No | - | Per-request token override |

Create AI scraper

Create a new AI scraper for a target URL; the agent type determines how it extracts data.

Supported agents:

  • general: Extract structured data from natural-language instructions.
  • listing: Optimized for list pages (products, jobs, articles, etc.).
  • map: Crawl a site and collect URLs.

General/listing example:

import { createAiScraper } from "@mrscraper/sdk";

const scraper = await createAiScraper({
	url: "https://example.com/products",
	message: "Extract all product names and prices",
	agent: "listing",
	proxyCountry: "US",
	// token: "optional_override_token"
});

Map example:

import { createAiScraper } from "@mrscraper/sdk";

const mapResult = await createAiScraper({
	url: "https://example.com",
	agent: "map",
	maxDepth: 2,
	maxPages: 50,
	limit: 1000,
	includePatterns: "/blog",
	excludePatterns: "/admin",
	// token: "optional_override_token"
});

Key options:

| Option | Type | Required | Default |
|---|---|---|---|
| url | string | Yes | - |
| message | string | No | "" |
| agent | "general" \| "listing" \| "map" | No | "general" |
| proxyCountry | string \| null | No | - |
| maxDepth | number | No | 2 |
| maxPages | number | No | 50 |
| limit | number | No | 1000 |
| includePatterns | string | No | "" |
| excludePatterns | string | No | "" |
| token | string | No | - |

Rerun AI Scraper

Rerun an existing AI scraper on a new URL.

import { rerunAiScraper } from "@mrscraper/sdk";

const result = await rerunAiScraper({
	scraperId: "your-scraper-id",
	url: "https://example.com/new-page",
	maxDepth: 2,
	maxPages: 50,
	limit: 1000,
	includePatterns: "",
	excludePatterns: "",
	// token: "optional_override_token"
});

Bulk rerun (AI scraper)

Rerun an existing AI scraper against multiple URLs in a single call.

import { bulkRerunAiScraper } from "@mrscraper/sdk";

const bulkResult = await bulkRerunAiScraper({
	scraperId: "your-scraper-id",
	urls: ["https://example.com/page1", "https://example.com/page2"],
	// token: "optional_override_token"
});

Rerun manual scraper

Rerun a dashboard-configured manual scraper on a URL.

import { rerunManualScraper } from "@mrscraper/sdk";

const single = await rerunManualScraper({
	scraperId: "your-manual-scraper-id",
	url: "https://example.com/target",
	// token: "optional_override_token"
});

Bulk rerun (manual scraper)

Rerun a manual scraper across multiple URLs.

import { bulkRerunManualScraper } from "@mrscraper/sdk";

const bulk = await bulkRerunManualScraper({
	scraperId: "your-manual-scraper-id",
	urls: ["https://example.com/a", "https://example.com/b"],
	// token: "optional_override_token"
});
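Bulk endpoints may have practical limits on how many URLs a single call should carry; the exact limit is not documented here, so treat any batch size as an assumption. A generic chunking helper lets you split a large URL list into several smaller bulk calls:

```javascript
// Split an array into consecutive batches of at most `size` items.
function chunk(items, size) {
	const batches = [];
	for (let i = 0; i < items.length; i += size) {
		batches.push(items.slice(i, i + size));
	}
	return batches;
}
```

For example, `for (const urls of chunk(allUrls, 50)) { await bulkRerunManualScraper({ scraperId, urls }); }` reruns the scraper 50 URLs at a time.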

Get All Results in Range

List results with sorting, pagination, search, and date filters.

import { getAllResults } from "@mrscraper/sdk";

const results = await getAllResults({
	sortField: "updatedAt",
	sortOrder: "DESC",
	pageSize: 10,
	page: 1,
	search: "example.com",
	dateRangeColumn: "createdAt",
	startAt: "2024-01-01T00:00:00Z",
	endAt: "2024-12-31T23:59:59Z",
	// token: "optional_override_token"
});

Supported sortField values: "createdAt", "updatedAt", "id", "type", "url", "status", "error", "tokenUsage", "runtime".
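To walk every page of results, you can loop until a page comes back with fewer items than pageSize. The helper below is a generic sketch, not part of the SDK: it takes a fetchPage function and assumes each call resolves to an array of items for that page, so adapt the accessor to the actual shape of the getAllResults response.

```javascript
// Drain all pages by calling `fetchPage(page)` until a short page arrives.
// Assumes each call resolves to an array of items (an assumption; adjust
// to the real response shape returned by getAllResults).
async function collectAllPages(fetchPage, pageSize = 10) {
	const all = [];
	for (let page = 1; ; page++) {
		const batch = await fetchPage(page);
		all.push(...batch);
		if (batch.length < pageSize) break;
	}
	return all;
}
```

You would wire it up with something like `collectAllPages((page) => getAllResults({ page, pageSize: 10 }).then(/* extract the item array */), 10)`.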

Get Result by ID

Fetch a single result by ID.

import { getResultById } from "@mrscraper/sdk";

const result = await getResultById({
	resultId: "your-result-id",
	// token: "optional_override_token"
});

Error Handling

All SDK methods throw MrScraperError for API, network, or timeout failures.

import { fetchHtml, MrScraperError } from "@mrscraper/sdk";

try {
	await fetchHtml({ url: "https://example.com" });
} catch (err) {
	if (err instanceof MrScraperError) {
		console.error(err.message);
		console.error(err.status);
	}
}

MrScraperError properties:

| Property | Type | Description |
|---|---|---|
| message | string | Human-readable error message |
| status | number \| undefined | HTTP status (401, 429, 500, etc.) or undefined for network errors |
| name | string | Always "MrScraperError" |
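Because err.status distinguishes retryable failures (429 rate limits, transient 5xx responses) from others, a small backoff wrapper is easy to build. The helper below is a sketch, not part of the SDK; it assumes the thrown error exposes a numeric status, as MrScraperError does.

```javascript
// Retry a task on 429 or 5xx responses with exponential backoff.
// Assumes thrown errors carry a numeric `status` property; network
// errors (status === undefined) are not retried here.
async function withRetry(task, { retries = 3, baseDelayMs = 500 } = {}) {
	for (let attempt = 0; ; attempt++) {
		try {
			return await task();
		} catch (err) {
			const retryable =
				err.status === 429 || (err.status >= 500 && err.status < 600);
			if (!retryable || attempt >= retries) throw err;
			await new Promise((resolve) =>
				setTimeout(resolve, baseDelayMs * 2 ** attempt)
			);
		}
	}
}
```

Usage would look like `await withRetry(() => fetchHtml({ url: "https://example.com" }))`.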

Mapping from Firecrawl

If you are coming from the Firecrawl Node SDK, the following table maps common Firecrawl flows to MrScraper.

| Firecrawl (Node) | MrScraper |
|---|---|
| Scrape a URL with HTML output (formats including html) | fetchHtml |
| Scrape a URL with structured JSON (e.g. formats with JSON / extraction) | createAiScraper with agent: "general", or rerunAiScraper on an existing general scraper (see General Agent) |
| Crawl a website, sitemap-only crawl, or map URLs | createAiScraper with agent: "map", or rerunAiScraper with map-oriented options (see Map Agent) |
| Batch scrape | bulkRerunAiScraper or bulkRerunManualScraper |
| Search or category listing pages where you need as many items as possible across pagination | createAiScraper with agent: "listing"; when rerunning, raise maxPages so the run follows more listing pages (see Listing Agent and Activating API; maxPages controls listing depth) |
