PHP

PHP code examples for using Residential Proxy with cURL and Guzzle.

Practical PHP examples for integrating Residential Proxy into your web scraping projects.

1. Basic cURL Request

Demonstrates the fundamental cURL setup for using Residential Proxy with a static session. This example shows essential cURL options for proxy configuration and basic error handling.

<?php

// Static proxy configuration
$proxy = 'network.mrproxy.com:10000';
$proxyAuth = 'user-country-us-sessid-php1:pass123';

$ch = curl_init('https://api.ipify.org');
curl_setopt($ch, CURLOPT_PROXY, $proxy);
curl_setopt($ch, CURLOPT_PROXYUSERPWD, $proxyAuth);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 30);

$response = curl_exec($ch);

if (curl_errno($ch)) {
    echo 'Error: ' . curl_error($ch);
} else {
    echo 'Your IP: ' . $response;
}

curl_close($ch);
?>

Use Case: Simple proxy testing and basic scraping tasks where you need a consistent IP address. Perfect for beginners learning PHP proxy integration.
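
For a quick sanity check, you can fetch the same endpoint once directly and once through the proxy and compare the two IPs. A minimal sketch reusing the credentials above; the fetchIp() helper is illustrative, not part of any library:

<?php

// Fetch https://api.ipify.org with or without the proxy and return the reported IP
function fetchIp($useProxy = false) {
    $ch = curl_init('https://api.ipify.org');
    if ($useProxy) {
        curl_setopt($ch, CURLOPT_PROXY, 'network.mrproxy.com:10000');
        curl_setopt($ch, CURLOPT_PROXYUSERPWD, 'user-country-us-sessid-php1:pass123');
    }
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    $ip = curl_exec($ch);
    curl_close($ch);
    return $ip;
}

// The two values should differ when the proxy is working
echo 'Direct IP:  ' . fetchIp(false) . "\n";
echo 'Proxied IP: ' . fetchIp(true) . "\n";
?>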

2. Rotating Proxy for Scraping

Shows how to create a reusable scraping function with rotating proxies. Each request gets a fresh IP address, and the function returns structured data for easy processing.

<?php

function scrapeWithProxy($url) {
    // Rotating proxy: no session ID in the username, so each request exits from a new IP
    $proxy = 'network.mrproxy.com:10000';
    $proxyAuth = 'user-country-de:pass123';
    
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_PROXY, $proxy);
    curl_setopt($ch, CURLOPT_PROXYUSERPWD, $proxyAuth);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    
    // Set user agent
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)');
    
    $html = curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    
    curl_close($ch);
    
    return [
        'status' => $httpCode,
        'html' => $html
    ];
}

// Scrape multiple pages
$urls = [
    'https://example.com/page1',
    'https://example.com/page2',
    'https://example.com/page3'
];

foreach ($urls as $url) {
    $result = scrapeWithProxy($url);
    echo "Scraped {$url}: Status {$result['status']}\n";
}
?>

Use Case: High-volume scraping where IP diversity helps avoid detection and rate limiting. Ideal for scraping product catalogs, news sites, or job boards.
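
The 'html' field returned by scrapeWithProxy() can be fed straight into a parser. Below is a minimal sketch using PHP's built-in DOMDocument and DOMXPath, assuming scrapeWithProxy() from the example above is in scope; the product-title XPath is a placeholder for whatever markup the target page actually uses:

<?php

// Parse the HTML returned by scrapeWithProxy() from the previous example
$result = scrapeWithProxy('https://example.com/page1');

if ($result['status'] === 200 && $result['html'] !== false) {
    $dom = new DOMDocument();
    // Suppress warnings caused by imperfect real-world markup
    @$dom->loadHTML($result['html']);

    $xpath = new DOMXPath($dom);
    // Placeholder query: adjust to the structure of the site you are scraping
    foreach ($xpath->query('//h2[@class="product-title"]') as $node) {
        echo trim($node->textContent) . "\n";
    }
}
?>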

3. Multiple Sessions

Implements an object-oriented approach with multiple static proxy sessions. The ProxyScraper class encapsulates proxy configuration and provides clean session management.

<?php

class ProxyScraper {
    private $proxy = 'network.mrproxy.com:10000';
    private $username;
    private $password;
    
    public function __construct($username, $password) {
        $this->username = $username;
        $this->password = $password;
    }
    
    public function scrape($url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_PROXY, $this->proxy);
        curl_setopt($ch, CURLOPT_PROXYUSERPWD, "{$this->username}:{$this->password}");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 30);
        
        $response = curl_exec($ch);
        $info = curl_getinfo($ch);
        curl_close($ch);
        
        return [
            'status' => $info['http_code'],
            'content' => $response
        ];
    }
}

// Create multiple scrapers with different sessions
$scrapers = [];
for ($i = 1; $i <= 3; $i++) {
    $scrapers[] = new ProxyScraper("user-country-uk-sessid-worker{$i}", 'pass123');
}

// Use different sessions
$urls = ['https://example.com/page1', 'https://example.com/page2', 'https://example.com/page3'];
foreach ($urls as $index => $url) {
    $scraper = $scrapers[$index % count($scrapers)];
    $result = $scraper->scrape($url);
    echo "Scraped {$url}: {$result['status']}\n";
}
?>

Use Case: Organized scraping operations that need multiple consistent identities. Great for maintaining separate sessions for different data sources or user profiles.
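
Sticky sessions are most useful when the target also tracks state in cookies. The sketch below combines a sessid-pinned proxy with cURL's cookie jar so each worker looks like one consistent client; scrapeWithSession() and the temp-file naming are illustrative assumptions, not part of the Residential Proxy API:

<?php

// Sticky session plus a cookie jar: the exit IP stays pinned via the sessid,
// and cookies persist across calls so the target sees one consistent client
function scrapeWithSession($url, $sessionId) {
    $cookieFile = sys_get_temp_dir() . "/cookies-{$sessionId}.txt";

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_PROXY, 'network.mrproxy.com:10000');
    curl_setopt($ch, CURLOPT_PROXYUSERPWD, "user-country-uk-sessid-{$sessionId}:pass123");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieFile);   // save cookies after the request
    curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieFile);  // send saved cookies with the request

    $response = curl_exec($ch);
    curl_close($ch);
    return $response;
}

// Both requests reuse the same IP and the same cookies
scrapeWithSession('https://example.com/login', 'worker1');
scrapeWithSession('https://example.com/account', 'worker1');
?>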

4. Error Handling & Retries

Implements robust retry logic with exponential backoff to handle network failures gracefully. This pattern is essential for production scraping applications.

<?php

function scrapeWithRetry($url, $maxRetries = 3) {
    $proxy = 'network.mrproxy.com:10000';
    $proxyAuth = 'user-country-fr:pass123';
    
    for ($attempt = 1; $attempt <= $maxRetries; $attempt++) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_PROXY, $proxy);
        curl_setopt($ch, CURLOPT_PROXYUSERPWD, $proxyAuth);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 30);
        
        $response = curl_exec($ch);
        $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        $error = curl_error($ch);
        curl_close($ch);
        
        if ($error) {
            echo "Attempt {$attempt} failed: {$error}\n";
            sleep(pow(2, $attempt - 1)); // Exponential backoff
            continue;
        }
        
        if ($httpCode >= 200 && $httpCode < 300) {
            return $response;
        }
        
        echo "Attempt {$attempt} returned HTTP {$httpCode}\n";
        sleep(pow(2, $attempt - 1));
    }
    
    return false;
}

$result = scrapeWithRetry('https://example.com');
if ($result) {
    echo "Success!\n";
} else {
    echo "Failed after retries\n";
}
?>

Use Case: Production-ready scraping that must handle temporary network issues, server errors, and proxy failures without losing data or crashing the application.
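
In batch jobs you will usually also want to keep the URLs that still fail after every retry so they can be re-queued later instead of silently dropped. A short sketch building on scrapeWithRetry(), which is assumed to be in scope from the example above:

<?php

$urls = ['https://example.com/a', 'https://example.com/b', 'https://example.com/c'];
$failed = [];

foreach ($urls as $url) {
    $html = scrapeWithRetry($url);
    if ($html === false) {
        // Keep failures for a later pass instead of losing them
        $failed[] = $url;
    }
}

if ($failed) {
    file_put_contents('failed_urls.txt', implode("\n", $failed) . "\n");
    echo count($failed) . " URLs queued for retry\n";
}
?>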

5. Guzzle HTTP Client

Demonstrates using the popular Guzzle library for more elegant HTTP requests. Guzzle provides better error handling and a more intuitive API compared to raw cURL.

<?php

require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Exception\RequestException;

// Create Guzzle client with proxy
$client = new Client([
    'proxy' => 'http://user-country-jp-sessid-guzzle1:pass123@network.mrproxy.com:10000',
    'timeout' => 30,
    'verify' => false // Disables TLS certificate verification; only use this for testing
]);

try {
    $response = $client->get('https://api.ipify.org');
    echo 'Your IP: ' . $response->getBody();
} catch (RequestException $e) {
    echo 'Error: ' . $e->getMessage();
}
?>

Use Case: Modern PHP applications that prefer object-oriented HTTP clients with better exception handling and cleaner syntax than raw cURL.
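
The options above apply to every request made by the client. Guzzle also accepts a proxy option per request, which is convenient when one client should use several sticky sessions; the session names below are hypothetical:

<?php

require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client(['timeout' => 30]);

// Each request can carry its own proxy credentials (and therefore its own session)
$sessions = ['jp1', 'jp2'];

foreach ($sessions as $session) {
    $response = $client->get('https://api.ipify.org', [
        'proxy' => "http://user-country-jp-sessid-{$session}:pass123@network.mrproxy.com:10000",
    ]);
    echo "Session {$session}: " . $response->getBody() . "\n";
}
?>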

6. Async Requests with Guzzle

Shows how to make concurrent HTTP requests using Guzzle's promise-based async functionality. This dramatically improves performance for bulk scraping operations.

<?php

require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise;

$client = new Client([
    'proxy' => 'http://user-country-au:pass123@network.mrproxy.com:10000',
    'timeout' => 30
]);

// Create array of promises
$promises = [
    'page1' => $client->getAsync('https://example.com/page1'),
    'page2' => $client->getAsync('https://example.com/page2'),
    'page3' => $client->getAsync('https://example.com/page3'),
];

// Wait for all requests to complete
$results = Promise\Utils::settle($promises)->wait();

// Process results
foreach ($results as $key => $result) {
    if ($result['state'] === 'fulfilled') {
        echo "{$key}: Success\n";
    } else {
        echo "{$key}: Failed - {$result['reason']}\n";
    }
}
?>

Use Case: High-performance scraping that needs maximum throughput. Perfect for bulk data collection where you need to process many URLs simultaneously.
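
For larger URL lists, firing every request at once can overwhelm the target and your own connection. Guzzle's Pool runs requests in parallel while capping how many are in flight; the sketch below assumes the same proxied client and an arbitrary concurrency limit of 5:

<?php

require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Pool;
use GuzzleHttp\Psr7\Request;

$client = new Client([
    'proxy' => 'http://user-country-au:pass123@network.mrproxy.com:10000',
    'timeout' => 30
]);

// Build the list of URLs to fetch
$urls = [];
for ($i = 1; $i <= 20; $i++) {
    $urls[] = "https://example.com/page{$i}";
}

// Generator that yields one Request object per URL
$requests = function () use ($urls) {
    foreach ($urls as $url) {
        yield new Request('GET', $url);
    }
};

$pool = new Pool($client, $requests(), [
    'concurrency' => 5, // at most 5 requests in flight at any time
    'fulfilled' => function ($response, $index) {
        echo "Request {$index}: HTTP " . $response->getStatusCode() . "\n";
    },
    'rejected' => function ($reason, $index) {
        echo "Request {$index}: failed - " . $reason->getMessage() . "\n";
    },
]);

// Start the transfers and wait for every request to finish
$pool->promise()->wait();
?>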

7. Environment Variables

Demonstrates secure credential management by loading proxy credentials from environment variables instead of hardcoding them in your source code.

<?php

// Load credentials from environment (the hardcoded fallbacks are placeholders for local testing only)
$proxyUser = getenv('MRPROXY_USERNAME') ?: 'user-country-ca-sessid-env1';
$proxyPass = getenv('MRPROXY_PASSWORD') ?: 'pass123';

$proxy = 'network.mrproxy.com:10000';
$proxyAuth = "{$proxyUser}:{$proxyPass}";

$ch = curl_init('https://api.ipify.org');
curl_setopt($ch, CURLOPT_PROXY, $proxy);
curl_setopt($ch, CURLOPT_PROXYUSERPWD, $proxyAuth);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
curl_close($ch);

echo $response;
?>

Use Case: Production deployments where credentials must be kept secure and separate from code. Essential for team development and CI/CD pipelines.
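
If your project keeps credentials in a .env file, a loader such as vlucas/phpdotenv (assumed to be installed via Composer, with a .env file next to the script) can populate the environment before the code above runs:

<?php

require 'vendor/autoload.php';

// Load variables from a .env file in the same directory into $_ENV / $_SERVER
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

// createImmutable() does not call putenv(), so read the values from $_ENV
$proxyUser = $_ENV['MRPROXY_USERNAME'] ?? 'user-country-ca-sessid-env1';
$proxyPass = $_ENV['MRPROXY_PASSWORD'] ?? 'pass123';

echo "Using proxy user: {$proxyUser}\n";
?>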