Ruby
Ruby code examples for using Residential Proxy with Net::HTTP and HTTParty.
Practical Ruby examples for integrating Residential Proxy into your scraping projects.
Prerequisites
gem install httparty
- HTTParty: Popular Ruby gem that provides a simple, elegant API for HTTP requests with built-in proxy configuration.
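If you manage dependencies with Bundler rather than installing the gem globally, a minimal Gemfile works just as well (this is a sketch; pin a version if your project needs one). Net::HTTP needs no installation since it ships with Ruby's standard library.

# Gemfile
source 'https://rubygems.org'
gem 'httparty'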
1. Basic HTTP Request
Demonstrates the fundamental setup using Ruby's built-in Net::HTTP library with a static proxy session. This example shows the essential proxy configuration parameters.
require 'net/http'
require 'uri'

# Static proxy configuration
proxy_uri = URI.parse('http://user-country-us-sessid-ruby1:pass123@network.mrproxy.com:10000')

uri = URI.parse('https://api.ipify.org')
http = Net::HTTP.new(uri.host, uri.port,
                     proxy_uri.host, proxy_uri.port,
                     proxy_uri.user, proxy_uri.password)
http.use_ssl = true

request = Net::HTTP::Get.new(uri.request_uri)
response = http.request(request)

puts "Your IP: #{response.body}"

Use Case: Simple proxy testing and basic scraping tasks where you need a consistent IP address. Perfect for understanding Ruby's proxy configuration fundamentals.
2. Rotating Proxy with HTTParty
Shows how to use the HTTParty gem for cleaner proxy configuration with rotating IPs. The class-based approach provides reusable scraping functionality with automatic IP rotation.
require 'httparty'

class ProxyScraper
  include HTTParty

  # Rotating proxy
  http_proxy 'network.mrproxy.com', 10000, 'user-country-jp', 'pass123'

  def self.scrape(url)
    response = get(url, timeout: 30)
    {
      status: response.code,
      body: response.body
    }
  rescue => e
    { error: e.message }
  end
end

# Scrape multiple pages
urls = [
  'https://example.com/page1',
  'https://example.com/page2',
  'https://example.com/page3'
]

urls.each do |url|
  result = ProxyScraper.scrape(url)
  puts "Scraped #{url}: #{result[:status]}"
end

Use Case: High-volume scraping where IP diversity helps avoid detection and rate limiting. HTTParty's clean syntax makes it ideal for production scraping applications.
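To see the rotation in action, a small sketch can collect the exit IP over several requests. It reuses the ProxyScraper class above and the public api.ipify.org echo service; with a rotating username (no sessid), each request may surface a different IP.

# Count how many distinct exit IPs appear across a handful of requests
ips = 5.times.map do
  ProxyScraper.get('https://api.ipify.org', timeout: 30).body
end
puts "Unique IPs seen: #{ips.uniq.size} of #{ips.size}"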
3. Multiple Sessions
Implements an object-oriented approach with multiple static proxy sessions. Each SessionScraper instance maintains its own consistent IP address throughout its lifetime.
require 'net/http'
require 'uri'

class SessionScraper
  def initialize(session_id, password)
    @username = "user-country-uk-sessid-#{session_id}"
    @password = password
    @proxy_uri = URI.parse("http://#{@username}:#{@password}@network.mrproxy.com:10000")
  end

  def scrape(url)
    uri = URI.parse(url)
    http = Net::HTTP.new(uri.host, uri.port,
                         @proxy_uri.host, @proxy_uri.port,
                         @proxy_uri.user, @proxy_uri.password)
    http.use_ssl = true

    request = Net::HTTP::Get.new(uri.request_uri)
    response = http.request(request)
    {
      status: response.code,
      body_length: response.body.length
    }
  rescue => e
    { error: e.message }
  end
end

# Create multiple scrapers
scrapers = (1..3).map { |i| SessionScraper.new("worker#{i}", 'pass123') }

urls = [
  'https://example.com/page1',
  'https://example.com/page2',
  'https://example.com/page3'
]

urls.each_with_index do |url, index|
  scraper = scrapers[index % scrapers.length]
  result = scraper.scrape(url)
  puts "Scraped #{url}: #{result}"
end

Use Case: Organized scraping operations that need multiple consistent identities. Great for maintaining separate sessions for different data sources or simulating multiple users.
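One way to put the "one identity per data source" idea into practice is to key scrapers by source name, so each source is always fetched through the same session. This sketch reuses the SessionScraper class above; the source names and URLs are illustrative.

# Hypothetical mapping of data sources to URLs
sources = {
  'news'    => 'https://example.com/news',
  'prices'  => 'https://example.com/prices',
  'reviews' => 'https://example.com/reviews'
}

# One SessionScraper per source, so each source always sees the same exit IP
scrapers = sources.keys.to_h { |name| [name, SessionScraper.new(name, 'pass123')] }

sources.each do |name, url|
  puts "#{name}: #{scrapers[name].scrape(url)}"
end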
4. Error Handling & Retries
Implements robust retry logic with exponential backoff using HTTParty's proxy configuration options. Essential for production applications that need to handle failures gracefully.
require 'httparty'

class RobustScraper
  include HTTParty

  def self.scrape_with_retry(url, max_retries = 3)
    proxy_config = {
      http_proxyaddr: 'network.mrproxy.com',
      http_proxyport: 10000,
      http_proxyuser: 'user-country-de',
      http_proxypass: 'pass123'
    }

    max_retries.times do |attempt|
      begin
        response = get(url, proxy_config.merge(timeout: 30))
        return response if response.success?

        puts "Attempt #{attempt + 1} failed with status #{response.code}"
      rescue => e
        puts "Attempt #{attempt + 1} failed: #{e.message}"
      end
      # Exponential backoff between attempts; skipped after the final attempt
      sleep(2 ** attempt) unless attempt == max_retries - 1
    end
    nil
  end
end

result = RobustScraper.scrape_with_retry('https://example.com')
puts result ? "Success" : "Failed after retries"

Use Case: Production-ready scraping that must handle temporary network issues, server errors, and proxy failures without losing data or crashing the application.
5. Concurrent Scraping
Demonstrates parallel scraping using Ruby threads, each with its own static proxy session. Every thread keeps a consistent IP address while requests run in parallel, so throughput improves without mixing identities.
require 'net/http'
require 'uri'
require 'thread'

def scrape_with_proxy(url, session_id)
  proxy_uri = URI.parse("http://user-country-fr-sessid-#{session_id}:pass123@network.mrproxy.com:10000")
  uri = URI.parse(url)

  http = Net::HTTP.new(uri.host, uri.port,
                       proxy_uri.host, proxy_uri.port,
                       proxy_uri.user, proxy_uri.password)
  http.use_ssl = true

  request = Net::HTTP::Get.new(uri.request_uri)
  response = http.request(request)
  puts "Scraped #{url}: #{response.code}"
rescue => e
  puts "Error scraping #{url}: #{e.message}"
end

urls = [
  'https://example.com/page1',
  'https://example.com/page2',
  'https://example.com/page3',
  'https://example.com/page4',
  'https://example.com/page5'
]

threads = []
urls.each_with_index do |url, index|
  threads << Thread.new do
    scrape_with_proxy(url, "worker#{index + 1}")
  end
end

threads.each(&:join)
puts "All scraping completed"

Use Case: High-performance scraping that needs to process multiple URLs quickly while maintaining separate identities. Perfect for bulk data collection with time constraints.
6. Custom Headers
Shows how to add realistic browser headers when using HTTParty with proxies. This helps avoid detection by making requests appear more legitimate and browser-like.
require 'httparty'

class CustomScraper
  include HTTParty

  http_proxy 'network.mrproxy.com', 10000, 'user-country-ca', 'pass123'

  def self.scrape_with_headers(url)
    headers = {
      'User-Agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
      'Accept-Language' => 'en-US,en;q=0.9'
    }

    response = get(url, headers: headers, timeout: 30)
    {
      status: response.code,
      content_type: response.headers['content-type'],
      body_length: response.body.length
    }
  rescue => e
    { error: e.message }
  end
end

result = CustomScraper.scrape_with_headers('https://example.com')
puts result

Use Case: Scraping sites that check for browser-like behavior or when you need to match headers to your proxy's geographic location for consistency and avoiding detection.
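The geographic-consistency idea from the use case can be made explicit by pairing each proxy country code with a matching Accept-Language value. The mapping below is an illustrative sketch, not an exhaustive or authoritative list.

# Illustrative country -> Accept-Language pairs so headers match the proxy exit country
GEO_LANGUAGES = {
  'ca' => 'en-CA,en;q=0.9,fr-CA;q=0.8',
  'de' => 'de-DE,de;q=0.9,en;q=0.7',
  'jp' => 'ja-JP,ja;q=0.9,en;q=0.6'
}.freeze

def geo_headers(country)
  {
    'User-Agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Accept-Language' => GEO_LANGUAGES.fetch(country, 'en-US,en;q=0.9')
  }
end

# Example: headers for a user-country-de proxy
puts geo_headers('de').inspect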