Introducing Crawlera, a Smart Page Downloader

We are proud to introduce Crawlera, a smart web downloader designed specifically for web crawling.

Crawlera routes requests through a large, distributed pool of IPs, throttling access by introducing carefully crafted delays and discarding IPs from the pool when they get banned from certain domains or run into other problems. As a scraping user, you no longer have to worry about tinkering with download delays, concurrent requests, user agents, cookies, or referrers to avoid getting banned or throttled: just configure your crawler to download pages through Crawlera and relax.

Crawlera supports HTTPS and POST requests, and an HTTP Proxy API for seamless configuration. It also supports custom region routing (only the US is supported for now, with more regions to come).
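Because Crawlera exposes a standard HTTP proxy interface, pointing an existing crawler at it is a one-line configuration change. Here is a minimal sketch in Python; the proxy host, port, and API-key-as-username auth scheme are assumptions for illustration, so check the Crawlera documentation for the actual values:

```python
def crawlera_proxies(api_key, host="proxy.example.com", port=8010):
    """Build a proxies mapping (as used by libraries like requests)
    that routes both HTTP and HTTPS traffic through an authenticating
    HTTP proxy, passing the API key as the proxy username."""
    proxy_url = "http://{key}:@{host}:{port}".format(
        key=api_key, host=host, port=port
    )
    return {"http": proxy_url, "https": proxy_url}

# Usage with the requests library (performs a real request, so it is
# commented out here):
# import requests
# resp = requests.get("http://example.com",
#                     proxies=crawlera_proxies("MY_API_KEY"))
```

The same idea applies to Scrapy: setting the request's proxy (for example via `HttpProxyMiddleware`) to the Crawlera endpoint sends every download through the pool without touching your spider logic.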

To learn more or sign up (the public beta is open!), please visit the Crawlera home page.
