Introducing Crawlera, a Smart Page Downloader

We are proud to introduce Crawlera, a smart web downloader designed specifically for web crawling.

Crawlera routes requests through a large, distributed pool of IPs, throttling access by introducing carefully crafted delays and discarding IPs from the pool when they get banned from certain domains or have other problems. As a scraping user, you no longer have to worry about tinkering with download delays, concurrent requests, user agents, cookies or referrers to avoid getting banned or throttled; you just configure your crawler to download pages through Crawlera and relax.

Crawlera supports HTTPS and POST requests, and an HTTP Proxy API for seamless configuration. It also supports custom region routing (only US supported for now, more regions to come).
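Since Crawlera exposes an HTTP Proxy API, configuring a crawler can be as simple as pointing its proxy settings at the service. The sketch below shows the general shape in Python; the host, port, and credential scheme are illustrative assumptions (an API key passed as the proxy username, a common pattern for authenticated proxies), not documented values — the real endpoint comes from your account settings.

```python
def crawlera_proxies(api_key: str,
                     host: str = "proxy.example.com",  # assumed host, not the real endpoint
                     port: int = 8010) -> dict:
    """Build a proxies mapping for routing HTTP and HTTPS traffic
    through an authenticated proxy, with the API key as the username."""
    proxy_url = f"http://{api_key}:@{host}:{port}"
    return {"http": proxy_url, "https": proxy_url}

# Usage with the requests library (needs a live account and the real endpoint):
# import requests
# resp = requests.get("https://example.com", proxies=crawlera_proxies("MY_API_KEY"))
```

The same mapping plugs into most HTTP clients and crawling frameworks that accept standard proxy settings, so switching an existing crawler over is a one-line configuration change rather than a rewrite.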

To learn more or sign up (public beta is open!), please visit the Crawlera home page.
