Category: crawlera

How to use a proxy in Puppeteer

Puppeteer is a Node.js library that provides a high-level API for controlling headless Chrome. It's one of the most popular tools for browser automation and web scraping in Node.js; many developers use it to handle JavaScript rendering during web data extraction. In this article, we are going to cover how to set up a proxy in Puppeteer and what your options are if you want to rotate proxies.
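The full post works through Puppeteer's Node API. As a rough sketch of the same idea in Python, the pyppeteer port of Puppeteer accepts the identical --proxy-server Chromium launch flag; the proxy endpoint and credentials below are placeholders, not real values.

```python
import asyncio

from pyppeteer import launch  # pyppeteer: Python port of Puppeteer


async def main():
    # Chromium takes the proxy as a launch flag, so Puppeteer (and its
    # Python port) simply passes it through. The endpoint is a placeholder.
    browser = await launch(args=['--proxy-server=http://proxy.example.com:8080'])
    page = await browser.newPage()
    # If the proxy requires credentials, supply them per page.
    await page.authenticate({'username': 'user', 'password': 'pass'})
    await page.goto('https://httpbin.org/ip')  # echoes the IP the site sees
    print(await page.content())
    await browser.close()


asyncio.get_event_loop().run_until_complete(main())
```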

Building Blocks of an Unstoppable Web Scraping Infrastructure

More and more businesses leverage the power of web scraping, but the growing popularity of web data extraction doesn't mean the technical challenges are gone. Building a sustainable web scraping infrastructure takes expertise and experience. Here at Scrapinghub, we scrape 9 billion pages per month. In this article, we are going to summarize the essential elements of web...

Backconnect Proxy: Explanation & Comparison To Other Proxies

Scaling up your web scraping project is not an easy task. Adding proxies is one of the first actions you will need to take, and you will need to manage a healthy proxy pool to avoid bans. There are a lot of proxy services and providers, each offering a whole host of different proxy types. In this blog post, you are going to learn how backconnect proxies work and when you should use them.
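The key property of a backconnect proxy is that your client talks to one stable gateway while the provider rotates the outgoing IP behind it. A minimal sketch using Python's Requests library, assuming a hypothetical gateway endpoint and credentials:

```python
import requests

# A backconnect provider exposes one stable gateway; IP rotation happens
# behind it. The gateway address and credentials below are placeholders.
GATEWAY = 'http://user:pass@gateway.example-provider.com:8000'
proxies = {'http': GATEWAY, 'https': GATEWAY}

for _ in range(3):
    # httpbin.org/ip echoes the origin IP, so repeated calls should show
    # different addresses if the gateway rotates per request.
    response = requests.get('https://httpbin.org/ip', proxies=proxies, timeout=10)
    print(response.json())
```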

How to use Crawlera with Scrapy

Crawlera is a proxy service designed specifically for web scraping. In this article, you are going to learn how to use Crawlera inside your Scrapy spider.
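As a taste of what the post covers: Crawlera integrates with Scrapy through the scrapy-crawlera downloader middleware, enabled in your project settings. A minimal sketch; the API key is a placeholder:

```python
# settings.py -- minimal Crawlera setup via the scrapy-crawlera package;
# the API key below is a placeholder.
DOWNLOADER_MIDDLEWARES = {
    'scrapy_crawlera.CrawleraMiddleware': 610,
}
CRAWLERA_ENABLED = True
CRAWLERA_APIKEY = '<your-crawlera-api-key>'
```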

Web Scraping Questions & Answers Part I

As you know, we held the first-ever Web Data Extraction Summit last month. During the talks, we received a lot of questions from the audience. We have divided the questions into two parts: in the first part, we will cover questions on Web Scraping at Scale - Proxy and Anti-Ban Best Practice, and Legal Compliance, GDPR in the World of Web Scraping. Enjoy! You can also check out the full talks on...

How to use proxies with Python Requests module

Sending HTTP requests in Python is not necessarily easy. The standard library ships modules like urllib and urllib2 to deal with HTTP requests, and there are third-party tools like Requests. Many developers use Requests because it is high-level and designed to make sending HTTP requests extremely easy.
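The proxy setup the post describes boils down to Requests' proxies argument, which maps URL schemes to proxy endpoints. A minimal sketch with a placeholder proxy address:

```python
import requests

# Requests maps URL schemes to proxy endpoints; these are placeholders.
proxies = {
    'http': 'http://proxy.example.com:8080',
    'https': 'http://proxy.example.com:8080',
}
response = requests.get('https://httpbin.org/ip', proxies=proxies, timeout=10)
print(response.json())  # shows the IP the target server saw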

How to set up a custom proxy in Scrapy?

When scraping the web at a reasonable scale, you can come across a series of problems and challenges. You may want to access a website from a specific country/region. Or maybe you want to work around anti-bot solutions. Whatever the case, to overcome these obstacles you need to use and manage proxies. In this article, I'm going to cover how to set up a custom proxy inside your Scrapy spider in...
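One common pattern the post builds on is Scrapy's built-in HttpProxyMiddleware, which honors a proxy key in request.meta. A minimal sketch with a placeholder proxy endpoint:

```python
import scrapy


class ProxySpider(scrapy.Spider):
    """Route a request through a proxy via request.meta; Scrapy's built-in
    HttpProxyMiddleware picks up the 'proxy' key. Endpoint is a placeholder."""
    name = 'proxy_spider'

    def start_requests(self):
        yield scrapy.Request(
            'https://httpbin.org/ip',
            meta={'proxy': 'http://proxy.example.com:8080'},
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info(response.text)
```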

Proxy Management: Should I Build My Proxy Infrastructure In-House Or Use An Off-The-Shelf Proxy Solution?

Proxy management is the thorn in the side of most web scrapers. Without a robust and fully featured proxy infrastructure, you will often experience constant reliability issues and hours spent putting out proxy fires.

It's a situation no web scraping professional wants to deal with. We web scrapers are interested in extracting and using web data, not managing proxies.

In this article, we’re going to...

A Sneak Peek Inside Crawlera: The World’s Smartest Web Scraping Proxy Network

“How does Scrapinghub Crawlera work?” is the most common question we get from customers who struggled for months (or years) with constant proxy issues, only to have them disappear completely when they switched to Crawlera.

Today we’re going to give you a behind-the-scenes look at Crawlera so you can see for yourself why it is the world’s smartest web scraping proxy network and the...

Why We Created Crawlera: The World’s Smartest Web Scraping Proxy Network

Let’s face it, managing your proxy pool can be an absolute pain and the biggest bottleneck to the reliability of your web scraping! 

Nothing annoys developers more than crawlers failing because their proxies are continuously being banned.