Category: crawlera

How to Get High Success Rates With Proxies: 3 Steps to Scale Up

In this article we give you some insight into how to scale up your web data extraction project. You will learn the basic elements of scaling up and the steps you should take when looking for the best rotating proxy solution.

How To Scrape The Web Without Getting Blocked

Web scraping is the process of extracting data from the web and putting it into a structured format. Getting structured data from publicly available websites and pages should not be an issue, since everybody with an internet connection can access them, and you should be able to structure the data as well. In reality, though, it's not that easy.
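
As a minimal illustration of that definition (not from the article itself), the sketch below fetches a page and turns one piece of it into a structured record. It assumes the third-party requests package is installed and uses example.com as a stand-in target.

```python
import requests  # pip install requests
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collect the text inside the page's <title> tag."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# example.com is a placeholder target; any publicly accessible page works.
response = requests.get("https://example.com", timeout=30)
parser = TitleParser()
parser.feed(response.text)

# The "structured format" here is simply a dict with named fields.
print({"url": response.url, "title": parser.title.strip()})
```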

Introducing Crawlera free trials & new plans

One of the biggest pain points we heard from our Crawlera customers last year was the inconvenience of having to jump from one Crawlera plan to another whenever more requests were needed in a month. For this reason, we have been rethinking our Crawlera plans to better accommodate these cases and be more flexible with customers that have variable crawling requirements from month to...

How to use a proxy in Puppeteer

Puppeteer is a high-level API for headless Chrome. It's one of the most popular tools for web automation and web scraping in Node.js, where many developers use it to handle JavaScript rendering and web data extraction. In this article, we are going to cover how to set up a proxy in Puppeteer and what your options are if you want to rotate proxies.
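
Puppeteer itself is a Node.js library, but to keep every example on this page in one language, here is the same flow sketched in Python with pyppeteer, an unofficial port of Puppeteer's API; the underlying --proxy-server Chromium flag works the same way in both. The proxy endpoint and credentials below are hypothetical placeholders.

```python
import asyncio
from pyppeteer import launch  # pip install pyppeteer

async def main():
    # Point Chromium at a (hypothetical) proxy endpoint via --proxy-server.
    browser = await launch(args=["--proxy-server=http://proxy.example.com:8000"])
    page = await browser.newPage()
    # If the proxy requires authentication, supply credentials per page.
    await page.authenticate({"username": "user", "password": "pass"})
    await page.goto("https://httpbin.org/ip")
    print(await page.content())  # the body echoes the proxy's exit IP
    await browser.close()

asyncio.run(main())
```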

...

Building Blocks of an Unstoppable Web Scraping Infrastructure

More and more businesses leverage the power of web scraping, and extracting data from the web is becoming increasingly popular. But that doesn't mean the technical challenges are gone. Building a sustainable web scraping infrastructure takes expertise and experience. Here at Scrapinghub, we scrape 9 billion pages per month. In this article, we are going to summarize what the essential elements of web...

Backconnect Proxy: Explanation & Comparison To Other Proxies

Scaling up your web scraping project is not an easy task. Adding proxies is one of the first actions you will need to take, and you will need to manage a healthy proxy pool to avoid bans. There are a lot of proxy services and providers, each offering a whole host of different proxy types. In this blog post, you are going to learn how backconnect proxies work and when you should use them.
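
The key property is that a backconnect proxy exposes one stable endpoint and rotates the exit IP behind it, so the pool management happens on the provider's side. A minimal sketch, assuming the requests package and a hypothetical endpoint at proxy.example.com:8000:

```python
import requests  # pip install requests

# One stable endpoint; the provider rotates the exit IP behind it.
PROXY = "http://user:pass@proxy.example.com:8000"  # hypothetical host/credentials
proxies = {"http": PROXY, "https": PROXY}

for _ in range(3):
    r = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=30)
    # With a rotating backconnect proxy you would typically see a different
    # exit IP on each request, with no pool management in your own code.
    print(r.json()["origin"])
```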

...

How to use Crawlera with Scrapy

Crawlera is a proxy service designed specifically for web scraping. In this article, you are going to learn how to use Crawlera inside your Scrapy spider.
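
As a quick preview of the setup the article walks through, the usual route is the scrapy-crawlera downloader middleware, enabled from your project settings. A minimal sketch of that configuration (the API key is a placeholder, and priority 610 follows the package's documented default):

```python
# settings.py -- after `pip install scrapy-crawlera`
DOWNLOADER_MIDDLEWARES = {
    "scrapy_crawlera.CrawleraMiddleware": 610,
}

CRAWLERA_ENABLED = True
CRAWLERA_APIKEY = "<your Crawlera API key>"  # placeholder
```

With this in place, requests made by the spider are routed through Crawlera without any per-request changes.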

Web Scraping Questions & Answers Part I

As you know, we held the first-ever Web Data Extraction Summit last month. During the talks, we received a lot of questions from the audience. We have divided the questions into two parts: in this first part, we will cover questions on Web Scraping at Scale - Proxy and Anti-Ban Best Practice, and on Legal Compliance, GDPR in the World of Web Scraping. Enjoy! You can also check out the full talks on...

Learn how to configure and utilize proxies with the Python Requests module

Sending HTTP requests in Python is not necessarily easy. The standard library ships modules like urllib and urllib2 for dealing with HTTP requests, and there are also third-party tools like Requests. Many developers use Requests because it is high-level and designed to make sending HTTP requests extremely easy.
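
As a taste of what the article covers: Requests accepts proxies through a plain dict mapping URL schemes to proxy URLs. A minimal sketch, with a hypothetical proxy endpoint and credentials:

```python
import requests  # pip install requests

# Hypothetical endpoint and credentials; substitute your provider's values.
proxies = {
    "http": "http://user:pass@proxy.example.com:8000",
    "https": "http://user:pass@proxy.example.com:8000",
}

# httpbin.org/ip echoes the IP the target sees, i.e. the proxy's exit IP.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=30)
print(response.json())
```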

How to set up a custom proxy in Scrapy?

When scraping the web at a reasonable scale, you will come across a series of problems and challenges. You may want to access a website from a specific country or region, or you may want to work around anti-bot solutions. Whatever the case, to overcome these obstacles you need to use and manage proxies. In this article, I'm going to cover how to set up a custom proxy inside your Scrapy spider in...
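
One common per-request approach is setting request.meta['proxy'], which Scrapy's built-in HttpProxyMiddleware picks up. A minimal sketch, with a hypothetical proxy endpoint:

```python
import scrapy

class ProxySpider(scrapy.Spider):
    """Routes a request through a custom proxy via request.meta."""
    name = "proxy_example"

    def start_requests(self):
        yield scrapy.Request(
            "https://httpbin.org/ip",
            # Hypothetical endpoint; credentials embedded in the proxy URL
            # are handled by Scrapy's HttpProxyMiddleware.
            meta={"proxy": "http://user:pass@proxy.example.com:8000"},
        )

    def parse(self, response):
        # The response body echoes the exit IP the target site saw.
        yield {"exit_ip": response.text}
```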