Attila Toth
3 Mins
January 28, 2021

Large-scale web scraping

From inconsistent website layouts that break extraction logic to badly written HTML, scaling web scraping comes with its share of difficulties.

Over the last few years, the single most important challenge in web scraping has been actually getting to the data without being blocked. This is due to the anti-bot systems and underlying technologies that websites use to protect their data.

Proxies are a major component of any scalable web scraping infrastructure. However, not many people understand the technicalities of the different types of proxies and how to make the best use of them to get the data they want with the fewest possible blocks.

Is it all about proxies?

When trying to scale web scraping, the emphasis is often on proxies to get around anti-bot systems, but the scraper's logic matters just as much; the two are closely intertwined. Using good-quality proxies is certainly important: with blacklisted proxies, even the best scraper logic will not yield good results.

However, circumvention logic that is in tune with how the target website behaves is equally important. Over the years, anti-bot systems have shifted from server-side validation to client-side validation, where they inspect JavaScript execution, browser fingerprints, and more.

So really, it depends a lot on the target website. Most of the time, decent proxies combined with good crawling knowledge and a sound crawl strategy should do the trick and deliver acceptable results.
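
As a rough illustration of what "decent proxies" plus a basic crawl strategy can look like, here is a minimal sketch in Python, assuming the requests library and a hypothetical pool of proxy URLs; a real setup would add retries, ban detection, and proxy health checks.

```python
import random
import requests

# Hypothetical proxy pool -- replace with endpoints from your own provider.
PROXY_POOL = [
    "http://user:pass@proxy1.example.com:8000",
    "http://user:pass@proxy2.example.com:8000",
    "http://user:pass@proxy3.example.com:8000",
]

def fetch(url: str) -> str:
    """Fetch a URL through a randomly chosen proxy from the pool."""
    proxy = random.choice(PROXY_POOL)
    response = requests.get(
        url,
        proxies={"http": proxy, "https": proxy},
        timeout=30,
    )
    response.raise_for_status()
    return response.text

html = fetch("https://example.com/products")  # Placeholder target URL
```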

When you start getting blocks...

Bans and anti-bot systems are primarily designed to prevent abuse of a website, so it is very important to remain polite while you scrape.

Thus, the first step, before even starting a web scraping project, is to understand the website you are trying to scrape.


Your crawl rate should stay well below the number of users the website's infrastructure can comfortably serve, and it should never strain the site's resources.

Staying respectful of the website will take you a long way when scaling web scraping projects.
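
To make "polite" a bit more concrete, here is a minimal sketch, assuming Python and the requests library, that throttles requests with a fixed delay; the delay value is an illustrative placeholder you would tune to the target site's capacity.

```python
import time
import requests

REQUEST_DELAY_SECONDS = 2.0  # Illustrative pause; tune to the target site's capacity.

def polite_crawl(urls):
    """Fetch a list of URLs sequentially, pausing between requests."""
    session = requests.Session()
    pages = {}
    for url in urls:
        response = session.get(url, timeout=30)
        pages[url] = response.text
        time.sleep(REQUEST_DELAY_SECONDS)  # Stay well below the site's normal load.
    return pages
```

A production crawler would also respect robots.txt and back off when it starts seeing error responses.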

If you are still getting banned, we have a few pointers that will help you succeed when looking to scale web scraping projects.

Here are a few basic checkpoints:

  • Check whether your headers mimic those of a real-world browser (see the sketch after this list).
  • The next step would be to check if the website has enabled geo-blocking. Using region-specific proxies may help here.
  • Residential proxies may be useful in case the website is blocking data center proxies.
  • Then it comes down to your crawl strategy. Be careful before hitting predicted AJAX or mobile endpoints; try to stay organic and follow the sitemap.
  • If you start getting whitelisted sessions, leverage them with a good cookie handling and session management strategy (also shown in the sketch after this list).
  • Most websites vigorously check browser fingerprints and employ JavaScript in a big way, so your infrastructure should be designed to handle those challenges.
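
To illustrate the first and fifth checkpoints, here is a minimal sketch, assuming the Python requests library: a persistent Session sends browser-like headers and keeps whatever cookies the site sets, so a whitelisted session is reused rather than discarded. The header values and URLs are illustrative placeholders, not a guaranteed browser fingerprint.

```python
import requests

# Illustrative browser-like headers; real browsers send more, and the values
# change between versions, so copy them from a browser you have inspected.
BROWSER_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

# A persistent session stores cookies between requests, so a session the site
# has already whitelisted keeps being reused instead of starting from scratch.
session = requests.Session()
session.headers.update(BROWSER_HEADERS)

listing = session.get("https://example.com/category", timeout=30)  # Placeholder URLs
detail = session.get("https://example.com/category/item-1", timeout=30)
```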

Dealing with captchas

The best thing to do against captchas is to make sure you never get served one in the first place. Scraping politely might be enough in your case. If not, using different types of proxies, regional proxies, and efficient handling of JavaScript challenges can reduce the chances of getting a captcha.
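
If the target does serve JavaScript challenges, one common approach (a sketch, not necessarily what your stack should use) is to render pages in a headless browser such as Playwright so that client-side scripts actually run. The URL below is a placeholder, and this assumes Playwright and its Chromium build are installed.

```python
from playwright.sync_api import sync_playwright

def render_page(url: str) -> str:
    """Load a page in headless Chromium so client-side JavaScript executes."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        html = page.content()
        browser.close()
    return html

html = render_page("https://example.com/js-protected-page")  # Placeholder URL
```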

Despite all these efforts, if you still get a captcha, you could try third-party solving services or design a simple solution yourself to handle easy captchas.

Factors to look at if you decide to outsource proxy management

Managing proxies for web scraping is very complex and challenging, which is why many people prefer to outsource their proxy management. When choosing a proxy solution, what factors should you look at?

It is very important to use a proxy solution that provides both good quality and a good quantity of proxies, spread across different regions. A good proxy solution should also provide added features like TLS fingerprinting, TCP/IP fingerprinting, header profiles, and browser profiles, so that fewer requests fail.

If a provider offers a trial of their solution, it would be useful to test the success ratio against the target website. A provider that handles captchas seamlessly is a great bonus. The best situation would be if your proxy provider is GDPR compliant and provides responsibly sourced IPs.

We know it would be so much easier to just send a request and not worry about the proxies, which is why we are constantly working on improving our technology to ensure that our partners enjoy successful requests without dealing with the hassles of proxy management.

We hope this short article helped answer your questions about good proxy management and how to scale web scraping effectively.

If you have more questions, just leave them in the comments below and we will get back to you as soon as possible.