Category: crawling

How to Architect a Web Scraping Solution: The Step-by-Step Guide

For many people (especially non-techies), architecting a web scraping solution and estimating the resources required to develop it can be a tricky process.

Oftentimes, this is their first web scraping project, and as a result they have little prior experience to draw upon when investigating the feasibility of a data extraction project.

In this series of articles we’re going...

A Sneak Peek Inside Crawlera: The World’s Smartest Web Scraping Proxy Network

“How does Scrapinghub Crawlera work?” is the most common question we get from customers who, after struggling for months (or years) with constant proxy issues, see them disappear completely when they switch to Crawlera.

Today we’re going to give you a behind-the-scenes look at Crawlera so you can see for yourself why it is the world’s smartest web scraping proxy network and the...
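To give a feel for what using a managed proxy network looks like in practice, here is a minimal sketch that routes a request through Crawlera as an ordinary HTTP proxy with the Python requests library. The endpoint, port, and API-key-as-proxy-username convention are assumptions for illustration, not details taken from this post.

```python
import requests

# Assumption: Crawlera is exposed as a standard HTTP proxy that
# authenticates with your API key as the proxy username.
CRAWLERA_PROXY = "http://<API_KEY>:@proxy.crawlera.com:8010"  # hypothetical endpoint

response = requests.get(
    "https://example.com",
    proxies={"http": CRAWLERA_PROXY, "https": CRAWLERA_PROXY},
    # A proxy that inspects HTTPS traffic typically requires trusting its
    # CA certificate; disabling verification is shown here only for brevity.
    verify=False,
)
print(response.status_code)
```

The point of a setup like this is that ban handling, retries, and IP rotation happen behind the proxy endpoint, so the crawler code itself stays free of proxy-management logic.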

Why We Created Crawlera: The World’s Smartest Web Scraping Proxy Network

Let’s face it: managing your proxy pool can be an absolute pain and the biggest bottleneck in keeping your web scraping reliable!

Nothing annoys developers more than crawlers failing because their proxies are continuously being banned.

Scrapy Tips from the Pros: Part 1

Scrapy is at the heart of Scrapinghub. We use this framework extensively and have accumulated a wide range of shortcuts to get around common problems. We’re launching a series to share these Scrapy tips with you so that you can get the most out of your daily workflow. Each post will feature two to three tips, so stay tuned.
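For readers new to the framework the series is about, here is the shape of a minimal Scrapy spider. The target site and CSS selectors below are placeholders for illustration, not taken from the post.

```python
import scrapy

class QuotesSpider(scrapy.Spider):
    """A minimal spider: crawl pages and yield structured items."""
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com"]  # placeholder demo site

    def parse(self, response):
        # Selectors here are illustrative; adapt them to your target page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow pagination and let Scrapy schedule the next request.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

Saved as quotes_spider.py, this can be run with `scrapy runspider quotes_spider.py -o quotes.json` to write the scraped items to a JSON file.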

Scrapinghub Crawls the Deep Web

"The easiest way to think about Memex is: How can I make the unseen seen?"

Introducing Data Reviews

One of the most time-consuming parts of building a spider is reviewing the scraped data and making sure it conforms to the requirements and expectations of your client or team. Depending on how well the requirements are written, this step can end up taking more time than writing the spider code itself. To make this process more efficient we have...
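To make the review step concrete, here is a small sketch of the kind of automated check a data review can include: validating each scraped item against agreed requirements. The field names and rules are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical requirements: every item needs these fields, and price must be numeric.
REQUIRED_FIELDS = {"name", "price", "url"}

def review_item(item: dict) -> list[str]:
    """Return a list of human-readable problems found in one scraped item."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - item.keys()]
    if "price" in item and not isinstance(item["price"], (int, float)):
        problems.append("price is not numeric")
    return problems

items = [
    {"name": "Widget", "price": 9.99, "url": "http://example.com/w"},
    {"name": "Gadget", "price": "N/A"},  # will be flagged twice
]
for i, item in enumerate(items):
    for problem in review_item(item):
        print(f"item {i}: {problem}")
```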

Why MongoDB Is a Bad Choice for Storing Our Scraped Data

MongoDB was used early on at Scrapinghub to store scraped data because it's convenient. Scraped data is represented as (possibly nested) records which can be serialized to JSON. The schema is not known ahead of time and may change from one job to the next. We need to support browsing, querying and downloading the stored data. This was very easy to implement using MongoDB (easier than the...
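The convenience the post describes is easy to see in code. Below is a minimal pymongo sketch that stores schema-less, nested scraped records and queries them back; the database, collection, and field names are illustrative, not from the post.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["scraping"]["items"]  # illustrative database/collection

# No schema is declared up front: nested, varying records go straight in.
collection.insert_many([
    {"url": "http://example.com/a", "product": {"name": "A", "price": 10}},
    {"url": "http://example.com/b", "reviews": [{"stars": 5}, {"stars": 3}]},
])

# Querying nested fields with dot notation is equally direct.
for doc in collection.find({"product.price": {"$gt": 5}}):
    print(doc["url"])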


Web Data Extraction Summit 2019

presented by Scrapinghub

Dublin, Ireland 
17th September 2019



