Category: scrapinghub

Spidermon: Scrapinghub’s Secret Sauce To Our Data Quality & Reliability Guarantee

If you know anything about Scrapinghub, you know that we are obsessed with data quality and data reliability.

Outside of building some of the most powerful web scraping tools in the world, we also specialise in helping companies extract the data they need for their mission-critical business requirements. Most notably companies who:

  • Rely on web data to make critical business decisions; or
  • ...

Proxy Management: Should I Build My Proxy Infrastructure In-House Or Use An Off-The-Shelf Proxy Solution?

Proxy management is the thorn in the side of most web scrapers. Without a robust and fully featured proxy infrastructure, you will often experience constant reliability issues and hours spent putting out proxy fires.

It’s a situation no web scraping professional wants to deal with. We web scrapers are interested in extracting and using web data, not in managing proxies.

In this article, we’re going to...

Looking Back at 2018

What a year 2018 has been for Scrapinghub!

It’s hard to know where to start…

This year has seen tremendous growth at Scrapinghub, setting us up to have a great 2019.

Here are some of the highlights of 2018…

Shubber GetTogether 2018

It’s hard to believe our annual Shubber GetTogether is already over.

A Sneak Peek Inside What Hedge Funds Think of Alternative Financial Data

Unbeknownst to many, there is a data revolution happening in finance.

Want to Predict Fitbit’s Quarterly Revenue? Eagle Alpha Did It Using Web Scraped Product Data

Throughout the history of the financial markets, information has been power. The trader with access to the most accurate information can quickly gain an edge over the market.

How Data Compliance Companies Are Turning To Web Crawlers To Take Advantage of the GDPR Business Opportunity

Over the last couple of weeks, GDPR has brought data protection to center stage. What was once a fringe concern for most businesses became, overnight, a burning problem that needed to be solved immediately.

Looking Back at 2017

It’s been another standout year for Scrapinghub and the scraping community at large. Together we crawled 79.1 billion pages (nearly double the 2016 total) and scraped over 103 billion records; what a year!

A Faster, Updated Scrapinghub

We’re very excited to announce a new look for Scrapinghub!

Deploy your Scrapy Spiders from GitHub

Until now, your deployment process using Scrapy Cloud has probably looked something like this: code and test your spiders locally, commit and push your changes to a GitHub repository, and finally deploy them to Scrapy Cloud using shub deploy. However, keeping the development and deployment processes in isolated steps can cause issues, such as unversioned and outdated code running...