Category: data-quality

A Practical Guide to Web Data QA Part V: Broad Crawls

If you haven’t read the previous ones, here are the first part, the second part, the third part and the fourth part of the series.

News & Article Data Extraction: Open Source vs Closed Source Solutions

Article extraction is the process of extracting data fields from an article page and putting them into a machine-readable structured format like JSON. In many use cases, the page you want to extract is a news article, but it can be any other type of article. Based on our 10+ years of experience in the web data extraction industry, the demand for structured article data is getting...
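To make that concrete, here is a minimal sketch of article extraction using newspaper3k, one open source library in this space; the URL is a placeholder, and the choice of library and fields is illustrative rather than a recommendation from the post:

```python
# Sketch: extract article fields into a JSON record with newspaper3k.
# The URL is a placeholder; swap in a real article page.
import json

from newspaper import Article

article = Article("https://example.com/news/some-story")
article.download()  # fetch the raw HTML
article.parse()     # run the extraction heuristics

# Serialise the extracted fields into a machine-readable structure.
record = {
    "title": article.title,
    "authors": article.authors,
    "published": article.publish_date.isoformat() if article.publish_date else None,
    "body": article.text,
}
print(json.dumps(record, indent=2))
```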

A Practical Guide to Web Data QA Part IV: Complementing Semi-Automated Techniques

If you haven’t read the previous ones, here are the first part, the second part and the third part of the series.

A Practical Guide to Web Data Extraction QA Part II: Common Validation Pitfalls

In case you missed the first part of this series, where we went through data validation techniques, you can read it now: A Practical Guide To Web Data Extraction QA Part I: Validation Techniques
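As a taste of the techniques covered there, here is a minimal sketch of one common approach, validating scraped items against a JSON Schema with the jsonschema library; the product schema and sample item are illustrative, not taken from the post itself:

```python
# Sketch: validate a scraped item against a JSON Schema and report
# every violation. The schema and item are illustrative examples.
from jsonschema import Draft7Validator

PRODUCT_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "minLength": 1},
        "price": {"type": "number", "minimum": 0},
        "url": {"type": "string"},
    },
    "required": ["name", "price", "url"],
}

validator = Draft7Validator(PRODUCT_SCHEMA)

item = {"name": "Example product", "price": -9.99, "url": "https://example.com/p/1"}

# iter_errors() collects all violations instead of stopping at the first,
# which is what you want when auditing a whole batch of scraped items.
for error in validator.iter_errors(item):
    print(f"{list(error.path)}: {error.message}")
```

Collecting every violation per item, rather than failing fast, makes it easy to aggregate error counts across an entire crawl.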

Building Blocks of an Unstoppable Web Scraping Infrastructure

More and more businesses are leveraging the power of web scraping, but growing adoption doesn't mean the technical challenges are gone. Building a sustainable web scraping infrastructure takes expertise and experience. Here at Scrapinghub, we scrape 9 billion pages per month. In this article, we are going to summarize what the essential elements of web...

Spidermon: Scrapinghub’s Secret Sauce To Our Data Quality & Reliability Guarantee

If you know anything about Scrapinghub, you know that we are obsessed with data quality and data reliability.

Beyond building some of the most powerful web scraping tools in the world, we also specialise in helping companies extract the data they need for their mission-critical business requirements. Most notably, companies that:

  • Rely on web data to make critical business decisions, or;
  • ...

Meet Spidermon: Scrapinghub’s Battle Tested Spider Monitoring Library [Now Open Sourced]

Your spider is developed and you’re getting your structured data daily, so the job is done, right?

Absolutely not! Websites change (sometimes very subtly), and anti-bot countermeasures and temporary problems often reduce the quality and reliability of your data.
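To give a flavour of how such monitoring looks in practice, here is a minimal monitor sketch along the lines of the Spidermon documentation: it fails the job when a spider closes with fewer items than expected. The threshold and the settings module path are illustrative:

```python
# Sketch: a Spidermon monitor that checks the item count when a spider
# closes. The threshold and project wiring are illustrative.
from spidermon import Monitor, MonitorSuite, monitors


@monitors.name("Item count")
class ItemCountMonitor(Monitor):
    @monitors.name("Minimum number of items")
    def test_minimum_number_of_items(self):
        items_extracted = getattr(self.data.stats, "item_scraped_count", 0)
        minimum_threshold = 10  # illustrative; tune per spider
        self.assertTrue(
            items_extracted >= minimum_threshold,
            msg=f"Extracted fewer than {minimum_threshold} items",
        )


class SpiderCloseMonitorSuite(MonitorSuite):
    monitors = [ItemCountMonitor]


# In the Scrapy project's settings.py (the module path below is hypothetical):
# SPIDERMON_ENABLED = True
# EXTENSIONS = {"spidermon.contrib.scrapy.extensions.Spidermon": 500}
# SPIDERMON_SPIDER_CLOSE_MONITORS = ("myproject.monitors.SpiderCloseMonitorSuite",)
```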

The Rise of Web Data in Hedge Fund Decision Making & The Importance of Data Quality

Over the past few years, there has been an explosion in the use of alternative data sources for investment decision making at hedge funds, investment banks and private equity firms.

These new data sources, collectively known as “alternative data”, have the potential to give firms a crucial informational edge in the market, enabling them to generate alpha.