Four Use Cases for Online Public Sentiment Data

Manually gauging online public sentiment toward a product, company, or industry is cursory at best and, at worst, may harm your business by producing incorrect or misleading insights.

The First-Ever Web Data Extraction Summit!

The range of use cases for web data extraction is rapidly expanding, and with it the investment required. Meanwhile, the number of websites continues to grow and is expected to exceed 2 billion by 2020.

Learn how to configure and use proxies with the Python Requests module

Sending HTTP requests in Python is not necessarily easy. The standard library provides modules like urllib (and urllib2 in Python 2) for handling HTTP requests, and there are also third-party tools like Requests. Many developers use Requests because it is high-level and designed to make sending HTTP requests extremely easy.
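As a taste of what the post covers, here is a minimal sketch of routing Requests traffic through a proxy. The proxy address is a hypothetical placeholder; substitute your own provider's endpoint.

```python
import requests

def make_proxied_session(proxy_url):
    """Return a requests.Session that routes HTTP and HTTPS traffic
    through the given proxy URL."""
    session = requests.Session()
    session.proxies.update({"http": proxy_url, "https": proxy_url})
    return session

# Usage (hypothetical proxy endpoint):
# session = make_proxied_session("http://user:pass@proxy.example.com:8080")
# session.get("https://httpbin.org/ip", timeout=10)
```

A `Session` is used here so the proxy settings apply to every request made through it, rather than being repeated per call via the `proxies=` argument.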

How to set up a custom proxy in Scrapy?

When scraping the web at a reasonable scale, you can run into a series of problems and challenges. You may want to access a website from a specific country or region, or you may need to work around anti-bot solutions. Whatever the case, overcoming these obstacles means using and managing proxies. In this article, I'm going to cover how to set up a custom proxy inside your Scrapy spider in...
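One common approach the post discusses is setting the proxy in a custom downloader middleware, since Scrapy's built-in HttpProxyMiddleware honors `request.meta["proxy"]`. The sketch below uses a hypothetical proxy URL and middleware name:

```python
# Hypothetical proxy endpoint -- replace with your provider's address.
PROXY_URL = "http://user:pass@proxy.example.com:8080"

class CustomProxyMiddleware:
    """Minimal downloader middleware that tags every outgoing request
    with a proxy. Enable it in settings.py, e.g.:
    DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.CustomProxyMiddleware": 350}
    """

    def process_request(self, request, spider):
        # Scrapy's built-in HttpProxyMiddleware reads this meta key
        # and routes the request through the given proxy.
        request.meta["proxy"] = PROXY_URL
```

For one-off cases you can skip the middleware entirely and set `meta={"proxy": PROXY_URL}` on individual `scrapy.Request` objects instead.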

GDPR Update: Scraping Public Personal Data

One common misconception about scraping personal data is that public personal data does not fall under the GDPR. Many businesses assume that because the data has already been made public on another website, it is fair game to scrape. In actuality, the GDPR makes no blanket exception for public personal data, and the same analysis required for any other personal data must be conducted prior to scraping...

Solution Architecture Part 5: Designing A Well-Optimised Web Scraping Solution

In the fifth and final post of this solution architecture series, we will share with you how we architect a web scraping solution, all the core components of a well-optimized solution, and the resources required to execute it.

Solution Architecture Part 4: Assessing The Technical Feasibility of Your Web Scraping Project

In the fourth post of this solution architecture series, we will share with you our step-by-step process for evaluating the technical feasibility of a web scraping project.

Visual Web Scraping Tools: What to Do When They Are No Longer Fit For Purpose?

Visual web scraping tools are great. They allow people with little to no technical know-how to extract data from websites after only a couple of hours of upskilling, making them ideal for simple lead generation, market intelligence, and competitor monitoring projects. They remove countless hours of manual entry work for sales and marketing teams, researchers, and business intelligence teams in the...

Solution Architecture Part 3: Conducting a Web Scraping Legal Review

In the third post of our solution architecture series, we will share with you our step-by-step process for conducting a legal review of every web scraping project we work on.

ScrapyRT: Turn Websites Into Real-Time APIs

If you’ve been using Scrapy for any period of time, you know the capabilities a well-designed Scrapy spider can give you.