To accurately extract data from a web page, developers usually need to develop custom code for each website. This is manageable, and even recommended, for tens or hundreds of websites where data quality is of the utmost importance; but if you need to extract data from thousands of sites, or to rapidly extract data from sites that are not yet covered by pre-existing code, this is often an...
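To make the per-site burden concrete, here is a minimal, purely illustrative sketch (the site names and markup are invented) of why each website tends to need its own extraction code:

```python
import re

# Hypothetical examples: the site names and markup are invented to show
# why each website typically needs its own extraction logic.
def extract_price_site_a(html: str) -> str:
    """Site A wraps prices in <span class="price">...</span>."""
    match = re.search(r'<span class="price">([^<]+)</span>', html)
    return match.group(1) if match else ""

def extract_price_site_b(html: str) -> str:
    """Site B exposes the price only as a data attribute."""
    match = re.search(r'data-price="([^"]+)"', html)
    return match.group(1) if match else ""

# The same logical field requires two incompatible extractors:
price_a = extract_price_site_a('<span class="price">9.99</span>')
price_b = extract_price_site_b('<div data-price="12.50">In stock</div>')
```

Multiply this by every field and every site, and the maintenance cost of thousands of hand-written extractors becomes clear.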
Today, we’re delighted to announce the launch of the beta program for Scrapinghub’s new AI-powered developer API for automated product and article extraction.
After much development and testing with alpha users, our team has refined this machine learning technology to the point where the data extraction engine is capable of automatically identifying common items on product and...
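As a rough illustration of what calling such an extraction API might look like: the endpoint URL, payload shape, and auth header below are assumptions for the sake of the sketch, not the documented beta API (consult the beta documentation for the real contract).

```python
import json
import urllib.request

# Illustrative only: endpoint, payload shape, and auth are assumptions.
API_ENDPOINT = "https://extraction-api.example.com/v1/extract"
API_KEY = "YOUR_API_KEY"  # placeholder

def build_request(url: str, page_type: str) -> urllib.request.Request:
    """Build a POST request asking the engine to extract one page.

    page_type would be "product" or "article", matching the two
    extraction types described in the announcement.
    """
    payload = json.dumps([{"url": url, "pageType": page_type}]).encode()
    return urllib.request.Request(
        API_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": API_KEY,
        },
        method="POST",
    )

# Building (not sending) a request for a product page:
req = build_request("https://example.com/product/123", "product")
```

The point of such an API is that the caller supplies only a URL and a page type; the site-specific parsing logic lives inside the extraction engine.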
In this the second post in our solution architecture series, we will share with you our step-by-step process for data extraction requirement gathering.
For many people (especially non-techies), trying to architect a web scraping solution for their needs, and to estimate the resources required to develop it, can be a tricky process.
Oftentimes, this is their first web scraping project, and as a result they have little experience to draw upon when investigating the feasibility of a data extraction project.
In this series of articles we’re going...
St Patrick’s Day Special: Finding Dublin’s Best Pint of Guinness With Web Scraping
At Scrapinghub we are known for our ability to help companies make mission-critical business decisions through the use of web scraped data.
But for anyone who enjoys a freshly poured pint of stout, there is one mission-critical question that creates a debate like no other…
“Who serves the best pint of Guinness?”
If you know anything about Scrapinghub, you know that we are obsessed with data quality and data reliability.
Outside of building some of the most powerful web scraping tools in the world, we also specialise in helping companies extract the data they need for their mission-critical business requirements. Most notably companies who:
- Rely on web data to make critical business decisions, or;
Proxy management is the thorn in the side of most web scrapers. Without a robust and fully featured proxy infrastructure, you will often experience constant reliability issues and hours spent putting out proxy fires.
It’s a situation no web scraping professional wants to deal with. We web scrapers are interested in extracting and using web data, not in managing proxies.
In this article, we’re going to...
“How does Scrapinghub Crawlera work?” is the most common question we get from customers who struggle for months (or years) with constant proxy issues, only to have them disappear completely when they switch to Crawlera.
Today we’re going to give you a behind the scenes look at Crawlera so you can see for yourself why it is the world’s smartest web scraping proxy network and the...
Let’s face it, managing your proxy pool can be an absolute pain and the biggest bottleneck to the reliability of your web scraping!
Nothing annoys developers more than crawlers failing because their proxies are continuously being banned.
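The kind of proxy rotation developers typically hand-roll to cope with bans can be sketched minimally as follows (the proxy URLs are placeholders; a production pool also needs ban detection, cooldown timers, session stickiness, and per-domain throttling):

```python
import itertools

# Placeholder proxy endpoints; a real pool would hold working proxies.
PROXIES = [
    "http://proxy1.example.com:8000",
    "http://proxy2.example.com:8000",
    "http://proxy3.example.com:8000",
]

_rotation = itertools.cycle(PROXIES)

def next_proxy(banned: set) -> str:
    """Round-robin to the next proxy that is not currently banned.

    This is the naive approach: it keeps requests moving, but it is
    exactly the kind of hand-rolled infrastructure that becomes a
    maintenance burden as the banned set grows and shrinks over time.
    """
    for _ in range(len(PROXIES)):
        proxy = next(_rotation)
        if proxy not in banned:
            return proxy
    raise RuntimeError("every proxy in the pool is banned")
```

Even this toy version hints at the bottleneck: once bans accumulate faster than proxies recover, the whole crawl stalls.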
Over the past few years, there has been an explosion in the use of alternative data sources in investment decision making in hedge funds, investment banks and private equity firms.
These new data sources, collectively known as “alternative data”, have the potential to give firms a crucial informational edge in the market, enabling them to generate alpha.