Attila Toth
3 Mins
October 15, 2019

Scrapy & Zyte Automatic Extraction API integration

We’ve just released a new open-source Scrapy middleware which makes it easy to integrate Zyte Automatic Extraction into your existing Scrapy spider.

If you haven’t heard about Zyte Automatic Extraction (formerly AutoExtract) yet, it’s an AI-based web scraping tool that automatically extracts data from web pages without the need to write any code.

Learn more about Zyte Automatic Extraction here.

Installation

This project uses Python 3.6+ and pip. A virtual environment is strongly encouraged.

$ pip install git+https://github.com/scrapinghub/scrapy-autoextract
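If you want the virtual environment mentioned above, a typical setup on a Unix-like shell looks like this (the directory name venv is just a convention):

$ python3 -m venv venv
$ source venv/bin/activate
$ pip install git+https://github.com/scrapinghub/scrapy-autoextract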

Configuration

Enable middleware

DOWNLOADER_MIDDLEWARES = {
    'scrapy_autoextract.AutoExtractMiddleware': 543,
}

This middleware should be the last one to be executed, so make sure to give it the highest value in your DOWNLOADER_MIDDLEWARES setting.

Zyte Automatic Extraction settings

Mandatory

These settings must be defined in order for Zyte Automatic Extraction to work.

  • AUTOEXTRACT_USER: your Zyte Automatic Extraction API key
  • AUTOEXTRACT_PAGE_TYPE: the kind of data to be extracted (current options: "product" or "article")

Optional

  • AUTOEXTRACT_URL: Zyte Automatic Extraction service URL (default: autoextract.scrapinghub.com)
  • AUTOEXTRACT_TIMEOUT: response timeout from Zyte Automatic Extraction API (default: 660 seconds)
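Taken together, the mandatory and optional settings might look like this in your project's settings.py; the API key is a placeholder, and the optional values below simply restate the defaults listed above:

AUTOEXTRACT_USER = 'my_autoextract_apikey'   # placeholder API key
AUTOEXTRACT_PAGE_TYPE = 'article'            # or 'product'

# Optional - shown with their default values
AUTOEXTRACT_URL = 'autoextract.scrapinghub.com'
AUTOEXTRACT_TIMEOUT = 660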

Spider

Zyte Automatic Extraction requests are opt-in: they must be enabled for each request by adding the following to the request meta:

meta['autoextract'] = {'enabled': True}

If the request was sent to Zyte Automatic Extraction, you can access the result inside your Scrapy spider through the response's meta attribute:

def parse(self, response):
    yield response.meta['autoextract']

Example

In the Scrapy settings file:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_autoextract.AutoExtractMiddleware': 543,
}

# Disable the AutoThrottle extension
AUTOTHROTTLE_ENABLED = False

AUTOEXTRACT_USER = 'my_autoextract_apikey'
AUTOEXTRACT_PAGE_TYPE = 'article'

In the spider:

import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com']

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, meta={'autoextract': {'enabled': True}}, callback=self.parse)

    def parse(self, response):
        yield response.meta['autoextract']
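One way to run this spider from inside a Scrapy project directory and write the yielded results to a file is something like:

$ scrapy crawl example -o items.json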

Example output:

[{
  "query": {
    "domain": "example.com",
    "userQuery": {
      "url": "https://www.example.com/news/2019/oct/15/lorem-dolor-sit",
      "pageType": "article"
    },
    "id": "1570771884892-800e44fc7cf49259"
  },
  "article": {
    "articleBody": "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat...",
    "description": "Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatu",
    "probability": 0.9717171744215637,
    "inLanguage": "en",
    "headline": "Lorem Ipsum Dolor Sit Amet",
    "author": "Attila Toth",
    "articleBodyHtml": "<article>\n\n<p>Lorem ipsum...",
    "images": ["https://i.example.com/img/media/12a71d2200e99f9fff125972b88ff395f5e..."],
    "mainImage": "https://i.example.com/img/media/12a71d2200e99f9fff125972b88ff395f5e..."
  }
}]
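If you want to yield cleaner items instead of the raw payload, a minimal sketch of a parse callback might look like the following. It assumes the result has exactly the shape of the example output above (a list with one entry per query, with the extracted fields under "article"); the field names are the ones shown above, and anything missing is simply yielded as None:

def parse(self, response):
    # Sketch only: iterate over the result list shown in the example
    # output above and keep a handful of article fields.
    for entry in response.meta['autoextract']:
        article = entry.get('article', {})
        yield {
            'headline': article.get('headline'),
            'author': article.get('author'),
            'body': article.get('articleBody'),
            'source_url': entry.get('query', {}).get('userQuery', {}).get('url'),
        }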

Limitations

  • The incoming spider request is rendered by Zyte Automatic Extraction, not just downloaded by Scrapy, which can change the result: the IP address, headers, and other request details will differ.
  • Only GET requests are supported.
  • Custom headers and cookies are not supported (i.e. the Scrapy features to set them don't work).
  • Proxies are not supported (they would work incorrectly, sitting between Scrapy and Zyte Automatic Extraction instead of between Zyte Automatic Extraction and the website).
  • The AutoThrottle extension can work incorrectly for Zyte Automatic Extraction requests, because response times can be much longer than the time required to simply download a page, so it's best to use AUTOTHROTTLE_ENABLED=False in the settings.
  • Redirects are handled by Zyte Automatic Extraction, not by Scrapy, so Scrapy's redirect middlewares have no effect.
  • Retries should be disabled because Zyte Automatic Extraction handles them internally (use RETRY_ENABLED=False in the settings). The exception is when too many requests are sent in a short amount of time and the Zyte Automatic Extraction API returns HTTP status 429; in that case, it's best to use RETRY_HTTP_CODES=[429] so only those responses are retried. A combined settings sketch follows this list.
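Putting the last two points together, a hedged settings.py sketch could look like this; whether you disable retries entirely or retry only HTTP 429 depends on how many requests you send in a short period:

# Let Zyte Automatic Extraction handle timing and retries itself
AUTOTHROTTLE_ENABLED = False

# Option 1: disable Scrapy retries entirely
RETRY_ENABLED = False

# Option 2: if the API starts returning HTTP 429, keep retries enabled
# but retry only those responses
# RETRY_ENABLED = True
# RETRY_HTTP_CODES = [429]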

Check out the middleware on GitHub or learn more about Zyte Automatic Extraction (formerly AutoExtract)!
