Scrapy & AutoExtract API integration

We’ve just released a new open-source Scrapy middleware which makes it easy to integrate AutoExtract into your existing Scrapy spider. If you haven’t heard about AutoExtract yet, it’s an AI-based web scraping tool which automatically extracts data from web pages without the need to write any code. Learn more about AutoExtract here.


Installation

This project requires Python 3.6+ and pip. A virtual environment is strongly encouraged.

$ pip install git+



Enable middleware

DOWNLOADER_MIDDLEWARES = {
    'scrapy_autoextract.AutoExtractMiddleware': 543,
}

This middleware should be the last one to run, so make sure to give it the highest order value in DOWNLOADER_MIDDLEWARES.

AutoExtract settings


These settings are required for AutoExtract to work:

  • AUTOEXTRACT_USER: your AutoExtract API key
  • AUTOEXTRACT_PAGE_TYPE: the kind of data to be extracted (current options: "product" or "article")

The following settings are optional:

  • AUTOEXTRACT_URL: AutoExtract service URL (default:
  • AUTOEXTRACT_TIMEOUT: response timeout for AutoExtract requests (default: 660 seconds)
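Putting these together, a minimal settings.py fragment might look like the sketch below; the API key is a placeholder, and the timeout shown is simply the default restated:

```python
# settings.py (sketch; the key below is a placeholder)
AUTOEXTRACT_USER = '<your AutoExtract API key>'   # required
AUTOEXTRACT_PAGE_TYPE = 'article'                 # required: 'article' or 'product'

# Optional: override the response timeout (default is 660 seconds)
AUTOEXTRACT_TIMEOUT = 660
```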


AutoExtract requests are opt-in: they must be enabled for each request by adding the following to the request meta:

meta['autoextract'] = {'enabled': True}
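Because the flag lives in the request meta, a tiny helper can keep spiders readable when many requests opt in. `autoextract_meta` below is a hypothetical convenience function, not part of the middleware:

```python
def autoextract_meta(extra=None):
    """Build a request meta dict that opts the request into AutoExtract.

    Hypothetical helper, not part of scrapy-autoextract; any existing
    meta entries passed via `extra` are preserved.
    """
    meta = dict(extra or {})
    meta['autoextract'] = {'enabled': True}
    return meta

# Usage inside a spider (sketch):
#   yield scrapy.Request(url, meta=autoextract_meta(), callback=self.parse)
```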

If the request was sent through AutoExtract, you can access the result inside your Scrapy spider via the response's meta attribute:

def parse(self, response):
    yield response.meta['autoextract']



In the Scrapy settings file:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_autoextract.AutoExtractMiddleware': 543,
}

# Disable AutoThrottle middleware
AUTOTHROTTLE_ENABLED = False

AUTOEXTRACT_USER = 'my_autoextract_apikey'
AUTOEXTRACT_PAGE_TYPE = 'article'

In the spider:

import scrapy
from scrapy import Spider

class ExampleSpider(Spider):
    name = 'example'
    start_urls = ['']

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(
                url,
                meta={'autoextract': {'enabled': True}},
                callback=self.parse,
            )

    def parse(self, response):
        yield response.meta['autoextract']

Example output:

"articleBody":"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat...",
"description":"Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatu",
"headline":"'Lorem Ipsum Dolor Sit Amet",
"author":"Attila Toth",
"articleBodyHtml":"<article>\n\n<p>Lorem ipsum...",



Limitations

  • The incoming spider request is rendered by AutoExtract, not just downloaded by Scrapy, which can change the result: the IP address is different, the headers are different, etc.
  • Only GET requests are supported
  • Custom headers and cookies are not supported (i.e. Scrapy features to set them don't work)
  • Proxies are not supported (they would work incorrectly, sitting between Scrapy and AutoExtract, instead of AutoExtract and website)
  • The AutoThrottle extension can work incorrectly for AutoExtract requests, because AutoExtract response times can be much longer than the time required to download a page; it's best to set AUTOTHROTTLE_ENABLED=False in the settings.
  • Redirects are handled by AutoExtract, not by Scrapy, so Scrapy's redirect middlewares have no effect on these requests.
  • Retries should be disabled, because AutoExtract handles them internally (use RETRY_ENABLED=False in the settings). The exception is HTTP code 429, which AutoExtract returns when too many requests are sent in a short amount of time; for that case it's best to use RETRY_HTTP_CODES=[429].
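The throttling and retry advice above can be combined in the settings file. The sketch below keeps Scrapy's built-in retry middleware enabled but limits it to the rate-limit case:

```python
# settings.py (sketch, following the limitations above)
AUTOTHROTTLE_ENABLED = False   # AutoExtract timing skews AutoThrottle's estimates

# Retry only rate-limited (HTTP 429) responses;
# AutoExtract handles all other retries internally.
RETRY_HTTP_CODES = [429]
```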

Check out the middleware on GitHub or learn more about AutoExtract!
