passing selenium response url to scrapy

Use a Downloader Middleware to catch pages that require Selenium before Scrapy processes them through its normal download path:

The downloader middleware is a framework of hooks into Scrapy’s request/response processing. It’s a light, low-level system for globally altering Scrapy’s requests and responses.
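For the middleware to run at all, it has to be registered in your project settings. A minimal sketch, assuming the class lives at `myproject.middlewares.JSMiddleware` (that dotted path and the priority number are placeholders for your own project layout):

```python
# settings.py -- register the custom downloader middleware.
# 'myproject.middlewares.JSMiddleware' is a placeholder path; 543 is an
# arbitrary mid-range priority that controls ordering among middlewares.
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.JSMiddleware': 543,
}
```

Lower numbers run closer to the engine on requests; the exact value only matters relative to the other middlewares you have enabled.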

Here’s a very basic example using PhantomJS (note that PhantomJS has since been deprecated in Selenium; a headless Chrome or Firefox driver is the modern replacement, but the pattern is the same):

from scrapy.http import HtmlResponse
from selenium import webdriver

class JSMiddleware(object):
    def process_request(self, request, spider):
        driver = webdriver.PhantomJS()
        try:
            driver.get(request.url)
            # Capture the rendered page and the final URL (after any
            # JS-driven redirects) before shutting the driver down.
            body = driver.page_source
            url = driver.current_url
        finally:
            driver.quit()
        return HtmlResponse(url, body=body, encoding='utf-8', request=request)

Once you return that HtmlResponse (or a TextResponse, if that’s what you really want), Scrapy stops calling the remaining downloader middlewares and hands the response to the spider’s parse method. As the Scrapy documentation puts it:

If it returns a Response object, Scrapy won’t bother calling any other
process_request() or process_exception() methods, or the appropriate
download function; it’ll return that response. The process_response()
methods of installed middleware are always called on every response.

In this case, you can continue to use your spider’s parse method as you normally would with HTML, except that the JS on the page has already been executed.

Tip: Since the Downloader Middleware’s process_request method accepts the spider as an argument, you can check a flag on the spider (or on the request) and only run Selenium for pages that actually need JS. That lets you handle both JS and non-JS pages with the exact same spider class.
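One way to implement that conditional is a minimal sketch like the following, which uses a hypothetical `render_js` key in `request.meta` (not a Scrapy built-in) to opt individual requests into Selenium rendering. The imports are deferred into the rendering branch so the decision logic itself has no heavy dependencies:

```python
class SelectiveJSMiddleware(object):
    """Downloader middleware that only renders pages flagged for JS.

    A request opts in with request.meta['render_js'] = True -- a
    hypothetical key chosen for this sketch, not a Scrapy built-in.
    """

    @staticmethod
    def needs_rendering(request_meta):
        # Pure decision logic, kept separate so it is easy to test.
        return bool(request_meta.get('render_js', False))

    def process_request(self, request, spider):
        if not self.needs_rendering(request.meta):
            # Returning None lets Scrapy's regular downloader handle it.
            return None
        from scrapy.http import HtmlResponse
        from selenium import webdriver
        driver = webdriver.PhantomJS()
        try:
            driver.get(request.url)
            body = driver.page_source
            url = driver.current_url
        finally:
            driver.quit()
        return HtmlResponse(url, body=body, encoding='utf-8',
                            request=request)
```

In the spider you would then yield plain requests for static pages and `scrapy.Request(url, meta={'render_js': True})` for the JS-heavy ones.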
