The Scrapy Crawler Framework

Preface:

A crawler built on the requests + BeautifulSoup pattern runs into problems as the work grows: performance, fast data storage, and unified management of many crawlers. That is why I moved to a crawler framework: Scrapy!

Introduction to Scrapy

Scrapy is an application framework written to crawl websites and extract structured data. It can be used in a wide range of programs, from data mining to information processing to archiving historical data.
It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy has broad uses, including data mining, monitoring, and automated testing.

Scrapy features

---- asynchronous page downloads via the Twisted library

---- parsing HTML into selectable objects

---- proxy support

---- download delays

---- URL de-duplication

---- depth and breadth limits for the crawl

---- ... and more (several of these map directly to entries in settings.py; see the sketch below)
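
A minimal sketch of those settings, assuming a standard Scrapy project (the setting names are Scrapy's built-in ones; the values are only illustrative):

# settings.py -- illustrative values
DOWNLOAD_DELAY = 2          # wait 2 seconds between downloads (download delay)
DEPTH_LIMIT = 3             # stop following links deeper than 3 levels
DEPTH_PRIORITY = 1          # a positive value deprioritizes deeper requests (used for breadth-first crawling)
DUPEFILTER_CLASS = 'scrapy.dupefilters.RFPDupeFilter'   # the default request/URL de-duplication filter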

Scrapy architecture and workflow

Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture, component by component, is roughly as follows.

Scrapy's main components are:

Engine (Scrapy Engine)
Handles the data flow across the whole system and triggers events; it is the core of the framework.

Scheduler
Accepts requests from the engine, pushes them onto a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs (the addresses of the pages to crawl): it decides which URL to fetch next and removes duplicate URLs.

Downloader
Downloads page content and returns it to the spiders. (The Scrapy downloader is built on Twisted's efficient asynchronous model.)

Spiders
The spiders do the real work: they extract the information you need, the so-called items, from specific pages. They can also extract links from those pages so Scrapy keeps crawling the next ones.

Item Pipeline
Processes the items the spiders extract from pages. Its main jobs are persisting items, validating them, and dropping unwanted data. Once a page has been parsed by a spider, its items are sent to the pipelines and processed through a specific sequence of steps.

Downloader middlewares
Hooks between the Scrapy engine and the downloader; they process the requests and responses exchanged between the two.

Spider middlewares
Hooks between the Scrapy engine and the spiders; they process the spiders' response input and request output.

Scheduler middlewares
Hooks between the Scrapy engine and the scheduler; they process the requests and responses sent from the engine to the scheduler.

The Scrapy run loop is roughly:

1. The programmer defines the spider's starting URLs in Spiders.

2. The Scrapy engine pushes the spider's starting URLs to the Scheduler.

3. The Scheduler schedules URLs, and the Downloader fetches the HTML from the internet.

4. The Downloader downloads the HTML and returns it to the Spiders (via a callback).

5. The Spiders hand scraped content to the Item Pipeline to be saved to a database or file, or yield new requests and loop back through steps 1-5. A minimal sketch of this loop follows.
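
To make the loop concrete, here is a minimal spider sketch that exercises every step above (the spider name, URL, and selectors are illustrative, and it assumes a reasonably recent Scrapy where response.follow() and .get() are available):

import scrapy


class FlowDemoSpider(scrapy.Spider):
    name = 'flow_demo'                          # illustrative name
    start_urls = ['http://example.com/']        # step 1: the starting URLs

    def parse(self, response):                  # step 4: the Downloader delivers the response here
        # step 5a: yield scraped data -> it flows into the Item Pipeline
        yield {'url': response.url, 'title': response.xpath('//title/text()').get()}
        # step 5b: yield new requests -> they go back to the Scheduler and the loop repeats
        for href in response.xpath('//a/@href').getall():
            yield response.follow(href, callback=self.parse)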

Installing & using Scrapy

Installation

1.Linux

pip install scrapy

2.Windows

2.1: Download Twisted

Twisted-18.7.0-cp36-cp36m-win_amd64.whl: cp36 is the CPython interpreter version (3.6), and win_amd64 means 64-bit Windows. After downloading, install the wheel with pip, e.g. pip install Twisted-18.7.0-cp36-cp36m-win_amd64.whl.

2.2: Install Scrapy

pip install scrapy -i http://pypi.douban.com/simple --trusted-host pypi.douban.com

Basic usage

scrapy startproject projectname            # create a Scrapy project
cd projectname


scrapy genspider [-t template] <name> <domain>      # create a spider
scrapy genspider -t basic le le.com                 # create spider 1
scrapy genspider -t xmlfeed bestseller bestseller.com.cn   # create spider 2



scrapy list                                 # list the spiders in the project



scrapy crawl <spider name> --nolog          # run a single spider; --nolog suppresses log output

Edit settings.py:
ROBOTSTXT_OBEY = False    # whether to obey the site's robots.txt
Readers are strongly advised to respect robots.txt; if ROBOTSTXT_OBEY = True and you cannot get a response, you really should get the site owner on the phone and talk it over!


Let the crawling begin...

# -*- coding: utf-8 -*-
import scrapy

# import sys,os,io
# sys.stdout=io.TextIOWrapper(sys.stdout.buffer,encoding='gb18030') # work around console encoding errors

class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['baidu.com']      # restrict crawling to these domains
    start_urls = ['http://baidu.com/']   # starting URLs

    def parse(self, response):           # callback
        print(response.text)
baidu

Selectors

# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import Selector, HtmlXPathSelector
from scrapy.http import HtmlResponse
# import sys,os,io
# sys.stdout=io.TextIOWrapper(sys.stdout.buffer,encoding='gb18030') # work around console encoding errors

class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['baidu.com']      # restrict crawling to these domains
    start_urls = ['http://baidu.com/']   # starting URLs

    def parse(self, response):           # callback
        html = """<!DOCTYPE html>
        <html>
            <head lang="en">
                <meta charset="UTF-8">
                <title></title>
            </head>
            <body>
                <ul>
                    <li class="item-"><a id='i1' href="link.html">first item</a></li>
                    <li class="item-0"><a id='i2' href="llink.html">first item</a></li>
                    <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
                </ul>
                <div><a href="llink2.html">second item</a></div>
            </body>
        </html>
        """
        response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
        # hxs = Selector(response=response).xpath('//a')                     # all <a> tags
        # hxs = Selector(response=response).xpath('//a[@id]')                # <a> tags that have an id attribute
        # hxs = Selector(response=response).xpath('//a[@id="i1"]')           # the <a> tag with id="i1"

        # hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')   # <a> tags with href="link.html" AND id="i1"

        # hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')       # <a> tags whose href contains "link"
        # hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')    # <a> tags whose href starts with "link"

        # regex matching
        # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]')                   # <a> tags whose id is "i" followed by digits
        # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract()  # /text() extracts the text content
        # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()   # /@href extracts the href attribute

        # hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()    # extract() returns a plain Python list
        # hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()  # only the first matching <a> tag


        ul_list = Selector(response=response).xpath('//body/ul/li')  # selector lists can be iterated with for
        for item in ul_list:
            v = item.xpath('./a/span')  # relative lookup under the current node: ./, */, a   (note: // searches all descendants, / only direct children)
            #
            # v = item.xpath('a/span')
            #
            # v = item.xpath('*/a/span')
            print(v)
xpath selectors
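
The same queries can also be written with Scrapy's CSS selectors, which are compiled down to XPath internally. A short sketch of rough equivalents (assumes a Scrapy version that provides .get()/.getall(); on older versions use extract_first()/extract()):

# CSS selector equivalents (illustrative)
Selector(response=response).css('a')                        # all <a> tags
Selector(response=response).css('a#i1')                     # the <a> tag with id="i1"
Selector(response=response).css('a[href^="link"]')          # <a> tags whose href starts with "link"
Selector(response=response).css('a::text').getall()         # text content of every <a>
Selector(response=response).css('a::attr(href)').get()      # href attribute of the first match
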
# -*- coding: utf-8 -*-
import scrapy
import urllib.parse
from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.http.cookies import CookieJar


class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['chouti.com']
    start_urls = ['http://chouti.com/']
    cookie_dict = {}

    '''
    1. Send a GET request to the Chouti home page
       and capture the cookies.

    2. POST the username/password to log in, carrying the cookies from step 1.
       The response reports success (code 9999).

    3. Do whatever you like, carrying the cookies.

    '''

    def start_requests(self):  # override the parent's start_requests to control the starting URLs
        for url in self.start_urls:
            yield Request(url, dont_filter=True, callback=self.index)

    def index(self, response):  # home page
        cookie_jar = CookieJar()  # extract this request's cookies into a CookieJar object
        cookie_jar.extract_cookies(response, response.request)  # pull the cookies out of the response
        # copy the cookies into a plain dict
        for k, v in cookie_jar._cookies.items():
            for i, j in v.items():
                for m, n in j.items():
                    self.cookie_dict[m] = n.value
        post_dict = {
            "phone": '8613220198866',
            "password": "woshiniyeye",
            "oneMonth": 1,
        }

        yield Request(  # send the POST request to log in
            url='https://dig.chouti.com/login',
            method='POST',
            cookies=self.cookie_dict,
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body=urllib.parse.urlencode(post_dict),

            callback=self.login
        )

    def login(self, response):
        yield Request(url='https://dig.chouti.com/', cookies=self.cookie_dict, callback=self.get_news)

    def get_news(self, response):
        hxs = Selector(response)
        link_id_list = hxs.xpath('//div[@class="part2"]/@share-linkid').extract()  # collect the news item IDs
        for link in link_id_list:
            base_url = "http://dig.chouti.com/link/vote?linksId=%s" % (link)
            yield Request(
                    url=base_url,
                    method='POST',
                    cookies=self.cookie_dict,
                    callback=self.end_parse
                        )
    def end_parse(self, response):
        print(response.text)
Chouti upvote spider
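
Incidentally, the manual urlencode plus Content-Type header in index() can usually be replaced with scrapy.FormRequest, which encodes the form body for you. A sketch, under the same assumptions about the login fields as above:

from scrapy.http import FormRequest

# inside index(self, response), instead of building the POST Request by hand:
yield FormRequest(
    url='https://dig.chouti.com/login',
    formdata={'phone': '8613220198866', 'password': 'woshiniyeye', 'oneMonth': '1'},
    cookies=self.cookie_dict,
    callback=self.login,
)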

The Pipeline component

Pipelines serialize and store the scraped data; usage is shown below. Pipeline components are global: every spider that returns an item object will run through the registered pipelines.

So how does a pipeline tell the spiders apart and treat each one differently?

def process_item(self, item, spider):
    '''
    Called while the spider is scraping, for every item it yields.
    :param item: the object yielded by the spider
    :param spider: the spider object, e.g. obj = JandanSpider()
    :return:
    '''
    if spider.name == 'jandan':
        print(item)
        return item  # returning the item passes it to the next pipeline's process_item, chaining them together
Distinguishing spiders via spider.name
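
Besides branching on spider.name, a pipeline can also throw away items it does not want by raising Scrapy's DropItem exception; dropped items never reach the later pipelines. A minimal sketch (the filtering condition is illustrative):

from scrapy.exceptions import DropItem


class FilterPipeline(object):
    def process_item(self, item, spider):
        if not item.get('url'):                       # illustrative condition
            raise DropItem('missing url in %r' % item)
        return item                                   # pass the item on to the next pipeline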

0. Register the pipeline in the project's settings.py

ITEM_PIPELINES = {
   'sp2.pipelines.Sp2Pipeline': 300,   # register the pipeline; 300 is the priority, and lower values run earlier
}
settings.py

1. yield Sp2Item() objects from the spider

from ..items import Sp2Item

yield Sp2Item(url=url, text=text)   # yielding an item object hands the extracted content to the Item Pipeline
spider.py

2. In items.py, declare the fields the spider yields

import scrapy


class Sp2Item(scrapy.Item):
    url = scrapy.Field()    # declare a field
    text = scrapy.Field()
items.py

3. Implement the storage logic for the scraped data in pipelines.py

class Sp2Pipeline(object):
    def __init__(self):
        self.f = None


    def process_item(self, item, spider):  # called for every item while the spider is scraping
        '''
        Called while the spider is scraping, for every item it yields.
        :param item: the object yielded by the spider
        :param spider: the spider object, e.g. obj = JandanSpider()
        :return:
        '''
        print(item)
        return item

    @classmethod
    def from_crawler(cls, crawler):      # called at start-up to create the pipeline object
        """
        Called at initialization to create the pipeline object.
        :param crawler:
        :return:
        """

        return cls()

    def open_spider(self, spider):  # called when the spider starts
        """
        Called when the spider starts running.
        :param spider:
        :return:
        """
        print('spider started!!')

    def close_spider(self, spider):   # called when the spider closes
        """
        Called when the spider is closed.
        :param spider:
        :return:
        """
        print('spider finished')
pipelines.py
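
To make the persistence part concrete, here is a sketch of a pipeline that appends every item to a JSON-lines file, opening the file in open_spider and closing it in close_spider (the output file name is arbitrary):

import json


class JsonLinesPipeline(object):
    def open_spider(self, spider):
        self.f = open('items.jl', 'a', encoding='utf-8')    # arbitrary output file

    def process_item(self, item, spider):
        self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item                                         # keep the item moving through later pipelines

    def close_spider(self, spider):
        self.f.close()

It would be registered in ITEM_PIPELINES exactly like Sp2Pipeline above.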

Scrapy middlewares

Like Django, Scrapy supports middleware: hooks that apply uniform processing while Scrapy builds requests and downloads pages, for example modifying request headers to add a crawler proxy, or adjusting how responses are decoded. (A concrete proxy example follows the built-in templates below.)

0. Register the middlewares in the project's settings.py

SPIDER_MIDDLEWARES = {
   'sp33.middlewares.Sp33SpiderMiddleware': 3,
}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   'sp33.middlewares.Sp33DownloaderMiddleware': 543,
}
settings.py

1. Spider middleware

from scrapy import signals


class Sp33SpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        # 0. Called when the crawler is created.
        s = cls()
        # extend spider_opened via the signal system
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # 3. Called after the response is downloaded, before it reaches parse().
        print('----------------------------------------------------process_spider_input')
        return None

    def process_spider_output(self, response, result, spider):
        print('------------------------------------------------------process_spider_output')
        # 4. Called with the results after parse() has processed the response.
        # Must return an iterable of Request or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when an exception is raised.
        # Return None to let the remaining middlewares handle the exception,
        # or an iterable of Response or Item objects to hand to the scheduler or the pipeline.
        pass

    def process_start_requests(self, start_requests, spider):
        print('-------------------------------------------------process_start_requests')
        # 2. Called with start_requests when the spider starts.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        # 1. The signal handler registered above, fired when the spider opens.
        print('-------------------------------------------------------spider_opened')
        spider.logger.info('Spider opened: %s' % spider.name)
Spider middleware

2. Downloader middleware

from scrapy import signals


class Sp33DownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        """
        Called for each request before it is sent to the downloader.
        :param request:
        :param spider:
        :return:
            None: continue processing this request
            Response object: skip the download and pass the response to the middlewares' process_response
            Request object: stop this chain; the request is rescheduled for download
            raise IgnoreRequest: Request.errback is called
        """
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        """
        Called with the response returned from the downloader.
        :param request:
        :param response:
        :param spider:
        :return:
            Response object: passed on to the other middlewares' process_response
            Request object: stop this chain; the request is rescheduled for download
            raise IgnoreRequest: Request.errback is called
        """
        # Called with the response returned from the downloader.

        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        """
        Called when a download handler or process_request() (from another downloader middleware) raises an exception.
        :param request:
        :param exception:
        :param spider:
        :return:
            None: let the remaining middlewares keep handling the exception
            Response object: stops the process_exception() chain
            Request object: stops the chain; the request is rescheduled for download
        """
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
Downloader middleware
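
As a concrete example of what process_request is typically used for, here is a sketch of a downloader middleware that attaches a proxy and overrides the User-Agent on every outgoing request (the proxy address and header value are placeholders, not working endpoints):

class CustomProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta['proxy'] = 'http://127.0.0.1:8888'                       # placeholder proxy address
        request.headers['User-Agent'] = 'Mozilla/5.0 (compatible; my-bot)'    # placeholder UA string
        return None    # None lets the request continue through the remaining middlewares

It would be registered in DOWNLOADER_MIDDLEWARES just like Sp33DownloaderMiddleware above.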

Extending and customizing Scrapy

Customizing URL de-duplication

Scrapy ships with URL de-duplication enabled by default; the built-in filter keeps a record of visited requests (persisting it to a file when a job directory is configured). You can swap in your own de-duplication logic, for example to push tens of thousands of URLs into an in-memory store with much lower I/O wait, such as Redis (see the Redis sketch after the registration step below).

1. Define the de-duplication rule

class RepeatUrl:
    def __init__(self):
        self.visited_url = set()  # held in the memory of the current process

    @classmethod
    def from_settings(cls, settings):
        """
        Called at initialization.
        :param settings:
        :return:
        """
        return cls()

    def request_seen(self, request):
        """
        Check whether the current request has already been visited.
        :param request:
        :return: True if it has been visited before, False otherwise
        """
        print('=============================================================' + request.url)
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """
        Called when crawling starts.
        :return:
        """
        print('open replication')

    def close(self, reason):
        """
        Called when the spider finishes crawling.
        :param reason:
        :return:
        """
        print('close replication')

    def log(self, request, spider):
        """
        Log a duplicate request.
        :param request:
        :param spider:
        :return:
        """
        print('repeat', request.url)
rep.py

2. Register it in the project's settings.py

DUPEFILTER_CLASS = 'sp2.rep.RepeatUrl'
settings.py
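
Following the idea of pushing the visited set into a faster in-memory store, here is a rough sketch of the same filter backed by Redis. It assumes the third-party redis package and a Redis server on localhost; the key name is arbitrary:

import redis    # third-party client: pip install redis


class RedisRepeatUrl(object):
    def __init__(self):
        self.conn = redis.Redis(host='127.0.0.1', port=6379)     # assumed local Redis server

    @classmethod
    def from_settings(cls, settings):
        return cls()

    def request_seen(self, request):
        # sadd returns 1 if the URL was newly added, 0 if it was already in the set
        return self.conn.sadd('visited_urls', request.url) == 0

    def open(self):
        pass

    def close(self, reason):
        pass

    def log(self, request, spider):
        print('repeat', request.url)

Point DUPEFILTER_CLASS at it in settings.py in the same way as above.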

Custom extensions based on Scrapy's built-in signals

Scrapy is a highly extensible framework. Much like Django's signals, Scrapy exposes many signal hooks so that you can bolt custom behavior onto any stage of a crawl. The built-in signals include:

engine_started
engine_stopped
spider_opened
spider_idle
spider_closed
spider_error
request_scheduled
request_dropped
response_received
response_downloaded
item_scraped
item_dropped

1. Register the extension in settings.py

EXTENSIONS = {
   'sp2.extends.MyExtension': 1,   # dotted path of the custom extension : priority
}

2. The extension itself

from scrapy import signals


class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)

        # register a handler for the spider_opened signal
        crawler.signals.connect(ext.opened, signal=signals.spider_opened)
        # register a handler for the spider_closed signal
        crawler.signals.connect(ext.closed, signal=signals.spider_closed)

        return ext

    def opened(self, spider):
        print('########################### spider opened ###########################')

    def closed(self, spider):
        print('########################### spider closed ###########################')



extends.py
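
The same pattern works for any signal in the list above. For example, here is a sketch of an extension that counts scraped items by connecting to item_scraped (the class and counter names are arbitrary):

from scrapy import signals


class ItemCounterExtension(object):
    def __init__(self):
        self.count = 0

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def item_scraped(self, item, spider):
        self.count += 1

    def spider_closed(self, spider):
        spider.logger.info('%s scraped %d items' % (spider.name, self.count))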

Extending Scrapy's commands

1. Create a directory at the same level as spiders, e.g. commands

2. Inside that directory create a crawlall.py file (the file name becomes the custom command name)

from scrapy.commands import ScrapyCommand
from scrapy.utils.project import get_project_settings


class Command(ScrapyCommand):

    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def run(self, args, opts):
        spider_list = self.crawler_process.spiders.list()
        for name in spider_list:
            self.crawler_process.crawl(name, **opts.__dict__)
        self.crawler_process.start()

crawlall.py

3. In settings.py add COMMANDS_MODULE = '<project name>.<directory name>'

4. From the project directory, run scrapy crawlall to start every spider at once.
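
To make step 3 concrete: if the project from the earlier examples is named sp2 and the directory is commands, the setting would look like this (both names are assumptions carried over from above):

# settings.py
COMMANDS_MODULE = 'sp2.commands'    # '<project name>.<directory name>'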

Reference: http://www.cnblogs.com/wupeiqi/articles/6229292.html

Original post: https://www.cnblogs.com/sss4/p/9429835.html