Web Crawling with Scrapy

- Installing the Scrapy environment (Windows)
      a. pip3 install wheel

      b. Download Twisted from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted

      c. In the download directory, run: pip3 install Twisted-17.1.0-cp35-cp35m-win_amd64.whl

      d. pip3 install pywin32

      e. pip3 install scrapy
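
Once all five steps succeed, a quick sanity check from any terminal (no project needed):

      scrapy version        # prints the installed Scrapy version if the environment is set up correctly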

- Using Scrapy
    - 1. Create a project: scrapy startproject ProName   (note: startproject is a single word, no space in the middle)
    - 2. cd ProName
    - 3. Create a spider file: scrapy genspider first www.xxx.com
    - 4. Run the spider (comment out allowed_domains first):
        - settings.py:
            - do not obey the robots.txt protocol (ROBOTSTXT_OBEY = False)
            - spoof the UA (set a browser USER_AGENT)
            - set the log level: LOG_LEVEL = 'ERROR'
        scrapy crawl spiderName
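
For orientation, a minimal sketch of what the generated spider (first.py) looks like after the tweaks above; the start URL and xpath are illustrative only:

import scrapy

class FirstSpider(scrapy.Spider):
    name = 'first'
    # allowed_domains = ['www.xxx.com']   # commented out so requests outside this domain are not filtered
    start_urls = ['https://www.xxx.com/']

    def parse(self, response):
        # called automatically with the response of every URL in start_urls
        title = response.xpath('/html/head/title/text()').extract_first()
        print(title)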

- Persistent storage
    - Via the command line:
        - Limitation: only the return value of the parse method can be persisted, and only to a local file
        - Command: scrapy crawl spiderName -o filePath
    - Via pipelines:
        - Workflow (a sketch of all six steps follows this list):
            1. Parse the data
            2. Define the corresponding fields in the item class to hold the parsed data (add the attributes in items.py)
            3. Store/wrap the parsed data into an item object (instantiate the item in the spider file and fill it in; note: it is item['name'], not item.name)
            4. Submit the item object to the pipeline (yield item)
            5. Receive the item in the pipeline and persist its data in whatever form you need (add the code in pipelines.py)
            6. Enable the pipeline in the config file (uncomment ITEM_PIPELINES in settings.py)
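
A minimal sketch of the six steps, assuming a ProName project; the field names author and text and the quote xpaths are illustrative, not from any particular site:

# items.py -- step 2: define fields to hold the parsed data
import scrapy

class PronameItem(scrapy.Item):
    author = scrapy.Field()
    text = scrapy.Field()


# first.py -- steps 1, 3 and 4: parse, wrap into an item, yield it to the pipeline
from ProName.items import PronameItem

class FirstSpider(scrapy.Spider):
    name = 'first'
    start_urls = ['https://www.xxx.com/']

    def parse(self, response):
        for div in response.xpath('//div[@class="quote"]'):
            item = PronameItem()
            item['author'] = div.xpath('./span/small/text()').extract_first()  # item['author'], not item.author
            item['text'] = div.xpath('./span[1]/text()').extract_first()
            yield item          # step 4: submit the item to the pipeline


# pipelines.py -- step 5: receive the item and persist it (here: a local text file)
class PronamePipeline(object):
    def open_spider(self, spider):
        self.fp = open('data.txt', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        self.fp.write('{}: {}\n'.format(item['author'], item['text']))
        return item

    def close_spider(self, spider):
        self.fp.close()


# settings.py -- step 6: enable the pipeline (300 is the priority; lower runs first)
# ITEM_PIPELINES = {'ProName.pipelines.PronamePipeline': 300}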
5. How does Scrapy implement persistent storage?
    - 1. Parse the data
    - 2. Define the item class (declare its fields)
    - 3. Store the parsed data into an item object
    - 4. Submit the item to the pipeline
    - 5. Receive the item in the pipeline and persist it in any form you need
    - 6. Enable the pipeline in the config file
6. Briefly describe the configuration workflow for scraping data from a mobile device
    - Download Fiddler
    - Configure Fiddler:
        - set the listening port
        - enable "allow remote computers to connect"
    - Download and install the certificate on the phone:
        - the phone and the PC running Fiddler must be on the same network segment (e.g. the PC shares a hotspot and the phone connects to it)
        - visit ip:port in the phone's browser to download the certificate
    - Set a network proxy on the phone: all proxy info points at the PC running Fiddler


- Pipeline details in Scrapy
    - Crawling data and persisting the same set of data to several different platforms (see the sketch below):
        - each pipeline class is responsible for storing the data in exactly one medium or platform
        - the item yielded by the spider is only handed to the pipeline class that runs first (highest priority)
        - return item inside a pipeline class's process_item means: pass the item this pipeline received on to the next pipeline class in line

    - Note: the two ways a spider class and a pipeline class exchange data
        - yield item: can only pass item-type objects
        - the spider parameter of process_item: can pass data of any form (e.g. attributes defined on the spider)
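
A minimal sketch of two pipeline classes feeding different targets: a local file and a MySQL table via pymysql (the connection parameters, table name and job_title field are illustrative):

# pipelines.py -- each class targets exactly one storage backend
import pymysql

class FilePipeline(object):
    def open_spider(self, spider):
        self.fp = open('data.txt', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # the spider parameter gives access to the spider instance, so arbitrary
        # data (attributes defined on the spider) can also be read here
        self.fp.write(item['job_title'] + '\n')
        return item          # pass the same item on to the next pipeline class

    def close_spider(self, spider):
        self.fp.close()

class MysqlPipeline(object):
    def open_spider(self, spider):
        self.conn = pymysql.Connect(host='127.0.0.1', port=3306, user='root',
                                    password='123', db='spider', charset='utf8')

    def process_item(self, item, spider):
        cursor = self.conn.cursor()
        try:
            cursor.execute('insert into job (title) values (%s)', (item['job_title'],))
            self.conn.commit()
        except Exception:
            self.conn.rollback()
        return item

    def close_spider(self, spider):
        self.conn.close()

# settings.py -- the lower the number, the earlier the pipeline runs
# ITEM_PIPELINES = {
#     'ProName.pipelines.FilePipeline': 300,
#     'ProName.pipelines.MysqlPipeline': 301,
# }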


- Crawling an entire site
    - Implemented with manual request sending, which enables (see the pagination sketch below):
        - full-site crawling (every page of a listing)
        - deep crawling (following detail pages)
    - Manual request sending:
        - yield scrapy.Request(url, callback)                  # GET
        - yield scrapy.FormRequest(url, formdata, callback)    # POST
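
A minimal pagination sketch, assuming a URL template with a %d page placeholder; the template, the page limit of 5 and the xpath are illustrative:

import scrapy

class PageSpider(scrapy.Spider):
    name = 'page'
    start_urls = ['https://www.xxx.com/list?page=1']
    url = 'https://www.xxx.com/list?page=%d'   # generic URL template
    page = 2

    def parse(self, response):
        for li in response.xpath('//ul/li'):
            print(li.xpath('./a/text()').extract_first())
        # manually request the remaining pages, reusing parse as the callback
        if self.page <= 5:
            new_url = self.url % self.page
            self.page += 1
            yield scrapy.Request(url=new_url, callback=self.parse)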

- Sending POST requests and handling cookies
    - Cookie handling is wrapped up automatically by Scrapy (a POST sketch follows the snippet below)
- How the automatic requests for start_urls are implemented under the hood:
    # a request-sending wrapper defined in the parent Spider class
    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url=url, callback=self.parse)
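
To send POST requests instead, override start_requests and yield FormRequest objects. A minimal sketch; the translation-API URL and the kw form field are illustrative:

import scrapy

class PostSpider(scrapy.Spider):
    name = 'post_demo'
    start_urls = ['https://fanyi.baidu.com/sug']

    # override the parent implementation so the start URL is requested via POST
    def start_requests(self):
        data = {'kw': 'dog'}
        for url in self.start_urls:
            yield scrapy.FormRequest(url=url, formdata=data, callback=self.parse)

    def parse(self, response):
        print(response.text)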


- Deep crawling
    - Done via manual request sending
    - Request passing: for persistence, data parsed in different callbacks must end up in the same item object, so what gets passed along with the request is that item object (the BOSS直聘 example at the end of these notes shows this end to end)
        - When to use it: when the data you want to scrape is not all on the same page
        - How to pass: wrap the data in the meta dict; meta is handed over to the callback
            yield scrapy.Request(url, callback, meta)
        - How to receive:
            in the function named by callback, read it from the response:
                - item = response.meta['key']

- The five core components
    - Engine
        Handles the data flow of the whole system and triggers events (the core of the framework).
    - Scheduler
        Accepts requests sent over by the engine, pushes them into a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs (the addresses/links of pages to crawl): it decides which URL gets crawled next and also removes duplicate URLs.
    - Downloader
        Downloads page content and returns it to the spiders (the Scrapy downloader is built on Twisted, an efficient asynchronous model).
    - Spiders
        The spiders do the main work: they extract the information you need, i.e. the items, from specific pages. Links can also be extracted from them so Scrapy goes on to crawl the next pages.
    - Item Pipeline
        Responsible for processing the items the spiders extract from pages; its main jobs are persisting items, validating them, and cleaning out unwanted information. Once a page has been parsed by a spider, the items are sent to the pipeline and processed through several steps in a specific order.

- How to improve Scrapy's crawling efficiency (the settings below are collected into one snippet after this list)

Increase concurrency:
    By default Scrapy runs 16 concurrent requests; this can be raised as appropriate. In the settings file, set CONCURRENT_REQUESTS = 100 to allow 100 concurrent requests.

Lower the log level:
    Running Scrapy produces a large amount of log output. To cut CPU usage, restrict the log output to INFO or ERROR. In the settings file: LOG_LEVEL = 'INFO'

Disable cookies:
    If you do not actually need cookies, disable them while crawling to reduce CPU usage and speed up the crawl. In the settings file: COOKIES_ENABLED = False

Disable retries:
    Re-requesting (retrying) failed HTTP requests slows the crawl down, so retries can be disabled. In the settings file: RETRY_ENABLED = False

Reduce the download timeout:
    When crawling very slow links, lowering the download timeout lets stuck requests be abandoned quickly, improving efficiency. In the settings file: DOWNLOAD_TIMEOUT = 10 (a 10-second timeout)
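
The same tweaks gathered into one settings.py snippet (use the values that fit the target site):

# settings.py -- throughput-oriented tweaks
CONCURRENT_REQUESTS = 100     # raise concurrency (Scrapy's default is 16)
LOG_LEVEL = 'ERROR'           # less log output, less CPU spent on logging
COOKIES_ENABLED = False       # skip cookie handling if the site does not need it
RETRY_ENABLED = False         # do not retry failed requests
DOWNLOAD_TIMEOUT = 10         # give up on slow responses after 10 seconds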

 

Deep-crawl example: BOSS直聘 (zhipin.com)
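# boss.py -- the spider file (filename assumed from name = 'boss')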

import scrapy
from bossDeepPro.items import BossdeepproItem

class BossSpider(scrapy.Spider):
    name = 'boss'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.zhipin.com/job_detail/?query=python%E5%BC%80%E5%8F%91&city=101010100&industry=&position=']

    # generic URL template (kept constant; only the page number is substituted)
    url = 'https://www.zhipin.com/c101010100/?query=python开发&page=%d'
    page = 2

    def parse(self, response):
        print('Crawling page {}'.format(self.page))
        li_list = response.xpath('//*[@id="main"]/div/div[3]/ul/li | //*[@id="main"]/div/div[2]/ul/li')
        for li in li_list:
            job_title = li.xpath('.//div[@class="info-primary"]/h3/a/div[1]/text()').extract_first()
            salary = li.xpath('.//div[@class="info-primary"]/h3/a/span/text()').extract_first()
            # instantiate the item object: it must be shared by parse and parse_detail
            item = BossdeepproItem()
            item['job_title'] = job_title
            item['salary'] = salary

            detail_url = 'https://www.zhipin.com'+li.xpath('.//div[@class="info-primary"]/h3/a/@href').extract_first()
            # manually send a request for the detail page URL, passing the item to the
            # detail callback through meta (request passing)
            yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})


        if self.page <= 5:
            # manually send requests for the remaining pages
            new_url = format(self.url % self.page)
            print(new_url)
            self.page += 1
            # manual request sending; the callback does the data parsing
            yield scrapy.Request(url=new_url, callback=self.parse)

    # parse the job description on the detail page
    def parse_detail(self,response):
        item = response.meta['item']
        job_desc = response.xpath('//*[@id="main"]/div[3]/div/div[2]/div[2]/div[1]/div//text()').extract()
        job_desc = ''.join(job_desc)

        item['job_desc'] = job_desc

        yield item
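
# items.py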
import scrapy


class BossdeepproItem(scrapy.Item):
    # define the fields for your item here like:
    job_title = scrapy.Field()
    job_desc = scrapy.Field()
    salary = scrapy.Field()
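
# middlewares.py (default template generated by Scrapy; not modified for this project)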
# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals


class BossdeepproSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Request, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn’t have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class BossdeepproDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
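
# pipelines.py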
class BossdeepproPipeline(object):
    def process_item(self, item, spider):
        print(item)
        return item
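
# settings.py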
# -*- coding: utf-8 -*-

# Scrapy settings for bossDeepPro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'bossDeepPro'

SPIDER_MODULES = ['bossDeepPro.spiders']
NEWSPIDER_MODULE = 'bossDeepPro.spiders'
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'bossDeepPro (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
LOG_LEVEL = 'ERROR'
# LOG_FILE = 'log.txt'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'bossDeepPro.middlewares.BossdeepproSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'bossDeepPro.middlewares.BossdeepproDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'bossDeepPro.pipelines.BossdeepproPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Original article: https://www.cnblogs.com/qj696/p/11316762.html