12 Scrapy Log Levels and Passing Data Between Requests

1. Scrapy Log Levels

  - When running a spider with scrapy crawl spiderFileName, what gets printed to the terminal is Scrapy's log output.

  - Types of log messages:

        ERROR : errors

        WARNING : warnings

        INFO : general information

        DEBUG : debugging information

  - Specifying which log messages are output:

    In the settings.py configuration file, add:

                    LOG_LEVEL = '<desired log level>' to restrict the output to that level and above.

                    LOG_FILE = 'log.txt' to write the log messages to the specified file instead of the terminal.
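
  For example, a minimal settings.py sketch (the file name log.txt is just an example) that keeps only error messages and redirects them to a file:

    # settings.py
    # Print only ERROR-level messages (DEBUG, INFO and WARNING are suppressed)
    LOG_LEVEL = 'ERROR'
    # Write the log output to log.txt instead of the terminal
    LOG_FILE = 'log.txt'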

2. Passing Data Between Requests

  - In some cases the data we want is not all on one page. For example, when crawling a movie site, the movie's name and rating are on the first-level page, while the other details we need are on its second-level detail page. In this situation we have to pass data along with the request.

  - Case study: crawl the movie site www.id97.com, scraping the movie name, genre and rating from the first-level page, and the release date, director and runtime from the second-level page.

  Spider file:

import scrapy
from moviePro.items import MovieproItem

class MovieSpider(scrapy.Spider):
    name = 'movie'
    allowed_domains = ['www.id97.com']
    start_urls = ['http://www.id97.com/']

    def parse(self, response):
        div_list = response.xpath('//div[@class="col-xs-1-5 movie-item"]')

        for div in div_list:
            item = MovieproItem()
            item['name'] = div.xpath('.//h1/a/text()').extract_first()
            item['score'] = div.xpath('.//h1/em/text()').extract_first()
            # xpath('string(.)') extracts the text of the current node (.) and all of its descendants
            item['kind'] = div.xpath('.//div[@class="otherinfo"]').xpath('string(.)').extract_first()
            item['detail_url'] = div.xpath('./div/a/@href').extract_first()
            # Request the second-level detail page and parse it in the callback;
            # the meta parameter passes data along with the Request
            yield scrapy.Request(url=item['detail_url'], callback=self.parse_detail, meta={'item': item})

    def parse_detail(self, response):
        # Retrieve the item from response.meta
        item = response.meta['item']
        item['actor'] = response.xpath('//div[@class="row"]//table/tr[1]/a/text()').extract_first()
        item['time'] = response.xpath('//div[@class="row"]//table/tr[7]/td[2]/text()').extract_first()
        item['long'] = response.xpath('//div[@class="row"]//table/tr[8]/td[2]/text()').extract_first()
        # Submit the item to the pipeline
        yield item
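
  A note on the pattern above: whatever dictionary is passed as meta travels with the Request and is read back in the callback via response.meta. Newer Scrapy versions (1.7+) also provide cb_kwargs, which delivers the values as keyword arguments of the callback instead; the following is only a sketch of that alternative (not part of the original example, shown here with just the name and detail_url fields):

import scrapy
from moviePro.items import MovieproItem

class MovieCbKwargsSpider(scrapy.Spider):
    # Hypothetical variant of the spider above using cb_kwargs instead of meta
    name = 'movie_cbkwargs'
    allowed_domains = ['www.id97.com']
    start_urls = ['http://www.id97.com/']

    def parse(self, response):
        for div in response.xpath('//div[@class="col-xs-1-5 movie-item"]'):
            item = MovieproItem()
            item['name'] = div.xpath('.//h1/a/text()').extract_first()
            detail_url = div.xpath('./div/a/@href').extract_first()
            # Each cb_kwargs entry becomes a keyword argument of the callback
            yield scrapy.Request(url=detail_url, callback=self.parse_detail,
                                 cb_kwargs={'item': item})

    def parse_detail(self, response, item):
        # item arrives directly as a parameter rather than via response.meta
        item['actor'] = response.xpath('//div[@class="row"]//table/tr[1]/a/text()').extract_first()
        yield item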

  Items file:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class MovieproItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    score = scrapy.Field()
    time = scrapy.Field()
    long = scrapy.Field()
    actor = scrapy.Field()
    kind = scrapy.Field()
    detail_url = scrapy.Field()

  Pipeline file:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json

class MovieproPipeline(object):
    def __init__(self):
        self.fp = open('data.txt', 'w')

    def process_item(self, item, spider):
        # Serialize each item as JSON and append it to data.txt
        dic = dict(item)
        print(dic)
        json.dump(dic, self.fp, ensure_ascii=False)
        return item

    def close_spider(self, spider):
        self.fp.close()
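
  As the generated comment in the pipeline file reminds us, the pipeline only runs if it is registered in ITEM_PIPELINES. Assuming the project is named moviePro (consistent with the import in the spider) and the standard project layout, the settings.py entry would look roughly like this:

    # settings.py -- enable the pipeline; the number is its priority (lower runs first)
    ITEM_PIPELINES = {
        'moviePro.pipelines.MovieproPipeline': 300,
    }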

3. How to Improve Scrapy's Crawling Efficiency

Increase concurrency:
    By default Scrapy performs at most 16 concurrent requests; this can be raised as appropriate. In the settings configuration file, set CONCURRENT_REQUESTS = 100 to allow up to 100 concurrent requests.

Lower the log level:
    Running Scrapy produces a large amount of log output; to reduce CPU usage, restrict the log output to INFO or ERROR. In the configuration file: LOG_LEVEL = 'INFO'

Disable cookies:
    If cookies are not actually needed, disable them during crawling to reduce CPU usage and speed up the crawl. In the configuration file: COOKIES_ENABLED = False

Disable retries:
    Re-requesting (retrying) failed HTTP requests slows crawling down, so retries can be disabled. In the configuration file: RETRY_ENABLED = False

Reduce the download timeout:
    When crawling very slow links, a shorter download timeout lets stuck requests be abandoned quickly, which improves efficiency. In the configuration file: DOWNLOAD_TIMEOUT = 10 sets a 10-second timeout.
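
  Putting the five adjustments above together, an illustrative settings.py sketch (values are examples; the case-study settings file further below uses similar ones):

    # settings.py -- illustrative values for faster crawling
    CONCURRENT_REQUESTS = 100   # raise concurrency above the default of 16
    LOG_LEVEL = 'ERROR'         # cut down log output
    COOKIES_ENABLED = False     # skip cookie handling
    RETRY_ENABLED = False       # do not retry failed requests
    DOWNLOAD_TIMEOUT = 10       # give up on slow downloads after 10 s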

  Test case: crawl photos from the campus-beauty site www.521609.com

  Spider file:

import scrapy
from xiaohua.items import XiaohuaItem

class XiahuaSpider(scrapy.Spider):

    name = 'xiaohua'
    allowed_domains = ['www.521609.com']
    start_urls = ['http://www.521609.com/daxuemeinv/']

    pageNum = 1
    url = 'http://www.521609.com/daxuemeinv/list8%d.html'

    def parse(self, response):
        li_list = response.xpath('//div[@class="index_img list_center"]/ul/li')
        for li in li_list:
            school = li.xpath('./a/img/@alt').extract_first()
            img_url = li.xpath('./a/img/@src').extract_first()

            item = XiaohuaItem()
            item['school'] = school
            item['img_url'] = 'http://www.521609.com' + img_url

            yield item

        # Crawl the first 10 list pages
        if self.pageNum < 10:
            self.pageNum += 1
            url = self.url % self.pageNum
            yield scrapy.Request(url=url, callback=self.parse)

  Items file:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class XiaohuaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    school = scrapy.Field()
    img_url = scrapy.Field()

  Pipeline file:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json
import os
import urllib.request

class XiaohuaPipeline(object):
    def __init__(self):
        self.fp = None

    def open_spider(self, spider):
        print('Spider started')
        self.fp = open('./xiaohua.txt', 'w')

    def download_img(self, item):
        url = item['img_url']
        fileName = item['school'] + '.jpg'
        if not os.path.exists('./xiaohualib'):
            os.mkdir('./xiaohualib')
        filepath = os.path.join('./xiaohualib', fileName)
        urllib.request.urlretrieve(url, filepath)
        print(fileName + ' downloaded successfully')

    def process_item(self, item, spider):
        # Write each item as one JSON line, then download its image
        obj = dict(item)
        json_str = json.dumps(obj, ensure_ascii=False)
        self.fp.write(json_str + '\n')

        # Download the image
        self.download_img(item)
        return item

    def close_spider(self, spider):
        print('Spider finished')
        self.fp.close()
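
  Note that the settings file below does not show ITEM_PIPELINES; for this pipeline to run it also needs to be registered there (assuming the project is named xiaohua and the standard layout):

    ITEM_PIPELINES = {
        'xiaohua.pipelines.XiaohuaPipeline': 300,
    }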

  Settings file:

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 100
COOKIES_ENABLED = False
LOG_LEVEL = 'ERROR'
RETRY_ENABLED = False
DOWNLOAD_TIMEOUT = 3
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
DOWNLOAD_DELAY = 3
Original article: https://www.cnblogs.com/a2534786642/p/10998494.html