Scrapy error summary

Spider started
<pymysql.connections.Connection object at 0x0000012D63A1EFD0>
2019-05-02 09:56:21 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-05-02 09:56:21 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-05-02 09:56:24 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://m.kxzhijia.com/en/pg.jsp?entry=mallNav&pgs=1> from <GET http://m.kxzhijia.com/pg.jsp?entry=mallNav&pgs=1>
2019-05-02 09:56:30 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://m.kxzhijia.com/en/pg.jsp?entry=mallNav&pgs=1> (referer: None)
finished
2019-05-02 09:56:30 [scrapy.core.engine] INFO: Closing spider (finished)
2019-05-02 09:56:30 [chyp_tj] INFO: Finished (normal): chyp_tj

You get output like the above when the requests carry no headers: the spider fetches the page, scrapes nothing, and closes straight away. Adding a DEFAULT_REQUEST_HEADERS setting with a browser User-Agent (in settings.py, or in the spider's custom_settings as below) fixes it, for example:

custom_settings = {
    'DEFAULT_REQUEST_HEADERS': {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36',
    },
}
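For context, here is a minimal spider sketch showing where that block goes. The spider name chyp_tj and the start URL are taken from the log above; the parse logic and the yielded field are hypothetical, just to show that items are produced once the headers are in place.

import scrapy


class ChypTjSpider(scrapy.Spider):
    # Name and start URL taken from the log output above.
    name = 'chyp_tj'
    start_urls = ['http://m.kxzhijia.com/pg.jsp?entry=mallNav&pgs=1']

    # Send a browser User-Agent on every request so the site serves
    # the real page instead of a response the spider can't parse.
    custom_settings = {
        'DEFAULT_REQUEST_HEADERS': {
            'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                           'AppleWebKit/537.36 (KHTML, like Gecko) '
                           'Chrome/73.0.3683.103 Safari/537.36'),
        },
    }

    def parse(self, response):
        # Hypothetical extraction: yield one item per link on the page.
        for href in response.css('a::attr(href)').getall():
            yield {'url': response.urljoin(href)}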
Original article: https://www.cnblogs.com/qiaoer1993/p/10801693.html