Scrapy: using a proxy with authentication

Official method:

# middlewares.py
from w3lib.http import basic_auth_header

class CustomProxyMiddleware(object):
    def process_request(self, request, spider):
        # Route the request through the proxy and attach Basic auth credentials
        request.meta['proxy'] = "https://<PROXY_IP_OR_URL>:<PROXY_PORT>"
        request.headers['Proxy-Authorization'] = basic_auth_header(
            '<PROXY_USERNAME>', '<PROXY_PASSWORD>')

# settings.py
# Priority 350 < 400 makes the custom middleware run before the built-in
# HttpProxyMiddleware.
DOWNLOADER_MIDDLEWARES = {
    '<PROJECT_NAME>.middlewares.CustomProxyMiddleware': 350,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 400,
}
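For reference, the Proxy-Authorization value that w3lib's basic_auth_header builds is ordinary HTTP Basic auth. A minimal stdlib-only sketch of the equivalent (the function name make_proxy_auth is my own, not from w3lib):

```python
import base64

def make_proxy_auth(username, password):
    # Equivalent of w3lib's basic_auth_header:
    # b"Basic " + base64("username:password")
    creds = "%s:%s" % (username, password)
    return b"Basic " + base64.b64encode(creds.encode("latin-1"))

# e.g. make_proxy_auth("user", "pass") -> b'Basic dXNlcjpwYXNz'
```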


Source: https://support.scrapinghub.com/support/solutions/articles/22000219743-using-a-custom-proxy-in-a-scrapy-spider

Tested: this pattern also works fine for a proxy without authentication.
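For the no-auth case, setting request.meta['proxy'] alone is enough; the Proxy-Authorization header can simply be omitted. A minimal sketch (the class name and the proxy address are placeholders of my own):

```python
class SimpleProxyMiddleware(object):
    def process_request(self, request, spider):
        # Without credentials, pointing request.meta['proxy'] at the
        # proxy URL is all that is needed.
        request.meta['proxy'] = "http://127.0.0.1:8080"
```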

Original post: https://www.cnblogs.com/WalkOnMars/p/12207063.html