python scrapy, beautifulsoup, regex, sgmlparser, requests, connection


In [2]: import requests

 
In [3]: s = requests.Session()
 
In [4]: s.headers
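Inspecting `s.headers` in the transcript above shows the defaults a fresh `requests.Session` attaches to every request; notably, `Connection` defaults to `keep-alive`, which is why connections are reused unless you override it. A minimal check:

```python
import requests

# A fresh session carries requests' default headers; among them,
# 'Connection': 'keep-alive' enables connection reuse by default.
s = requests.Session()
print(s.headers.get("Connection"))  # -> keep-alive
```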

If your work is crawler-related and you are scraping all kinds of sites, each with a different server address, then the approach above does not suit you; instead you need to close the Connection. Of course, it still depends on the scenario, so test and tune for your case.
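The effect of `Connection: close` can be observed at the raw HTTP level with only the standard library. This is a sketch: the local test server and the `/crawl` path are assumptions for illustration, standing in for the many different hosts a crawler would visit.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

seen = {}  # records the Connection header each incoming request carried

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        seen[self.path] = self.headers.get("Connection")
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One short-lived connection per request: 'Connection: close' asks the
# server to tear down the TCP connection after this response, so a
# crawler hitting many hosts does not accumulate idle sockets.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/crawl", headers={"Connection": "close"})
resp = conn.getresponse()
resp.read()
conn.close()
server.shutdown()

print(seen["/crawl"])  # -> close
```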

r = requests.post(url=url, data=body, headers={'Connection': 'close'})

headers = {'Content-Type': 'application/json','Connection':'keep-alive'}

r = client.post(SIGMENT_ADDRESS, data=json.dumps(text_list), headers=headers)
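With `keep-alive`, repeated posts to the same service travel over one reused TCP connection. That reuse can be demonstrated with only the standard library; the local echo server and the `/segment` path here are assumptions for illustration, and the server records the client-side port of each request to show both arrived on the same socket.

```python
import http.client
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

client_ports = []  # client-side TCP port of each incoming request

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps connections alive by default

    def do_POST(self):
        client_ports.append(self.client_address[1])
        length = int(self.headers["Content-Length"])
        body = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)  # echo the JSON body back

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

headers = {'Content-Type': 'application/json', 'Connection': 'keep-alive'}
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
for text_list in (["first"], ["second"]):
    conn.request("POST", "/segment", body=json.dumps(text_list), headers=headers)
    conn.getresponse().read()
conn.close()
server.shutdown()

# Both requests traveled over the same TCP connection.
print(client_ports[0] == client_ports[1])  # -> True
```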


Original article: https://www.cnblogs.com/SZLLQ2000/p/5408562.html