Python 3: urllib raises "urlopen error EOF occurred in violation of protocol (_ssl.c:841)"

Python 3 source code:

import urllib.request
from bs4 import BeautifulSoup

response = urllib.request.urlopen("http://php.net/")
html = response.read()
soup = BeautifulSoup(html, "html5lib")
text = soup.get_text(strip=True)
print(text)

  The code is simple: it fetches the text content of the http://php.net/ page and then uses the BeautifulSoup module to strip out the extra HTML tags. It seemed to run successfully the first time, but after that it kept hanging and then raised this error:

  File "C:Python36liburllib
equest.py", line 504, in _call_chain
    result = func(*args)
  File "C:Python36liburllib
equest.py", line 1361, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "C:Python36liburllib
equest.py", line 1320, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error EOF occurred in violation of protocol (_ssl.c:841)>

  The page actually loads fine in the Google Chrome browser.
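
  Before changing any code, it can help to confirm which TLS version the server will actually negotiate. Here is a minimal diagnostic sketch (not part of the original post; it assumes php.net serves HTTPS on port 443):

import socket
import ssl

# Open a TLS connection with Python's default client settings and print the
# protocol version that gets negotiated, e.g. 'TLSv1.2'.
ctx = ssl.create_default_context()
with socket.create_connection(("php.net", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="php.net") as tls:
        print(tls.version())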

  This problem may be caused by SSLv2 being disabled on the web server, while older Python libraries (Python 2.x) try by default to establish the connection with PROTOCOL_SSLv23. In that case, you need to choose the SSL version used for the request yourself.
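
  If you would rather stay with urllib instead of switching to requests, urlopen() also accepts an ssl.SSLContext via its context parameter, so the protocol version can be pinned there directly. A rough sketch of that approach (PROTOCOL_TLSv1_2 is just an assumed choice; use whatever version the server accepts):

import ssl
import urllib.request

# Build a context pinned to a specific TLS version and pass it to urlopen().
ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
response = urllib.request.urlopen("https://php.net/", context=ctx)
html = response.read()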

  To change the SSL version used for HTTPS in requests, you need to subclass the HTTPAdapter class and mount it on a Session object. For example, if you want to force TLSv1, the new transport adapter would look like this:

import ssl

from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager

class MyAdapter(HTTPAdapter):
    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=ssl.PROTOCOL_TLSv1)

  Then, mount it on a requests Session object:

s = requests.Session()
s.mount('https://', MyAdapter())
response = s.get('https://php.net/')  # the request must go through the session for the adapter to apply

  It is also straightforward to write a generic transport adapter that takes an arbitrary SSL version in its constructor and uses it:

from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager

class SSLAdapter(HTTPAdapter):
    '''An HTTPS Transport Adapter that uses an arbitrary SSL version.'''
    def __init__(self, ssl_version=None, **kwargs):
        self.ssl_version = ssl_version

        super(SSLAdapter, self).__init__(**kwargs)

    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=self.ssl_version)
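
  It can then be used like this, for example (a sketch; PROTOCOL_TLSv1_2 is only one possible choice of version):

import ssl
import requests

s = requests.Session()
s.mount('https://', SSLAdapter(ssl.PROTOCOL_TLSv1_2))
response = s.get('https://php.net/')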

  Here is the failing code from above, modified accordingly:

import ssl

import requests
from bs4 import BeautifulSoup
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager

class MyAdapter(HTTPAdapter):
    def init_poolmanager(self, connections, maxsize, block=False):
        # Force the connection pool to negotiate with TLSv1 instead of the default protocol.
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=ssl.PROTOCOL_TLSv1)

s = requests.Session()
s.mount('https://', MyAdapter())
response = s.get("https://php.net/")  # send the request through the session so MyAdapter is used
html = response.content
soup = BeautifulSoup(html, "html5lib")
text = soup.get_text(strip=True)
print(text)

  With this change, the page text content is fetched correctly.

Original article: https://www.cnblogs.com/czx1/p/11442442.html