Requests Quick Start

Requests is an HTTP library for Python released under the Apache2 License. It is a high-level wrapper around Python's built-in modules that makes network requests far more pleasant to write; with Requests you can easily accomplish just about anything a browser can do.

import requests

# Sending requests
r = requests.get('https://github.com/timeline.json')
r = requests.post("http://httpbin.org/post")
r = requests.put("http://httpbin.org/put")
r = requests.delete("http://httpbin.org/delete")
r = requests.head("http://httpbin.org/get")
r = requests.options("http://httpbin.org/get")

Let's start with GET requests.

A GET request comes in two flavors, without parameters and with parameters:

  Without parameters: the get method receives only a URL.

  With parameters: data is passed via the URL's query string. When you build the URL by hand, the data appears as key/value pairs after a question mark.

requests provides the params keyword argument, which accepts a dict or a string.

# Request without parameters
import requests

ret = requests.get('https://github.com/timeline.json')
print(ret.url)
print(ret.text)

# Request with parameters
# Note: a key whose value is None is not added to the URL;
# a key whose value is a list is expanded into repeated URL parameters.

payload = {'user':'kong','pwd':None,'email':['1@qq.com','2@qq.com']}
r = requests.get('https://github.com/timeline.json',params=payload)
print(r.url)
print(r.text)
Result:
https://github.com/timeline.json?user=kong&email=1%40qq.com&email=2%40qq.com
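The None/list rules can be checked without touching the network by building the URL with requests' own PreparedRequest. This is a sketch: prepare_url is an internal helper used here purely for illustration; normally you would just pass params= to requests.get.

```python
import requests

# Sketch: use requests' internal PreparedRequest to see how params
# are encoded into the query string.
payload = {'user': 'kong', 'pwd': None, 'email': ['1@qq.com', '2@qq.com']}
req = requests.models.PreparedRequest()
req.prepare_url('https://github.com/timeline.json', payload)
print(req.url)
# pwd (value None) is dropped; email (a list) appears twice
```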

Response content

When requests makes a request, it inspects the HTTP headers to guess the document's encoding. When you access r.text, that encoding is used to decode the body and return it as text; most unicode charsets decode seamlessly.

You can also set r.encoding yourself to override the encoding.

import requests

r = requests.get('https://github.com/timeline.json')
print(r.encoding)
r.encoding = 'ISO-8859-1'
print(r.text)

Binary response content

Use r.content to get the response body as bytes; requests automatically decodes gzip- and deflate-encoded response bodies for you.

import requests
from PIL import Image
from io import BytesIO

# Build an image directly from the response bytes
# (assuming the URL returns image data; httpbin's /image/png is one example)
r = requests.get('http://httpbin.org/image/png')
i = Image.open(BytesIO(r.content))

JSON response content

requests ships with a built-in JSON decoder, so JSON responses can be parsed directly with r.json(); if decoding fails, an exception is raised.

import requests

r = requests.get('https://github.com/timeline.json')
print(r.json())
Result:
{'documentation_url': 'https://developer.github.com/v3/activity/events/#list-public-events', 'message': 'Hello there, wayfaring stranger. If you’re reading this then you probably didn’t see our blog post a couple of years back announcing that this API would go away: http://git.io/17AROg Fear not, you should be able to get what you need from the shiny new Events API instead.'}

Raw response content

In the rare case that you need the raw socket stream of the response, r.raw provides it, but remember to pass stream=True when making the request.

import requests

r = requests.get('https://github.com/timeline.json',stream=True)
print(r.raw)
print(r.raw.read(10))
Result:
<requests.packages.urllib3.response.HTTPResponse object at 0x0000000002A51320>
{"message"

When you want to save a streamed response to a file, r.iter_content is the recommended way to consume the data.

import requests

r = requests.get('https://github.com/timeline.json', stream=True)
with open('test.txt','wb') as f:
    for chunk in r.iter_content(chunk_size=10):
        f.write(chunk)

Custom request headers

If you want to change the HTTP headers sent with a request, pass a custom dict via the headers argument. All of your headers are included in the final request, but requests will not change its own behavior because of them.

import requests

MyHead = {'user-agent': 'my-app/0.0.1'}
r = requests.get('https://api.github.com/some/endpoint',headers=MyHead)
print(r.text)

POST requests

Sending form data

Pass a dict to the data argument; when requests sends the request, the dict is automatically encoded as HTML form data and sent with it.

import requests

payload = {'user':'kong','pwd':None,'email':['1@qq.com','2@qq.com']}
r = requests.post("http://httpbin.org/post",data=payload)
print(r.text)
Result:
{
  "args": {}, 
  "data": "", 
  "files": {}, 
  "form": {
    "email": [
      "1@qq.com", 
      "2@qq.com"
    ], 
    "user": "kong"
  }, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip, deflate", 
    "Cache-Control": "max-age=259200", 
    "Content-Length": "43", 
    "Content-Type": "application/x-www-form-urlencoded", 
    "Host": "httpbin.org", 
    "User-Agent": "python-requests/2.11.1", 
    "Via": "1.1 squid.david.dev:3128 (squid/2.6.STABLE21)"
  }, 
  "json": null, 
  "origin": "172.10.236.215, 106.37.197.164", 
  "url": "http://httpbin.org/post"
}

Custom request headers

If you want to change the HTTP headers sent with a request, pass a custom dict via the headers argument. All of your headers are included in the final request, but requests will not change its own behavior because of them.

import requests

MyHead = {'user-agent': 'my-app/0.0.1'}
r = requests.post('https://api.github.com/some/endpoint',headers=MyHead)
print(r.text)

Sending strings

Pass a dict to the json argument; when requests sends the request, the dict is automatically encoded as JSON and sent as the request body.

import requests

payload = {'user':'kong','pwd':None,'email':['1@qq.com','2@qq.com']}
r = requests.post("http://httpbin.org/post",json=payload)
print(r.text)
Result:
{
  "args": {}, 
  "data": "{\"pwd\": null, \"user\": \"kong\", \"email\": [\"1@qq.com\", \"2@qq.com\"]}", 
  "files": {}, 
  "form": {}, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip, deflate", 
    "Cache-Control": "max-age=259200", 
    "Content-Length": "64", 
    "Content-Type": "application/json", 
    "Host": "httpbin.org", 
    "User-Agent": "python-requests/2.11.1", 
    "Via": "1.1 squid.david.dev:3128 (squid/2.6.STABLE21)"
  }, 
  "json": {
    "email": [
      "1@qq.com", 
      "2@qq.com"
    ], 
    "pwd": null, 
    "user": "kong"
  }, 
  "origin": "172.10.236.215, 106.37.197.164", 
  "url": "http://httpbin.org/post"
}

Pass a string to the data argument; when requests sends the request, the string is sent as-is as the request body.

import requests
import json

payload = {'user':'kong','pwd':None,'email':['1@qq.com','2@qq.com']}
r = requests.post("http://httpbin.org/post",data=json.dumps(payload))
print(r.text)
Result:
{
  "args": {}, 
  "data": "{\"pwd\": null, \"user\": \"kong\", \"email\": [\"1@qq.com\", \"2@qq.com\"]}", 
  "files": {}, 
  "form": {}, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip, deflate", 
    "Cache-Control": "max-age=259200", 
    "Content-Length": "64", 
    "Host": "httpbin.org", 
    "User-Agent": "python-requests/2.11.1", 
    "Via": "1.1 squid.david.dev:3128 (squid/2.6.STABLE21)"
  }, 
  "json": {
    "email": [
      "1@qq.com", 
      "2@qq.com"
    ], 
    "pwd": null, 
    "user": "kong"
  }, 
  "origin": "172.10.236.215, 106.37.197.164", 
  "url": "http://httpbin.org/post"
}

Now let's look at the response.

Response status codes

Use r.status_code to check the status of a request, and r.raise_for_status() to raise an exception when the status code indicates an error (it returns None on success).

import requests

r = requests.post("http://httpbin.org/post")
print(r.status_code)
print(r.raise_for_status())
Result:
200
None
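To see the error case without making a failing request, you can hand-build a Response object. This is purely an illustrative sketch: you would never construct a Response manually in real code, and the 404 here is chosen arbitrarily.

```python
import requests

# Sketch: hand-build Responses to show that raise_for_status()
# raises HTTPError for 4xx/5xx and returns None for 2xx.
bad = requests.models.Response()
bad.status_code = 404
try:
    bad.raise_for_status()
except requests.exceptions.HTTPError as e:
    print('raised:', e)

ok = requests.models.Response()
ok.status_code = 200
print(ok.raise_for_status())  # None
```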

Response headers

The response object exposes the HTTP headers as a Python dict. Per RFC 2616, HTTP header names are case-insensitive.

import requests

r = requests.post("http://httpbin.org/post")

print(r.headers)
print(r.headers['content-length'])
print(r.headers.get('via'))
Result:
{'Content-Length': '448', 'Via': '1.0 squid.david.dev:3128 (squid/2.6.STABLE21)', 'Proxy-Connection': 'keep-alive', 'X-Cache': 'MISS from squid.david.dev', 'X-Cache-Lookup': 'MISS from squid.david.dev:3128', 'Server': 'nginx', 'Access-Control-Allow-Credentials': 'true', 'Date': 'Thu, 29 Dec 2016 07:35:26 GMT', 'Access-Control-Allow-Origin': '*', 'Content-Type': 'application/json'}
448
1.0 squid.david.dev:3128 (squid/2.6.STABLE21)

Response cookies

Some responses carry cookies, which you can read like this:

import requests

r = requests.post("http://httpbin.org/post")
print(r.cookies)
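r.cookies is a RequestsCookieJar, which supports both dict-style access and get_dict(). A minimal offline sketch (the cookie name, value, and domain below are made up):

```python
import requests

# Sketch: r.cookies is a RequestsCookieJar; build one by hand to show
# its dict-style interface (name/value are invented for illustration).
jar = requests.cookies.RequestsCookieJar()
jar.set('sessionid', 'abc123', domain='httpbin.org', path='/')
print(jar['sessionid'])   # abc123
print(jar.get_dict())     # {'sessionid': 'abc123'}
```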

You can send cookies to the server with the cookies argument.

import requests

cookies = dict(cookies_are='working')
r = requests.post("http://httpbin.org/post",cookies=cookies)
print(r.text)
Result:
{
  "args": {}, 
  "data": "", 
  "files": {}, 
  "form": {}, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip, deflate", 
    "Cache-Control": "max-age=259200", 
    "Content-Length": "0", 
    "Cookie": "cookies_are=working", 
    "Host": "httpbin.org", 
    "User-Agent": "python-requests/2.11.1", 
    "Via": "1.1 squid.david.dev:3128 (squid/2.6.STABLE21)"
  }, 
  "json": null, 
  "origin": "172.10.236.215, 106.37.197.164", 
  "url": "http://httpbin.org/post"
}

Redirection and request history

Redirects: by default, HEAD does not follow redirects automatically; pass allow_redirects=True to enable it.

  GET, OPTIONS, POST, PUT, PATCH and DELETE follow redirects automatically; pass allow_redirects=False to disable this.

You can trace redirects with r.history: it is a list of the Response objects that were created in order to complete the request, sorted from oldest to most recent.

import requests

r = requests.head('http://github.com')
print(r.url)
print(r.history)
Result: HEAD does not follow the redirect
http://github.com/
[]

r = requests.head('http://github.com',allow_redirects=True)
print(r.url)
print(r.history)
Result: HEAD with redirects enabled
https://github.com/
[<Response [301]>]

r = requests.get('http://github.com')
print(r.url)
print(r.status_code)
print(r.history)
Result: http is automatically redirected to https
https://github.com/
200
[<Response [301]>]

r = requests.get('http://github.com', allow_redirects=False)
print(r.url)
print(r.status_code)
print(r.history)
Result: GET with redirects disabled
http://github.com/
301
[]

Timeouts

Pass a number to the timeout parameter to set the maximum time to wait for a response; if no reply arrives within that window, an exception is raised.

requests.get('http://github.com', timeout=0.001)

Errors and exceptions

Network problem: ConnectionError

Unsuccessful status code: r.raise_for_status() raises an HTTPError

Request timed out: Timeout

Too many redirects: TooManyRedirects
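These exceptions all inherit from requests.exceptions.RequestException, so you can catch them individually or all at once. A sketch, assuming a hypothetical fetch helper of my own invention (the catch order, timeout value, and None fallback are arbitrary choices):

```python
import requests
from requests.exceptions import ConnectionError, HTTPError, Timeout, TooManyRedirects

def fetch(url):
    """Return the body text, or None if the request fails in any way.
    (Hypothetical helper for illustration.)"""
    try:
        r = requests.get(url, timeout=5)
        r.raise_for_status()          # turn 4xx/5xx into HTTPError
        return r.text
    except (ConnectionError, Timeout):
        return None                   # network problem or timed out
    except (HTTPError, TooManyRedirects):
        return None                   # bad status code or redirect loop

# A hostname under the reserved .invalid TLD can never resolve,
# so this fails with ConnectionError and fetch returns None:
print(fetch('http://nonexistent.invalid/'))
```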

Worked examples:

#!/usr/bin/env python
# _*_ coding:utf-8 _*_

import requests

# Hit any page first to pick up cookies
ck = requests.get(url="http://dig.chouti.com")
cookies = ck.cookies.get_dict()

# Log in, sending the previous cookies along to obtain fresh ones
payload = {
    'phone':'8615xx',
    'password':'xx',
    'oneMonth':"1",
}
login = requests.post("http://dig.chouti.com/login",
                      data=payload,
                      cookies=cookies)
# Upvote
dian = {"linksId":"10769761"}
# requests.post(url="http://dig.chouti.com/link/vote",
#               cookies=cookies,
#               data=dian)
# Remove the upvote
requests.post(url="http://dig.chouti.com/vote/cancel/vote.do",
              cookies=cookies,
              data=dian)
Logging in to Chouti and upvoting
#!/usr/bin/env python
# _*_ coding:utf-8 _*_

import requests

s = requests.Session()
s.get(url="http://dig.chouti.com")

payload = {
    'phone':'8615201417639',
    'password':'kongzhagen.com',
    'oneMonth':"1",
}
s.post("http://dig.chouti.com/login",data=payload)

dian = {"linksId":"10769761"}
# Upvote
s.post(url="http://dig.chouti.com/link/vote",data=dian)
# Remove the upvote
# s.post(url="http://dig.chouti.com/vote/cancel/vote.do",data=dian)
The same flow using a Session
Original article: https://www.cnblogs.com/kongzhagen/p/6231761.html