Asynchronous Calls and the Callback Mechanism

There are two ways to submit a task.

Synchronous call: after submitting a task, wait in place until it finishes, take the result, and only then run the next line of code; the program therefore executes serially.

Asynchronous call: after submitting a task, do not wait for it to finish; simply go on and submit the next one.
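The difference shows up directly in the timing. Below is a minimal sketch (the function name `task` and the sleep duration are made up for illustration): the synchronous version calls `.result()` right after each submit, so the tasks run back to back, while the asynchronous version submits everything first and collects results afterwards, letting the tasks overlap.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def task(x):
    time.sleep(0.1)
    return x * 2

pool = ThreadPoolExecutor(2)

# Synchronous: .result() immediately after submit blocks,
# so the two tasks run one after the other.
start = time.time()
r1 = pool.submit(task, 1).result()
r2 = pool.submit(task, 2).result()
serial = time.time() - start        # about 0.2 s

# Asynchronous: submit both first, collect results afterwards,
# so the tasks overlap on the two worker threads.
start = time.time()
f1 = pool.submit(task, 1)
f2 = pool.submit(task, 2)
r3, r4 = f1.result(), f2.result()
overlapped = time.time() - start    # about 0.1 s

pool.shutdown()
print(serial > overlapped)  # True: the serial version takes roughly twice as long
```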

from concurrent.futures import ThreadPoolExecutor
import time, random

def la(name):
    print('%s is laing' % name)
    time.sleep(random.randint(3, 5))
    res = random.randint(7, 13) * '#'
    return {'name': name, 'res': res}

def weigh(shit):
    shit = shit.result()  # the callback receives a Future; unpack it with .result()
    name = shit['name']
    size = len(shit['res'])
    print('%s produced <%s> kg' % (name, size))

if __name__ == '__main__':
    pool = ThreadPoolExecutor(13)

    # Synchronous calls: each .result() blocks until the task is done,
    # so the three tasks run serially. (Note: .result() here already
    # returns the dict, so weigh() would have to take the dict directly
    # instead of calling .result() on it again.)
    # shit1 = pool.submit(la, 'alex').result()
    # weigh(shit1)
    # shit2 = pool.submit(la, 'huhao').result()
    # weigh(shit2)
    # shit3 = pool.submit(la, 'zhanbin').result()
    # weigh(shit3)

    # Asynchronous calls: submit all three at once; weigh() fires
    # automatically as each Future completes
    pool.submit(la, 'alex').add_done_callback(weigh)
    pool.submit(la, 'huhao').add_done_callback(weigh)
    pool.submit(la, 'zhanbin').add_done_callback(weigh)
    pool.shutdown(wait=True)  # block until all tasks and their callbacks finish
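One detail worth underlining: `add_done_callback` hands the callback the Future object itself, not the return value, and the callback runs as soon as the task finishes. The sketch below (the names `work`, `on_done`, and `results` are made up here) shows the minimal shape of this pattern:

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def work(n):
    return n * n

def on_done(future):
    # The callback receives the Future, not the return value;
    # .result() unpacks it (and re-raises any exception from the task).
    results.append(future.result())

pool = ThreadPoolExecutor(2)
for n in (1, 2, 3):
    pool.submit(work, n).add_done_callback(on_done)
pool.shutdown(wait=True)  # wait for tasks (and their callbacks) to finish
print(sorted(results))    # [1, 4, 9]
```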

A simple web-crawler example:

import requests, time
from concurrent.futures import ThreadPoolExecutor

def get(url):
    print('get url', url)
    response = requests.get(url)
    time.sleep(3)
    return {'url': url, 'content': response.text}

def parse(res):
    res = res.result()  # unpack the Future handed to the callback
    print('%s parse res is %s' % (res['url'], len(res['content'])))

if __name__ == '__main__':
    urls = [
        'http://www.cnblogs.com/stin',
        'https://www.python.org',
        'https://www.openstack.org',
    ]
    pool = ThreadPoolExecutor(2)
    for url in urls:
        pool.submit(get, url).add_done_callback(parse)
    pool.shutdown(wait=True)  # wait for all downloads and parses to finish
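Callbacks are not the only way to consume results as they arrive: `concurrent.futures` also provides `as_completed`, which yields each Future the moment it finishes. A sketch of the same crawler shape, with a fake `fetch` function standing in for `requests.get` so it runs offline (the URLs and the `fetch`/`parsed` names are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time, random

def fetch(url):
    # stand-in for requests.get so the sketch needs no network
    time.sleep(random.uniform(0.01, 0.05))
    return {'url': url, 'content': 'x' * random.randint(10, 100)}

urls = ['http://a.example', 'http://b.example', 'http://c.example']
parsed = {}

with ThreadPoolExecutor(2) as pool:
    futures = [pool.submit(fetch, u) for u in urls]
    for fut in as_completed(futures):   # yields each future as soon as it finishes
        res = fut.result()
        parsed[res['url']] = len(res['content'])

print(parsed)
```

Compared with `add_done_callback`, this keeps the result handling in the main thread, which is often easier to reason about.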
Original post: https://www.cnblogs.com/stin/p/8548454.html