Blocking in Tornado's non-asynchronous mode

【Three options for dealing with blocking tasks in Tornado】

1. Optimize the blocking task itself so it runs faster. The culprit is usually a slow DB query or an expensive template render; the first thing to do is speed those up rather than rework the web server, since this is where 99% of the improvement usually comes from.

2. Run the time-consuming task in a separate thread or process. Off-loading the work means the IOLoop is free to accept other requests instead of sitting blocked.

3. Use an asynchronous driver or library to perform the task, e.g. gevent or Motor (a short sketch of this idea follows below).
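
As a rough sketch of option 3, here Tornado's own non-blocking AsyncHTTPClient stands in for gevent or Motor, which follow the same pattern with their own APIs (ProxyHandler and the URL are made up for illustration). Because the I/O call itself is asynchronous, the coroutine simply suspends while it is in flight and the IOLoop keeps serving other requests:

import tornado.gen
import tornado.httpclient
import tornado.web


class ProxyHandler(tornado.web.RequestHandler):

    @tornado.gen.coroutine
    def get(self):
        client = tornado.httpclient.AsyncHTTPClient()
        # yield suspends only this request; the IOLoop keeps running
        response = yield client.fetch("http://example.com/")
        self.write("fetched %d bytes" % len(response.body))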

【Example 1】

import time

import tornado.ioloop
import tornado.web


class MainHandler(tornado.web.RequestHandler):

    def get(self):
        self.write("Hello, world %s" % time.time())


class SleepHandler(tornado.web.RequestHandler):

    def get(self, n):
        time.sleep(float(n))  # blocks the whole process: the IOLoop can serve nothing else meanwhile
        self.write("Awake! %s" % time.time())


application = tornado.web.Application([
    (r"/", MainHandler),
    (r"/sleep/(d+)", SleepHandler),
])


if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()

Open http://localhost:8888/sleep/10 in one browser tab and http://localhost:8888/ in another, and you will see that "Hello, world" is not printed until the first page has finished. In effect, the first call blocks the IOLoop, which therefore cannot respond to the second request.
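
A quick way to observe this from the client side is to fire both requests at almost the same time and compare how long each takes; a minimal sketch, assuming the server above is running on port 8888:

import threading
import time
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2


def fetch(path):
    start = time.time()
    urlopen("http://localhost:8888" + path).read()
    print("%s finished after %.1f s" % (path, time.time() - start))


t = threading.Thread(target=fetch, args=("/sleep/5",))
t.start()
time.sleep(0.5)   # make sure the sleeping request reaches the server first
fetch("/")        # takes ~4.5 s: it cannot be served until /sleep/5 is done
t.join()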

【Example 2: the non-blocking version】

from concurrent.futures import ThreadPoolExecutor
from functools import partial, wraps
import time

import tornado.ioloop
import tornado.web


EXECUTOR = ThreadPoolExecutor(max_workers=4)


def unblock(f):

    @tornado.web.asynchronous
    @wraps(f)
    def wrapper(*args, **kwargs):
        self = args[0]

        def callback(future):
            # Runs on the IOLoop thread, so touching the handler is safe here
            self.write(future.result())
            self.finish()

        # Run the wrapped method in a worker thread; when it completes,
        # hop back onto the IOLoop thread to write the response.
        EXECUTOR.submit(
            partial(f, *args, **kwargs)
        ).add_done_callback(
            lambda future: tornado.ioloop.IOLoop.instance().add_callback(
                partial(callback, future)))

    return wrapper


class SleepHandler(tornado.web.RequestHandler):

    @unblock
    def get(self, n):
        time.sleep(float(n))
        return "Awake! %s" % time.time()

The unblock decorator submits the wrapped function to the thread pool, getting back a Future. It attaches a callback to that Future and returns control to the IOLoop right away.

That callback eventually calls self.finish, which ends the request.

Note: the wrapper itself also has to be decorated with tornado.web.asynchronous; otherwise Tornado would call self.finish as soon as get() returned, i.e. much too early.

self.write is not thread-safe, so the Future's result must not be handled on the worker thread; that is why add_done_callback merely schedules the real callback onto the IOLoop thread with add_callback.

When you use the @tornado.web.asynchronous decorator, Tornado never closes the connection on its own; you have to close it explicitly with self.finish().
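
For reference, later Tornado versions (around 3.x/4.x) ship tornado.concurrent.run_on_executor, which packages the same submit-then-hop-back-to-the-IOLoop dance shown above. A minimal sketch, assuming a per-handler executor attribute (which run_on_executor looks for by default); it is a drop-in replacement for the SleepHandler above:

import time
from concurrent.futures import ThreadPoolExecutor

import tornado.gen
import tornado.web
from tornado.concurrent import run_on_executor


class SleepHandler(tornado.web.RequestHandler):

    executor = ThreadPoolExecutor(max_workers=4)

    @run_on_executor
    def _sleep(self, n):
        # Runs in a worker thread; do not touch self.write here
        time.sleep(float(n))
        return "Awake! %s" % time.time()

    @tornado.gen.coroutine
    def get(self, n):
        result = yield self._sleep(n)
        # Back on the IOLoop thread; the coroutine finishes the request itself
        self.write(result)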

【Complete demo】

from concurrent.futures import ThreadPoolExecutor
from functools import partial, wraps
import time
 
import tornado.ioloop
import tornado.web
 
 
EXECUTOR = ThreadPoolExecutor(max_workers=4)
 
 
def unblock(f):
 
    @tornado.web.asynchronous
    @wraps(f)
    def wrapper(*args, **kwargs):
        self = args[0]
 
        def callback(future):
            self.write(future.result())
            self.finish()
 
        EXECUTOR.submit(
            partial(f, *args, **kwargs)
        ).add_done_callback(
            lambda future: tornado.ioloop.IOLoop.instance().add_callback(
                partial(callback, future)))
 
    return wrapper
 
 
class MainHandler(tornado.web.RequestHandler):
 
    def get(self):
        self.write("Hello, world %s" % time.time())
 
 
class SleepHandler(tornado.web.RequestHandler):
 
    @unblock
    def get(self, n):
        time.sleep(float(n))
        return "Awake! %s" % time.time()
 
 
class SleepAsyncHandler(tornado.web.RequestHandler):
 
    @tornado.web.asynchronous
    def get(self, n):
 
        def callback(future):
            self.write(future.result())
            self.finish()
 
        EXECUTOR.submit(
            partial(self.get_, n)
        ).add_done_callback(
            lambda future: tornado.ioloop.IOLoop.instance().add_callback(
                partial(callback, future)))
 
    def get_(self, n):
        time.sleep(float(n))
        return "Awake! %s" % time.time()
 
 
application = tornado.web.Application([
    (r"/", MainHandler),
    (r"/sleep/(d+)", SleepHandler),
    (r"/sleep_async/(d+)", SleepAsyncHandler),
])
 
 
if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()

【ThreadPoolExecutor】

The code above uses two ThreadPoolExecutor methods, the constructor and submit; their help text follows:

class ThreadPoolExecutor(concurrent.futures._base.Executor)
 |  Method resolution order:
 |      ThreadPoolExecutor
 |      concurrent.futures._base.Executor
 |      __builtin__.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, max_workers)
 |      Initializes a new ThreadPoolExecutor instance.
 |      
 |      Args:
 |          max_workers: The maximum number of threads that can be used to
 |              execute the given calls.
 |  
 |  submit(self, fn, *args, **kwargs)
 |      Submits a callable to be executed with the given arguments.
 |      
 |      Schedules the callable to be executed as fn(*args, **kwargs) and returns
 |      a Future instance representing the execution of the callable.
 |      
 |      Returns:
 |          A Future representing the given call.

1. max_workers is the maximum number of threads that will execute the submitted calls. What if more calls than that are submitted? They simply wait in the executor's internal work queue until a worker thread becomes free.

2. submit schedules fn(*args, **kwargs) for execution and returns a Future instance representing that call.
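
Both points are easy to see outside Tornado; a small standalone sketch (slow_square is just an example function):

from concurrent.futures import ThreadPoolExecutor
import time

executor = ThreadPoolExecutor(max_workers=2)


def slow_square(x):
    time.sleep(0.5)
    return x * x


# submit() returns a Future immediately for every call; with only two
# workers, the remaining calls wait in the executor's internal queue.
futures = [executor.submit(slow_square, i) for i in range(5)]
print([f.result() for f in futures])   # result() blocks until each is done
executor.shutdown()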

【Future】

Help on class Future in module concurrent.futures._base:

class Future(__builtin__.object)
 |  Represents the result of an asynchronous computation.
 |  
 |  Methods defined here:
 |  
 |  __init__(self)
 |      Initializes the future. Should not be called by clients.
 |  
 |  __repr__(self)
 |  
 |  add_done_callback(self, fn)
 |      Attaches a callable that will be called when the future finishes.
 |      
 |      Args:
 |          fn: A callable that will be called with this future as its only
 |              argument when the future completes or is cancelled. The callable
 |              will always be called by a thread in the same process in which
 |              it was added. If the future has already completed or been
 |              cancelled then the callable will be called immediately. These
 |              callables are called in the order that they were added.
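
Note the docstring's wording about which thread runs the callback: it only guarantees the same process, and in practice, if the future is not yet done, the callback fires on the worker thread that completes it. That is exactly why the unblock decorator does not touch the RequestHandler inside add_done_callback but only schedules the real callback onto the IOLoop via add_callback. A small sketch showing where the callback runs:

from concurrent.futures import ThreadPoolExecutor
import threading
import time

executor = ThreadPoolExecutor(max_workers=1)


def work():
    time.sleep(0.2)
    return 42


def on_done(future):
    # Usually runs on the worker thread (or immediately on the caller's
    # thread if the future was already finished when it was attached).
    print("callback ran on:", threading.current_thread().name,
          "result:", future.result())


executor.submit(work).add_done_callback(on_done)
print("main thread is:", threading.current_thread().name)
executor.shutdown(wait=True)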

  

【References】

1. http://lbolla.info/blog/2013/01/22/blocking-tornado

2. http://www.tuicool.com/articles/36ZzA3
