day44——multiple Celery instances, Celery and periodic tasks

Celery supports periodic tasks: set the execution time for a task and Celery will run it on schedule automatically. The scheduling module that does this is called celery beat.

Write a script called periodic_task.py:

from celery import Celery
from celery.schedules import crontab
 
app = Celery('periodic_task', broker='redis://localhost')  # broker matching the redis URL in the output below
 
@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls test('hello') every 10 seconds.
    sender.add_periodic_task(10.0, test.s('hello'), name='add every 10')
 
    # Calls test('world') every 30 seconds
    sender.add_periodic_task(30.0, test.s('world'), expires=10)
 
    # Executes every Monday morning at 7:30 a.m.
    sender.add_periodic_task(
        crontab(hour=7, minute=30, day_of_week=1),
        test.s('Happy Mondays!'),
    )
 
@app.task
def test(arg):
    print(arg)

add_periodic_task adds a single scheduled task entry.

The example above registers periodic tasks by calling a function; you can also declare them in a configuration-file style. The following schedules a task every 30 seconds:

app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': 30.0,
        'args': (16, 16)
    },
}
app.conf.timezone = 'UTC'
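The `'schedule'` key is not limited to a plain interval in seconds; the same `crontab` objects from the decorator example work here too. A self-contained sketch (the task name `tasks.test` is assumed to match a registered task, and the redis broker URL is illustrative):

```python
from celery import Celery
from celery.schedules import crontab

app = Celery('tasks', broker='redis://localhost')

app.conf.beat_schedule = {
    # Run tasks.test every Monday at 7:30 a.m. — the config-style
    # equivalent of the crontab add_periodic_task() example above.
    'monday-morning': {
        'task': 'tasks.test',
        'schedule': crontab(hour=7, minute=30, day_of_week=1),
        'args': ('Happy Mondays!',),
    },
}
app.conf.timezone = 'UTC'
```

This is a configuration fragment; it only takes effect once celery beat is started against this app.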

With the tasks defined, Celery needs a separate process to fire them on schedule. Note that this process only *dispatches* tasks, it does not execute them: it keeps checking your task schedule, and whenever a task is due it publishes a task-call message for a celery worker to execute.
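To make the dispatch-vs-execute split concrete, here is a minimal pure-Python sketch (no Celery involved, all names hypothetical) of what a beat-style loop does on each tick: it only checks due times and emits call messages, leaving execution to something else:

```python
def beat_tick(schedule, now, send):
    """Check every schedule entry; for each due task, *send* a
    message describing the call instead of running the task."""
    for name, entry in schedule.items():
        if now >= entry['next_run']:
            send({'task': name, 'args': entry['args']})   # dispatch only
            entry['next_run'] = now + entry['interval']   # reschedule

# Hypothetical schedule mirroring 'add-every-30-seconds' above.
schedule = {'tasks.add': {'interval': 30.0, 'args': (16, 16), 'next_run': 0.0}}
outbox = []
beat_tick(schedule, now=0.0, send=outbox.append)    # due -> one message sent
beat_tick(schedule, now=10.0, send=outbox.append)   # not due yet -> nothing
```

Real celery beat persists its schedule state (the `celerybeat-schedule` db seen in the output below) so restarts do not lose track of run times.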

Start the task scheduler, celery beat:

$ celery -A periodic_task beat

The output looks like this:

celery beat v4.0.2 (latentcall) is starting.
__    -    ... __   -        _
LocalTime -> 2017-02-08 18:39:31
Configuration ->
    . broker -> redis://localhost:6379//
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> celery.beat.PersistentScheduler
    . db -> celerybeat-schedule
    . logfile -> [stderr]@%WARNING
    . maxinterval -> 5.00 minutes (300s)

One step is still missing: you need to start a worker to execute the tasks that celery beat dispatches.

Start a celery worker to execute the tasks:

$ celery -A periodic_task worker

 -------------- celery@Alexs-MacBook-Pro.local v4.0.2 (latentcall)
---- **** -----
--- * ***  * -- Darwin-15.6.0-x86_64-i386-64bit 2017-02-08 18:42:08
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x104d420b8
- ** ---------- .> transport:   redis://localhost:6379//
- ** ---------- .> results:     redis://localhost/
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
Original article: https://www.cnblogs.com/yangjinbiao/p/8078952.html