Deployment

1 Deployment

 What is deployment: copying the program you have developed onto a server and running it there.

 Development servers (fine for learning, not for production):

  Flask: python run.py

  Django: python manage.py runserver ip:port

 uwsgi:

  Install: pip install uwsgi

  Start:

   Flask: uwsgi --http :9002 --wsgi-file run.py --callable app

   Django: uwsgi --http :9005 --chdir /data/dmysite/ --wsgi-file dmysite/wsgi.py
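Both commands ultimately point uwsgi at a WSGI callable (that is what `--callable app` names). A minimal, framework-free sketch of what uwsgi expects to find in run.py (the file and body are illustrative; Flask's `app` object and Django's `wsgi.py` `application` expose this same interface):

```python
# run.py -- a minimal WSGI callable; uwsgi's "--callable app" points at this name

def app(environ, start_response):
    # environ: dict of CGI-style request variables
    # start_response: callback that receives the status line and header list
    body = b"hello from uwsgi"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]   # WSGI responses are iterables of bytes
```

Any server that speaks WSGI (uwsgi, gunicorn, the stdlib `wsgiref`) can host this callable unchanged.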

  Config file (also serving static files):

   Flask:

                deploy_uwsgi.ini
                    [uwsgi]
                    http = 0.0.0.0:9005
                    chdir = /data/deploy/
                    wsgi-file = run.py
                    processes = 4
                    static-map = /static=/data/deploy/static
                    callable = app

   Start: uwsgi --ini deploy_uwsgi.ini

   Django:

    First collect the static files:

                    - settings.py 
                        STATIC_ROOT = "/data/xxx"
                    python manage.py collectstatic

    dmysite_uwsgi.ini:

                    [uwsgi]
                    http = 0.0.0.0:9005
                    chdir = /data/dmysite/
                    wsgi-file = dmysite/wsgi.py
                    processes = 4
                    static-map = /static=/data/dmysite/allstatic

    Start: uwsgi --ini dmysite_uwsgi.ini

 Extra: commands to find a process

                ps -ef | grep keyword
                pgrep -f keyword

 nginx:

  Install: yum install nginx

  Edit the config file:

                    vim /etc/nginx/nginx.conf
                    
                        user root;
                        worker_processes 4;

                        error_log /var/log/nginx/error.log;
                        pid /var/run/nginx.pid;

                        events {
                            worker_connections  1024;
                        }

                        http {
                            log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                                              '$status $body_bytes_sent "$http_referer" '
                                              '"$http_user_agent" "$http_x_forwarded_for"';

                            access_log  /var/log/nginx/access.log  main;

                            sendfile            on;
                            tcp_nopush          on;
                            tcp_nodelay         on;
                            keepalive_timeout   65;

                            include             /etc/nginx/mime.types;
                            default_type        application/octet-stream;

                            # uwsgi_pass below speaks the uwsgi protocol, so the uwsgi
                            # side must listen with "socket = 127.0.0.1:8001" (not "http")
                            upstream django {
                                server 127.0.0.1:8001;
                            }
                            server {
                                listen      80;

                                charset     utf-8;

                                # max upload size
                                client_max_body_size 75M;

                                location /static {
                                    alias  /data/dmysite/allstatic; 
                                }

                                location / {
                                    uwsgi_pass  django;
                                    include     uwsgi_params;
                                }
                            }
                        }
        

  Start nginx:

                    /bin/systemctl start nginx.service
                    /etc/init.d/nginx start 

 nohup uwsgi --ini dmysite_uwsgi.ini &: with nohup, the program keeps running and stays reachable after you log out of the Linux session.

 supervisor: guards processes (restarts them when they die)

  Config file: vim /etc/supervisord.conf

                [supervisord]
                http_port=/var/tmp/supervisor.sock ; (default is to run a UNIX domain socket server)
                ;http_port=127.0.0.1:9001  ; (alternately, ip_address:port specifies AF_INET)
                ;sockchmod=0700              ; AF_UNIX socketmode (AF_INET ignore, default 0700)
                ;sockchown=nobody.nogroup     ; AF_UNIX socket uid.gid owner (AF_INET ignores)
                ;umask=022                   ; (process file creation umask;default 022)
                logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
                logfile_maxbytes=50MB       ; (max main logfile bytes b4 rotation;default 50MB)
                logfile_backups=10          ; (num of main logfile rotation backups;default 10)
                loglevel=info               ; (logging level;default info; others: debug,warn)
                pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
                nodaemon=false              ; (start in foreground if true;default false)
                minfds=1024                 ; (min. avail startup file descriptors;default 1024)
                minprocs=200                ; (min. avail process descriptors;default 200)

                ;nocleanup=true              ; (don't clean up tempfiles at start;default false)
                ;http_username=user          ; (default is no username (open system))
                ;http_password=123           ; (default is no password (open system))
                ;childlogdir=/tmp            ; ('AUTO' child log dir, default $TEMP)
                ;user=chrism                 ; (default is current user, required if root)
                ;directory=/tmp              ; (default is not to cd during start)
                ;environment=KEY=value       ; (key value pairs to add to environment)

                [supervisorctl]
                serverurl=unix:///var/tmp/supervisor.sock ; use a unix:// URL  for a unix socket
                ;serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket
                ;username=chris              ; should be same as http_username if set
                ;password=123                ; should be same as http_password if set
                ;prompt=mysupervisor         ; cmd line prompt (default "supervisor")

                ; The below sample program section shows all possible program subsection values,
                ; create one or more 'real' program: sections to be able to control them under
                ; supervisor.

                ;[program:theprogramname]
                ;command=/bin/cat            ; the program (relative uses PATH, can take args)
                ;priority=999                ; the relative start priority (default 999)
                ;autostart=true              ; start at supervisord start (default: true)
                ;autorestart=true            ; restart at unexpected quit (default: true)
                ;startsecs=10                ; number of secs prog must stay running (def. 10)
                ;startretries=3              ; max # of serial start failures (default 3)
                ;exitcodes=0,2               ; 'expected' exit codes for process (default 0,2)
                ;stopsignal=QUIT             ; signal used to kill process (default TERM)
                ;stopwaitsecs=10             ; max num secs to wait before SIGKILL (default 10)
                ;user=chrism                 ; setuid to this UNIX account to run the program
                ;log_stdout=true             ; if true, log program stdout (default true)
                ;log_stderr=true             ; if true, log program stderr (def false)
                ;logfile=/var/log/cat.log    ; child log path, use NONE for none; default AUTO
                ;logfile_maxbytes=1MB        ; max # logfile bytes b4 rotation (default 50MB)
                ;logfile_backups=10          ; # of logfile backups (default 10)


                [program:oldboy]
                command=/usr/bin/uwsgi --ini /data/dmysite/dmysite_uwsgi.ini ; the command to run
                priority=999                ; start priority (lower starts first)
                autostart=true              ; start this program when supervisord starts
                autorestart=true            ; restart automatically on abnormal exit
                startsecs=10                ; must stay up 10s to count as started
                startretries=3              ; max automatic restart attempts
                exitcodes=0,2               ; exit codes considered "expected"
                stopsignal=QUIT             ; signal used to stop the process
                stopwaitsecs=10             ; secs to wait after stopsignal before supervisord falls back to SIGKILL
                user=root                   ; run the program as this user
                log_stdout=true             ; if true, log the program's stdout
                log_stderr=false            ; if true, log the program's stderr
                logfile=/var/log/cat.log    ; program log path
                logfile_maxbytes=1MB        ; max log file size before rotation
                logfile_backups=10          ; max number of rotated log files
            

  Start: supervisord -c /etc/supervisord.conf
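What supervisor provides (autorestart, startretries) can be sketched as a toy restart loop with the standard library. This is only an illustration of the idea, not supervisor's actual implementation:

```python
import subprocess
import sys

def supervise(cmd, max_restarts=3):
    # rerun the child whenever it exits with a non-zero code,
    # giving up after max_restarts attempts (cf. startretries)
    restarts = 0
    while True:
        code = subprocess.call(cmd)
        if code == 0:               # clean exit: nothing to restart
            return restarts
        restarts += 1
        if restarts >= max_restarts:
            return restarts

# a child process that always crashes, to exercise the restart path
crashing = [sys.executable, "-c", "import sys; sys.exit(1)"]
```

The real supervisord additionally daemonizes, rotates logs, and distinguishes "expected" exit codes (exitcodes) from crashes.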

 Project deployment:

            1. start nginx
            2. start supervisor (which starts uwsgi)

2 Celery

 What is celery: a task queue implemented in Python, whose protocol can nevertheless be implemented in any language; besides Python there are node-celery for Node.js and a PHP client.

 What is a task queue: a mechanism for distributing work across threads and machines.
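This mechanism can be sketched in-process with the standard library (all names here are illustrative; celery does the same thing across machines by putting the queue in a broker):

```python
import queue
import threading

task_queue = queue.Queue()          # stands in for the broker (rabbitmq/redis)
results = queue.Queue()             # stands in for the result backend

def worker():
    # each worker pulls (func, args) tasks off the queue and runs them
    while True:
        item = task_queue.get()
        if item is None:            # sentinel tells the worker to exit
            task_queue.task_done()
            break
        func, args = item
        results.put(func(*args))
        task_queue.task_done()

# start a small pool of workers (threads here; separate machines in real use)
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

# the "user app" side just enqueues work and returns immediately
for i in range(5):
    task_queue.put((lambda x: x * x, (i,)))

task_queue.join()                   # wait until every task is done
for _ in threads:
    task_queue.put(None)            # stop the workers
for t in threads:
    t.join()
```

The caller never blocks on the work itself; it only enqueues and, if interested, reads results later, which is exactly the celery usage pattern shown below.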

 Details: https://blog.csdn.net/happyanger6/article/details/51404837

 redis notes:

  Install the client: pip install redis

  Connection pools: redis-py uses a connection pool to manage all connections to a redis server, avoiding the cost of opening and closing a connection for every request. By default each Redis instance maintains its own pool; you can also create one pool explicitly and pass it as a parameter to several Redis instances so they share it.

#!/usr/bin/env python
# -*- coding:utf-8 -*-
 
import redis
 
pool = redis.ConnectionPool(host='10.211.55.4', port=6379)
 
r = redis.Redis(connection_pool=pool)
r.set('foo', 'Bar')
print(r.get('foo'))

  Details on redis's five data types: http://www.cnblogs.com/wupeiqi/articles/5132791.html

  Pipelines: by default redis-py takes a connection from the pool for every request and returns it afterwards. To send several commands in a single request, use a pipeline; by default a pipeline executes as one atomic operation.

#!/usr/bin/env python
# -*- coding:utf-8 -*-
 
import redis
 
pool = redis.ConnectionPool(host='10.211.55.4', port=6379)
 
r = redis.Redis(connection_pool=pool)
 
# pipe = r.pipeline(transaction=False)
pipe = r.pipeline(transaction=True)
 
pipe.set('name', 'alex')
pipe.set('role', 'sb')
 
pipe.execute()

  Publish and subscribe:

                Publisher:
                    import redis

                    conn = redis.Redis(host='192.168.11.28', port=6379)
                    conn.publish('107.8', 'have you eaten?')
                Subscriber:
                    import redis
                    conn = redis.Redis(host='192.168.11.28', port=6379)
                    pub = conn.pubsub()
                    pub.subscribe('107.8')

                    while True:
                        result = pub.parse_response()
                        print(result)

 rabbitmq: RabbitMQ is a complete, reusable enterprise messaging system built on AMQP, released under the Mozilla Public License.

MQ stands for Message Queue. A message queue is an application-to-application communication method: applications communicate by reading and writing messages (application data) to queues, without a dedicated connection linking them. Messaging means programs communicate by sending data in messages rather than by calling each other directly (direct calls are typically used for techniques such as remote procedure calls). Queuing means applications communicate through queues, which removes the requirement that the sending and receiving applications run at the same time.

 Uses:

  Take load off the server and increase the number of requests handled per unit time.

  RPC:

   Implement it yourself with two .py files, for example:

                - fangjinghong.py 
                - liuwu.py 
                Flow: 
                    a. liuwu runs and waits. [the consumer pulls tasks from the queue]
                    b. 
                        fangjinghong sends a message: '{"func_path":"xx.xxx.func","kwargs":{"k1":123,"k2":456},"backqueue":"xxxxxxxx"}'
                        then fangjinghong waits on the xxxxxxxx queue for the reply
                    c. liuwu runs func(...) and sends the return value to the xxxxxxxx queue.
                    d. fangjinghong receives the value.
                    
                    PS: this is how saltstack works
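The message format in step b can be sketched with json (the helper names and the `uuid` fallback for the reply queue are illustrative, not part of any real RPC library):

```python
import json
import uuid

def build_rpc_message(func_path, reply_queue=None, **kwargs):
    # caller side: serialize which function to run, its kwargs,
    # and which queue the return value should be sent back on
    return json.dumps({
        "func_path": func_path,
        "kwargs": kwargs,
        "backqueue": reply_queue or uuid.uuid4().hex,
    })

def parse_rpc_message(raw):
    # consumer side: decode the message and decide what to call
    msg = json.loads(raw)
    return msg["func_path"], msg["kwargs"], msg["backqueue"]

raw = build_rpc_message("xx.xxx.func", reply_queue="xxxxxxxx", k1=123, k2=456)
path, kwargs, backqueue = parse_rpc_message(raw)
```

A real implementation would then import `path`, call it with `kwargs`, and publish the return value to `backqueue`.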

 Note the flow: data -----> exchange -----> queue -----> consumer
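The exchange step can be sketched in plain Python (a toy direct-style exchange; real RabbitMQ exchanges also support fanout and topic routing):

```python
class Exchange:
    # a direct-style exchange: routes each message to the queues
    # bound with a matching routing key
    def __init__(self):
        self.bindings = {}          # routing_key -> list of queues (plain lists)

    def bind(self, routing_key, q):
        self.bindings.setdefault(routing_key, []).append(q)

    def publish(self, routing_key, message):
        # messages sent to an unbound key are simply dropped
        for q in self.bindings.get(routing_key, []):
            q.append(message)

exchange = Exchange()
email_queue, sms_queue = [], []
exchange.bind("email", email_queue)
exchange.bind("sms", sms_queue)

exchange.publish("email", "data 1")   # only the email consumer sees this
exchange.publish("sms", "data 2")
```

Producers therefore never talk to queues directly; they publish to the exchange, and the bindings decide which queue (and thus which consumer) gets the data.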

  RabbitMQ安装:

Install and configure the EPEL repo
   $ rpm -ivh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
 
Install erlang
   $ yum -y install erlang
 
Install RabbitMQ
   $ yum -y install rabbitmq-server

  Note: start/stop the service with: service rabbitmq-server start/stop

  Install the Python client (API):

pip install pika
or
easy_install pika
or
from source:
 
https://pypi.python.org/pypi/pika

  Details: http://www.cnblogs.com/wupeiqi/articles/5132791.html

  Using celery for scheduled tasks:

  Required software:

   - rabbitmq or redis
   - celery

  - celery:

   tornado:

    async, non-blocking
    - the client request is held open
    - only helps while the server is waiting on IO for that request

   Flask + celery:

    - the client request returns immediately (without a result)
    - celery processes the task internally (producing the result)
    - the client comes back later to check for the result

   Flask + a database works the same way

   celery is a module that wraps rabbitmq (and other brokers) for us at a high level.

  - Using celery

   pip3 install celery
   pip3 install eventlet   # only needed on Windows

  Summary:

            - install and use
            - three cooperating parties:
                - user app: enqueues tasks and polls for results
                - broker: rabbitmq or redis, holds the tasks
                - worker: pulls tasks from the broker and executes them
            - quick start
                s1.py 
                    import time
                    from celery import Celery

                    app = Celery('tasks', broker='amqp://47.98.134.86:5672', backend='amqp://47.98.134.86:5672')

                    @app.task
                    def xxxxxxxx(x1,x2):
                        print('task started')
                        time.sleep(15)
                        print('task finished')
                        return 123123
                    
                    
                    Run:
                        - start rabbitmq or redis 
                        - create workers:
                             nohup celery worker -A s1 -l info &
                             nohup celery worker -A s1 -l info &
                             nohup celery worker -A s1 -l info &
                             
                             # on Windows:
                             celery worker -A s1 -P eventlet -l info 
                             
                app.py 
                    
                    from datetime import datetime
                    from s1 import xxxxxxxx 
                    
                    # run the task immediately
                    async_result = xxxxxxxx.delay(1,2)
                    
                    # schedule the task for a later time (eta must be UTC)
                    local_date = datetime(2019, 4, 11, 13, 30, 0)
                    utc_date = datetime.utcfromtimestamp(local_date.timestamp())
                    async_result = xxxxxxxx.apply_async(args=[1,2], eta=utc_date)
                    
                    # possible states:
                    # 'PENDING'  queued or running, not finished yet
                    # 'REVOKED'  cancelled
                    # 'SUCCESS'  finished successfully
                    # 'FAILURE'  raised an exception
                    # get the current state
                    # async_result.state
                    
                    # finished (success / failure / revoked)?
                    # async_result.ready()
        
                    # block until the result arrives; timeout = max seconds to wait
                    # async_result.get(timeout=2)
                    
                    # revoke (cancel before it runs)
                    async_result.revoke()
                    # revoke and terminate even if already running
                    async_result.revoke(terminate=True)
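The local-to-UTC conversion used for `eta` above can be checked in isolation. Here the timezone is pinned to UTC+8 so the example is deterministic (on a real server `timestamp()` uses the machine's own zone):

```python
from datetime import datetime, timezone, timedelta

# fix the zone to UTC+8 so the example does not depend on the machine's tz
cst = timezone(timedelta(hours=8))
local_date = datetime(2019, 4, 11, 13, 30, 0, tzinfo=cst)

# celery interprets a naive eta datetime as UTC, so convert before apply_async
utc_date = datetime.utcfromtimestamp(local_date.timestamp())
# 13:30 at UTC+8 is 05:30 UTC
```

Skipping this conversion makes the task fire hours early or late, offset by exactly the server's UTC offset.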
        
                
            - multi-task project layout
                proj
                    - pp_celery
                        - celery.py 
                            from celery import Celery

                            celery = Celery('tasks', broker='amqp://47.98.134.86:5672', backend='amqp://47.98.134.86:5672')
                            
                            # settings go on celery.conf (not celery.config), e.g.:
                            celery.conf.xx = "ffff"
                            config reference: http://docs.celeryproject.org/en/latest/userguide/configuration.html
                        - t1.py 
                            import time
                            from .celery import celery 
                            @celery.task
                            def x1(x1,x2):
                                print('task started')
                                time.sleep(15)
                                print('task finished')
                                return 123123
                            @celery.task
                            def x2(x1,x2):
                                print('task started')
                                time.sleep(15)
                                print('task finished')
                                return 123123
                        - t2.py 
                            import time
                            from .celery import celery 
                            @celery.task
                            def x3(x1,x2):
                                print('task started')
                                time.sleep(15)
                                print('task finished')
                                return 123123
                            @celery.task
                            def x4(x1,x2):
                                print('task started')
                                time.sleep(15)
                                print('task finished')
                                return 123123
                        - t3.py 
                            import time
                            from .celery import celery 
                            @celery.task
                            def x5(x1,x2):
                                print('task started')
                                time.sleep(15)
                                print('task finished')
                                return 123123
                            @celery.task
                            def x6(x1,x2):
                                print('task started')
                                time.sleep(15)
                                print('task finished')
                                return 123123
                    - app.py 
                        from pp_celery import t1 
                        
                        async_result = t1.x1.delay(1,2)
                        
        
                - start rabbitmq or redis 
                - create workers (from the proj directory):
                     celery worker -A pp_celery -l info &
                     celery worker -A pp_celery -l info &
                     celery worker -A pp_celery -l info &
                     
                     # on Windows:
                     celery worker -A pp_celery -P eventlet -l info 

        
                - run: 
                    python app.py 
                    
        
        
            - static periodic tasks (crontab)
                s1.py 
                    from celery import Celery
                    from celery.schedules import crontab

                    app = Celery('tasks', broker='amqp://47.98.134.86:5672', backend='amqp://47.98.134.86:5672')


                    @app.on_after_configure.connect
                    def setup_periodic_tasks(sender, **kwargs):
                        # run test('hello') every 10 seconds
                        sender.add_periodic_task(10.0, test.s('hello'), name='add every 10')

                        # run test('world') every 30 seconds
                        sender.add_periodic_task(30.0, test.s('world'), expires=10)

                        # run test('Happy Mondays!') every Monday at 7:30
                        sender.add_periodic_task(
                            crontab(hour=7, minute=30, day_of_week=1),
                            test.s('Happy Mondays!'),
                        )

                        # run test('11111') at minute 12 of hours 3, 7 and 20 on Thursdays and Fridays
                        sender.add_periodic_task(
                            crontab(
                                minute=12, hour="3,7,20", day_of_week='thu,fri', day_of_month="*", month_of_year='*',
                            ),
                            test.s('11111'),
                        )

                        # run test('11111') at 7:25 on April 11
                        sender.add_periodic_task(
                            crontab(
                                minute=25, hour=7, day_of_month=11, month_of_year=4,
                            ),
                            test.s('11111'),
                        )


                    @app.task
                    def test(arg):
                        print(arg)
                        return arg 
            
                # beat puts tasks onto the broker on schedule
                celery beat -A s1 
                # workers execute them
                celery worker -A s1 
                
                
        Use case: code releases/deployment
Original: https://www.cnblogs.com/fangjie0410/p/8781575.html