The GIL (Global Interpreter Lock), Deadlocks, and Recursive Locks

1. The GIL (Global Interpreter Lock)

In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple 
native threads from executing Python bytecodes at once. This lock is necessary mainly 
because CPython’s memory management is not thread-safe. (However, since the GIL 
exists, other features have grown to depend on the guarantees that it enforces.)

There are many Python interpreters; the most common one is CPython.

GIL (Global Interpreter Lock): essentially a mutex. It turns concurrent execution into serial execution, trading efficiency for data safety.

In the CPython interpreter, when a single process starts multiple threads, only one thread can execute at any given moment, so the threads cannot take advantage of multiple cores (concurrency is possible, parallelism is not).

Why the GIL exists: CPython's memory management is not thread-safe.

What the GIL does: it guarantees that the Python interpreter runs only one thread at a time.

The GIL is not a feature of the Python language itself; it is a feature of the CPython interpreter.

P.S.: not being able to use multiple cores with threads inside a single process is a problem common to interpreted languages, not something unique to Python.
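
As a small aside (not from the original post), CPython lets you inspect and tune how often the interpreter asks the running thread to hand the GIL over to another thread. A minimal sketch using only the standard sys module; the values are illustrative:

import sys

# CPython periodically asks the thread holding the GIL to release it so that
# other threads get a chance to run; this interval is configurable (default ~5 ms).
print(sys.getswitchinterval())   # e.g. 0.005
sys.setswitchinterval(0.01)      # make switches less frequent (tune with care)
print(sys.getswitchinterval())   # 0.01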

Comparing multithreading and multiprocessing on a compute-intensive task

from multiprocessing import Process
from threading import Thread
import os, time

def work():
    # pure CPU work: no I/O, so under the GIL only one thread can make progress at a time
    res = 0
    for i in range(10000000):
        res *= i

if __name__ == '__main__':
    l = []
    print(os.cpu_count())
    start = time.time()
    for i in range(4):
        # p = Process(target=work)  # run time is 2.1351988315582275
        p = Thread(target=work)     # run time is 3.096200942993164
        l.append(p)
        p.start()

    for p in l:
        p.join()
    stop = time.time()
    print('run time is %s' % (stop - start))

# Single core: threads are cheaper to create, so prefer threads.
# Multiple cores: processes are faster for CPU-bound work, so prefer processes.

Comparing multithreading and multiprocessing on an I/O-intensive task

from multiprocessing import Process
from threading import Thread
import os, time

def work():
    time.sleep(2)  # simulated I/O: the GIL is released while the thread sleeps

if __name__ == '__main__':
    l = []
    print(os.cpu_count())
    start = time.time()
    for i in range(4000):
        # p = Process(target=work)  # run time is 284.4450669288635
        p = Thread(target=work)     # run time is 2.799881935119629
        l.append(p)
        p.start()

    for p in l:
        p.join()
    stop = time.time()
    print('run time is %s' % (stop - start))

# Single core: threads are cheaper to create, so prefer threads.
# Multiple cores: for I/O-bound work threads are still cheaper and also faster,
# because the cost of creating 4000 processes dwarfs the work being done.

The GIL versus an ordinary mutex:

from threading import Thread
import time

n = 100


def task():
    global n
    tmp = n          # every thread reads n == 100 before any of them writes it back
    time.sleep(1)    # while this thread sleeps, the GIL is released and the other threads read n too
    n = tmp - 1      # each thread writes 99, clobbering the others' work

t_list = []
for i in range(100):
    t = Thread(target=task)
    t.start()
    t_list.append(t)

for t in t_list:
    t.join()

print(n)  # prints 99, not 0: the GIL does not protect this read-modify-write sequence

Different pieces of data need different locks.

The GIL only protects interpreter-level data (the interpreter's own internal state, such as reference counts); it keeps the interpreter itself thread-safe.

To protect your own application data you must add your own lock, as the sketch below shows.
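
Here is the same counter example with an explicit Lock protecting the read-modify-write. This is a minimal sketch added for illustration (the sleep is shortened so the serialized run finishes quickly), not part of the original post:

from threading import Thread, Lock
import time

n = 100
mutex = Lock()

def task():
    global n
    with mutex:           # only one thread at a time may read and write n
        tmp = n
        time.sleep(0.01)  # the race window is now harmless
        n = tmp - 1

t_list = []
for i in range(100):
    t = Thread(target=task)
    t.start()
    t_list.append(t)

for t in t_list:
    t.join()

print(n)  # 0: each decrement now happens atomically with respect to the others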

2. Deadlocks and Recursive Locks

Deadlock: a situation in which two or more processes or threads, while competing for resources during execution, end up waiting on each other; without outside intervention, none of them can make any further progress.

from threading import Thread, Lock, current_thread
import time

mutexA = Lock()
mutexB = Lock()

class MyThread(Thread):

    def run(self):
        print('***', current_thread().name, '***')
        self.func1()
        self.func2()

    def func1(self):
        mutexA.acquire()
        print('%s acquired lock A' % self.name)
        # self.name is equivalent to current_thread().name
        mutexB.acquire()
        print('%s acquired lock B' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)

    def func2(self):
        mutexB.acquire()
        print('%s acquired lock B' % self.name)
        time.sleep(1)     # while this thread holds B and sleeps, another thread grabs A in func1
        mutexA.acquire()  # ...and now each thread waits forever for the lock the other holds
        print('%s acquired lock A' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)

for i in range(10, 20):
    t = MyThread()
    t.start()

Recursive lock: to support acquiring the same resource multiple times within the same thread, Python provides the reentrant lock RLock.

from threading import Thread, Lock, current_thread, RLock
import time

"""
An RLock can be acquired and released repeatedly by the thread that already holds it.
Each acquire increments an internal counter by 1.
Each release decrements the counter by 1.
As long as the counter is not 0, every other thread must wait.
Compared with Lock, RLock carries this extra counter,
which records how many times the lock has been acquired.
"""

mutexA = mutexB = RLock()  # both names refer to the same reentrant lock


class MyThread(Thread):

    def run(self):
        # print('***', current_thread().name, '***')
        self.func1()
        self.func2()

    def func1(self):
        mutexA.acquire()
        print('%s acquired lock A' % self.name)
        # self.name is equivalent to current_thread().name
        mutexB.acquire()
        print('%s acquired lock B' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)

    def func2(self):
        mutexB.acquire()
        print('%s acquired lock B' % self.name)
        time.sleep(1)
        mutexA.acquire()
        print('%s acquired lock A' % self.name)
        mutexA.release()
        print('%s released lock A' % self.name)
        mutexB.release()
        print('%s released lock B' % self.name)

for i in range(10, 20):
    t = MyThread()
    t.start()
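
A minimal sketch (my addition) of the counter behaviour described above: the owning thread can acquire an RLock repeatedly, and the lock is only truly released once every acquire has been matched by a release:

from threading import RLock

rlock = RLock()

rlock.acquire()   # counter = 1
rlock.acquire()   # counter = 2; a plain Lock would deadlock here
print('acquired twice in the same thread')
rlock.release()   # counter = 1; still held by this thread
rlock.release()   # counter = 0; other threads may now acquire it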

3. Semaphores

Semaphore: allows a fixed number of threads to modify a resource at the same time.

"""
互斥锁:同时只允许一个线程更改数据
信号量:同时允许一定数量的线程更改数据
"""
from threading import Semaphore,Thread
import time
import random


sm = Semaphore(5)  # 传入的参数是线程数量

def task(name):
    sm.acquire()
    print('%s占了一个坑'%name)
    time.sleep(random.randint(1,3))
    sm.release()

for i in range(20):
    t = Thread(target=task,args=(i,))
    t.start()
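
Semaphore also works as a context manager, which releases the slot even if the task raises. A small self-contained variant of the example above (my rewrite, same behaviour):

from threading import Semaphore, Thread
import time, random

sm = Semaphore(5)

def task(name):
    with sm:  # acquire on entry, release on exit, even if the body raises
        print('%s grabbed a slot' % name)
        time.sleep(random.randint(1, 3))

for i in range(20):
    Thread(target=task, args=(i,)).start()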

4. Events

Event: lets one thread signal a state change to other threads.

Methods

event.is_set() (older alias isSet()): returns the Event's current state.

event.wait(): blocks while the Event's state is False; returns as soon as the state becomes True.

event.set(): sets the Event's state to True; every thread blocked in wait() is woken up and becomes ready to be scheduled by the operating system.

event.clear(): resets the Event's state to False.

Example

from threading import Event, Thread
import time

# First create an Event object
e = Event()


def light():
    print('The red light is on')
    time.sleep(3)
    e.set()  # send the signal
    print('The light turned green')

def car(name):
    print('%s is waiting at the red light' % name)
    e.wait()  # block until the signal arrives
    print('%s hit the gas and sped off' % name)

t = Thread(target=light)
t.start()

for i in range(10):
    t = Thread(target=car, args=('car %s' % i,))
    t.start()
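
A tiny sketch (not in the original) of the two remaining methods from the list above, is_set() and clear():

from threading import Event

e = Event()
print(e.is_set())  # False: a freshly created Event starts in the False state
e.set()
print(e.is_set())  # True: wait() would now return immediately
e.clear()
print(e.is_set())  # False again: wait() would block until the next set()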

5. Using Queues with Threads

Threads in the same process can share data directly, but going through a queue avoids having to manage locks by hand and reduces the chance of deadlock (see the producer/consumer sketch at the end of this section).

Queue  # FIFO

import queue

q=queue.Queue()
q.put('first')
q.put('second')
q.put('third')

print(q.get())
print(q.get())
print(q.get())

LifoQueue  # Last in First out

import queue

q=queue.LifoQueue()
q.put('first')
q.put('second')
q.put('third')

print(q.get())
print(q.get())
print(q.get())

PriorityQueue  # a queue whose items carry a priority; the lower the number, the higher the priority

import queue

q = queue.PriorityQueue()
# put() takes a single item; to attach a priority, pass a (priority, data) tuple
# (the original q.put(10, 'first') actually passed 'first' as the block argument)
q.put((10, 'first'))
q.put((-100, 'second'))
q.put((20, 'third'))

print(q.get())  # (-100, 'second') -- the lowest number has the highest priority
print(q.get())  # (10, 'first')
print(q.get())  # (20, 'third')
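
Coming back to the point made at the top of this section: a queue lets threads hand data to each other without any explicit locks. A minimal producer/consumer sketch using only the standard queue.Queue API (my addition, with a None sentinel to stop the consumer):

from threading import Thread
import queue, time

q = queue.Queue()

def producer():
    for i in range(5):
        q.put('item %s' % i)   # put() handles all the locking internally
        time.sleep(0.1)
    q.put(None)                # sentinel: tells the consumer to stop

def consumer():
    while True:
        item = q.get()         # blocks until an item is available
        if item is None:
            break
        print('consumed', item)

t1 = Thread(target=producer)
t2 = Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()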
Original article: https://www.cnblogs.com/Cpsyche/p/11354974.html