Operating Systems: Process Synchronization

Cooperating process: A process that can affect or be affected by other processes executing in the system. Cooperating processes can either (1) directly share a logical address space (e.g., threads) or (2) share data only through files or messages.

Race condition: A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place.

 

The critical-section problem

/* General structure of a typical process Pi */
do {
    [entry section]
    [critical section]
    [exit section]
    [remainder section]
} while (true);

Critical section – A segment of code that each process has, in which the process may be changing common variables, updating a table, writing a file, and so on.

Entry section – The section of code implementing the (necessary) request for permission to enter its critical section.

Exit section – The section of code that follows the critical section, in which the process indicates that it has left its critical section.

Remainder section – The remaining code.

Critical-section problem: To design a protocol that the processes in a system can use to cooperate, such that no two processes are executing in their critical sections at the same time.

A solution to the critical-section problem must satisfy three requirements:

1) Mutual exclusion – If a process is executing in its critical section, then no other process can be executing in its critical section.

2) Progress – If no process is executing in its critical section and some processes wish to enter their critical sections, then only the processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

3) Bounded waiting – There exists a bound/limit on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

 

Two general approaches are used to handle critical sections in operating systems:

1) Preemptive kernel: allows a process to be preempted while it is running in kernel mode. (more responsive; less risk that a kernel-mode process runs for an arbitrarily long period)

2) Nonpreemptive kernel: a kernel-mode process runs until it exits kernel mode, blocks, or voluntarily yields control of the CPU. (essentially free from race conditions on kernel data structures, since only one process is active in the kernel at a time)

A preemptive kernel is more responsive and is better suited to real-time programming.

1. Peterson’s solution -- a classic software-based solution to the critical-section problem

There is no guarantee that it will work correctly on modern computer architectures, because such architectures may reorder basic machine instructions such as loads and stores.

Peterson’s solution is restricted to two processes, P0 and P1.

Pi – one of the two processes; Pj – the other process (j = 1 - i);

turn – indicates whose turn it is to enter its critical section;

flag[2] – indicates if a process is ready to enter its critical section; if flag[i] == true, then Pi is ready.

turn and flag[2] are shared between the two processes.

/* The structure of process Pi */
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;
    [critical section]
    flag[i] = false;
    [remainder section]
} while (true);

2. Synchronization hardware -- hardware-based solution to critical-section problem

Locking: Protecting critical regions through the use of locks.

In a single-processor environment, the critical-section problem can be solved simply by preventing interrupts while a shared variable is being modified. (an approach often taken by nonpreemptive kernels)

In a multiprocessor environment, disabling interrupts is not as feasible (because of the time consumption of passing messages to all processors). Many modern computer systems provide special hardware instructions (2 types – test_and_set() and compare_and_swap()) that can be executed atomically to solve the critical-section problem.

* Atomically: to be executed as one uninterruptible unit.

1) test_and_set() instruction 

   A test_and_set() instruction is executed atomically: if two test_and_set() instructions are executed simultaneously, each on a different CPU, they will be executed sequentially in some arbitrary order.

boolean test_and_set (boolean *target) {
    boolean rv = *target;
    *target = true;
    return rv;
}

    Mutual exclusion can be implemented with a boolean variable lock, initialized to false.

/* The structure of process Pi */
do {
    while (test_and_set(&lock))
        ; /* do nothing */
    [critical section]
    lock = false;
    [remainder section]
} while (true);

2) compare_and_swap() instruction

  a compare_and_swap instruction is also executed atomically just like test_and_set().

int compare_and_swap (int *value, int expected, int new_value) {
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}

   Mutual exclusion can be implemented with a global integer variable lock, initialized to 0.

/* The structure of process Pi */
do {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ; /* do nothing */
    [critical section]
    lock = 0;
    [remainder section]
} while (true);

Algorithms 1 and 2 above satisfy the mutual-exclusion requirement, but not the bounded-waiting requirement.

3) An algorithm that satisfies all the critical-section requirements (using test_and_set()):

do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);
    waiting[i] = false;
    [critical section]
    j = (i+1)%n;
    while ((j != i) && !waiting[j])
        j = (j+1)%n;
    if (j == i)
        lock = false;
    else
        waiting[j] = false;
    [remainder section]
} while (true);

(All elements of waiting[n] and lock are initialized to false.)

Cons: hardware-based solutions are complicated and generally inaccessible to application programmers.

 

3. Mutex Locks -- the simplest software synchronization tool to solve critical-section problem

Mutex lock: used to protect critical regions and thus prevent race conditions. A process must acquire the lock (by calling acquire()) before entering a critical section, and release the lock (by calling release()) when it exits the critical section. A mutex lock implemented with busy waiting, as below, is also called a spinlock.

The boolean variable available indicates whether the lock is available. (Calls to acquire() and release() must themselves be performed atomically, e.g., using compare_and_swap().)

 acquire () {
    while (!available)
        ; /* busy wait */
    available = false;
}
release () {
    available = true;
}
/* The structure of process Pi */
do {
    [acquire lock]
    [critical section]
    [release lock]
    [remainder section]
} while (true);

Pros: no context switch is required when a process must wait on a lock. (Thus useful when locks are expected to be held for short times)

Cons: requires busy waiting.

busy waiting: While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire(). In a multiprogramming system, this wastes CPU cycles that some other process might be able to use productively.

 

 

4. Semaphores -- a more robust tool that behaves similarly to a mutex lock but also provides more sophisticated ways for processes to synchronize their activities.

Semaphore [S]: an integer variable that, apart from initialization, is accessed only through 2 standard atomic operations: wait() [P] and signal() [V].

/* P - proberen, “to test” */
wait (S) {
    while  (S <= 0)
        ;
    S--;
}
/* V – verhogen, “to increment” */
signal (S) {
    S++;
}

All modifications to the semaphore value in wait() and signal() must be executed indivisibly.

*indivisibly: (1) when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value; and (2) in wait(S), the testing of S (S <= 0) and its possible modification (S--) must be executed without interruption.

Semaphore Usage

Binary semaphore: its value ranges only between 0 and 1; it behaves like a mutex lock and can be used instead on systems that do not provide mutex locks.

Counting semaphore: its value ranges over an unrestricted domain. It can be used to –

1) control access to a given resource consisting of a finite number of instances.

  -> Initialize the semaphore to the number of available resources;

  -> each process performs a wait() on the semaphore (decrementing the count) when it wishes to use a resource, and performs a signal() on the semaphore (incrementing the count) when it releases a resource;

  // When the semaphore count == 0, all resources are being used. Processes that then wish to use a resource will block until the count becomes greater than 0.

2) solve synchronization problems.

Consider 2 concurrently running processes P1 (with statement S1) and P2 (with statement S2):

Suppose we require that S2 be executed only after S1 has completed.

Implementation:

-> Let P1 and P2 share a common semaphore synch (initialized to 0);

-> implement Structure of P1, P2 like:

/* The structure of P1 */
S1;
signal(synch);
/* The structure of P2 */
wait(synch);
S2;

// Thus P2 will execute S2 only after P1 has invoked signal(synch), i.e., once synch > 0.

Semaphore implementation (modified to overcome busy waiting)

/* The structure of a semaphore */
typedef struct {
    int value;
    struct process *list;
} semaphore;

The list of waiting processes can be implemented by a pointer to a list of PCBs (process control blocks).

-> When a process executes wait() and finds that the semaphore value is negative (after the decrement), it blocks itself rather than busy waiting.

    -- when a process must wait on a semaphore, it is added to the semaphore's list.

wait (semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

* The block() and wakeup() operations are system calls provided by the OS.

block() places a process into a waiting queue associated with the semaphore, switches the state of the process to ‘waiting’, and then transfers control to the CPU scheduler. -- block() suspends the process

->A blocked process waiting on a semaphore should be restarted when some other process executes a signal() operation.

    -- signal() removes one process from the semaphore's list (of waiting processes) and awakens that process.

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}

wakeup() switches the state of a blocked process from ‘waiting’ to ‘ready’ and places the process in the ready queue. -- wakeup() resumes the process

Semaphore operations must be executed atomically: no two processes can execute wait() and signal() on the same semaphore at the same time.

-> In a single-processor environment, inhibit interrupts during the time wait() and signal() are executing (so that instructions from different processes cannot be interleaved).

-> In a multiprocessor environment, interrupts would have to be disabled on every processor (difficult, and it can seriously diminish performance).

  --> SMP systems therefore provide alternative locking techniques (e.g. compare_and_swap(), spinlocks) to ensure the atomicity of wait() and signal().

 

Deadlocks & starvation

A set of processes is said to be in a deadlocked state when every process in the set is waiting for an event (e.g. the execution of a signal()) that can be caused only by another process in the set.

Indefinite blocking/starvation: A situation in which processes wait indefinitely within the semaphore; it may occur if processes are removed from the list in LIFO (last-in, first-out) order.

Priority inversion

Priority inversion: (A scheduling problem that occurs only in systems with more than two priorities.) Given three processes L, M, H with priorities L < M < H, suppose H requires a resource R that is currently being accessed by L. Ordinarily, H would wait only for L to finish using R. However, if M becomes runnable and preempts L, then M, a process with a lower priority than H, indirectly lengthens how long H must wait for L to relinquish R.

Typical solution –> priority-inheritance protocol: all processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question. When they are finished, their priorities revert to their original values.

(Thus M cannot preempt L’s execution; when L finishes using R, H runs next.)

Original source: https://www.cnblogs.com/RDaneelOlivaw/p/9753330.html