Operating Systems: Three Easy Pieces --- The Wish For Atomicity (Note)

One way to solve this problem would be to have more powerful instructions that, in a single

step, did exactly whatever we needed done and thus removed the possibility of an untimely

interrupt. For example, what if we had a super instruction that looked like this?

memory-add 0x8049a1c, $0x1

Assume this instruction adds a value to a memory location, and the hardware guarantees that

it executes atomically; when the instruction executes, it would perform the update as desired.

It could not be interrupted mid-instruction, because that is precisely the guarantee we receive

from the hardware: when an interrupt occurs, either the instruction has not run at all, or it has

run to completion; there is no in-between state. Hardware can be a beautiful thing, no?

Atomically, in this context, means "as a unit", which we sometimes take as "all or none". What

we would like is to execute the three-instruction sequence (load, add, store) atomically.

As we said, if we had a single instruction to do this, we would just issue that instruction and

be done. But in the general case, we won't have such an instruction. Imagine we were building

a concurrent B-tree, and wished to update it; would we really want the hardware to support

an atomic "update of B-tree" instruction? Probably not, at least in a sane instruction set.

Thus, what we will instead do is ask the hardware for a few useful instructions upon which we

can build a general set of what we call synchronization primitives. By using these hardware

synchronization primitives, in combination with some help from the operating system, we will

be able to build multi-threaded code that accesses critical sections in a synchronized and

controlled manner, and thus reliably produces the correct result despite the challenging nature

of concurrent execution. Pretty awesome, right?

This is the problem we will study in this section of the book. It is a wonderful and hard problem,

and it should make your mind hurt a bit. If it doesn't, then you don't understand! Keep working

until your head hurts; you then know you are headed in the right direction. At that point, take

a break; we don't want your head hurting too much.

                Aside: Key Concurrency Terms

These four terms are so central to concurrent code that we thought it worthwhile to call them

out explicitly.

A critical section is a piece of code that accesses a shared resource, usually a variable or data

structure.

A race condition arises if multiple threads of execution enter the critical section at roughly the

same time; both attempt to update the shared data structure, leading to a surprising and

perhaps undesirable outcome.

An indeterminate program contains one or more race conditions; the output of the program

varies from run to run, depending on which threads ran when. The outcome is thus not

deterministic, something we usually expect from computer systems.

To avoid these problems, threads should use some kind of mutual exclusion primitives; doing

so guarantees that only a single thread ever enters a critical section, thus avoiding races, and

resulting in deterministic program outputs.

Original post: https://www.cnblogs.com/miaoyong/p/4955852.html