Operating Systems: Three Easy Pieces --- One More Problem (Note)

This chapter has set up the problem of concurrency as if only one type of interaction occurs

between threads, that of accessing shared variables and the need to support atomicity for

critical sections. As it turns out, there is another common interaction that arises, where one

thread must wait for another to complete some action before it continues. This interaction

arises, for example, when a process performs a disk I/O and is put to sleep; when the I/O

completes, the process needs to be roused from its slumber so it can continue.

Thus, in the coming chapters, we will study not only how to build synchronization primitives that support atomicity but also mechanisms that support this type of sleeping/waking interaction, which is common in multi-threaded programs. If this doesn't make

sense right now, that is OK! It will soon enough, when you read the chapter on condition 

variables. If it doesn't by then, well, then it is less OK, and you should read that chapter again

and again until it does make sense.
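As a rough preview of that chapter, the sleeping/waking interaction can be sketched with POSIX threads: a parent thread sleeps on a condition variable until a child thread signals that it has finished. This is a minimal sketch, not the book's code; the helper names (thr_exit, thr_join, run_wait_demo) are illustrative.

```c
#include <pthread.h>

/* Shared state guarded by a mutex; the condition variable lets the
 * parent sleep until the child announces completion. */
static int done = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

static void thr_exit(void) {            /* called by the child when finished */
    pthread_mutex_lock(&m);
    done = 1;
    pthread_cond_signal(&c);            /* rouse the sleeping parent */
    pthread_mutex_unlock(&m);
}

static void thr_join(void) {            /* called by the parent to wait */
    pthread_mutex_lock(&m);
    while (done == 0)                   /* re-check: wakeups can be spurious */
        pthread_cond_wait(&c, &m);      /* releases m while asleep */
    pthread_mutex_unlock(&m);
}

static void *child(void *arg) {
    (void)arg;
    thr_exit();
    return NULL;
}

int run_wait_demo(void) {
    done = 0;
    pthread_t t;
    pthread_create(&t, NULL, child, NULL);
    thr_join();                         /* parent sleeps here until signaled */
    pthread_join(t, NULL);
    return done;                        /* 1 once the child has finished */
}
```

Compile with -pthread. The while loop around pthread_cond_wait() is the essential pattern: the waiter re-checks the condition after waking, rather than assuming the signal implies it.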

                   Why in OS Class?

Before wrapping up, one question you might have is: why are we studying this in an OS class? "History" is the one-word answer; the OS was the first concurrent program, and many

techniques were created for use within the OS. Later, with multi-threaded processes,

application programs also had to consider such things.

For example, imagine the case where two processes are running. Assume they both call write() to write to the same file, and both wish to append their data to the file (i.e., add the data to the end of the file, thus increasing its length). To do so, both must allocate a new block,

record in the inode of the file where this block lives, and change the size of the file to reflect

the new larger size (among other things; we will learn more about files in the third part of

the book). Because an interrupt may occur at any time, the code paths that update these shared structures (e.g., a bitmap for allocation, or the file's inode) are critical sections; thus, OS designers, from the very introduction of the interrupt, had to worry about how the OS updates its internal structures. An untimely interrupt causes all of the problems described

above. Not surprisingly, page tables, process lists, file system structures, and virtually every

kernel data structure must be carefully accessed, with the proper synchronization primitives,

to work correctly.
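The append example above can be sketched in miniature: two fields of the metadata must change together, so the update is bracketed by a lock to make it a critical section. This is a toy in-memory analogue, not real kernel code; the struct toy_inode, its fields, and run_append_demo() are hypothetical names for illustration.

```c
#include <pthread.h>

/* A toy in-memory analogue of the file-metadata update described above:
 * appending must update two fields together ("allocate" blocks, grow the
 * size), so the whole update is made a critical section with a lock. */
struct toy_inode {
    long size;                  /* file length in bytes */
    long nblocks;               /* number of allocated blocks */
    pthread_mutex_t lock;
};

#define BLOCK_SIZE 4096

/* Append 'len' bytes: both fields change, with no half-done state visible. */
void toy_append(struct toy_inode *ip, long len) {
    pthread_mutex_lock(&ip->lock);
    ip->nblocks += (len + BLOCK_SIZE - 1) / BLOCK_SIZE;  /* "allocate" blocks */
    ip->size    += len;                                  /* record new length */
    pthread_mutex_unlock(&ip->lock);
}

static struct toy_inode ino = { 0, 0, PTHREAD_MUTEX_INITIALIZER };

static void *appender(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++)
        toy_append(&ino, 100);
    return NULL;
}

long run_append_demo(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, appender, NULL);
    pthread_create(&b, NULL, appender, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return ino.size;            /* 2 * 1000 * 100 = 200000 with the lock held */
}
```

Without the lock, an untimely switch between the two updates could leave the size and block count inconsistent, which is exactly the hazard the kernel faces with its inodes and allocation bitmaps.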

                Tip: Use Atomic Operations

Atomic operations are one of the most powerful underlying techniques in building computer

systems, from the computer architecture, to concurrent code, to file systems, database

management systems, and even distributed systems. The idea behind making a series of actions atomic is simply expressed with the phrase "all or nothing": it should either appear

as if all of the actions you wish to group together occurred, or that none of them occurred,

with no in-between state visible. Sometimes, the grouping of many actions into a single

atomic action is called a transaction, an idea developed in great detail in the world of database

and transaction processing. In our theme of exploring concurrency, we will be using

synchronization primitives to turn short sequences of instructions into atomic blocks of

execution, but the idea of atomicity is much bigger than that, as we will see. For example,

file systems use techniques such as journaling or copy-on-write in order to atomically transition

their on-disk state, critical for operating correctly in the face of system failures. If that does

not make sense, don't worry; it will, in some future chapter.
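The smallest instance of "all or nothing" is a hardware atomic instruction. A plain counter++ compiles to a load, an add, and a store, and a thread can be interrupted between those steps; an atomic fetch-and-add performs the whole read-modify-write as one indivisible step. A small C11 sketch under that assumption (run_atomic_demo and the thread counts are illustrative):

```c
#include <pthread.h>
#include <stdatomic.h>

enum { NTHREADS = 4, NITERS = 100000 };

static atomic_long counter;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < NITERS; i++)
        atomic_fetch_add(&counter, 1);  /* indivisible read-modify-write:
                                           no in-between state is visible */
    return NULL;
}

long run_atomic_demo(void) {
    atomic_store(&counter, 0);
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return atomic_load(&counter);       /* always NTHREADS * NITERS */
}
```

With a plain long and counter++, two threads could each load the same old value and lose an update; the atomic version makes the result deterministic regardless of scheduling.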

Original source: https://www.cnblogs.com/miaoyong/p/4961893.html