Virtual Memory: Demand Paging

Computer Organization and Architecture: Designing for Performance, Ninth Edition

With the use of paging, truly effective multiprogramming
systems came into being. Furthermore, the simple tactic of breaking a process up
into pages led to the development of another important concept: virtual memory.

To understand virtual memory, we must add a refinement to the paging
scheme just discussed. That refinement is demand paging, which simply means that
each page of a process is brought in only when it is needed, that is, on demand.
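
As a rough illustration of the mechanism (not drawn from the text), each page table entry can carry a present bit alongside the frame number; a reference to a page whose present bit is clear triggers a page fault, which is how the hardware signals the OS that the page must be brought in on demand. The field names and bit widths in this C sketch are assumptions, not a real format.

/* Minimal sketch of a page table entry for demand paging.
 * Field names and widths are illustrative assumptions only. */
#include <stdint.h>

typedef struct {
    uint32_t frame_number : 20;  /* physical frame holding the page, if resident */
    uint32_t present      : 1;   /* 1 = page is in main memory; 0 = access causes a page fault */
    uint32_t modified     : 1;   /* 1 = page was written since it was loaded */
    uint32_t referenced   : 1;   /* 1 = page was accessed recently (useful to replacement) */
    uint32_t unused       : 9;   /* padding up to 32 bits */
} page_table_entry;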

Consider a large process, consisting of a long program plus a number of arrays of data. Over any short period of time, execution may be confined to a small section of the program (e.g., a subroutine), and perhaps only one or two arrays of data are being used. This is the principle of locality, which we introduced in Appendix 4A. It would clearly be wasteful to load in dozens of pages for that process when only a few pages will be used before the program is suspended. We can make better use of memory by loading in just a few pages. Then, if the program branches to an instruction on a page not in main memory, or if the program references data on a page not in memory, a page fault is triggered. This tells the OS to bring in the desired page.
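
To make that sequence concrete, here is a hedged sketch, in C, of what a page-fault handler might do under demand paging: find a free frame (evicting a resident page if none is free), read the demanded page from disk, and update the page table. The helper functions (find_free_frame, choose_victim, write_page_to_disk, read_page_from_disk) are hypothetical placeholders rather than a real kernel API, and page_table_entry is the type sketched above.

/* Sketch of an OS response to a page fault under demand paging.
 * All helpers declared below are hypothetical placeholders. */
#define NO_FRAME (-1)

int  find_free_frame(void);                      /* a free physical frame, or NO_FRAME */
int  choose_victim(page_table_entry *pt);        /* replacement policy picks a resident page */
void write_page_to_disk(int page, int frame);    /* save a modified victim page */
void read_page_from_disk(int page, int frame);   /* load the demanded page */

void handle_page_fault(page_table_entry *pt, int faulting_page)
{
    int frame = find_free_frame();
    if (frame == NO_FRAME) {                     /* memory is full: a page must be replaced */
        int victim = choose_victim(pt);
        if (pt[victim].modified)                 /* only dirty pages need writing back */
            write_page_to_disk(victim, pt[victim].frame_number);
        frame = pt[victim].frame_number;
        pt[victim].present = 0;                  /* victim is no longer resident */
    }
    read_page_from_disk(faulting_page, frame);   /* bring in the demanded page */
    pt[faulting_page].frame_number = frame;
    pt[faulting_page].present = 1;               /* the faulting reference can now be retried */
}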

Thus, at any one time, only a few pages of any given process are in memory,
and therefore more processes can be maintained in memory. Furthermore, time is
saved because unused pages are not swapped in and out of memory. However, the
OS must be clever about how it manages this scheme. When it brings one page in, it
must throw another page out; this is known as page replacement. If it throws out a
page just before it is about to be used, then it will just have to go get that page again
almost immediately. Too much of this leads to a condition known as thrashing: the
processor spends most of its time swapping pages rather than executing instructions.
The avoidance of thrashing was a major research area in the 1970s and led to a variety of complex but effective algorithms. In essence, the OS tries to guess, based on
recent history, which pages are least likely to be used in the near future.

A discussion of page replacement algorithms is beyond the scope of this chapter. A potentially effective technique is least recently used (LRU), the same algorithm discussed in Chapter 4 for cache replacement. In practice, LRU is difficult to implement for a virtual memory paging scheme. Several alternative approaches that seek to approximate the performance of LRU are in use; see Appendix F for details.
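
As a rough, self-contained illustration (not from the text), the following C program simulates exact LRU replacement over a short reference string with a fixed number of frames. The reference string and frame count are made-up values; the point is the bookkeeping: every reference, hit or miss, must update a "last used" time, which is precisely the per-reference overhead that makes true LRU impractical for virtual memory and motivates the approximations mentioned above.

/* Simulation of exact LRU page replacement over an example reference string.
 * Illustrative only: the reference string and frame count are arbitrary. */
#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};  /* example reference string */
    int nrefs = (int)(sizeof refs / sizeof refs[0]);

    int  frame_page[NFRAMES];   /* which page each frame holds (-1 = empty) */
    long last_used[NFRAMES];    /* "time" of the most recent reference to that frame */
    int  faults = 0;

    for (int f = 0; f < NFRAMES; f++) { frame_page[f] = -1; last_used[f] = -1; }

    for (int t = 0; t < nrefs; t++) {
        int page = refs[t], hit = -1;

        for (int f = 0; f < NFRAMES; f++)        /* is the page already resident? */
            if (frame_page[f] == page) { hit = f; break; }

        if (hit >= 0) {
            last_used[hit] = t;                  /* LRU bookkeeping on every hit */
        } else {
            int victim = 0;                      /* evict the least recently used frame */
            for (int f = 1; f < NFRAMES; f++)
                if (last_used[f] < last_used[victim]) victim = f;
            frame_page[victim] = page;           /* empty frames (time -1) are used first */
            last_used[victim] = t;
            faults++;                            /* counts compulsory misses as well */
        }
    }

    printf("%d references, %d page faults with %d frames\n", nrefs, faults, NFRAMES);
    return 0;
}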

With demand paging, it is not necessary to load an entire process into main memory. This fact has a remarkable consequence: It is possible for a process to be larger than all of main memory. One of the most fundamental restrictions in programming has been lifted. Without demand paging, a programmer must be acutely aware of how much memory is available. If the program being written is too large, the programmer must devise ways to structure the program into pieces that can be loaded one at a time. With demand paging, that job is left to the OS and the hardware. As far as the programmer is concerned, he or she is dealing with a huge memory, the size associated with disk storage.

Because a process executes only in main memory, that memory is referred to
as real memory. But a programmer or user perceives a much larger memory—that
which is allocated on the disk. This latter is therefore referred to as virtual memory.
Virtual memory allows for very effective multiprogramming and relieves the user of
the unnecessarily tight constraints of main memory.

Original source: https://www.cnblogs.com/rsapaper/p/6216656.html