Virtual Memory
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Allocating Kernel Memory
- Other Considerations
Swapping
Demand Paging
Bring a page into memory only when it is needed
Advantages:
- Less I/O needed
- No unnecessary I/O
- Less memory needed
- Faster response
- More users/processes
Valid-Invalid Bit
Page Fault
Effective Access Time (EAT)
- EAT = (1 − p) × memory access time + p × page-fault service time, where p is the page-fault rate
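The impact of page faults on access time can be sketched numerically. This is a minimal example; the 200 ns memory access and 8 ms fault service times are illustrative assumptions, not values from the notes:

```python
def eat_ns(p, mem_access_ns=200, fault_service_ns=8_000_000):
    """EAT = (1 - p) * memory access time + p * page-fault service time.
    p is the page-fault rate; times are in nanoseconds (assumed values)."""
    return (1 - p) * mem_access_ns + p * fault_service_ns

# Even a tiny fault rate dominates the average access time:
print(eat_ns(0.0))     # 200.0 ns (no faults)
print(eat_ns(0.001))   # ~8199.8 ns: one fault per 1000 accesses gives a ~40x slowdown
```

This is why demand paging only performs well when the fault rate is kept very low.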
Locality of Reference (principle of locality)
Demand Paging Optimizations
Copy-on-Write
Page Replacement
Global Allocation
Local Allocation
Page and Frame Replacement Algorithms
- Frame-allocation algorithm determines
- How many frames to give each process
- Page-replacement algorithm
- Which frames to replace
- Goal: lowest page-fault rate on both first access and re-access
Page Replacement Algorithms
- FIFO
- Optimal
- Least Recently Used (LRU)
- Second chance
- Enhanced second chance
FIFO Algorithm
Rule
- Replace the oldest one
Implementation
- OS maintains a circular queue of all pages
- Page at the head of the queue: oldest one
- Page at the tail: Recent arrival
- When there is a page fault
- Page at the head is removed
- New page added to the tail
Belady’s Anomaly
For some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases.
Adding more frames might cause more page faults!
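The anomaly can be reproduced with a small FIFO simulation. This is a minimal Python sketch (the notes give no code); the reference string is the classic textbook example:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement: the queue head is the
    oldest page; on a fault the head is evicted and the new page joins
    at the tail. Note that FIFO order is NOT updated on a hit."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page in resident:
            continue                       # hit: nothing changes
        faults += 1
        if len(queue) == nframes:
            resident.remove(queue.popleft())  # evict oldest page (head)
        queue.append(page)                    # new page at the tail
        resident.add(page)
    return faults

# Classic reference string exhibiting Belady's anomaly under FIFO:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults with 3 frames
print(fifo_faults(refs, 4))   # 10 faults with 4 frames: more frames, more faults!
```

Stack algorithms such as LRU and OPT cannot show this behavior, because the set of resident pages with n frames is always a subset of the set with n + 1 frames.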
Optimal Algorithm
Replace the page that will not be used for the longest period of time
Problem
- Can’t read the future
- Can only be used offline, to measure how close other algorithms come to the optimal
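Because OPT needs the complete reference string in advance, it can only be simulated offline. A minimal Python sketch (assumed helper names, not from the notes):

```python
def opt_faults(refs, nframes):
    """Optimal (OPT/MIN) replacement: on a fault, evict the resident page
    whose next use lies farthest in the future (or that is never used
    again). Requires knowing the whole reference string up front."""
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) == nframes:
            def next_use(p):
                # Distance to the next reference; never-used pages sort last.
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            resident.remove(max(resident, key=next_use))
        resident.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # 9 faults with 3 frames
```

Running any practical algorithm on the same trace and comparing its fault count against this lower bound is exactly the "measure how close to optimal" use mentioned above.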
Least Recently Used (LRU) Algorithm
Replace the page that has not been used for the longest period of time
- LRU is a good algorithm and frequently used
- LRU and OPT don’t have Belady’s Anomaly
Counter Implementation
- A global counter increases by 1 whenever a page is referenced; every page in memory also has its own counter
- When a page is referenced, its counter is set to the current value of the global counter
- When a page needs to be replaced, find the resident page with the smallest counter value
Disadvantage
- Takes O(n) time to search for the victim
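The counter scheme above can be sketched in a few lines of Python (a simulation, not kernel code; the hardware would maintain the counter):

```python
def lru_counter_faults(refs, nframes):
    """LRU via a global counter: each resident page stores the counter
    value of its last reference; the victim is the resident page with
    the smallest stamp, found by an O(n) scan."""
    clock, faults = 0, 0
    stamp = {}                   # resident page -> counter at last reference
    for page in refs:
        clock += 1               # global counter ticks on every reference
        if page not in stamp:
            faults += 1
            if len(stamp) == nframes:
                victim = min(stamp, key=stamp.get)   # O(n) search for smallest stamp
                del stamp[victim]
        stamp[page] = clock      # synchronize with the global counter
    return faults

print(lru_counter_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # 6 page faults
```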
Stack Implementation
Keep a stack of page numbers in a doubly linked form:
When a page is referenced:
- Move it to the top
- Advantage
- No search for replacement
- Disadvantage
- Each update of the stack may require changing up to 6 pointers
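The stack scheme can be sketched with Python's `OrderedDict`, which is backed by a doubly linked list and so models the structure above (a simulation sketch, not from the notes):

```python
from collections import OrderedDict

def lru_stack_faults(refs, nframes):
    """LRU with a 'stack' kept as a doubly linked list: a referenced page
    moves to the top; the bottom is always the LRU page, so replacement
    needs no search."""
    stack = OrderedDict()        # first key = bottom (LRU), last key = top (MRU)
    faults = 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)        # referenced page moves to the top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)  # evict from the bottom, O(1)
            stack[page] = True
    return faults

print(lru_stack_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # 6: same result as any exact LRU
```

The trade-off versus the counter scheme is visible here: every reference does a little pointer work, but eviction is constant time.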
LRU Approximation Algorithms
Usage of a reference bit
- Associate each page with a reference bit, initial value 0
- When a page is referenced, set the bit to 1
- When a page needs to be replaced, choose one whose reference bit is 0, if any exists
Second-chance algorithm
- Use a circular FIFO and reference bit for each page in memory
- If the page to be replaced has reference bit = 0 » replace it
- If reference bit = 1 » clear the bit to 0 » advance to the next page
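The clock mechanism above can be simulated directly. A minimal sketch, assuming the loaded page's reference bit is set to 1 (conventions vary):

```python
def second_chance_faults(refs, nframes):
    """Second-chance (clock) replacement: a circular FIFO of frames with
    one reference bit each. The hand skips pages with bit 1 (clearing the
    bit as it passes) and evicts the first page it finds with bit 0."""
    frames = [None] * nframes    # circular buffer of resident pages
    refbit = [0] * nframes
    hand, faults = 0, 0
    for page in refs:
        if page in frames:
            refbit[frames.index(page)] = 1   # hit: grant a second chance
            continue
        faults += 1
        while refbit[hand] == 1:             # pass over recently used pages
            refbit[hand] = 0                 # ...clearing their bits
            hand = (hand + 1) % nframes
        frames[hand] = page                  # evict victim, install new page
        refbit[hand] = 1                     # assumed: new page starts with bit 1
        hand = (hand + 1) % nframes
    return faults

print(second_chance_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # 6 page faults
```

If every resident page has its bit set, the hand sweeps the whole circle once and the algorithm degenerates to plain FIFO.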
Enhanced second-chance algorithm
Improve algorithm by using reference bit and modify bit for each page in memory
- i.e., an ordered pair (reference, modify)
All pages in memory fall into four classes
- (0, 0) neither recently used nor modified – best page to replace
- (0, 1) not recently used but modified – not quite as good, must write out before replacement
- (1, 0) recently used but clean – probably will be used again soon
- (1, 1) recently used and modified – probably will be used again soon and need to write out before replacement – worst page to replace
When page replacement is called for, replace a page in the lowest nonempty class, using the clock scheme
- Disadvantage
- Might need to search circular queue several times
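Victim selection by class can be sketched as follows. This is only the class-ordering step (a hypothetical helper, not the full clock scan with bit clearing):

```python
def pick_victim(frames):
    """Enhanced second chance, victim selection only: frames is a list of
    (page, reference_bit, modify_bit). Scan the four classes in order
    (0,0) < (0,1) < (1,0) < (1,1) and return the first match, i.e. a page
    from the lowest nonempty class. Each class may require its own pass,
    which is why the queue can be searched several times."""
    for wanted in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        for page, ref, mod in frames:
            if (ref, mod) == wanted:
                return page
    raise ValueError("no frames to choose from")

frames = [("A", 1, 1), ("B", 1, 0), ("C", 0, 1), ("D", 1, 1)]
print(pick_victim(frames))   # "C": (0, 1) is the lowest nonempty class here
```

Note that "C" is modified, so it must still be written out before its frame is reused, whereas a (0, 0) page could have been discarded immediately.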
Page Buffering
More strategies (algorithms) in improving the efficiency of page replacement
Page-buffering consideration
Keep a pool of free frames, always
- When a frame is needed, read the desired page into a free frame from the pool immediately
- A victim is still selected, but it is evicted (written out) later, when convenient; its frame then rejoins the pool
- Advantage: reduces the time to obtain a free frame at a page fault
Expansion
- Keep a list of modified pages
- Whenever the paging device is idle, write the modified page to the disk. Its modify bit is then reset to 0
- Keep free frame contents intact (i.e., not cleared)
- Reduce penalty if wrong victim frame was selected
- If the page is referenced again, no need to load from the disk
Allocation of Frames
For performance reasons, each process needs a minimum number of frames
Fixed allocation
Equal allocation
- Each process is allocated same number of frames
- Disadvantage
- Space waste for small process
Proportional allocation
- Allocate frames according to the size of a process
- Disadvantage
- Process size might be changed during the execution
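Proportional allocation can be written as a_i = (s_i / S) × m, where s_i is the size of process i, S the total size of all processes, and m the number of available frames. A minimal sketch with textbook-style numbers:

```python
def proportional_allocation(sizes, total_frames):
    """Give process i floor((s_i / S) * m) frames, where S = sum of all
    process sizes and m = total frames. Floor division is used here;
    leftover frames would need to be distributed by some extra rule."""
    total_size = sum(sizes)
    return [size * total_frames // total_size for size in sizes]

# Two processes of 10 and 127 pages sharing 64 frames:
print(proportional_allocation([10, 127], 64))   # [4, 59]
```

The rounding already shows the disadvantage noted above: allocations must be recomputed whenever process sizes change, and one frame remains unassigned here.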
Priority allocation
- The number of frames a process receives depends on a combination of its size and its priority
- May replace the pages of a process with lower priority
Thrashing
If a process does not have “enough” frames in memory, the page-fault rate is very high
- Pages are replaced frequently
- A replaced page might be needed again soon
- This leads to:
- Low CPU utilization
- The operating system keeps adding new processes, trying to raise CPU utilization
- Things get worse → a vicious cycle
- Thrashing
- A process is busy swapping pages in and out instead of doing useful work
[!question]+ Solutions to thrashing
- Decrease the degree of multiprogramming
- Establish “acceptable” page-fault frequency (PFF) rate and use local replacement policy
- Install enough physical memory (hardware)
- Install faster hard disk
Page Fault Frequency
- If the actual fault rate is too low, the process loses frames (too many frames are assigned, so some can be removed)
- If the actual fault rate is too high, the process gains frames
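The PFF policy is a simple control loop. A minimal sketch; the thresholds and step size are assumed values for illustration:

```python
def adjust_frames(frames, fault_rate, low=0.02, high=0.10, step=1):
    """Page-fault-frequency (PFF) control: keep a process's fault rate
    inside an 'acceptable' band by growing or shrinking its allocation.
    Thresholds (2% and 10% here) are illustrative assumptions."""
    if fault_rate > high:
        return frames + step          # too many faults: allocate another frame
    if fault_rate < low and frames > step:
        return frames - step          # too few faults: a frame can be reclaimed
    return frames                     # rate acceptable: leave allocation alone

print(adjust_frames(10, 0.15))   # 11: rate too high, process gains a frame
print(adjust_frames(10, 0.01))   # 9: rate too low, process loses a frame
print(adjust_frames(10, 0.05))   # 10: rate in the acceptable band
```

If no free frames remain when a process needs to grow, the system would instead suspend a process, which is the "decrease the degree of multiprogramming" remedy listed above.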
Allocating Kernel Memory
- Allocating memory for the kernel is different from allocating memory for user applications
- Special features of kernel
- Kernel requests memory for structures of varying sizes
- Structure for PCB
- Structure for file descriptor
- There are multiple instances of each structure
- There is a PCB for each process
- Memory often needs to be physically contiguous
Buddy system
- Allocates memory from fixed-size segment consisting of physically contiguous pages
- Power-of-2 allocator: memory is allocated in units whose size is a power of 2 – the smallest such unit that is at least the requested size
Advantage:
- Adjacent unused buddies can be coalesced into a larger chunk
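The power-of-2 rounding rule can be sketched as follows. The 4 KB minimum and 256 KB segment size are assumed illustrative values:

```python
def buddy_block_size(request, min_block=4096, max_block=256 * 1024):
    """Buddy-system sizing: serve a request from the smallest power-of-2
    block that is >= the requested size. Block sizes are assumed to range
    from 4 KB up to a 256 KB segment (illustrative values)."""
    size = min_block
    while size < request:
        size *= 2                # repeatedly split/double until big enough
    if size > max_block:
        raise MemoryError("request exceeds the segment size")
    return size

# A 21 KB request is served from a 32 KB block (the 256 KB segment is
# split 256 -> 128 -> 64 -> 32 KB to produce it):
print(buddy_block_size(21 * 1024))   # 32768
```

The rounding also shows the cost of the scheme: the 21 KB request wastes 11 KB of the 32 KB block (internal fragmentation).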
Slab allocator
- A cache
- is used for unique kernel data structure
- consists of one or more slabs
- A slab is one or more physically contiguous pages
- Filled with objects of the same kind – instantiations of the data structure
- Each object has one of two states: free or used
- If a slab is full of used objects, the next object is allocated from an empty slab
- If no empty slab exists, a new slab is allocated
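The slab bookkeeping can be sketched with a tiny class. This is a toy model (object contents and page management are omitted; `SlabCache` and its layout are assumptions for illustration):

```python
class SlabCache:
    """Toy slab cache: one cache per kernel data-structure type. Each slab
    holds a fixed number of object slots, each either 'free' or 'used'."""

    def __init__(self, objs_per_slab):
        self.objs_per_slab = objs_per_slab
        self.slabs = []              # each slab: list of "free"/"used" marks

    def alloc(self):
        """Mark a free object as used; return (slab_index, object_index)."""
        for si, slab in enumerate(self.slabs):
            if "free" in slab:               # reuse a slot in an existing slab
                oi = slab.index("free")
                slab[oi] = "used"
                return si, oi
        # All slabs full (or none exist yet): allocate a fresh slab.
        self.slabs.append(["free"] * self.objs_per_slab)
        self.slabs[-1][0] = "used"
        return len(self.slabs) - 1, 0

cache = SlabCache(objs_per_slab=2)
print(cache.alloc())   # (0, 0): first object in a newly created slab
print(cache.alloc())   # (0, 1): second slot of the same slab
print(cache.alloc())   # (1, 0): slab 0 is full, so a second slab is created
```

Because objects come preconstructed from the slab, allocation and freeing are fast and cause no internal fragmentation within a slab, which is why kernels favor this scheme for fixed-size structures like PCBs.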