Non-contiguous Allocation

  • Non-contiguous memory allocation reduces memory wastage caused by fragmentation: paging eliminates external fragmentation, while segmentation eliminates internal fragmentation.
  • Paging and Segmentation are the two ways that allow a process’s physical address space to be non-contiguous.

Idea behind Paging

  • Suppose we have only two small non-contiguous free holes in memory, say 1KB each
  • If the OS wants to allocate 2KB of RAM to a process, in contiguous allocation this is not possible, as we must have a contiguous 2KB block of memory available. (External Fragmentation)
  • What if we divide the process into two 1KB blocks?

Basics of Binary Addresses

  • Using 1 bit we can represent 2^1, i.e., 2 memory locations.
  • Using 2 bits, we can represent 2^2 i.e. 4 memory locations.
  • Using 3 bits, we can represent 2^3 i.e. 8 memory locations.
  • Using n bits, we can assign 2^n memory locations.
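The bits-to-locations relationship above can be sketched in a few lines of Python (the function names are mine, for illustration only):

```python
# Sketch: n bits can address 2**n locations, and a memory of a given
# size needs ceil(log2(size)) address bits.
import math

def locations(bits):
    """Number of distinct addresses representable with `bits` bits."""
    return 2 ** bits

def bits_needed(num_locations):
    """Minimum bits required to address `num_locations` locations."""
    return math.ceil(math.log2(num_locations))

print(locations(3))        # 8 locations with 3 bits
print(bits_needed(4096))   # a 4KB space needs 12 address bits
```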

1. Paging

  • Paging is a non-contiguous memory-management scheme.
  • Paging is the process of dividing logical memory into Pages and physical memory into Frames.
  • Idea is to divide
    • The physical memory into fixed-sized blocks called Frames
    • And logical memory into blocks of same size called Pages
    • Page size = Frame size
  • Page size is usually determined by the processor architecture

Page Table

  • In non-contiguous allocation, the MMU uses a Page table for logical-to-physical memory mapping.
  • A Data structure that stores which page is mapped to which frame
  • The page table contains the base address of each page in the physical memory.
  • The page number (p) is used as an index into the page table to get the base address of the corresponding frame in physical memory.
  • Each process has its own page table, and its base address is stored in the PCB
  • A page table base register (PTBR) is present in the system that points to the current page table.
  • Changing page tables requires only this one register, at the time of context-switching.

Address Translation

  • Every address generated by the CPU (logical address) is divided into two parts:
    • a page number (p), used to index into the page table, and
    • a page offset (d), copied unchanged into the physical address
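The p/d split can be sketched in Python. The 4KB page size and the page-to-frame mapping below are illustrative assumptions, not values from the notes:

```python
# Sketch: splitting a logical address into page number (p) and offset (d),
# assuming 4KB pages (offset = low 12 bits). Frame numbers are made up.
PAGE_SIZE = 4096          # 2**12 bytes
OFFSET_BITS = 12

page_table = {0: 5, 1: 2, 2: 7}   # page -> frame (hypothetical mapping)

def translate(logical_addr):
    p = logical_addr >> OFFSET_BITS        # page number indexes the page table
    d = logical_addr & (PAGE_SIZE - 1)     # offset is copied unchanged
    frame = page_table[p]
    return frame * PAGE_SIZE + d           # physical address

print(hex(translate(0x1234)))   # page 1 -> frame 2, offset 0x234 -> 0x2234
```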

Why paging is slow and how do we make it fast?

  • Every memory access first needs an extra reference to the page table in main memory before the desired location in physical memory can be accessed.
  • This makes paging slower
  • Solution is Translation Look-aside buffer (TLB)
    • A Hardware support to speed-up paging process.
    • It’s a hardware cache, high speed memory.
    • The TLB has a key (page number) and a value (frame number).

Translation Look aside buffer(TLB)

  • Drawbacks of Paging
    • Size of Page table can be very big and therefore it wastes main memory.
    • CPU will take more time to read a single word from the main memory
  • Decrease the page table size
    • The page table size can be decreased by increasing the page size but it will cause internal fragmentation and there will also be page wastage.
    • Other way is to use multilevel paging but that increases the effective access time therefore this is not a practical approach
  • How to decrease the effective access time
    • To overcome these many drawbacks in paging, we have to look for a memory that is cheaper than the register and faster than the main memory
    • i.e use Cache memory also known as TLB
    • So that the time taken by the CPU to access page table again and again can be reduced and it can only focus to access the actual word.
  • Locality of reference
    • A TLB is small; it holds far fewer entries than a full page table.
    • So it may not be able to hold the whole page table.
    • Therefore locality of reference is used to keep only the recently used part of the page table in it

TLB hit and miss

  • A TLB hit is the condition where the desired entry is found in the TLB.
    • If this happens, then the CPU simply accesses the actual location in the main memory.
  • If the entry is not found in the TLB (TLB miss), then the CPU has to access the page table in the main memory and then access the actual frame in the main memory.
  • Therefore, in the case of a TLB hit, the effective access time will be less than in the case of a TLB miss.
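The hit/miss cost difference can be made concrete with the usual effective-access-time formula; the timings below are illustrative assumptions:

```python
# Sketch: effective access time (EAT) with a TLB.
# EAT = h*(t + m) + (1 - h)*(t + 2m), where h = hit ratio,
# t = TLB lookup time, m = main-memory access time.
def eat(hit_ratio, tlb_ns=10, mem_ns=100):
    hit = tlb_ns + mem_ns            # TLB hit: one memory access
    miss = tlb_ns + 2 * mem_ns       # miss: page-table access + actual access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(eat(0.8))   # 0.8*110 + 0.2*210 = 130.0 ns
```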

Virtual Memory

  • Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of the main memory.
  • It allows the execution of processes that are not completely in the memory.
  • It provides user an illusion of having a very big main memory.
  • This is done by treating a part of secondary memory as the main memory
  • This space in secondary memory is called Swap-space
  • Physical memory is limited
    • In many cases, the entire program is not needed at the same time
    • The program is partially loaded into memory; whichever part is required first is loaded
  • Pros
    • Programs can be larger than physical memory.
    • More programs could be run at the same time
    • The degree of multi-programming will be increased
  • Cons
    • The system can become slower as swapping takes time
    • Thrashing may occur

What is a Page Fault?

  • If the referred page is not present in the main memory then there will be a miss and the concept is called Page miss or page fault
  • The CPU has to access the missed page from the secondary memory.
  • If the number of page fault is very high then the effective access time of the system will become very high.
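A quick sketch of how strongly the page-fault rate drives effective access time; the memory and fault-service timings below are illustrative assumptions:

```python
# Sketch: EAT = (1 - p)*memory_time + p*fault_service_time,
# assuming ~100 ns memory access and ~8 ms page-fault service.
def eat(fault_rate, mem_ns=100, fault_service_ns=8_000_000):
    return (1 - fault_rate) * mem_ns + fault_rate * fault_service_ns

print(eat(0.0))     # 100.0 ns with no faults
print(eat(0.001))   # one fault per 1000 accesses already slows access ~80x
```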

What is Thrashing?

  • If the page-fault rate is too high, say more than 50% of page references are page faults,
    • then this condition is called Thrashing
  • A system is thrashing when it spends more time in servicing the page faults than executing processes

Demand Paging

  • Demand Paging is a popular method of virtual memory management
  • The pages of a process which are least used, get stored in the secondary memory.
  • A page is copied to the main memory when its demand is made, or page fault occurs.
  • There are various page replacement algorithms which are used to determine the pages which will be replaced.
  • Rather than swapping the entire process into memory, we use Lazy Swapper
  • A lazy swapper never swaps a page into memory unless that page will be needed.
  • Since we are viewing a process as a sequence of pages, rather than one large contiguous address space, using the term Swapper here is technically incorrect
  • A swapper manipulates entire processes, whereas a Pager is concerned with individual pages of a process

According to the concept of Virtual Memory, in order to execute a process, only a part of it needs to be present in the main memory, which means that only a few of its pages will be in the main memory at any time.

However, deciding which pages need to be kept in the main memory and which in the secondary memory is difficult, because we cannot say in advance which page a process will require at a particular time.

Demand Paging was introduced to overcome exactly this problem.

How Demand Paging works?

  • When a process is to be swapped-in, the pager guesses which pages will be used.
  • Instead of swapping in a whole process, the pager brings only those pages into memory.
    • Thus, it avoids reading into memory pages that will not be used anyway.
  • This way, the OS decreases the swap time and the amount of physical memory needed.
  • The valid-invalid bit scheme in the page table is used to distinguish between pages that are in memory and that are on the disk.
    • Valid-invalid bit 1 means, the associated page is both legal and in memory.
    • Valid-invalid bit 0 means, the page either is not valid (not in the LAS of the process) or is valid but is currently on the disk.

Access page that is not in memory(handle page fault)

  • If the process tries to access a page that was not brought into memory, the access to that invalid-marked page causes a page fault
  • The page fault causes a trap to the OS.
  • The procedure to handle the page fault:
    • Check an internal table (in PCB of the process) to determine whether the reference was valid or invalid memory access.
    • If the reference was invalid, the OS terminates the process (illegal memory access).
    • If the reference is valid but the page is not yet in memory, the pager swaps it in:
    • We find a free frame (from free-frame list)
    • Schedule a disk operation to read the desired page into the newly allocated frame.
    • When the disk read is complete, we modify the page table to indicate that the page is now in memory.
    • Restart the instruction that was interrupted by the trap.
      • The process can now access the page as though it had always been in memory.
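The fault-handling steps above can be sketched in Python; every data structure here (the valid-page set, resident map, free-frame list, and disk) is a hypothetical stand-in for the real OS structures:

```python
# Sketch of page-fault handling: validity check, free-frame allocation,
# disk read, page-table update, and restart of the access.
valid_pages = {0, 1, 2}          # pages in the process's logical address space
in_memory = {0: 4}               # page -> frame, for pages already resident
free_frames = [7, 9]             # free-frame list
disk = {1: "page-1-data", 2: "page-2-data"}
memory = {}

def access(page):
    if page not in valid_pages:
        raise MemoryError("invalid reference: terminate process")
    if page in in_memory:                  # resident: no fault
        return in_memory[page]
    frame = free_frames.pop()              # 1. find a free frame
    memory[frame] = disk[page]             # 2. disk read into that frame
    in_memory[page] = frame                # 3. update the page table
    return frame                           # 4. restart the access

print(access(1))   # page fault serviced, page 1 now resident
print(access(1))   # second access finds the page in memory
```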

Pure Demand Paging

  • In the extreme case, we can start executing a process with no pages in memory
  • The OS sets the instruction pointer to the first instruction of the process, which is not in memory.
  • The process immediately faults for that page, and the page is brought into memory.
  • Never bring a page into memory until it is required

Page Replacement Algorithms

  • Whenever page fault occurs and no free frames/pages are available
    • OS has to replace some page to load the required pages.
    • Some allocated page should be swapped out from the frame and new page is to be swapped into the freed frame
  • Which page should be replaced is decided by a page replacement algorithm
    • Its aim is to have the minimum number of page faults

First In First Out (FIFO)

  • Allocate frame to the page as it comes into the memory by replacing the oldest page
  • Easy to implement
  • Performance is not always good
    1. The page replaced may be an initialization module that was used long time ago(Good replacement candidate)
    2. The page may contain a heavily used variable that was initialized early and is in constant use. (Will again cause a page fault)
  • Belady’s anomaly is present
    1. In the case of the LRU and optimal page replacement algorithms,
      • it is seen that the number of page faults is reduced if we increase the number of frames.
      • However, Belady found that in the FIFO page replacement algorithm, the number of page faults may increase with an increase in the number of frames.
    2. This is the strange behavior shown by FIFO algorithm in some of the cases.
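A minimal FIFO simulation, run on the classic reference string that exhibits Belady's anomaly:

```python
# Sketch: FIFO page replacement. With this reference string, 3 frames
# give 9 faults but 4 frames give 10 - Belady's anomaly.
from collections import deque

def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.remove(queue.popleft())   # evict the oldest page
        frames.add(page)
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: more frames, more faults
```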

Optimal page replacement

  • An ideal algorithm, used for benchmarking other algorithms
  • Find a page that is never referenced in the future
  • If such a page exists, replace this page with new page.
  • If no such page exists, find a page that is referenced farthest in future. Replace this page with new page.
  • Lowest page fault rate among any algorithm.
  • Difficult to implement as OS requires future knowledge of reference string which is kind of impossible. (Similar to SJF scheduling)
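A sketch of the optimal (OPT/MIN) policy; note that it needs the whole reference string up front, which is exactly why it works only as a benchmark:

```python
# Sketch: optimal replacement. On a fault with full frames, evict the
# resident page whose next use is farthest in the future (or never).
def optimal_faults(refs, num_frames):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            def next_use(p):
                future = refs[i + 1:]
                # never-used-again pages are the best victims
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))   # 7 faults, fewer than FIFO or LRU
```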

Least-recently used (LRU)

  • We can use the recent past as an approximation of the near future; then we can replace the page that has not been used for the longest period of time
  • Can be implemented by two ways
    • Counters
      • Associate time field with each page table entry.
      • Replace the page with smallest time value.
    • Stack
      • Keep a stack of page numbers.
      • Whenever a page is referenced, it is removed from the stack & put on the top
      • By this, the most recently used page is always on the top, & the least recently used is always at the bottom
      • As entries might be removed from the middle of the stack, a doubly linked list can be used.
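The stack scheme above maps naturally onto Python's OrderedDict (which is backed by a doubly linked list), giving a compact LRU sketch:

```python
# Sketch: LRU replacement using an ordered map as the "stack" - most
# recently used at one end, least recently used at the other.
from collections import OrderedDict

def lru_faults(refs, num_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # referenced: move to the top
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)     # evict the least recently used
        frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10 faults on this string
```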

Counting-based page replacement

  • Keep a counter of the number of references that have been made to each page
  • Least frequently used (LFU)
    • Actively used pages should have a large reference count.
    • Replace page with the smallest count.
  • Most frequently used (MFU)
    • Based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
  • Neither MFU nor LFU replacement is common
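Both counting-based policies can be sketched with one shared counter; tie-breaking among equal counts is left to Python's min/max here, which is an implementation choice of this sketch:

```python
# Sketch: counting-based replacement. LFU evicts the resident page with
# the smallest reference count; MFU evicts the one with the largest.
from collections import Counter

def counting_faults(refs, num_frames, mfu=False):
    frames, counts, faults = set(), Counter(), 0
    for page in refs:
        counts[page] += 1
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            pick = max if mfu else min
            frames.remove(pick(frames, key=lambda p: counts[p]))
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(counting_faults(refs, 3))            # LFU fault count
print(counting_faults(refs, 3, mfu=True))  # MFU fault count
```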

Technique to Handle Thrashing

  • Working set model
    • This model is based on the concept of the Locality Model
    • The basic principle states that if we allocate enough frames to a process to accommodate its current locality,
    • it will only fault whenever it moves to some new locality.
    • But if the allocated frames are fewer than the size of the current locality, the process is bound to thrash.
  • Page Fault frequency
    • Thrashing has a high page-fault rate.
  • We want to control the page-fault rate.
  • When it is too high, the process needs more frames.
    • Conversely, if the page-fault rate is too low, then the process may have too many frames.
  • We establish upper and lower bounds on the desired page fault rate.
  • If the pf-rate exceeds the upper limit, allocate the process another frame; if the pf-rate falls below the lower limit, remove a frame from the process. By controlling the pf-rate, thrashing can be prevented.

2. Segmentation

  • Paging divides the memory into fixed-sized blocks; on the other hand,
  • Segmentation divides the user program into variable-sized blocks known as segments or sections
  • Segmentation divides a process into smaller logical subparts known as modules
  • Modern System architecture provides both segmentation and paging implemented in some hybrid approach.

Why Segmentation is required?

  • Problems with the paging technique
    • If the code size is greater than the page size,
    • a function or piece of code can be split across more than one page

Advantages:

  • No internal fragmentation.
  • One segment has a contiguous allocation, hence efficient working within segment.
  • The size of segment table is generally less than the size of page table.
  • It results in a more efficient system because the compiler keeps the same type of functions in one segment.

Disadvantages:

  • External fragmentation.
  • Segments of different sizes are not well suited for swapping.

Paging VS Segmentation

  1. Paging divides program into fixed size pages
    • Segmentation divides program into variable size segments.
  2. Paging: OS is responsible
    • Segmentation: Compiler is responsible.
  3. Paging is faster than segmentation
    • Segmentation is slower than paging
  4. Paging is closer to Operating System
    • Segmentation is closer to User
  5. Paging suffers from internal fragmentation
    • Segmentation suffers from external fragmentation
  6. Paging: There is no external fragmentation
    • Segmentation: There is no internal fragmentation
  7. Paging: Logical address is divided into page number and page offset
    • Segmentation: Logical address is divided into segment number and segment offset
  8. Page table is used to maintain the page information.
    • Segment Table maintains the segment information
  9. Page table entry has the frame number and some flag bits to represent details about pages.
    • Segment table entry has the base address of the segment and some protection bits for the segments.
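To contrast with the paged translation earlier, a sketch of segmented address translation; the base and limit values below are hypothetical:

```python
# Sketch: segmented address translation. Each segment-table entry holds a
# base and a limit; the offset is checked against the limit before being
# added to the base.
segment_table = {0: (1400, 1000),   # segment -> (base, limit)
                 1: (6300, 400),
                 2: (4300, 1100)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                    # protection check
        raise MemoryError("segmentation fault: offset beyond limit")
    return base + offset                   # physical address

print(translate(2, 53))    # 4300 + 53 = 4353
```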