Concurrency

  • Concurrency is the execution of multiple instruction sequences at the same time.
  • It occurs in the operating system when several process threads run in parallel.

Thread Scheduling

  • Threads are scheduled for execution based on their priority.
  • Even though threads are executing within the runtime, all threads are assigned processor time slices by the operating system.

Thread Context Switching

  • The OS saves the current state of a thread & switches to another thread of the same process.
  • A TCB (Thread Control Block), analogous to the PCB, stores the thread's state during a context switch.
  • It doesn’t include switching of the memory address space, i.e., no memory switching.
    • But the program counter, registers & stack are switched.
  • Faster than process context switching.
  • The CPU’s cache state is preserved.
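As a small illustration of the notes above: threads of one process share the same address space (which is why no memory switching is needed), while each thread still has its own stack and local variables. A minimal Python sketch (the names are illustrative):

```python
import threading

shared = []  # lives in the process's single address space, visible to every thread

def worker(name):
    local = f"{name}-local"   # a local variable on this thread's own stack
    shared.append((name, local))

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()

# Both threads wrote into the same shared list: same address space, separate stacks.
```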

Benefits of Multi-threading

  • Responsiveness
  • Resource sharing: Efficient resource sharing.
  • Economy:
    • It is more economical to create and context-switch threads than processes.
    • Allocating memory and resources for process creation is costly,
    • so it is better to divide tasks into threads of the same process.
  • Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.

NOTE

How each thread get access to the CPU?

  • Each thread has its own program counter.
  • Depending on the thread scheduling algorithm, the OS schedules these threads.
  • The OS fetches instructions corresponding to the Program Counter of that thread (TPC) and executes them.

Would a single-CPU system gain from the multi-threading technique?

  • No, this won’t give any speed-up for CPU-bound work.
  • The two threads still have to share that single CPU via context switching.

Problems in Concurrency

  • Process synchronization techniques play a key role in maintaining the consistency of shared data

Race Conditions

  • A race condition is an undesirable situation that usually happens when multiple threads access a shared resource at the same time,
  • due to which access to the shared data is not synchronized and the result depends on the order of execution.
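The lost-update effect can be reproduced in Python by splitting the read-modify-write of a shared counter so the scheduler can interleave the two threads (the `time.sleep(0)` is there only to force a switch inside the unprotected critical section):

```python
import threading
import time

counter = 0

def unsafe_increment(n):
    global counter
    for _ in range(n):
        tmp = counter       # read the shared data
        time.sleep(0)       # yield: the other thread may run right here
        counter = tmp + 1   # write back - may overwrite the other thread's update

threads = [threading.Thread(target=unsafe_increment, args=(100,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

# counter is usually well below 200, because interleaved updates get lost.
print(counter)
```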

Critical Section(CS)

  • The part of the code in the program where shared resources are accessed by processes.
  • Identifying critical sections helps in tracking down possible race scenarios

Solution to Race Condition

  • Atomic operations
    • Make the critical code section an atomic operation, i.e., one that executes as a single indivisible step.
  • Mutual Exclusion using locks(Mutex locks)
    • Locks can be used to implement mutual exclusion and avoid race conditions by allowing only one thread/process into the critical section at a time.
    • Disadvantages:
      • Contention: many threads may block or spin waiting for the same lock.
      • If the thread that acquired the lock dies, all other threads wait forever.
      • Deadlock issues
      • Hard to debug
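With a mutex lock, the same shared-counter increment becomes safe: only one thread at a time executes the critical section. A minimal sketch using Python's `threading.Lock`:

```python
import threading
import time

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:             # acquire: only one thread inside at a time
            tmp = counter
            time.sleep(0)      # even if we yield here, other threads block on the lock
            counter = tmp + 1  # release happens automatically on leaving `with`

threads = [threading.Thread(target=safe_increment, args=(100,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

# counter == 200: no updates are lost.
print(counter)
```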
  • Semaphores
    • It is a variable on which read, modify and update happen atomically in kernel mode
    • Should be implemented in kernel mode to avoid preemption
    • Types:
      • Counting semaphore
        • Used where more than one process allowed to enter Critical section
      • Binary semaphore(mutex)
        • Used where only one process allowed to enter Critical section
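A counting semaphore initialized to k admits up to k threads into the critical section at once (the binary semaphore is just the k = 1 case). A sketch with Python's `threading.Semaphore`, tracking how many threads are inside simultaneously (the counters and their guard lock are only for measurement):

```python
import threading

sem = threading.Semaphore(3)   # counting semaphore: at most 3 threads inside
inside = 0                     # threads currently inside the section
peak = 0                       # highest value `inside` ever reached
guard = threading.Lock()       # protects the two bookkeeping counters

def worker():
    global inside, peak
    with sem:                  # Wait(): decrement, block while the value is 0
        with guard:
            inside += 1
            peak = max(peak, inside)
        with guard:
            inside -= 1
        # Signal(): increment happens automatically on leaving `with sem`

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()

# peak never exceeds 3, however the 10 threads interleave.
```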

Synchronization

  • In order to synchronize the cooperative processes, our main task is to solve the critical section problem
  • We need to provide a solution in such a way that the following conditions can be satisfied.
  • Primary
    • Mutual Exclusion
      • Only one process should be inside the critical section at a time.
    • Progress
      • If one process doesn’t need to enter the critical section, it should not stop other processes from getting into the critical section.
  • Secondary
    • Bounded Waiting
      • A process must not wait endlessly to get into the critical section.
    • Architectural Neutrality (Portability)
      • If our solution is working fine on one architecture then it should also run on the other ones as well.

Synchronization Mechanism with busy waiting

# HLL (not safe: a thread can be preempted between the test and the set)
while(lock != 0);   # busy-wait until lock is free
	lock = 1    # acquire lock

# LLL
1 load lock, R0     # read lock into register
2 cmp R0, 0         # compare with 0
3 jmp step1         # if R0 != 0, lock is taken - retry
4 store #1, lock    # set lock = 1 (race window between steps 1 and 4!)
	Critical section part (CS)
5 store #0, lock    # release lock

Test and Set Lock (TSL)

  • Loads the value of the lock variable into the local register R0 and sets the lock to 1 in one atomic step
1 load lock, R0
2 store #1, lock
3 cmp R0, 0
4 jmp step1 # if R0 != 0, jump
	Critical section part (CS)
5 store #0, lock

The first two instructions are combined into a single atomic TSL instruction:
1 TSL lock, R0
2 cmp R0, 0
3 jmp step1 # if R0 != 0, jump
	Critical section part (CS)
4 store #0, lock
  • Issues
    • Busy waiting
    • Priority inversion
      • P1 is in the CS when P2 (higher priority) arrives.
      • P1 is preempted, P2 is loaded.
      • But P2 can’t enter the CS, as it is already locked by P1, and P1 never runs to release it.

Peterson’s Solution

  • Issues
    • Busy waiting
    • Priority inversion
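For reference, Peterson's algorithm (for two processes) uses a `flag` per process plus a shared `turn` variable: each process announces interest, politely yields the turn, and spins only while the other is interested and holds the turn. A sketch in Python (this works under CPython's sequentially consistent bytecode execution; on real hardware it would additionally need memory barriers):

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter the CS
turn = 0                # whose turn it is to wait
counter = 0             # shared data protected by the algorithm

def process(i, n):
    global turn, counter
    j = 1 - i
    for _ in range(n):
        flag[i] = True              # entry section: announce interest
        turn = j                    # give priority to the other process
        while flag[j] and turn == j:
            pass                    # busy-wait (the known drawback)
        counter += 1                # critical section
        flag[i] = False             # exit section

t0 = threading.Thread(target=process, args=(0, 200))
t1 = threading.Thread(target=process, args=(1, 200))
t0.start(); t1.start()
t0.join(); t1.join()

# counter == 400: mutual exclusion held for every increment.
print(counter)
```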

Synchronization Mechanism without busy waiting

  • The above CS solutions involve busy waiting
  • Below are some classic CS problems solved without busy waiting
  • Classic Synchronization Problem
    • Bounded-buffer (or Producer-Consumer) Problem
    • Readers and Writers Problem
    • Dining-Philosophers Problem

Producer-Consumer Problem

  • Producer is a process which is able to produce data/item
  • Consumer is a Process that is able to consume the data/item produced by the Producer
  • Both Producer and Consumer share a common memory buffer
  • Memory is of fixed size.
  • The producer produces the data into the buffer and the consumer consumes the data from the buffer.

Problems Statement

  • Producer should not produce any data when the buffer is full
  • Consumer should not consume any data when the buffer is empty

Solution -> Without Semaphores

  • Not more than one process should access the buffer at a time i.e mutual exclusion should be there.

  • Issue (the lost wakeup):

    • If Wake(consumer) is sent before the consumer has actually gone to sleep, the wakeup is lost
    • The consumer will later go to sleep,
    • and the producer will also go to sleep once the buffer is full,
      • thinking that the consumer will wake it up
    • Deadlock: both sleep forever
N = 100     # slots in buffer
count = 0   # items in buffer

def producer():
    while True:
        item = produce_item()             # producer produces an item
        if count == N: Sleep(producer)    # buffer full -> producer sleeps
        insert_item(item)
        count = count + 1
        if count == 1: Wake(consumer)     # buffer was empty -> wake consumer

def consumer():
    while True:
        if count == 0: Sleep(consumer)    # buffer empty -> consumer sleeps
        item = remove_item()
        count = count - 1
        if count == N - 1: Wake(producer) # buffer was full -> wake producer

Solution with Semaphores

  • Not more than one process should access the buffer at a time i.e mutual exclusion should be there.
  • To solve the Producer-Consumer problem, three semaphore variables are used:
    • Full – tracks filled slots, initialized to 0
    • Empty – tracks empty slots, initialized to n
    • Mutex
      • Used to acquire a lock on the buffer; value is only 0 (locked) or 1 (unlocked/free)
  • Signal() – increases the semaphore value by 1 (+1)
  • Wait() – decreases the semaphore value by 1 (-1); blocks while the value is 0
# Producer
Wait(empty)   # wait until empty > 0, then decrement it
Wait(mutex)   # lock the buffer (mutex = 0)
add()         # add data to buffer
Signal(mutex) # unlock the buffer (mutex = 1)
Signal(full)  # increase the count of filled slots

# Consumer
Wait(full)    # wait until full > 0, then decrement it
Wait(mutex)   # lock the buffer (mutex = 0)
consume()     # consume data
Signal(mutex) # unlock the buffer (mutex = 1)
Signal(empty) # increase the count of empty slots
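The pseudocode above maps directly onto Python's `threading.Semaphore`: `acquire()` plays the role of Wait() and `release()` the role of Signal(). A runnable sketch with a small fixed-size buffer (buffer size and item count are arbitrary):

```python
import threading
from collections import deque

N = 5
buffer = deque()
empty = threading.Semaphore(N)  # empty slots, starts at N
full = threading.Semaphore(0)   # filled slots, starts at 0
mutex = threading.Lock()        # mutual exclusion on the buffer

ITEMS = 50
consumed = []

def producer():
    for i in range(ITEMS):
        empty.acquire()         # Wait(empty): block until a slot is free
        with mutex:             # Wait(mutex) ... Signal(mutex)
            buffer.append(i)    # add data to buffer
        full.release()          # Signal(full)

def consumer():
    for _ in range(ITEMS):
        full.acquire()          # Wait(full): block until an item exists
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()         # Signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

# All 50 items arrive in order and the buffer never exceeds N slots.
```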

Reader-Writer Problem
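The section above only names the problem, so here is a common "first readers-writers" sketch, under the usual assumptions: many readers may share the resource, a writer needs it exclusively, the first reader locks writers out, and the last reader lets them back in (all names below are illustrative):

```python
import threading

rw = threading.Lock()          # held by a writer, or by the reader group as a whole
count_lock = threading.Lock()  # protects read_count itself
read_count = 0                 # number of readers currently active
log = []

def reader(i):
    global read_count
    with count_lock:
        read_count += 1
        if read_count == 1:
            rw.acquire()       # first reader blocks writers
    log.append(("read", i))    # reading happens concurrently with other readers
    with count_lock:
        read_count -= 1
        if read_count == 0:
            rw.release()       # last reader lets writers back in

def writer(i):
    with rw:                   # exclusive access to the shared resource
        log.append(("write", i))

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=(0,)))
for t in threads: t.start()
for t in threads: t.join()
```

Note this variant can starve the writer if readers keep arriving; writer-preference versions add an extra semaphore to fix that.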

The Dining Philosophers Problem

Solution

  • We must use some method to avoid deadlock and make the solution work:
    • Allow at most 4 philosophers to be sitting simultaneously.
    • Allow a philosopher to pick up his forks only if both are available, and to do this he must pick them up in a critical section (atomically).
    • Odd-even rule: an odd philosopher picks up his left fork first and then his right fork, whereas an even philosopher picks up his right fork first and then his left fork.
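The odd-even rule can be demonstrated directly: because neighbouring philosophers reach for their shared fork in opposite orders, the circular wait needed for deadlock can never form. A sketch with one `threading.Lock` per fork (table size and meal count are arbitrary):

```python
import threading

PHILOSOPHERS = 5
MEALS = 20
forks = [threading.Lock() for _ in range(PHILOSOPHERS)]
eaten = [0] * PHILOSOPHERS

def philosopher(i):
    left = forks[i]
    right = forks[(i + 1) % PHILOSOPHERS]
    # Odd-even rule: odd philosophers take the left fork first,
    # even philosophers take the right fork first.
    first, second = (left, right) if i % 2 == 1 else (right, left)
    for _ in range(MEALS):
        with first:
            with second:
                eaten[i] += 1   # eat while holding both forks

threads = [threading.Thread(target=philosopher, args=(i,))
           for i in range(PHILOSOPHERS)]
for t in threads: t.start()
for t in threads: t.join()

# Every philosopher finishes all meals - the joins return, so no deadlock.
```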