Friday, December 3, 2010

Reader-Writer Problem

semaphore readMutex = 1      // protects readcount
semaphore writeMutex = 1     // protects writecount
semaphore allowOneReader = 1 // at most one reader queues on readMain
int readcount = 0
int writecount = 0
semaphore readMain = 1       // held by writers to block new readers
semaphore writeMain = 1      // held by readers to block writers

Reader protocol to read

allowOneReader.wait();
readMain.wait();
    readMutex.wait();
        readcount++;
        if ( readcount == 1 )
            writeMain.wait();    // first reader locks out writers
    readMutex.signal();
readMain.signal();
allowOneReader.signal();

<reader critical section>

readMutex.wait();
readcount--;
if ( readcount == 0 )
    writeMain.signal();          // last reader readmits writers
readMutex.signal();



Writer protocol to write

writeMutex.wait();
    writecount++;
    if ( writecount == 1 )
        readMain.wait();         // first writer locks out new readers
writeMutex.signal();
writeMain.wait();

<writer critical section>

writeMain.signal();

writeMutex.wait();
writecount--;
if ( writecount == 0 )
    readMain.signal();           // last writer readmits readers
writeMutex.signal();
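The protocol above can be sketched as a runnable program using Python's threading.Semaphore; the semaphore names follow the pseudocode, while the shared list and the reader/writer helper functions are illustrative assumptions standing in for the critical sections.

```python
import threading

readMutex = threading.Semaphore(1)       # protects readcount
writeMutex = threading.Semaphore(1)      # protects writecount
allowOneReader = threading.Semaphore(1)  # at most one reader queues on readMain
readMain = threading.Semaphore(1)        # held by writers to block new readers
writeMain = threading.Semaphore(1)       # held by readers to block writers
readcount = 0
writecount = 0
shared = []                              # the resource being read/written

def reader(results):
    global readcount
    with allowOneReader:
        with readMain:
            with readMutex:
                readcount += 1
                if readcount == 1:
                    writeMain.acquire()   # first reader locks out writers
    results.append(list(shared))          # <reader critical section>
    with readMutex:
        readcount -= 1
        if readcount == 0:
            writeMain.release()           # last reader readmits writers

def writer(item):
    global writecount
    with writeMutex:
        writecount += 1
        if writecount == 1:
            readMain.acquire()            # first writer locks out new readers
    with writeMain:
        shared.append(item)               # <writer critical section>
    with writeMutex:
        writecount -= 1
        if writecount == 0:
            readMain.release()            # last writer readmits readers
```

With several reader and writer threads running concurrently, every writer's item ends up in the shared list and every reader sees a consistent snapshot.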

Producer Consumer Problem (bounded) - Semaphores

Bounded by N buffers.
Solution:
semaphore full = 0   // items available to consume
semaphore empty = N  // free slots in the buffer
semaphore mutex = 1  // protects the buffer
producer() {
  while(true) {
    produce()
    empty.wait()    // claim a free slot; blocks if the buffer is full
    mutex.wait()    // lock the buffer
    append()
    mutex.signal()  // release the lock
    full.signal()   // one more item is ready to consume
  }
}
consumer() {
  while(true) {
    full.wait()     // wait for an item; blocks if the buffer is empty
    mutex.wait()    // lock the buffer
    take()
    mutex.signal()  // release the lock
    empty.signal()  // one more slot is free
    consume()
  }
}
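A minimal runnable sketch of the bounded solution, assuming N = 4 and a plain Python list as the buffer; the loops run a fixed number of items instead of while(true) so the program terminates.

```python
import threading

N = 4
full = threading.Semaphore(0)    # items available to consume
empty = threading.Semaphore(N)   # free slots in the buffer
mutex = threading.Semaphore(1)   # protects the buffer
buffer = []

def producer(items):
    for item in items:           # stands in for the while(true)/produce() loop
        empty.acquire()          # claim a free slot; blocks if buffer is full
        with mutex:
            buffer.append(item)  # append()
        full.release()           # one more item ready to consume

def consumer(n, out):
    for _ in range(n):
        full.acquire()           # wait for an item; blocks if buffer is empty
        with mutex:
            item = buffer.pop(0) # take() from the front
        empty.release()          # one more slot is free
        out.append(item)         # consume()
```

With one producer and one consumer the buffer never holds more than N items, and the consumer receives the items in production order.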

Producer Consumer Problem (unbounded) - Semaphores

Problem:
 producer () {
    while(true) {
        produce()
        append()
     }
}
Consumer() {
     while(true) {
         take()
         consume()
    }
}
With multiple threads this causes problems: the consumer may call take() on an empty buffer, and concurrent append()/take() calls can corrupt it.
Solution:

semaphore mutex = 1
semaphore full = 0
producer() {
  while(true) {
    produce()
    mutex.wait()    // lock the buffer
    append()
    mutex.signal()  // release the lock
    full.signal()   // one more item available
  }
}
consumer() {
  while(true) {
    full.wait()     // block until the producer has appended an item
    mutex.wait()    // lock the buffer
    take()
    mutex.signal()  // release the lock
    consume()
  }
}
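The unbounded variant is the same sketch minus the empty semaphore, since there is no slot limit to enforce; again the list buffer and fixed-length loops are assumptions for a terminating example.

```python
import threading

full = threading.Semaphore(0)   # counts items in the unbounded buffer
mutex = threading.Semaphore(1)  # protects the buffer
buffer = []

def producer(items):
    for item in items:
        with mutex:
            buffer.append(item)  # append()
        full.release()           # increment full: one more item available

def consumer(n, out):
    for _ in range(n):
        full.acquire()           # block until the producer has appended
        with mutex:
            out.append(buffer.pop(0))  # take(), then consume()
```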

Wednesday, December 1, 2010

semaphores

A semaphore is an integer with the following operations

1. Initialize: set the counter to a non-negative value

2. Wait: decrement the counter; if the result is negative, block

3. Signal: increment the counter and wake one blocked thread, if any
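The three operations map directly onto Python's threading.Semaphore (its internal counter never goes negative, but the blocking behavior is equivalent); a tiny demonstration, where the worker function and the done list are assumptions for illustration:

```python
import threading

sem = threading.Semaphore(0)  # 1. initialize with value 0
done = []

def worker():
    sem.acquire()             # 2. wait: blocks until someone signals
    done.append("woke up")

t = threading.Thread(target=worker)
t.start()                     # worker blocks on the semaphore
sem.release()                 # 3. signal: wakes the blocked worker
t.join()
```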

Tuesday, November 30, 2010

Linux - MMU

– 4k page size
– A three-level page table to handle 64-bit addresses


Copy on Write mechanism for memory management

Linux Kernel: Copy on Write

Copy on Write is an important technique in Linux kernel memory management. The basic idea is to avoid creating unnecessary copies of structures when a new process is created. When a child process is forked, the kernel initially gives both parent and child read-only access to the shared pages. When either process tries to write to one of these pages, the hardware raises a page fault, and the kernel responds by creating a private copy of the page for the writing process. Thus, the data is actually copied only when a process actually tries to write.
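The visible effect of this (at the userspace level, not the kernel internals) can be demonstrated with fork() on a POSIX system; the list and pipe below are assumptions for the demonstration.

```python
import os

data = [1, 2, 3]
r, w = os.pipe()          # to report the child's view back to the parent
pid = os.fork()           # pages are shared read-only at this point
if pid == 0:
    data.append(4)        # writing triggers the fault; the kernel copies
                          # the page, so only the child's copy changes
    os.write(w, str(len(data)).encode())
    os._exit(0)
os.waitpid(pid, 0)
child_len = int(os.read(r, 16).decode())
# the child saw 4 elements, but the parent's list is untouched
```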


Read Copy Update
==============
The basic idea behind it is that when a resource is modified, a new updated structure is put in its place and the old structure is not discarded right away: it is kept until all references to it held by other processes have been dropped. This can be seen as similar to the concept of garbage collection.
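A toy sketch of that idea (not the kernel's RCU API): updaters copy the structure, modify the copy, and publish it by swapping a reference; readers take no lock, and the old version is reclaimed by garbage collection once nobody references it. The config dict and functions are assumptions for illustration.

```python
import threading

config = {"timeout": 1}         # the shared, read-mostly structure
update_lock = threading.Lock()  # serializes updaters only

def read_timeout():
    snapshot = config           # reader grabs the current version; it may
    return snapshot["timeout"]  # keep using it even after an update lands

def update_timeout(value):
    global config
    with update_lock:
        new = dict(config)      # read: copy the old structure
        new["timeout"] = value  # copy: modify the private copy
        config = new            # update: publish; the old dict survives
                                # until the last reader drops its reference
```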


Translation Lookaside Buffer, aka TLB

In a few words, from the Wikipedia article: a CPU cache used by memory-management hardware to improve the speed of virtual address translation. Much of the information below comes from that article.
The idea is that CPUs keep an associative memory to cache page table entries (PTEs) of virtual pages which were recently accessed.
When the CPU must access virtual memory, it looks up the corresponding entry in the TLB.
If an entry is found (a TLB hit), the CPU uses the cached PTE to compute the physical address.
If it is not found (a TLB miss), the miss is handled in one of two ways, depending on the architecture:
  • In hardware: the CPU itself walks the page table to find the correct PTE. If one is found, the TLB is updated; if none is found, the CPU raises a page fault, which is then handled by the operating system.
  • In software: the CPU raises a TLB miss fault. The operating system intercepts it and invokes the corresponding handler, which walks the page table. If the PTE is found and present, the TLB is updated; if it is not present, the page fault handler takes over.

Replacement policy

If the TLB is full, some entries must be replaced. Depending on the miss-handling strategy, different replacement policies exist:
  • Least Recently Used (aka LRU)
  • Not Recently Used (aka NRU)
Even when the TLB miss mechanism is implemented in software, the replacement strategy can be implemented in hardware.
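The lookup, miss handling, and LRU replacement described above can be combined in a small software simulation; the TLB class, its capacity, and the page-table dict are assumptions for illustration, not how any real MMU is programmed.

```python
from collections import OrderedDict

class TLB:
    """Toy TLB: caches virtual-page -> frame mappings with LRU eviction."""

    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.page_table = page_table  # stands in for walking the page table
        self.entries = OrderedDict()  # insertion order tracks recency

    def translate(self, vpage):
        if vpage in self.entries:               # TLB hit
            self.entries.move_to_end(vpage)     # mark as most recently used
            return self.entries[vpage]
        frame = self.page_table.get(vpage)      # TLB miss: walk the table
        if frame is None:
            raise KeyError("page fault")        # no PTE: page fault
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)    # evict least recently used
        self.entries[vpage] = frame             # update the TLB
        return frame
```

For example, with a two-entry TLB, touching page 0 again before a third page arrives keeps it cached while the least recently used page is evicted.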

Demand Loading

Demand Loading is a method of loading into primary memory (RAM), from secondary memory (disk), only the parts of a program that are currently being executed. The program is logically divided into segments
or pages, which are loaded into primary memory on demand; hence the term demand loading.

system calls - User program

A user program interacts with the O/S through system calls: it accesses the kernel by invoking system calls, usually through a library. In kernel mode, the kernel executes the system call on the user's behalf.


Kernel code executes in kernel mode, with full access to all the physical resources of the computer.