Saturday, January 21, 2012

Cache and Write Policy

A few days back I was reading an article singing the praises of cache performance. As we all know, a cache is meant to improve read and write performance, but I think a distributed cache may not always deliver the high performance you expect. It depends on what sort of coherence policy the distributed cache system adopts.
Whether it is a web cache, a CPU cache or a memory cache, there can be a distributed cache mechanism behind it. For instance, in a multi-processor, multi-cache system the read/write policy may not be straightforward.
For a single cache, a write operation is normally one of two types: write-back or write-through. "Write-back" is deferred writing, implemented with a dirty bit. The write is done on the cache but not in main memory (it is deferred), and the dirty bit (an extra bit on the cache line) is set to 1. Whenever a later read or write lands on that same cache line and the dirty bit is set, two operations happen: the old value is first written back to main memory, and only then does the new read/write proceed. This is the most common approach, because "write-through", which writes to the cache and to main memory at the same time, can be a very costly process.
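The deferred-write behavior above can be sketched in a few lines of Python. This is a toy direct-mapped cache, not any real hardware or library; all names (`CacheLine`, `WriteBackCache`) are illustrative.

```python
# Minimal sketch of a write-back cache with a dirty bit (illustrative only).

class CacheLine:
    def __init__(self):
        self.tag = None      # which address this line currently holds
        self.value = None
        self.dirty = False   # True means memory is stale for this line

class WriteBackCache:
    def __init__(self, num_lines, memory):
        self.lines = [CacheLine() for _ in range(num_lines)]
        self.memory = memory  # backing store: a plain list

    def _line_for(self, addr):
        return self.lines[addr % len(self.lines)]  # direct-mapped placement

    def _evict_if_needed(self, line, addr):
        # On a conflict, flush the old value to memory only if dirty
        # (this is the deferred write).
        if line.tag is not None and line.tag != addr:
            if line.dirty:
                self.memory[line.tag] = line.value
            line.tag = None
            line.dirty = False

    def read(self, addr):
        line = self._line_for(addr)
        self._evict_if_needed(line, addr)
        if line.tag != addr:            # miss: fill from memory
            line.tag = addr
            line.value = self.memory[addr]
        return line.value

    def write(self, addr, value):
        line = self._line_for(addr)
        self._evict_if_needed(line, addr)
        line.tag = addr
        line.value = value
        line.dirty = True               # memory is updated later, on eviction

memory = [0] * 16
cache = WriteBackCache(4, memory)
cache.write(3, 42)
print(memory[3])    # still 0: the write is deferred
cache.write(7, 99)  # address 7 maps to the same line (7 % 4 == 3), forcing write-back
print(memory[3])    # now 42
```

Note how main memory only sees the value once the dirty line is evicted; until then, the cache alone holds the truth.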

In a distributed cache, the write-back operation needs to be synchronized with the copies held in other caches; the rules for doing this are called the coherence policy. There are two ways: either invalidate the line in every other cache that holds the same address, or update every cache that holds it. Again, the prevailing mechanism is to invalidate the other distributed caches rather than update them, since updating is the more costly process.
So there can be a significant penalty as far as coherence is concerned, and the performance gain expected from having multiple caches may be reduced.
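The write-invalidate policy described above can be sketched as follows. This is a toy model, assuming a shared "bus" that every cache snoops; the names (`CoherentCache`, `bus`) are made up for illustration, and writes go straight through to memory to keep the sketch short.

```python
# Toy sketch of write-invalidate coherence: on a write, every other cache
# holding the same address drops its copy (illustrative only).

class CoherentCache:
    def __init__(self, bus, memory):
        self.data = {}        # addr -> value, valid copies only
        self.memory = memory
        self.bus = bus        # list of all caches sharing the memory
        bus.append(self)

    def read(self, addr):
        if addr not in self.data:      # miss: fetch from memory
            self.data[addr] = self.memory[addr]
        return self.data[addr]

    def write(self, addr, value):
        # Invalidate the line in every other cache before writing locally.
        for cache in self.bus:
            if cache is not self:
                cache.data.pop(addr, None)
        self.data[addr] = value
        self.memory[addr] = value      # write through, for simplicity

memory = [0] * 8
bus = []
c1 = CoherentCache(bus, memory)
c2 = CoherentCache(bus, memory)
c1.read(5)          # c1 now caches address 5
c2.write(5, 7)      # c2's write invalidates c1's copy
print(c1.read(5))   # c1 misses and re-fetches the new value: prints 7
```

The cost the post is pointing at is visible here: every write generates invalidation traffic to all the other caches, and the reader that was invalidated pays a fresh miss.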



Friday, January 20, 2012

Amdahl's law and computer performance

In a computer system, performance is one of the most important factors, whether we are talking about a simple program or the operating system itself. The basic question that needs answering is: when only one part of the system is improved, what is the impact on overall performance? That is what Amdahl's law addresses.
The law is especially relevant in parallel computing, where a number of parallel processes share the work.
Amdahl stated it with simple mathematics: if a fraction P of the program is sped up by a factor S, its run time becomes P/S, while the remaining (1-P) of the program runs at its original speed (1x).
Total run time = (1-P)/1 + P/S
Performance (speedup) is inversely proportional to run time = 1/((1-P) + P/S)

Similarly, if the program is divided into parts P1, P2 and P3 with P1 + P2 + P3 = 1, where P1 is sped up 1x (i.e. unchanged), P2 is sped up 2x and P3 is sped up 3x, then the overall speedup is
1/(P1/1 + P2/2 + P3/3). Taking P1 = 50%, P2 = 20% and P3 = 30%, the overall improvement is
1/(0.5/1 + 0.2/2 + 0.3/3) = 1/0.7 = 1.43 (approx)
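The multi-part formula above is easy to check in code. A minimal sketch (the function name `amdahl` is mine, not a standard API):

```python
# Amdahl's law for a program split into parts, each with its own speedup:
#   overall_speedup = 1 / sum(fraction_i / speedup_i)

def amdahl(parts):
    """parts: list of (fraction, speedup) pairs; the fractions must sum to 1."""
    assert abs(sum(f for f, _ in parts) - 1.0) < 1e-9
    return 1.0 / sum(f / s for f, s in parts)

# The example from the text: 50% unchanged, 20% made 2x faster, 30% made 3x faster.
print(round(amdahl([(0.5, 1), (0.2, 2), (0.3, 3)]), 2))  # prints 1.43
```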

Similarly, in the case of parallel programming, if P is the portion of the program that can be parallelized across N processors, then the overall speedup is 1/((1-P) + P/N).
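The parallel form also shows the law's famous ceiling: the serial fraction (1-P) caps the speedup no matter how many processors you add. A small sketch (the function name `parallel_speedup` is mine):

```python
# Amdahl's law for parallelization: only fraction P of the work
# runs on N processors; the rest (1-P) stays serial.

def parallel_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the program parallelized, the speedup approaches
# 1/(1-P) = 20x, no matter how many processors are thrown at it.
for n in (2, 8, 64, 1024):
    print(n, round(parallel_speedup(0.95, n), 2))
```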