A few days back, I was reading an article about caches that praised cache performance, and as we all know, a cache is meant to improve read and write performance. But I think a distributed cache may sometimes not deliver the performance you expect. It depends on what coherence policy the distributed cache system adopts.
Whether it is a web cache, a CPU cache, or a memory cache, it can be part of a distributed caching mechanism. For instance, in a multi-processor, multi-cache system the read/write policy may not be straightforward.
For a single cache, write operations normally come in two flavours: write-back and write-through. "Write back" defers the write and is implemented with a dirty bit: the write goes to the cache but not to main memory (deferred), and the dirty bit (an extra bit per cache line) is set to 1. When a later read or write maps to the same cache line and the dirty bit is set, two operations happen: first the dirty data is written back to main memory, then the new read/write proceeds. This is the most common approach, because "write through", which writes to the cache and to main memory at the same time, can be a very costly process.
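The dirty-bit mechanism described above can be sketched in a few lines. This is a toy model, not real hardware: a direct-mapped cache where writes only mark a line dirty, and main memory is touched only when a dirty line must be evicted. The class and counter names are mine, purely for illustration.

```python
# Toy sketch (assumption: direct-mapped, write-allocate) of a write-back
# cache. Writes set a dirty bit; main memory is updated only when a
# dirty line is evicted to make room for another address.

class WriteBackCache:
    def __init__(self, memory, num_lines=4):
        self.memory = memory              # backing store: dict addr -> value
        self.num_lines = num_lines
        self.lines = [None] * num_lines   # each line: (tag, value, dirty)
        self.writebacks = 0               # counts deferred memory writes

    def _evict(self, index):
        line = self.lines[index]
        if line is not None and line[2]:  # dirty: flush to memory first
            tag, value, _ = line
            self.memory[tag * self.num_lines + index] = value
            self.writebacks += 1
        self.lines[index] = None

    def read(self, addr):
        index, tag = addr % self.num_lines, addr // self.num_lines
        line = self.lines[index]
        if line is None or line[0] != tag:   # miss
            self._evict(index)               # may trigger a write-back
            self.lines[index] = (tag, self.memory[addr], False)
        return self.lines[index][1]

    def write(self, addr, value):
        index, tag = addr % self.num_lines, addr // self.num_lines
        line = self.lines[index]
        if line is None or line[0] != tag:   # allocate the line on a miss
            self._evict(index)
        self.lines[index] = (tag, value, True)  # dirty; memory untouched
```

For example, writing address 1 leaves main memory stale until address 5 (which maps to the same line) forces the dirty line out, and only then does memory see the new value.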
In a distributed cache, a write-back operation needs to be synchronized with the copies held in other caches; the rules governing this are called the coherence policy. There are two approaches: either invalidate every other cache that holds the same address, or update every cache that holds it. The prevailing mechanism is to invalidate the other caches rather than update them, since updating is a costly process.
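The invalidate approach can be sketched as follows. Again this is a toy model with names of my own choosing: several caches share one memory, and a write in one cache removes the address from every peer, so the next read there must miss and re-fetch. (For simplicity the sketch also pushes the write straight to shared memory, rather than modelling a full write-back protocol.)

```python
# Toy sketch of write-invalidate coherence: a write to an address in one
# cache invalidates every other cache's copy of that address, forcing
# later reads in those caches to miss and re-fetch the fresh value.

class CoherentCache:
    def __init__(self, memory, peers):
        self.memory = memory      # shared backing store: dict addr -> value
        self.peers = peers        # shared list of all caches in the system
        self.data = {}            # local copies: addr -> value
        peers.append(self)

    def read(self, addr):
        if addr not in self.data:            # miss: fetch from shared memory
            self.data[addr] = self.memory[addr]
        return self.data[addr]

    def write(self, addr, value):
        for cache in self.peers:             # coherence traffic: invalidate
            if cache is not self:            # the address everywhere else
                cache.data.pop(addr, None)
        self.data[addr] = value
        self.memory[addr] = value            # simplified: update memory too
```

The invalidation loop is the penalty the next paragraph talks about: every write that hits a shared address generates traffic to all the other caches.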
So there is a significant penalty as far as coherence is concerned, and the performance gain you would expect from having multiple caches may be reduced.