    Cache, Write-through, and Write-back
    DB/InMemory 2019. 9. 29. 12:36

    1. Overview

    Write-back caching is a method in which modifications to data in the cache are not copied to the cache source until absolutely necessary. Write-back caching is available on many microprocessors, including all Intel processors since the 80486. With these microprocessors, data modifications (e.g., write operations) to data stored in the L1 cache are not copied to main memory (e.g., RAM) until absolutely necessary. In contrast, a write-through cache performs all write operations in parallel -- data is written to main memory and the L1 cache simultaneously. Caching involves inherent trade-offs between the features below:

    • Size and speed
    • Premium technologies (such as SRAM) versus cheaper, mass-produced commodities (such as DRAM or hard disks)

    2. Motivation of Cache

    2.1 Latency

    A larger resource incurs significant latency for access. This is mitigated by reading in large chunks, in the hope that subsequent reads will be from nearby locations. Prediction or explicit prefetching can also guess where future reads will come from and issue requests ahead of time. If done correctly, the latency is bypassed altogether.
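
    Below is a minimal Python sketch of the chunked-read idea (not from the original post; the class, chunk size, and access pattern are illustrative): every miss fetches a whole chunk from the slow backing store, so nearby subsequent reads are served from memory and the backing store is touched only once.

    class ChunkedReader:
        def __init__(self, data, chunk_size=4096):
            self.data = data                  # stands in for a slow backing store
            self.chunk_size = chunk_size
            self.cached_chunk_index = None
            self.cached_chunk = b""
            self.backing_reads = 0            # counts slow accesses to the backing store

        def read(self, offset, length):
            chunk_index = offset // self.chunk_size
            if chunk_index != self.cached_chunk_index:
                # Cache miss: fetch the whole surrounding chunk in one large read.
                start = chunk_index * self.chunk_size
                self.cached_chunk = self.data[start:start + self.chunk_size]
                self.cached_chunk_index = chunk_index
                self.backing_reads += 1
            start_in_chunk = offset - chunk_index * self.chunk_size
            return self.cached_chunk[start_in_chunk:start_in_chunk + length]

    reader = ChunkedReader(b"x" * 1_000_000)
    for offset in range(0, 4096, 64):         # 64 sequential small reads
        reader.read(offset, 64)
    print(reader.backing_reads)               # 1 -- only one slow access was needed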

    2.2 Throughput

    The use of a cache allows for higher throughput from the underlying resource, by assembling multiple fine-grained transfers into larger, more efficient requests. Reading larger chunks also reduces the fraction of bandwidth spent transmitting address information.
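
    As a rough illustration of the throughput point, the hypothetical helpers below (function names and keys are invented for this sketch) contrast one round trip per key with a single batched request that carries the same data:

    def fetch_one(store, key):
        return store[key]                     # one round trip per key

    def fetch_many(store, keys):
        return {k: store[k] for k in keys}    # one round trip for the whole batch

    backing_store = {f"user:{i}": f"name-{i}" for i in range(100)}

    # Naive: ten separate round trips to the backing store.
    one_by_one = [fetch_one(backing_store, f"user:{i}") for i in range(10)]

    # Batched: a single larger request returns the same data.
    batched = fetch_many(backing_store, [f"user:{i}" for i in range(10)])
    print(batched["user:3"])                  # name-3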

    3. Description

    3.1 Operations

    A cache is made up of a pool of entries. Each entry has associated data, which is a copy of the same data in some backing store. Each entry also has a tag, which specifies the identity of the data in the backing store of which the entry is a copy. 

    • Cache hit: When the cache client (a CPU, web browser, or OS) needs to access data presumed to exist in the backing store, it first checks the cache. If an entry is found with a tag matching that of the desired data, the data in the entry is used instead.

      For example, a web browser might check its local cache on disk to see whether it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag, and the content of the web page is the data.

    • Hit ratio: Also called hit rate; the percentage of accesses that result in cache hits.

    • Cache miss: The opposite situation, when the cache is checked and found not to contain any entry with the desired tag.

    • Replacement policy: The policy used to evict some previously existing cache entry in order to make room for the newly retrieved data when a cache miss occurs, such as least recently used (LRU), first in first out (FIFO), or most recently used (MRU).
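
    The Python sketch below (illustrative only; the class and page keys are invented for this example) ties these terms together: each entry is stored under its tag, a lookup is a hit when the tag is present, a miss fetches from the backing store, and an LRU replacement policy evicts the least recently used entry when the cache is full.

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, backing_store, capacity):
            self.backing_store = backing_store
            self.capacity = capacity
            self.entries = OrderedDict()      # tag -> data, ordered by recency
            self.hits = 0
            self.misses = 0

        def get(self, tag):
            if tag in self.entries:
                self.hits += 1
                self.entries.move_to_end(tag)      # mark as most recently used
                return self.entries[tag]
            self.misses += 1
            data = self.backing_store[tag]         # slow fetch on a miss
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)   # evict the least recently used entry
            self.entries[tag] = data
            return data

    store = {f"/page/{i}": f"<html>page {i}</html>" for i in range(10)}
    cache = LRUCache(store, capacity=3)
    cache.get("/page/1"); cache.get("/page/2"); cache.get("/page/1")
    print(cache.hits, cache.misses)                # 1 hit, 2 misses -> hit ratio 1/3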

    3.2 Write-through

    Writes are done synchronously both to the cache and to the backing store.
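
    A minimal sketch of this behavior, assuming a plain dict stands in for the backing store (the class name is illustrative): every write updates the cache and the backing store at the same time, so the two never disagree.

    class WriteThroughCache:
        def __init__(self, backing_store):
            self.backing_store = backing_store
            self.cache = {}

        def write(self, key, value):
            self.cache[key] = value
            self.backing_store[key] = value   # written to the backing store at the same time

        def read(self, key):
            if key in self.cache:
                return self.cache[key]        # cache hit
            value = self.backing_store[key]   # cache miss: fetch and fill
            self.cache[key] = value
            return value

    db = {}
    wt = WriteThroughCache(db)
    wt.write("k", 1)
    print(db["k"])                            # 1 -- backing store already holds the new value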

    3.3 Write-back(write-behind)

    Initially, writing is done only to the cache. The write to the backing store is postponed until the modified content is about to be replaced by another cache block.
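
    A minimal write-back sketch under the same assumptions (dict as backing store, names invented for this example): writes land only in the cache and mark the entry dirty; the backing store is updated later, when the entry is evicted to make room or is explicitly flushed.

    class WriteBackCache:
        def __init__(self, backing_store, capacity=1):
            self.backing_store = backing_store
            self.capacity = capacity
            self.cache = {}          # key -> value
            self.dirty = set()       # keys modified since the last write to the store

        def write(self, key, value):
            if key not in self.cache and len(self.cache) >= self.capacity:
                self._evict()
            self.cache[key] = value
            self.dirty.add(key)      # backing store is NOT updated yet

        def _evict(self):
            victim, value = next(iter(self.cache.items()))
            if victim in self.dirty:
                self.backing_store[victim] = value   # deferred write happens at eviction
                self.dirty.discard(victim)
            del self.cache[victim]

        def flush(self):
            for key in list(self.dirty):
                self.backing_store[key] = self.cache[key]
            self.dirty.clear()

    db = {}
    wb = WriteBackCache(db, capacity=1)
    wb.write("a", 1)
    print("a" in db)     # False -- the write is still only in the cache
    wb.write("b", 2)     # cache is full, so "a" is evicted and written back
    print(db["a"])       # 1 -- deferred write happened at eviction time
    wb.flush()
    print(db["b"])       # 2 -- remaining dirty entry written back explicitly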

    4. Usages

    4.1 Hardware

    • CPU cache
    • GPU cache
    • DSPs
    • Translation lookaside buffer

    4.2 Software

    • Disk cache
    • Web cache
    • Memoization

    5. References

    https://en.wikipedia.org/wiki/Cache_(computing)

    https://www.youtube.com/watch?v=uM8K0Z5usu8

    https://www.webopedia.com/TERM/W/write_back_cache.html

    https://en.wikipedia.org/wiki/Cache_replacement_policies
