Talk:Cache (computing)

From Wikipedia, the free encyclopedia

Requested move

The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

Page moved to Cache (computing). Vegaswikian (talk) 19:58, 4 February 2012 (UTC)[reply]

– In this particular case, the gerund form would be clearer. In computing a cache itself is useless without a caching-oriented system which actually stops to examine the cache. At the same time, -ing is something of a natural disambiguator. The computing use is not the primary topic for Cache (disambiguation), but perhaps for Caching it is. Pnm (talk) 21:46, 28 January 2012 (UTC)[reply]

  • rename to "Cache (computing)" to clarify and disambiguate right here at the article name level and since this is what the article is about. Hmains (talk) 00:57, 29 January 2012 (UTC)[reply]
  • Yes, move to Cache (computing), clean up cache and then move Cache (disambiguation) → Cache Josh Parris 14:11, 29 January 2012 (UTC)[reply]
  • Oppose, though I could be convinced on the primary topic issue. On the title of this article, though, I don't think the gerund gives the right semantics here. "What is caching? The act of putting something in a cache." Any definition of caching requires first a definition of cache, and that is strong evidence that it's the storage itself, not the act of storing, that is key. Powers T 01:01, 30 January 2012 (UTC)[reply]
Caching refers to the strategy, to the feature of a system. To implement caching requires four things: having a cache, checking the cache first when a request comes along, putting entries into the cache, and deciding when to dispose entries. In general, literature uses both terms, but with topics like web caching and database caching the gerund is more common than the noun, as you can see from those articles' sources. – Pnm (talk) 04:05, 30 January 2012 (UTC)[reply]
Those articles are about specific applications of a cache. A web cache is not materially different from a database cache; it's the use of the cache that differs. But the article about the wider concept of a cache is properly named after the object of the action, not the action. Powers T 13:05, 30 January 2012 (UTC)[reply]
My point is that web caching is materially different from a CPU cache: the latter is an actual component in hardware. The object of caching is the cached content, not the cache. There are several main articles named after actions: collecting, shipping, running, and photography, not Collection, shipment, or Run; photograph is a short sub-article. – Pnm (talk) 04:04, 31 January 2012 (UTC)[reply]
I never claimed an article can't be named with the gerund; I just don't think it's the optimal choice here. Powers T 18:04, 31 January 2012 (UTC)[reply]
The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.
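The four parts of caching listed in the discussion above (having a cache, checking it first, inserting entries, and disposing of entries) can be sketched in a few lines. This is an illustrative example only, with invented names; the LRU eviction policy is one common choice, not the only one.

```python
# Minimal sketch of the four parts of caching named in the discussion
# above (all names here are illustrative, not from the article).
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity             # 1. having a cache (bounded storage)
        self.entries = OrderedDict()
        self.backing_store = backing_store   # e.g. a dict, a disk, a server

    def get(self, key):
        if key in self.entries:              # 2. check the cache first
            self.entries.move_to_end(key)    #    mark as most recently used
            return self.entries[key]
        value = self.backing_store[key]      # miss: fetch from the backing store
        self.entries[key] = value            # 3. put the entry into the cache
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False) # 4. dispose of the least recently used entry
        return value
```

For example, with `capacity=2`, a third distinct `get` evicts whichever of the first two entries was used least recently.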

Flow charts

What's the software used to make flow chart diagram of write-back and write-through procedure? I'd very much like to use it for personal design. ChazZeromus (talk) 17:55, 21 June 2012 (UTC)[reply]

grapholite.com 79.182.193.72 (talk) 00:24, 5 February 2013 (UTC)[reply]

Need clarification on backing store

The article keeps referring to a 'backing store', but it never explains what that is (probably a hard disk drive?). It needs to define the term, or link to another article that describes what a backing store is.

Venki Subramanian (talk) 07:19, 2 June 2014 (UTC)

The backing store is whatever the cache is caching. :-) It could be main memory, or a cache at a higher level of the cache hierarchy, for a CPU cache or a TLB. It could be a disk drive, SSD, or remote file server for a file data cache ("disk cache"/"page cache"); it could be a Web server for a Web cache; and so on. Guy Harris (talk) 07:51, 2 June 2014 (UTC)[reply]
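The point above, that "backing store" is relative and each cache level's backing store is simply the next, slower level, can be sketched as a chain of lookups. All names here are illustrative, not from any real API.

```python
# Hedged sketch: each cache level consults its backing store on a miss
# and fills itself on the way back. Level names are invented.
def make_level(name, contents, backing=None):
    def lookup(key):
        if key in contents:
            return name, contents[key]      # hit at this level
        assert backing is not None, "the last level must hold everything"
        where, value = backing(key)         # miss: consult the backing store
        contents[key] = value               # fill this level on the way back
        return where, value
    return lookup

disk = make_level("disk", {"x": 42})        # backing store for RAM
ram  = make_level("ram", {}, backing=disk)  # backing store for L1
l1   = make_level("L1", {}, backing=ram)

print(l1("x"))   # first access is served all the way from "disk"
print(l1("x"))   # now cached, served from "L1"
```

After the first access, every level between the requester and the original source holds a copy, which is exactly why the term is level-relative.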

All cache is volatile right?

At the risk of sounding silly, all cache is volatile, right? I saw no mention of volatility anywhere in the article; if that's the case, it should be added.— Preceding unsigned comment added by ‎BlueFenixReborn (talkcontribs) 08:59, December 20, 2014 (UTC)

Hello! Obviously, that depends on what kind of device is used as a cache. For example, using DRAM results in a volatile cache, while using an HDD or SSD means permanent storage of cached data. It's pretty much implicitly known whenever a particular cache layout is mentioned in the article; thus, I don't think that it should be clarified further. — Dsimic (talk | contribs) 08:39, 20 December 2014 (UTC)[reply]
Your browser, for example, also stores its caches on disk, where they persist between restarts and reboots. Image viewer thumbnail caches are often stored persistently. -- intgr [talk] 08:45, 20 December 2014 (UTC)[reply]
Right, thank you both for the clarification. BlueFenixReborn (talk) 06:57, 31 December 2014 (UTC)[reply]

'Buffer vs cache': could this be improved?

I think this would be clearer:

Motivation: throughput, latency, granularity

The effect of a cache in buffering accesses benefits both throughput and latency.

Latency

Often a larger, distant resource incurs significant access latency (e.g. it can take hundreds of clock cycles for a modern 4 GHz processor to reach DRAM). This is mitigated by reading the distant resource in large chunks, in the hope that subsequent reads will be from nearby locations. Prediction hardware or prefetching may also guess where future reads will come from and issue requests ahead of time; done correctly, the latency is hidden altogether.
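The "large chunks" argument above can be made concrete with a toy cycle count. The costs below are invented round numbers, not measurements; the point is only the ratio.

```python
# Illustrative sketch of the latency argument above (costs are assumed,
# not measured): fetching a whole 64-byte line on a miss means a
# sequential scan pays the DRAM latency once per line, not once per byte.
LINE = 64
MISS_COST = 300      # assumed cycles to reach DRAM
HIT_COST = 1         # assumed cycles for a cache hit

def scan_cycles(n_bytes, line_size):
    cached_lines, cycles = set(), 0
    for addr in range(n_bytes):
        line = addr // line_size
        if line in cached_lines:
            cycles += HIT_COST
        else:
            cycles += MISS_COST          # one slow fetch brings in the whole line
            cached_lines.add(line)
    return cycles

print(scan_cycles(4096, LINE))   # with 64-byte line fills
print(scan_cycles(4096, 1))      # every byte pays the full DRAM latency
```

With these assumed costs the line-filled scan is dozens of times cheaper, which is the whole motivation for fetching in chunks.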

Throughput & granularity

Beyond this, granularity is important. A cache also allows much higher throughput from the underlying resource, by assembling multiple fine-grained transfers into larger, more efficient requests. In the case of DRAM, this might be served by a wider bus. Imagine a program stepping through bytes, but being served by a 128-bit off-chip bus; individual uncached byte accesses would allow only 1/16th of the total bandwidth to be used.
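The 1/16th figure above follows directly from the bus width, as a quick back-of-envelope check shows (the transfer sizes here are assumptions for illustration):

```python
# Sanity check of the 1/16th bandwidth claim above: a 128-bit bus moves
# 16 bytes per transaction, so uncached single-byte reads waste 15 of
# every 16 bytes the bus carries.
BUS_BYTES = 128 // 8      # 16 bytes per bus transaction

def bus_transactions(n_bytes, access_size):
    # one bus transaction per access, regardless of how few bytes are used
    return -(-n_bytes // access_size)     # ceiling division

uncached = bus_transactions(4096, 1)          # byte-at-a-time accesses
cached = bus_transactions(4096, BUS_BYTES)    # full-width, line-at-a-time
print(uncached // cached)                     # ratio of bus traffic
```

A cache in front of the bus turns the byte-granular access stream into full-width transfers, recovering the lost factor of 16.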

Not exactly true

To be cost-effective and to enable efficient use of data, caches must be relatively small.

In ZFS, you could have a TB of SSD as your ARC caching a fully redundant disk pool whose delivered capacity is not vastly larger than the ARC cache.

There are parameters other than relative size that potentially get traded off between the cache and the backing store.

A new type of non-volatile memory that was far cheaper in capacity than flash, fast to read, but with low write endurance could even lead to a reasonable deployment where the cache:backing store ratio was 1:1 (under a policy of extremely reluctant eviction). — MaxEnt 01:42, 17 January 2018 (UTC)[reply]

Backing store

Backing store redirects to this article. However, the term "backing store" has a wider meaning in computing than just CPU caching. In particular, it's also the term used for an I/O stream that uses an in-memory buffer as its data source, i.e., the storage area that "backs" it up. While this could be construed as being some kind of cache for the stream, it's not really caching anything, but instead is the actual source of data for the stream. So I'm thinking that Backing store should be a separate article, or disambiguation page listing the various meanings. Any thoughts on this? — Loadmaster (talk) 14:44, 3 July 2018 (UTC)[reply]

Cache miss types

A new section or a new Wikipedia article should be created about cache misses. In particular, the different types of misses could be described and explained with examples: capacity miss, conflict miss, compulsory miss, coherence miss — Preceding unsigned comment added by 20 STS grp1 (talkcontribs) 23:38, 8 January 2021 (UTC)[reply]
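Three of the miss types named above can be demonstrated with a tiny direct-mapped cache. The classification below is a rough illustrative heuristic, not the textbook definition (which compares against an equally sized fully associative cache), and coherence misses are omitted since they require multiple cores.

```python
# Hedged sketch of compulsory, capacity, and conflict misses using a
# tiny direct-mapped cache. The classification is a simplification for
# illustration; all names are invented.
def classify_accesses(blocks, n_sets):
    cache = {}      # set index -> resident block tag
    seen = set()    # blocks ever referenced
    result = []
    for block in blocks:
        s = block % n_sets                 # direct-mapped: one set per block
        if cache.get(s) == block:
            result.append("hit")
        elif block not in seen:
            result.append("compulsory")    # first-ever reference to this block
        elif len(seen) > n_sets:
            result.append("capacity")      # working set exceeds the whole cache
        else:
            result.append("conflict")      # evicted only because of set mapping
        seen.add(block)
        cache[s] = block
    return result

# blocks 0 and 4 collide in a 4-set cache even though it is mostly empty:
print(classify_accesses([0, 4, 0], n_sets=4))
```

Here the second access to block 0 misses purely because block 4 mapped to the same set, the defining property of a conflict miss.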