The meme doesn’t make sense. An SRAM cache of that size would be so slow that you would most likely save clock cycles by reading directly from RAM and not having a cache at all…
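A back-of-envelope sketch of that claim, assuming a 4 GHz clock and ballpark latencies that are purely illustrative (`HUGE_SRAM_LATENCY_NS` in particular is hypothetical, since no such cache exists):

```python
# Back-of-envelope cycle counts; all three constants are illustrative
# assumptions, not measurements of any specific CPU or DIMM.
CPU_CLOCK_GHZ = 4.0            # assumed core clock
DRAM_LATENCY_NS = 80.0         # ballpark off-chip DRAM random-access latency
HUGE_SRAM_LATENCY_NS = 120.0   # hypothetical hit latency of a multi-GB on-die SRAM

def ns_to_cycles(ns: float, clock_ghz: float = CPU_CLOCK_GHZ) -> float:
    """Convert a latency in nanoseconds into CPU clock cycles."""
    return ns * clock_ghz

print(f"Straight to DRAM:      ~{ns_to_cycles(DRAM_LATENCY_NS):.0f} cycles")
print(f"Hit in the huge cache: ~{ns_to_cycles(HUGE_SRAM_LATENCY_NS):.0f} cycles")
# If the cache's own hit latency exceeds the DRAM latency, every access
# costs extra cycles compared with skipping the cache entirely.
```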
- 0 Posts
- 4 Comments
Joined 2 years ago
Cake day: June 11th, 2023
I’m hosting a Matrix server with a TURN server, and it’s fairly easy to self-host. This sounds exaggerated.
Smoolak@lemmy.world to Technology@lemmy.world • Spotify caught hosting hundreds of fake podcasts that advertise selling drugs · English · 14 · 12 days ago
I second this.
I agree. When evaluating cache access latency, it is important to consider the entire read path rather than just the intrinsic access time of a single SRAM cell. Much of the latency arises from all the supporting operations required for a functioning cache, such as tag lookups, address decoding, and bitline traversal. As you pointed out, implementing an 8 GB SRAM cache on-die using current manufacturing technology would be extremely impractical. The physical size would lead to substantial wire delays and increased complexity in the indexing and associativity circuits. As a result, the access latency of such a large on-chip cache could actually exceed that of off-chip DRAM, which would defeat the main purpose of having on-die caches in the first place.
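A minimal toy model of that scaling argument, assuming a square SRAM array whose access time is a fixed decode/tag overhead plus a round trip across the array. Every parameter (`BITCELL_AREA_UM2`, `WIRE_DELAY_NS_PER_MM`, `FIXED_OVERHEAD_NS`, `DRAM_LATENCY_NS`) is an assumed ballpark, not a figure for any real process node:

```python
import math

# Toy first-order model of how SRAM access latency grows with capacity.
# Every constant below is an assumed ballpark for illustration only.
BITCELL_AREA_UM2 = 0.05      # assumed effective area per bit (cell + peripheral overhead)
WIRE_DELAY_NS_PER_MM = 1.0   # assumed delay of a long on-die wire, repeaters included
FIXED_OVERHEAD_NS = 1.0      # assumed address decode + tag compare + sense-amp time
DRAM_LATENCY_NS = 80.0       # ballpark random-access latency of off-chip DRAM

def sram_latency_ns(capacity_bytes: int) -> float:
    """Fixed overhead plus a round trip across a square array of that capacity."""
    bits = capacity_bytes * 8
    area_mm2 = bits * BITCELL_AREA_UM2 / 1e6     # convert um^2 to mm^2
    edge_mm = math.sqrt(area_mm2)
    return FIXED_OVERHEAD_NS + 2 * edge_mm * WIRE_DELAY_NS_PER_MM

for label, size in [("32 KB L1", 32 * 2**10), ("1 MB L2", 1 * 2**20),
                    ("32 MB L3", 32 * 2**20), ("8 GB cache", 8 * 2**30)]:
    t = sram_latency_ns(size)
    verdict = "slower than DRAM" if t > DRAM_LATENCY_NS else "faster than DRAM"
    print(f"{label:>10}: ~{t:6.1f} ns ({verdict})")
```

With these assumed numbers the model lands roughly in the range of real L1/L2/L3 latencies and puts the 8 GB array above typical DRAM latency, which is the point of the comment. It also implies an array of a few thousand mm², far beyond the reticle limit, so a real implementation would have to be split across dies and pay even more latency.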