Access Time
The total time to retrieve data from storage. For HDDs, this includes seek time (moving the read head) and rotational latency (waiting for the platter to spin). For SSDs and RAM, access time is much lower because there are no moving parts. Access time directly impacts cache miss latency and Time to First Byte (TTFB).
Full Explanation
Access time is the total time it takes to retrieve a specific piece of data from a storage device. For traditional HDDs, access time breaks down into two parts: seek time (moving the read/write head to the right track, typically 5-10ms) and rotational latency (waiting for the platter to spin to the right position, typically 2-4ms for a 7200 RPM drive). Combined, a random HDD access takes roughly 8-14ms.
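The rotational-latency figure follows directly from spindle speed: on average, the platter must spin half a revolution before the target sector passes under the head. A quick sketch checking the 7200 RPM number:

```python
# Average rotational latency = half a revolution.
# One revolution at R RPM takes 60 / R seconds.

def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency in milliseconds for a drive spinning at `rpm`."""
    seconds_per_revolution = 60.0 / rpm
    return (seconds_per_revolution / 2) * 1000  # half a revolution, in ms

latency = avg_rotational_latency_ms(7200)
print(f"7200 RPM average rotational latency: {latency:.2f} ms")  # ~4.17 ms
```

A 15,000 RPM enterprise drive halves this to about 2 ms, which is why high-RPM drives were the standard for latency-sensitive storage before SSDs.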
SSDs eliminate all moving parts, dropping access time to around 0.1ms (100 microseconds). RAM brings it down further to approximately 0.0001ms (100 nanoseconds), roughly 100,000 times faster than an HDD. These numbers are why caching layers exist in every performance-sensitive system.
In a CDN context, access time directly affects cache miss latency and Time to First Byte (TTFB). When a request hits the edge and the content is cached, the access time of the storage layer determines how fast the server can start sending the response. This creates a natural caching hierarchy: RAM cache (microseconds) is checked first, then SSD cache (sub-millisecond), then HDD storage (milliseconds), and finally an origin fetch (tens to hundreds of milliseconds over the network).
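That hierarchy amounts to an ordered lookup: try each layer from fastest to slowest and pay that layer's access time on a hit. A minimal sketch, with layer names and timings taken from the rough figures above (not measurements):

```python
# Ordered cache hierarchy: (layer name, access time in ms), fastest first.
# Timings are the approximate figures quoted in the text.
HIERARCHY = [
    ("RAM cache", 0.0001),    # ~100 ns
    ("SSD cache", 0.1),       # ~100 us
    ("HDD storage", 10.0),    # ~10 ms random access
    ("origin fetch", 180.0),  # network round trip to origin
]

def lookup(key: str, layers_holding_key: set) -> tuple:
    """Return (layer, access_time_ms) for the first layer that holds `key`.
    The origin is assumed to always have the object."""
    for layer, access_ms in HIERARCHY:
        if layer == "origin fetch" or layer in layers_holding_key:
            return layer, access_ms
    raise KeyError(key)

print(lookup("/image.jpg", {"SSD cache"}))  # ('SSD cache', 0.1)
print(lookup("/image.jpg", set()))          # ('origin fetch', 180.0)
```

Each layer the lookup falls through costs roughly two orders of magnitude in access time, which is the whole argument for keeping hot objects as high in the hierarchy as possible.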
A CDN edge server with hot content in its RAM cache can start responding in microseconds. The same content on SSD takes sub-millisecond. On HDD, you are looking at several milliseconds. And a full origin fetch? That is 50-500ms depending on distance. This is exactly why CDN cache hit ratio matters so much. Every cache miss pushes you down the access time ladder by orders of magnitude.
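The effect of hit ratio on average latency is a simple weighted average: expected TTFB = hit_ratio × hit_time + (1 − hit_ratio) × miss_time. A quick illustration with hypothetical figures (12 ms edge hit, 180 ms origin miss):

```python
def expected_ttfb_ms(hit_ratio: float, hit_ms: float, miss_ms: float) -> float:
    """Average TTFB as a weighted blend of cache-hit and cache-miss latency."""
    return hit_ratio * hit_ms + (1 - hit_ratio) * miss_ms

# Illustrative figures: 12 ms for an edge hit, 180 ms for an origin miss.
for ratio in (0.80, 0.95, 0.99):
    print(f"hit ratio {ratio:.0%}: {expected_ttfb_ms(ratio, 12, 180):.1f} ms avg")
```

Note how the average is dominated by the miss path: going from 80% to 99% hit ratio cuts average TTFB by more than 3x, even though the hit latency never changed.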
Examples
| Storage Layer | Access Time      | Relative Speed | CDN Layer    |
|---------------|------------------|----------------|--------------|
| L1 CPU cache  | ~1 ns            | 1x (baseline)  | -            |
| L3 CPU cache  | ~10 ns           | 10x            | -            |
| RAM           | ~100 ns          | 100x           | Hot cache    |
| NVMe SSD      | ~50,000 ns       | 50,000x        | Warm cache   |
| SATA SSD      | ~100,000 ns      | 100,000x       | Warm cache   |
| HDD (random)  | ~10,000,000 ns   | 10,000,000x    | Cold storage |
| Origin fetch  | ~50,000,000 ns+  | 50,000,000x+   | Cache miss   |
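The "Relative Speed" column is each layer's access time divided by the 1 ns L1 baseline; a sketch recomputing it from the nanosecond figures:

```python
# Approximate access times in nanoseconds, from the table above.
ACCESS_NS = {
    "L1 CPU cache": 1,
    "L3 CPU cache": 10,
    "RAM": 100,
    "NVMe SSD": 50_000,
    "SATA SSD": 100_000,
    "HDD (random)": 10_000_000,
    "Origin fetch": 50_000_000,
}

BASELINE_NS = ACCESS_NS["L1 CPU cache"]
for layer, ns in ACCESS_NS.items():
    # Relative speed = this layer's access time over the baseline.
    print(f"{layer}: ~{ns:,} ns -> {ns // BASELINE_NS:,}x baseline")
```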
You can observe access time impact by comparing TTFB for cached vs uncached content:
# First request (cold miss - fetches from origin)
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://cdn.example.com/image.jpg
# TTFB: 0.180s
# Second request (cache hit - served from edge SSD/RAM)
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://cdn.example.com/image.jpg
# TTFB: 0.012s
# The 15x improvement comes from eliminating the origin
# fetch and serving from local storage with fast access time
Related CDN concepts include:
- Latency — The time delay between a request and the start of its response. For CDNs, it's …