Event | Latency | Scaled (1 CPU cycle = 1 s) |
---|---|---|
1 CPU Cycle | 0.3 ns | 1 s |
Level 1 cache access | 0.9 ns | 3 s |
Level 2 cache access | 2.8 ns | 9 s |
Level 3 cache access | 12.9 ns | 43 s |
Main memory access (DRAM, from CPU) | 120 ns | 6 min |
Solid-state disk I/O (flash memory) | 50-150 µs | 2-6 days |
Rotational disk I/O | 1-10 ms | 1-12 months |
Internet: San Francisco to New York | 40 ms | 4 years |
Internet: San Francisco to United Kingdom | 81 ms | 8 years |
Internet: San Francisco to Australia | 183 ms | 19 years |
TCP packet retransmit | 1-3 s | 105-317 years |
OS virtualization system reboot | 4 s | 423 years |
SCSI command timeout | 30 s | 3 millennia |
Hardware (HW) virtualization system reboot | 40 s | 4 millennia |
Physical system reboot | 5 min | 32 millennia |
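The "Scaled" column is nothing magical: each latency is multiplied by the same factor that stretches one 0.3 ns CPU cycle to one second (about 3.3 billion). Here is a minimal Python sketch that reproduces a few rows of the table above; the latency values are taken straight from the table:

```python
# Reproduce the "Scaled" column: stretch one 0.3 ns CPU cycle to 1 s
# and apply the same factor to every other latency.

CYCLE_NS = 0.3
SCALE = 1.0 / (CYCLE_NS * 1e-9)  # ~3.3 billion

latencies_ns = {
    "Level 1 cache access": 0.9,
    "Main memory access": 120,
    "Rotational disk I/O": 10e6,        # 10 ms
    "Internet: SF to Australia": 183e6, # 183 ms
}

for event, ns in latencies_ns.items():
    scaled_seconds = ns * 1e-9 * SCALE
    print(f"{event}: {scaled_seconds:,.0f} scaled seconds")
```

Running it gives roughly 3 s for the L1 cache, 400 s (~6 min) for main memory, ~33 million seconds (about a year) for a 10 ms disk seek, and ~610 million seconds (about 19 years) for the round trip to Australia, matching the table.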
It's actually impressive how fast the CPU is compared to the other components. It is also a very good argument for multitasking, i.e. assigning the CPU to some other task while it waits for, e.g., the disk or something from the network.
One additional impressive thing is noted below the table in the book: if you multiply the CPU cycle time by the speed of light (c), you'll see that light travels only about 0.09 m (roughly 10 cm) while the CPU completes a single cycle. That's really impressive. :)
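For the curious, the arithmetic behind that figure is a one-liner (speed of light rounded to 3×10^8 m/s):

```python
# How far does light travel during one 0.3 ns CPU cycle?
C = 3e8           # speed of light in m/s (rounded)
CYCLE_S = 0.3e-9  # one CPU cycle at ~3.3 GHz

print(f"{C * CYCLE_S:.2f} m")  # ~0.09 m, i.e. about 9 cm
```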
That's it for this post. To finish, here are some additional interesting links I stumbled upon while searching for this table:
- Some additional latencies
- Approximate cost to access various caches and main memory? (StackOverflow)
- The Nehalem Preview: Intel Does It Again
- What Your Computer Does While You Wait