架構師訓練營 Week 5 Summary
Distributed cache architecture
Cache:
CPU cache
OS cache
DB cache
CDN cache
Reverse proxy cache
Frontend cache
Application cache
Distributed object cache
What factors affect the cache hit ratio? (see the sketch after this list)
The number of cache keys
The amount of cache memory
The TTL of cached objects
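The hit ratio itself is just hits / (hits + misses). A minimal counter sketch in Java (the CacheStats class and its method names are made up for illustration, not from the course):

```java
import java.util.concurrent.atomic.LongAdder;

// Illustrative hit-ratio counter: record every lookup as a hit or a miss.
public class CacheStats {
    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();

    public void recordHit()  { hits.increment(); }
    public void recordMiss() { misses.increment(); }

    // hit ratio = hits / (hits + misses)
    public double hitRatio() {
        long h = hits.sum();
        long m = misses.sum();
        long total = h + m;
        return total == 0 ? 0.0 : (double) h / total;
    }
}
```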
Why do we use cache?
Faster response
We usually save the final result in the cache, so we don't need to compute it again (see the read-through sketch below)
Reduce DB, disk, and network load
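A minimal sketch of that "compute once, then serve from cache" idea (the ReadThroughCache name and the loader function are assumptions for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Read-through cache sketch: return the cached result if present,
// otherwise compute it once (e.g. a DB query) and remember it.
public class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;   // e.g. key -> loadFromDb(key)

    public ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // computeIfAbsent calls the loader only on a cache miss
        return cache.computeIfAbsent(key, loader);
    }
}
```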
We use a cache for:
Frequently read data
Results of complex computations
Data that is read much more often than it is written
Low consistency requirements
We don't use a cache for:
Data that is updated frequently (read/write ratio < 2)
Data without hot spots
High consistency requirements
Cache Avalanche (缓存雪崩)
Many cached keys expire (or the cache service fails) at the same time, so a large number of requests go to the database directly and bring it down
How to prevent it?
Use a cache cluster (e.g. Redis Cluster)
Spread out expiration times with random jitter so keys don't all expire at once (see the sketch after this list):
setRedis(key, value, time + Math.random() * 10000);
Service degradation
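A minimal sketch of the jittered-TTL idea, assuming the Jedis client (the class name, TTL numbers, and helper method are made up for illustration; the exact setex signature varies between Jedis versions):

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitteredCache {
    private static final int BASE_TTL_SECONDS = 600;   // nominal TTL
    private static final int MAX_JITTER_SECONDS = 120; // random spread

    // Store a value with a randomized TTL so that keys written together
    // do not all expire at the same moment.
    public static void setWithJitter(Jedis jedis, String key, String value) {
        int ttl = BASE_TTL_SECONDS + ThreadLocalRandom.current().nextInt(MAX_JITTER_SECONDS);
        jedis.setex(key, ttl, value);
    }
}
```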
Cache Penetration (缓存穿透)
Similar to a cache avalanche; the only difference is the scale.
Cache penetration => fewer requests
Cache avalanche => a lot of requests
How to prevent it?
Cache a null/default value for keys that don't exist
Use a short TTL for those null/default entries
Bloom Filter (see the sketch below)
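A sketch of the Bloom-filter gate, assuming Guava's BloomFilter (the expected size and false-positive rate are made-up numbers):

```java
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.StandardCharsets;

public class BloomFilterGate {
    // Sketch only: 1,000,000 expected keys, ~1% false-positive rate.
    private final BloomFilter<String> existingKeys = BloomFilter.create(
            Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

    // Register every key that really exists (e.g. when rows are inserted).
    public void register(String key) {
        existingKeys.put(key);
    }

    // If the filter says the key definitely does not exist, skip the cache
    // and the database entirely; false positives are possible, false
    // negatives are not.
    public boolean mightExist(String key) {
        return existingKeys.mightContain(key);
    }
}
```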
Cache Breakdown (缓存击穿)
A large number of requests hit a single cache key and overwhelm that cache service
How to prevent it?
Use a lock when updating the cache (see the sketch after this list)
Asynchronous update
Cache warm-up
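A minimal sketch of the update-lock idea: only one thread rebuilds a missing hot key while the others wait and then read the freshly cached value (class and method names are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

public class LockedRebuildCache {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public String get(String key, Supplier<String> loader) {
        String value = cache.get(key);
        if (value != null) {
            return value;
        }
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            // Re-check after acquiring the lock: another thread may have
            // already rebuilt the value while we were waiting.
            value = cache.get(key);
            if (value == null) {
                value = loader.get();   // e.g. a single database query
                cache.put(key, value);
            }
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```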
Redis
Supports complex data structures
Supports asynchronous operations
Master/slave high availability
Cluster mode with a share-nothing architecture
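A small sketch of those richer data structures, assuming the Jedis client (the key names and values are made up):

```java
import redis.clients.jedis.Jedis;

public class RedisStructures {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Plain string
            jedis.set("page:home:title", "Welcome");

            // Hash: one key holding several fields
            jedis.hset("user:1001", "name", "alice");
            jedis.hset("user:1001", "city", "Taipei");

            // List: e.g. a recent-activity feed
            jedis.lpush("user:1001:feed", "login", "view:item:42");

            // Sorted set: e.g. a leaderboard ordered by score
            jedis.zadd("leaderboard", 1500, "alice");
            jedis.zadd("leaderboard", 1200, "bob");
        }
    }
}
```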
Message queue and asynchronous architecture
Point-to-point (see the sketch after this list)
Publish and subscribe
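A minimal in-memory stand-in for the point-to-point model using a BlockingQueue (a real system would use a message broker); each message is handled by exactly one consumer:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Point-to-point sketch: producers put messages on a queue, and each
// message is taken by exactly one consumer. The demo runs until stopped.
public class PointToPointDemo {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Runnable consumer = () -> {
            try {
                while (true) {
                    String msg = queue.take();   // blocks until a message arrives
                    System.out.println(Thread.currentThread().getName() + " handled " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        new Thread(consumer, "consumer-1").start();
        new Thread(consumer, "consumer-2").start();

        // Producer: the two consumers share the work, one message each.
        for (int i = 0; i < 10; i++) {
            queue.offer("order-" + i);
        }
    }
}
```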
What are the benefits of a message queue?
Asynchronous processing improves performance
Good scalability
Failure isolation
Decoupling
Event-driven architecture
Load balancer
HTTP redirect
DNS
Reverse proxy
IP layer (NAT)
Data link layer (MAC address rewriting)
Load balancer algorithms (see the sketch after this list)
Round Robin
Weighted Round Robin
Random
Least Connection
Weighted Least Connection
Source IP Hash
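A minimal sketch of two of these algorithms, round robin and source IP hash (the class name and server list are illustrative):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadBalancers {
    private final List<String> servers;           // e.g. ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
    private final AtomicInteger counter = new AtomicInteger();

    public LoadBalancers(List<String> servers) {
        this.servers = servers;
    }

    // Round robin: rotate through the servers in order.
    public String roundRobin() {
        int i = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    // Source IP hash: the same client IP always lands on the same server,
    // which gives simple session stickiness.
    public String sourceIpHash(String clientIp) {
        int i = Math.floorMod(clientIp.hashCode(), servers.size());
        return servers.get(i);
    }
}
```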