# Distributed Systems: A Practical Overview
## What Makes a System "Distributed"?
A distributed system is a collection of independent computers that appears to users as a single coherent system. The key challenges arise from:
1. **Partial failure** - Parts of the system can fail independently
2. **Unreliable networks** - Messages can be lost, delayed, or duplicated
3. **No global clock** - Different nodes have different views of time
## The CAP Theorem
Eric Brewer's CAP theorem states that a distributed system can provide at most two of three guarantees simultaneously:
- **Consistency**: All nodes see the same data at the same time
- **Availability**: Every request to a non-failing node receives a response
- **Partition tolerance**: The system continues operating despite network partitions
In practice, network partitions happen, so you're really choosing between CP and AP systems.
### CP Systems (Consistency + Partition Tolerance)
- Examples: ZooKeeper, etcd, Consul
- Sacrifice availability during partitions
- Good for: coordination, leader election, configuration
### AP Systems (Availability + Partition Tolerance)
- Examples: Cassandra, DynamoDB, CouchDB
- Sacrifice consistency during partitions
- Good for: high-throughput, always-on services
## Consensus Algorithms
When nodes need to agree on something, they use consensus algorithms.
### Paxos
- Original consensus algorithm by Leslie Lamport
- Notoriously difficult to understand and implement
- Foundation for many other algorithms
### Raft
- Designed to be understandable
- Used in etcd, Consul, CockroachDB
- Separates leader election from log replication
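To make the leader-election half concrete, here is a minimal sketch of Raft-style majority vote counting. The function names are illustrative, not taken from etcd, Consul, or CockroachDB:

```python
# Minimal sketch of Raft-style leader election vote counting.
# A candidate becomes leader only with a strict majority of votes,
# which guarantees at most one leader per term.

def majority(cluster_size: int) -> int:
    """Votes needed to win an election: a strict majority."""
    return cluster_size // 2 + 1

def wins_election(votes_received: int, cluster_size: int) -> bool:
    return votes_received >= majority(cluster_size)

# A 5-node cluster needs 3 votes, so it tolerates 2 crashed nodes.
assert majority(5) == 3
assert wins_election(3, 5)
assert not wins_election(2, 5)
```

The strict-majority rule is also why Raft clusters are usually sized at odd numbers: a 4-node cluster tolerates no more failures than a 3-node one.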
### PBFT (Practical Byzantine Fault Tolerance)
- Handles malicious nodes
- Used in blockchain systems
- Higher overhead than crash-fault-tolerant algorithms
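The overhead difference shows up directly in cluster sizing: tolerating f crash faults requires 2f + 1 nodes, while tolerating f Byzantine (malicious) faults requires 3f + 1. A small illustrative helper:

```python
# Minimum cluster sizes to tolerate f faulty nodes.

def min_nodes_crash(f: int) -> int:
    # Crash-fault tolerance (Paxos/Raft): a majority must survive.
    return 2 * f + 1

def min_nodes_byzantine(f: int) -> int:
    # Byzantine fault tolerance (PBFT): 3f + 1 nodes are required.
    return 3 * f + 1

assert min_nodes_crash(2) == 5       # 5 nodes tolerate 2 crashes
assert min_nodes_byzantine(2) == 7   # but tolerating 2 malicious nodes takes 7
```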
## Replication Strategies
### Single-Leader Replication
- One node accepts writes
- Followers replicate from leader
- Simple, but the leader is a write bottleneck and a single point of failure until failover completes
### Multi-Leader Replication
- Multiple nodes accept writes
- Must handle write conflicts
- Good for multi-datacenter deployments
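One common way to detect (not resolve) write conflicts is version vectors: each replica tracks how many writes it has seen from every node, and two versions conflict when neither subsumes the other. A sketch, with hypothetical datacenter names:

```python
# Version-vector comparison for detecting conflicting writes in
# multi-leader replication (illustrative; real systems vary in details).

def dominates(a: dict, b: dict) -> bool:
    """True if version vector a has seen every event recorded in b."""
    return all(a.get(node, 0) >= count for node, count in b.items())

def concurrent(a: dict, b: dict) -> bool:
    """Neither write has seen the other: a conflict to resolve."""
    return not dominates(a, b) and not dominates(b, a)

# Writes accepted independently at datacenters "dc1" and "dc2":
v1 = {"dc1": 2, "dc2": 1}
v2 = {"dc1": 1, "dc2": 2}
assert concurrent(v1, v2)              # true conflict: needs resolution
assert not concurrent(v1, {"dc1": 1})  # v1 strictly newer, no conflict
```

Once a conflict is detected, resolution is a separate policy decision: last-write-wins, application-level merge, or surfacing both versions to the client.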
### Leaderless Replication
- Any node accepts writes
- Uses quorum reads/writes
- Examples: Dynamo-style databases
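The quorum idea reduces to a single inequality: with N replicas, W write acknowledgments, and R read responses, every read overlaps at least one up-to-date replica whenever R + W > N. A minimal check:

```python
# Quorum condition for Dynamo-style leaderless replication:
# reads are guaranteed to overlap the latest write when R + W > N.

def quorum_overlaps(n: int, w: int, r: int) -> bool:
    return r + w > n

assert quorum_overlaps(n=3, w=2, r=2)      # common default: overlap guaranteed
assert not quorum_overlaps(n=3, w=1, r=1)  # fast, but reads may be stale
```

Tuning W and R trades latency against staleness: lowering either speeds up that operation at the cost of possibly violating the overlap guarantee.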
## Consistency Models
From strongest to weakest:
1. **Linearizability** - Each operation appears to take effect instantaneously at some point between its start and completion
2. **Sequential consistency** - Operations appear in some sequential order
3. **Causal consistency** - Causally related operations appear in order
4. **Eventual consistency** - Given enough time, all replicas converge
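Eventual consistency can be made concrete with a last-write-wins (LWW) register: replicas merge by keeping the value with the highest timestamp, so once all updates propagate, every replica converges to the same value. This is a sketch of the idea, not any particular database's implementation, and note that wall-clock LWW can silently drop concurrent writes:

```python
# Last-write-wins register: a simple convergent merge rule.
# Each value is a (timestamp, data) pair; merging keeps the newer one.

def merge(a, b):
    return a if a[0] >= b[0] else b

replica1 = (1, "old")
replica2 = (2, "new")

# Merge order doesn't matter, so replicas that exchange state converge:
assert merge(replica1, replica2) == merge(replica2, replica1) == (2, "new")
```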
## Partitioning (Sharding)
Distributing data across nodes:
### Hash Partitioning
- Hash key to determine partition
- Even distribution
- Range queries are inefficient
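A minimal sketch of hash partitioning, using a stable hash so every node agrees on where a key lives. Note that this naive modulo scheme reshuffles most keys when the partition count changes, which is why production systems typically use consistent hashing or a fixed number of partitions:

```python
# Hash partitioning: a stable hash of the key selects the partition.
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    # md5 used only as a stable, well-distributed hash (not for security).
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

p = partition_for("user:42", 8)
assert 0 <= p < 8
# The same key always maps to the same partition:
assert partition_for("user:42", 8) == p
```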
### Range Partitioning
- Ranges of keys on different nodes
- Good for range queries
- Risk of hot spots
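Range partitioning can be sketched as a sorted list of split points with a binary search per key; the split values below are purely illustrative:

```python
# Range partitioning: sorted split points assign each key to a partition,
# so a range scan touches only the partitions covering that range.
import bisect

# 4 partitions: [min,"g"), ["g","n"), ["n","t"), ["t",max)
split_points = ["g", "n", "t"]

def partition_for(key: str) -> int:
    return bisect.bisect_right(split_points, key)

assert partition_for("apple") == 0
assert partition_for("kiwi") == 1
assert partition_for("zebra") == 3
```

The hot-spot risk is visible here too: if most writes carry keys starting with "z", they all land on the last partition, which is why range-partitioned systems split or rebalance busy ranges.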
## Conclusion
Building distributed systems requires understanding these fundamental concepts. Start simple, add complexity only when needed, and always plan for failure.