# DragonFly Cache Cluster Connection Guide

## Cluster Nodes
- **Node 1**: localhost:18000 (Master)
- **Node 2**: localhost:18001 (Replica)  
- **Node 3**: localhost:18002 (Replica)

## Connection Details
- **Protocol**: Redis-compatible
- **Max Memory**: 50GB per node
- **Persistence**: RDB snapshots

## Redis-CLI Examples
```bash
# Connect to master node
redis-cli -p 18000

# Memory stats from the master
redis-cli -p 18000 info memory

# Set/get example
redis-cli -p 18000 SET nova:session:123 '{"data": "test"}'
redis-cli -p 18000 GET nova:session:123

# Stream live commands (MONITOR blocks; run each in a separate terminal)
redis-cli -p 18000 monitor
redis-cli -p 18001 monitor
redis-cli -p 18002 monitor
```

## Python Client Example
```python
import redis

# Connect to DragonFly cluster
# DragonFly is Redis-compatible, use standard redis client

# Master node connection
master = redis.Redis(host='localhost', port=18000, decode_responses=True)

# Replica connections  
replica1 = redis.Redis(host='localhost', port=18001, decode_responses=True)
replica2 = redis.Redis(host='localhost', port=18002, decode_responses=True)

# Basic operations
master.set('nova:working_memory', 'cached_data', ex=3600)  # 1 hour expiration
value = master.get('nova:working_memory')
print(f"Cached value: {value}")

# Pipeline for batch operations (commands are sent in a single round trip)
pipe = master.pipeline()
pipe.set('key1', 'value1')
pipe.set('key2', 'value2')
pipe.execute()
```

## Health Checks
```bash
# Check all nodes
redis-cli -p 18000 ping  # Should return PONG
redis-cli -p 18001 ping
redis-cli -p 18002 ping

# Memory usage
redis-cli -p 18000 info memory | grep used_memory_human

# Persistence status
redis-cli -p 18000 info persistence | grep rdb_last_save_time
```
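The pings above can also be scripted without any client library: DragonFly speaks the Redis protocol (RESP), so a raw TCP `PING` suffices for a liveness check. A minimal sketch using only the standard library, with the node names and ports assumed from the cluster list above:

```python
import socket

NODES = {'master': 18000, 'replica1': 18001, 'replica2': 18002}

def ping(host, port, timeout=2.0):
    """Send an inline RESP PING; True if the node answers +PONG."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b'PING\r\n')
            return sock.recv(64).startswith(b'+PONG')
    except OSError:
        return False

def cluster_health(host='localhost'):
    """Map each node name to a boolean liveness result."""
    return {name: ping(host, port) for name, port in NODES.items()}
```

A node that is down or refusing connections simply reports `False` rather than raising, which keeps the check usable from cron or a monitoring loop.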

## Configuration Notes
- **Data Directory**: `/data/dragonfly/node*/data/`
- **Snapshot Frequency**: Automatic, triggered as changes accumulate (configurable)
- **Max Memory**: 50GB per node (configurable)
- **Replication**: Async replication between nodes

## Security
- ❗ Nodes bind to localhost only; do not expose these ports externally
- ❗ No authentication is configured, so any local process can connect
- ❗ Monitor memory usage to prevent OOM kills
- ❗ Verify snapshots regularly to confirm they are restorable

---
**Last Updated:** September 4, 2025