# Codette Configuration Guide
## Core Configuration
### Environment Variables
```bash
# Core settings
export PYTHONPATH="path/to/Codette/src"
export LOG_LEVEL="INFO"  # DEBUG, INFO, WARNING, ERROR

# API tokens (optional)
export HUGGINGFACEHUB_API_TOKEN="your_token"
```
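These variables can be read at startup with safe fallbacks. A minimal sketch (the helper `read_settings` and its validation are illustrative, not part of Codette's actual API):

```python
import os

VALID_LEVELS = {"DEBUG", "INFO", "WARNING", "ERROR"}

def read_settings(env=os.environ):
    """Read Codette's core settings, falling back to documented defaults."""
    level = env.get("LOG_LEVEL", "INFO")
    if level not in VALID_LEVELS:
        raise ValueError(f"Unsupported LOG_LEVEL: {level}")
    return {
        "log_level": level,
        "hf_token": env.get("HUGGINGFACEHUB_API_TOKEN"),  # None when unset
    }
```

Passing `env` as a parameter keeps the helper testable without touching the real environment.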
### System Configuration (`config.json`)
```json
{
  "host": "127.0.0.1",
  "port": 8000,
  "model_name": "gpt2-large",
  "quantum_fluctuation": 0.07,
  "spiderweb_dim": 5,
  "recursion_depth": 4,
  "perspectives": [
    "Newton",
    "DaVinci",
    "Ethical",
    "Quantum",
    "Memory"
  ]
}
```
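A loader for this file might overlay the user's `config.json` on the documented defaults. The sketch below assumes a missing file simply means "use defaults"; the `load_config` helper is hypothetical:

```python
import json

DEFAULTS = {
    "host": "127.0.0.1",
    "port": 8000,
    "model_name": "gpt2-large",
    "quantum_fluctuation": 0.07,
    "spiderweb_dim": 5,
    "recursion_depth": 4,
    "perspectives": ["Newton", "DaVinci", "Ethical", "Quantum", "Memory"],
}

def load_config(path="config.json"):
    """Merge config.json over the documented defaults; missing file -> defaults."""
    config = dict(DEFAULTS)
    try:
        with open(path) as f:
            config.update(json.load(f))
    except FileNotFoundError:
        pass
    return config
```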
## Quantum System Configuration
### Spiderweb Parameters
- `node_count`: 128 (default)
- `activation_threshold`: 0.3
- `dimensions`: ['Ψ', 'τ', 'χ', 'Φ', 'λ']
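One plausible reading of these parameters: a node is considered active in a dimension when its activation clears the 0.3 threshold. A minimal sketch under that assumption (`active_dimensions` is illustrative):

```python
# Documented spiderweb defaults
DIMENSIONS = ['Ψ', 'τ', 'χ', 'Φ', 'λ']
ACTIVATION_THRESHOLD = 0.3

def active_dimensions(node_activations):
    """Return the dimensions whose activation meets the threshold."""
    return [dim for dim, value in node_activations.items()
            if value >= ACTIVATION_THRESHOLD]
```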
### Perspective Settings
```python
PERSPECTIVES = {
    "newton": {
        "name": "Newton",
        "description": "analytical and mathematical perspective",
        "prefix": "Analyzing this logically and mathematically:",
        "temperature": 0.3
    },
    "davinci": {
        "name": "Da Vinci",
        "description": "creative and innovative perspective",
        "prefix": "Considering this with artistic and innovative insight:",
        "temperature": 0.9
    },
    # ... other perspectives
}
```
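In use, a perspective's prefix seeds the prompt and its temperature controls generation. A minimal sketch of how the two might be combined (the `build_prompt` helper is an assumption, not Codette's actual function):

```python
PERSPECTIVES = {
    "newton": {
        "name": "Newton",
        "prefix": "Analyzing this logically and mathematically:",
        "temperature": 0.3,
    },
    "davinci": {
        "name": "Da Vinci",
        "prefix": "Considering this with artistic and innovative insight:",
        "temperature": 0.9,
    },
}

def build_prompt(perspective_key, question):
    """Prepend the perspective's prefix; return the prompt and its temperature."""
    p = PERSPECTIVES[perspective_key]
    return f"{p['prefix']} {question}", p["temperature"]
```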
### Quantum State Configuration
```python
quantum_state = {
    "coherence": 0.5,       # Base quantum coherence
    "fluctuation": 0.07,    # Random fluctuation range
    "spiderweb_dim": 5,     # Number of dimensions
    "recursion_depth": 4,   # Max recursion in processing
    "perspectives": [...]   # Active perspectives
}
```
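A "random fluctuation range" of 0.07 presumably means coherence is perturbed by a uniform draw in ±0.07 each step. A sketch under that assumption (`fluctuate` is illustrative), clamping the result to a valid [0, 1] range:

```python
import random

def fluctuate(state):
    """Perturb coherence by a uniform draw in ±fluctuation, clamped to [0, 1]."""
    delta = random.uniform(-state["fluctuation"], state["fluctuation"])
    return min(1.0, max(0.0, state["coherence"] + delta))
```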
## Memory System Configuration
### Cocoon Settings
```python
COCOON_CONFIG = {
    "base_dir": "./cocoons",    # Cocoon storage directory
    "max_cocoons": 1000,        # Maximum stored cocoons
    "cleanup_interval": 3600,   # Cleanup interval (seconds)
    "encryption": True          # Enable encryption
}
```
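The cleanup pass implied by `max_cocoons` and `cleanup_interval` could look like the following: drop the oldest files once the store exceeds the limit. The helper name and the `*.cocoon` extension are assumptions:

```python
from pathlib import Path

def prune_cocoons(base_dir, max_cocoons):
    """Delete the oldest cocoon files once the store exceeds max_cocoons."""
    cocoons = sorted(Path(base_dir).glob("*.cocoon"),
                     key=lambda p: p.stat().st_mtime)
    # Oldest-first slice of everything beyond the limit
    for stale in cocoons[:max(0, len(cocoons) - max_cocoons)]:
        stale.unlink()
```

In a real deployment this would run every `cleanup_interval` seconds, e.g. from a background task.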
### History Settings
```python
HISTORY_CONFIG = {
    "max_length": 10,       # Max conversation history
    "context_window": 5,    # Context window size
    "min_confidence": 0.3,  # Min confidence threshold
    "max_recursion": 3      # Max processing recursion
}
```
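`max_length` and `context_window` together suggest a two-stage trim: keep the newest turns for storage, and hand an even smaller slice to the model. A minimal sketch (the `trim_history` helper is hypothetical):

```python
def trim_history(history, max_length, context_window):
    """Keep the newest max_length turns; return the context slice for prompting."""
    history = history[-max_length:]
    return history, history[-context_window:]
```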
## Pattern System Configuration
### Pattern Categories
```python
PATTERN_CATEGORIES = {
    "thinking": {
        "frequency": 0.7,         # Usage frequency
        "context_required": True  # Context sensitivity
    },
    "follow_up": {
        "frequency": 0.5,
        "context_required": False
    },
    "transition": {
        "frequency": 0.3,
        "context_required": True
    }
}
```
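A plausible way these two fields interact: the context requirement is a hard gate, and the frequency is a probabilistic one. A sketch under that assumption (`should_use_pattern` is illustrative):

```python
import random

def should_use_pattern(category_cfg, has_context, rng=random):
    """Apply a category's frequency gate, respecting its context requirement."""
    if category_cfg["context_required"] and not has_context:
        return False
    return rng.random() < category_cfg["frequency"]
```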
### Response Integration
```python
RESPONSE_CONFIG = {
    "max_length": 500,          # Max response length
    "min_confidence": 0.3,      # Min confidence threshold
    "pattern_chance": 0.15,     # Pattern inclusion chance
    "transition_threshold": 2   # Min perspectives for transition
}
```
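The first two settings map naturally onto a final filtering step: reject answers below the confidence floor and truncate the rest. A minimal sketch (the `finalize_response` helper is an assumption):

```python
def finalize_response(text, confidence, config):
    """Drop low-confidence answers and truncate the rest to max_length."""
    if confidence < config["min_confidence"]:
        return None  # caller falls back or retries
    return text[:config["max_length"]]
```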
## Advanced Settings
### Model Configuration
Supported models in the fallback chain:
1. Mistral-7B-Instruct
   - 8-bit quantization
   - fp16 precision
   - 16 GB+ VRAM required
2. Phi-2
   - fp16 precision
   - 8 GB+ VRAM required
3. GPT-2
   - Base configuration
   - Minimal requirements
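A fallback chain like this is typically a try-in-order loop: attempt the heaviest model first and step down when loading fails (e.g. insufficient VRAM). A generic sketch of that pattern, with the loader callables standing in for actual model initialization:

```python
def load_first_available(loaders):
    """Try each (name, loader) pair in order; return the first that succeeds."""
    errors = {}
    for name, loader in loaders:
        try:
            return name, loader()
        except Exception as exc:  # e.g. OOM or missing weights
            errors[name] = exc
    raise RuntimeError(f"All models failed: {list(errors)}")
```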
### Performance Tuning
```python
PERFORMANCE_CONFIG = {
    "batch_size": 1,          # Processing batch size
    "max_workers": 4,         # Max concurrent workers
    "cache_size": 1000,       # Pattern cache size
    "cleanup_threshold": 0.8  # Memory cleanup threshold
}
```
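`max_workers` is the sort of knob that feeds straight into a thread pool. A minimal sketch of that wiring (only `max_workers` is exercised here; `process_batch` is a hypothetical helper):

```python
from concurrent.futures import ThreadPoolExecutor

PERFORMANCE_CONFIG = {"batch_size": 1, "max_workers": 4}

def process_batch(items, handler, config=PERFORMANCE_CONFIG):
    """Fan work out across at most max_workers threads, preserving order."""
    with ThreadPoolExecutor(max_workers=config["max_workers"]) as pool:
        return list(pool.map(handler, items))
```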
### Debug Configuration
```python
DEBUG_CONFIG = {
    "verbose_logging": False,  # Detailed logging
    "trace_quantum": False,    # Quantum state tracing
    "save_tensors": False,     # Save tensor states
    "profile_memory": False    # Memory profiling
}
```
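The `verbose_logging` flag maps naturally onto the standard `logging` level. A minimal sketch, assuming a `codette` logger name (the helper is illustrative):

```python
import logging

def apply_debug_config(cfg, logger_name="codette"):
    """Raise log verbosity when verbose_logging is set; otherwise keep INFO."""
    level = logging.DEBUG if cfg["verbose_logging"] else logging.INFO
    logger = logging.getLogger(logger_name)
    logger.setLevel(level)
    return logger
```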
## Example Configurations
### Basic Setup
```json
{
  "host": "127.0.0.1",
  "port": 8000,
  "quantum_fluctuation": 0.07,
  "spiderweb_dim": 5,
  "perspectives": ["Newton", "DaVinci", "Ethical"]
}
```
### Advanced Setup
```json
{
  "host": "127.0.0.1",
  "port": 8000,
  "quantum_fluctuation": 0.07,
  "spiderweb_dim": 5,
  "recursion_depth": 4,
  "perspectives": [
    "Newton",
    "DaVinci",
    "Ethical",
    "Quantum",
    "Memory"
  ],
  "advanced_features": {
    "pattern_integration": true,
    "quantum_enhancement": true,
    "memory_persistence": true,
    "dynamic_confidence": true
  },
  "memory_config": {
    "max_cocoons": 1000,
    "cleanup_interval": 3600
  },
  "pattern_config": {
    "use_transitions": true,
    "pattern_frequency": 0.15
  }
}
```
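Because the advanced setup nests sections like `memory_config`, a flat `dict.update` would clobber whole subsections. A recursive merge avoids that; the `deep_merge` helper below is a sketch, not Codette's actual loader:

```python
def deep_merge(defaults, overrides):
    """Recursively overlay a user config onto the defaults."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```

With this, a user overriding only `memory_config.max_cocoons` still keeps the default `cleanup_interval`.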