---

language:
  - en
license: apache-2.0
library_name: transformers
tags:
  - anomaly-detection
  - observability
  - distributed-systems
  - crypto-exchange
  - lora
  - fine-tuned
base_model: meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
---


# KrystalineX Anomaly Analyzer

A fine-tuned language model for analyzing performance anomalies in distributed crypto exchange systems. Built for the [KrystalineX](https://github.com/MoebiusX/KrystalineX) observability platform.

## Model Details

| Property | Value |
|---|---|
| **Base Model** | [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) |
| **Method** | LoRA (Low-Rank Adaptation) |
| **Trainable Parameters** | 1.56M / 1.24B (0.13%) |
| **Training Framework** | [Axolotl](https://github.com/axolotl-ai-cloud/axolotl) |
| **Precision** | BF16 with 8-bit quantized base |
| **License** | Apache 2.0 |

## Intended Use

This model analyzes OpenTelemetry trace data and correlated system metrics to identify root causes of performance anomalies in microservice architectures. Given an anomaly report containing span attributes, latency deviations, and system metrics (CPU, memory, error rates), the model produces:

- **Summary** of the likely cause
- **Root causes** with reasoning based on actual metric values
- **Actionable recommendations** for remediation
- **Confidence level** assessment

### Example Input

```
Analyze this performance anomaly:
- Service: kx-exchange
- Operation: pg-pool.connect
- Duration: 286.94ms (expected: 0.44ms ± 10.66ms)
- Deviation: 26.88σ
- CPU Usage: 0.5%, Memory: 142MB, Error Rate: 0.0%
```

### Example Output

```
SUMMARY: The pg-pool.connect operation experienced extreme latency due to
connection pool exhaustion requiring a new TCP connection to PostgreSQL.

CAUSES:
- Connection pool was empty, forcing a new connection establishment
- TCP connect span of 264ms confirms network-level connection setup
- Idle timeout (30s) likely evicted pooled connections

RECOMMENDATIONS:
- Increase minimum pool size to maintain warm connections
- Reduce idle timeout or implement connection keepalive
- Add connection pool metrics to monitoring

CONFIDENCE: high
```

## Training Details

### Dataset

222 training examples (22 real + 200 synthetic) of anomaly analysis from a production crypto exchange platform:

- **22 expert-curated examples** from real OpenTelemetry traces, including hallucination corrections (e.g., teaching the model NOT to cite "high CPU usage" when CPU is at 0.5%)
- **200 synthetic examples** generated across 15 anomaly scenario templates (connection pool exhaustion, cold start, query lock contention, DNS cache miss, network jitter, GC pause, message queue backlog, cascading timeout, retry storm, etc.)
- **~40% dismissal training**: examples explicitly teaching the model to dismiss low metrics as irrelevant rather than hallucinating problems
- **Mixed prompt formats**: both short-form (`Analyze anomaly: service:operation`) and detailed structured prompts with full span attributes and metrics
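
For reference, the detailed structured prompt can be rendered from raw anomaly fields. A minimal sketch (hypothetical helper, not part of the actual training pipeline) matching the Example Input layout above:

```python
def format_anomaly_prompt(service: str, operation: str,
                          duration_ms: float, expected_ms: float,
                          stddev_ms: float, deviation_sigma: float,
                          cpu_pct: float, mem_mb: int,
                          error_rate_pct: float) -> str:
    """Render the detailed structured prompt format from this card.

    Hypothetical helper: field layout follows the Example Input section.
    """
    return "\n".join([
        "Analyze this performance anomaly:",
        f"- Service: {service}",
        f"- Operation: {operation}",
        f"- Duration: {duration_ms}ms (expected: {expected_ms}ms ± {stddev_ms}ms)",
        f"- Deviation: {deviation_sigma}σ",
        f"- CPU Usage: {cpu_pct}%, Memory: {mem_mb}MB, Error Rate: {error_rate_pct}%",
    ])
```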

### LoRA Configuration

| Parameter | Value |
|---|---|
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |



### Training Hyperparameters

| Parameter | Value |
|---|---|
| Epochs | 3 |
| Learning Rate | 2e-4 |
| Scheduler | Cosine |
| Warmup Ratio | 0.1 |
| Batch Size | 2 (micro) × 4 (grad accum) = 8 effective |
| Optimizer | AdamW |
| Sequence Length | 2048 |


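The two tables above roughly correspond to an Axolotl config like the following. This is a reconstruction from the reported values using Axolotl's documented keys, not the exact file used for training:

```yaml
base_model: meta-llama/Llama-3.2-1B-Instruct
load_in_8bit: true   # 8-bit quantized base
bf16: true           # BF16 compute precision

adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 2e-4
lr_scheduler: cosine
warmup_ratio: 0.1
optimizer: adamw_torch
```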

### Training Results (v2 — 222 examples)

| Epoch | Loss | Learning Rate |
|---|---|---|
| 0.36 | 2.5158 | 1.83e-4 |
| 1.07 | 1.6355 | 1.00e-4 |
| 1.79 | 0.5071 | 2.66e-5 |
| 2.50 | 0.3211 | 4.88e-6 |
| 3.00 | 0.2422 | 0 |



- **Final Training Loss**: 0.24 (down from 2.41 on 22 examples)
- **Training Time**: ~56 minutes (84 steps) on NVIDIA Turing GPU (sm_75)
- **VRAM Usage**: ~1.9GB training, ~4.4GB cache
- **Throughput**: 0.2 samples/s, 0.025 steps/s
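
The reported step count is consistent with the dataset size and effective batch size above. A quick sanity check, not taken from the training logs:

```python
import math

# 222 examples over 3 epochs, effective batch size 8 (2 micro x 4 grad accum)
examples, epochs, effective_batch = 222, 3, 8
optimizer_steps = math.ceil(examples * epochs / effective_batch)
print(optimizer_steps)  # 84, matching the reported 84 steps
```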

## Usage

### With Ollama

```bash
ollama run anomaly-analyzer "Analyze: service latency 500ms, expected 50ms, CPU 0.1%"
```

### With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("XavierThibaudon/anomaly-analyzer")
tokenizer = AutoTokenizer.from_pretrained("XavierThibaudon/anomaly-analyzer")

prompt = "Analyze anomaly: kx-exchange GET 500ms, expected 50ms, CPU 0.1%"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Limitations

- Trained on 222 examples (22 real + 200 synthetic) — results continue to improve with more real-world data
- Optimized for the KrystalineX platform's specific service topology (kx-exchange, kx-wallet, api-gateway, order-matcher)
- Best results when prompts include correlated system metrics alongside trace data
- Small 1B model may not always follow strict output formatting — the parser handles free-form responses gracefully
- May hallucinate metric interpretations for scenarios not represented in training data

## Citation

```bibtex
@misc{krystalinex-anomaly-analyzer,
  title={KrystalineX Anomaly Analyzer},
  author={Xavier Thibaudon},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/XavierThibaudon/anomaly-analyzer}
}
```