# KrystalineX Anomaly Analyzer
A fine-tuned language model for analyzing performance anomalies in distributed crypto exchange systems. Built for the KrystalineX observability platform.
## Model Details
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-1.5B-Instruct |
| Method | LoRA (Low-Rank Adaptation) |
| Trainable Parameters | 18.5M / 1.56B (1.18%) |
| Training Framework | Axolotl |
| Precision | BF16 with 8-bit quantized base |
| License | Apache 2.0 |
## Intended Use
This model analyzes OpenTelemetry trace data and correlated system metrics to identify root causes of performance anomalies in microservice architectures. Given an anomaly report containing span attributes, latency deviations, and system metrics (CPU, memory, error rates), the model produces:
- Summary of the likely cause
- Root causes with reasoning based on actual metric values
- Actionable recommendations for remediation
- Confidence level assessment
### Example Input
```text
Analyze this performance anomaly:
- Service: kx-exchange
- Operation: pg-pool.connect
- Duration: 286.94ms (expected: 0.44ms ± 10.66ms)
- Deviation: 26.88σ
- CPU Usage: 0.5%, Memory: 142MB, Error Rate: 0.0%
```
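The deviation figure in the input is a plain z-score of the observed duration against the expected baseline; a minimal sketch in pure Python (values taken from the example above) reproduces it:

```python
# z-score of observed latency against the expected baseline
# (values copied from the example input)
observed_ms = 286.94
expected_ms = 0.44
stddev_ms = 10.66

deviation_sigma = (observed_ms - expected_ms) / stddev_ms
print(f"{deviation_sigma:.2f}σ")  # → 26.88σ
```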
### Example Output
```text
SUMMARY: The pg-pool.connect operation experienced extreme latency due to
connection pool exhaustion requiring a new TCP connection to PostgreSQL.
CAUSES:
- Connection pool was empty, forcing a new connection establishment
- TCP connect span of 264ms confirms network-level connection setup
- Idle timeout (30s) likely evicted pooled connections
RECOMMENDATIONS:
- Increase minimum pool size to maintain warm connections
- Reduce idle timeout or implement connection keepalive
- Add connection pool metrics to monitoring
CONFIDENCE: high
```
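Because the model emits a fixed SUMMARY / CAUSES / RECOMMENDATIONS / CONFIDENCE layout, downstream tooling can split it with plain string handling. A minimal sketch (pure Python; this parser is illustrative, not part of the KrystalineX platform):

```python
def parse_analysis(text):
    """Split the model's sectioned output into a dict keyed by section name."""
    sections, current = {}, None
    for line in text.splitlines():
        head, _, rest = line.partition(":")
        if head in ("SUMMARY", "CAUSES", "RECOMMENDATIONS", "CONFIDENCE"):
            current = head
            sections[current] = [rest.strip()] if rest.strip() else []
        elif current:
            sections[current].append(line.strip())
    return {k: " ".join(v).strip() for k, v in sections.items()}

example = """SUMMARY: Extreme latency due to connection pool exhaustion.
CAUSES:
- Connection pool was empty
RECOMMENDATIONS:
- Increase minimum pool size
CONFIDENCE: high"""
print(parse_analysis(example)["CONFIDENCE"])  # → high
```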
## Training Details
### Dataset
22 expert-curated examples of anomaly analysis from a production crypto exchange platform. Each example pairs real OpenTelemetry trace data with expert analysis that corrects common LLM hallucinations (e.g., citing "high CPU usage" when CPU is at 0.5%).
### LoRA Configuration
| Parameter | Value |
|---|---|
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
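As a sanity check, the trainable-parameter figure can be estimated from the LoRA rank and the base model's layer shapes: each adapted linear of shape out × in adds r·(in + out) parameters. The dimensions below (hidden size 1536, 28 layers, intermediate size 8960, GQA key/value dimension 256) are assumed from Qwen2.5-1.5B's published config, not stated in this card:

```python
# Estimated LoRA trainable parameters for r=16 on Qwen2.5-1.5B
# (model dimensions assumed from the base model's config, not this card)
r = 16
hidden, kv_dim, inter, layers = 1536, 256, 8960, 28

# (out_features, in_features) of each targeted projection
shapes = {
    "q_proj": (hidden, hidden), "k_proj": (kv_dim, hidden),
    "v_proj": (kv_dim, hidden), "o_proj": (hidden, hidden),
    "gate_proj": (inter, hidden), "up_proj": (inter, hidden),
    "down_proj": (hidden, inter),
}
total = layers * sum(r * (i + o) for o, i in shapes.values())
print(f"{total / 1e6:.2f}M")  # → 18.46M, consistent with the ~18.5M reported
```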
### Training Hyperparameters
| Parameter | Value |
|---|---|
| Epochs | 3 |
| Learning Rate | 2e-4 |
| Scheduler | Cosine |
| Warmup Ratio | 0.1 |
| Batch Size | 2 (micro) × 4 (grad accum) = 8 effective |
| Optimizer | AdamW |
| Sequence Length | 2048 |
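The learning-rate schedule above can be sketched in a few lines: linear warmup over the first 10% of steps to the 2e-4 peak, then cosine decay to zero. This is a generic cosine-with-warmup shape, not Axolotl's exact implementation:

```python
import math

def lr_at(step, total_steps, peak_lr=2e-4, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then cosine decay to zero."""
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1 + math.cos(math.pi * progress))

# Peak LR is reached right after warmup; it decays to zero by the final step
print(lr_at(10, 100))   # → 0.0002
print(lr_at(100, 100))  # → 0.0
```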
### Training Results
- Training Loss: 2.41
- Training Time: ~5 minutes on NVIDIA Turing GPU (sm_75)
- VRAM Usage: ~1.9GB training, ~6.8GB cache
## Usage
### With Ollama
```shell
ollama run anomaly-analyzer "Analyze: service latency 500ms, expected 50ms, CPU 0.1%"
```
### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("XavierThibaudon/anomaly-analyzer")
tokenizer = AutoTokenizer.from_pretrained("XavierThibaudon/anomaly-analyzer")

prompt = "Analyze anomaly: kx-exchange GET 500ms, expected 50ms, CPU 0.1%"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations
- Trained on a small dataset (22 examples) — results improve significantly with more training data
- Optimized for the KrystalineX platform's specific service topology
- Best results when prompts include correlated system metrics alongside trace data
- May hallucinate metric interpretations for scenarios not represented in training data
## Citation
```bibtex
@misc{krystalinex-anomaly-analyzer,
  title={KrystalineX Anomaly Analyzer},
  author={Xavier Thibaudon},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/XavierThibaudon/anomaly-analyzer}
}
```