XavierThibaudon committed · Commit ce964a6 · verified · 1 parent: c544885

Upload folder using huggingface_hub

Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 - crypto-exchange
 - lora
 - fine-tuned
-base_model: Qwen/Qwen2.5-1.5B-Instruct
+base_model: meta-llama/Llama-3.2-1B-Instruct
 pipeline_tag: text-generation
 ---
 
@@ -22,9 +22,9 @@ A fine-tuned language model for analyzing performance anomalies in distributed c
 
 | Property | Value |
 |---|---|
-| **Base Model** | [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) |
+| **Base Model** | [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) |
 | **Method** | LoRA (Low-Rank Adaptation) |
-| **Trainable Parameters** | 1.56M / 1.56B (1.18%) |
+| **Trainable Parameters** | 1.56M / 1.24B (0.13%) |
 | **Training Framework** | [Axolotl](https://github.com/axolotl-ai-cloud/axolotl) |
 | **Precision** | BF16 with 8-bit quantized base |
 | **License** | Apache 2.0 |
@@ -142,7 +142,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 - Trained on 222 examples (22 real + 200 synthetic) — results continue to improve with more real-world data
 - Optimized for the KrystalineX platform's specific service topology (kx-exchange, kx-wallet, api-gateway, order-matcher)
 - Best results when prompts include correlated system metrics alongside trace data
-- Small 1.5B model may not always follow strict output formatting — the parser handles free-form responses gracefully
+- Small 1B model may not always follow strict output formatting — the parser handles free-form responses gracefully
 - May hallucinate metric interpretations for scenarios not represented in training data
 
 ## Citation
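
The one-line `base_model` fix is load-bearing: PEFT tooling resolves the adapter's backbone from this metadata, so a stale value attaches the 1.56M LoRA weights to the wrong model. Below is a minimal loading sketch against the corrected base; the adapter repo id is a hypothetical placeholder, since the commit page does not name the repository, and this is not the model card's official snippet.

```python
# Minimal sketch: load the LoRA adapter on the corrected base model.
# "XavierThibaudon/kx-anomaly-lora" is a hypothetical placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"    # must match the adapter's base_model metadata
adapter_id = "XavierThibaudon/kx-anomaly-lora"  # placeholder, see note above

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # overlays the 1.56M LoRA weights

prompt = "Analyze: p99 latency spike on order-matcher with correlated GC pause metrics."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```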
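
The corrected Trainable Parameters row also checks out arithmetically: 1.56M trainable out of roughly 1.24B total is about 0.13%. A sketch that reproduces the figure, reusing `model` from the loading example above:

```python
# Sanity-check the table's "Trainable Parameters" row; reuses `model` from above.
trainable, total = model.get_nb_trainable_parameters()
print(f"{trainable / 1e6:.2f}M / {total / 1e9:.2f}B ({100 * trainable / total:.2f}%)")
# Expected for Llama-3.2-1B plus this adapter: ~1.56M / ~1.24B (~0.13%)
model.print_trainable_parameters()  # PEFT's built-in one-line summary
```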