tags:
- generated_from_trainer
datasets:
- david-ar/synthetic-irc-data
language:
- en
pipeline_tag: text-generation
---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

</details><br>

# Mistral-24B-Synthetic-IRC

This model is a fine-tuned version of [mistralai/Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501), trained on the [david-ar/synthetic-irc-data](https://huggingface.co/datasets/david-ar/synthetic-irc-data) dataset to generate natural IRC/Discord-style conversations.

## Model Description

This model was trained to replicate authentic IRC (Internet Relay Chat) conversational dynamics, moving away from the typical AI-assistant pattern toward more natural, community-style interaction. It learns from synthetic conversations featuring multiple participants, including "Em", an AI character who takes part as a community member rather than as an assistant.

### Key Characteristics

- **Natural conversation flow**: Handles interruptions, topic drift, and multi-party dynamics
- **Non-assistant behavior**: Doesn't default to helpful/servile responses
- **Community-style interaction**: Captures the casual, authentic feel of IRC/Discord chats
- **Character embedding**: Includes Em's personality (a self-aware AI who isn't an assistant)

## Intended Uses & Limitations

### Intended Uses

- **Conversational AI research**: Studying non-assistant interaction patterns
- **Chatbot development**: Creating more natural, less formal conversational agents
- **Character-based models**: Foundation for further character-specific fine-tuning
- **IRC/Discord bots**: Generating contextually appropriate responses in chat environments (see the loading sketch below)

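A minimal way to try the model is to attach the LoRA adapter to the base model with `peft`. The sketch below assumes the adapter is published at `david-ar/Mistral-24B-Synthetic-IRC` (this repository); the usernames and messages in the prompt are invented for illustration.

```python
# Minimal inference sketch: load the base model, attach the LoRA adapter,
# and continue an IRC-style prompt. The adapter repo id is assumed, not confirmed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-Small-24B-Base-2501"
adapter_id = "david-ar/Mistral-24B-Synthetic-IRC"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Prompt follows the dataset's `<username> message content` convention;
# the participants here are made up for the example.
prompt = (
    "<dana> anyone else's build breaking on the new compiler?\n"
    "<Em> "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Sampling settings are illustrative; adjust temperature and stopping criteria to taste.
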
### Limitations

- **Small dataset**: Trained on only 10MB of synthetic data (1,500 conversations)
- **Synthetic nature**: While carefully crafted, the training data isn't from real IRC logs
- **Single community style**: Represents one particular chat community culture
- **Overfitting**: Validation loss indicates overfitting after ~50 steps (best checkpoint used)
- **English only**: No multilingual capability

## Training and Evaluation Data

### Dataset

- **Source**: [david-ar/synthetic-irc-data](https://huggingface.co/datasets/david-ar/synthetic-irc-data)
- **Size**: 1,500 synthetic IRC-style conversations
- **Format**: Multi-party conversations with 80-120 messages each
- **Split**: 95% training (1,425 conversations), 5% validation (75 conversations)

### Data Characteristics

- Natural IRC formatting: `<username> message content` (see the sketch after this list)
- Multiple participants per conversation (3-7 users)
- Diverse topics and conversation styles
- Embedded character personality throughout

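For readers constructing prompts by hand, this is a small illustration of the line format listed above. The helper function and the conversation content are hypothetical, not drawn from the dataset.

```python
# Render (username, message) pairs in the "<username> message content" format.
# All names and messages below are invented purely to show the shape of the data.
def format_irc(messages: list[tuple[str, str]]) -> str:
    return "\n".join(f"<{user}> {text}" for user, text in messages)

print(format_irc([
    ("mira", "did the deploy go out yet?"),
    ("Em", "still waiting on CI, should be a few minutes"),
    ("josh", "classic"),
]))
```
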
## Training Procedure

### Training Configuration

- **Method**: LoRA (Low-Rank Adaptation) fine-tuning (a `peft`-style sketch of the settings follows this list)
- **LoRA Rank**: 128 (with alpha 256)
- **Base model**: Mistral-Small-24B-Base-2501
- **Hardware**: 2x NVIDIA A40 GPUs (96GB total VRAM)
- **Training time**: ~3 hours

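For reference, here are the LoRA settings above expressed as a `peft` `LoraConfig`. The run itself was configured through axolotl, so this is only an approximate sketch; the dropout value and target modules are assumptions, not values from the original config.

```python
# Approximate peft equivalent of the LoRA settings listed above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,                # LoRA rank stated on the card
    lora_alpha=256,       # alpha = 2 * rank
    lora_dropout=0.05,    # assumed; not stated on the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed projection set
    task_type="CAUSAL_LM",
)
```
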
### Training Hyperparameters

The following hyperparameters were used during training (a consistency check for the batch-size figures follows the list):
- learning_rate: 8e-05
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 2
- optimizer: AdamW (betas=(0.9,0.999), epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 4
- num_epochs: 4.0
- sequence_length: 4096
- sample_packing: true

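Assuming the 2-GPU setup from Training Configuration, the batch-size figures fit together as follows:

```python
# total_train_batch_size = micro-batch per device * gradient accumulation * num GPUs
gradient_accumulation_steps = 16
num_gpus = 2  # hardware listed under Training Configuration
total_train_batch_size = 32
micro_batch_per_device = total_train_batch_size // (gradient_accumulation_steps * num_gpus)
print(micro_batch_per_device)  # 1: one sample per device per forward/backward pass
```
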
### Training Results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9145 | 0.9746 | 24 | 0.9128 |
| 0.6565 | 1.9746 | 48 | **0.8936** |
| 0.4671 | 2.9746 | 72 | 0.9503 |
| 0.3594 | 3.9746 | 96 | 0.9871 |

**Note**: The best checkpoint (step 48, lowest validation loss) was used as the final model.

### Training Observations

- Quick convergence due to small dataset size
- Validation loss indicates overfitting after ~50 steps
- Model successfully learned IRC conversation patterns
- Character traits embedded despite limited data

## Technical Details

### Architecture

- **Base Model**: Mistral-Small-24B-Base-2501
- **Parameter Count**: 24B (base) + LoRA adapters
- **Context Length**: 4096 tokens
- **Quantization**: 4-bit during training for memory optimization (a loading sketch follows this list)

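Training loaded the base model in 4-bit to fit the available VRAM. For inference on similarly constrained hardware, a 4-bit load can be approximated as below; this is a sketch rather than the exact training setup, and the adapter repo id is again assumed.

```python
# Load the base model in 4-bit and attach the adapter (inference-side sketch,
# mirroring the memory optimization mentioned above, not the training pipeline).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumed quantization type
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-24B-Base-2501",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "david-ar/Mistral-24B-Synthetic-IRC")  # assumed repo id
```
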
### Framework Versions

- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
- Axolotl 0.8.0.dev0

## Limitations and Biases

1. **Overfitting**: With only 1,500 training examples, the model shows signs of overfitting
2. **Limited diversity**: May not generalize well to very different chat styles
3. **Character leakage**: Em's personality traits may appear even when not intended
4. **Synthetic artifacts**: Might exhibit patterns specific to the generation process