# Brain_Model_ACC_Trainer
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1245
## Model description
This model takes a user-provided description of a system and generates a comprehensive, formalized description of that system. By analyzing the input, it elaborates on the information provided, producing an extended depiction intended to improve the understanding and documentation of the described system.
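The exact prompt wording used during fine-tuning is not stated in this card, so the following is a minimal sketch of how an input might be wrapped in the Mistral-Instruct `[INST] ... [/INST]` format; the instruction text is a hypothetical placeholder:

```python
def build_prompt(system_description: str) -> str:
    """Wrap a user-provided system description in the Mistral-Instruct
    chat format. The instruction wording here is an assumption, not the
    card's actual training prompt; in practice the tokenizer's chat
    template should handle special tokens (e.g. BOS) for you."""
    instruction = (
        "Generate a comprehensive, formalized description of the "
        "following system:\n" + system_description
    )
    return f"[INST] {instruction} [/INST]"

prompt = build_prompt(
    "A wearable EEG headset that streams brain activity to a phone app."
)
print(prompt)
```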
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
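The total train batch size listed above follows from the per-device batch size and gradient accumulation; a minimal sketch of the arithmetic:

```python
# Hyperparameters from the card: with gradient accumulation, the effective
# (total) batch size is per-device batch size x accumulation steps.
train_batch_size = 1
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # matches the total_train_batch_size of 4 above
```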
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.0754 | 0.1 | 20 | 0.9337 |
| 0.5144 | 0.2 | 40 | 0.3974 |
| 0.2526 | 0.3 | 60 | 0.2197 |
| 0.193 | 0.4 | 80 | 0.1900 |
| 0.1939 | 0.5 | 100 | 0.1733 |
| 0.1695 | 0.6 | 120 | 0.1645 |
| 0.1739 | 0.7 | 140 | 0.1567 |
| 0.1511 | 0.8 | 160 | 0.1504 |
| 0.1484 | 0.9 | 180 | 0.1462 |
| 0.1419 | 1.0 | 200 | 0.1435 |
| 0.1467 | 1.1 | 220 | 0.1410 |
| 0.1315 | 1.2 | 240 | 0.1384 |
| 0.1344 | 1.3 | 260 | 0.1370 |
| 0.1411 | 1.4 | 280 | 0.1355 |
| 0.1338 | 1.5 | 300 | 0.1346 |
| 0.128 | 1.6 | 320 | 0.1324 |
| 0.1312 | 1.7 | 340 | 0.1328 |
| 0.1191 | 1.8 | 360 | 0.1312 |
| 0.1228 | 1.9 | 380 | 0.1301 |
| 0.1317 | 2.0 | 400 | 0.1291 |
| 0.1152 | 2.1 | 420 | 0.1291 |
| 0.1167 | 2.2 | 440 | 0.1285 |
| 0.1178 | 2.3 | 460 | 0.1283 |
| 0.1184 | 2.4 | 480 | 0.1268 |
| 0.118 | 2.5 | 500 | 0.1267 |
| 0.1104 | 2.6 | 520 | 0.1258 |
| 0.1128 | 2.7 | 540 | 0.1254 |
| 0.1113 | 2.8 | 560 | 0.1249 |
| 0.1174 | 2.9 | 580 | 0.1247 |
| 0.1081 | 3.0 | 600 | 0.1245 |
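As a quick sanity check on the table above, the validation loss falls from 0.9337 at step 20 to 0.1245 at step 600, a relative reduction of roughly 87%:

```python
# Validation losses taken directly from the training results table.
first_eval_loss = 0.9337  # step 20
final_eval_loss = 0.1245  # step 600

reduction = 1 - final_eval_loss / first_eval_loss
print(f"{reduction:.1%}")
```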
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- PyTorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
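A hypothetical environment setup pinning the versions listed above (the CUDA 12.4 wheel index for PyTorch is an assumption about your hardware; adjust as needed):

```shell
pip install "peft==0.15.2" "transformers==4.52.4" "datasets==2.14.4" "tokenizers==0.21.1"
pip install "torch==2.6.0" --index-url https://download.pytorch.org/whl/cu124
```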