---
base_model: SuperbEmphasis/The-Omega-Directive-12B-EVISCERATED
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---

omg it almost works!

I stripped out the 5 least-used layers, then ran SFT over 4 epochs with a high learning rate... and it's almost good!
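
For reference, here's a minimal sketch of one way to strip decoder layers with plain `transformers`. The source checkpoint and the layer indices are placeholders; the card doesn't say which 5 layers were removed or how they were picked.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: the unpruned source checkpoint isn't named in the card.
src = "path/to/original-12b-model"
model = AutoModelForCausalLM.from_pretrained(src, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(src)

# Placeholder indices for the 5 "least used" decoder layers.
drop = {10, 15, 22, 27, 31}

# Rebuild the decoder stack without the dropped layers.
model.model.layers = torch.nn.ModuleList(
    layer for i, layer in enumerate(model.model.layers) if i not in drop
)
model.config.num_hidden_layers = len(model.model.layers)

# Re-number the surviving layers so KV-cache indexing stays consistent.
for i, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = i

model.save_pretrained("EVISCERATED")
tokenizer.save_pretrained("EVISCERATED")
```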

My goal is to make a new Velvet Eclipse with these "less used" parameters stripped out, reducing the size significantly to allow for higher inference speed and more room for context.


# Notes
```
per_device_train_batch_size = 10,
gradient_accumulation_steps = 4,   # effective batch size of 40
num_train_epochs = 4,
learning_rate = 5e-4,              # deliberately high; drop toward 2e-5 for longer runs
```
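
Put together, the run looked roughly like the standard Unsloth SFT recipe below. This is a reconstruction around the settings above, not the exact script: the dataset, LoRA rank, target modules, sequence length, and 4-bit loading are all assumptions, and the `SFTTrainer` call follows the older trl/Unsloth notebook signature.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="SuperbEmphasis/The-Omega-Directive-12B-EVISCERATED",
    max_seq_length=4096,   # assumption
    load_in_4bit=True,     # assumption
)

# LoRA adapter; rank and target modules are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset with a pre-formatted "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=10,
        gradient_accumulation_steps=4,
        num_train_epochs=4,
        learning_rate=5e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```
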
# Uploaded finetuned model

- **Developed by:** SuperbEmphasis
- **License:** apache-2.0
- **Finetuned from model:** SuperbEmphasis/The-Omega-Directive-12B-EVISCERATED

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)