Update README.md

README.md — changed

**Previous version:**

---
base_model:
- Or4cl3-1/Daedalus_1
- microsoft/Phi-3-mini-4k-instruct
tags:
- merge
- mergekit
- lazymergekit
- Or4cl3-1/Daedalus_1
- microsoft/Phi-3-mini-4k-instruct
---

# BathSalt-llama-3.1-slerp

BathSalt-llama-3.1-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [Or4cl3-1/Daedalus_1](https://huggingface.co/Or4cl3-1/Daedalus_1)
* [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: Or4cl3-1/Daedalus_1
        layer_range: [0, 32]
      - model: microsoft/Phi-3-mini-4k-instruct
        layer_range: [0, 32]
merge_method: slerp
base_model: microsoft/Phi-3-mini-4k-instruct
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
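
To reproduce the merge outside the Colab notebook, this configuration can be passed to mergekit directly. A minimal sketch, following the usage shown in mergekit's own examples and assuming the YAML above is saved as `config.yaml` (the file name, output path, and option values here are assumptions, not part of this card):

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML merge configuration shown above (file name is an assumption).
with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the slerp merge and write the merged model to ./merged.
run_merge(
    merge_config,
    out_path="merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is present
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=True,              # stream weights to reduce peak memory
    ),
)
```

The resulting directory can then be loaded locally with `transformers` or uploaded to the Hub.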

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "BathSalt-1/
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
```

**Updated version:**

**Model Card**

**Model Name:** BathSalt-1/daedalus-phi-3

**Model Type:** Large Language Model

**Description:** This model is a merge of the `Or4cl3-1/Daedalus_1` and `microsoft/Phi-3-mini-4k-instruct` models using the `LazyMergekit` library. It is designed for general-purpose natural language processing tasks.

**Metadata:**

* **License:** MIT License
* **Language:** English
* **Library:** Transformers
* **Base Model:** microsoft/Phi-3-mini-4k-instruct
* **Merge Method:** slerp
* **Layer Range:** [0, 32]
* **Parameters:** the slerp interpolation schedule (see the sketch below):

      t:
        - filter: self_attn
          value: [0, 0.5, 0.3, 0.7, 1]
        - filter: mlp
          value: [1, 0.5, 0.7, 0.3, 0]
        - value: 0.5

* **dtype:** bfloat16
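
The `t` values define a per-module interpolation schedule across the layer range: values near 0 keep one parent model's weights and values near 1 the other's, with self-attention and MLP blocks blended on opposite ramps and all remaining tensors fixed at 0.5. As a rough illustration of what the slerp merge method computes for a single pair of weight tensors (a simplified sketch, not mergekit's actual implementation):

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two same-shaped weight tensors.
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Angle between the two weight vectors on the unit sphere.
    cos_omega = torch.dot(v0 / (v0.norm() + eps), v1 / (v1.norm() + eps))
    omega = torch.acos(cos_omega.clamp(-1.0 + eps, 1.0 - eps))
    so = torch.sin(omega)
    if so.item() < eps:
        # Nearly parallel weights: fall back to plain linear interpolation.
        merged = (1.0 - t) * v0 + t * v1
    else:
        merged = (torch.sin((1.0 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1
    return merged.reshape(w0.shape).to(w0.dtype)

# t = 0 returns the first tensor, t = 1 the second, t = 0.5 the midpoint on the arc.
```

Interpolating along the sphere rather than the chord keeps the merged weights' norm closer to the parents', which is the usual motivation for slerp over plain averaging.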

**Usage:**

* **Tokenizer:** AutoTokenizer
* **Model:** AutoModelForCausalLM
* **Pipeline:** text-generation
* **Device:** auto

**Example Code:**

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "BathSalt-1/daedalus-phi-3"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
```
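
The diff ends at the tokenizer load; the standard LazyMergekit usage template continues by building a `text-generation` pipeline, roughly as follows (the sampling settings are the template's illustrative defaults, not values specified by this card):

```python
# Format the chat messages into a prompt string using the model's chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# device_map="auto" matches the card's "Device: auto" and spreads layers
# across the available GPUs/CPU.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```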