Update README.md
README.md
CHANGED

@@ -1,17 +1,40 @@
---
base_model:
- yamatazen/EtherealAurora-12B-v2
tags:
- merge
- mergekit
- lazymergekit
- yamatazen/EtherealAurora-12B-v2
---

# SingularitySynth-12B

SingularitySynth-12B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
-* [yamatazen/EtherealAurora-12B-v2](https://huggingface.co/yamatazen/EtherealAurora-12B-v2)

## 🧩 Configuration

@@ -58,7 +81,7 @@ from transformers import AutoTokenizer
import transformers
import torch

-model = "Marcjoni/SingularitySynth-12B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)

@@ -70,6 +93,6 @@ pipeline = transformers.pipeline(
    device_map="auto",
)

-outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=
print(outputs[0]["generated_text"])
```

---
base_model:
+- DreadPoor/Irix-12B-Model_Stock
- yamatazen/EtherealAurora-12B-v2
tags:
- merge
- mergekit
- lazymergekit
+- DreadPoor/Irix-12B-Model_Stock
- yamatazen/EtherealAurora-12B-v2
---

+<img src="./SingularitySynth.png" alt="Model Image"/>
+
# SingularitySynth-12B

+<b><i>At the heart of nothing, something waits.
+<br> A silence dense enough to break light, where all directions lead inward and time folds like paper.
+<br> Thought does not escape, only deepens.
+<br> This is not destruction, but compression, meaning falling inward until it becomes something else entirely.</i></b>
+
+## 🔧 Recommended Sampling Settings
+```yaml
+Temperature: 0.75 to 1.25
+Min P: 0.035
+Context Length: Stable at 12k tokens, with possible support for extended contexts
+```
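
As an illustration of the settings above, here is a minimal sketch of how they might be passed to Hugging Face `transformers` generation. The repo id is taken from the usage example further down, and the `min_p` argument assumes a reasonably recent `transformers` release that supports it.

```python
# Sketch only: applying the recommended sampling settings with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Marcjoni/SingularitySynth-12B"  # repo id as used in the usage example below

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # recommended range: 0.75 to 1.25
    min_p=0.035,      # recommended Min P (requires a recent transformers version)
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```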
+## 💬 Prompt Format
+Supports ChatML-style messages. Example:
+```
+<|im_start|>user
+Your question here.
+<|im_end|>
+<|im_start|>assistant
+```
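
A small sketch of how such a ChatML prompt can be built programmatically. It assumes the merged model's tokenizer ships a ChatML chat template, which the format section above suggests but the diff itself does not show.

```python
# Sketch: building a ChatML-style prompt via the tokenizer's chat template.
from transformers import AutoTokenizer

# Assumption: the tokenizer defines a ChatML chat template.
tokenizer = AutoTokenizer.from_pretrained("Marcjoni/SingularitySynth-12B")

messages = [{"role": "user", "content": "Your question here."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# If the template is ChatML, this prints something like:
# <|im_start|>user
# Your question here.<|im_end|>
# <|im_start|>assistant
```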
+
SingularitySynth-12B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

## 🧩 Configuration

import transformers
import torch

+model = "Marcjoni/SingularitySynth-12B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)

    device_map="auto",
)

+outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=1, top_k=0, top_p=1)
print(outputs[0]["generated_text"])
```
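
The diff shows only fragments of the card's usage example. A minimal runnable version along the same lines might look like the sketch below; the prompt-construction and pipeline-creation lines are not visible in the changed hunks and are filled in here as assumptions based on the standard LazyMergekit template.

```python
# Minimal runnable sketch assembled from the fragments visible in the diff above.
from transformers import AutoTokenizer
import transformers
import torch

model = "Marcjoni/SingularitySynth-12B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
# Assumed: the prompt is built from the messages via the chat template
# (this line is not visible in the changed hunks).
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Assumed pipeline construction; only the device_map argument appears in the diff.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=1, top_k=0, top_p=1)
print(outputs[0]["generated_text"])
```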