Update README.md
README.md CHANGED
@@ -1,46 +1,15 @@
 ---
-
-
--
-- lazymergekit
----
-
-#
-
-
-
-
-
-```yaml
-models:
-  - model: Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total+ResplendentAI/Aura_Llama3
-  - model: Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total+ResplendentAI/Luna_Llama3
-merge_method: model_stock
-base_model: Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total
-dtype: float16
-```
-
-
-
-```python
-!pip install -qU transformers accelerate
-
-from transformers import AutoTokenizer
-import transformers
-import torch
-
-model = "jeiku/Orthocopter_8B"
-messages = [{"role": "user", "content": "What is a large language model?"}]
-
-tokenizer = AutoTokenizer.from_pretrained(model)
-prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
-pipeline = transformers.pipeline(
-    "text-generation",
-    model=model,
-    torch_dtype=torch.float16,
-    device_map="auto",
-)
-
-outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
-print(outputs[0]["generated_text"])
-```
+license: apache-2.0
+language:
+- en
+---
+
+# Orthocopter
+
+
+
+This model exists thanks to the hard work of lucyknada with the Edgerunners. Her work produced the following model, which I used as the base:
+
+https://huggingface.co/Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total
+
+I then applied two handwritten datasets on top of this, and the results are pretty nice, with no refusals and plenty of personality.
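For anyone who wants to reproduce the merge recipe this commit removes from the README, below is a minimal sketch of running that same model_stock config through mergekit. It is not part of the commit: the config string is the exact YAML from the removed lines, but the Python API shown (`MergeConfiguration`, `MergeOptions`, `run_merge`) is assumed from mergekit's documented usage, and the output directory name is a placeholder.

```python
# Minimal sketch: reproduce the removed model_stock merge with mergekit.
# Assumes `pip install mergekit`; API names follow mergekit's documented
# Python usage and the output path is a placeholder, not from this commit.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# The exact YAML config removed from the README in this commit.
CONFIG = """
models:
  - model: Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total+ResplendentAI/Aura_Llama3
  - model: Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total+ResplendentAI/Luna_Llama3
merge_method: model_stock
base_model: Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total
dtype: float16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG))

run_merge(
    merge_config,
    "./orthocopter-merge",               # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # carry the base tokenizer along
    ),
)
```

The `base+adapter` paths in the `models:` list are mergekit's notation for applying a LoRA to the base model before the model_stock merge.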
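The commit also drops the old usage snippet entirely. For reference, here is that removed example, unchanged in substance (standard transformers pipeline inference with the chat template applied), with comments added:

```python
# Adapted from the usage example removed in this commit: standard
# transformers text-generation pipeline inference against the merged model.
import torch
import transformers
from transformers import AutoTokenizer

model = "jeiku/Orthocopter_8B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages into the model's Llama 3 prompt format.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,  # matches the merge's float16 dtype
    device_map="auto",          # place layers on available GPUs/CPU
)

# Sample a response and print the full generated text.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```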