### Model Sources

- **Repository:** [Epos-8B on Hugging Face](https://huggingface.co/P0x0/Epos-8B)
- **GGUF Variant Repository:** [Epos-8B-GGUF](https://huggingface.co/P0x0/Epos-8B-GGUF)

---
To run the quantized version of the model, you can use [KoboldCPP](https://github.com/LostRuins/koboldcpp):

3. Download the GGUF variant of Epos-8B from [Epos-8B-GGUF](https://huggingface.co/P0x0/Epos-8B-GGUF).
4. Load the model in KoboldCPP and start generating!
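For a command-line setup, the download and launch steps above can be sketched as follows. This is a sketch, not part of the card: the GGUF filename is hypothetical (pick one actually listed in the repo), and it assumes `huggingface-cli` is installed and KoboldCPP is run from a checkout of its repository.

```shell
# Fetch a quantized GGUF file from the Epos-8B-GGUF repo.
# (Filename is hypothetical; use one listed in the repository.)
huggingface-cli download P0x0/Epos-8B-GGUF epos-8b-q4_k_m.gguf --local-dir ./models

# Launch KoboldCPP with the downloaded model (from a KoboldCPP checkout).
python koboldcpp.py --model ./models/epos-8b-q4_k_m.gguf
```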
Alternatively, integrate the model directly into your code with the following snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("P0x0/Epos-8B")
model = AutoModelForCausalLM.from_pretrained("P0x0/Epos-8B")

input_text = "Once upon a time in a distant land..."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
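The GGUF variant can also be loaded in code rather than through KoboldCPP. A minimal sketch using the third-party `llama-cpp-python` library — an assumption on our part, not a tool this card mentions; the library must be installed separately and the model path below is hypothetical:

```python
from llama_cpp import Llama

# Load a GGUF file previously downloaded from P0x0/Epos-8B-GGUF
# (hypothetical filename; use whichever quantization you downloaded).
llm = Llama(model_path="./models/epos-8b-q4_k_m.gguf")

# Complete a storytelling prompt, capping the response length.
out = llm("Once upon a time in a distant land...", max_tokens=128)
print(out["choices"][0]["text"])
```

Compared to the transformers snippet above, this runs the quantized weights on CPU by default, which is the typical reason to reach for the GGUF variant.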