How to use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Envoid/Cybil-13B")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Envoid/Cybil-13B")
model = AutoModelForCausalLM.from_pretrained("Envoid/Cybil-13B")

Warning: This model may output Adult Content.

Cybil-13B was created with a series of SLERP merges between

Llama-2-13b-chat

sauce1337/BerrySauce-L2-13b

Undi95/MLewd-L2-13B-v2-3

Gryphe/MythoMax-L2-13b

and an unreleased 13B experimental model of mine.

The end result seems very stable and excels at a wide range of general tasks, from role play to writing simple Python scripts.

It responds well to the Libra-32B SillyTavern format as well as Alpaca Instruct-style formatting, e.g.:

### Instruction:
Do a thing.
### Response:
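The Alpaca Instruct template above can be assembled programmatically before being passed to the pipeline or `model.generate()`. A minimal sketch (the helper name is illustrative, not part of the model card):

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap an instruction in the Alpaca Instruct template shown above."""
    return f"### Instruction:\n{instruction}\n### Response:\n"

prompt = build_alpaca_prompt("Do a thing.")
# e.g. pipe(prompt) with the pipeline created earlier
```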

Thanks to its Llama-2-chat DNA it does, in rare instances, produce refusals, which can usually be overcome by simply regenerating the response.
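Since refusals are rare and usually disappear on a retry, the regenerate-on-refusal workflow can be automated with a simple retry loop. This is only a sketch: the refusal markers are a rough heuristic, and `generate` stands in for any sampling call (such as the pipeline above) that returns a string:

```python
# Heuristic refusal prefixes; purely illustrative, not from the model card.
REFUSAL_MARKERS = ("I cannot", "I'm sorry", "As an AI")

def generate_with_retries(generate, prompt, max_tries=3):
    """Call `generate` (any prompt -> text callable, e.g. a sampling
    pipeline) until the reply doesn't look like a refusal, up to
    `max_tries` attempts; returns the last reply either way."""
    reply = ""
    for _ in range(max_tries):
        reply = generate(prompt)
        if not any(reply.lstrip().startswith(m) for m in REFUSAL_MARKERS):
            return reply
    return reply
```

Because the model only refuses occasionally, a couple of retries with sampling enabled is normally enough to get past it.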
