
This is the original FP16 result of the model, created using chargoddard's frankenllama script, so that others interested in further experimentation with the results may do so.

WARNING: this model is very unpredictable.

This model is an experiment using the frankenstein script from https://huggingface.co/chargoddard/llama2-22b, except I decided to use it with two models that have already been extensively finetuned: https://huggingface.co/TheBloke/Llama-2-13B-Chat-fp16 as the base model and https://huggingface.co/Aeala/Enterredaas-33b as the donor model.

The resulting model is surprisingly coherent and still responds well to the Llama-2-chat prompt format ([INST]<<SYS>><</SYS>>[/INST]). It retains most of Llama-2-chat's bubbly/giddy personality, but is more gritty and visceral. It makes occasional "typos" along with some other quirks, so it did not come through the frankensteining process completely unscathed. I plan to massage it with a LoRA in the near future to bring it into more harmony, but in the meantime it is available now for your enjoyment.
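As a quick sketch of the prompt format mentioned above, here is a minimal helper that assembles a Llama-2-chat style prompt. The tag strings are the standard Llama-2 ones; the helper name and exact whitespace are assumptions, and a given loader or chat template may expect slightly different spacing.

```python
# Minimal sketch of the Llama-2-chat prompt format ([INST]<<SYS>><</SYS>>[/INST]).
# build_llama2_chat_prompt is a hypothetical helper, not part of any library.
def build_llama2_chat_prompt(system: str, user: str) -> str:
    # System message is wrapped in <<SYS>> tags inside the first [INST] block.
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_chat_prompt(
    "You are a helpful assistant.",
    "Write a short greeting.",
)
print(prompt)
```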

Use cases: chat/RP, not much else.

Safetensors · Model size: 22B params · Tensor types: F32, F16

Model tree for Envoid/MindFlay-22B: Quantizations (1 model)