Use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ResplendentAI/Asherah_7B")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ResplendentAI/Asherah_7B")
model = AutoModelForCausalLM.from_pretrained("ResplendentAI/Asherah_7B")
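Putting the snippets above together, here is a minimal generation sketch. The prompt template is an assumption (Mistral-style `[INST]` tags, since the base models are Mistral merges); the card does not state the model's preferred format, and the sampling parameters are only illustrative.

```python
from transformers import pipeline


def build_prompt(user_msg: str) -> str:
    # Assumption: Mistral-style [INST] wrapping. The card does not document
    # the model's prompt template, so verify this before relying on it.
    return f"[INST] {user_msg} [/INST]"


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Note: this downloads roughly 14 GB of BF16 weights on first use.
    pipe = pipeline("text-generation", model="ResplendentAI/Asherah_7B")
    out = pipe(
        prompt,
        max_new_tokens=max_new_tokens,
        do_sample=True,       # illustrative sampling settings
        temperature=0.8,
    )
    return out[0]["generated_text"]
```

Usage would then be `generate(build_prompt("Tell me about Asherah."))`.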
Quick Links

Asherah

GGUF here: https://huggingface.co/Lewdiculous/Asherah_7B-GGUF-IQ-Imatrix


Asherah, goddess of all creation according to ancient myth, was a huge inspiration for this model. The model started as a merge of four of Sanji Watsuki's models, combined using various methods. That merge was then finetuned on Gnosis and Synthetic Soul, two datasets I penned myself.

You can use this as the mmproj (multimodal projector) file: https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf

I have also included a folder in the repo containing this file; multimodal GGUF users will need it. I recommend Koboldcpp.

Multimodal functionality is limited to GGUF users at this time, but you can still use this model as a standard LLM.
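As a sketch of the multimodal setup described above: Koboldcpp can be launched with both the quantized model and the mmproj file. The filenames and flags below are assumptions based on current Koboldcpp releases (the quant filename is a placeholder for whichever GGUF you downloaded) — check `python koboldcpp.py --help` for your version.

```shell
# Launch Koboldcpp with a quantized Asherah GGUF plus the LLaVA mmproj
# file so image inputs work. Filenames are placeholders.
python koboldcpp.py \
  --model Asherah_7B.Q4_K_M.gguf \
  --mmproj mmproj-model-f16.gguf \
  --contextsize 8192
```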

Downloads last month: 92
Model size: 7B params (Safetensors)
Tensor type: BF16

Model tree for ResplendentAI/Asherah_7B
Merges: 6 models
Quantizations: 4 models

Datasets used to train ResplendentAI/Asherah_7B: Gnosis and Synthetic Soul
