How to use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="oliverbob/openbible",
	filename="*.gguf",  # glob pattern matching the GGUF file in the repo
)
llm.create_chat_completion(
	messages=[
		# Illustrative prompt; the model card defines no input example.
		{"role": "user", "content": "What does John 3:16 say?"},
	]
)
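`create_chat_completion` returns an OpenAI-style completion dict. A minimal sketch of extracting the assistant's reply from that structure (the content value below is illustrative, not actual model output):

```python
# OpenAI-style shape of the dict returned by create_chat_completion;
# the "content" value here is a placeholder, not real model output.
completion = {
    "choices": [
        {"message": {"role": "assistant", "content": "For God so loved the world..."}}
    ]
}

# The reply text lives at choices[0]["message"]["content"].
reply = completion["choices"][0]["message"]["content"]
print(reply)
```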

Uploaded model

  • Developed by: oliverbob
  • License: apache-2.0
  • Finetuned from model: tla v1 chat

BIBLE AI

language:

  • en

license: apache-2.0

tags:

  • text-generation-inference
  • transformers
  • unsloth
  • tla architecture

base_model: tla

Trained from OpenBible Dataset

  • Developed by: oliverbob
  • License: apache-2.0
  • Date: Day of hearts, 2024
  • โค๏ธ God is love and God is good! ๐Ÿ˜„

Enjoy!!

This repository holds the model for /bibleai. See the generated GGUF at /biblegpt.

Downloads last month: 38
Format: GGUF
Model size: 1B params
Architecture: llama