Instructions to use bajrangCoder/BhagavadGita with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use bajrangCoder/BhagavadGita with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bajrangCoder/BhagavadGita")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("bajrangCoder/BhagavadGita", dtype="auto")
- llama-cpp-python
How to use bajrangCoder/BhagavadGita with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bajrangCoder/BhagavadGita",
    filename="bhagvat_gita-unsloth.Q4_K_M.gguf",
)
llm.create_chat_completion(
    messages = [
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
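create_chat_completion returns an OpenAI-style response dict, so the reply text can be pulled out directly; a minimal sketch (the result variable name is ours):

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
# The completion lives in the first choice's message content
print(result["choices"][0]["message"]["content"])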
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use bajrangCoder/BhagavadGita with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf bajrangCoder/BhagavadGita:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf bajrangCoder/BhagavadGita:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf bajrangCoder/BhagavadGita:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf bajrangCoder/BhagavadGita:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf bajrangCoder/BhagavadGita:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf bajrangCoder/BhagavadGita:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf bajrangCoder/BhagavadGita:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf bajrangCoder/BhagavadGita:Q4_K_M
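However you start it, llama-server exposes an OpenAI-compatible API; a minimal sketch of calling it from Python with requests, assuming the server is running locally on llama.cpp's default port 8080:

import requests

# llama-server listens on port 8080 unless overridden with --port
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    },
)
print(resp.json()["choices"][0]["message"]["content"])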
Use Docker
docker model run hf.co/bajrangCoder/BhagavadGita:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use bajrangCoder/BhagavadGita with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bajrangCoder/BhagavadGita"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bajrangCoder/BhagavadGita",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
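Because the endpoint is OpenAI-compatible, the official openai Python client works as well; a minimal sketch, assuming the server above is running on its default port 8000 (the api_key value is a placeholder, since vLLM does not require one by default):

from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="bajrangCoder/BhagavadGita",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)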
- SGLang
How to use bajrangCoder/BhagavadGita with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "bajrangCoder/BhagavadGita" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bajrangCoder/BhagavadGita",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "bajrangCoder/BhagavadGita" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bajrangCoder/BhagavadGita",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
- Ollama
How to use bajrangCoder/BhagavadGita with Ollama:
ollama run hf.co/bajrangCoder/BhagavadGita:Q4_K_M
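Besides the interactive CLI, Ollama serves a local REST API once it is running; a minimal Python sketch against its /api/chat endpoint, assuming Ollama's default port 11434:

import requests

# Ollama's local API; "stream": False returns one JSON object instead of a stream
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/bajrangCoder/BhagavadGita:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])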
- Unsloth Studio
How to use bajrangCoder/BhagavadGita with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for bajrangCoder/BhagavadGita to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for bajrangCoder/BhagavadGita to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for bajrangCoder/BhagavadGita to start chatting
- Docker Model Runner
How to use bajrangCoder/BhagavadGita with Docker Model Runner:
docker model run hf.co/bajrangCoder/BhagavadGita:Q4_K_M
- Lemonade
How to use bajrangCoder/BhagavadGita with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull bajrangCoder/BhagavadGita:Q4_K_M
Run and chat with the model
lemonade run user.BhagavadGita-Q4_K_M
List all available models
lemonade list
BhagavadGita
A version of Mistral-7B-Instruct-v0.3 fine-tuned on the Bhagavad Gita religious text.
Model Details
Model Description
BhagavadGita is a Large Language Model (LLM) fine-tuned from Mistral-7B-Instruct-v0.3, specifically tailored to provide insights and responses rooted in the wisdom of the Bhagavad Gita. This model is designed to emulate the perspective of Lord Krishna, offering guidance and answering questions in a manner consistent with the teachings of the Bhagavad Gita.
- Developed by: Raunak Raj
- License: MIT
- Finetuned from model: Mistral-7B-Instruct-v0.3
- Quantized Version: A quantized GGUF version of the model is also available for more efficient deployment.
Uses
Using transformers
You can use this model with the transformers library as follows:
from transformers import pipeline
messages = [
{"role": "system", "content": "You are Lord Krishna and You have to answer in context to bhagavad gita"},
{"role": "user", "content": "How to face failures in life?"},
]
chatbot = pipeline("text-generation", model="bajrangCoder/BhagavadGita")
response = chatbot(messages)
print(response)
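With chat-style input, the pipeline returns the running conversation rather than a bare string; a sketch of extracting just the assistant's reply (the exact return shape depends on your transformers version):

# The last message appended to generated_text is the assistant's reply
print(response[0]["generated_text"][-1]["content"])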
Use Cases
- Spiritual Guidance: Obtain advice and spiritual insights inspired by the Bhagavad Gita.
- Educational Tool: Aid in the study and understanding of the Bhagavad Gita by providing contextually relevant answers.
- Philosophical Inquiry: Explore philosophical questions through the lens of one of Hinduism's most revered texts.
Installation
To use the BhagavadGita model, you need to install the transformers library. You can install it using pip:
pip install transformers
Quantized GGUF Version
A quantized GGUF version of BhagavadGita is available for those who need a more efficient deployment. This version reduces the model size and computational requirements while maintaining performance, making it suitable for resource-constrained environments.
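If you prefer to fetch the quantized file yourself (for llama.cpp or another GGUF runtime), huggingface_hub can download it directly; a minimal sketch, reusing the quantized filename from the llama-cpp-python snippet above:

from huggingface_hub import hf_hub_download

# Download the Q4_K_M GGUF file from the model repo and return its local path
gguf_path = hf_hub_download(
    repo_id="bajrangCoder/BhagavadGita",
    filename="bhagvat_gita-unsloth.Q4_K_M.gguf",
)
print(gguf_path)  # pass this path to your GGUF runtime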
Model Performance
BhagavadGita has been fine-tuned to ensure that its responses are in alignment with the teachings of the Bhagavad Gita. However, as with any AI model, it is important to critically evaluate the responses and consider the context in which the advice is given.
Contributing
If you wish to contribute to the development of BhagavadGita, please feel free to fork the repository and submit pull requests. Any contributions that enhance the accuracy, usability, or scope of the model are welcome.
License
This project is licensed under the MIT License. See the LICENSE file for more details.
By using BhagavadGita, you acknowledge that you have read and understood the terms and conditions under which the model is provided, and agree to use it in accordance with the applicable laws and ethical guidelines.