Instructions for using rexprimematrix/RiShreAI with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use rexprimematrix/RiShreAI with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="rexprimematrix/RiShreAI",
    filename="Phi-3-mini-4k-instruct-q4.gguf",
)

# No input example is defined for this model task; the prompt below is a placeholder.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello! What can you do?"}
    ]
)
print(response["choices"][0]["message"]["content"])
```
- Notebooks
- Google Colab
- Kaggle
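Both notebook environments can run the llama-cpp-python snippet above after a `pip install`. Alternatively, you can fetch just the GGUF file with huggingface_hub and point any GGUF runtime at it; a minimal sketch of a Colab/Kaggle cell, assuming the filename shown elsewhere on this page:

```python
# In a Colab or Kaggle cell:
# !pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Downloads (and caches) the GGUF file from the repo; returns the local path.
gguf_path = hf_hub_download(
    repo_id="rexprimematrix/RiShreAI",
    filename="Phi-3-mini-4k-instruct-q4.gguf",
)
print(gguf_path)
```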
- Local Apps
- llama.cpp
How to use rexprimematrix/RiShreAI with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf rexprimematrix/RiShreAI

# Run inference directly in the terminal:
llama-cli -hf rexprimematrix/RiShreAI
```
Install from WinGet (Windows)
```powershell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf rexprimematrix/RiShreAI

# Run inference directly in the terminal:
llama-cli -hf rexprimematrix/RiShreAI
```
Use pre-built binary
```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf rexprimematrix/RiShreAI

# Run inference directly in the terminal:
./llama-cli -hf rexprimematrix/RiShreAI
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf rexprimematrix/RiShreAI

# Run inference directly in the terminal:
./build/bin/llama-cli -hf rexprimematrix/RiShreAI
```
Use Docker
```sh
docker model run hf.co/rexprimematrix/RiShreAI
```
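Whichever install path you chose, once `llama-server` is running it exposes an OpenAI-compatible API on port 8080 by default. A minimal sketch for querying it from Python, assuming the default local address and no API key:

```python
import requests

# Assumes llama-server is running locally on its default port 8080.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Hello! What can you do?"}
        ],
        "max_tokens": 128,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```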
- LM Studio
- Jan
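LM Studio and Jan are desktop apps: search for rexprimematrix/RiShreAI in their built-in model browsers to download and chat. Both can also serve the model over a local OpenAI-compatible API; the sketch below assumes LM Studio's server is enabled on its default port 1234 (adjust the URL for your setup or for Jan):

```python
import requests

# Assumes LM Studio's local server is running at its default address.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```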
- Ollama
How to use rexprimematrix/RiShreAI with Ollama:
```sh
ollama run hf.co/rexprimematrix/RiShreAI
```
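After `ollama run`, the model is also reachable through Ollama's local REST API (default port 11434). A minimal sketch, assuming the daemon is running with its defaults:

```python
import requests

# Assumes the Ollama daemon is listening on its default port 11434.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/rexprimematrix/RiShreAI",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```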
- Unsloth Studio
How to use rexprimematrix/RiShreAI with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for rexprimematrix/RiShreAI to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for rexprimematrix/RiShreAI to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for rexprimematrix/RiShreAI to start chatting
```
- Docker Model Runner
How to use rexprimematrix/RiShreAI with Docker Model Runner:
```sh
docker model run hf.co/rexprimematrix/RiShreAI
```
- Lemonade
How to use rexprimematrix/RiShreAI with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull rexprimematrix/RiShreAI
```
Run and chat with the model
```sh
lemonade run user.RiShreAI-{{QUANT_TAG}}
```

List all available models

```sh
lemonade list
```
Flask server that loads the model with GPT4All and exposes a simple chat API (runs on port 7860 for Hugging Face Spaces):

```python
from flask import Flask, request, jsonify
from flask_cors import CORS
from gpt4all import GPT4All
from huggingface_hub import hf_hub_download
import os

app = Flask(__name__)

# Set up CORS so connections from any origin are allowed
CORS(app)

# --- CONFIGURATION ---
# Note: this code loads the model straight from the repository below
MODEL_NAME = "Phi-3-mini-4k-instruct-q4.gguf"
REPO_ID = "rexprimematrix/RiShreAI"  # Your model repository

print(f"🔄 RiShre AI is waking up... Loading {MODEL_NAME}")

try:
    # Fetch the GGUF file from the Hugging Face repo, then load it with GPT4All.
    # (GPT4All's own downloader only knows its curated model list, so the
    # download from REPO_ID is done via hf_hub_download instead.)
    model_file = hf_hub_download(repo_id=REPO_ID, filename=MODEL_NAME)
    model = GPT4All(MODEL_NAME, model_path=os.path.dirname(model_file), allow_download=False)
    print("✅ RiShre AI Core is now ONLINE and Ready!")
except Exception as e:
    print(f"❌ Critical Error: {e}")

@app.route('/', methods=['GET'])
def health_check():
    return "RiShre AI Server is Running!"

@app.route('/api/chat', methods=['POST'])
def chat():
    try:
        data = request.json
        user_msg = data.get("message", "")
        if not user_msg:
            return jsonify({"error": "No message provided"}), 400

        # AI response generation
        with model.chat_session():
            response = model.generate(prompt=user_msg, max_tokens=300)

        return jsonify({"text": response})
    except Exception as e:
        return jsonify({"error": str(e)}), 500

if __name__ == "__main__":
    # Hugging Face Spaces strictly uses port 7860
    app.run(host="0.0.0.0", port=7860)
```
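With the server running (locally or on a Space), the chat endpoint can be exercised with a short client; the URL below matches the `app.run()` call above, so swap in your Space's URL when deployed:

```python
import requests

# Point this at your deployment; localhost:7860 matches app.run() above.
resp = requests.post(
    "http://localhost:7860/api/chat",
    json={"message": "Hello RiShre AI!"},
)
print(resp.json().get("text"))
```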