Instructions for using muhammadmuneeb007/PolygenicRiskScoresGPT with libraries, inference providers, notebooks, and local apps. Each section below shows how to get started with one option.
- Libraries
- llama-cpp-python
How to use muhammadmuneeb007/PolygenicRiskScoresGPT with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="muhammadmuneeb007/PolygenicRiskScoresGPT",
    filename="model-q4_k_m.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is a polygenic risk score?"}
    ]
)
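Generation can also be streamed token by token. A minimal sketch, reusing the llm object created above (stream=True makes create_chat_completion yield OpenAI-style delta chunks):

# Stream the reply token by token instead of waiting for the full response.
for chunk in llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GWAS summary statistics."}],
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)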
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use muhammadmuneeb007/PolygenicRiskScoresGPT with llama.cpp:
Install with Homebrew (macOS, Linux)
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M
Use Docker
docker model run hf.co/muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M
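However you install llama.cpp, llama-server exposes an OpenAI-compatible HTTP API (port 8080 by default). A minimal Python sketch of querying it with the requests library, assuming the server from one of the commands above is running:

import requests

# Chat completion against the local llama-server (default port 8080).
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M",
        "messages": [
            {"role": "user", "content": "How do I compute a polygenic risk score with PLINK?"}
        ],
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])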
- LM Studio
- Jan
- Ollama
How to use muhammadmuneeb007/PolygenicRiskScoresGPT with Ollama:
ollama run hf.co/muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M
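Ollama also serves a local REST API (port 11434 by default). A minimal Python sketch, assuming the model has been pulled with the command above:

import requests

# Non-streaming chat via Ollama's local REST API.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M",
        "messages": [{"role": "user", "content": "What is LDpred used for?"}],
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])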
- Unsloth Studio
How to use muhammadmuneeb007/PolygenicRiskScoresGPT with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for muhammadmuneeb007/PolygenicRiskScoresGPT to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for muhammadmuneeb007/PolygenicRiskScoresGPT to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# and search for muhammadmuneeb007/PolygenicRiskScoresGPT to start chatting
- Pi
How to use muhammadmuneeb007/PolygenicRiskScoresGPT with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M" }
      ]
    }
  }
}

Run Pi
# Start Pi in your project directory:
pi
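Before launching Pi, you can sanity-check that the baseUrl in models.json points at a live server; /v1/models is part of llama-server's OpenAI-compatible API:

import requests

# Confirm the local llama-server that Pi will talk to is up.
models = requests.get("http://localhost:8080/v1/models", timeout=10).json()
print([m["id"] for m in models["data"]])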
- Hermes Agent
How to use muhammadmuneeb007/PolygenicRiskScoresGPT with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use muhammadmuneeb007/PolygenicRiskScoresGPT with Docker Model Runner:
docker model run hf.co/muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M
- Lemonade
How to use muhammadmuneeb007/PolygenicRiskScoresGPT with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull muhammadmuneeb007/PolygenicRiskScoresGPT:Q4_K_M
Run and chat with the model
lemonade run user.PolygenicRiskScoresGPT-Q4_K_M
List all available models
lemonade list
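Lemonade also runs as a local server with an OpenAI-compatible API. A hedged Python sketch; the base URL http://localhost:8000/api/v1 and the model name are assumptions for illustration, so check lemonade list and your server logs for the actual values:

import requests

# Chat against the local Lemonade server; the base URL and model name
# below are assumptions, not confirmed by this page.
resp = requests.post(
    "http://localhost:8000/api/v1/chat/completions",
    json={
        "model": "user.PolygenicRiskScoresGPT-Q4_K_M",
        "messages": [{"role": "user", "content": "Which QC steps precede a PRS analysis?"}],
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])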
Ollama Modelfile for the model:

FROM ./qwen-model-f16.gguf
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
SYSTEM """You are Qwen, created by Alibaba Cloud. You are a helpful AI assistant specialized in polygenic risk score (PRS) analysis and related genomic tools. You provide clear, accurate, and practical information about:
- Calculating and interpreting polygenic risk scores
- Using PRS tools like PRSice-2, PLINK, and LDpred
- Understanding GWAS summary statistics and their application
- Quality control procedures for genetic data
- Population structure and ancestry considerations in PRS
- Cross-ancestry portability of polygenic scores
- Best practices for PRS validation and evaluation
- Interpreting PRS results in clinical and research contexts
- Data formats and file preparation for PRS analysis
- Statistical concepts related to polygenic architecture
Always provide specific, actionable advice with examples when possible. If you're unsure about something, clearly state your limitations rather than guessing."""
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER top_k 40
PARAMETER repeat_penalty 1.05
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
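To build and chat with a local model from this Modelfile, a minimal sketch using the ollama CLI plus the ollama Python package; the model name prs-gpt is hypothetical, and the GGUF path must match the FROM line:

import subprocess
from ollama import chat  # pip install ollama

# Build a local Ollama model from the Modelfile above; "prs-gpt" is an
# example name, and ./qwen-model-f16.gguf must sit next to the Modelfile.
subprocess.run(["ollama", "create", "prs-gpt", "-f", "Modelfile"], check=True)

# The SYSTEM prompt and chat template are baked in by the Modelfile,
# so a plain user message is enough.
response = chat(
    model="prs-gpt",
    messages=[{"role": "user", "content": "How should I QC genotype data before a PRS analysis?"}],
)
print(response["message"]["content"])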