Instructions to use QuantFactory/Replete-LLM-Qwen2-7b-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use QuantFactory/Replete-LLM-Qwen2-7b-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Replete-LLM-Qwen2-7b-GGUF",
    filename="Replete-LLM-Qwen2-7b.Q2_K.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello! Introduce yourself in one sentence."}
    ]
)
```
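In non-streaming mode, `create_chat_completion` returns an OpenAI-style response dict. A small helper can pull out the reply text; this is a sketch assuming the standard `choices[0].message.content` layout (the sample dict below is illustrative):

```python
def extract_reply(response: dict) -> str:
    """Return the assistant's text from an OpenAI-style chat completion dict."""
    return response["choices"][0]["message"]["content"]

# Illustrative response with the shape llama-cpp-python returns:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello!"}}
    ]
}
print(extract_reply(sample))  # -> Hello!
```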
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/Replete-LLM-Qwen2-7b-GGUF with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M
```
Use Docker
```shell
docker model run hf.co/QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M
```
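Once `llama-server` is running, it exposes an OpenAI-compatible chat endpoint (by default on port 8080; adjust the base URL if you changed the host or port). A minimal client sketch, using only the standard library:

```python
import json
import urllib.request

def build_chat_request(prompt: str, base_url: str = "http://localhost:8080"):
    """Build the URL and JSON payload for an OpenAI-compatible chat endpoint."""
    url = f"{base_url}/v1/chat/completions"
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return url, payload

def send_chat(url: str, payload: dict) -> dict:
    """POST the payload and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# With llama-server running locally, you could do:
# url, payload = build_chat_request("Explain GGUF quantization in one sentence.")
# reply = send_chat(url, payload)["choices"][0]["message"]["content"]
```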
- LM Studio
- Jan
- Ollama
How to use QuantFactory/Replete-LLM-Qwen2-7b-GGUF with Ollama:
```shell
ollama run hf.co/QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M
```
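Besides the interactive CLI, the Ollama daemon serves a REST API (by default on port 11434). A sketch of a request to its native `/api/chat` endpoint; the model tag mirrors the pull command above:

```python
import json
import urllib.request

def build_ollama_chat(prompt: str,
                      model: str = "hf.co/QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M",
                      base_url: str = "http://localhost:11434"):
    """Build the URL and payload for Ollama's /api/chat endpoint."""
    url = f"{base_url}/api/chat"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one JSON object instead of a token stream
    }
    return url, payload

def send(url: str, payload: dict) -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# With the Ollama daemon running:
# print(send(*build_ollama_chat("Hello!"))["message"]["content"])
```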
- Unsloth Studio
How to use QuantFactory/Replete-LLM-Qwen2-7b-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Replete-LLM-Qwen2-7b-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Replete-LLM-Qwen2-7b-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/Replete-LLM-Qwen2-7b-GGUF to start chatting
```
- Docker Model Runner
How to use QuantFactory/Replete-LLM-Qwen2-7b-GGUF with Docker Model Runner:
```shell
docker model run hf.co/QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M
```
- Lemonade
How to use QuantFactory/Replete-LLM-Qwen2-7b-GGUF with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/Replete-LLM-Qwen2-7b-GGUF:Q4_K_M
```
Run and chat with the model

```shell
lemonade run user.Replete-LLM-Qwen2-7b-GGUF-Q4_K_M
```

List all available models

```shell
lemonade list
```
QuantFactory/Replete-LLM-Qwen2-7b-GGUF
This is a quantized version of Replete-AI/Replete-LLM-Qwen2-7b, created using llama.cpp.
Original Model Card
Replete-LLM-Qwen2-7b
Thank you to TensorDock for sponsoring Replete-LLM; you can check out their website below for cloud compute rentals.
Replete-LLM is Replete-AI's flagship model. We take pride in releasing a fully open-source, low-parameter, and competitive model that not only surpasses its predecessor, Qwen2-7B-Instruct, but also competes with (if not surpasses) closed-source flagship models such as gpt-3.5-turbo and open-source models such as gemma-2-9b-it and Meta-Llama-3.1-8B-Instruct in overall performance across all fields and categories. You can find the dataset that this model was trained on linked below:
Try bartowski's quantizations:
Can't run the model locally? Then use the Hugging Face space instead:
Some statistics about the data the model was trained on can be found in the image and details below, while a more comprehensive look can be found in the model card for the dataset (linked above):
Replete-LLM-Qwen2-7b is a versatile model fine-tuned to excel on any imaginable task. The following types of generations were included in the fine-tuning process:
- Science: (General, Physical Reasoning)
- Social Media: (Reddit, Twitter)
- General Knowledge: (Character-Codex), (Famous Quotes), (Steam Video Games), (How-To? Explanations)
- Cooking: (Cooking Preferences, Recipes)
- Writing: (Poetry, Essays, General Writing)
- Medicine: (General Medical Data)
- History: (General Historical Data)
- Law: (Legal Q&A)
- Role-Play: (Couple-RP, Roleplay Conversations)
- News: (News Generation)
- Coding: (3 million rows of coding data in over 100 coding languages)
- Math: (Math data from TIGER-Lab/MathInstruct)
- Function Calling: (Function calling data from "glaiveai/glaive-function-calling-v2")
- General Instruction: (All of teknium/OpenHermes-2.5 fully filtered and uncensored)
Prompt Template: ChatML

```
<|im_start|>system
{}<|im_end|>
<|im_start|>user
{}<|im_end|>
<|im_start|>assistant
{}
```

End token (eot_token)

```
<|endoftext|>
```
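If you are formatting prompts by hand rather than through a chat template, the ChatML layout above can be filled in programmatically. A minimal sketch (the function name is illustrative); it leaves the prompt open after the assistant header so the model generates the reply:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Format one system + user turn with the ChatML template,
    ending at the open assistant header for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```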
Want to know the secret sauce of how this model was made? Find the write-up below:
Continuous Fine-tuning Without Loss Using Lora and Mergekit
https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing
The code used to fine-tune this model can be found below:
https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing
Note: this model in particular was fine-tuned on an H100 rented from TensorDock.com, using their PyTorch OS image. To use Unsloth code with TensorDock, you first need to run the code below to reinstall the drivers. After running it, your virtual machine will reboot and you will have to SSH back into it; you can then run the normal Unsloth code in order.
```
# Check current size
!df -h /dev/shm

# Increase size temporarily
!sudo mount -o remount,size=16G /dev/shm

# Increase size permanently
!echo "tmpfs /dev/shm tmpfs defaults,size=16G 0 0" | sudo tee -a /etc/fstab

# Remount /dev/shm
!sudo mount -o remount /dev/shm

# Verify the changes
!df -h /dev/shm

# Inspect the current CUDA setup
!nvcc --version
!export TORCH_DISTRIBUTED_DEBUG=DETAIL
!export NCCL_DEBUG=INFO
!python -c "import torch; print(torch.version.cuda)"
!export PATH=/usr/local/cuda/bin:$PATH
!export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
!export NCCL_P2P_LEVEL=NVL
!export NCCL_DEBUG=INFO
!export NCCL_DEBUG_SUBSYS=ALL
!export TORCH_DISTRIBUTED_DEBUG=INFO
!export TORCHELASTIC_ERROR_FILE=/PATH/TO/torcherror.log

# Purge the existing NVIDIA/CUDA packages and reinstall
!sudo apt-get remove --purge -y '^nvidia-.*'
!sudo apt-get remove --purge -y '^cuda-.*'
!sudo apt-get autoremove -y
!sudo apt-get autoclean -y
!sudo apt-get update -y
!sudo apt-get install -y nvidia-driver-535 cuda-12-1
!sudo add-apt-repository ppa:graphics-drivers/ppa -y
!sudo apt-get update -y
!sudo apt-get update -y
!sudo apt-get install -y software-properties-common
!sudo add-apt-repository ppa:graphics-drivers/ppa -y
!sudo apt-get update -y
!latest_driver=$(apt-cache search '^nvidia-driver-[0-9]' | grep -oP 'nvidia-driver-\K[0-9]+' | sort -n | tail -1) && sudo apt-get install -y nvidia-driver-$latest_driver
!sudo reboot
```
Join the Replete-AI Discord! We are a great and loving community!
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
Model tree for QuantFactory/Replete-LLM-Qwen2-7b-GGUF
- Base model: Qwen/Qwen2-7B
