Instructions for using QuantFactory/Turkcell-LLM-7b-v1-GGUF with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use QuantFactory/Turkcell-LLM-7b-v1-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

# Download the GGUF file from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Turkcell-LLM-7b-v1-GGUF",
    filename="Turkcell-LLM-7b-v1.Q2_K.gguf",
)

# Chat completion expects a list of role/content messages
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Türkiye'nin başkenti neresidir?"}
    ]
)
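The call returns an OpenAI-style response dict; a minimal sketch of reading the generated text back out, assuming llama-cpp-python's standard chat-completion response layout:

# Capture the response and print the assistant's reply
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]
)
print(response["choices"][0]["message"]["content"])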
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/Turkcell-LLM-7b-v1-GGUF with llama.cpp:
Install with Homebrew (macOS/Linux)
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M
Use Docker
docker model run hf.co/QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M
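Whichever install route you choose, llama-server exposes an OpenAI-compatible API. A minimal sketch using the openai Python client; the base URL assumes the default port 8080 with no --port override, and any api_key string is accepted locally:

# pip install openai
from openai import OpenAI

# Point the client at the local llama-server endpoint (assumed default port)
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Turkcell-LLM-7b-v1",  # model name is informational for llama-server
    messages=[{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}],
)
print(response.choices[0].message.content)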
- LM Studio
- Jan
- Ollama
How to use QuantFactory/Turkcell-LLM-7b-v1-GGUF with Ollama:
ollama run hf.co/QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M
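Ollama also serves a local HTTP API once the model is pulled; a minimal sketch using the official ollama Python package, referencing the model the same way as the run command above:

# pip install ollama
import ollama

response = ollama.chat(
    model="hf.co/QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}],
)
print(response["message"]["content"])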
- Unsloth Studio
How to use QuantFactory/Turkcell-LLM-7b-v1-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Turkcell-LLM-7b-v1-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Turkcell-LLM-7b-v1-GGUF to start chatting
Use Hugging Face Spaces for Unsloth Studio
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/Turkcell-LLM-7b-v1-GGUF to start chatting
- Docker Model Runner
How to use QuantFactory/Turkcell-LLM-7b-v1-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M
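Docker Model Runner also exposes an OpenAI-compatible endpoint. The host address and path below (localhost:12434, /engines/v1) are assumptions based on recent Docker Desktop defaults with TCP host access enabled, and may differ in your installation:

# pip install openai
from openai import OpenAI

# Assumption: Docker Model Runner's host TCP endpoint is enabled on port 12434
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="hf.co/QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}],
)
print(response.choices[0].message.content)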
- Lemonade
How to use QuantFactory/Turkcell-LLM-7b-v1-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/Turkcell-LLM-7b-v1-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.Turkcell-LLM-7b-v1-GGUF-Q4_K_M
List all available models
lemonade list
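Lemonade also runs an OpenAI-compatible server. The port and path below (localhost:8000, /api/v1) are assumptions based on Lemonade's documented defaults and may need adjusting for your setup:

# pip install openai
from openai import OpenAI

# Assumption: Lemonade Server's default OpenAI-compatible endpoint
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="user.Turkcell-LLM-7b-v1-GGUF-Q4_K_M",  # name as shown by `lemonade list`
    messages=[{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}],
)
print(response.choices[0].message.content)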
QuantFactory/Turkcell-LLM-7b-v1-GGUF
This is a quantized (GGUF) version of TURKCELL/Turkcell-LLM-7b-v1, created using llama.cpp.
Original Model Card
Turkcell-LLM-7b-v1
This model is an extended version of a Mistral-based large language model (LLM) for Turkish. It was trained on a cleaned Turkish raw dataset containing 5 billion tokens. Training was initially performed with the DoRA method, followed by fine-tuning with the LoRA method on Turkish instruction sets created from various open-source and internal resources.
Model Details
- Base Model: Mistral-7B-based LLM
- Tokenizer Extension: Specifically extended for Turkish
- Training Dataset: Cleaned Turkish raw data with 5 billion tokens, plus custom Turkish instruction sets
- Training Method: Initially DoRA, followed by fine-tuning with LoRA
DoRA Configuration
lora_alpha: 128
lora_dropout: 0.05
r: 64
target_modules: "all-linear"
LoRA Fine-Tuning Configuration
lora_alpha: 128
lora_dropout: 0.05
r: 256
target_modules: "all-linear"
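For reference, a sketch of how these hyperparameters map onto the Hugging Face peft library. The card does not state which framework was used, so the tooling is an assumption; in peft, use_dora=True switches a LoRA adapter to DoRA:

# pip install peft
from peft import LoraConfig

# DoRA stage (assumption: expressed via peft; use_dora=True enables DoRA)
dora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules="all-linear",
    use_dora=True,
)

# LoRA fine-tuning stage
lora_config = LoraConfig(
    r=256,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules="all-linear",
)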
Usage Examples
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("TURKCELL/Turkcell-LLM-7b-v1")
tokenizer = AutoTokenizer.from_pretrained("TURKCELL/Turkcell-LLM-7b-v1")

messages = [
    {"role": "user", "content": "Türkiye'nin başkenti neresidir?"},
]

# Apply the model's chat template and move inputs to the target device
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)

# The model ends its turn with <|im_end|>; use its token id to stop generation
eos_token = tokenizer("<|im_end|>", add_special_tokens=False)["input_ids"][0]

generated_ids = model.generate(
    model_inputs,
    max_new_tokens=1024,
    do_sample=True,
    eos_token_id=eos_token,
)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit GGUF files.