Instructions to use nwokikeonyeka/igbo-phi3-translator with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use nwokikeonyeka/igbo-phi3-translator with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nwokikeonyeka/igbo-phi3-translator",
    filename="phi-3-mini-4k-instruct.Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Translate this English sentence to Igbo: 'My name is Wolfgang and I live in Berlin'",
        }
    ]
)
```
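`create_chat_completion` returns an OpenAI-style chat-completion dict. A minimal sketch of pulling the reply text out of it — the sample dict below is mocked for illustration, not real model output:

```python
def extract_reply(completion: dict) -> str:
    """Read the assistant's message out of an OpenAI-style chat completion."""
    return completion["choices"][0]["message"]["content"].strip()

# Mocked completion in the shape llama-cpp-python returns (illustrative only)
mock = {"choices": [{"message": {"role": "assistant", "content": " Aha m bụ Wolfgang "}}]}
print(extract_reply(mock))  # → Aha m bụ Wolfgang
```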
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use nwokikeonyeka/igbo-phi3-translator with llama.cpp:
Install from brew

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf nwokikeonyeka/igbo-phi3-translator:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf nwokikeonyeka/igbo-phi3-translator:Q4_K_M
```

Install from WinGet (Windows)

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf nwokikeonyeka/igbo-phi3-translator:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf nwokikeonyeka/igbo-phi3-translator:Q4_K_M
```

Use pre-built binary

```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf nwokikeonyeka/igbo-phi3-translator:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf nwokikeonyeka/igbo-phi3-translator:Q4_K_M
```

Build from source code

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf nwokikeonyeka/igbo-phi3-translator:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf nwokikeonyeka/igbo-phi3-translator:Q4_K_M
```

Use Docker

```sh
docker model run hf.co/nwokikeonyeka/igbo-phi3-translator:Q4_K_M
```
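Once `llama-server` is running (via any of the install paths above), it exposes an OpenAI-compatible `/v1/chat/completions` endpoint, by default on port 8080. A minimal sketch of building a request for it with Python's standard library — the base URL and payload shape here are assumptions about your local setup, and the request is only constructed, not sent:

```python
import json
import urllib.request

def build_chat_request(sentence: str, base_url: str = "http://localhost:8080") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for a running llama-server."""
    payload = {
        "messages": [
            {"role": "user",
             "content": f"Translate this English sentence to Igbo: '{sentence}'"}
        ],
        "max_tokens": 100,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Hello, how are you?")
print(req.full_url)  # → http://localhost:8080/v1/chat/completions
```

With the server running, `urllib.request.urlopen(req)` would return the completion as JSON.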
- LM Studio
- Jan
- Ollama
How to use nwokikeonyeka/igbo-phi3-translator with Ollama:
```sh
ollama run hf.co/nwokikeonyeka/igbo-phi3-translator:Q4_K_M
```
- Unsloth Studio
How to use nwokikeonyeka/igbo-phi3-translator with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)

```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nwokikeonyeka/igbo-phi3-translator to start chatting
```

Install Unsloth Studio (Windows)

```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nwokikeonyeka/igbo-phi3-translator to start chatting
```

Using Hugging Face Spaces for Unsloth

```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for nwokikeonyeka/igbo-phi3-translator to start chatting
```
- Docker Model Runner
How to use nwokikeonyeka/igbo-phi3-translator with Docker Model Runner:
```sh
docker model run hf.co/nwokikeonyeka/igbo-phi3-translator:Q4_K_M
```
- Lemonade
How to use nwokikeonyeka/igbo-phi3-translator with Lemonade:
Pull the model

```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull nwokikeonyeka/igbo-phi3-translator:Q4_K_M
```

Run and chat with the model

```sh
lemonade run user.igbo-phi3-translator-Q4_K_M
```

List all available models

```sh
lemonade list
```
Igbo Basic Translator (GGUF)
This is a specialist AI model, fine-tuned from Microsoft's Phi-3-mini-4k-instruct on over 522,000 English-to-Igbo translation pairs.

Because of this deep, specialized training, the model is not a general-purpose chatbot. It is a dedicated, one-way translation tool.

It excels at one task: responding to the prompt `Translate this English sentence to Igbo: '...'`.
Final Model (GGUF): nwokikeonyeka/igbo-phi3-translator
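The model expects its instruction inside the Mistral-style `[INST]` wrapper used during training (the same format the Colab demo below builds by hand). A small helper for constructing that prompt:

```python
def build_prompt(english_sentence: str) -> str:
    """Wrap an English sentence in the exact prompt format the model was trained on."""
    instruction = f"Translate this English sentence to Igbo: '{english_sentence}'"
    return f"<s>[INST] {instruction} [/INST]"

print(build_prompt("Good morning"))
# → <s>[INST] Translate this English sentence to Igbo: 'Good morning' [/INST]
```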
🚀 How to Test (Live Demo in Google Colab)
You can test this translator directly in your browser.
Note: This demo will install the model and run it on your Colab CPU. It may be slow (10–30 seconds per translation), but it is the simplest and most reliable way to run it.
One-Cell Colab Demo:
Copy and paste this entire block into a single Google Colab cell.
```python
# --- 1. Install the SIMPLE, CPU-only version ---
print("--- Installing llama-cpp-python (CPU)... ---")
# This is the simple install. No build tools needed.
!pip install llama-cpp-python
print("\n--- ✅ All libraries installed! ---")

# --- 2. Import Libraries ---
from llama_cpp import Llama
from huggingface_hub import hf_hub_download
import os

# --- 3. Define and Download Your Model ---
MODEL_NAME = "phi-3-mini-4k-instruct.Q4_K_M.gguf"
REPO_ID = "nwokikeonyeka/igbo-phi3-translator"

print(f"\n--- ⬇️ Downloading {MODEL_NAME} from {REPO_ID} ---")
model_path = hf_hub_download(
    repo_id=REPO_ID,
    filename=MODEL_NAME
)
print(f"--- ✅ Model downloaded to: {model_path} ---")

# --- 4. Load the Model onto the CPU ---
print("--- 🧠 Loading model onto CPU... (This may take a moment) ---")
llm = Llama(
    model_path=model_path,
    n_gpu_layers=0,  # 0 = use CPU only
    n_ctx=1024,      # Context size
    verbose=False    # Silence llama.cpp logs
)
print("--- ✅ Model loaded! Ready to test. ---")

# --- 5. Start the Interactive Test Loop ---
print("\n--- 🤖 Igbo Translator Test ---")
print("This model is a specialist. It only responds to the 'Translate' prompt.")
print("Type 'quit' or 'exit' to stop.")
print("-" * 40)

while True:
    try:
        user_prompt = str(input("English: "))
    except EOFError:
        break

    if user_prompt.lower() in ["quit", "exit"]:
        print("--- 👋 Test ended. ---")
        break

    if not user_prompt.strip():
        continue

    # 1. Format the prompt EXACTLY as it was trained
    full_prompt = f"Translate this English sentence to Igbo: '{user_prompt}'"
    formatted_input = f"<s>[INST] {full_prompt} [/INST]"

    print("Igbo AI: ...thinking...")

    # 2. Run inference
    response = llm(
        formatted_input,
        max_tokens=100,           # Max length of the translation
        stop=["</s>", "[INST]"],  # Stop when it finishes its response
        echo=False                # Don't repeat the prompt
    )

    # 3. Print the clean result
    try:
        ai_response = response["choices"][0]["text"].strip()
        print(f"Igbo AI: {ai_response}")
    except (IndexError, KeyError):
        print("Igbo AI: (No response generated. Make sure you typed in English.)")
```
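For scripted (non-interactive) use, step 3's result handling can be pulled into a standalone helper. A small sketch, exercised here against a mocked response dict rather than real model output:

```python
def parse_response(response: dict) -> str:
    """Extract the generated text from a llama-cpp-python completion dict."""
    try:
        return response["choices"][0]["text"].strip()
    except (IndexError, KeyError):
        return "(No response generated.)"

# With a loaded model this would be:
#   parse_response(llm(formatted_input, max_tokens=100, stop=["</s>", "[INST]"], echo=False))
# Mocked response in the same shape (illustrative only):
mock = {"choices": [{"text": " Ụtụtụ ọma "}]}
print(parse_response(mock))  # → Ụtụtụ ọma
```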
Local Usage (Ollama / llama.cpp)
You can also run this model locally.
For Ollama:
```sh
# Pull the model
ollama pull hf.co/nwokikeonyeka/igbo-phi3-translator:Q4_K_M

# Run it
ollama run hf.co/nwokikeonyeka/igbo-phi3-translator:Q4_K_M "Translate this English sentence to Igbo: 'Hello, how are you?'"
```
For llama.cpp:

```sh
./llama-cli -m ./phi-3-mini-4k-instruct.Q4_K_M.gguf -n 128 -p "<s>[INST] Translate this English sentence to Igbo: 'Hello, how are you?' [/INST]"
```