Instructions for using Shinzmann/naija-petro-8b with libraries, inference providers, notebooks, and local apps.
- Local Apps
- Unsloth Studio
How to use Shinzmann/naija-petro-8b with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
# Install Unsloth Studio
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Shinzmann/naija-petro-8b to start chatting
```
Install Unsloth Studio (Windows)
```powershell
# Install Unsloth Studio
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Shinzmann/naija-petro-8b to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Shinzmann/naija-petro-8b to start chatting
```
Load model with FastModel
```sh
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="Shinzmann/naija-petro-8b",
    max_seq_length=2048,
)
```
Naija-Petro 8B: Petroleum Engineering AI
Domain-specific LLM fine-tuned for petroleum engineering on Qwen3-8B. Lightweight variant designed for fast inference and free deployment.
Overview
- 20,000+ synthetic instruction-response pairs
- QLoRA fine-tuning with Unsloth (2x faster, 70% less VRAM)
- Covers: drilling, reservoir, production, completions, EOR, well testing
- Deploys on free HuggingFace ZeroGPU Spaces
Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Shinzmann/naija-petro-8b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Shinzmann/naija-petro-8b")
```
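Once loaded, the model is queried through the tokenizer's chat template (Qwen-family tokenizers ship a ChatML-style template). A minimal sketch of the prompt shape, with `build_prompt` as an illustrative helper, not part of any library; the question text is a made-up example:

```python
# Illustrative sketch of the ChatML-style format that Qwen-family chat
# templates produce -- in practice, use tokenizer.apply_chat_template.
def build_prompt(question: str) -> str:
    """Format a single-turn user question in ChatML style."""
    return (
        "<|im_start|>user\n" + question + "<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_prompt("What is the purpose of a blowout preventer?")

# With model and tokenizer loaded as above, generation looks like:
# inputs = tokenizer.apply_chat_template(
#     [{"role": "user", "content": "What is the purpose of a blowout preventer?"}],
#     add_generation_prompt=True, return_tensors="pt",
# ).to(model.device)
# output = model.generate(inputs, max_new_tokens=256)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```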
Ollama
```sh
ollama run hf.co/Shinzmann/naija-petro-8b-GGUF:Q4_K_M
```
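While the model is running, Ollama also serves a local HTTP API (default port 11434). A minimal request sketch using only the standard library; the endpoint and payload fields follow Ollama's `/api/generate` API, and the prompt text is a made-up example:

```python
import json

# Build a request body for Ollama's local /api/generate endpoint,
# using the model name exactly as pulled with `ollama run` above.
def ollama_payload(prompt: str) -> dict:
    return {
        "model": "hf.co/Shinzmann/naija-petro-8b-GGUF:Q4_K_M",
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }

payload = ollama_payload("Explain water coning in oil wells.")
body = json.dumps(payload)

# With the Ollama server running locally:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body.encode(), headers={"Content-Type": "application/json"},
# )
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```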
Training
| Parameter | Value |
|---|---|
| Base | Qwen3-8B |
| Method | QLoRA 4-bit |
| LoRA r / alpha | 32 / 64 |
| LR | 0.0002 |
| Epochs | 3 |
| Samples | ~30K train / ~1.6K eval |
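The r / alpha values above fix the adapter size and scaling: each adapted weight matrix gains two low-rank factors totalling r * (d_in + d_out) trainable parameters, and the LoRA update is scaled by alpha / r. A quick arithmetic sketch; the 4096 dimension is illustrative, not necessarily Qwen3-8B's exact projection shape:

```python
# LoRA adds factors A (r x d_in) and B (d_out x r) per adapted matrix,
# so trainable params per matrix = r * (d_in + d_out).
def lora_params(d_out: int, d_in: int, r: int) -> int:
    return r * (d_in + d_out)

r, alpha = 32, 64      # values from the training table
scaling = alpha / r    # LoRA update is scaled by alpha / r = 2

# Illustrative square projection (not Qwen3-8B's exact shapes):
d = 4096
full = d * d                      # ~16.8M frozen params in the base weight
adapter = lora_params(d, d, r)    # trainable params added by LoRA
print(adapter, f"{adapter / full:.2%}")  # prints: 262144 1.56%
```

This is why QLoRA trains only a small fraction of the network while the 4-bit base weights stay frozen.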
Also Available
- 32B version: Shinzmann/naija-petro (higher quality, needs GPU)
Limitations
- Validate outputs with qualified engineers before operational use
- English only; not for general chat