Naija-Petro 8B -- Petroleum Engineering AI

Domain-specific LLM fine-tuned from Qwen3-8B for petroleum engineering. A lightweight variant designed for fast inference and free deployment.

Overview

  • 20,000+ synthetic instruction-response pairs
  • QLoRA fine-tuning with Unsloth (2x faster, 70% less VRAM)
  • Covers: drilling, reservoir, production, completions, EOR, well testing
  • Deploys on free HuggingFace ZeroGPU Spaces

Quick Start

from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Shinzmann/naija-petro-8b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Shinzmann/naija-petro-8b")
# Ask a domain question via the chat template
messages = [{"role": "user", "content": "What causes lost circulation while drilling?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=512)[0], skip_special_tokens=True))

Ollama

ollama run hf.co/Shinzmann/naija-petro-8b-GGUF:Q4_K_M

Training

Param           Value
Base model      Qwen3-8B
Method          QLoRA (4-bit)
LoRA r / alpha  32 / 64
Learning rate   2e-4
Epochs          3
Samples         ~30K train / ~1.6K eval
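The hyperparameters above can be expressed as a PEFT configuration sketch. This is not the actual training script (which used Unsloth); the quantization settings and target modules are assumptions based on common QLoRA setups for Qwen-family models, while r, alpha, and the 4-bit method come from the table:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization -- the usual QLoRA base setup (assumed, not stated on the card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter matching the table: r=32, alpha=64
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    # Typical attention + MLP projection targets for Qwen models (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```

With these configs, training would then run for 3 epochs at a learning rate of 2e-4 under a supervised fine-tuning trainer such as TRL's SFTTrainer.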

Limitations

  • Validate outputs with qualified engineers before operational use
  • English only; not for general chat