Naija-Petro -- Petroleum Engineering AI

A domain-specific LLM for petroleum engineering, fine-tuned from Qwen3-32B.

Overview

  • 20,000 synthetic instruction-response pairs (NVIDIA Data Designer)
  • QLoRA fine-tuning with Unsloth (Unsloth reports roughly 2x faster training with ~70% less VRAM)
  • Covers: drilling, reservoir, production, completions, EOR, well testing

Quick Start

from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Shinzmann/naija-petro", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Shinzmann/naija-petro")
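The loaded model can be queried like any chat model. A minimal sketch extending the snippet above; the helper name, prompt, and sampling settings are illustrative, not part of the model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def ask(model, tokenizer, question, max_new_tokens=512):
    """Format a single-turn chat prompt and return the generated answer."""
    messages = [{"role": "user", "content": question}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(
        inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, not the prompt
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    model = AutoModelForCausalLM.from_pretrained("Shinzmann/naija-petro", device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained("Shinzmann/naija-petro")
    print(ask(model, tokenizer, "What is skin factor in well testing?"))
```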

Ollama

ollama run hf.co/Shinzmann/naija-petro-GGUF:Q4_K_M
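Once the GGUF model is running under Ollama, it can also be queried programmatically through Ollama's local REST API. A sketch using the standard /api/generate endpoint (the helper name and question are illustrative; assumes the server is running on the default port):

```python
import json
import urllib.request

def ask_ollama(prompt, model="hf.co/Shinzmann/naija-petro-GGUF:Q4_K_M",
               url="http://localhost:11434/api/generate"):
    """Send a single non-streaming generation request to a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_ollama("What is skin factor in well testing?"))
```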

Training

Param            Value
Base model       Qwen3-32B
Method           QLoRA (4-bit)
LoRA r / alpha   64 / 128
Learning rate    2e-4
Epochs           2
Samples          ~19K train / ~1K eval

Limitations

  • Outputs must be validated by qualified petroleum engineers before any operational use
  • English only; not intended as a general-purpose chat model