# GeoLLM-Qwen3.5-27B

Fine-tuned Qwen3.5-27B for mineral exploration geology, targeting the Western Australian geological domain.

This model is part of the GeoLLM-Qwen3.5-FineTune benchmark. All five model sizes (0.8B--27B) were trained with identical hyperparameters on the same dataset for fair comparison.

## Performance

| Metric | Base | Fine-tuned |
|---|---|---|
| Overall weighted score | 0.343 | 0.361 |
| QA ROUGE-L | 0.1448 | 0.1939 |
| QA BERTScore | 0.8191 | 0.8526 |
| CoT ROUGE-L | 0.1373 | 0.2335 |
| Hallucination pass rate | 60.0% | 33.3% |
| Training loss | -- | 1.005 |

See the full benchmark comparison for all models.
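For reference, here is a minimal sketch of how the QA metrics above can be computed with the Hugging Face `evaluate` library. The `rouge` and `bertscore` loaders are standard `evaluate` metrics, but the benchmark's actual evaluation harness is not published in this card, so treat this as an illustration only:

```python
# Sketch: score model outputs against reference answers using ROUGE-L and
# BERTScore, the metric families reported above. The benchmark's actual
# harness may differ; the example strings are placeholders.
import evaluate

predictions = ["EM surveys are well suited to conductive massive nickel sulphides."]
references = ["Electromagnetic surveys are the primary targeting tool for nickel sulphides."]

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

# "rougeL" is the longest-common-subsequence F-measure, aggregated over examples.
rouge_scores = rouge.compute(predictions=predictions, references=references)
# BERTScore returns per-example precision/recall/F1 lists.
bert_scores = bertscore.compute(predictions=predictions, references=references, lang="en")

print(f"ROUGE-L: {rouge_scores['rougeL']:.4f}")
print(f"BERTScore F1: {sum(bert_scores['f1']) / len(bert_scores['f1']):.4f}")
```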

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model in bf16, sharding across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "AshkanTaghipour/GeoLLM-Qwen3.5-27B",
    torch_dtype="bfloat16",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("AshkanTaghipour/GeoLLM-Qwen3.5-27B")

messages = [
    {"role": "system", "content": "You are a specialist geologist with expertise in Western Australian mineral exploration."},
    {"role": "user", "content": "What geophysical methods would you recommend for targeting komatiite-hosted nickel sulphide deposits in the Eastern Goldfields?"},
]

# Render the chat template with thinking mode disabled.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=False)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.6, top_p=0.95)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
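Because this repository ships LoRA adapter weights on top of Qwen/Qwen3.5-27B, it can also be loaded via PEFT. This is a minimal sketch assuming the standard `peft` API and PEFT-format adapter files in the repo, not a loading path documented by the author:

```python
# Alternative loading path (sketch): attach the LoRA adapter to the base
# model with peft. Assumes the repo contains PEFT-format adapter weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.5-27B", torch_dtype="bfloat16", device_map="auto"
)
model = PeftModel.from_pretrained(base, "AshkanTaghipour/GeoLLM-Qwen3.5-27B")
tokenizer = AutoTokenizer.from_pretrained("AshkanTaghipour/GeoLLM-Qwen3.5-27B")

# Optionally merge the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```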

## Training Details

- **Base model:** Qwen/Qwen3.5-27B
- **Method:** bf16 LoRA (r=16, alpha=16) via Unsloth + SFTTrainer (see the configuration sketch after this list)
- **Epochs:** 5
- **Optimizer:** adamw_8bit with a cosine LR schedule (peak 2e-4, 10% warmup)
- **Dataset:** mineral-exploration-geology-qa (479 train / 26 test examples)
- **Hardware:** NVIDIA A100-80GB
- **VRAM (training):** 56 GB
- **VRAM (inference):** ~55 GB
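The listed hyperparameters map onto an Unsloth + TRL run roughly as follows. This is a hedged reconstruction, not the author's script: the `target_modules` list, sequence length, batch size, and dataset hub ID are assumptions not stated in the card.

```python
# Reconstruction sketch of the stated setup: LoRA r=16, alpha=16, 5 epochs,
# adamw_8bit, cosine schedule, 2e-4 peak LR, 10% warmup, bf16.
from datasets import load_dataset
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer

# Assumed hub ID for the dataset named in the card.
train_dataset = load_dataset("AshkanTaghipour/mineral-exploration-geology-qa", split="train")

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3.5-27B",
    max_seq_length=2048,          # assumption; not stated in the card
    load_in_4bit=False,           # card reports a bf16 LoRA run
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    args=SFTConfig(
        num_train_epochs=5,
        learning_rate=2e-4,
        warmup_ratio=0.1,
        lr_scheduler_type="cosine",
        optim="adamw_8bit",
        bf16=True,
        per_device_train_batch_size=2,  # assumption
        output_dir="geollm-qwen3.5-27b-lora",
    ),
)
trainer.train()
```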

## Author

Ashkan Taghipour -- GitHub | HuggingFace
