# EntropyHunter v0.4 — Exergy Analysis Specialist (8B, GGUF)

A fine-tuned Qwen3-8B model specialized in second-law thermodynamic (exergy) analysis of industrial equipment. Trained on 1,235 expert-generated examples covering 6 analysis families across 7 equipment types.

## Benchmark Results — v0.4 (March 2026)

**92.7% adjusted accuracy (Grade A-)** on a 40-test × 3-run benchmark suite (120 total inferences, temperature 0.7).

| Category | Score | vs Base Qwen3-8B | vs v0.2 (Qwen2.5-7B) |
|---|---:|---:|---:|
| Avoidable/Unavoidable | 100.0% | +20.0pp | +10.0pp |
| Exergoeconomic (SPECO) | 97.5% | +15.0pp | +19.4pp |
| Hotspot Detection | 97.9% | +12.0pp | +4.1pp |
| What-if Comparison | 97.3% | +15.0pp | +0.0pp |
| Basic Exergy | 89.7% | +7.0pp | +2.9pp |
| Entropy Generation (EGM) | 83.3% | +1.0pp | +8.7pp |

Note: The raw score is 76.7% because the benchmark includes a `json_block` check (structured JSON output), a known limitation at the 8B parameter scale: 8B-class models do not reliably produce valid structured JSON for complex thermodynamic analyses. The adjusted score excludes this single check.
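
For concreteness, the relationship between the raw and adjusted scores can be sketched as follows (check names are hypothetical and results are made up; this is not the actual benchmark harness):

```python
# Illustrative sketch: an "adjusted" accuracy derived by excluding one
# known-failing check from per-test results. Check names are hypothetical.

def score(results, exclude=()):
    """results: list of dicts mapping check name -> bool (pass/fail)."""
    passed = total = 0
    for checks in results:
        for name, ok in checks.items():
            if name in exclude:
                continue
            total += 1
            passed += ok
    return passed / total

runs = [
    {"exergy_balance": True, "efficiency": True, "json_block": False},
    {"exergy_balance": True, "efficiency": False, "json_block": False},
]

raw = score(runs)                                # json_block failures count
adjusted = score(runs, exclude={"json_block"})   # json_block excluded
```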

## Version History

| Version | Base Model | Examples | Score | Grade |
|---|---|---:|---:|---|
| v0.1 | Qwen2.5-7B | 722 | 63.5% | D |
| v0.2 | Qwen2.5-7B | 885 | 85.5% | B+ |
| v0.3 (JSON-free) | Qwen2.5-7B | 885 | 78.3% | C+ |
| Base Qwen3-8B | Qwen3-8B | 0 | 82.6% | B |
| v0.4 | Qwen3-8B | 1,235 | 92.7% | A- |

## Model Details

- Base model: Qwen/Qwen3-8B
- Method: LoRA fine-tuning (r=16, α=32) via Unsloth
- Training data: 1,235 examples generated by Claude Opus 4.6 (Batch API)
- Training hardware: RunPod A40 48GB, 5 hours, $5.66 total cost
- Quantization: Q4_K_M via llama.cpp (4.7 GB)
- Context window: 8192 tokens (trained), 16384 recommended for inference
- Thinking mode: Disabled (enable_thinking=False during training, /no_think in Modelfile)
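
Training examples are stored in ChatML format. A minimal sketch of that layout (in practice the tokenizer's chat template, with `enable_thinking=False`, produces it; the strings below are illustrative):

```python
# Minimal sketch of the ChatML layout used by Qwen-family models for
# training examples. Strings are illustrative, not real training data.

def to_chatml(system, user, assistant):
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{assistant}<|im_end|>\n"
    )

example = to_chatml(
    "You are EntropyHunter, an exergy-analysis assistant.",
    "Compute the exergy destruction of a centrifugal pump with ...",
    "Step 1: Define the dead state (T0 = 298.15 K, P0 = 101.325 kPa) ...",
)
```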

## What It Does

EntropyHunter performs detailed exergy analysis with step-by-step calculations for industrial equipment:

6 Analysis Families:

  1. Basic Exergy Analysis — Exergy destruction, efficiency, waste stream identification
  2. Exergoeconomic Analysis (SPECO) — CRF, cost rates, exergoeconomic factor
  3. Entropy Generation Minimization — S_gen decomposition, Bejan number, thermodynamic grade
  4. What-if Comparison — Baseline vs scenario with delta analysis and annual savings
  5. Avoidable/Unavoidable Decomposition — Tsatsaronis method, improvement potential
  6. Hotspot Detection — Multi-equipment ranking by exergy destruction
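
The exergoeconomic (SPECO) family above relies on standard engineering-economics quantities such as the capital recovery factor (CRF) and the exergoeconomic factor f. A short sketch using the textbook formulas (all numbers are illustrative):

```python
# Standard SPECO quantities (textbook formulas; numbers are illustrative).

def crf(i, n):
    """Capital recovery factor for interest rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def exergoeconomic_factor(Z_dot, c_F, Ex_dot_D):
    """f = Z / (Z + C_D): share of capital cost in total cost of irreversibility.
    Z_dot: capital cost rate [$/h], c_F: fuel cost [$/kWh],
    Ex_dot_D: exergy destruction rate [kW]."""
    C_dot_D = c_F * Ex_dot_D      # cost rate of exergy destruction [$/h]
    return Z_dot / (Z_dot + C_dot_D)

crf_20yr = crf(0.10, 20)                        # ≈ 0.1175
f = exergoeconomic_factor(2.0, 0.08, 50.0)      # ≈ 0.33
```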

7 Equipment Types: Compressors, boilers, heat exchangers, pumps, steam turbines, chillers, dryers

Key capabilities:

- Always references dead state (T₀ = 298.15 K, P₀ = 101.325 kPa)
- Step-by-step calculation chains with physical validation
- Catches thermodynamic inconsistencies (e.g., negative exergy destruction)
- Provides actionable engineering recommendations
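
The dead-state convention can be made concrete with the standard specific flow-exergy relation for an ideal gas, ex = (h − h₀) − T₀(s − s₀). A sketch with constant-cp air properties (the state values are illustrative, not model output):

```python
import math

# Specific flow exergy of an ideal gas relative to the dead state,
# ex = (h - h0) - T0*(s - s0), with constant-cp air properties.

T0, P0 = 298.15, 101.325        # dead state [K], [kPa]
cp, R = 1.005, 0.287            # air [kJ/(kg*K)]

def flow_exergy(T, P):
    dh = cp * (T - T0)                                  # enthalpy change [kJ/kg]
    ds = cp * math.log(T / T0) - R * math.log(P / P0)   # entropy change [kJ/(kg*K)]
    return dh - T0 * ds                                 # [kJ/kg]

ex = flow_exergy(400.0, 300.0)   # ≈ 107 kJ/kg for this illustrative state
```

By construction the flow exergy is zero at the dead state itself, which is a quick sanity check on any such routine.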

## Quick Start (Ollama)

### 1. Download the GGUF

```bash
# Option A: Direct download
wget https://huggingface.co/olivenet/entropy-hunter-8b-gguf/resolve/main/entropy-hunter-v04-Q4_K_M.gguf

# Option B: huggingface-cli
huggingface-cli download olivenet/entropy-hunter-8b-gguf entropy-hunter-v04-Q4_K_M.gguf
```

### 2. Create Modelfile

```
FROM ./entropy-hunter-v04-Q4_K_M.gguf

PARAMETER temperature 0.7
PARAMETER num_ctx 16384
PARAMETER num_predict 8192
PARAMETER stop <|im_end|>
PARAMETER stop <|endoftext|>

TEMPLATE """{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
/no_think
"""

SYSTEM """You are EntropyHunter, an expert assistant specialized in second-law thermodynamic (exergy) analysis of industrial equipment. You perform detailed exergy analysis with step-by-step calculations, always referencing dead state conditions (T₀ = 298.15 K, P₀ = 101.325 kPa)."""
```

### 3. Run

```bash
ollama create entropy-hunter -f Modelfile
ollama run entropy-hunter "Perform a basic exergy analysis for a centrifugal compressor. Inlet: air at 25°C, 101.325 kPa. Outlet: 300 kPa, 180°C. Power input: 150 kW, mass flow: 1.5 kg/s."
```
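
For programmatic use, the same model can be queried through Ollama's local REST API (by default at `http://localhost:11434/api/generate`). A sketch of the request payload; field names follow Ollama's generate endpoint, and the prompt text is illustrative:

```python
import json

# Build a request for Ollama's local REST generate endpoint
# (POST http://localhost:11434/api/generate). Prompt is illustrative.

def build_request(prompt, model="entropy-hunter"):
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,          # one JSON response instead of a stream
        "options": {"temperature": 0.7, "num_ctx": 16384},
    }

payload = build_request(
    "Perform a basic exergy analysis for a centrifugal pump ..."
)
body = json.dumps(payload)
# e.g. requests.post("http://localhost:11434/api/generate", data=body)
```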

## Training Pipeline

```
┌─────────────┐
│  Taxonomy   │  7 equipment types, 48 subtypes
│  (YAML)     │  6 analysis families
└──────┬──────┘
       │
┌──────▼──────┐
│   Opus 4.6  │  1,500 examples via Batch API
│  (Teacher)  │  ~$210 generation cost
└──────┬──────┘
       │
┌──────▼──────┐
│  Quality    │  8 thermodynamic checks
│  Control    │  + recover_v2.py (79.7% recovery)
└──────┬──────┘
       │
┌──────▼──────┐
│   1,235     │  ChatML format
│  Examples   │  ~6.8M tokens
└──────┬──────┘
       │
┌──────▼──────┐
│  LoRA Fine  │  Qwen3-8B, r=16, α=32
│  Tuning     │  A40 48GB, 5 hrs, $5.66
└──────┬──────┘
       │
┌──────▼──────┐
│  GGUF       │  Q4_K_M quantization
│  Export     │  4.7 GB final size
└──────┬──────┘
       │
┌──────▼──────┐
│  Benchmark  │  40 tests × 3 runs
│  92.7%      │  Grade A-
└─────────────┘
```

## Quality Control

Training data passes 8 thermodynamic validation checks:

  1. Exergy balance: |Ex_in − Ex_out − Ex_waste − Ex_d| ≤ 2% of Ex_in
  2. Efficiency range: 0.1% < η_ex < 99.9%
  3. Second-law compliance: Ex_destroyed ≥ 0
  4. Gouy-Stodola consistency: |T₀ × S_gen − Ex_d| ≤ 2% of Ex_d
  5. Bejan number validity: 0 ≤ N_s ≤ 1
  6. f-factor validity: 0 ≤ f ≤ 1
  7. Dead-state reference: T₀ = 298.15 K appears in the analysis
  8. AV/UN split: avoidable + unavoidable = total Ex_destroyed
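
As an illustration of how such a gate can work, here is a minimal sketch implementing a few of the checks above with the listed tolerances (this is not the project's actual validation code):

```python
# Sketch of a validation gate in the style described above, covering a
# subset of the eight checks. Illustrative only.

T0 = 298.15  # dead state [K]

def failed_checks(ex_in, ex_out, ex_waste, ex_d, s_gen, av, un):
    """Return the names of checks that fail (empty list = all pass)."""
    fails = []
    if abs(ex_in - ex_out - ex_waste - ex_d) > 0.02 * ex_in:
        fails.append("exergy_balance")
    eta = ex_out / ex_in                        # simple product/fuel ratio
    if not (0.001 < eta < 0.999):
        fails.append("efficiency_range")
    if ex_d < 0:
        fails.append("second_law")
    if abs(T0 * s_gen - ex_d) > 0.02 * abs(ex_d):
        fails.append("gouy_stodola")
    if abs((av + un) - ex_d) > 1e-6 * max(abs(ex_d), 1.0):
        fails.append("av_un_split")
    return fails

# A self-consistent example: 100 kW in, 75 out, 5 to waste, 20 destroyed.
ok = failed_checks(100.0, 75.0, 5.0, 20.0, 20.0 / T0, 12.0, 8.0)   # []
```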

## Known Limitations

- **No structured JSON output** — 8B models cannot reliably produce valid JSON for complex analyses
- **Arithmetic variance** — the same problem may yield slightly different numerical results across runs (inherent to autoregressive generation)
- **mechanism_values** — entropy generation decomposition into individual kW/K values remains weak (~5% pass rate)
- **Steam table lookup** — the model approximates rather than exactly reproduces tabulated values
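
The arithmetic-variance limitation can be mitigated at the application layer by sampling several completions and aggregating the extracted value. A hypothetical sketch (the regex and the answers are illustrative, not real model output):

```python
import re
import statistics

# Work around run-to-run arithmetic variance: query the model several
# times, extract the headline number from each answer, keep the median.

def extract_kw(text):
    """Pull 'Ex_d = <number> kW' out of a model answer, if present."""
    m = re.search(r"Ex_d\s*[=:]\s*([\d.]+)\s*kW", text)
    return float(m.group(1)) if m else None

answers = [
    "... therefore Ex_d = 19.8 kW",
    "... therefore Ex_d = 20.3 kW",
    "... therefore Ex_d = 20.1 kW",
]
values = [v for v in (extract_kw(a) for a in answers) if v is not None]
consensus = statistics.median(values)     # 20.1
spread = max(values) - min(values)        # flag the run if this is large
```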

## Hardware Requirements

| Setup | Memory | Speed |
|---|---|---|
| GPU inference (recommended) | ≥6 GB VRAM | ~80-120 tokens/s |
| CPU inference | 8+ GB RAM | ~5-10 tokens/s |

Tested on: NVIDIA L4 24GB (GCE), NVIDIA A40 48GB (RunPod), Apple M-series (CPU).

## Files

| File | Size | Description |
|---|---|---|
| `entropy-hunter-v04-Q4_K_M.gguf` | 4.7 GB | Main model (Q4_K_M quantization) |
| `training_metadata_v04.json` | ~2 KB | Training configuration and stats |
| `lora-v04/` | ~160 MB | LoRA adapter (for re-quantization) |

## Citation

If you use EntropyHunter in your work, please cite:

```bibtex
@misc{duzkar2026entropyhunter,
  title={EntropyHunter: A Fine-Tuned LLM for Industrial Exergy Analysis},
  author={Düzkar, Kemal},
  year={2026},
  url={https://huggingface.co/olivenet/entropy-hunter-8b-gguf}
}
```

## About

Built by Kemal Düzkar at Olivenet (KKTC). EntropyHunter is part of a larger vision: combining IoT sensing with deep second-law thermodynamic analysis to help industrial facilities find and eliminate exergy destruction — the hidden inefficiencies that first-law analysis misses.

ExergyLab (36,000+ lines, 7 analysis engines) provides the domain foundation. EntropyHunter packages that expertise into an edge-deployable AI model.


## License

Apache 2.0
