
EnergyAnalyst-v0.1

A Mistral-7B-v0.3 model fine-tuned for energy policy and regulatory compliance analysis.

Model Description

This model specializes in:

  • Identifying regulatory compliance requirements
  • Spotting arbitrage opportunities in energy regulations
  • Analyzing policy gaps and inconsistencies
  • Generating actionable compliance strategies

Training Process

Three-stage training pipeline:

  1. Stage 1: SFT on Dolly-15k for general instruction following
  2. Stage 2A: Continued pre-training on 50k energy policy documents
  3. Stage 2B: Fine-tuning on 7k domain-specific Q&A pairs

Usage

Local RAG Setup (Recommended)

This repository includes a complete RAG (Retrieval-Augmented Generation) system for local testing:

Quick Start:

  1. Interactive Chat: python chat.py - Chat directly with EnergyAnalyst
  2. API Server: python api/server.py - HTTP API for integration with other repos/services

See RAG Guide for complete setup instructions.
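The bundled scripts implement the actual retriever; as a rough illustration of the retrieval step only (the function names and lexical scoring below are ours, not the repo's API), a minimal keyword-overlap retriever looks like:

```python
# Minimal sketch of the "retrieval" half of RAG: score documents by
# keyword overlap with the query, then prepend the best match to the
# prompt. Illustrative only -- chat.py and the API server ship their own
# retriever, which is typically embedding-based rather than lexical.

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k documents sharing the most words with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Place retrieved context ahead of the question, Alpaca-style."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\n### Instruction:\n{query}\n\n### Response:\n"
```

In the real setup the documents would be chunks of energy policy text, and retrieval quality would come from embedding similarity rather than word overlap.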

Direct Model Usage (Transformers)

For direct model usage without RAG:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("asoba/EnergyAnalyst-v0.1")
tokenizer = AutoTokenizer.from_pretrained("asoba/EnergyAnalyst-v0.1")

prompt = """You are a regulatory compliance expert. Your core capabilities:
1. Read between the lines for subtext and unstated implications
2. Map regulatory requirements precisely
3. Spot arbitrage opportunities and gaps in regulations
4. Generate actionable compliance checklists with specific steps

Always provide detailed, truthful, actionable responses with clear structure.

### Instruction:
What are the key compliance requirements for utility-scale solar projects?

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
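The system prompt and the ### Instruction / ### Response framing above can be factored into a small helper. The function name is ours, not part of the model's API; the template text is copied verbatim from the snippet above:

```python
# Hypothetical helper (not shipped with the model) that reproduces the
# Alpaca-style template used in the example above.

SYSTEM_PROMPT = """You are a regulatory compliance expert. Your core capabilities:
1. Read between the lines for subtext and unstated implications
2. Map regulatory requirements precisely
3. Spot arbitrage opportunities and gaps in regulations
4. Generate actionable compliance checklists with specific steps

Always provide detailed, truthful, actionable responses with clear structure."""

def format_prompt(instruction: str) -> str:
    """Wrap a user question in the template the model was fine-tuned on."""
    return f"{SYSTEM_PROMPT}\n\n### Instruction:\n{instruction}\n\n### Response:\n"
```

Keeping the template in one place avoids the silent quality drop that follows from querying an instruction-tuned model with a differently formatted prompt.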

Documentation

Limitations

  • Context window limited to 1,024 tokens
  • Quantitative calculations should be independently verified
  • Training data is current through July 25, 2025
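Because the window is only 1,024 tokens, long prompts must leave room for generation. A rough budgeting check (whitespace word count is only a crude stand-in for the real tokenizer, which usually produces more tokens than words):

```python
# Crude sketch: check that a prompt leaves room for the response inside
# the 1024-token window. Whitespace splitting only approximates the real
# tokenizer; in practice use len(tokenizer(prompt)["input_ids"]).

CONTEXT_WINDOW = 1024

def fits_in_window(prompt: str, max_new_tokens: int = 512) -> bool:
    approx_prompt_tokens = len(prompt.split())  # rough lower bound
    return approx_prompt_tokens + max_new_tokens <= CONTEXT_WINDOW
```

Prompts that fail this check should be shortened (or `max_new_tokens` reduced), since anything beyond the window is truncated silently.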

Training Details

  • Base model: Mistral-7B-v0.3
  • LoRA config: r=32, alpha=32, all attention + MLP layers
  • Hardware: NVIDIA A10G
  • Training framework: Unsloth
  • Total training time: ~48 hours across all stages
  • Optimizer: AdamW with cosine learning rate schedule
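The r=32, all-attention-plus-MLP LoRA configuration above implies a concrete trainable-parameter budget. As a back-of-the-envelope check (layer dimensions are taken from Mistral-7B-v0.3's published config and are our assumption here, not stated in this card):

```python
# Back-of-the-envelope count of trainable LoRA parameters for r=32 across
# all attention + MLP projections. Assumed Mistral-7B-v0.3 shapes: hidden
# size 4096, 32 decoder layers, GQA KV dim 1024, MLP intermediate 14336.

r = 32
hidden, kv_dim, mlp_dim, n_layers = 4096, 1024, 14336, 32

# (fan_in, fan_out) for each adapted projection in one decoder layer
projections = [
    (hidden, hidden),   # q_proj
    (hidden, kv_dim),   # k_proj
    (hidden, kv_dim),   # v_proj
    (hidden, hidden),   # o_proj
    (hidden, mlp_dim),  # gate_proj
    (hidden, mlp_dim),  # up_proj
    (mlp_dim, hidden),  # down_proj
]

# Each LoRA adapter adds an (r x fan_in) matrix and a (fan_out x r) matrix
per_layer = sum(r * (fan_in + fan_out) for fan_in, fan_out in projections)
total = per_layer * n_layers
print(f"{total:,} trainable parameters (~{total / 7e9:.1%} of 7B)")
```

Under these assumptions the adapter trains roughly 84M parameters, on the order of 1% of the 7B base model, which is what makes single-A10G training feasible.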

Evaluation

Performance metrics on held-out test set:

  • Regulatory requirement identification: 92% accuracy
  • Policy gap detection: 87% precision
  • Compliance checklist generation: 4.2/5 expert rating

Citation

If you use this model, please cite:

@misc{energyanalyst2025,
  author = {Shingai Samudzi and {Asoba Corporation}},
  title = {EnergyAnalyst-v0.1: A Fine-tuned Model for Energy Policy Analysis},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/asoba/EnergyAnalyst-v0.1}}
}

Acknowledgments

This model was trained using the Unsloth library for efficient fine-tuning.