# EnergyAnalyst-v0.1

A Mistral-7B-v0.3 model fine-tuned for energy policy and regulatory compliance analysis.
## Model Description
This model specializes in:
- Identifying regulatory compliance requirements
- Spotting arbitrage opportunities in energy regulations
- Analyzing policy gaps and inconsistencies
- Generating actionable compliance strategies
## Training Process
Three-stage training pipeline:
- Stage 1: SFT on Dolly-15k for general instruction following
- Stage 2A: Continued pre-training on 50k energy policy documents
- Stage 2B: Fine-tuning on 7k domain-specific Q&A pairs
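The pipeline above can be summarized in data form. A sketch: the stage names and dataset sizes come from this card, while the dict layout itself is purely illustrative:

```python
# Summary of the training pipeline described above; example counts are the
# ones stated on this card, the structure is illustrative only.
TRAINING_STAGES = [
    {"stage": "1",  "method": "supervised fine-tuning", "data": "Dolly-15k",               "examples": 15_000},
    {"stage": "2A", "method": "continued pre-training", "data": "energy policy documents", "examples": 50_000},
    {"stage": "2B", "method": "supervised fine-tuning", "data": "domain-specific Q&A",     "examples": 7_000},
]

total_examples = sum(stage["examples"] for stage in TRAINING_STAGES)
```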
## Usage

### Local RAG Setup (Recommended)
This repository includes a complete RAG (Retrieval-Augmented Generation) system for local testing:
Quick start:

- Interactive chat: run `python chat.py` to chat directly with EnergyAnalyst
- API server: run `python api/server.py` to expose an HTTP API for integration with other repos/services

See the RAG Guide for complete setup instructions.
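Once the API server is running, other services can query it over HTTP. A minimal client sketch, assuming a hypothetical `/query` endpoint on port 8000 and a `{"question": ...}` / `{"answer": ...}` JSON schema; check `api/server.py` for the actual route and payload:

```python
import json
from urllib import request

# Hypothetical endpoint; the real route is defined in api/server.py.
API_URL = "http://localhost:8000/query"

def ask(question: str) -> str:
    """POST a question to the local EnergyAnalyst RAG API and return the answer.

    The request/response schema here is an assumption, not confirmed by the repo.
    """
    body = json.dumps({"question": question}).encode("utf-8")
    req = request.Request(API_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["answer"]
```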
### Direct Model Usage (Transformers)
For direct model usage without RAG:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("asoba/EnergyAnalyst-v0.1")
tokenizer = AutoTokenizer.from_pretrained("asoba/EnergyAnalyst-v0.1")

prompt = """You are a regulatory compliance expert. Your core capabilities:
1. Read between the lines for subtext and unstated implications
2. Map regulatory requirements precisely
3. Spot arbitrage opportunities and gaps in regulations
4. Generate actionable compliance checklists with specific steps
Always provide detailed, truthful, actionable responses with clear structure.

### Instruction:
What are the key compliance requirements for utility-scale solar projects?

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt")
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
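The prompt above follows an Alpaca-style `### Instruction:` / `### Response:` template. A small helper (hypothetical, not part of the repo) keeps that formatting consistent across queries:

```python
# System preamble taken from the usage example on this card.
SYSTEM = (
    "You are a regulatory compliance expert. "
    "Always provide detailed, truthful, actionable responses with clear structure."
)

def build_prompt(instruction: str) -> str:
    """Wrap a question in the Alpaca-style template the model was tuned on."""
    return f"{SYSTEM}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt(
    "What are the key compliance requirements for utility-scale solar projects?"
)
```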
## Documentation
- RAG Guide - Complete RAG setup and usage guide
- Platform Integration - Using RAG API from other repos (platform, zorora, etc.)
- Run Local - Running the API server as a background service
- Quick Start - Quick setup guide
- Local Install - Local installation instructions
## Limitations
- Context window limited to 1024 tokens
- Quantitative calculations should be independently verified
- Training data current through July 25, 2025
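Because the context window is only 1024 tokens, long prompts leave little room for generation. A small budgeting helper (illustrative, not part of the repo) makes the trade-off explicit:

```python
MAX_CONTEXT = 1024  # total window: prompt tokens + generated tokens

def max_new_tokens_for(prompt_tokens: int) -> int:
    """Tokens generation can add without overflowing the 1024-token window."""
    return max(0, MAX_CONTEXT - prompt_tokens)
```

Pass the result as `max_new_tokens` to `model.generate` to avoid truncated or overflowing outputs.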
## Training Details
- Base model: mistralai/Mistral-7B-v0.3
- LoRA config: r=32, alpha=32, all attention + MLP layers
- Hardware: NVIDIA A10G
- Training framework: Unsloth
- Total training time: ~48 hours across all stages
- Optimizer: AdamW with cosine learning rate schedule
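The LoRA setup above corresponds roughly to the following configuration, shown here as a plain dict sketch. The `r` and `alpha` values come from this card; the target-module names are the standard Mistral-7B attention and MLP projections, assumed rather than stated:

```python
# LoRA hyperparameters from this card; target_modules lists the usual
# Mistral-7B projection layers (an assumption, not confirmed by the card).
lora_config = {
    "r": 32,
    "lora_alpha": 32,
    "target_modules": [
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
}
```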
## Evaluation
Performance metrics on held-out test set:
- Regulatory requirement identification: 92% accuracy
- Policy gap detection: 87% precision
- Compliance checklist generation: 4.2/5 expert rating
## Citation
If you use this model, please cite:
```bibtex
@misc{energyanalyst2025,
  author       = {Shingai Samudzi and {Asoba Corporation}},
  title        = {EnergyAnalyst-v0.1: A Fine-tuned Model for Energy Policy Analysis},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/asoba/EnergyAnalyst-v0.1}}
}
```
## Acknowledgments
This model was trained using the Unsloth library for efficient fine-tuning.