---
language:
- en
license: cc-by-nc-4.0
base_model:
- google/flan-t5-large
---
# EPlus-LLM

**Natural Language Interface for Automated Building Energy Modeling via LLMs**

A prototype project exploring the use of fine-tuned large language models to automate building energy modeling from natural language input.
## 🎉 News
- ⚡️ [2025/01/01]: A prompting-based method for automated building energy modeling has been released. Paper here.
- 🔥 [2024/05/16]: We achieve the first natural-language-based automated building energy modeling by fine-tuning a large language model (LLM). Paper here.
## 🌟 Key Features
- Scalability: Auto-generates EnergyPlus models, including varying geometry sizes and internal loads (see the example description after this list).
- Accuracy & Efficiency: Achieves 100% modeling accuracy while reducing manual modeling time by over 95%.
- Interaction & Automation: A user-friendly human-AI interface for seamless model creation and customization.
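To illustrate the kind of natural-language input the platform is designed for, here is a sketch of a building description. The specific attributes and phrasing are hypothetical; the vocabulary EPlus-LLMv1 was actually fine-tuned on may differ.

```python
# Hypothetical building description (illustrative only): the attribute set and
# phrasing that EPlus-LLMv1 expects may differ from this example.
description = (
    "Simulate a rectangular single-story building that is 30 m long, 20 m wide, "
    "and 3 m high, with a window-to-wall ratio of 0.3, an occupancy density of "
    "0.05 people/m2, a lighting power density of 10 W/m2, and an equipment "
    "power density of 8 W/m2."
)
```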
## 🏗️ Target Users
The current platform is designed for engineers, architects, and researchers working in building performance, sustainability, and resilience. It is especially useful during early-stage conceptual design, when modeling decisions have the greatest impact.
## 🚀 Quick Start
The code snippet below shows how to load EPlus-LLM and auto-generate building energy models.
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Use a GPU when available.
device = "cuda" if torch.cuda.is_available() else "cpu"

# The fine-tuned model reuses the base Flan-T5 tokenizer.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("EPlus-LLM/EPlus-LLMv1").to(device)

# Generation settings: long outputs for complete models; low temperature/top-p
# keep decoding near-deterministic (they apply when sampling is enabled).
generation_config = model.generation_config
generation_config.max_new_tokens = 1300
generation_config.temperature = 0.1
generation_config.top_p = 0.1
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id

# Describe the desired building in natural language.
input_text = "<Your input, description of the desired building.>"
inputs = tokenizer(input_text, return_tensors="pt", truncation=False).to(device)

generated_ids = model.generate(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    generation_config=generation_config,
)

# Decode, then restore the EnergyPlus file layout:
# underscores encode spaces and "|" encodes line breaks.
generated_output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
generated_output = generated_output.replace("_", " ").replace("|", "\n")
print(generated_output)
```
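The decoded text is intended to be an EnergyPlus input file. Below is a minimal sketch for saving and simulating it, assuming the output is a complete, valid IDF and that the `energyplus` command-line tool is installed; `generated_building.idf` and `weather.epw` are placeholder file names.

```python
import subprocess

# Save the generated model as an EnergyPlus input file (file name is a placeholder).
with open("generated_building.idf", "w") as f:
    f.write(generated_output)

# Optionally run the simulation with the EnergyPlus CLI:
# -w selects the weather file, -d the output directory (both placeholders).
subprocess.run(
    ["energyplus", "-w", "weather.epw", "-d", "eplus_out", "generated_building.idf"],
    check=True,
)
```

EnergyPlus writes its reports (e.g., `eplusout.err`) into the chosen output directory.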
## 📝 Citation
If you find our work helpful, please consider citing our papers:
```bibtex
@article{jiang2024eplusllm,
  author  = {Gang Jiang and Zhihao Ma and Liang Zhang and Jianli Chen},
  title   = {EPlus-LLM: A large language model-based computing platform for automated building energy modeling},
  journal = {Applied Energy},
  volume  = {367},
  pages   = {123431},
  year    = {2024},
  month   = {Aug},
  doi     = {10.1016/j.apenergy.2024.123431}
}

@article{jiang2025prompting,
  author  = {Gang Jiang and Zhihao Ma and Liang Zhang and Jianli Chen},
  title   = {Prompt engineering to inform large language models in automated building energy modeling},
  journal = {Energy},
  volume  = {316},
  pages   = {134548},
  year    = {2025},
  month   = {Feb},
  doi     = {10.1016/j.energy.2025.134548}
}
```