EPlus-LLM committed · verified
Commit 5117256 · 1 Parent(s): 3a889ac

Update README.md

Files changed (1): README.md (+77 -12)

README.md CHANGED
@@ -21,26 +21,63 @@ A prototype project exploring the use of fine-tuned large language models to aut
 [Paper here](https://doi.org/10.1016/j.apenergy.2024.123431).

 ## 🚀 Key Features
- - Scalability: Auto-generates complex EnergyPlus models, including varying geometries, materials, thermal zones, hourly schedules, and more.
- - Accuracy & Efficiency: Achieves 100% modeling accuracy while reducing manual modeling time by over 98%.
 - Interaction & Automation: A user-friendly human-AI interface for seamless model creation and customization.

- - Flexible Design Scenarios:
-
- ✅ Geometry: square, L-, T-, U-, and hollow-square-shaped buildings
- ✅ Roof types: flat, gable, hip – customizable attic/ridge height
- ✅ Orientation & windows: custom WWR, window placement, facade-specific controls
- ✅ Walls & materials: thermal properties, insulation types
- ✅ Internal loads: lighting, equipment, occupancy, infiltration/ventilation, schedules, heating/cooling setpoints
- ✅ Thermal zoning: configurable multi-zone layouts with core & perimeter zones
-
 ## 🏗️ Target Users
 This current platform is designed for engineers, architects, and researchers working in building performance, sustainability, and resilience. It is especially useful during early-stage conceptual design when modeling decisions have the greatest impact.

 ## 🚀 Quick Start

- This repository contains v2 and v1 of EPlus-LLM, along with implementation details for the ABEM reference.
 📂 Repository Structure
 
@@ -67,4 +104,32 @@ This repository contains v2 and v1 of EPlus-LLM, along with implementation detai
 ```
 cd v2
 python EPlus-LLM/v2/Inference.py
 ```
 
 [Paper here](https://doi.org/10.1016/j.apenergy.2024.123431).

 ## 🚀 Key Features
+ - Scalability: Auto-generates EnergyPlus models, including varying geometry sizes and internal loads.
+ - Accuracy & Efficiency: Achieves 100% modeling accuracy while reducing manual modeling time by over 95%.
 - Interaction & Automation: A user-friendly human-AI interface for seamless model creation and customization.
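To illustrate the human-AI interaction described above, a request to the interface would be a plain-language building description. The wording below is a hypothetical example for illustration only, not a prompt taken from the repository:

```python
# Hypothetical example of the kind of natural-language request a
# human-AI modeling interface could accept (illustrative only):
user_request = (
    "Create a single-story rectangular building, 30 m by 20 m and 3 m high, "
    "with a window-to-wall ratio of 0.3 and a heating setpoint of 20 C."
)

# A fine-tuned model would translate such a request into an
# EnergyPlus input (IDF) description.
print(user_request)
```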
  ## πŸ—οΈ Target Users
29
  This current platform is designed for engineers, architects, and researchers working in building performance, sustainability, and resilience. It is especially useful during early-stage conceptual design when modeling decisions have the greatest impact.
30
 
31
  ## πŸš€ Quick Start
32
 
33
+ Here provides a code snippet to show you how to load the EPlus-LLM and auto-generate building energy models.
34
+
35
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Qwen/Qwen2.5-32B-Instruct"
+
+ # Load the base model and tokenizer first (generation_config below is
+ # read from the loaded model, so this must come before configuring it).
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ generation_config = model.generation_config
+ generation_config.max_new_tokens = 1300
+ generation_config.do_sample = True  # needed for temperature/top_p to take effect
+ generation_config.temperature = 0.1
+ generation_config.top_p = 0.1
+ generation_config.num_return_sequences = 1
+ generation_config.pad_token_id = tokenizer.eos_token_id
+ generation_config.eos_token_id = tokenizer.eos_token_id
+
+ prompt = "Give me a short introduction to large language model."
+ messages = [
+     {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(
+     **model_inputs,
+     generation_config=generation_config
+ )
+ # Drop the prompt tokens so only the newly generated text remains.
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ ```
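The decoded `response` can then be written out as an EnergyPlus input file for simulation. A minimal sketch, assuming the generated text is a complete IDF model; the placeholder text and the commented `energyplus` CLI call are assumptions that depend on your local EnergyPlus installation:

```python
from pathlib import Path

def save_idf(idf_text: str, out_path: str = "model.idf") -> Path:
    """Write generated EnergyPlus input text to an .idf file."""
    path = Path(out_path)
    path.write_text(idf_text, encoding="utf-8")
    return path

# Placeholder IDF text; a real run would pass the decoded `response`.
idf_path = save_idf("Version,24.1;")

# Hypothetical follow-up: simulate with the EnergyPlus CLI (requires a
# local EnergyPlus install and a weather file on disk):
# import subprocess
# subprocess.run(["energyplus", "-w", "weather.epw", "-d", "out", str(idf_path)], check=True)
```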

 📂 Repository Structure

 
 ```
 cd v2
 python EPlus-LLM/v2/Inference.py
+ ```
+
+ ## 📝 Citation
+
+ If you find our work helpful, please consider citing our papers.
+
+ ```
+ @article{jiang2024prompt,
+   author  = {Gang Jiang and Zhihao Ma and Liang Zhang and Jianli Chen},
+   title   = {Prompt engineering to inform large language models in automated building energy modeling},
+   journal = {Applied Energy},
+   volume  = {367},
+   pages   = {123431},
+   year    = {2024},
+   month   = {Aug},
+   doi     = {10.1016/j.apenergy.2024.123431}
+ }
+
+ @article{jiang2025prompt,
+   author  = {Gang Jiang and Zhihao Ma and Liang Zhang and Jianli Chen},
+   title   = {Prompt engineering to inform large language models in automated building energy modeling},
+   journal = {Energy},
+   volume  = {316},
+   pages   = {134548},
+   year    = {2025},
+   month   = {Feb},
+   doi     = {10.1016/j.energy.2025.134548}
+ }
 ```