Gen-HVAC committed on
Commit 29012cb · verified · 1 Parent(s): 309923d

Update README.md

Files changed (1):
  1. README.md +19 -22
README.md CHANGED
@@ -71,16 +71,14 @@ docker run -it \

![Gen-HVAC](assets/Decision_transformer.png)
### Data generation
- Trajectory generation is executed through the rollout runner combined with a behavior policy.
- The framework is policy-based: any controller that maps
-
- use the data generation script along with rollout runner to generate sequential data.
-
- Our architecture works with all kinds of policy and you can try different patterns for generating data. If you have ecobee data then that can work too. if you have MPC rules for
- a particular building model then it will work excellent. This works a framework for data generation.
-
- We have rollouts which you can use to generate specific building location data or building type or combine different envolop locations and weather and building type.

```bash
# Inside Docker container
@@ -127,23 +125,22 @@ Each file contains:

Temporal resolution: 15 minutes
Episode length: 35040 timesteps (1 simulation year)
- ### Training Phase
-
- After you have generated data you can move on to the training phase which , for our experiments we generted more than 2300 sequential data combinations and resulted in more than 3 million trajectories.
-
- Training phase is devided into 3 parts Dataloader, decision transformer and losses and finally the main training code.
-
- The only changes needed will be mapping of the observation data from the sensors and also the action keys. We have already done that for a office small STD2013 and Office Medium STD2013. The same architecture
- can be extended to other buildings in the HOT data set and also with the ecobee data set as well as any real building dataset.
-
- Next comes the training code. We tried to make a system which can a be framed as a general zero shot system. However the novelty also lies in the entire system since this system can be extended
- to cover vast amount of data atleast 1000 to 10000 times more. In the training code you have to simply increase the size of the transformer model and our losses and embeddings layers will
- try to generalize over more and more buildings residential homes etc.
-
- We condition on different RTG for comfort and energy savings. Anykind of data will already be filtered on different RTG and TOPK filtering helping model to understand what kind of
- actions lead to what kind of consequenses.
-
- 5) LLM deployment phase
Gen-HVAC supports an optional LLM + Digital Human-in-the-Loop (DHIL) layer that modulates preference/RTG targets and high-level
constraints. For local LLM hosting, install Ollama, pull a quantized model, and launch the service.
@@ -154,7 +151,7 @@ In our testing we choose Deepseek R1.

Once pulled, sanity-check locally with `ollama run deepseek-r1:7b`; then, in another terminal, point your Gen-HVAC LLM client to the default endpoint and run your integration from the llm/ folder (e.g., `python -m llm.server --host 0.0.0.0 --port 8000` and `python -m llm.client --base_url http://localhost:xxxx --model deepseek-r1:7b`).
After the LLM endpoint is up, you can proceed to the inference-server step, which binds the persona/prompt layer to RTG conditioning and the control loop in one end-to-end pipeline.

- 7) Inference
During inference, we deploy Gen-HVAC as a stateless HTTP microservice
that loads the trained Decision Transformer checkpoint and normalization statistics at startup, maintains a short autoregressive context window internally,
 

![Gen-HVAC](assets/Decision_transformer.png)
### Data generation
+ Trajectory generation is executed via the rollout runner coupled with a behavior policy. Use the data-generation script together with the rollout runner to generate
+ temporally consistent data across different buildings, climates, and envelope/occupancy variants. You can generate datasets using rule-based controllers,
+ learned policies, MPC-style rules, or real-building logs such as Ecobee traces, and the same pipeline will serialize them into a unified trajectory format.
+ The provided rollout utilities support targeted generation for a specific location or building type, as well as generation that mixes envelope variants, weather files,
+ and building archetypes to construct large, diverse training corpora.

+ All of this must be run inside a Docker container that includes EnergyPlus. If you already have EnergyPlus installed natively, you can simply adopt the Sinergym setup and
+ generate the sequential training data.
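As a concrete illustration, the rollout pattern described above can be sketched as follows. Everything here is a hypothetical stand-in so the sketch runs without EnergyPlus: the `ToyZoneEnv` environment, `rule_based_policy`, and the `Trajectory` fields are assumptions for illustration, not the repo's actual interfaces.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Trajectory:
    """One episode of sequential data in a unified format."""
    observations: List[list] = field(default_factory=list)
    actions: List[list] = field(default_factory=list)
    rewards: List[float] = field(default_factory=list)

def rollout(env, policy: Callable, horizon: int) -> Trajectory:
    """Run one behavior policy against one environment and record a trajectory."""
    traj = Trajectory()
    obs = env.reset()
    for _ in range(horizon):
        action = policy(obs)             # any controller mapping obs -> action plugs in here
        next_obs, reward, done = env.step(action)
        traj.observations.append(obs)
        traj.actions.append(action)
        traj.rewards.append(reward)
        obs = next_obs
        if done:
            break
    return traj

# --- toy stand-ins so the sketch runs without an EnergyPlus backend ---
class ToyZoneEnv:
    def reset(self):
        self.t = 0
        return [21.0, 0.0]                       # [zone_temp, outdoor_temp]
    def step(self, action):
        self.t += 1
        obs = [21.0 + 0.1 * action[0], 5.0]
        reward = -abs(obs[0] - 22.0)             # comfort-style penalty
        return obs, reward, self.t >= 96         # 96 steps = one day at 15-minute resolution

def rule_based_policy(obs):
    return [1.0] if obs[0] < 22.0 else [-1.0]    # heat below setpoint, otherwise cool

traj = rollout(ToyZoneEnv(), rule_based_policy, horizon=96)
print(len(traj.actions))  # 96 steps of sequential data
```

Because the policy is just a callable, the same loop serves rule-based controllers, learned policies, or replayed real-building logs.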

```bash
# Inside Docker container
 
Temporal resolution: 15 minutes
Episode length: 35040 timesteps (1 simulation year)
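The episode length follows directly from the resolution: a 15-minute step gives 4 steps per hour, so one simulation year is 4 × 24 × 365 timesteps:

```python
steps_per_hour = 60 // 15                        # 15-minute resolution
timesteps_per_year = steps_per_hour * 24 * 365   # one non-leap simulation year
print(timesteps_per_year)  # 35040
```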

+ ### Training Phase

+ After data generation, you can proceed to the training phase. In our experiments, we generated 2,300+ building–weather–policy combinations, yielding 3M+ sequential
+ state–action transitions. The training pipeline is modular and consists of the dataloader, the Decision Transformer model, the loss modules, and the main training loop.

+ In most cases, the only required adaptation is mapping your raw sensor observations to the expected schema and defining the corresponding action keys;
+ we provide validated mappings for OfficeSmall STD2013 (5-zone) and OfficeMedium STD2013 (15-zone), and the same interface extends directly to other HOT buildings
+ as well as Ecobee or other real-building datasets. The training implementation is designed for generalization and zero-shot transfer.
+ Scaling to larger and more diverse building types primarily requires increasing model capacity (`d_model`, layers, heads); the embedding and loss structure can remain
+ unchanged. The pipeline supports heterogeneous buildings, zone counts, and sensing modalities.
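A minimal sketch of such an observation/action mapping is shown below; the sensor names, schema keys, and action keys are illustrative assumptions, not the repo's actual configuration.

```python
# Hypothetical sensor-to-schema mapping; key names are illustrative only.
OBS_MAP = {
    "zone_air_temperature": "zone_temp",
    "site_outdoor_air_drybulb": "outdoor_temp",
    "facility_total_hvac_power": "hvac_power",
}
ACTION_KEYS = ["heating_setpoint", "cooling_setpoint"]

def map_observation(raw: dict) -> dict:
    """Rename raw sensor readings into the schema the dataloader expects."""
    return {schema_key: raw[sensor_key] for sensor_key, schema_key in OBS_MAP.items()}

raw = {
    "zone_air_temperature": 22.4,
    "site_outdoor_air_drybulb": 3.1,
    "facility_total_hvac_power": 1850.0,
}
obs = map_observation(raw)
print(obs["zone_temp"])  # 22.4
```

Adapting to a new building then reduces to editing `OBS_MAP` and `ACTION_KEYS` rather than changing model code.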
+ We condition the policy on multi-objective return-to-go (RTG) targets for energy and comfort, and optionally apply Top-K filtering/selection by RTG to bias training
+ toward higher-quality sub-trajectories, enabling the model to learn how different action sequences causally trade off energy consumption and comfort outcomes.
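The RTG computation and Top-K selection can be sketched in a few lines; this is a generic illustration of the technique, not the repo's implementation.

```python
def returns_to_go(rewards):
    """RTG at step t is the sum of rewards from t to the end of the episode."""
    rtg, total = [], 0.0
    for r in reversed(rewards):
        total += r
        rtg.append(total)
    return rtg[::-1]

def top_k_by_rtg(trajectories, k):
    """Keep the k reward sequences with the highest total return (initial RTG)."""
    return sorted(trajectories, key=lambda rw: sum(rw), reverse=True)[:k]

rewards = [-1.0, -0.5, -0.25]          # e.g., per-step comfort penalties
print(returns_to_go(rewards))          # [-1.75, -0.75, -0.25]

pool = [[-3.0, -3.0], [-0.5, -0.5], [-1.0, -1.0]]
print(top_k_by_rtg(pool, k=2))         # [[-0.5, -0.5], [-1.0, -1.0]]
```

Conditioning the model on the initial RTG of each kept sub-trajectory is what lets it associate action sequences with their downstream energy/comfort outcomes.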

+ ### LLM deployment phase
Gen-HVAC supports an optional LLM + Digital Human-in-the-Loop (DHIL) layer that modulates preference/RTG targets and high-level
constraints. For local LLM hosting, install Ollama, pull a quantized model, and launch the service.
 
Once pulled, sanity-check locally with `ollama run deepseek-r1:7b`; then, in another terminal, point your Gen-HVAC LLM client to the default endpoint and run your integration from the llm/ folder (e.g., `python -m llm.server --host 0.0.0.0 --port 8000` and `python -m llm.client --base_url http://localhost:xxxx --model deepseek-r1:7b`).
After the LLM endpoint is up, you can proceed to the inference-server step, which binds the persona/prompt layer to RTG conditioning and the control loop in one end-to-end pipeline.
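For reference, a raw request against Ollama's standard `/api/generate` endpoint (default port 11434) can be built like this; the prompt text is a made-up example, and the Gen-HVAC client wraps this kind of call rather than exposing it directly.

```python
import json
import urllib.request

def build_generate_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{base_url}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request(
    "http://localhost:11434",                 # default Ollama endpoint; adjust to your setup
    "deepseek-r1:7b",
    "Occupants report feeling cold; suggest an RTG adjustment.",  # illustrative prompt
)
# resp = urllib.request.urlopen(req)          # uncomment once the Ollama service is running
```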

+ ### Inference
During inference, we deploy Gen-HVAC as a stateless HTTP microservice
that loads the trained Decision Transformer checkpoint and normalization statistics at startup, maintains a short autoregressive context window internally,
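The short autoregressive context window can be sketched as a fixed-length buffer of (RTG, state, action) triples; the class name and field layout below are hypothetical, not the service's actual data structures.

```python
from collections import deque

class ContextWindow:
    """Fixed-length autoregressive context of (rtg, state, action) triples,
    as a Decision Transformer inference service might maintain between requests."""
    def __init__(self, max_len: int):
        self.steps = deque(maxlen=max_len)    # oldest steps fall off automatically

    def append(self, rtg: float, state, action):
        self.steps.append((rtg, state, action))

    def as_model_input(self):
        """Return parallel sequences, oldest first, ready for tokenization."""
        rtgs = [s[0] for s in self.steps]
        states = [s[1] for s in self.steps]
        actions = [s[2] for s in self.steps]
        return rtgs, states, actions

ctx = ContextWindow(max_len=20)
for t in range(25):                           # 25 steps into a 20-step window
    ctx.append(rtg=-float(t), state=[21.0 + 0.01 * t], action=[0.0])
rtgs, states, actions = ctx.as_model_input()
print(len(rtgs))   # 20 -- only the most recent steps are kept
print(rtgs[0])     # -5.0 -- steps 0-4 have been evicted
```

Because the window is the only per-stream state, the HTTP service itself can stay stateless across deployments aside from this small buffer.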