Gen-HVAC committed on
Commit
aaf8d81
·
verified ·
1 Parent(s): a0c7810

Update README.md

Files changed (1)
  1. README.md +49 -23
README.md CHANGED
@@ -52,35 +52,61 @@ able, data-driven HVAC control.
 
 # We divided this entire project into 4 phases. Please go through each step to use our system.
 
- ### Energy Plus Setup and Trajectory Data generation
-
- ## System Requirements
-
- - Ubuntu 22.04 (recommended)
- - Python ≥ 3.10
- - EnergyPlus ≥ 25.x
- - CUDA (optional, for GPU training)
- ## Install Dependencies
 
 ```bash
- # Create environment
- python -m venv genhvac_env
- source genhvac_env/bin/activate
-
- # Install required packages
- pip install torch torchvision torchaudio
- pip install gymnasium
- pip install sinergym
- pip install stable-baselines3
- pip install pandas numpy matplotlib tqdm
 ```
-
- 3) Training Phase
-
- 4) LLM deployment phase
-
- 5) Inference
-
- 6) Deployment on a real building using cloud server and edge device
 
+ ### EnergyPlus Setup
+ For this project we use Sinergym and EnergyPlus. Sinergym also provides a prebuilt Docker image, which you can pull with:
+ ```bash
+ docker pull ghcr.io/ugr-sail/sinergym:2.4.0
+ ```
+
+ After this, run the Docker container and continue with the next steps:
 
 ```bash
+ docker run -it \
+   --name genhvac_container \
+   -v $(pwd):/workspace \
+   ghcr.io/ugr-sail/sinergym:2.4.0 \
+   /bin/bash
 ```
+ ### Data generation
+ We provide utils files that are used in all three stages: data generation, training, and inference.
+
+ Use the data generation script together with the rollout runner to generate sequential data.
+
+ Our architecture works with any kind of policy, and you can try different patterns for generating data. If you have Ecobee data, that works too; if you have MPC rules for
+ a particular building model, it works excellently. This serves as a framework for data generation.
+
+ We provide rollouts that you can use to generate data for a specific building location or building type, or to combine different envelope locations, weather files, and building types.
+
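As a sketch of one unit of the sequential data a rollout runner might emit, here is a minimal, illustrative trajectory container with a return-to-go (RTG) computation. The class and field names below are ours for illustration, not the exact keys used by our scripts:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Transition:
    # One timestep from a rollout: observation, action taken, reward received.
    obs: List[float]
    action: List[float]
    reward: float

@dataclass
class Trajectory:
    # One episode collected under some policy (random, MPC, Ecobee logs, ...).
    building: str          # e.g. "office_small_std2013" (illustrative label)
    weather: str           # e.g. "hot_dry" (illustrative label)
    steps: List[Transition] = field(default_factory=list)

    def returns_to_go(self) -> List[float]:
        # RTG[t] = sum of rewards from timestep t to the end of the episode.
        rtg, acc = [], 0.0
        for tr in reversed(self.steps):
            acc += tr.reward
            rtg.append(acc)
        return list(reversed(rtg))

traj = Trajectory("office_small_std2013", "hot_dry")
for r in [1.0, 0.5, 2.0]:
    traj.steps.append(Transition(obs=[0.0], action=[0.0], reward=r))
print(traj.returns_to_go())  # [3.5, 2.5, 2.0]
```

The RTG sequence is what the decision transformer is later conditioned on, so it is worth computing it once at data-generation time and storing it with each trajectory.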
+ ### Training Phase
+
+ After you have generated data, you can move on to the training phase. For our experiments we generated more than 2,300 sequential data combinations, resulting in more than 3 million trajectories.
+
+ The training phase is divided into three parts: the dataloader, the decision transformer and its losses, and finally the main training code.
+
+ The only changes needed are the mapping of the observation data from the sensors and the action keys. We have already done that for Office Small STD2013 and Office Medium STD2013. The same architecture
+ can be extended to other buildings in the HOT dataset, to the Ecobee dataset, and to any real building dataset.
+
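The sensor-to-model mapping can be as simple as a fixed, ordered list of observation keys and action keys per building. The key names below are illustrative placeholders, not the actual Sinergym variable names:

```python
# Ordered observation keys for one building model (illustrative names).
OBS_KEYS = [
    "outdoor_air_temp",
    "zone_air_temp",
    "zone_relative_humidity",
    "hvac_power",
]

# Ordered action keys the policy controls (illustrative names).
ACT_KEYS = ["heating_setpoint", "cooling_setpoint"]

def obs_to_vector(raw: dict) -> list:
    # Project a raw sensor dict onto the fixed feature order the model expects;
    # missing sensors default to 0.0 so different buildings share one layout.
    return [float(raw.get(k, 0.0)) for k in OBS_KEYS]

sample = {"zone_air_temp": 23.5, "outdoor_air_temp": 31.0, "hvac_power": 850.0}
print(obs_to_vector(sample))  # [31.0, 23.5, 0.0, 850.0]
```

Extending to a new building then amounts to writing its `OBS_KEYS`/`ACT_KEYS` lists and leaving the rest of the pipeline untouched.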
+ Next comes the training code. We tried to build a system that can be framed as a general zero-shot system. However, the novelty also lies in the system as a whole, since it can be extended
+ to cover vastly more data, at least 1,000 to 10,000 times more. In the training code you simply increase the size of the transformer model, and our losses and embedding layers will
+ generalize over more and more buildings, residential homes, etc.
+
+ We condition on different RTGs for comfort and energy savings. Any kind of data is first filtered on different RTGs, and top-k filtering helps the model understand what kinds of
+ actions lead to what kinds of consequences.
+
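The top-k filtering described above can be sketched as sorting trajectories by total return and keeping only the best k before training. This is a simplification of the actual pipeline, and the function names are ours:

```python
def total_return(trajectory):
    # A trajectory here is reduced to its list of per-step rewards.
    return sum(trajectory)

def topk_filter(trajectories, k):
    # Keep the k trajectories with the highest total return, so the model
    # sees which action sequences lead to good comfort/energy outcomes.
    return sorted(trajectories, key=total_return, reverse=True)[:k]

rollouts = [
    [0.1, 0.2, 0.1],   # return 0.4
    [1.0, 0.9, 1.1],   # return 3.0
    [0.5, 0.5, 0.5],   # return 1.5
]
best = topk_filter(rollouts, k=2)
print([round(total_return(t), 1) for t in best])  # [3.0, 1.5]
```

In the real pipeline the same idea applies per RTG bucket, so the model sees contrasting examples of both high-comfort and high-savings behaviour.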
+ ### LLM Deployment Phase
+
+ One of the main highlights of our system is the LLM and digital human in the loop, which is possible because of the flexible RTG conditioning enabled by the transformer architecture.
+ Different RTGs can lead to different sequences. This is a completely different approach, in which we give control to the LLM, and because of that we never face sudden, uneven temperature settings.
+
+ For LLM utilisation, please download and set up Ollama.
+
+ For these experiments we used several open-source quantized versions of both reasoning and non-reasoning LLMs, such as DeepSeek V1 and R1, Llama, etc.
+ You have to create the LLM client and server setup, which is provided in the llm folder.
+
+ Our system uses the LLM to capture the humanistic side of human behaviour, so you can change or tweak the prompt to match your behaviour or create your own personal digital persona.
+
+ After this, move on to the inference server side to tie all of these together.
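A minimal client for a locally running Ollama server might look like the sketch below. The endpoint and JSON fields follow Ollama's `/api/generate` REST API; the model name and persona prompt are placeholders you should adapt to your own setup:

```python
import json
import urllib.request

def build_request(model: str, persona: str, zone_temp: float) -> dict:
    # Fold the digital-persona description and current state into one prompt.
    prompt = (
        f"{persona}\n"
        f"The zone temperature is {zone_temp:.1f} C. "
        "Reply with a comfort/energy trade-off between 0 and 1."
    )
    return {"model": model, "prompt": prompt, "stream": False}

def query_ollama(payload: dict,
                 url: str = "http://localhost:11434/api/generate") -> str:
    # POST to a locally running Ollama server (requires `ollama serve`).
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = build_request("deepseek-r1", "You are a cost-conscious office manager.", 26.3)
print(payload["model"])  # deepseek-r1
```

The returned trade-off value can then be mapped to an RTG before it is handed to the decision transformer.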
+
+ ### Inference
+
+ ### Deployment on a real building using cloud server and edge device
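Conceptually, the inference server interleaves the building environment, the decision transformer, and the LLM-chosen RTG in one loop. Everything below is a stubbed sketch with made-up dynamics, shown only to illustrate the control flow, not our real model or simulator:

```python
def llm_choose_rtg(hour: int) -> float:
    # Stub for the LLM/digital persona: favour comfort during office hours,
    # favour energy savings otherwise (a real system would query Ollama here).
    return 0.9 if 8 <= hour < 18 else 0.3

def policy(obs: float, rtg: float) -> float:
    # Stub for the decision transformer: higher RTG pulls the action harder
    # toward the comfort setpoint (illustrative maths only).
    comfort_setpoint = 22.0
    return obs + rtg * (comfort_setpoint - obs)

def step_env(obs: float, action: float) -> float:
    # Stub simulator: the zone temperature drifts toward the commanded action.
    return obs + 0.5 * (action - obs)

obs = 28.0  # start from a warm zone
for hour in (9, 10, 11):
    rtg = llm_choose_rtg(hour)
    action = policy(obs, rtg)
    obs = step_env(obs, action)
print(round(obs, 2))  # zone temperature settles toward the setpoint
```

On a real deployment, `step_env` is replaced by sensor readings from the edge device, and `policy` by the trained transformer served from the cloud.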