Gen-HVAC committed on
Commit 228a5dc · verified · 1 Parent(s): 7dc46e3

Update README.md

Files changed (1)
  1. README.md +47 -4
README.md CHANGED
@@ -70,10 +70,7 @@ docker run -it \
 ### Data generation
 Trajectory generation is executed through the rollout runner combined with a behavior policy.
 The framework is policy-based: any controller that maps
-\[
-s_t \rightarrow a_t
-\]
-can be used to generate trajectories.

 Use the data generation script along with the rollout runner to generate sequential data.
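As a sketch of that interface: any callable that takes the current state and returns an action can drive the rollout runner. The class name, state/action dimensions, and setpoint bounds below are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

# Hypothetical sketch of the s_t -> a_t behavior-policy interface assumed
# by the rollout runner; names and dimensions are illustrative only.
class RandomSetpointPolicy:
    """Maps an observed state vector to an action vector."""

    def __init__(self, action_dim: int, low: float = 18.0, high: float = 26.0):
        self.action_dim = action_dim
        self.low = low    # e.g. lower setpoint bound in deg C (assumption)
        self.high = high  # e.g. upper setpoint bound in deg C (assumption)

    def __call__(self, state: np.ndarray) -> np.ndarray:
        # Any callable with this (state -> action) signature can serve
        # as a behavior policy for trajectory generation.
        return np.random.uniform(self.low, self.high, size=self.action_dim)

policy = RandomSetpointPolicy(action_dim=4)
action = policy(np.zeros(12))  # the 12-dim state here is a placeholder
```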
 
@@ -82,6 +79,52 @@ a particular building model, then it will work well. This works as a framework

 We have rollouts which you can use to generate data for a specific building location or building type, or to combine different envelopes, locations, weather files, and building types.

 ### Training Phase

 After you have generated data, you can move on to the training phase. For our experiments we generated more than 2,300 sequential data combinations, which resulted in more than 3 million trajectories.
 
 ### Data generation
 Trajectory generation is executed through the rollout runner combined with a behavior policy.
 The framework is policy-based: any controller that maps
+a state s_t to an action a_t can be used to generate trajectories.

 Use the data generation script along with the rollout runner to generate sequential data.

 We have rollouts which you can use to generate data for a specific building location or building type, or to combine different envelopes, locations, weather files, and building types.
+```bash
+# Inside the Docker container
+cd /workspace
+
+python trajectory_generator.py \
+    --manifest patched_reference_data_base/OfficeSmall/reference_database.json \
+    --output_dir dataset \
+    --behavior seasonal_reactive \
+    --time_freq 900  # 900 s = 15-minute timesteps
+```
+
+Optional multi-building combinations:
+
+```bash
+python trajectory_generator.py \
+    --manifest patched_reference_data_base/OfficeMedium/reference_database.json \
+    --combine_climates True \
+    --combine_envelopes True \
+    --output_dir dataset_large
+```
+
+Each episode is stored as a compressed `.npz` file:
+
+```
+dataset/
+├── OfficeSmall__Buffalo__standard__episode_001.npz
+├── OfficeSmall__Dubai__high_internal__episode_002.npz
+└── metadata.json
+```
+
+Each file contains:
+
+```python
+{
+    "observations": np.ndarray,  # shape (T, state_dim)
+    "actions": np.ndarray,       # shape (T, action_dim)
+    "rewards": np.ndarray,       # shape (T,)
+    "terminals": np.ndarray,     # shape (T,)
+    "state_keys": list,
+    "action_keys": list,
+    "meta": dict,
+}
+```
+
+Temporal resolution: 15 minutes
+Episode length: 35040 timesteps (1 simulation year)
+
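A generated episode can be round-tripped with NumPy alone. The sketch below writes a synthetic file in the field layout listed above and reads it back; the state/action dimensions and file name are placeholder assumptions, not values from the repository:

```python
import numpy as np

# Write a synthetic episode in the assumed .npz layout, then read it back.
T, state_dim, action_dim = 35040, 12, 4  # 12/4 dims are placeholders
np.savez_compressed(
    "episode_demo.npz",
    observations=np.zeros((T, state_dim), dtype=np.float32),
    actions=np.zeros((T, action_dim), dtype=np.float32),
    rewards=np.zeros(T, dtype=np.float32),
    terminals=np.zeros(T, dtype=np.float32),
)

ep = np.load("episode_demo.npz")
# One simulation year at 15-minute resolution: 365 * 96 = 35040 steps.
assert ep["observations"].shape == (T, state_dim)
```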
 ### Training Phase

 After you have generated data, you can move on to the training phase. For our experiments we generated more than 2,300 sequential data combinations, which resulted in more than 3 million trajectories.