---
datasets:
- databloom/smartmeterdata
---

**Owner:**
**Blogpost:** [Democratizing neuronal networks to predict energy consumption](https://www.databloom.ai/blog/democratizing-neuronal-networks-to-predict-energy-consumption)

### Model Overview ###
LST-E [last energy] is a Long Short-Term Memory (LSTM) model that forecasts energy consumption from historical data. It takes smart-meter data (5 columns, more than 12 million instances; columns: id, device_name, property, value, timestamp) and generates a custom forecast for a selected time window.

Please note that once the smart-meter data is loaded, additional inputs are derived from the timestamp column: wd_input (the weekday of the timestamp) as well as cosine and sine time inputs, which let the model keep track of the time of day for each instance. Finally, the inputs are merged into a single input DataFrame, standardized, and differenced.

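The feature engineering described above could be sketched roughly as follows. Column names, the scaling, and the function name `build_inputs` are assumptions for illustration, not taken from the original notebook:

```python
import numpy as np
import pandas as pd

def build_inputs(df: pd.DataFrame) -> pd.DataFrame:
    """Derive weekday and cyclic time-of-day inputs, then standardize and difference."""
    ts = pd.to_datetime(df["timestamp"])
    seconds = ts.dt.hour * 3600 + ts.dt.minute * 60 + ts.dt.second
    day = 24 * 3600
    inputs = pd.DataFrame({
        "value": df["value"].astype(float),
        "wd_input": ts.dt.weekday,                      # 0 = Monday, ..., 6 = Sunday
        "sin_time": np.sin(2 * np.pi * seconds / day),  # cyclic encoding of time of day
        "cos_time": np.cos(2 * np.pi * seconds / day),
    })
    inputs = (inputs - inputs.mean()) / inputs.std()    # standardize each column
    return inputs.diff().dropna()                       # difference, drop the first row
```

The sine/cosine pair encodes the time of day on a circle, so 23:59 and 00:00 end up close together instead of far apart.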
After that, helper functions let the user select time windows from the data; the model generates its forecasts based on these windows.

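The windowing idea can be sketched like this; `make_windows` is a hypothetical helper, and the notebook's actual functions may be named and shaped differently:

```python
import numpy as np

def make_windows(series: np.ndarray, window: int, horizon: int = 1):
    """Slice a series into (X, y) pairs: each X row holds `window` steps,
    each y row the following `horizon` steps."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i : i + window])
        y.append(series[i + window : i + window + horizon])
    return np.array(X), np.array(y)
```

For a series of length 10 with `window=3`, this yields 7 training pairs, the first being `[0, 1, 2] -> [3]`.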
The first model built is a simple baseline, used to evaluate the performance of the LSTM model built afterwards. The baseline simply shifts the values by t=1: there is no prediction at t=0, and each timestamp uses the value from t-1.

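The t-1 baseline takes only a few lines of NumPy (a sketch, not the original code; the MAE helper is added here just for illustration):

```python
import numpy as np

def baseline_forecast(series: np.ndarray) -> np.ndarray:
    """Predict each value as the previous observed value (shift by t=1)."""
    return series[:-1]

def mae(pred: np.ndarray, true: np.ndarray) -> float:
    """Mean absolute error between predictions and actual values."""
    return float(np.abs(pred - true).mean())

# Evaluate the baseline against the actual values from t=1 onwards:
# mae(baseline_forecast(series), series[1:])
```

Any learned model should at least beat this shift-by-one error to be worth using.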
Finally, there is the 2-layer plain-vanilla LSTM. After 11 epochs it reached a loss of 10.86, which is rather mediocre; however, the main goal here is a basic forecasting model, and for that purpose this seems appropriate.

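One plausible way to define such a 2-layer LSTM is sketched below in Keras. The framework, layer sizes, and optimizer are all assumptions; the original hyperparameters are not stated:

```python
import tensorflow as tf

def build_lstm(window: int, n_features: int) -> tf.keras.Model:
    """A plain 2-layer LSTM mapping a (window, n_features) slice to a one-step forecast."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(window, n_features)),
        tf.keras.layers.LSTM(32, return_sequences=True),  # first LSTM layer feeds sequences onward
        tf.keras.layers.LSTM(32),                         # second LSTM layer returns the last state
        tf.keras.layers.Dense(1),                         # single-value forecast
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```

Training would then pair this with the windowed inputs, e.g. `model.fit(X, y, epochs=11)`.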
***Happy Hacking!***