Add trained model, metrics, loss graph, and training config
- README.md +40 -0
- loss_graph.png +0 -0
- metrics.txt +4 -0
- model_config.json +12 -0
- trained_lstm_model.h5 +3 -0
- training_config.json +8 -0
README.md
ADDED
@@ -0,0 +1,40 @@

# LSTM Sequence Anomaly Detection

This repository contains a trained LSTM Autoencoder model used for anomaly detection in sequential data.

## Model Overview
The model is an LSTM-based Autoencoder trained to reconstruct sequential data. Anomalies are detected by measuring the reconstruction error: sequences with high reconstruction loss are considered anomalous.

### Model Architecture:
- **Type**: LSTM Autoencoder
- **Encoder**: LSTM with 64 units
- **Decoder**: LSTM with 64 units
- **Output Layer**: TimeDistributed Dense with softmax activation
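
A minimal Keras sketch of this architecture, assuming the `[10, 6]` input shape and 64 LSTM units recorded in `model_config.json`; the `RepeatVector` bridge between encoder and decoder and the `build_autoencoder` helper name are illustrative choices, not taken from the original training code:

```python
from tensorflow.keras import layers, models

def build_autoencoder(timesteps=10, n_features=6, lstm_units=64):
    """Sketch of the LSTM Autoencoder described above."""
    inputs = layers.Input(shape=(timesteps, n_features))
    # Encoder: compress each sequence into a single latent vector.
    encoded = layers.LSTM(lstm_units)(inputs)
    # Repeat the latent vector so the decoder can unroll it back into a sequence.
    repeated = layers.RepeatVector(timesteps)(encoded)
    # Decoder: reconstruct the sequence step by step.
    decoded = layers.LSTM(lstm_units, return_sequences=True)(repeated)
    # TimeDistributed Dense with softmax, matching the output layer described above.
    outputs = layers.TimeDistributed(
        layers.Dense(n_features, activation="softmax"))(decoded)
    return models.Model(inputs, outputs)
```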

## Training Details:
- **Optimizer**: Adam
- **Loss Function**: Categorical Crossentropy
- **Batch Size**: 128
- **Epochs**: 50
- **Early Stopping**: Patience = 5
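
A sketch of the corresponding training call, reusing the hypothetical `build_autoencoder` helper from above; `X_train` and `X_val` are placeholder arrays of shape `(num_sequences, 10, 6)`:

```python
from tensorflow.keras.callbacks import EarlyStopping

model = build_autoencoder()
model.compile(optimizer="adam", loss="categorical_crossentropy")

# An autoencoder learns to reproduce its input, so the target is X itself.
history = model.fit(
    X_train, X_train,
    validation_data=(X_val, X_val),
    batch_size=128,
    epochs=50,
    callbacks=[EarlyStopping(patience=5)],  # stop once val_loss stalls for 5 epochs
)
```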

## Performance Metrics:
- **Validation Loss (Mean)**: 0.0015 (see `metrics.txt`)
- **Validation Loss (Std Dev)**: 0.0030 (see `metrics.txt`)
- **Training/Validation Loss**: See the graph below.

![Training Loss](loss_graph.png)

## How to Use:
1. Load the model using `tf.keras.models.load_model()`.
2. Use the model to detect anomalies in new sequences.
3. Calculate the reconstruction loss for each sequence and apply a threshold to classify anomalies (see the sketch below).
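
A sketch of that workflow; `X_new` is a placeholder array of preprocessed sequences with shape `(num_sequences, 10, 6)`, and the threshold shown is only an illustrative choice derived from the validation statistics in `metrics.txt`:

```python
import numpy as np
import tensorflow as tf

# 1. Load the trained model.
model = tf.keras.models.load_model("trained_lstm_model.h5")

# 2. Reconstruct the new sequences.
reconstructions = model.predict(X_new)

# 3. Per-sequence categorical-crossentropy reconstruction loss,
#    summed over features and averaged over timesteps.
eps = 1e-9
losses = -np.mean(np.sum(X_new * np.log(reconstructions + eps), axis=-1), axis=-1)

# Sequences with unusually high reconstruction loss are flagged as anomalies.
threshold = 0.0015 + 3 * 0.0030  # illustrative: validation mean + 3 * std
anomalies = losses > threshold
```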

## Files in this Repo:
- **trained_lstm_model.h5**: The trained LSTM Autoencoder model.
- **metrics.txt**: Performance metrics for the model.
- **loss_graph.png**: Loss curve during training.
- **README.md**: This file.
- **model_config.json**: Model architecture details.
- **training_config.json**: Training hyperparameters.
loss_graph.png
ADDED
metrics.txt
ADDED
@@ -0,0 +1,4 @@
Validation Reconstruction Loss:
Mean: 0.0015, Std: 0.0030
Losses: [7.0015353e-04 8.8246615e-04 1.0358144e-03 2.3799071e-03 1.2637556e-03
 6.6058309e-04 4.1786637e-04 5.0311431e-04 4.6074213e-04 8.4128020e-05]
model_config.json
ADDED
@@ -0,0 +1,12 @@
{
    "architecture": "LSTM Autoencoder",
    "input_shape": [
        10,
        6
    ],
    "lstm_units": 64,
    "epochs": 50,
    "batch_size": 128,
    "optimizer": "adam",
    "loss_function": "categorical_crossentropy"
}
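
These values mirror the README and can be read back programmatically; a brief sketch, where `build_autoencoder` is the illustrative helper sketched in the README rather than a function shipped with this repo:

```python
import json

# Load the architecture description committed alongside the model.
with open("model_config.json") as f:
    cfg = json.load(f)

timesteps, n_features = cfg["input_shape"]
model = build_autoencoder(timesteps=timesteps,
                          n_features=n_features,
                          lstm_units=cfg["lstm_units"])
model.compile(optimizer=cfg["optimizer"], loss=cfg["loss_function"])
```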
trained_lstm_model.h5
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:598715b511564675aaeebbb04d4f63e86c4fb57bcb6aa23e9ea16292b0f32c6f
size 656080
training_config.json
ADDED
@@ -0,0 +1,8 @@
{
    "learning_rate": 0.001,
    "batch_size": 128,
    "epochs": 50,
    "optimizer": "adam",
    "early_stopping": true,
    "early_stopping_patience": 5
}
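
A brief sketch of turning this file into the corresponding Keras objects; `model`, `X_train`, and `X_val` are placeholders for an already-built (uncompiled) autoencoder and its training/validation arrays:

```python
import json
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping

with open("training_config.json") as f:
    cfg = json.load(f)

optimizer = Adam(learning_rate=cfg["learning_rate"])  # "adam" with lr 0.001
callbacks = []
if cfg["early_stopping"]:
    callbacks.append(EarlyStopping(patience=cfg["early_stopping_patience"]))

model.compile(optimizer=optimizer, loss="categorical_crossentropy")
model.fit(X_train, X_train,
          validation_data=(X_val, X_val),
          batch_size=cfg["batch_size"],
          epochs=cfg["epochs"],
          callbacks=callbacks)
```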