LSTM Sequence Anomaly Detection
This repository contains a trained LSTM Autoencoder model used for anomaly detection in sequential data.
Model Overview
The model is an LSTM-based Autoencoder, which is trained to reconstruct sequential data. Anomalies can be detected by measuring the reconstruction error: sequences with high reconstruction loss are considered anomalous.
Model Architecture:
- Type: LSTM Autoencoder
- Encoder: LSTM with 64 units
- Decoder: LSTM with 64 units
- Output Layer: TimeDistributed Dense with softmax activation
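The architecture above can be sketched in Keras as follows. The sequence length and the number of token classes are placeholder assumptions, not values from this repo (the actual shapes are recorded in model_config.json):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS = 50   # assumed sequence length (see model_config.json for the real value)
N_TOKENS = 100   # assumed number of token classes (softmax output size)

def build_autoencoder(timesteps=TIMESTEPS, n_tokens=N_TOKENS):
    """Sketch of the LSTM Autoencoder described in this card."""
    return models.Sequential([
        layers.Input(shape=(timesteps, n_tokens)),
        # Encoder: compress the whole sequence into a single 64-dim vector
        layers.LSTM(64),
        # Repeat the encoding once per timestep so the decoder can unroll it
        layers.RepeatVector(timesteps),
        # Decoder: reconstruct the sequence step by step
        layers.LSTM(64, return_sequences=True),
        # Per-timestep class probabilities over the token vocabulary
        layers.TimeDistributed(layers.Dense(n_tokens, activation="softmax")),
    ])

model = build_autoencoder()
```

The RepeatVector bridge is one common way to wire an LSTM encoder to an LSTM decoder; the output shape matches the input shape, which is what lets reconstruction error be computed element-wise.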
Training Details:
- Optimizer: Adam
- Loss Function: Categorical Crossentropy
- Batch Size: 128
- Epochs: 50
- Early Stopping: Patience = 5
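A minimal end-to-end sketch of this training setup is shown below. The tiny model, toy one-hot data, and reduced epoch count are illustrative assumptions so the example runs quickly; the optimizer, loss, batch size, and early-stopping patience match the values listed above:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

# Small stand-in shapes for illustration only
timesteps, n_tokens = 20, 10

model = models.Sequential([
    layers.Input(shape=(timesteps, n_tokens)),
    layers.LSTM(64),
    layers.RepeatVector(timesteps),
    layers.LSTM(64, return_sequences=True),
    layers.TimeDistributed(layers.Dense(n_tokens, activation="softmax")),
])

# Optimizer and loss as stated in the card
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Early stopping on validation loss with patience = 5
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True)

# Toy one-hot sequences standing in for the real training data
rng = np.random.default_rng(0)
X = np.eye(n_tokens)[rng.integers(0, n_tokens, size=(256, timesteps))]

# An autoencoder reconstructs its own input, so targets equal inputs.
# The card trains for 50 epochs; 2 here keeps the demo fast.
history = model.fit(X, X, validation_split=0.2,
                    batch_size=128, epochs=2,
                    callbacks=[early_stop], verbose=0)
```

Note that targets are the inputs themselves: that is what makes this an autoencoder rather than a sequence classifier.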
Performance Metrics:
- Validation Loss (Mean and Std Dev): see metrics.txt.
- Training/Validation Loss Curve: see loss_graph.png.
How to Use:
- Load the model with tf.keras.models.load_model().
- Run the model on new sequences to obtain their reconstructions.
- Calculate the reconstruction loss for each sequence and apply a threshold to classify anomalies.
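The steps above can be sketched as follows. In practice the model would come from tf.keras.models.load_model("model.h5"); here a small untrained stand-in is built inline so the example is self-contained, and the 95th-percentile cutoff is an illustrative assumption (a real threshold should be calibrated on known-normal data):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Stand-in for: model = tf.keras.models.load_model("model.h5")
timesteps, n_tokens = 20, 10
model = models.Sequential([
    layers.Input(shape=(timesteps, n_tokens)),
    layers.LSTM(64),
    layers.RepeatVector(timesteps),
    layers.LSTM(64, return_sequences=True),
    layers.TimeDistributed(layers.Dense(n_tokens, activation="softmax")),
])

def reconstruction_loss(model, sequences):
    """Mean per-timestep categorical crossentropy for each sequence."""
    recon = model.predict(sequences, verbose=0)
    # Clip to avoid log(0), then average the crossentropy over timesteps
    cce = -np.sum(sequences * np.log(np.clip(recon, 1e-7, 1.0)), axis=-1)
    return cce.mean(axis=1)

# Toy one-hot sequences standing in for new, unseen data
rng = np.random.default_rng(0)
X_new = np.eye(n_tokens)[rng.integers(0, n_tokens, size=(32, timesteps))]

losses = reconstruction_loss(model, X_new)
threshold = np.percentile(losses, 95)  # illustrative; calibrate on normal data
anomalies = losses > threshold         # boolean mask of flagged sequences
```

Sequences whose reconstruction loss exceeds the threshold are flagged as anomalous, per the detection rule described in the Model Overview.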
Files in this Repo:
- model.h5: The trained LSTM Autoencoder model.
- metrics.txt: Performance metrics for the model.
- loss_graph.png: Loss curve during training.
- README.md: This file.
- model_config.json: Model architecture details.
- training_config.json: Training hyperparameters.