Time Series Forecasting (LSTM | PyTorch)
A simple PyTorch LSTM model for univariate time-series forecasting using a sliding window and recursive (multi-step) prediction.
This Hugging Face repo hosts the trained model + scaler used by a Streamlit inference app.
Training → Model → Inference
- Training notebook (Colab): https://colab.research.google.com/drive/1jUm62T9kLNL0glkE6eSx2daNhqmxHQD9
- Inference app (Streamlit): https://github.com/sparklerz/Deep-Learning-Fundamentals-Suite (page: `pages/05_Time_Series_Forecast_LSTM.py`)
What's in this repo
- `artifacts/model_state.pt` — PyTorch LSTM `state_dict`
- `artifacts/scaler.pkl` — scaler fitted on the training series (used for scaling + inverse-scaling)
- `artifacts/config.json` — key inference settings:
  - `window_size` (lookback length)
  - `hidden_size`
  - `forecast_horizon` (default steps)
  - optional `series_id` (which column to use in the CSV)
- `Alcohol_Sales.csv` — the example time series used by the demo app
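To illustrate the shape of `config.json`, here is a hypothetical example with the keys listed above (the values are illustrative only, not the repo's actual settings):

```python
import json

# Hypothetical config.json contents -- illustrative values, not the real ones
cfg_text = '{"window_size": 12, "hidden_size": 100, "forecast_horizon": 12, "series_id": "value"}'
cfg = json.loads(cfg_text)
print(int(cfg["window_size"]), int(cfg["hidden_size"]))  # 12 100
```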
Inputs
- A single numeric time series (1D array).
- The model uses the last `window_size` points as input context.
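For training, the same sliding-window idea turns the 1D series into (window, next value) pairs. A minimal sketch, using a toy series and a window size of 4 chosen for illustration:

```python
import numpy as np

def make_windows(series: np.ndarray, window_size: int):
    """Split a 1D series into (window, next-value) training pairs."""
    X, y = [], []
    for i in range(len(series) - window_size):
        X.append(series[i : i + window_size])  # input context
        y.append(series[i + window_size])      # value to predict
    return np.stack(X), np.array(y)

series = np.arange(10, dtype=np.float32)  # toy series 0..9
X, y = make_windows(series, window_size=4)
print(X.shape, y.shape)  # (6, 4) (6,)
print(X[0], y[0])        # [0. 1. 2. 3.] 4.0
```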
Preprocessing (same as Streamlit app)
- Values are scaled using the saved scaler (`scaler.pkl`).
- Forecasting is recursive:
  - predict 1 step ahead
  - append the prediction to the window
  - repeat for `N` steps
- Predictions are inverse-transformed back to the original scale.
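The recursive loop can be sketched independently of the model. The stand-in one-step predictor below (the mean of the current window) is purely for illustration; the app uses the LSTM in its place:

```python
import numpy as np

def recursive_forecast(window: np.ndarray, steps: int, predict_one):
    """Roll the window forward `steps` times, feeding each prediction back in."""
    window = window.copy()
    preds = []
    for _ in range(steps):
        p = predict_one(window)                      # predict 1 step ahead
        preds.append(p)
        window = np.concatenate([window[1:], [p]])   # append prediction, drop oldest
    return np.array(preds)

# Stand-in predictor: mean of the current window (the real app uses the LSTM)
preds = recursive_forecast(np.array([1.0, 2.0, 3.0]), steps=3, predict_one=np.mean)
print(preds[0])  # 2.0
```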
Output
- A sequence of forecasted values for the chosen forecast horizon (steps ahead).
Quickstart (load + forecast)
```python
import json
import pickle

import numpy as np
import torch
import torch.nn as nn
from huggingface_hub import hf_hub_download

REPO_ID = "ash001/timeseries-forecast-lstm"

# --- download artifacts ---
cfg_path = hf_hub_download(REPO_ID, "artifacts/config.json")
pt_path = hf_hub_download(REPO_ID, "artifacts/model_state.pt")
sc_path = hf_hub_download(REPO_ID, "artifacts/scaler.pkl")

with open(cfg_path, "r") as f:
    cfg = json.load(f)
window_size = int(cfg["window_size"])
hidden_size = int(cfg["hidden_size"])

with open(sc_path, "rb") as f:
    scaler = pickle.load(f)

# --- define model (same structure as the Streamlit app) ---
class LSTMnetwork(nn.Module):
    def __init__(self, input_size=1, hidden_size=100, output_size=1):
        super().__init__()
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size, hidden_size)
        self.linear = nn.Linear(hidden_size, output_size)
        self.hidden = (torch.zeros(1, 1, hidden_size),
                       torch.zeros(1, 1, hidden_size))

    def reset_hidden(self):
        self.hidden = (torch.zeros(1, 1, self.hidden_size),
                       torch.zeros(1, 1, self.hidden_size))

    def forward(self, seq):
        lstm_out, self.hidden = self.lstm(seq.view(len(seq), 1, -1), self.hidden)
        pred = self.linear(lstm_out.view(len(seq), -1))
        return pred[-1]  # last time step = 1-step-ahead prediction

model = LSTMnetwork(input_size=1, hidden_size=hidden_size, output_size=1)
model.load_state_dict(torch.load(pt_path, map_location="cpu"))
model.eval()

def forecast(series_values: np.ndarray, steps: int) -> np.ndarray:
    # scale to the range the model was trained on
    scaled = scaler.transform(series_values.reshape(-1, 1)).astype(np.float32).reshape(-1)
    window = scaled[-window_size:].copy()
    preds_scaled = []
    for _ in range(steps):
        seq = torch.tensor(window, dtype=torch.float32)
        model.reset_hidden()
        with torch.no_grad():
            pred = float(model(seq).item())
        preds_scaled.append(pred)
        window = np.concatenate([window[1:], [pred]])  # roll the window forward
    # map predictions back to the original scale
    return scaler.inverse_transform(np.array(preds_scaled).reshape(-1, 1)).reshape(-1)

# Example:
# series = np.array([...], dtype=np.float32)
# preds = forecast(series, steps=12)
```
License: apache-2.0