πŸ“ˆ Time Series Forecasting (LSTM | PyTorch)

A simple PyTorch LSTM model for univariate time-series forecasting using a sliding window and recursive (multi-step) prediction.

This Hugging Face repo hosts the trained model + scaler used by a Streamlit inference app.

Training β†’ Model β†’ Inference

What’s in this repo

  • artifacts/model_state.pt β€” PyTorch LSTM state_dict
  • artifacts/scaler.pkl β€” scaler fitted on the training series (used for scaling + inverse-scaling)
  • artifacts/config.json β€” key inference settings:
    • window_size (lookback length)
    • hidden_size
    • forecast_horizon (default steps)
    • optional series_id (which column to use in the CSV)
  • Alcohol_Sales.csv β€” the example time series used by the demo app
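
For reference, config.json has roughly this shape. The field values below are illustrative only, not the repo's actual settings:

```json
{
  "window_size": 12,
  "hidden_size": 100,
  "forecast_horizon": 12,
  "series_id": "Sales"
}
```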

Inputs

  • A single numeric time series (1D array).
  • The model uses the last window_size points as input context.
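
For training, the same lookback idea turns a 1D series into (window, next-value) pairs. A minimal sketch; `make_windows` is a hypothetical helper, not part of this repo:

```python
import numpy as np

def make_windows(series, window_size):
    """Split a 1D series into (input window, next-value target) pairs."""
    X, y = [], []
    for i in range(len(series) - window_size):
        X.append(series[i:i + window_size])   # lookback context
        y.append(series[i + window_size])     # value to predict
    return np.array(X), np.array(y)

X, y = make_windows(np.arange(6, dtype=np.float32), window_size=3)
# X has shape (3, 3); y has shape (3,)
```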

Preprocessing (same as the Streamlit app)

  • Values are scaled using the saved scaler (scaler.pkl).
  • Forecasting is recursive:
    • predict 1 step ahead
    • append prediction to the window
    • repeat for N steps
  • Predictions are inverse-transformed back to the original scale.
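
The recursive loop can be sketched with a stand-in one-step predictor (NumPy only; the real LSTM predictor appears in the Quickstart). The mean-of-window predictor here is purely illustrative:

```python
import numpy as np

def recursive_forecast(window, steps, predict_one):
    """Roll a one-step predictor forward `steps` times.

    predict_one: callable mapping a 1D window to the next value.
    """
    window = np.asarray(window, dtype=np.float64).copy()
    preds = []
    for _ in range(steps):
        nxt = predict_one(window)                      # predict 1 step ahead
        preds.append(nxt)
        window = np.concatenate([window[1:], [nxt]])   # append, drop oldest
    return np.array(preds)

# Stand-in predictor for illustration: next value = mean of the window
preds = recursive_forecast([1.0, 2.0, 3.0], steps=2,
                           predict_one=lambda w: w.mean())
```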

Output

  • A sequence of forecasted values for the chosen forecast horizon (steps ahead).

Quickstart (load + forecast)

import json
import numpy as np
import torch
import torch.nn as nn
import pickle
from huggingface_hub import hf_hub_download

REPO_ID = "ash001/timeseries-forecast-lstm"

# --- download artifacts ---
cfg_path = hf_hub_download(REPO_ID, "artifacts/config.json")
pt_path  = hf_hub_download(REPO_ID, "artifacts/model_state.pt")
sc_path  = hf_hub_download(REPO_ID, "artifacts/scaler.pkl")

with open(cfg_path, "r") as f:
    cfg = json.load(f)
window_size = int(cfg["window_size"])
hidden_size = int(cfg["hidden_size"])

with open(sc_path, "rb") as f:
    scaler = pickle.load(f)

# --- define model (same structure as the Streamlit app) ---
class LSTMnetwork(nn.Module):
    def __init__(self, input_size=1, hidden_size=100, output_size=1):
        super().__init__()
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size, hidden_size)
        self.linear = nn.Linear(hidden_size, output_size)
        self.hidden = (torch.zeros(1, 1, hidden_size), torch.zeros(1, 1, hidden_size))

    def reset_hidden(self):
        self.hidden = (torch.zeros(1, 1, self.hidden_size), torch.zeros(1, 1, self.hidden_size))

    def forward(self, seq):
        lstm_out, self.hidden = self.lstm(seq.view(len(seq), 1, -1), self.hidden)
        pred = self.linear(lstm_out.view(len(seq), -1))
        return pred[-1]

model = LSTMnetwork(input_size=1, hidden_size=hidden_size, output_size=1)
model.load_state_dict(torch.load(pt_path, map_location="cpu"))
model.eval()

def forecast(series_values: np.ndarray, steps: int):
    scaled = scaler.transform(series_values.reshape(-1, 1)).astype(np.float32).reshape(-1)
    window = scaled[-window_size:].copy()

    preds_scaled = []
    for _ in range(steps):
        seq = torch.tensor(window, dtype=torch.float32)
        model.reset_hidden()
        with torch.no_grad():
            pred = model(seq).item()  # .item() already returns a Python float
        preds_scaled.append(pred)
        window = np.concatenate([window[1:], [pred]])

    preds = scaler.inverse_transform(np.array(preds_scaled).reshape(-1, 1)).reshape(-1)
    return preds

# Example:
# series = np.array([...], dtype=np.float32)
# preds = forecast(series, steps=12)
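
To run the forecast on the bundled series, the CSV can be loaded with pandas. A sketch only: the `Sales` column name below is an assumption for illustration; check Alcohol_Sales.csv (or `series_id` in config.json) for the actual column:

```python
import io
import pandas as pd

# Inline CSV stands in for pd.read_csv("Alcohol_Sales.csv");
# column names here are assumed, not taken from the actual file.
csv_text = "DATE,Sales\n1992-01-01,3459\n1992-02-01,3458\n1992-03-01,4002\n"
df = pd.read_csv(io.StringIO(csv_text), parse_dates=["DATE"])
series = df["Sales"].to_numpy(dtype="float32")  # 1D array for forecast()
```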

License: Apache-2.0
