clearn Demo: EWC Continual Learning

This model was trained with clearn, a continual learning library for PyTorch.

What is this?

A simple MLP trained on three sequential fraud detection tasks using Elastic Weight Consolidation (EWC). Despite learning the tasks one after another, the model retains 100% accuracy on every previous task.

How to use

pip install clearn-ai

import clearn
import torch.nn as nn

# Recreate the architecture
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# Load the continual learning checkpoint
cl_model = clearn.load("./checkpoint", model=model)

# See retention across all tasks
print(cl_model.diff())

Retention Report

RetentionReport
├── fraud_q1: 100.0% retained
├── fraud_q2: 100.0% retained
├── fraud_q3: 100.0% retained
├── plasticity_score: 1.00
├── stability_score: 1.00
└── recommendation: "stable — no action needed"

Strategy

  • Strategy: EWC (Elastic Weight Consolidation)
  • Lambda: 5000
  • Tasks: 3 sequential fraud detection tasks
  • Architecture: MLP (128 → 256 → 10)
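For intuition on what the EWC strategy does: after each task, it records the trained parameter values and a Fisher-information estimate of each parameter's importance, then penalizes later training for moving important parameters away from those values. A minimal sketch of that penalty in plain Python (not the clearn API; the function name and toy numbers are illustrative):

```python
def ewc_penalty(params, old_params, fisher, lam=5000.0):
    """EWC regularizer added to the task loss:

        lam / 2 * sum_i F_i * (theta_i - theta_i_star) ** 2

    params:     current parameter values (flat list of floats)
    old_params: consolidated values theta* saved after the previous task
    fisher:     per-parameter Fisher information estimates F_i
    lam:        regularization strength (this card uses lambda = 5000)
    """
    return 0.5 * lam * sum(
        f * (p - p_old) ** 2
        for p, p_old, f in zip(params, old_params, fisher)
    )

# Toy example: one parameter drifted 0.2 away from its consolidated value.
penalty = ewc_penalty(params=[1.2], old_params=[1.0], fisher=[0.5])
```

A high Fisher value means the old task's loss is sharply curved along that parameter, so even a small drift is penalized heavily; unimportant parameters stay free to learn the new task.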

Built with clearn. Wrap once. Train forever.
