---
license: apache-2.0
library_name: pytorch
tags:
  - materials-science
  - crystal-structures
  - solid-state-batteries
  - representation-learning
  - screening
model-index:
  - name: SSB Screening Model (RTX6000x2)
    results:
      - task:
          type: text-classification
          name: Screening Proxy (3-class)
        metrics:
          - type: accuracy
            value: 0.8118937
          - type: f1
            value: 0.8060277
          - type: precision
            value: 0.7671543
          - type: recall
            value: 0.8694215
          - type: val_loss
            value: 0.2856999
---

# SSB Screening Model (RTX6000x2)

## Model Summary

This model is a lightweight MLP classifier trained on NPZ-encoded inorganic crystal structure features for solid-state battery (SSB) screening proxies. It is intended to prioritize candidate structures, not to replace DFT or experimental validation.

- Architecture: MLP (`input_dim=144`, `hidden_dims=[512, 256, 128]`, dropout varied by sweep); a PyTorch sketch follows this list
- Output: 3-class classification proxy for screening tasks
- Training regime: supervised training on the curated NPZ dataset with a class-weighted loss
- Best checkpoint: `checkpoint_epoch45.pt` (lowest observed val_loss in the logs)
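A minimal PyTorch sketch of the architecture described above. Only the layer widths and class count come from this card; the ReLU activations and per-layer dropout placement are assumptions:

```python
import torch.nn as nn


class ScreeningMLP(nn.Module):
    """MLP matching the documented dims: 144 -> 512 -> 256 -> 128 -> 3.

    ReLU activations and per-layer dropout are assumptions; only the
    input/hidden/output widths are taken from the model card.
    """

    def __init__(self, input_dim=144, hidden_dims=(512, 256, 128),
                 num_classes=3, dropout=0.2):
        super().__init__()
        layers, prev = [], input_dim
        for h in hidden_dims:
            layers += [nn.Linear(prev, h), nn.ReLU(), nn.Dropout(dropout)]
            prev = h
        layers.append(nn.Linear(prev, num_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)  # raw logits; apply softmax for probabilities
```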

## Intended Use

- Primary: ranking/prioritization of SSB electrolyte candidates (see the inference sketch below)
- Not intended for: absolute property prediction or replacing experimental ground truth
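A sketch of the intended ranking workflow. The feature file name, the NPZ key `features`, the checkpoint layout, and the choice of class index 2 as the "promising" class are all assumptions; adjust them to the actual artifacts:

```python
import numpy as np
import torch

# Hypothetical candidate feature file; key name "features" is assumed.
feats = np.load("candidates.npz")["features"]  # shape (N, 144)

model = ScreeningMLP()  # class from the architecture sketch above
state = torch.load("checkpoint_epoch45.pt", map_location="cpu")
model.load_state_dict(state.get("model_state_dict", state))  # layout assumed
model.eval()

with torch.no_grad():
    logits = model(torch.as_tensor(feats, dtype=torch.float32))
    probs = torch.softmax(logits, dim=1)

# Rank candidates by probability of the assumed "promising" class (index 2).
order = torch.argsort(probs[:, 2], descending=True)
print(order[:10])  # indices of the top-10 candidates to prioritize
```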

## Training Data

- Dataset: `ssb_npz_v1` (curated NPZ features)
- Split: 80/10/10 (train/val/test); a loading sketch follows this list
- Features: composition + lattice + derived scalar statistics (144-dim)
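A sketch of loading the NPZ features and reproducing the 80/10/10 split. The file name, the array keys (`features`, `labels`), and the fixed seed are assumptions, as the card does not document the NPZ schema:

```python
import numpy as np
import torch
from torch.utils.data import TensorDataset, random_split

data = np.load("ssb_npz_v1.npz")  # file name assumed from the dataset name
X = torch.as_tensor(data["features"], dtype=torch.float32)  # (N, 144), key assumed
y = torch.as_tensor(data["labels"], dtype=torch.long)       # 3-class labels, key assumed

dataset = TensorDataset(X, y)
n = len(dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(
    dataset,
    [n_train, n_val, n - n_train - n_val],
    generator=torch.Generator().manual_seed(0),  # seed is an assumption
)
```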

## Evaluation

Metrics from the latest run summary (a recomputation sketch follows the list):

- Val loss: 0.2857
- Val accuracy: 0.8119
- Holdout accuracy: 0.8096
- F1: 0.8060
- Precision: 0.7672
- Recall: 0.8694
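These values can be recomputed from holdout predictions with scikit-learn. The macro averaging shown here is an assumption, since the card does not state how F1/precision/recall were aggregated across the three classes:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support


def summarize(y_true, y_pred):
    """Compute the card's metrics from integer class labels (shape (N,))."""
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro"  # averaging mode is an assumption
    )
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}
```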

## Limitations

- The model is a proxy classifier; it does not predict ground-truth physical properties.
- Performance is tied to the training distribution of `ssb_npz_v1`.
- Chemical regimes underrepresented in the training set may be poorly ranked.

## Training Configuration (abridged)

- Optimizer: AdamW
- LR: swept (best ≈ 3e-4)
- Weight decay: swept (0.005–0.02)
- Scheduler: cosine
- Batch size: swept (128–512)
- Epochs: swept (20–60)
- Gradient accumulation: swept (1–4); a training-loop sketch follows this list
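A sketch tying the configuration together (AdamW, cosine schedule, the class-weighted loss mentioned in the summary, and gradient accumulation). The concrete hyperparameter values, batch size, and class weights are placeholders within the swept ranges, not the best-run settings:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

model = ScreeningMLP()  # from the architecture sketch above
train_loader = DataLoader(train_set, batch_size=256, shuffle=True)  # train_set from the data sketch

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)
epochs, accum_steps = 45, 2  # placeholders within the swept ranges
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

# Class-weighted loss as described in the summary; weights here are placeholders.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 1.0, 1.0]))

for epoch in range(epochs):
    model.train()
    optimizer.zero_grad()
    for step, (xb, yb) in enumerate(train_loader):
        loss = criterion(model(xb), yb) / accum_steps  # scale for accumulation
        loss.backward()
        if (step + 1) % accum_steps == 0:  # step once per accumulated batch
            optimizer.step()
            optimizer.zero_grad()
    scheduler.step()  # cosine decay once per epoch
```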

## Citation

If you use this model, please cite the dataset and training pipeline from the `Nexa_compute` repository.