SAE Locality Data

Raw experimental artefacts and summary figures for two sparse-autoencoder (SAE) feature-locality experiments across six base language models.

Format note. Data files are PyTorch pickles (.pt). Loading them executes arbitrary code via pickle, so only load them on a trusted machine. Note that torch.load(..., weights_only=True) will not work for these files: they are nested Python dicts/lists containing NumPy arrays, which the weights-only unpickler rejects.
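A minimal round-trip sketch of the loading path described above. The payload here is fabricated for illustration (its field names mirror the schemas documented later in this card, but the values are made up):

```python
import numpy as np
import torch

# Illustrative payload shaped like the .pt files in this dataset
# (field names follow the documented schemas; values are made up).
payload = {
    "batch_results": [
        {"batch_idx": 0,
         "feature_entropies": {17: 2.5},
         "feature_influences": {17: np.zeros(4)}},
    ],
    "summary": {"site": "resid_post", "layer": 3},
}
torch.save(payload, "example.pt")

# The NumPy arrays inside mean weights_only=True would reject the file,
# so the full (unsafe) pickle path is needed -- trusted files only.
data = torch.load("example.pt", weights_only=False)
print(data["summary"]["layer"])  # 3
```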

Top-level layout

ctxlen_xmodel/                  # cross-model "entropy vs context length" experiment
└── <timestamp>/
    ├── run_config.json
    └── <preset>/               # one subdir per model preset
        ├── run.sh              # cluster submission script for this preset
        ├── entropy_vs_context_len_<site>_layer<i>_<ts>.pt
        └── entropy_vs_context_len_<site>_layer<i>_<ts>/   # (legacy/empty in some runs)

<preset>/                        # per-preset "entropy comparison" experiment
└── <timestamp>/
    ├── run_config.json
    ├── bench.json
    ├── entropy_comparison_<site>_layer<i>.pt
    └── entropy_plots_<site>_layer<i>/
        ├── batch_index.json
        └── batch_NNN.png       # per-batch entropy plots

figures/                         # summary figures derived from the above
├── entropy_vs_depth_crossmodel_grid_boxplot.png
├── entropy_vs_depth_crossmodel_grid_violin.png
├── entropy_vs_depth__<preset>.png   # one per preset
└── entropy_plots_resid_out_layer<i>_20260414_053350/  # earlier per-batch plots (pythia-70m)

<preset> is one of pythia-70m, qwen2-0.5b, gpt2-small, llama-3.2-1b, gemma-2-2b, llama-3-8b. <site> is the hookpoint name (resid_post, resid, resid_out, ...) and varies per preset.

File schemas

entropy_comparison_*.pt (per-preset experiment)

{
  "batch_results": [
    {
      "batch_idx": int,
      "start_idx": int,                          # offset into the loader's text stream
      "feature_entropies": {feat_idx: float},    # per-feature entropy in bits
      "token_vector_entropy": float,
      "num_active_features": int,
      "feature_influences": {feat_idx: np.ndarray},  # length-N influence vector per feature
      "feature_activations": {feat_idx: np.ndarray},
      "token_vector_influence": np.ndarray,
    },
    ...                                           # one entry per batch (50 by default)
  ],
  "summary": {"site": str, "preset": str, "timestamp": str, "layer": int, ...},
  "config":  {"preset": str, "threshold": float, "total_features": int, ...},
  "plots_dir": str,                              # absolute path on the machine that produced the run
  "batch_start_indices": [int, ...],
}
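As a usage sketch (assuming `data` is a dict loaded from one of these entropy_comparison files), the per-batch entropies can be pooled like this; the helper name and the tiny example dict are illustrative, not part of the dataset:

```python
import numpy as np

def mean_feature_entropy(data):
    """Mean per-feature entropy (bits), pooled over every batch of an
    entropy_comparison_*.pt dict."""
    vals = [e
            for batch in data["batch_results"]
            for e in batch["feature_entropies"].values()]
    return float(np.mean(vals))

# Tiny fabricated example: two batches, two features each.
data = {"batch_results": [
    {"feature_entropies": {0: 1.0, 1: 3.0}},
    {"feature_entropies": {0: 2.0, 1: 2.0}},
]}
print(mean_feature_entropy(data))  # 2.0
```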

entropy_vs_context_len_*.pt (cross-model experiment)

{
  "results_by_context_len": {
    ctx_len: {
      "feature_entropies": {feat_idx: float},
      "token_vector_entropy": float,
      "num_active_features": int,
      ...
    },
    ...                                           # one entry per context length (8, 72, 136, ...)
  },
  "summary": {"preset": str, "site": str, "layer": int, "timestamp": str,
              "max_context_len": int, ...},
  "config":  {"preset": str, "threshold": float, "total_features": int,
              "sae_source": str, ...},
  "plots_dir": str,
}
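For the cross-model files, a simple way to recover the entropy-vs-context-length curve from a loaded dict is sketched below (function name and sample values are illustrative):

```python
def entropy_curve(data):
    """Sorted (context_len, token_vector_entropy) pairs from an
    entropy_vs_context_len_*.pt dict."""
    by_len = data["results_by_context_len"]
    return sorted((int(c), r["token_vector_entropy"])
                  for c, r in by_len.items())

# Fabricated example; real files cover context lengths 8, 72, 136, ...
data = {"results_by_context_len": {
    72: {"token_vector_entropy": 5.1},
    8:  {"token_vector_entropy": 3.2},
}}
print(entropy_curve(data))  # [(8, 3.2), (72, 5.1)]
```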

The top-level run_config.json in each ctxlen_xmodel/<timestamp>/ folder records the global parameters of that cross-model run (presets, per-preset max_context_len/step/char_budget, seed, git commit, host).

Figures

figures/ contains summary plots derived from the raw .pt artefacts above:

  • entropy_vs_depth_crossmodel_grid_{boxplot,violin}.png – feature-entropy distribution by layer depth, side by side across all six presets.
  • entropy_vs_depth__<preset>.png – per-preset depth sweep, one figure per model.
  • entropy_plots_resid_out_layer<i>_20260414_053350/ – earlier (2026-04-14) per-batch entropy plots for pythia-70m, kept for reference. The canonical run in pythia-70m/20260427_105943/ supersedes these but uses the same plotting format.

Caveats

  • plots_dir inside each .pt and the host field in run_config.json reflect the originating machine and are not portable.
  • The entropy_plots_*/ PNG directories are derived artefacts and can be regenerated from the corresponding .pt.
  • Symlinks named latest were used locally to point at the most recent run; they are intentionally not included here. The most recent run is the timestamped subdirectory with the largest <timestamp> value.
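Since the latest symlinks are not shipped, picking the newest run reduces to a lexicographic max over subdirectory names, which works because the timestamps are zero-padded YYYYMMDD_HHMMSS strings. A small sketch (helper name is illustrative):

```python
from pathlib import Path
import tempfile

def latest_run(preset_dir):
    """Most recent timestamped run directory under preset_dir.
    Lexicographic max is correct for zero-padded YYYYMMDD_HHMMSS names."""
    runs = [p for p in Path(preset_dir).iterdir() if p.is_dir()]
    return max(runs, key=lambda p: p.name)

# Demo with throwaway directories:
root = Path(tempfile.mkdtemp())
for ts in ("20260414_053350", "20260427_105943"):
    (root / ts).mkdir()
print(latest_run(root).name)  # 20260427_105943
```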

Citation / contact

For questions, contact the authors of the originating project.
