# SAE Locality Data
Raw experimental artefacts and summary figures for two sparse-autoencoder (SAE) feature-locality experiments across six base language models.
**Format note.** Data files are PyTorch pickles (`.pt`). Loading them executes arbitrary code via `pickle`, so only load them on a trusted machine. Note that `torch.load(..., weights_only=True)` will not work for these files, because they contain NumPy arrays and other plain Python objects, not just tensors.
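A minimal loading sketch, using an in-memory stand-in that mimics the real files (a Python dict holding NumPy arrays) rather than an actual artefact from this dataset:

```python
import io

import numpy as np
import torch

# Stand-in payload shaped like the real .pt files: a dict of Python
# objects containing NumPy arrays. Saved to a buffer for illustration.
fake = {"batch_results": [{"batch_idx": 0, "token_vector_influence": np.zeros(4)}]}
buf = io.BytesIO()
torch.save(fake, buf)
buf.seek(0)

# weights_only=True would reject the NumPy arrays; full (unsafe) unpickling
# is required, which is why these files should only be loaded when trusted.
data = torch.load(buf, weights_only=False)
print(data["batch_results"][0]["batch_idx"])  # -> 0
```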
## Top-level layout
```
ctxlen_xmodel/                  # cross-model "entropy vs context length" experiment
└── <timestamp>/
    ├── run_config.json
    └── <preset>/               # one subdir per model preset
        ├── run.sh              # cluster submission script for this preset
        ├── entropy_vs_context_len_<site>_layer<i>_<ts>.pt
        └── entropy_vs_context_len_<site>_layer<i>_<ts>/  # (legacy/empty in some runs)

<preset>/                       # per-preset "entropy comparison" experiment
└── <timestamp>/
    ├── run_config.json
    ├── bench.json
    ├── entropy_comparison_<site>_layer<i>.pt
    └── entropy_plots_<site>_layer<i>/
        ├── batch_index.json
        └── batch_NNN.png       # per-batch entropy plots

figures/                        # summary figures derived from the above
├── entropy_vs_depth_crossmodel_grid_boxplot.png
├── entropy_vs_depth_crossmodel_grid_violin.png
├── entropy_vs_depth__<preset>.png                     # one per preset
└── entropy_plots_resid_out_layer<i>_20260414_053350/  # earlier per-batch plots (pythia-70m)
```
`<preset>` is one of `pythia-70m`, `qwen2-0.5b`, `gpt2-small`, `llama-3.2-1b`, `gemma-2-2b`, `llama-3-8b`. `<site>` is the hookpoint name (`resid_post`, `resid`, `resid_out`, ...) and varies per preset.
## File schemas
### `entropy_comparison_*.pt` (per-preset experiment)
```python
{
    "batch_results": [
        {
            "batch_idx": int,
            "start_idx": int,                        # offset into the loader's text stream
            "feature_entropies": {feat_idx: float},  # per-feature entropy in bits
            "token_vector_entropy": float,
            "num_active_features": int,
            "feature_influences": {feat_idx: np.ndarray},  # length-N influence vector per feature
            "feature_activations": {feat_idx: np.ndarray},
            "token_vector_influence": np.ndarray,
        },
        ...  # one entry per batch (50 by default)
    ],
    "summary": {"site": str, "preset": str, "timestamp": str, "layer": int, ...},
    "config": {"preset": str, "threshold": float, "total_features": int, ...},
    "plots_dir": str,  # absolute path on the machine that produced the run
    "batch_start_indices": [int, ...],
}
```
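A sketch of walking this schema, using a hand-written stand-in dict with the same keys (a real file would be `torch.load()`-ed instead):

```python
import numpy as np

# Stand-in mimicking an entropy_comparison_*.pt payload; values are invented.
data = {
    "batch_results": [
        {"batch_idx": 0, "feature_entropies": {3: 1.5, 7: 2.0}, "token_vector_entropy": 2.2},
        {"batch_idx": 1, "feature_entropies": {3: 1.0}, "token_vector_entropy": 1.8},
    ],
    "summary": {"preset": "pythia-70m", "layer": 3},
}

# Pool the per-feature entropies (bits) across all batches and summarise.
all_entropies = [h for b in data["batch_results"] for h in b["feature_entropies"].values()]
print(f"{data['summary']['preset']} layer {data['summary']['layer']}: "
      f"mean feature entropy {np.mean(all_entropies):.2f} bits "
      f"over {len(all_entropies)} feature activations")
```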
### `entropy_vs_context_len_*.pt` (cross-model experiment)
```python
{
    "results_by_context_len": {
        ctx_len: {
            "feature_entropies": {feat_idx: float},
            "token_vector_entropy": float,
            "num_active_features": int,
            ...
        },
        ...  # one entry per context length (8, 72, 136, ...)
    },
    "summary": {"preset": str, "site": str, "layer": int, "timestamp": str,
                "max_context_len": int, ...},
    "config": {"preset": str, "threshold": float, "total_features": int,
               "sae_source": str, ...},
    "plots_dir": str,
}
```
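A sketch of turning `results_by_context_len` into a plottable (context length, mean feature entropy) curve, again using an invented stand-in payload:

```python
# Stand-in mimicking an entropy_vs_context_len_*.pt payload; values are invented.
data = {
    "results_by_context_len": {
        8: {"feature_entropies": {0: 1.0, 1: 3.0}, "num_active_features": 2},
        72: {"feature_entropies": {0: 2.0}, "num_active_features": 1},
    },
}

# One (ctx_len, mean entropy in bits) point per context length, sorted by ctx_len.
curve = sorted(
    (ctx, sum(r["feature_entropies"].values()) / len(r["feature_entropies"]))
    for ctx, r in data["results_by_context_len"].items()
)
print(curve)  # -> [(8, 2.0), (72, 2.0)]
```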
The top-level `run_config.json` in each `ctxlen_xmodel/<timestamp>/` folder records the global parameters of that cross-model run (presets, per-preset `max_context_len`/`step`/`char_budget`, seed, git commit, host).
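A reading sketch; the key names follow the description above, but the exact nesting of the per-preset parameters is an assumption:

```python
import json

# Assumed run_config.json shape: per-preset parameters nested under "presets".
cfg_text = """
{
  "presets": {"pythia-70m": {"max_context_len": 520, "step": 64, "char_budget": 200000}},
  "seed": 0
}
"""
cfg = json.loads(cfg_text)
for preset, params in cfg["presets"].items():
    print(preset, params["max_context_len"], params["step"])
```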
## Figures
`figures/` contains summary plots derived from the raw `.pt` artefacts above:

- `entropy_vs_depth_crossmodel_grid_{boxplot,violin}.png` – feature-entropy distribution by layer depth, side by side across all six presets.
- `entropy_vs_depth__<preset>.png` – per-preset depth sweep, one figure per model.
- `entropy_plots_resid_out_layer<i>_20260414_053350/` – earlier (2026-04-14) per-batch entropy plots for pythia-70m, kept for reference. The corresponding canonical run in `pythia-70m/20260427_105943/` supersedes these but uses the same plotting format.
## Caveats
- `plots_dir` inside each `.pt` and the `host` field in `run_config.json` reflect the originating machine and are not portable.
- The `entropy_plots_*/` PNG directories are derived artefacts and can be regenerated from the corresponding `.pt`.
- Symlinks named `latest` were used locally to point at the most recent run; they are intentionally not included here. The most recent run is the timestamped subdirectory with the largest `<timestamp>` value.
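Since the `latest` symlinks are absent, the most recent run can be picked by directory name: the `YYYYMMDD_HHMMSS` timestamps sort correctly as plain strings. A sketch against a throwaway directory tree:

```python
import tempfile
from pathlib import Path

# Throwaway stand-in for a <preset>/ folder with two timestamped runs.
root = Path(tempfile.mkdtemp())
for ts in ["20260414_053350", "20260427_105943"]:
    (root / ts).mkdir()

# YYYYMMDD_HHMMSS names sort lexicographically in chronological order,
# so the max() of the directory names is the most recent run.
latest = max(p.name for p in root.iterdir() if p.is_dir())
print(latest)  # -> 20260427_105943
```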
## Citation / contact
For questions, contact the authors of the originating project.