---
task_categories:
- feature-extraction
language:
- en
size_categories:
- 1K<n<10K
tags:
- representation-similarity
- representation-convergence
- cross-model-transport
- benchmark
- alignment
- evaluation
license:
---
```csv
cite_key,short_name,year,thread,domain,scale,metric_type,metric_name,reported_value,bcct_regime,key_finding
prh2024,PRH,2024,convergence,vision,XL,alignment,mutual k-NN,Increasing with scale,Mixed,78 vision models converge toward shared statistical structure; alignment increases with model/data scale
groger2026aristotelian,Aristotelian,2026,convergence,vision,XL,alignment,null-calibrated k-NN,Global vanishes after calibration,Local-Only,"After null calibration, global convergence disappears; only local neighborhood structure survives"
crh2025,CRH,2025,convergence,vision,L,alignment,6 canonical relations,Universal alignment relations,Convergent,"Six alignment relations among representations, weights, and gradients universally govern feature formation"
park2026convergent,Convergent Reps,2026,convergence,multimodal,M,alignment,"CKA, k-NN",Multi-task drives convergence,Convergent,Multi-task training drives convergence in controlled city-coordinate worlds; divergent tasks actively harm it
chawla2026metaothello,Meta-Othello,2026,convergence,language,M,alignment,board probe accuracy,Task-dependent convergence,Mixed,"Mixed-variant board games confirm convergence is task-dependent, not universal"
kapoor2025convergent,Convergent early,2025,convergence,vision,M,alignment,epoch-wise k-NN,Crystallizes in epoch 1,Convergent,Nearly all alignment crystallizes within the first training epoch
dunlap2025divergent,Divergent Reps,2025,convergence,vision,L,alignment,representation divergence,Task-specific divergence,Divergent,Identifies conditions under which representations actively diverge rather than converge
calvo2026flatdino,FlatDINO,2026,convergence,vision,M,alignment,flatness + alignment,Flat minima align better,Convergent,Compressing to 32 tokens improves alignment; flat minima produce more convergent representations
kornblith2019similarity,CKA,2019,convergence,vision,M,alignment,CKA,CKA > invariance tests,N/A,CKA is more reliable than CCA-based metrics for comparing representations across architectures
raghu2017svcca,SVCCA,2017,convergence,vision,M,alignment,SVCCA,Layer-wise similarity,N/A,SVCCA reveals layer-wise representation dynamics during training
morcos2018pwcca,PWCCA,2018,convergence,vision,M,alignment,PWCCA,Projection-weighted CCA,N/A,Projection weighting removes sensitivity to noise dimensions in CCA-based comparisons
klabunde2023similarity,Similarity survey,2023,convergence,vision,XL,alignment,multiple,Metrics disagree,N/A,Comprehensive survey showing similarity metrics often disagree on representation comparisons
barannikov2022rtd,RTD,2022,convergence,vision,M,alignment,RTD (topological),Topological distance,N/A,Representation Topology Divergence captures structural differences missed by kernel-based metrics
williams2021generalized,Gen. Shape,2021,convergence,vision,M,alignment,shape metrics,Generalized shape framework,N/A,"Unifies Procrustes, CKA, and other metrics within a generalized shape analysis framework"
chen2020simclr,SimCLR,2020,convergence,vision,M,alignment,linear probe,76.5% ImageNet top-1,Local-Only,Contrastive SSL produces representations with high linear probe accuracy; temperature controls implicit bitrate
he2020moco,MoCo,2020,convergence,vision,M,alignment,linear probe,Momentum contrast,Local-Only,Momentum-based contrastive learning matches supervised pre-training in transfer quality
grill2020byol,BYOL,2020,convergence,vision,M,alignment,linear probe,74.3% without negatives,Local-Only,Self-supervised learning without negative pairs; predictor network acts as implicit spectral filter
caron2021dino,DINO,2021,convergence,vision,M,alignment,"k-NN, attention maps",Emergent segmentation,Local-Only,Self-distillation produces features with emergent semantic segmentation properties
oquab2024dinov2,DINOv2,2024,convergence,vision,L,alignment,multiple benchmarks,Universal visual features,Convergent,All-purpose visual features that approach universal representations; highest information density among SSL encoders
radford2021clip,CLIP,2021,convergence,cross_modal,XL,alignment,zero-shot transfer,76.2% ImageNet zero-shot,Convergent,Contrastive vision-language pre-training enables zero-shot transfer; highest transport linearity in BCCT atlas
sun2024evaclip,EVA-CLIP,2024,convergence,cross_modal,L,alignment,"zero-shot, linear probe",Improved CLIP at scale,Convergent,Improved training techniques push CLIP further toward universal representations
zhai2023siglip,SigLIP,2023,convergence,cross_modal,L,alignment,sigmoid loss,"Simpler loss, same quality",Convergent,"Sigmoid loss achieves SOTA alignment, suggesting cross-modal convergence is robust to loss choice"
elhage2022toymodels,Superposition,2022,convergence,language,S,alignment,feature geometry,Superposition mechanism,N/A,Networks develop superposition to represent more features than dimensions; mechanistic account of bitrate maximization
bricken2023monosemanticity,Monosemantic,2023,convergence,language,S,alignment,SAE features,Interpretable features,Convergent,Sparse autoencoders extract monosemantic features consistent across models
templeton2024scaling,Scaling SAEs,2024,convergence,language,M,alignment,SAE features at scale,Features scale with model,Convergent,Monosemantic features scale predictably with model size; more features discovered at larger scale
hernandez2024linearity,Linear relations,2024,convergence,language,M,alignment,linearity score,Linear encoding of relations,Convergent,"LLMs develop linear representations of semantic relations, supporting cross-model transport"
gurnee2024language,Space & Time,2024,convergence,language,M,alignment,probing accuracy,Spatial/temporal representations,Convergent,LLMs develop shared representations of space and time across model families
chen2025transferring,Transfer structs,2025,convergence,language,M,transport,cross-model transfer,Near-lossless transfer,Convergent,Internal structures can be transferred across LLMs with near-lossless quality
li2024othello,Othello-GPT,2024,convergence,language,S,alignment,world model probes,Emergent world model,Convergent,Sequence models trained on game transcripts develop emergent internal world representations
conneau2020xlmr,XLM-R,2020,convergence,language,XL,alignment,cross-lingual transfer,100+ languages aligned,Convergent,Multilingual models develop shared representations across 100+ languages at scale
pires2019multilingual,mBERT,2019,convergence,language,L,alignment,cross-lingual zero-shot,Surprising cross-lingual transfer,Local-Only,Multilingual BERT enables zero-shot cross-lingual transfer without explicit alignment objective
wu2019beto,Beto/Bentz/Becas,2019,convergence,language,M,alignment,cross-lingual probing,Similar internal structure,Local-Only,BERT models trained on different languages develop similar internal representations
yamins2014hierarchical,DNN-Brain RSA,2014,convergence,neuroscience,M,alignment,RSA,DNN-ventral stream match,Local-Only,DNNs trained on object recognition converge toward primate ventral stream representations
conwell2023what,DNN-Brain survey,2023,convergence,neuroscience,XL,alignment,multiple,Alignment is metric-sensitive,Mixed,DNN-brain alignment is significant but sensitive to similarity metric choice
soni2024metric,Metric sensitivity,2024,convergence,neuroscience,L,alignment,metric comparison,Metric choice alters conclusions,N/A,Metric choice can fundamentally alter conclusions about DNN-brain alignment
wakhloo2026manifold,Manifold capacity,2026,convergence,neuroscience,M,bitrate,manifold capacity,Information density metric,N/A,Manifold capacity is the neuroscience analogue of representation bitrate
kingma2014vae,VAE,2014,latent_design,vision,S,quality,ELBO,Principled latent framework,N/A,First principled framework for continuous latent spaces with explicit bitrate control via KL divergence
higgins2017betavae,beta-VAE,2017,latent_design,vision,S,quality,disentanglement,beta controls disentanglement,N/A,"Increasing beta decouples reconstruction from bottleneck constraint, encouraging disentangled factors"
oord2017vqvae,VQ-VAE,2017,latent_design,vision,S,quality,reconstruction,Discrete codebook latents,N/A,Discrete codebooks provide hard bitrate cap; strict R control via codebook size
razavi2019vqvae2,VQ-VAE-2,2019,latent_design,vision,M,quality,FID,Hierarchical discrete latents,N/A,Hierarchical discrete latents separate global from local structure at different bitrate levels
esser2021taming,VQGAN,2021,latent_design,vision,M,quality,"FID, LPIPS",FID ~7.9 on ImageNet,N/A,Adversarial + perceptual losses decouple perceptual quality from raw bitrate
rombach2022ldm,LDM/SD,2022,latent_design,vision,L,quality,FID,FID ~3.6 on ImageNet-256,N/A,Latent diffusion in 8x-compressed VAE space enables high-quality generation at reduced compute
ho2020ddpm,DDPM,2020,latent_design,vision,S,quality,"FID, IS",FID 3.17 on CIFAR-10,N/A,Denoising diffusion establishes progressive bottleneck; reverse process crystallizes structure
song2021score,Score SDE,2021,latent_design,vision,M,quality,"likelihood, FID",Unified SDE framework,N/A,Unifies SMLD and DDPM under continuous-time SDE; enables exact likelihood computation
dhariwal2021diffusion,Diff. beats GANs,2021,latent_design,vision,M,quality,FID,FID 2.97 on ImageNet-256,N/A,Diffusion models surpass GANs in image quality; classifier guidance provides dynamic bitrate boost
lipman2023flow,Flow Matching,2023,latent_design,vision,M,quality,"FID, likelihood",Deterministic vector field,N/A,Conditional flow matching learns deterministic transport from noise to data without SDE simulation
liu2023rectified,Rectified Flow,2023,latent_design,vision,M,quality,"FID, straightness",Straighter transport paths,N/A,Rectifying flow trajectories enables few-step generation; connects OT to generative modeling
esser2024sd3,SD3,2024,latent_design,vision,L,quality,"FID, human eval",SOTA text-to-image,N/A,Scaling rectified flow transformers achieves new SOTA in text-to-image generation
ul2026,Unified Latents,2026,latent_design,vision,L,bitrate,"bpd, FID, FVD","FID 1.4, FVD 1.3",N/A,First system with continuous bitrate control via diffusion prior; enables causal bitrate intervention
rae2025,RAE,2025,latent_design,vision,M,quality,"FID, rFID",Semantic latents from DINOv2,N/A,Using foundation model features as latent space accelerates diffusion training and improves quality
dcae2025,DC-AE,2025,latent_design,vision,M,quality,"rFID, PSNR",32x spatial compression,N/A,Deep compression at 32x preserves fidelity; tests limits of extreme bitrate reduction
bao2022beit,BEiT,2022,latent_design,vision,M,quality,top-1 accuracy,Discrete tokens for MIM,N/A,Hard bitrate limit via discrete tokens serves as powerful pretext for learning transportable features
tolstikhin2018wasserstein,WAE,2018,latent_design,vision,S,quality,"FID, reconstruction",OT-based regularization,N/A,Replacing KL with Wasserstein distance produces sharper reconstructions and better latent geometry
bansal2021stitching,Stitching,2021,transport,vision,L,transport,stitching accuracy,tau ~0.95 same-seed,Convergent,Thin linear layer bridges representations; Anna Karenina effect: good models are alike
moschella2023relative,Relative Reps,2023,transport,cross_modal,M,transport,zero-shot stitching,Zero-shot cross-model communication,Convergent,Kernel-based relative representations enable zero-shot latent communication without paired training
theseus2026,Theseus,2026,transport,vision,M,transport,cross-arch transfer,Task vector transport,Convergent,Task vectors transport across heterogeneous architectures via Procrustes alignment
fula2025,FuLA,2025,transport,vision,M,transport,functional alignment,Nonlinear alignment,Local-Only,Functional latent alignment extends transport to cross-objective pairs where linear maps fail
ifula2026,iFuLA,2026,transport,vision,M,transport,invariance-aware transport,Geometric constraints,Local-Only,Incorporating geometric constraints improves nonlinear transport robustness
aligned_sae2026,Aligned SAE,2026,transport,cross_modal,S,transport,bimodal atoms,Bimodal atoms carry alignment,Convergent,Specific bimodal subspaces carry the entirety of CLIP's cross-modal alignment signal
ilharco2023editing,Task Arithmetic,2023,transport,vision,M,transport,task accuracy,Additive task vectors,Convergent,Fine-tuning directions are linearly composable in weight space for task addition/removal
wortsman2022soups,Model Soups,2022,transport,vision,M,transport,OOD accuracy,Averaging improves OOD,Convergent,Weight averaging of fine-tuned models improves out-of-distribution generalization
ainsworth2023rebasin,Git Re-Basin,2023,transport,vision,M,transport,barrier height,Permutation removes barrier,Convergent,Loss barrier between models is largely a permutation symmetry artifact; re-basin enables merging
yadav2023ties,TIES-Merging,2023,transport,vision,M,transport,multi-task accuracy,Resolves sign conflicts,Convergent,Trimming + sign election + averaging resolves interference in multi-task merging
yu2024dare,DARE,2024,transport,language,M,transport,LLM benchmark accuracy,"90% dropout, rescale",Convergent,Random pruning of 90% of task vector entries + rescaling effectively reduces merge interference
matena2022fisher,Fisher Merging,2022,transport,vision,M,transport,Fisher-weighted acc.,Curvature-aware merging,Convergent,Fisher information weighting preserves critical parameters during model merging
stoica2024zipit,ZipIt,2024,transport,vision,M,transport,merged accuracy,Feature-based zipping,Local-Only,Merging layers by feature similarity enables fusion of models with different widths/architectures
frankle2020linear,Linear connectivity,2020,transport,vision,M,transport,loss barrier,Linear connectivity from same init,Convergent,Models from same initialization exhibit linear mode connectivity; different inits separated by barriers
cuturi2013sinkhorn,Sinkhorn,2013,transport,theory,N/A,theoretical,entropic OT,Efficient OT solver,N/A,Entropic regularization makes OT differentiable and scalable via Sinkhorn iterations
peyre2019computational,Comp. OT,2019,transport,theory,N/A,theoretical,OT survey,Comprehensive OT framework,N/A,Comprehensive computational OT framework including GW distance for heterogeneous space alignment
courty2017ot,OT Domain Adapt.,2017,transport,vision,M,transport,domain adaptation acc.,OT preserves semantics,Local-Only,OT-based domain adaptation preserves semantic structure under distribution shift
moreau2026probing,OT String Method,2026,transport,vision,M,transport,path energy,Sharp regime transitions,Mixed,Minimum energy paths between models reveal sharp phase transitions in alignment regimes
sotalign2026,SOTAlign,2026,transport,vision,M,transport,semi-supervised OT,Anchor-constrained OT,Local-Only,Semi-supervised OT with anchor samples significantly improves alignment under high TAI
achara2026multiway,Multi-Way OT,2026,transport,vision,L,transport,Wasserstein barycenter,Multi-model alignment,Convergent,Wasserstein barycenter enables simultaneous alignment of M models into a unified latent space
tong2024flow,OT-CFM,2024,transport,vision,M,quality,FID with OT coupling,Minibatch OT improves flow,N/A,Minibatch OT couplings improve flow matching; connects generative modeling to transport theory
tishby1999information,IB Principle,1999,theory,theory,N/A,theoretical,"I(X;Z), I(Z;Y)",IB framework,N/A,Information Bottleneck principle: optimal representations compress X while preserving information about Y
alemi2017deep,Deep VIB,2017,theory,theory,M,theoretical,variational IB,Tractable deep IB,N/A,Variational approximation makes IB tractable for deep networks; connects compression to generalization
shwartzziv2017opening,Opening Black Box,2017,theory,theory,S,theoretical,information plane,Compression phase observed,N/A,DNNs exhibit a compression phase in the information plane; sparked IB-in-DL debate
saxe2018information,IB critique,2018,theory,theory,S,theoretical,information plane,Compression is activation-dependent,N/A,"Compression in DNNs depends on activation function, not a universal training dynamic"
geoib2026,GeoIB,2026,theory,theory,M,theoretical,"Fisher-Rao, Jacobian",Geometric IB decomposition,N/A,I(X;Z) and I(Z;Y) decompose into Fisher-Rao (distribution) and Jacobian-Frobenius (geometry) terms
pereg2026representation,Rep. Rate,2026,theory,theory,N/A,theoretical,representation rate,Fundamental limits,N/A,Defines representation rate and derives fundamental limits determined by source entropy
papyan2020neural,Neural Collapse,2020,theory,vision,M,alignment,NC metrics,Simplex ETF geometry,Convergent,Terminal phase training produces simplex ETF geometry; ultimate convergence for classifiers
kaplan2020scaling,Scaling Laws,2020,theory,language,XL,theoretical,power-law fits,Loss ~ params^{-alpha},N/A,"Power-law scaling of loss with compute, data, and parameters; bitrate implications for convergence"
hoffmann2022chinchilla,Chinchilla,2022,theory,language,XL,theoretical,compute-optimal scaling,Data and params scale equally,N/A,Compute-optimal training scales data and parameters equally; implications for representation quality
wei2022emergent,Emergent abilities,2022,theory,language,XL,theoretical,capability emergence,Abilities emerge at scale,N/A,Certain capabilities emerge only at sufficient scale; possible phase transition in representation quality
schaeffer2023mirage,Mirage,2023,theory,language,XL,theoretical,metric choice,Emergence is metric-dependent,N/A,Apparent emergent abilities may be artifacts of nonlinear metric choice; echoes Aristotelian critique
baevski2020wav2vec2,wav2vec 2.0,2020,convergence,audio,L,alignment,contrastive loss,Self-supervised speech representations,Local-Only,"Self-supervised pretraining on speech via contrastive learning; learns contextualized representations from raw audio"
hsu2021hubert,HuBERT,2021,convergence,audio,L,alignment,masked prediction,Self-supervised speech representations,Local-Only,"Masked prediction of hidden units for speech; offline clustering provides pseudo-labels for self-supervised learning"
radford2023whisper,Whisper,2023,transport,audio,XL,transport,supervised ASR,Robust speech recognition via weak supervision,Convergent,"Large-scale weakly supervised ASR; 680K hours of labeled data; cross-lingual transfer via shared representations"
```
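The rows above follow a fixed 11-column CSV schema (`cite_key` through `key_finding`). A minimal parsing sketch using only the Python standard library is shown below; the `SAMPLE` string holds three rows copied verbatim from the table, and any on-disk filename for the full dataset is an assumption, not part of this card.

```python
import csv
import io
from collections import Counter

# Three rows copied verbatim from the table above. The full dataset follows
# the same schema; a concrete filename for it is not specified in this card.
SAMPLE = """cite_key,short_name,year,thread,domain,scale,metric_type,metric_name,reported_value,bcct_regime,key_finding
prh2024,PRH,2024,convergence,vision,XL,alignment,mutual k-NN,Increasing with scale,Mixed,78 vision models converge toward shared statistical structure; alignment increases with model/data scale
bansal2021stitching,Stitching,2021,transport,vision,L,transport,stitching accuracy,tau ~0.95 same-seed,Convergent,Thin linear layer bridges representations; Anna Karenina effect: good models are alike
tishby1999information,IB Principle,1999,theory,theory,N/A,theoretical,"I(X;Z), I(Z;Y)",IB framework,N/A,Information Bottleneck principle: optimal representations compress X while preserving information about Y
"""

# csv.DictReader handles the quoted fields (e.g. metric names containing
# commas) that appear throughout the table.
rows = list(csv.DictReader(io.StringIO(SAMPLE)))

# Group entries by research thread and by BCCT regime.
threads = Counter(r["thread"] for r in rows)
regimes = Counter(r["bcct_regime"] for r in rows)

print(threads)
print(regimes)
```

Reading from a file instead is a one-line change (`csv.DictReader(open(path, newline=""))`); the quoting convention in the table is standard RFC 4180 CSV, so no custom dialect is needed.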