Daily Papers

by AK and the research community

Mar 16

Compression Favors Consistency, Not Truth: When and Why Language Models Prefer Correct Information

Why do language models sometimes prefer correct statements even when trained on mixed-quality data? We introduce the Compression-Consistency Principle: next-token prediction favors hypotheses that allow shorter and more internally consistent descriptions of the training data. Truth bias emerges only when false alternatives are structurally harder to compress. We test this using small GPT-2-style character-level transformers (3.5M-86M parameters) on synthetic math corpora with controlled mixtures of correct and incorrect rules. In the random-error setting, models strongly prefer correct completions in paired evaluation: 83.1% accuracy with balanced data and 67.0% even when correct rules appear in only 10% of the corpus. Replacing random errors with a coherent but mathematically incorrect rule system largely eliminates the preference (near-chance accuracy). In a more natural-language-like synthetic world, the effect is weaker but still present (57.7%). Additional experiments show that embedding verification steps can restore the preference for correctness even at small scale, while increasing the number of consistent rules produces a graded improvement in accuracy. Our results suggest that what appears as a "truth bias" is largely a side effect of compression pressure and a preference for internal consistency, rather than an intrinsic drive toward truth. Full code and data are available at https://github.com/Rai220/compression-drives-truth.
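To make the paired evaluation concrete, here is a minimal sketch in PyTorch: score a correct and an incorrect completion of the same prompt by summed token log-probability and check which one the model prefers. The off-the-shelf GPT-2 checkpoint and the toy prompt pair are illustrative assumptions; the paper's own models are small character-level transformers trained on synthetic math corpora.

```python
# Hedged sketch of paired evaluation: does the model assign higher likelihood
# to the correct completion than to the incorrect one? Illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def completion_logprob(prompt: str, completion: str) -> float:
    """Sum of log-probabilities the model assigns to the completion tokens."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)  # predicts tokens 1..N-1
    targets = full_ids[0, 1:]
    idx = torch.arange(prompt_len - 1, targets.shape[0])  # completion positions
    return logprobs[idx, targets[idx]].sum().item()

# (prompt, correct, incorrect) triples; a single toy example for illustration.
pairs = [("2 + 2 = ", "4", "5")]
wins = sum(completion_logprob(p, good) > completion_logprob(p, bad)
           for p, good, bad in pairs)
print(f"paired accuracy: {wins / len(pairs):.1%}")
```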

  • 1 author
·
Mar 12

OpenCSP: A Deep Learning Framework for Crystal Structure Prediction from Ambient to High Pressure

High-pressure crystal structure prediction (CSP) underpins advances in condensed matter physics, planetary science, and materials discovery. Yet, most large atomistic models are trained on near-ambient, equilibrium data, leading to degraded stress accuracy at tens to hundreds of gigapascals and sparse coverage of pressure-stabilized stoichiometries and dense coordination motifs. Here, we introduce OpenCSP, a machine learning framework for CSP tasks spanning ambient to high-pressure conditions. This framework comprises an open-source pressure-resolved dataset alongside a suite of publicly available atomistic models that are jointly optimized for accuracy in energy, force, and stress predictions. The dataset is constructed via randomized high-pressure sampling and iteratively refined through an uncertainty-guided concurrent learning strategy, which enriches underrepresented compression regimes while suppressing redundant DFT labeling. Despite employing a training corpus one to two orders of magnitude smaller than those of leading large models, OpenCSP achieves comparable or superior performance in high-pressure enthalpy ranking and stability prediction. Across benchmark CSP tasks spanning a wide pressure window, our models match or surpass MACE-MPA-0, MatterSim v1 5M, and GRACE-2L-OAM, with the largest gains observed at elevated pressures. These results demonstrate that targeted, pressure-aware data acquisition coupled with scalable architectures enables data-efficient, high-fidelity CSP, paving the way for autonomous materials discovery under ambient and extreme conditions.
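The uncertainty-guided concurrent learning step can be illustrated with a short, hedged sketch: score candidate structures by the disagreement of an ensemble of atomistic models and route only the most uncertain ones to DFT labeling. All names and shapes here are assumptions for illustration, not the OpenCSP API.

```python
# Sketch of uncertainty-guided data selection: label only the structures on
# which an ensemble of force fields disagrees most, suppressing redundant DFT.
import numpy as np

def select_for_labeling(energies: np.ndarray, k: int) -> np.ndarray:
    """energies: (n_models, n_structures) ensemble predictions, e.g. eV/atom.
    Returns indices of the k structures with the highest ensemble std-dev."""
    uncertainty = energies.std(axis=0)
    return np.argsort(uncertainty)[-k:]

rng = np.random.default_rng(0)
preds = rng.normal(size=(5, 1000))         # 5 models x 1000 candidate structures
picked = select_for_labeling(preds, k=32)  # send these to DFT, retrain, repeat
print(picked[:5])
```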

  • 6 authors
·
Sep 12, 2025

GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers

Generative Pre-trained Transformer models, such as GPT or OPT, set themselves apart through breakthrough performance across complex language modelling tasks, but also by their extremely high computational and storage costs. Specifically, due to their massive size, even inference for large, highly-accurate GPT models may require multiple performant GPUs, which limits the usability of such models. While there is emerging work on relieving this pressure via model compression, the applicability and performance of existing compression techniques are limited by the scale and complexity of GPT models. In this paper, we address this challenge, and propose GPTQ, a new one-shot weight quantization method based on approximate second-order information, that is both highly accurate and highly efficient. Specifically, GPTQ can quantize GPT models with 175 billion parameters in approximately four GPU hours, reducing the bitwidth down to 3 or 4 bits per weight, with negligible accuracy degradation relative to the uncompressed baseline. Our method more than doubles the compression gains of previously proposed one-shot quantization methods while preserving accuracy, allowing us for the first time to execute a 175-billion-parameter model inside a single GPU for generative inference. Moreover, we show that our method can still provide reasonable accuracy in the extreme quantization regime, in which weights are quantized to 2-bit or even ternary levels. We show experimentally that these improvements can be leveraged for end-to-end inference speedups over FP16 of around 3.25x when using high-end GPUs (NVIDIA A100) and 4.5x when using more cost-effective ones (NVIDIA A6000). The implementation is available at https://github.com/IST-DASLab/gptq.
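The core GPTQ update admits a compact sketch: quantize one weight column at a time and use the inverse Hessian of the layer inputs to spread each column's rounding error over the columns not yet quantized. This simplified version, assuming a single linear layer, omits the paper's blocked Cholesky formulation and lazy batch updates.

```python
# Minimal GPTQ-style sketch for one linear layer (no blocking; illustrative).
import torch

def quantize_rtn(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Symmetric round-to-nearest quantization of a weight column."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax + 1e-12
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

def gptq_quantize(W: torch.Tensor, X: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """W: (out_features, in_features) weights; X: (in_features, n_samples)
    calibration inputs used to build a damped Hessian proxy H = 2 X X^T."""
    d = W.shape[1]
    H = 2 * X @ X.T + 1e-2 * torch.eye(d)
    Hinv = torch.linalg.inv(H)
    Q = W.clone()
    for j in range(d):
        q = quantize_rtn(Q[:, j], bits)
        err = (Q[:, j] - q) / Hinv[j, j]  # scaled rounding error of column j
        Q[:, j] = q
        if j + 1 < d:
            # Compensate not-yet-quantized columns via the inverse Hessian row.
            Q[:, j + 1:] -= err.unsqueeze(1) * Hinv[j, j + 1:].unsqueeze(0)
    return Q

W = torch.randn(8, 16)
X = torch.randn(16, 128)
print((gptq_quantize(W, X) - W).abs().mean())  # small quantization error
```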

  • 4 authors
·
Oct 31, 2022

Squeeze3D: Your 3D Generation Model is Secretly an Extreme Neural Compressor

We propose Squeeze3D, a novel framework that leverages implicit prior knowledge learnt by existing pre-trained 3D generative models to compress 3D data at extremely high compression ratios. Our approach bridges the latent spaces between a pre-trained encoder and a pre-trained generation model through trainable mapping networks. Any 3D model represented as a mesh, point cloud, or radiance field is first encoded by the pre-trained encoder and then transformed (i.e. compressed) into a highly compact latent code. This latent code can effectively be used as an extremely compressed representation of the original 3D model. A mapping network transforms the compressed latent code into the latent space of a powerful generative model, which is then conditioned to recreate the original 3D model (i.e. decompression). Squeeze3D is trained entirely on generated synthetic data and does not require any 3D datasets. The architecture can be used flexibly with existing pre-trained 3D encoders and generative models, and supports different formats, including meshes, point clouds, and radiance fields. Our experiments demonstrate that Squeeze3D achieves compression ratios of up to 2187x for textured meshes, 55x for point clouds, and 619x for radiance fields while maintaining visual quality comparable to many existing methods. Squeeze3D incurs only a small compression and decompression latency since it does not involve training object-specific networks to compress an object.
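The bridging idea can be sketched as a small trainable mapper between two frozen latent spaces; the module names and dimensions below are illustrative assumptions, not the paper's networks.

```python
# Illustrative Squeeze3D-style pipeline: frozen encoder latent -> trainable
# MLP mapper -> frozen generator latent. Only the mapper is trained.
import torch
import torch.nn as nn

class LatentMapper(nn.Module):
    """Trainable bridge between encoder and generator latent spaces."""
    def __init__(self, enc_dim: int = 256, gen_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(enc_dim, 1024), nn.GELU(),
            nn.Linear(1024, gen_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Compression: z = frozen_encoder(mesh); store z as the compressed code.
# Decompression: recon = frozen_generator(mapper(z)).
mapper = LatentMapper()
z = torch.randn(1, 256)   # stand-in for a compact encoder latent
print(mapper(z).shape)    # torch.Size([1, 512]) -> feed to the generator
```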

  • 5 authors
·
Jun 9, 2025

Influence of pressure on properties of multi-gap type-I superconductor BeAu

We report on studies of the superconducting and normal-state properties of the noncentrosymmetric superconductor BeAu under hydrostatic pressure conditions. The room-temperature equation of state (EOS) reveals the values of the bulk modulus ($B_0$) and its first derivative ($B'_0$) at ambient pressure to be $B_0 \simeq 132$ GPa and $B'_0 \simeq 30$, respectively. Up to the highest pressures studied ($p \simeq 2.2$ GPa), BeAu remains a multi-gap type-I superconductor. The analysis of $B_{\mathrm{c}}(T, p)$ data within the self-consistent two-gap approach suggests the presence of two superconducting energy gaps, with the gap-to-$T_{\mathrm{c}}$ ratios $\Delta_1/k_{\mathrm{B}}T_{\mathrm{c}} \sim 2.3$ and $\Delta_2/k_{\mathrm{B}}T_{\mathrm{c}} \sim 1.1$ for the larger and smaller gaps, respectively [$\Delta = \Delta(0)$ is the zero-temperature value of the gap and $k_{\mathrm{B}}$ is the Boltzmann constant]. With increasing pressure, $\Delta_1/k_{\mathrm{B}}T_{\mathrm{c}}$ increases while $\Delta_2/k_{\mathrm{B}}T_{\mathrm{c}}$ decreases, suggesting that pressure enhances (weakens) the coupling strength between the superconducting carriers within the bands where the larger (smaller) superconducting energy gap has opened. The superconducting transition temperature $T_{\mathrm{c}}$, the zero-temperature values of the superconducting gaps $\Delta_1$ and $\Delta_2$, and the zero-temperature value of the thermodynamic critical field $B_{\mathrm{c}}(0)$ all decrease with increasing pressure, at rates of $\mathrm{d}T_{\mathrm{c}}/\mathrm{d}p \simeq -0.195$ K/GPa, $\mathrm{d}\Delta_1/\mathrm{d}p \simeq -0.034$ meV/GPa, $\mathrm{d}\Delta_2/\mathrm{d}p \simeq -0.029$ meV/GPa, and $\mathrm{d}B_{\mathrm{c}}(0)/\mathrm{d}p = -2.65(1)$ mT/GPa, respectively. The measured $B_{\mathrm{c}}(0)$ values plotted as a function of $T_{\mathrm{c}}$ follow an empirical scaling relation established for conventional type-I superconductors.
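For context (a textbook reference value, not taken from the abstract): the weak-coupling BCS prediction for the gap-to-$T_{\mathrm{c}}$ ratio is about 1.764, so the reported values of 2.3 and 1.1 sit above and below it, consistent with one more strongly and one more weakly coupled band.

```latex
% Weak-coupling BCS reference for the gap-to-T_c ratio:
%   2.3 > 1.764 (stronger-coupled band), 1.1 < 1.764 (weaker-coupled band).
\[
  \left. \frac{\Delta(0)}{k_{\mathrm{B}} T_{\mathrm{c}}} \right|_{\mathrm{BCS}}
  = \frac{\pi}{e^{\gamma}} \approx 1.764
\]
```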

  • 10 authors
·
Feb 2, 2025