arxiv:2604.03190

Gradient Boosting within a Single Attention Layer

Published on Apr 3

Abstract

Gradient-boosted attention enhances standard attention by adding a correction pass that addresses prediction errors through learned projections and gating mechanisms, achieving lower perplexity on language modeling tasks.

AI-generated summary

Transformer attention computes a single softmax-weighted average over values -- a one-pass estimate that cannot correct its own errors. We introduce gradient-boosted attention, which applies the principle of gradient boosting within a single attention layer: a second attention pass, with its own learned projections, attends to the prediction error of the first and applies a gated correction. Under a squared reconstruction objective, the construction maps onto Friedman's gradient boosting machine, with each attention pass as a base learner and the per-dimension gate as the shrinkage parameter. We show that a single Hopfield-style update erases all query information orthogonal to the stored-pattern subspace, and that further iteration under local contraction can collapse distinct queries in the same region to the same fixed point. We also show that separate projections for the correction pass can recover residual information inaccessible to the shared-projection approach of Tukey's twicing. On a 10M-token subset of WikiText-103, gradient-boosted attention achieves a test perplexity of 67.9 compared to 72.2 for standard attention, 69.6 for Twicing Attention, and 69.0 for a parameter-matched wider baseline, with two rounds capturing most of the benefit.
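The two-pass construction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the abstract does not specify the residual target, the gate parameterization, or multi-head details, so here the residual is assumed to be taken against a reconstruction target, and the gate is assumed to be a learned per-dimension vector acting as the shrinkage factor.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    # Standard single-pass softmax attention: a one-shot weighted
    # average over the projected values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

def boosted_attention(X, params1, params2, gate, target):
    # First pass: the base learner, ordinary attention.
    y1 = attention(X, *params1)
    # Residual of the first pass under a squared reconstruction
    # objective (the choice of target here is an assumption).
    resid = target - y1
    # Second pass: separate learned projections attend to the
    # residual; the per-dimension gate plays the role of the
    # shrinkage parameter in Friedman's gradient boosting.
    Wq2, Wk2, Wv2 = params2
    Q2, K2 = X @ Wq2, X @ Wk2
    scores = Q2 @ K2.T / np.sqrt(K2.shape[-1])
    correction = softmax(scores) @ (resid @ Wv2)
    return y1 + gate * correction
```

Note that setting the gate to zero recovers the first pass exactly, which is the sense in which the second pass is a pure correction term; using separate projections for the second pass (rather than reusing the first pass's, as in Tukey's twicing) is what lets it access residual directions the first pass cannot.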

