TensorLens: End-to-End Transformer Analysis via High-Order Attention Tensors Paper • 2601.17958 • Published 2 days ago • 1
Overclocking LLM Reasoning: Monitoring and Controlling Thinking Path Lengths in LLMs Paper • 2506.07240 • Published Jun 8, 2025 • 7 • 2
Revisiting LRP: Positional Attribution as the Missing Ingredient for Transformer Explainability Paper • 2506.02138 • Published Jun 2, 2025 • 1 • 3
Overflow Prevention Enhances Long-Context Recurrent LLMs Paper • 2505.07793 • Published May 12, 2025 • 3
Add-it: Training-Free Object Insertion in Images With Pretrained Diffusion Models Paper • 2411.07232 • Published Nov 11, 2024 • 68
Tokenization Falling Short: The Curse of Tokenization Paper • 2406.11687 • Published Jun 17, 2024 • 16
Make It Count: Text-to-Image Generation with an Accurate Number of Objects Paper • 2406.10210 • Published Jun 14, 2024 • 78