title,keywords,url,type
[2312.00752] Mamba: Linear-Time Sequence Modeling with Selective State Spaces,"linear, mamba",https://arxiv.org/abs/2312.00752,efficiency
[2312.06635] Gated Linear Attention Transformers with Hardware-Efficient Training,linear,https://arxiv.org/abs/2312.06635,efficiency
[2405.21060] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality,"linear, mamba",https://arxiv.org/abs/2405.21060,efficiency
[2406.06484] Parallelizing Linear Transformers with the Delta Rule over Sequence Length,linear,https://arxiv.org/abs/2406.06484,efficiency
[2505.20045] Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs,"attention heads, uncertainty quantification, llm",https://arxiv.org/abs/2505.20045,efficiency
[2505.16284] Only Large Weights (And Not Skip Connections) Can Prevent the Perils of Rank Collapse,"rank collapse, weights",https://arxiv.org/abs/2505.16284,efficiency
[2505.21785] Born a Transformer -- Always a Transformer?,"transformer, architecture",https://arxiv.org/abs/2505.21785,efficiency
[2505.22506] Sparsification and Reconstruction from the Perspective of Representation Geometry,"sparsification, representation, geometry",https://arxiv.org/abs/2505.22506,efficiency
[2505.22255] Train Sparse Autoencoders Efficiently by Utilizing Features Correlation,"SAE, efficiency, training",https://arxiv.org/abs/2505.22255,efficiency
[2506.00799] Uni-LoRA: One Vector is All You Need,"LoRA, efficient",https://arxiv.org/abs/2506.00799,efficiency
[2505.23657] Active Layer-Contrastive Decoding Reduces Hallucination in Large Language Model Generation,"hallucination, llm, decoding",https://arxiv.org/abs/2505.23657,efficiency
[2505.22689] SlimLLM: Accurate Structured Pruning for Large Language Models,"pruning, llm, efficiency",https://arxiv.org/abs/2505.22689,efficiency
[2506.18233] The 4th Dimension for Scaling Model Size,"scaling, efficiency",https://arxiv.org/abs/2506.18233,efficiency
[2506.15872] Hidden Breakthroughs in Language Model Training,"llm, training",https://arxiv.org/abs/2506.15872,efficiency
[2506.15647] Exploring and Exploiting the Inherent Efficiency within Large Reasoning Models for Self-Guided Efficiency Enhancement,"llm, reasoning, efficiency",https://arxiv.org/abs/2506.15647,efficiency
[2506.13216] Capability Salience Vector: Fine-grained Alignment of Loss and Capabilities for Downstream Task Scaling Law,"llm, scaling",https://arxiv.org/abs/2506.13216,efficiency
[2506.13688] What Happens During the Loss Plateau? Understanding Abrupt Learning in Transformers,"llm, training",https://arxiv.org/abs/2506.13688,efficiency
[2506.13674] Prefix-Tuning+: Modernizing Prefix-Tuning through Attention Independent Prefix Data,"llm, tuning",https://arxiv.org/abs/2506.13674,efficiency
[2506.12119] Can Mixture-of-Experts Surpass Dense LLMs Under Strictly Equal Resources?,"llm, MoE, efficiency",https://arxiv.org/abs/2506.12119,efficiency
[2506.11769] Long-Short Alignment for Effective Long-Context Modeling in LLMs - ICML 2025,"llm, long-context",https://arxiv.org/abs/2506.11769,efficiency
[2506.09251] Extrapolation by Association: Length Generalization Transfer in Transformers,"llm, generalization",https://arxiv.org/abs/2506.09251,efficiency
[2506.08552] Efficient Post-Training Refinement of Latent Reasoning in Large Language Models,"llm, reasoning, efficiency",https://arxiv.org/abs/2506.08552,efficiency
[2506.06609] Transferring Features Across Language Models With Model Stitching,"llm, transfer learning",https://arxiv.org/abs/2506.06609,efficiency
[2506.06105] Text-to-LoRA: Instant Transformer Adaption,"llm, LoRA, adaption",https://arxiv.org/abs/2506.06105,efficiency
[2506.06607] Training-Free Tokenizer Transplantation via Orthogonal Matching Pursuit,"tokenizer, llm",https://arxiv.org/abs/2506.06607,efficiency