Article Transformers v5: Simple model definitions powering the AI ecosystem • Dec 1, 2025 • 288
Unveiling Intrinsic Dimension of Texts: from Academic Abstract to Creative Story Paper • 2511.15210 • Published Nov 19, 2025 • 90
Hi @jzx03, I am not aware of any of the related code being broken right now, but Transformers is a big and fast-growing library, and such things have happened before. It would be best if you post a reproducible example as an issue, showing the difference in behavior between 4.49 and some earlier version. See how I did that in https://github.com/huggingface/transformers/issues/29525. Also check the relevant tests, like https://github.com/huggingface/transformers/blob/be37d34f44ff1bc928e59ffb8a30adecab8835a8/tests/models/llama/test_modeling_llama.py#L811, to see if they still work OK, or possibly extend the tests to cover the failure that you discovered.
No, I don't have such a script readily available. Please follow the links in section 2 above or search elsewhere.
Switti: Designing Scale-Wise Transformers for Text-to-Image Synthesis Paper • 2412.01819 • Published Dec 2, 2024 • 34
Accurate Compression of Text-to-Image Diffusion Models via Vector Quantization Paper • 2409.00492 • Published Aug 31, 2024 • 11
TabReD: A Benchmark of Tabular Machine Learning in-the-Wild Paper • 2406.19380 • Published Jun 27, 2024 • 49
Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps Paper • 2406.14539 • Published Jun 20, 2024 • 27
Accelerating LLM Inference with Staged Speculative Decoding Paper • 2308.04623 • Published Aug 8, 2023 • 25
SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices Paper • 2406.02532 • Published Jun 4, 2024 • 13
SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression Paper • 2306.03078 • Published Jun 5, 2023 • 3
Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding Paper • 2402.12374 • Published Feb 19, 2024 • 4