Pavan Kumar Balijepalli (pavankumarbalijepalli)
5 followers · 6 following
AI & ML interests: Learn. Build. Teach.
Recent Activity
Posted an update (2 days ago):
The quadratic bottleneck of long-context LLMs just hit a massive speed wall.

Processing long-context sequences in LLMs is computationally expensive due to the quadratic complexity of self-attention. Existing sparse attention methods often rely on sorting or cumulative summation (Top-k/Top-p), which are slow and struggle to prune the "long-tail" of irrelevant tokens.

- FlashPrefill achieves a 27.78× speedup on 256K sequences by replacing heavy sorting with a Max-based Dynamic Thresholding mechanism.
- It introduces "Instantaneous Pattern Discovery" using block-level approximations, bypassing the need for expensive, full-attention score calculations.
- Unlike previous methods that struggle with shorter contexts, it maintains a 1.71× speedup even at 4K, proving its robustness across all scales.
- The framework is fully compatible with existing LLM/VLM architectures and integrates seamlessly into vLLM for real-world deployment.

This breakthrough significantly reduces Time-to-First-Token (TTFT) for long-context applications, making massive document analysis and long-video understanding practical and cost-effective. It turns a major performance bottleneck into a streamlined, hardware-efficient process.

How much compute are we wasting on "long-tail" tokens that don't actually matter? FlashPrefill suggests the answer is: a lot.

#AI #LLMs #MachineLearning #DeepLearning #TechInnovation #GPUComputing

Source: https://arxiv.org/pdf/2603.06199
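The core idea the post describes, keeping only the key blocks whose score clears a threshold derived from the single maximum, rather than sorting all scores for Top-k, can be sketched in a few lines of NumPy. This is a minimal illustration, not FlashPrefill's actual implementation: the mean-pooled block keys, the `alpha` fraction, and the function name are all my assumptions.

```python
import numpy as np

def select_blocks_max_threshold(q, block_keys, alpha=0.5):
    """Keep key blocks whose approximate score clears a max-based threshold.

    q          : (d,) query vector
    block_keys : (num_blocks, d) one summary key per block (e.g. mean-pooled)
    alpha      : fraction of the max score a block must reach to survive

    Illustrative sketch only -- not the paper's exact formulation.
    """
    scores = block_keys @ q                # approximate block-level attention scores
    threshold = alpha * scores.max()       # dynamic threshold from one max() -- no sort
    return np.nonzero(scores >= threshold)[0]

# Toy example: four key blocks, query aligned with blocks 0 and 2.
q = np.array([1.0, 1.0])
block_keys = np.array([[5.0, 5.0],    # score 10 -> kept
                       [0.5, 0.5],    # score  1 -> pruned
                       [3.0, 3.0],    # score  6 -> kept
                       [1.0, 1.0]])   # score  2 -> pruned
kept = select_blocks_max_threshold(q, block_keys, alpha=0.5)
print(kept)  # [0 2]
```

The contrast with Top-k is the point: `scores.max()` is a single O(n) reduction that parallelizes trivially on GPU, whereas sorting or partial-sorting the scores is both slower and forces a fixed budget k that handles the long tail poorly.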
Updated a Space, pavankumarbalijepalli/portfolio (9 days ago)
Published a Space, pavankumarbalijepalli/portfolio (9 days ago)
pavankumarbalijepalli's models (4)
- pavankumarbalijepalli/telLM-gemma2-9b-16bit — Text Generation • 9B • Updated May 15, 2025 • 1 download • 1 like
- pavankumarbalijepalli/telLM-gemma2-9b — Updated Mar 1, 2025 • 1 download
- pavankumarbalijepalli/phi2-nl2sql-lora — Text Generation • 3B • Updated Feb 28, 2025 • 7 downloads • 1 like
- pavankumarbalijepalli/phi2-sqlcoder — Text Generation • 3B • Updated May 29, 2024 • 144 downloads • 7 likes