---
title: README
emoji: πŸ’»
colorFrom: blue
colorTo: purple
sdk: static
pinned: false
---

# LLM-Drop

πŸ€— **LLM-Drop** hosts research artifacts from our work on efficient foundation models, with a focus on large language models and unified multimodal models.

Our research studies how modern foundation models can be made more efficient while preserving their core capabilities. This page collects the resulting model weights, code links, project pages, and related resources.
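All released weights live on the Hub, so any checkpoint listed below can be fetched with the standard `huggingface_hub` client. A minimal sketch, using one of the repo IDs from the Sparse Unified Models section as an example:

```python
# Download a released checkpoint from the Hub.
# The repo ID below is one of the models listed later on this page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="LLM-Drop/BAGEL-MoE-7B-GEN-16to8")
print(local_dir)  # local path containing the downloaded weights
```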

## πŸ“Œ Projects

### 🧩 LLM-Drop

**Uncovering the Redundancy in Transformers via a Unified Study of Layer Dropping**  
TMLR 2026

- πŸ“„ Paper: https://openreview.net/forum?id=1I7PCbOPfe
- πŸ’» Code: https://github.com/CASE-Lab-UMD/LLM-Drop
- πŸ€— Models: https://huggingface.co/collections/LLM-Drop/llm-drop
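For intuition, layer dropping in its simplest form removes a subset of decoder blocks before inference. Below is a minimal sketch for a Llama-style `transformers` model; the checkpoint and layer indices are illustrative placeholders, not the ones selected by our analysis (see the code repo for the actual selection procedure):

```python
# Illustrative inference-time layer dropping on a Llama-style model.
# Checkpoint and dropped indices are hypothetical examples.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16
)

drop = {24, 25, 26}  # example indices; the paper studies how to choose these
model.model.layers = torch.nn.ModuleList(
    layer for i, layer in enumerate(model.model.layers) if i not in drop
)
model.config.num_hidden_layers = len(model.model.layers)
```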

### πŸ” Pruning on Representations

**Demystifying When Pruning Works via Representation Hierarchies**

- 🌐 Project Page: https://case-lab-umd.github.io/Pruning-on-Representations/
- πŸ“„ Paper: https://arxiv.org/abs/2603.24652
- πŸ’» Code: https://github.com/CASE-Lab-UMD/Pruning-on-Representations
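As background for readers new to pruning, the snippet below shows generic unstructured magnitude pruning with PyTorch's built-in utilities. It is illustration only; the paper analyzes *when* pruning works through representation hierarchies rather than proposing this particular algorithm:

```python
# Generic unstructured magnitude (L1) pruning on a single linear layer.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(512, 512)
prune.l1_unstructured(layer, name="weight", amount=0.5)  # zero 50% of weights
prune.remove(layer, "weight")  # bake the sparsity into the weight tensor
print((layer.weight == 0).float().mean())  # ~0.5
```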

### 🌐 Sparse Unified Models

**Understanding and Harnessing Sparsity in Unified Multimodal Models**

- 🌐 Project Page: https://shwai-he.github.io/SparseUnifiedModel/
- πŸ“„ Paper: https://huggingface.co/papers/2512.02351
- πŸ’» Code: https://github.com/Shwai-He/SparseUnifiedModel
- πŸ€— Models:
  - https://huggingface.co/LLM-Drop/BAGEL-MoE-7B-GEN-16to8
  - https://huggingface.co/LLM-Drop/BAGEL-MoE-7B-GEN-32to16

## πŸ“¬ Contact

For questions or collaborations, please contact:

- Shwai He: shwaihe@umd.edu
- Guoheng Sun: ghsun@umd.edu