Collections
Discover the best community collections!
Collections trending this week
- NExT-GPT: Any-to-Any Multimodal LLM
  Paper • 2309.05519 • Published • 79
- Large Language Model for Science: A Study on P vs. NP
  Paper • 2309.05689 • Published • 22
- AstroLLaMA: Towards Specialized Foundation Models in Astronomy
  Paper • 2309.06126 • Published • 18
- Large Language Models for Compiler Optimization
  Paper • 2309.07062 • Published • 25
- SILC: Improving Vision Language Pretraining with Self-Distillation
  Paper • 2310.13355 • Published • 9
- Woodpecker: Hallucination Correction for Multimodal Large Language Models
  Paper • 2310.16045 • Published • 17
- BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
  Paper • 2201.12086 • Published • 3
- ImageNetVC: Zero-Shot Visual Commonsense Evaluation on 1000 ImageNet Categories
  Paper • 2305.15028 • Published • 1
- Large Language Models for Compiler Optimization
  Paper • 2309.07062 • Published • 25
- Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
  Paper • 2310.17157 • Published • 14
- FP8-LM: Training FP8 Large Language Models
  Paper • 2310.18313 • Published • 33
- Atom: Low-bit Quantization for Efficient and Accurate LLM Serving
  Paper • 2310.19102 • Published • 11