schneiderkamplab/SDU-Daisy
- Resource-constrained language modelling - Evaluation - Low-Bandwidth Distributed Training - Quantization - Medical Language Modelling - AI x Human
FlexMoRE: A Flexible Mixture of Rank-heterogeneous Experts for Efficient Federatedly-trained Large Language Models
When are 1.58 bits enough? A Bottom-up Exploration of BitNet Quantization