---
license: apache-2.0
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/arcee-lite-GGUF

This is a quantized version of [arcee-ai/arcee-lite](https://huggingface.co/arcee-ai/arcee-lite) created using llama.cpp.

# Original Model Card
Arcee-Lite
Arcee-Lite is a compact yet powerful 1.5B-parameter language model developed as part of the DistillKit open-source project. Despite its small size, Arcee-Lite demonstrates impressive performance, particularly on the MMLU (Massive Multitask Language Understanding) benchmark.

## GGUFs available [here](https://huggingface.co/arcee-ai/arcee-lite-GGUF)

## Key Features

- **Model Size**: 1.5 billion parameters
- **MMLU Score**: 55.93
- **Distillation Source**: Phi-3-Medium
- **Enhanced Performance**: Merged with high-performing distillations

## About DistillKit

DistillKit is our new open-source project focused on creating efficient, smaller models that maintain high performance. Arcee-Lite is one of the first models to emerge from this initiative.

## Performance

Arcee-Lite showcases remarkable capabilities for its size:

- Achieves a score of 55.93 on the MMLU benchmark
- Performs strongly across a variety of tasks

## Use Cases

Arcee-Lite is suitable for a wide range of applications where a balance between model size and performance is crucial:

- Embedded systems
- Mobile applications
- Edge computing
- Resource-constrained environments
Please note that our internal evaluations were consistently higher than their counterparts on the OpenLLM Leaderboard. These scores should be used only to compare the relative performance between our models, not weighed against the leaderboard itself.

---