GGUF · conversational

Commit 4d83464 (verified) by aashish1904 · 1 parent: efcc1de

Upload README.md with huggingface_hub

Files changed (1): README.md (+58 lines)
---
license: apache-2.0
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/arcee-lite-GGUF
This is a quantized version of [arcee-ai/arcee-lite](https://huggingface.co/arcee-ai/arcee-lite) created with llama.cpp.
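A minimal sketch of fetching and running one of these GGUF files locally with llama.cpp's CLI. The quantization suffix `Q4_K_M` and the exact filename below are assumptions for illustration; check the repository's file list for the actual names:

```shell
# Download one quantized file from the repo
# (the filename is hypothetical; verify it on the repo page)
huggingface-cli download QuantFactory/arcee-lite-GGUF \
    arcee-lite.Q4_K_M.gguf --local-dir .

# Run a single prompt with llama.cpp's CLI
# (llama-cli is built from the llama.cpp source tree)
./llama-cli -m arcee-lite.Q4_K_M.gguf \
    -p "Explain GGUF in one sentence." -n 128
```

Any llama.cpp-compatible runtime (e.g. llama-cpp-python, Ollama with a Modelfile) can load the same file.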
# Original Model Card

<div align="center">
  <img src="https://i.ibb.co/g9Z2CGQ/arcee-lite.webp" alt="Arcee-Lite" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>

Arcee-Lite is a compact yet powerful 1.5B-parameter language model developed as part of the DistillKit open-source project. Despite its small size, Arcee-Lite demonstrates impressive performance, particularly on the MMLU (Massive Multitask Language Understanding) benchmark.

## GGUFs available [here](https://huggingface.co/arcee-ai/arcee-lite-GGUF)

## Key Features

- **Model Size**: 1.5 billion parameters
- **MMLU Score**: 55.93
- **Distillation Source**: Phi-3-Medium
- **Enhanced Performance**: Merged with high-performing distillations

## About DistillKit

DistillKit is our new open-source project focused on creating efficient, smaller models that maintain high performance. Arcee-Lite is one of the first models to emerge from this initiative.

## Performance

Arcee-Lite showcases remarkable capabilities for its size:

- Achieves a score of 55.93 on the MMLU benchmark
- Performs strongly across a variety of tasks

## Use Cases

Arcee-Lite is suitable for a wide range of applications where a balance between model size and performance is crucial:

- Embedded systems
- Mobile applications
- Edge computing
- Resource-constrained environments

<div align="center">
  <img src="https://i.ibb.co/hDC7WBt/Screenshot-2024-08-01-at-8-59-33-AM.png" alt="Arcee-Lite" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>

Please note that our internal evaluations were consistently higher than their counterparts on the OpenLLM Leaderboard; they should be used only to compare the models' relative performance, not weighed against the leaderboard itself.

---