Update README.md #1
by Ashmal - opened

README.md CHANGED
@@ -2,11 +2,13 @@
 license: mit
 license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
 language:
-
+- en
 pipeline_tag: text-generation
 tags:
-
-
+- nlp
+- code
+datasets:
+- LLM360/AmberDatasets
 ---
 # MobiLlama-05B
 
@@ -57,7 +59,20 @@ print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
 
 ```
 
-##
+## Evaluation
+| Evaluation Benchmark | MobiLlama-0.5B | MobiLlama-0.8B | MobiLlama-1.2B |
+| ----------- | ----------- | ----------- | ----------- |
+| HellaSwag | 0.5252 | 0.5409 | 0.6299 |
+| MMLU | 0.2645 | 0.2692 | 0.2423 |
+| Arc Challenge | 0.2952 | 0.3020 | 0.3455 |
+| TruthfulQA | 0.3805 | 0.3848 | 0.3557 |
+| CrowsPairs | 0.6403 | 0.6482 | 0.6812 |
+| PIQA | 0.7203 | 0.7317 | 0.7529 |
+| Race | 0.3368 | 0.3337 | 0.3531 |
+| SIQA | 0.4022 | 0.4160 | 0.4196 |
+| Winogrande | 0.5753 | 0.5745 | 0.6108 |
+
 
-
+## Intended Uses
 
+Given the nature of the training data, the MobiLlama-05B model is best suited for prompts using the QA format, the chat format, and the code format.
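The added "Intended Uses" text names three prompt formats but does not show them. A minimal sketch of how the QA and chat styles might be assembled as plain strings; the `Instruct:`/`Output:` and `Speaker:` templates are assumptions modeled on common small-model cards (e.g. phi-2, whose license this README links to), not documented MobiLlama formats:

```python
# Hypothetical prompt builders; the templates below are assumptions,
# not taken from the MobiLlama-05B model card.
def qa_prompt(question: str) -> str:
    # QA format: a single instruction followed by an output cue.
    return f"Instruct: {question}\nOutput:"

def chat_prompt(turns: list) -> str:
    # Chat format: (speaker, text) pairs joined into a transcript.
    return "\n".join(f"{speaker}: {text}" for speaker, text in turns)

print(qa_prompt("What is an LLM?"))
print(chat_prompt([("Alice", "What is an LLM?")]))
```

The resulting string would then be tokenized and passed to the model exactly as in the generation snippet earlier in the README.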