Update README.md
README.md
````diff
@@ -35,7 +35,7 @@ merge_method: linear
 dtype: float16
 ```
 
-## Results
+## 🏆 Results
 > Results obtained through the Serbian LLM evaluation, released by Aleksa Gordić: [serbian-llm-eval](https://github.com/gordicaleksa/serbian-llm-eval)
 > * Evaluation was conducted on a 4-bit version of the model due to hardware resource constraints.
 
@@ -51,13 +51,13 @@ dtype: float16
 <th>PiQA</th>
 </tr>
 <tr>
-<td><a href="https://huggingface.co/datatab/Yugo55-GPT-v4-4bit/">Yugo55-GPT-v4-4bit</a></td>
-<td></td>
-<td></td>
-<td></td>
-<td></td>
+<td><a href="https://huggingface.co/datatab/Yugo55-GPT-v4-4bit/">*Yugo55-GPT-v4-4bit</a></td>
+<td>51.41</td>
+<td>36.00</td>
+<td>57.51</td>
+<td>80.92</td>
 <td><strong>65.75</strong></td>
-<td></td>
+<td>34.70</td>
 <td><strong>70.54</strong></td>
 </tr>
 <tr>
@@ -66,9 +66,9 @@ dtype: float16
 <td><strong>37.78</strong></td>
 <td><strong>57.52</strong></td>
 <td><strong>84.40</strong></td>
-<td></td>
+<td>65.43</td>
 <td><strong>35.60</strong></td>
-<td></td>
+<td>69.43</td>
 </tr>
 </table>
 
@@ -102,11 +102,11 @@ import transformers
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
 model = AutoModelForCausalLM.from_pretrained(
-    "datatab/
+    "datatab/Yugo55-GPT-v4-4bit", torch_dtype="auto"
 )
 
 tokenizer = AutoTokenizer.from_pretrained(
-    "datatab/
+    "datatab/Yugo55-GPT-v4-4bit"
 )
 
 
````
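The hunk context above shows the mergekit settings for this model (`merge_method: linear`, `dtype: float16`). Conceptually, a linear merge is a weighted average of corresponding parameter tensors across the source models. The toy sketch below illustrates only that idea on plain Python lists; mergekit's real implementation operates on checkpoint tensors and handles dtypes, shapes, and I/O.

```python
def linear_merge(weights_list, coeffs):
    """Weighted average of corresponding parameters from several models.

    Toy sketch of what `merge_method: linear` does conceptually; each
    "model" here is just a flat list of scalar parameters.
    """
    if len(weights_list) != len(coeffs):
        raise ValueError("need one coefficient per model")
    total = sum(coeffs)
    merged = []
    # Walk matching parameters across all models and average them.
    for params in zip(*weights_list):
        merged.append(sum(c * p for c, p in zip(coeffs, params)) / total)
    return merged


# Two toy "models" with two parameters each, merged 50/50
model_a = [1.0, 3.0]
model_b = [3.0, 5.0]
print(linear_merge([model_a, model_b], [0.5, 0.5]))  # → [2.0, 4.0]
```

With equal coefficients this reduces to a plain element-wise mean, which is what a `linear` merge with uniform weights produces.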
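The usage snippet in the diff stops after loading the model and tokenizer. The sketch below shows one plausible way to run inference with them; the `[INST]` prompt shape, the helper names, and the generation parameters are illustrative assumptions, not something the model card specifies.

```python
def build_prompt(question: str) -> str:
    # ASSUMPTION: Mistral-style instruction wrapper. If the tokenizer ships
    # a chat template, prefer tokenizer.apply_chat_template(...) instead.
    return f"[INST] {question.strip()} [/INST]"


def generate_answer(model, tokenizer, question: str) -> str:
    # Greedy decoding with an illustrative 128-token budget.
    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


# With the model and tokenizer from the README snippet loaded, usage
# would look like:
# print(generate_answer(model, tokenizer, "Koji je glavni grad Srbije?"))
```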