Update README.md
README.md (changed)

```diff
@@ -15,7 +15,7 @@ language:
 
 This is the first release of a series of Swedish large language models we call "Lynx". Micro is a small model (2 billion params), but punches way above its weight!
 
-Lynx micro is a fine-tune of Google DeepMind Gemma 2B, scores just below GPT-3.5 Turbo on [Scandeval](https://
+Lynx micro is a fine-tune of Google DeepMind's Gemma 2B and scores just below GPT-3.5 Turbo on [Scandeval](https://scandeval.com/swedish-nlg/). In fact, the only non-OpenAI model currently topping the Swedish NLG board on Scandeval is a fine-tune of Llama-3 by AI Sweden based on our data recipe.
 
 We believe that this is a really good model (for its size), but keep in mind that it is still a small model and hasn't memorized as much as larger models tend to do.
 
@@ -99,7 +99,7 @@ r = pipe(
 
 ## Evaluation
 
-The model has been evaluated on [Scandeval](https://
+The model has been evaluated on the Swedish subset of [Scandeval](https://scandeval.com/swedish-nlg/).
 
 
 
```