Commit 024281f (verified) · parent: f144b64
Author: pere

Update README.md

Files changed (1): README.md (+8 −5)
README.md CHANGED

```diff
@@ -6,6 +6,7 @@ language:
 - "en" # English
 tags:
 - "llama"
+- "notram"
 - "norwegian"
 - "bokmål"
 - "nynorsk"
@@ -18,16 +19,16 @@ base_model: "meta-llama/Llama-3.2-3B-Instruct"
 library_name: "transformers"
 ---
 
-## Model Card: "NB-Llama-3.2-3B-Instruct"
+## Model Card: "nb-notram-llama-3.2-3b-instruct"
 
 ### Model overview
 
-"NB-Llama-3.2-3B-Instruct" is part of the "NB-Llama-3.x" series (covering "Llama 3.1", "Llama 3.2", and "Llama 3.3" based releases), trained on top of Meta’s "Llama-3.2-3B-Instruct":
+"NbAiLab/nb-notram-llama-3.2-3b-instruct" is part of the "NB-Llama-3.x" series (covering "Llama 3.1", "Llama 3.2", and "Llama 3.3" based releases) and the "NoTraM" line of work, trained on top of Meta’s "Llama-3.2-3B-Instruct":
 https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
 
 The model is fine-tuned to improve instruction-following behavior in Norwegian Bokmål and Norwegian Nynorsk, while aiming to preserve strong English performance. (It may also handle some Swedish and Danish in practice, but those are not primary targets for this release.)
 
-This model series is an experiment in how far modern open-weight models can be adapted for Norwegian using **only publicly available data**. Although trained at the National Library of Norway, it does **not** include material that is only accessible through legal deposit. It may include public documents (for example governmental reports) that are publicly available and also part of legal deposit collections.
+This release is an experiment in how far modern open-weight models can be adapted for Norwegian using **only publicly available data**. Although trained at the National Library of Norway, it does **not** include material that is only accessible through legal deposit. It may include public documents (for example governmental reports) that are publicly available and also part of legal deposit collections.
 
 ---
 
@@ -94,7 +95,7 @@ This is a research release. For end-user deployments, we recommend careful evalu
 import torch
 from transformers import pipeline
 
-model_id = "NbAiLab/nb-llama-3.2-3B-Instruct"
+model_id = "NbAiLab/nb-notram-llama-3.2-3b-instruct"
 
 pipe = pipeline(
     task="text-generation",
@@ -198,4 +199,6 @@ Model training and documentation: **Per Egil Kummervold**.
 
 ## Funding and acknowledgement
 
-Training was supported by Google’s TPU Research Cloud ("TRC"), which provided Cloud TPUs essential for the computational work.
+Training was supported by Google’s TPU Research Cloud ("TRC"), which provided Cloud TPUs essential for the computational work.
+
+
```
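The diff only shows a fragment of the README's usage snippet around the renamed `model_id`. A minimal sketch of how the updated repository id would be used end to end might look like the following. The helper names (`build_messages`, `chat`) and the Norwegian system prompt are illustrative, not from the model card; the pipeline parameters follow common `transformers` usage for Llama 3 instruct models.

```python
# Repository id as updated in the diff.
MODEL_ID = "NbAiLab/nb-notram-llama-3.2-3b-instruct"


def build_messages(user_message: str) -> list[dict]:
    """Build a chat in the standard Llama 3 instruct message format.

    The system prompt here is an illustrative placeholder, not part of
    the model card.
    """
    return [
        {"role": "system", "content": "Du er en hjelpsom assistent som svarer på norsk."},
        {"role": "user", "content": user_message},
    ]


def chat(user_message: str, max_new_tokens: int = 128) -> str:
    """Run one chat turn through the fine-tuned model.

    Imports are kept inside the function so the file can be read and
    tested without torch/transformers installed; actually loading the
    model requires network access (and ideally a GPU).
    """
    import torch
    from transformers import pipeline

    pipe = pipeline(
        task="text-generation",
        model=MODEL_ID,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    outputs = pipe(build_messages(user_message), max_new_tokens=max_new_tokens)
    # The pipeline returns the whole conversation; the last message is
    # the assistant's reply.
    return outputs[0]["generated_text"][-1]["content"]
```

Keeping the heavy imports inside `chat` means the message-construction logic can be exercised without downloading the 3B checkpoint.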