nielsr (HF Staff) committed
Commit ff813fd · verified · 1 Parent(s): c5f0eaa

Add pipeline tag and library name


This PR adds the `pipeline_tag` and `library_name` to the model card metadata. This will improve discoverability of the model on the Hugging Face Hub. The `text-generation` pipeline tag is appropriate given the model's function as a Large Language Model. The `transformers` library is specified because the model's configuration and tokenizer files are in the standard Transformers format.

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -1,8 +1,10 @@
 ---
-license: apache-2.0
 language:
 - it
 - en
+license: apache-2.0
+pipeline_tag: text-generation
+library_name: transformers
 ---
 
 # Mistral-7B-v0.1-Italian-SAVA
@@ -14,7 +16,7 @@ language:
 
 The **Mistral-7B-v0.1-Adapted** collection of large language models (LLMs), is a collection of adapted generative models in 7B (text in/text out), adapted models from **Mistral-7B-Base-v0.1**.
 
-*Mistral-v0.1-Italian-SAVA* is a continual trained mistral model, after tokenizer substitution.
+*Mistral-v0.1-Italian-SAVA* is a continually trained Mistral model, after tokenizer substitution.
 
 The tokenizer of this models after adaptation is the same of [Minverva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0).
 
@@ -32,7 +34,7 @@ The data are extracted to be skewed toward Italian language with a ration of one
 
 You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
 
-Make sure to update your transformers installation via pip install --upgrade transformers.
+Make sure to update your transformers installation via `pip install --upgrade transformers`.
 
 ```python
 import transformers