Instructions for using ELiRF/NASCA with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use ELiRF/NASCA with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: the "summarization" pipeline type is no longer supported in transformers v5.
# Load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="ELiRF/NASCA")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ELiRF/NASCA")
model = AutoModelForSeq2SeqLM.from_pretrained("ELiRF/NASCA")
```
- Notebooks
- Google Colab
- Kaggle
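As a sketch of end-to-end use with the directly loaded model: the article text, truncation limit, and generation parameters below are illustrative choices, not values prescribed by the model card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ELiRF/NASCA")
model = AutoModelForSeq2SeqLM.from_pretrained("ELiRF/NASCA")

# Any article text to condense (placeholder shown here).
article = "El Govern ha aprovat aquest dimarts el nou pressupost ..."

# max_length=512 for the input and the beam-search settings are
# illustrative defaults, not tuned values.
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```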
Update config.json
config.json (+1 -1):

```diff
@@ -13,7 +13,7 @@
     "decoder_ffn_dim": 4096,
     "decoder_layerdrop": 0.0,
     "decoder_layers": 12,
-    "decoder_start_token_id":
+    "decoder_start_token_id": 1,
     "dropout": 0.1,
     "encoder_attention_heads": 16,
     "encoder_ffn_dim": 4096,
```
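The changed field matters because autoregressive seq2seq generation seeds the decoder input with `decoder_start_token_id` before predicting the first output token; if the id is missing from `config.json`, `generate()` cannot build that initial decoder input. A minimal toy sketch of the decoding loop follows — the stand-in model and its token ids are hypothetical, and only the start-token id `1` comes from this commit:

```python
DECODER_START_TOKEN_ID = 1  # the value added by this commit
EOS_TOKEN_ID = 2            # hypothetical end-of-sequence id for the toy model

def toy_next_token(decoder_input_ids):
    """Stand-in for the real model: emits token 5 twice, then EOS."""
    generated = len(decoder_input_ids) - 1  # exclude the start token
    return 5 if generated < 2 else EOS_TOKEN_ID

def greedy_generate(max_length=10):
    # Decoding starts from decoder_start_token_id, not an empty sequence.
    decoder_input_ids = [DECODER_START_TOKEN_ID]
    while len(decoder_input_ids) < max_length:
        next_id = toy_next_token(decoder_input_ids)
        decoder_input_ids.append(next_id)
        if next_id == EOS_TOKEN_ID:
            break
    return decoder_input_ids

print(greedy_generate())  # → [1, 5, 5, 2]
```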