Instructions to use ybelkada/opt-125m-debug with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ybelkada/opt-125m-debug with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="ybelkada/opt-125m-debug")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ybelkada/opt-125m-debug")
model = AutoModel.from_pretrained("ybelkada/opt-125m-debug")
```

- Notebooks
- Google Colab
- Kaggle
Update config.json #3
by ArthurZ (HF Staff), opened
- config.json (+3, -2)
config.json CHANGED:

```diff
@@ -27,5 +27,6 @@
   "torch_dtype": "float32",
   "transformers_version": "4.19.0.dev0",
   "use_cache": false,
-  "vocab_size": 50272
-}
+  "vocab_size": 50272,
+  "_remove_final_layer_norm": true
+}
```
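The net effect of the change above can be sanity-checked by parsing the resulting JSON tail. This is a minimal sketch using only the standard library; the fragment below reproduces the post-change lines shown in the diff (it is not the full config.json):

```python
import json

# Tail of config.json after this PR, as shown in the diff above
updated_fragment = """{
  "torch_dtype": "float32",
  "transformers_version": "4.19.0.dev0",
  "use_cache": false,
  "vocab_size": 50272,
  "_remove_final_layer_norm": true
}"""

config = json.loads(updated_fragment)
print(config["_remove_final_layer_norm"])  # True: the new flag is set
```

Note the trailing comma added after `"vocab_size": 50272` — without it, appending the `"_remove_final_layer_norm"` key would make the file invalid JSON.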