Instructions to use ljsabc/ChatGLM-prefix-tuning with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ljsabc/ChatGLM-prefix-tuning with Transformers:

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("ljsabc/ChatGLM-prefix-tuning", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
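The snippet above only loads the model weights. A fuller sketch of typical ChatGLM usage is below, assuming this repository follows the upstream THUDM/ChatGLM-6B remote-code API; the tokenizer loading and the `model.chat` method are assumptions based on that upstream API, not confirmed by this page.

```python
def load_chatglm(repo_id: str = "ljsabc/ChatGLM-prefix-tuning"):
    """Load tokenizer and model for a ChatGLM-style repository.

    Assumes the repo ships ChatGLM remote modeling code, hence
    trust_remote_code=True is required on both calls.
    """
    # Deferred import so this module can be imported without transformers installed.
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
    model = AutoModel.from_pretrained(repo_id, trust_remote_code=True, dtype="auto")
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_chatglm()
    # `chat` is part of ChatGLM's custom remote code, not the generic
    # Transformers API; signature assumed from upstream ChatGLM-6B.
    response, history = model.chat(tokenizer, "你好", history=[])
    print(response)
```

Downloading the weights happens on first call; nothing is fetched at import time.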
Update config.json
config.json (+2 −1):

```diff
@@ -24,5 +24,6 @@
   "torch_dtype": "float16",
   "transformers_version": "4.23.1",
   "use_cache": true,
-  "vocab_size": 130528
+  "vocab_size": 130528,
+  "pre_seq_len": 128
 }
```
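The commit adds a single setting, `pre_seq_len`. In upstream ChatGLM's P-tuning v2 code this tells the remote modeling code to allocate a trainable prefix of 128 virtual tokens, which matches the prefix tuning this repository is named for; that interpretation is based on the upstream ChatGLM code, not stated on this page. The tail of config.json after the change reads:

```json
  "torch_dtype": "float16",
  "transformers_version": "4.23.1",
  "use_cache": true,
  "vocab_size": 130528,
  "pre_seq_len": 128
}
```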