Instructions for using ayanami-kitasan/code-pruner with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ayanami-kitasan/code-pruner with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="ayanami-kitasan/code-pruner")
```

```python
# Load model directly
from transformers import SwePrunerForCodeCompression

model = SwePrunerForCodeCompression.from_pretrained("ayanami-kitasan/code-pruner", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
Update config.json
config.json (+1 -1):

```diff
@@ -9,7 +9,7 @@
   "dropout": 0.4,
   "early_layer_ratio": 0.25,
   "middle_layer_ratio": 0.5,
-  "model_type": "
+  "model_type": "swepruner",
   "num_fusion_layers": 1,
   "num_heads": 8,
   "torch_dtype": "bfloat16",
```
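For reference, the state of this config fragment after the commit can be reconstructed from the diff above. The sketch below assembles that fragment by hand (it does not read the actual repository file) and checks the corrected `model_type` value, which is what lets Transformers map the checkpoint to a registered architecture:

```python
import json

# config.json fragment after the commit, assembled from the diff above
# (not fetched from the repository)
config_fragment = json.loads("""{
    "dropout": 0.4,
    "early_layer_ratio": 0.25,
    "middle_layer_ratio": 0.5,
    "model_type": "swepruner",
    "num_fusion_layers": 1,
    "num_heads": 8,
    "torch_dtype": "bfloat16"
}""")

# The commit's fix: "model_type" must be a non-empty architecture name,
# since from_pretrained uses it to resolve the model class.
print(config_fragment["model_type"])  # swepruner
```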