Instructions to use Synthyra/ESMplusplus_small with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Synthyra/ESMplusplus_small with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="Synthyra/ESMplusplus_small", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("Synthyra/ESMplusplus_small", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
Upload entrypoint_setup.py with huggingface_hub
- entrypoint_setup.py (+1 -1)

```diff
@@ -19,4 +19,4 @@ torch.backends.cudnn.deterministic = False
 inductor_config.max_autotune_gemm_backends = "ATEN,CUTLASS,FBGEMM"

 dynamo.config.capture_scalar_outputs = True
-torch._dynamo.config.recompile_limit =
+torch._dynamo.config.recompile_limit = 16
```
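For context on the changed line, here is a minimal sketch of how these `torch.compile`-related settings fit together, assuming a recent PyTorch 2.x release (the import aliases `dynamo` and `inductor_config` match the names used in the diff; `recompile_limit` is the newer name for Dynamo's recompilation cap, known as `cache_size_limit` in older releases):

```python
# Config fragment mirroring the settings in entrypoint_setup.py.
# Assumes PyTorch 2.x with the private torch._dynamo / torch._inductor modules.
import torch
import torch._dynamo as dynamo
import torch._inductor.config as inductor_config

# Let Inductor autotune matrix multiplies across several GEMM backends.
inductor_config.max_autotune_gemm_backends = "ATEN,CUTLASS,FBGEMM"

# Trace scalar outputs (e.g. .item()) instead of graph-breaking on them.
dynamo.config.capture_scalar_outputs = True

# Allow up to 16 recompilations of a function (e.g. due to changing input
# shapes) before Dynamo falls back to eager execution for it.
torch._dynamo.config.recompile_limit = 16
```

Raising the limit to 16 trades longer warm-up (more compiled variants) for fewer eager fallbacks when input shapes vary across calls.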