Instructions to use SparseLLM/prosparse-llama-2-7b-predictor with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use SparseLLM/prosparse-llama-2-7b-predictor with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "feature-extraction",
    model="SparseLLM/prosparse-llama-2-7b-predictor",
    trust_remote_code=True,
)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "SparseLLM/prosparse-llama-2-7b-predictor",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
- Google Colab
- Kaggle
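
Below is a minimal sketch of calling the pipeline from the Transformers snippet above on a piece of text. It assumes the repository's custom code (loaded via `trust_remote_code=True`) exposes the standard feature-extraction interface; the predictor's remote code may expect different inputs or return a different structure, so treat this as illustrative rather than canonical.

```python
# Minimal sketch, assuming the standard feature-extraction pipeline behavior;
# the custom predictor code may differ in inputs and outputs.
from transformers import pipeline

pipe = pipeline(
    "feature-extraction",
    model="SparseLLM/prosparse-llama-2-7b-predictor",
    trust_remote_code=True,
)

features = pipe("Hello, world!")

# A standard feature-extraction pipeline returns nested lists shaped
# (batch, sequence_length, hidden_size) for a single input string.
print(len(features), len(features[0]), len(features[0][0]))
```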