Instructions for using SparseLLM/prosparse-llama-2-13b-gguf with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use SparseLLM/prosparse-llama-2-13b-gguf with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="SparseLLM/prosparse-llama-2-13b-gguf", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("SparseLLM/prosparse-llama-2-13b-gguf", trust_remote_code=True, dtype="auto")
```

A usage sketch for the pipeline, and a local-app alternative, follow the list below.

- Notebooks
- Google Colab
- Kaggle
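If the feature-extraction pipeline above loads successfully, usage might look like the following minimal sketch. The example sentence and the printed dimensions are illustrative assumptions, not part of the model card:

```python
from transformers import pipeline

# Build the feature-extraction pipeline shown above.
pipe = pipeline("feature-extraction", model="SparseLLM/prosparse-llama-2-13b-gguf", trust_remote_code=True)

# The input sentence is illustrative; the pipeline returns a nested list
# shaped [batch, num_tokens, hidden_size].
features = pipe("ProSparse activates only a fraction of neurons per token.")
print(len(features[0]), len(features[0][0]))  # token count, hidden size
```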
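Because this repository ships GGUF weights, a local-app route is also plausible. The sketch below uses llama-cpp-python, assuming your llama.cpp build supports this model's architecture; the file name is a hypothetical placeholder for whichever .gguf file you download from the repository:

```python
from llama_cpp import Llama

# Hypothetical local path; substitute the actual .gguf file downloaded
# from SparseLLM/prosparse-llama-2-13b-gguf.
llm = Llama(model_path="./prosparse-llama-2-13b.gguf", n_ctx=2048)

# Run a short completion and print the generated text.
output = llm("Q: What does activation sparsity mean? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```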