Duplicated from Tiiny/prosparse-llama-2-13b-gguf

SparseLLM/prosparse-llama-2-13b-gguf

Tags: Feature Extraction · Transformers · GGUF · English · sparsellama · custom_code
Instructions for using SparseLLM/prosparse-llama-2-13b-gguf with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • Transformers

    How to use SparseLLM/prosparse-llama-2-13b-gguf with Transformers:

    # Use a pipeline as a high-level helper
    from transformers import pipeline

    pipe = pipeline(
        "feature-extraction",
        model="SparseLLM/prosparse-llama-2-13b-gguf",
        trust_remote_code=True,
    )

    # Or load the model directly
    from transformers import AutoModel

    model = AutoModel.from_pretrained(
        "SparseLLM/prosparse-llama-2-13b-gguf",
        trust_remote_code=True,
        dtype="auto",  # let Transformers pick an appropriate dtype
    )
  • Notebooks
  • Google Colab
  • Kaggle
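The feature-extraction pipeline above returns, per input string, a nested list of token-level hidden states (typically a leading batch axis of 1, then one vector per token). A common follow-up is mean pooling those token vectors into a single embedding. A minimal sketch on dummy data standing in for pipeline output — the shapes here are assumptions for illustration, not values taken from this repository:

```python
# Mean-pool token-level features into one embedding vector.
# `features` mimics feature-extraction pipeline output for one input:
# shape [1][num_tokens][hidden_size] (batch axis of 1 assumed).
def mean_pool(features):
    tokens = features[0]  # drop the batch axis -> [num_tokens][hidden_size]
    hidden_size = len(tokens[0])
    pooled = [0.0] * hidden_size
    for tok in tokens:
        for i, v in enumerate(tok):
            pooled[i] += v
    return [x / len(tokens) for x in pooled]

# Dummy output: 3 tokens, hidden size 2.
dummy = [[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]]
print(mean_pool(dummy))  # → [3.0, 4.0]
```

With the real pipeline you would pass `pipe("some text")` (or its first element, depending on the return shape) into `mean_pool` instead of the dummy list.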
prosparse-llama-2-13b-gguf · 27.6 GB
  • 1 contributor
History: 3 commits
Latest commit: Raincleared, "Upload config.json with huggingface_hub" (b063f2e, verified), about 2 years ago
  • .gitattributes (1.58 kB): Duplicate from PowerInfer/prosparse-llama-2-13b-gguf, about 2 years ago
  • README.md (1.06 kB): Upload README.md with huggingface_hub, about 2 years ago
  • config.json (991 Bytes): Upload config.json with huggingface_hub, about 2 years ago
  • prosparse-llama-2-13b.gguf (27.6 GB): Duplicate from PowerInfer/prosparse-llama-2-13b-gguf, about 2 years ago