Hugging Face

marcorez8/llama-cpp-python-windows-blackwell-cuda

Tags: llama-cpp-python · cuda · nvidia · blackwell · windows · prebuilt-wheels · python · machine-learning · large-language-models · gpu-acceleration

Instructions for using marcorez8/llama-cpp-python-windows-blackwell-cuda with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • llama-cpp-python

    How to use marcorez8/llama-cpp-python-windows-blackwell-cuda with llama-cpp-python:

    # !pip install llama-cpp-python

    from llama_cpp import Llama

    llm = Llama.from_pretrained(
        repo_id="marcorez8/llama-cpp-python-windows-blackwell-cuda",
        filename="{{GGUF_FILE}}",  # placeholder: substitute the GGUF file to load
    )

    output = llm(
        "Once upon a time,",
        max_tokens=512,
        echo=True,
    )
    print(output)
  • Notebooks
  • Google Colab
  • Kaggle
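Since this repo exists to provide a CUDA-enabled build, GPU offload is worth spelling out. The sketch below is illustrative, not the card's official snippet: `n_gpu_layers` is a real llama-cpp-python parameter that `Llama.from_pretrained` forwards to the `Llama` constructor (`-1` offloads every layer), and `cuda_llama_kwargs` is a hypothetical helper; `{{GGUF_FILE}}` remains the card's own placeholder.

```python
# Hedged sketch: build Llama.from_pretrained keyword arguments with full GPU
# offload enabled. n_gpu_layers=-1 asks llama.cpp to offload all layers.
def cuda_llama_kwargs(repo_id: str, filename: str, n_gpu_layers: int = -1) -> dict:
    """Build kwargs for Llama.from_pretrained with GPU offload (helper is illustrative)."""
    return {
        "repo_id": repo_id,
        "filename": filename,           # a concrete .gguf filename
        "n_gpu_layers": n_gpu_layers,   # -1 = offload every layer to the GPU
    }

kwargs = cuda_llama_kwargs(
    "marcorez8/llama-cpp-python-windows-blackwell-cuda",
    "{{GGUF_FILE}}",  # the model card's placeholder; substitute a real file
)
print(kwargs["n_gpu_layers"])  # -1
# Then, with the CUDA wheel from this repo installed:
#   from llama_cpp import Llama
#   llm = Llama.from_pretrained(**kwargs)
```

If offload is working, llama.cpp's startup log reports the layers placed on the GPU; setting `n_gpu_layers=0` falls back to CPU-only inference.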
llama-cpp-python-windows-blackwell-cuda
65.9 MB
  • 1 contributor
History: 4 commits
marcorez8
Upload llama_cpp_python-0.3.9-cp310-cp310-win_amd64.whl
105fb69 verified 11 months ago
  • .gitattributes
    1.6 kB
    Upload llama_cpp_python-0.3.9-cp310-cp310-win_amd64.whl 11 months ago
  • README.md
    1.37 kB
    Update README.md 11 months ago
  • llama_cpp_python-0.3.9-cp310-cp310-win_amd64.whl
    65.9 MB
    Upload llama_cpp_python-0.3.9-cp310-cp310-win_amd64.whl 11 months ago
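The wheel's filename encodes its compatibility requirements using the standard PEP 427 layout (`name-version-python tag-abi tag-platform tag.whl`): `cp310` means it targets CPython 3.10, and `win_amd64` means 64-bit Windows. A minimal stdlib sketch that parses these tags (the `wheel_tags` helper is illustrative, not part of any library):

```python
# Parse PEP 427 wheel filename tags: name-version-python-abi-platform.whl
def wheel_tags(wheel_name: str) -> dict:
    stem = wheel_name[: -len(".whl")]
    # Distribution names use underscores, so the last four hyphens split the tags.
    name, version, py_tag, abi_tag, plat_tag = stem.rsplit("-", 4)
    return {
        "name": name,
        "version": version,
        "python": py_tag,     # e.g. cp310 = CPython 3.10
        "abi": abi_tag,
        "platform": plat_tag, # e.g. win_amd64 = 64-bit Windows
    }

tags = wheel_tags("llama_cpp_python-0.3.9-cp310-cp310-win_amd64.whl")
print(tags["python"], tags["platform"])  # cp310 win_amd64
```

In practice this means the wheel installs only under Python 3.10 on 64-bit Windows; pip enforces the same tags automatically when you run `pip install` on the downloaded file.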