tensorblock/SecurityLLM-GGUF

Tags: Transformers, GGUF, security, cybersecwithai, threat, vulnerability, infosec, zysec.ai, cyber security, ai4security, llmsecurity, cyber, malware analysis, exploitdev, ai4good, aisecurity, cybersec, cybersecurity, TensorBlock, conversational

Instructions for using tensorblock/SecurityLLM-GGUF with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Transformers

    How to use tensorblock/SecurityLLM-GGUF with Transformers:

    # Load model directly (GGUF checkpoints require the `gguf` package and an explicit file name)
    from transformers import AutoModelForCausalLM
    model = AutoModelForCausalLM.from_pretrained(
        "tensorblock/SecurityLLM-GGUF", gguf_file="SecurityLLM-Q2_K.gguf", dtype="auto")
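    The loading call above can be wrapped in a small generation helper. This is a minimal sketch, assuming the SecurityLLM-Q2_K.gguf file listed in this repo and that `transformers`, `torch`, and `gguf` are installed; the imports live inside the function so nothing downloads until you call it.

    ```python
    # Minimal generation sketch for this repo's GGUF weights.
    # First call downloads ~2.7 GB; nothing runs at import time.
    def generate(prompt, repo_id="tensorblock/SecurityLLM-GGUF",
                 gguf_file="SecurityLLM-Q2_K.gguf", max_new_tokens=64):
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
        model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)
        inputs = tokenizer(prompt, return_tensors="pt")
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
        return tokenizer.decode(output[0], skip_special_tokens=True)

    # generate("What is SQL injection?")  # uncomment to run locally
    ```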
  • llama-cpp-python

    How to use tensorblock/SecurityLLM-GGUF with llama-cpp-python:

    # !pip install llama-cpp-python
    
    from llama_cpp import Llama
    
    llm = Llama.from_pretrained(
    	repo_id="tensorblock/SecurityLLM-GGUF",
    	filename="SecurityLLM-Q2_K.gguf",
    )
    
    llm.create_chat_completion(
    	messages = [
    		{"role": "user", "content": "Summarize common indicators of a phishing email."}
    	]
    )
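    `create_chat_completion` returns a dict in the OpenAI chat-completion format. The sketch below extracts the assistant's text from that structure; the hand-built sample dict mimics the real response shape so it runs without loading the model.

    ```python
    # Pull the assistant's reply out of an OpenAI-style response dict,
    # the format returned by llama-cpp-python's create_chat_completion.
    def extract_reply(response):
        return response["choices"][0]["message"]["content"]

    # Sample response mimicking the real shape (no model needed):
    sample = {"choices": [{"message": {"role": "assistant",
                                       "content": "Rotate the leaked credentials."}}]}
    print(extract_reply(sample))  # -> Rotate the leaked credentials.
    ```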
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • llama.cpp

    How to use tensorblock/SecurityLLM-GGUF with llama.cpp:

    Install via Homebrew (macOS/Linux)
    brew install llama.cpp
    # Start a local OpenAI-compatible server with a web UI:
    llama-server -hf tensorblock/SecurityLLM-GGUF:Q2_K
    # Run inference directly in the terminal:
    llama-cli -hf tensorblock/SecurityLLM-GGUF:Q2_K
    Install from WinGet (Windows)
    winget install llama.cpp
    # Start a local OpenAI-compatible server with a web UI:
    llama-server -hf tensorblock/SecurityLLM-GGUF:Q2_K
    # Run inference directly in the terminal:
    llama-cli -hf tensorblock/SecurityLLM-GGUF:Q2_K
    Use pre-built binary
    # Download pre-built binary from:
    # https://github.com/ggerganov/llama.cpp/releases
    # Start a local OpenAI-compatible server with a web UI:
    ./llama-server -hf tensorblock/SecurityLLM-GGUF:Q2_K
    # Run inference directly in the terminal:
    ./llama-cli -hf tensorblock/SecurityLLM-GGUF:Q2_K
    Build from source code
    git clone https://github.com/ggerganov/llama.cpp.git
    cd llama.cpp
    cmake -B build
    cmake --build build -j --target llama-server llama-cli
    # Start a local OpenAI-compatible server with a web UI:
    ./build/bin/llama-server -hf tensorblock/SecurityLLM-GGUF:Q2_K
    # Run inference directly in the terminal:
    ./build/bin/llama-cli -hf tensorblock/SecurityLLM-GGUF:Q2_K
    Use Docker
    docker model run hf.co/tensorblock/SecurityLLM-GGUF:Q2_K
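    Once `llama-server` is running, it accepts OpenAI-style chat-completion requests. A minimal sketch using only the standard library; `localhost:8080` is llama-server's default port and an assumption here (adjust if you started the server with `--port`). The network call is left commented so nothing runs without a server.

    ```python
    import json
    import urllib.request

    # Build an OpenAI-style chat request for the local llama-server.
    def build_chat_request(prompt, model="tensorblock/SecurityLLM-GGUF:Q2_K"):
        return {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        }

    payload = build_chat_request("Explain a buffer overflow in one sentence.")
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # Uncomment once llama-server is running locally:
    # with urllib.request.urlopen(req) as resp:
    #     reply = json.loads(resp.read())
    #     print(reply["choices"][0]["message"]["content"])
    ```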
  • LM Studio
  • Jan
  • Ollama

    How to use tensorblock/SecurityLLM-GGUF with Ollama:

    ollama run hf.co/tensorblock/SecurityLLM-GGUF:Q2_K
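    Besides the interactive CLI, Ollama exposes a local REST API (default `localhost:11434`). A minimal sketch of its `/api/generate` endpoint using only the standard library; the network call is left commented so nothing runs without an Ollama server.

    ```python
    import json
    import urllib.request

    # Build a request body for Ollama's /api/generate endpoint.
    def build_ollama_request(prompt, model="hf.co/tensorblock/SecurityLLM-GGUF:Q2_K"):
        return {"model": model, "prompt": prompt, "stream": False}

    payload = build_ollama_request("List three common web vulnerabilities.")
    # Uncomment with Ollama running locally:
    # req = urllib.request.Request(
    #     "http://localhost:11434/api/generate",
    #     data=json.dumps(payload).encode("utf-8"),
    #     headers={"Content-Type": "application/json"},
    # )
    # print(json.loads(urllib.request.urlopen(req).read())["response"])
    ```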
  • Unsloth Studio

    How to use tensorblock/SecurityLLM-GGUF with Unsloth Studio:

    Install Unsloth Studio (macOS, Linux, WSL)
    curl -fsSL https://unsloth.ai/install.sh | sh
    # Run unsloth studio
    unsloth studio -H 0.0.0.0 -p 8888
    # Then open http://localhost:8888 in your browser
    # Search for tensorblock/SecurityLLM-GGUF to start chatting
    Install Unsloth Studio (Windows)
    irm https://unsloth.ai/install.ps1 | iex
    # Run unsloth studio
    unsloth studio -H 0.0.0.0 -p 8888
    # Then open http://localhost:8888 in your browser
    # Search for tensorblock/SecurityLLM-GGUF to start chatting
    Using HuggingFace Spaces for Unsloth
    # No setup required
    # Open https://huggingface.co/spaces/unsloth/studio in your browser
    # Search for tensorblock/SecurityLLM-GGUF to start chatting
  • Docker Model Runner

    How to use tensorblock/SecurityLLM-GGUF with Docker Model Runner:

    docker model run hf.co/tensorblock/SecurityLLM-GGUF:Q2_K
  • Lemonade

    How to use tensorblock/SecurityLLM-GGUF with Lemonade:

    Pull the model
    # Download Lemonade from https://lemonade-server.ai/
    lemonade pull tensorblock/SecurityLLM-GGUF:Q2_K
    Run and chat with the model
    lemonade run user.SecurityLLM-GGUF-Q2_K
    List all available models
    lemonade list
SecurityLLM-GGUF
6.24 GB
  • 1 contributor
History: 8 commits
morriszms
Keep Q2_K/Q3_K_M gguf only
48684a9 verified 3 months ago
  • .gitattributes
    2.23 kB
    Upload folder using huggingface_hub over 1 year ago
  • README.md
    6.52 kB
    Update README.md 10 months ago
  • SecurityLLM-Q2_K.gguf
    2.72 GB
    Upload folder using huggingface_hub over 1 year ago
  • SecurityLLM-Q3_K_M.gguf
    3.52 GB
    Upload folder using huggingface_hub over 1 year ago