Instructions for using L33tcode/llama-3-8b-CEH-gguf with libraries, inference providers, notebooks, and local apps. The sections below cover each option.
- Libraries
- Transformers
How to use L33tcode/llama-3-8b-CEH-gguf with Transformers:
# Load model directly
from transformers import AutoModel

# GGUF repos need the gguf_file argument so transformers knows which checkpoint to dequantize;
# the filename below is the Q3_K_S file used in the llama-cpp-python example further down
model = AutoModel.from_pretrained(
    "L33tcode/llama-3-8b-CEH-gguf",
    gguf_file="llama-3-CEH-q3_k_s.gguf",
    dtype="auto",
)
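The snippet above only loads the weights. For actual text generation through Transformers, the causal-LM class is the usual route; the sketch below is an assumption based on the standard Transformers API rather than an official example, and note that loading a GGUF file this way dequantizes it to full precision, so memory use is much higher than with llama.cpp.

# Minimal generation sketch; the prompt and generation settings are illustrative
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "L33tcode/llama-3-8b-CEH-gguf"
gguf_file = "llama-3-CEH-q3_k_s.gguf"

tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)

prompt = "Explain the difference between symmetric and asymmetric encryption."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))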
- llama-cpp-python
How to use L33tcode/llama-3-8b-CEH-gguf with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="L33tcode/llama-3-8b-CEH-gguf",
    filename="llama-3-CEH-q3_k_s.gguf",
)
# The upstream card defines no input example; the prompt below is illustrative
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain how a SQL injection attack works."}
    ]
)
- Notebooks
- Google Colab
- Kaggle
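Neither notebook link comes with a ready-made cell in this card; the following is a minimal sketch of what a Colab or Kaggle cell could look like, reusing the llama-cpp-python snippet above (the context size and prompt are placeholders):

# Run inside a Google Colab or Kaggle notebook cell
!pip install -q llama-cpp-python

from llama_cpp import Llama

# Downloads the Q3_K_S file from the Hub and loads it on the notebook VM
llm = Llama.from_pretrained(
    repo_id="L33tcode/llama-3-8b-CEH-gguf",
    filename="llama-3-CEH-q3_k_s.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the difference between XSS and CSRF?"}]
)
print(out["choices"][0]["message"]["content"])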
- Local Apps
- llama.cpp
How to use L33tcode/llama-3-8b-CEH-gguf with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf L33tcode/llama-3-8b-CEH-gguf:Q3_K_S

# Run inference directly in the terminal:
llama-cli -hf L33tcode/llama-3-8b-CEH-gguf:Q3_K_S
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf L33tcode/llama-3-8b-CEH-gguf:Q3_K_S

# Run inference directly in the terminal:
llama-cli -hf L33tcode/llama-3-8b-CEH-gguf:Q3_K_S
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf L33tcode/llama-3-8b-CEH-gguf:Q3_K_S

# Run inference directly in the terminal:
./llama-cli -hf L33tcode/llama-3-8b-CEH-gguf:Q3_K_S
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf L33tcode/llama-3-8b-CEH-gguf:Q3_K_S

# Run inference directly in the terminal:
./build/bin/llama-cli -hf L33tcode/llama-3-8b-CEH-gguf:Q3_K_S
Use Docker
docker model run hf.co/L33tcode/llama-3-8b-CEH-gguf:Q3_K_S
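Regardless of the install method, the llama-server commands above expose an OpenAI-compatible HTTP API (port 8080 by default), so the running model can also be queried from code instead of the web UI. A minimal sketch using the openai Python client; the port and the placeholder model name are assumptions based on llama.cpp defaults:

# pip install openai
from openai import OpenAI

# llama-server listens on http://localhost:8080 unless --port is changed
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama-3-8b-CEH",  # llama-server generally ignores this field
    messages=[{"role": "user", "content": "List the phases of a penetration test."}],
)
print(response.choices[0].message.content)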
- LM Studio
- Jan
- Ollama
How to use L33tcode/llama-3-8b-CEH-gguf with Ollama:
ollama run hf.co/L33tcode/llama-3-8b-CEH-gguf:Q3_K_S
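After the model has been pulled with the command above, it can also be called from Python through the ollama package; the model tag below simply mirrors the run command and the prompt is illustrative:

# pip install ollama  (requires a running Ollama instance)
import ollama

response = ollama.chat(
    model="hf.co/L33tcode/llama-3-8b-CEH-gguf:Q3_K_S",
    messages=[{"role": "user", "content": "Summarize the OWASP Top 10 in one sentence each."}],
)
print(response["message"]["content"])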
- Unsloth Studio
How to use L33tcode/llama-3-8b-CEH-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for L33tcode/llama-3-8b-CEH-gguf to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for L33tcode/llama-3-8b-CEH-gguf to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for L33tcode/llama-3-8b-CEH-gguf to start chatting
- Docker Model Runner
How to use L33tcode/llama-3-8b-CEH-gguf with Docker Model Runner:
docker model run hf.co/L33tcode/llama-3-8b-CEH-gguf:Q3_K_S
- Lemonade
How to use L33tcode/llama-3-8b-CEH-gguf with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull L33tcode/llama-3-8b-CEH-gguf:Q3_K_S
Run and chat with the model
lemonade run user.llama-3-8b-CEH-gguf-Q3_K_S
List all available models
lemonade list
Model Card for (llama-3-CEH) LLama3-CyberSec (GGUF)
Model Details
- Developed by: L33tcode
- License: apache-2.0
- Fine-tuned from model: cognitivecomputations/dolphin-2.9-llama3-8b
- Model Format: GGUF
- Training Data: Datasets of cybersecurity methodologies and code
Model Description
(llama-3-CEH) LLama3-CyberSec (GGUF) is a fine-tuned version of the Llama 3 language model, adapted specifically for cybersecurity applications and distributed in the GGUF format for efficient local inference. It was fine-tuned with Unsloth and Hugging Face's TRL library (roughly 2x faster than a standard fine-tuning setup) to address tasks such as identifying vulnerabilities, analyzing security protocols, and explaining complex cybersecurity concepts.
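The exact training script is not published in this card. The following is only a rough sketch of the kind of Unsloth + TRL supervised fine-tuning setup described above; the dataset path, sequence length, and LoRA hyperparameters are placeholders rather than the values actually used.

# Hypothetical Unsloth + TRL fine-tuning sketch, not the actual training script
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Base model listed under Model Details
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="cognitivecomputations/dolphin-2.9-llama3-8b",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is updated
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder dataset of cybersecurity methodology text
dataset = load_dataset("json", data_files="cybersec_methodologies.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()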
Intended Use
(llama-3-CEH) LLama3-CyberSec (GGUF) is intended for use by cybersecurity professionals and researchers to:
- Identify and analyze potential security vulnerabilities.
- Understand and implement various cybersecurity methodologies.
- Analyze code for security flaws and potential exploits.
- Generate reports on security findings and best practices.
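As a concrete illustration of the code-analysis use above, a review prompt can be sent through the llama-cpp-python setup shown earlier; the vulnerable snippet and the prompt wording are invented for this example.

# Assumes `llm` was created with Llama.from_pretrained as in the usage section above
vulnerable_snippet = """
import sqlite3

def get_user(db, username):
    cur = db.cursor()
    # User input is concatenated straight into the SQL string
    cur.execute("SELECT * FROM users WHERE name = '" + username + "'")
    return cur.fetchall()
"""

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a security code reviewer."},
        {"role": "user", "content": "Identify any vulnerabilities in this code and suggest a fix:\n" + vulnerable_snippet},
    ]
)
print(result["choices"][0]["message"]["content"])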
Limitations and Risks
Uncensored Nature
This model is uncensored, meaning it can generate content that might be considered unethical or harmful. Users must exercise caution and ethical judgment when using this model. It is crucial to use the model responsibly and report any identified vulnerabilities to the relevant authorities or organizations instead of exploiting them.
Ethical Use
- Responsibility: Users are fully responsible for any outcomes resulting from the use of this model. The creators of LLama3-CyberSec (GGUF) are not liable for any harm or damage caused.
- Security Reporting: Identified vulnerabilities or bugs should be reported to the affected organization or appropriate authority to improve security.
Recommendations for Use
- Ethical Hacking and Security Testing: Use the model to ethically find and report vulnerabilities.
- Education and Training: Employ the model for educating and training individuals in cybersecurity practices.
- Research and Development: Utilize the model for advancing cybersecurity research and improving existing measures.
Future Work
Future versions of (llama-3-CEH) LLama3-CyberSec (GGUF) may include:
- Enhanced filtering to prevent the generation of unethical or harmful content.
- Additional training data covering more cybersecurity aspects.
- Improved guidance and documentation for responsible use.
Disclaimer
(llama-3-CEH) LLama3-CyberSec (GGUF) is provided "as is" without any warranties or guarantees. Use this model at your own risk and comply with all applicable laws and regulations. The developers disclaim any liability for damage or harm resulting from its use.
By using (llama-3-CEH) LLama3-CyberSec (GGUF), you agree to these terms and commit to using the model responsibly and ethically.
For more information on responsible use and best practices in cybersecurity, refer to established industry guidance.