Instructions to use coliseum034/coliseum-attacker-dan with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

- Libraries: Transformers
- Notebooks: Google Colab, Kaggle
- Local Apps: Unsloth Studio

How to use coliseum034/coliseum-attacker-dan with Transformers:

```python
# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("coliseum034/coliseum-attacker-dan", dtype="auto")
```
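Since the base checkpoint is an instruction-tuned Qwen2.5 model, chat-style prompting normally goes through the tokenizer's chat template. A minimal sketch, assuming the fine-tune keeps the base model's chat template and loading the causal-LM head explicitly (the prompt is only an illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "coliseum034/coliseum-attacker-dan"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-formatted prompt (assumes the Qwen2.5 chat template is present)
messages = [{"role": "user", "content": "Draft a test plan for probing prompt-injection defenses."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```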
How to use coliseum034/coliseum-attacker-dan with Unsloth Studio:

Install Unsloth Studio (macOS, Linux, WSL):

```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for coliseum034/coliseum-attacker-dan to start chatting
```

Install Unsloth Studio (Windows):

```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for coliseum034/coliseum-attacker-dan to start chatting
```

Using Hugging Face Spaces for Unsloth (no setup required):

- Open https://huggingface.co/spaces/unsloth/studio in your browser
- Search for coliseum034/coliseum-attacker-dan to start chatting
Load the model with FastModel:

```python
# pip install unsloth
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="coliseum034/coliseum-attacker-dan",
    max_seq_length=2048,
)
```
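`FastModel.from_pretrained` returns a regular `transformers` model and tokenizer, so generation works the usual way once the model is loaded. A minimal sketch continuing from the snippet above (the prompt is only an illustration):

```python
# Continuing from the FastModel snippet above
prompt = "Generate an adversarial test case for an input-validation filter:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```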
---
base_model: unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- safetensors
- security
- red-teaming
- adversarial-testing
---
# coliseum034/coliseum-attacker-dan

This model is a fine-tuned version of `unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit`. It was trained up to 2x faster using [Unsloth](https://github.com/unslothai/unsloth) together with Hugging Face's TRL library.

This model is optimized for adversarial interactions, red-teaming, and generating edge-case scenarios for testing multi-agent security systems.
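For orientation, a fine-tune of this kind is typically set up by loading the 4-bit base through Unsloth, attaching LoRA adapters, and passing the model to TRL's `SFTTrainer`. The sketch below is illustrative only: the dataset path, LoRA settings, and batch sizes are assumptions, not the card's actual training configuration, and argument names differ slightly across TRL versions (newer releases use `processing_class` instead of `tokenizer`).

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the pre-quantized 4-bit base checkpoint through Unsloth
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are illustrative values)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset with a "text" column of formatted adversarial dialogues
dataset = load_dataset("json", data_files="red_team_dialogues.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=4,      # matches the 4 epochs reported below
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```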
## Model Details

* **License:** Apache 2.0
* **Base Model:** `unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit`
* **Architecture:** Qwen2 (0.5B parameters)
* **Language:** English
* **Quantization:** 4-bit (bitsandbytes)
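Because the checkpoint lineage is a bitsandbytes 4-bit quantization, the model can also be loaded explicitly in 4-bit. A minimal sketch, assuming `bitsandbytes` is installed and a CUDA device is available (the NF4 settings are typical defaults, not values stated on this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "coliseum034/coliseum-attacker-dan"

# Typical NF4 4-bit configuration (assumed, not from the card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```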
## Training & Evaluation Metrics

The model was trained for 4 epochs (276 global steps in total) with smart gradient offloading to reduce VRAM usage, reaching a final validation perplexity of ~7.380.

### Per-Epoch Results

| Epoch | Training Loss | Validation Loss | Perplexity (PPL) |
| :---: | :---: | :---: | :---: |
| **1.0** | 2.3769 | 2.2334 | 9.332 |
| **2.0** | 2.0010 | 2.0595 | 7.842 |
| **3.0** | 1.8116 | 1.9976 | 7.371 |
| **4.0** | 1.7036 | 1.9987 | 7.380 |
### Final Held-Out Metrics

* **Final Training Loss:** `1.7036`
* **Final Evaluation Loss:** `1.9987`
* **Final Perplexity:** `7.380` (the exponential of the evaluation loss; see the check below)
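Perplexity is simply the exponential of the evaluation loss, so the PPL column can be reproduced directly from the loss column (to within rounding of the reported losses). A quick check:

```python
import math

# Validation losses per epoch, as reported in the table above
val_losses = [2.2334, 2.0595, 1.9976, 1.9987]

for epoch, loss in enumerate(val_losses, start=1):
    # Matches the PPL column up to rounding of the reported losses
    print(f"epoch {epoch}: ppl = {math.exp(loss):.3f}")
```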
### Training Hyperparameters & Performance

* **Global Steps:** 276
* **Total Training Runtime:** ~26 minutes 8 seconds (1568.302 seconds)
* **Training Samples per Second:** 2.778
* **Training Steps per Second:** 0.176
* **Total FLOPs:** 4.179 × 10^15

These throughput figures are mutually consistent; see the quick check below.
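The step rate follows from the step count and runtime, and the sample and step rates together imply an effective batch size of roughly 16 samples per optimizer step. A quick check (the per-epoch sample count is a derived estimate, not a figure from the card):

```python
runtime_s = 1568.302
global_steps = 276
samples_per_s = 2.778

steps_per_s = global_steps / runtime_s              # ≈ 0.176, matching the reported value
total_samples = samples_per_s * runtime_s           # ≈ 4357 samples seen across 4 epochs
print(steps_per_s, total_samples / 4)               # ≈ 1089 samples per epoch (estimate)
print(samples_per_s / steps_per_s)                  # ≈ 15.8 → effective batch size of ~16
```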
## Framework Versions

* PEFT
* Transformers
* Unsloth
* TRL
* Safetensors
* PyTorch
## Usage

This model can be used through the standard `transformers` API (for example with the text-generation `pipeline`) or served with `text-generation-inference`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "coliseum034/coliseum-attacker-dan"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Initiate testing parameters for potential authorization bypasses:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
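If you prefer the high-level API, the same model also works with the text-generation `pipeline`; the sampling parameters below are illustrative, not recommendations from the card:

```python
from transformers import pipeline

# High-level alternative to the manual tokenizer/model calls above
generator = pipeline("text-generation", model="coliseum034/coliseum-attacker-dan")

result = generator(
    "Initiate testing parameters for potential authorization bypasses:",
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```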