---
base_model: unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- safetensors
- security
- red-teaming
---
# coliseum034/coliseum-attacker-wild
This model is a fine-tuned version of `unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit`, trained up to 2x faster using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
The model is geared toward security operations, multi-agent system simulations, and red-teaming applications.
## ⚙️ Model Details
* **License:** Apache 2.0
* **Base Model:** `unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit`
* **Architecture:** Qwen2 (0.5B parameters)
* **Language:** English
* **Quantization:** 4-bit (bitsandbytes)
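Since the base checkpoint is a bitsandbytes 4-bit quant, it can be loaded with an explicit quantization config. A minimal sketch; the NF4 quant type and bfloat16 compute dtype below are common bitsandbytes defaults, not settings confirmed by this card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 settings: assumed defaults, not values stated on the card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "coliseum034/coliseum-attacker-wild",
    quantization_config=bnb_config,
    device_map="auto",
)
```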
## 📊 Training & Evaluation Metrics
The model was trained for 3 epochs (921 global steps). Validation perplexity reached its minimum of ~5.111 at epoch 2 and ended at ~5.168 after epoch 3.
### Per-Epoch Results
| Epoch | Training Loss | Validation Loss | Perplexity (PPL) |
| :---: | :---: | :---: | :---: |
| **1.0** | 1.6638 | 1.6605 | 5.262 |
| **2.0** | 1.5345 | 1.6314 | 5.111 |
| **3.0** | 1.4212 | 1.6425 | 5.168 |
### Final Held-Out Metrics
* **Final Training Loss:** `1.4212`
* **Final Evaluation Loss:** `1.6425`
* **Final Perplexity:** `5.168`
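The perplexity figures above are simply the exponential of the corresponding validation loss (PPL = e^loss), which gives a quick way to sanity-check the table:

```python
import math

# Perplexity is exp(cross-entropy loss); reproduces the per-epoch table
for epoch, val_loss in [(1, 1.6605), (2, 1.6314), (3, 1.6425)]:
    print(f"epoch {epoch}: PPL = {math.exp(val_loss):.3f}")
# epoch 1: PPL = 5.262
# epoch 2: PPL = 5.111
# epoch 3: PPL = 5.168
```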
### Training Hyperparameters & Performance
* **Global Steps:** 921
* **Total Training Runtime:** ~36 minutes, 48 seconds (2207.98 seconds)
* **Training Samples per Second:** 6.658
* **Training Steps per Second:** 0.417
* **Total FLOPs:** 8.527 × 10^15
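These throughput numbers are internally consistent: dividing global steps by runtime reproduces the reported steps per second, and samples/s divided by steps/s suggests an effective batch size of roughly 16 (an inference from the numbers, not a value stated on the card):

```python
steps, runtime_s = 921, 2207.98
samples_per_s = 6.658

steps_per_s = steps / runtime_s
print(f"steps/s: {steps_per_s:.3f}")                        # 0.417, matching the card
print(f"effective batch size: {samples_per_s / steps_per_s:.1f}")  # ~16.0 (inferred)
```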
## 💻 Framework Versions
* PEFT
* Transformers
* Unsloth
* TRL
* Safetensors
* PyTorch
## 🚀 Usage
The model can be used with the standard `transformers` generation API or served via `text-generation-inference`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "coliseum034/coliseum-attacker-wild"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Qwen2.5 is an instruct model, so format the prompt with its chat template
messages = [{"role": "user", "content": "Analyze this sequence for potential exploitation vectors:"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```