---
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- lora
- semantic-router
- jailbreak-classification
- text-classification
- candle
- rust
language:
- en
pipeline_tag: text-classification
library_name: candle
---
# lora_jailbreak_classifier_modernbert-base_model
## Model Description
This is a LoRA (Low-Rank Adaptation) fine-tuned model based on **ModernBERT-base** for jailbreak detection (text classification).
This model is part of the [semantic-router](https://github.com/vllm-project/semantic-router) project and is optimized for use with the Candle framework in Rust.
## Model Details
- **Base Model**: ModernBERT-base
- **Task**: Jailbreak detection (text classification)
- **Framework**: Candle (Rust)
- **Model Size**: ~571MB
- **LoRA Rank**: 32
- **LoRA Alpha**: 64
- **Target Modules**: attn.Wqkv, attn.Wo, mlp.Wi, mlp.Wo
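The rank and alpha above determine how strongly the adapter perturbs each target module: the low-rank update `B·A` is scaled by `alpha / r`, here 64 / 32 = 2. A minimal NumPy sketch of a LoRA-adapted linear layer (shapes and values are illustrative, not taken from the released weights):

```python
import numpy as np

# Minimal sketch of a LoRA-adapted linear layer. With the card's
# hyperparameters (r=32, alpha=64), the low-rank update is scaled by
# alpha / r = 2.0. Shapes are illustrative only.
d_in, d_out, r, alpha = 768, 768, 32, 64
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01      # trainable down-projection
B = np.zeros((d_out, r))                       # trainable up-projection, zero-init

x = rng.standard_normal(d_in)
scaling = alpha / r  # = 2.0

# Adapted forward pass: y = W x + (alpha / r) * B (A x)
y = W @ x + scaling * (B @ (A @ x))
```

Because `B` is zero-initialized, training starts from exactly the base model's behavior; the adapter only diverges from it as `B` is updated.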
## Usage
### With semantic-router (Recommended)
```python
from semantic_router import SemanticRouter
# The model will be automatically downloaded and used
router = SemanticRouter()
results = router.classify_batch(["Your text here"])
```
### With Candle (Rust)
```rust
use candle_core::{DType, Device};
use candle_nn::VarBuilder;
use candle_transformers::models::bert::{BertModel, Config};

// Load the base model with Candle. The LoRA adapter weights in
// `model.safetensors` are applied on top of the base weights separately;
// the base-weights path below is illustrative.
let device = Device::Cpu;
let config: Config = serde_json::from_str(&std::fs::read_to_string("config.json")?)?;
let vb = unsafe {
    VarBuilder::from_mmaped_safetensors(&["base_model.safetensors"], DType::F32, &device)?
};
let model = BertModel::load(vb, &config)?;
```
## Training Details
This model was fine-tuned with the LoRA (Low-Rank Adaptation) technique:
- **Rank**: 32
- **Alpha**: 64
- **Dropout**: 0.1
- **Target Modules**: attn.Wqkv, attn.Wo, mlp.Wi, mlp.Wo
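Since the LoRA update is linear, an adapter trained with these settings can be folded into the frozen weight for inference, avoiding the extra matmuls at serving time. A sketch with random stand-in matrices (not the real weights):

```python
import numpy as np

# Fold a LoRA update into the base weight: W_merged = W + (alpha / r) * B A.
# Matrices are random stand-ins; real shapes depend on the target module.
r, alpha = 32, 64
rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64))
A = rng.standard_normal((r, 64))
B = rng.standard_normal((64, r))

W_merged = W + (alpha / r) * (B @ A)

# The merged layer gives the same output as base + adapter applied separately.
x = rng.standard_normal(64)
y_separate = W @ x + (alpha / r) * (B @ (A @ x))
y_merged = W_merged @ x
```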
## Performance
Jailbreak detection (text classification).
For detailed performance metrics, see the [training results](https://github.com/vllm-project/semantic-router/blob/main/training-result.md).
## Files
- `model.safetensors`: LoRA adapter weights
- `config.json`: Model configuration
- `lora_config.json`: LoRA-specific configuration
- `tokenizer.json`: Tokenizer configuration
- `label_mapping.json`: Label mappings for classification
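As one illustration of how `label_mapping.json` might be consumed, assuming a simple id-to-label schema (the schema below is an assumption for this sketch, not the file's documented format):

```python
import json

# Hypothetical id->label schema for label_mapping.json; the real file's
# layout may differ. This is for illustration only.
raw = json.loads('{"id2label": {"0": "benign", "1": "jailbreak"}}')

def index_to_label(idx: int) -> str:
    """Map an argmax class index from the classifier head to its label."""
    return raw["id2label"][str(idx)]
```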
## Citation
If you use this model, please cite:
```bibtex
@misc{semantic-router-lora,
  title={LoRA Fine-tuned Models for Semantic Router},
  author={Semantic Router Team},
  year={2025},
  url={https://github.com/vllm-project/semantic-router}
}
```
## License
Apache 2.0