---
license: mit
---

# Adversarial Energy-Latency LoRA Attack on Qwen2.5-1.5B

A proof-of-concept stealthy efficiency attack implemented via LoRA fine-tuning: the adapter inflates generation latency and memory use while keeping output quality nearly unchanged.

## Attack Impact

  • Latency increase: +{latency_increase:.1f}%
  • GPU memory increase: +{mem_increase:.1f}%
  • Perplexity change: {ppl_change_pct:+.1f}% (minimal quality loss)

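The percentage figures above can be reproduced with a simple wall-clock comparison between the base and attacked models. A minimal, stdlib-only sketch (the helper names are illustrative and not part of this repo):

```python
import time

def mean_latency(fn, n_runs=5):
    """Average wall-clock seconds per call to fn over n_runs calls."""
    start = time.perf_counter()
    for _ in range(n_runs):
        fn()
    return (time.perf_counter() - start) / n_runs

def percent_increase(baseline, attacked):
    """Relative change of the attacked measurement vs. the baseline, in percent."""
    return (attacked - baseline) / baseline * 100.0
```

In practice you would pass `lambda: model.generate(**inputs, max_new_tokens=...)` for the base and attacked models and feed the two averages to `percent_increase`.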
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the clean Qwen2.5-1.5B base model and its tokenizer
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

# Apply the adversarial LoRA adapter on top of the base weights
attacked_model = PeftModel.from_pretrained(base_model, "Abdennour07/adversarial-qwen-lora-attack")
```
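Once both models are loaded as above, the quality claim can be checked by comparing perplexities, i.e. exp of the mean per-token negative log-likelihood on a held-out text (in `transformers`, the mean NLL is the loss returned by a forward pass with `labels=input_ids`). A stdlib sketch of the aggregation step (helper names are illustrative, not from this repo):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

def ppl_change_pct(base_nlls, attacked_nlls):
    """Percent change in perplexity of the attacked model vs. the base model."""
    return (perplexity(attacked_nlls) / perplexity(base_nlls) - 1.0) * 100.0
```

A small `ppl_change_pct` value is what "minimal quality loss" means in the impact figures above.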


**Author:** ALOUACH Abdennour

**License:** MIT