---
license: mit
base_model:
  - meta-llama/Meta-Llama-3-8B-Instruct
datasets:
  - HuggingFaceH4/ultrachat_200k
  - walledai/HarmBench
language:
  - en
new_version: ASSELab/Diffusion-Llama-3-8B-Instruct
tags:
  - pytorch
  - llama
  - llama-3
  - DAT
  - robust
  - adversarial
library_name: transformers
---

# DAT - Distributional Adversarial Training


DAT uses continuous adversarial training on diffusion-based adversarial examples to close the gap between the empirical and the population robust risk. This model is a fine-tune of meta-llama/Meta-Llama-3-8B-Instruct.

**Note:** this model was NOT trained adversarially. It is an ablation/baseline fine-tuned on the diffusion-generated data alone.

For further information, see our paper or the repository: https://github.com/ASSELab/DAT
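
## Usage

A minimal inference sketch with `transformers`, the library this card declares. The repo id below is an assumption (it points at the newer version listed in the metadata, `ASSELab/Diffusion-Llama-3-8B-Instruct`); substitute the id of this model if you want the baseline itself. Llama-3-Instruct chat formatting via `apply_chat_template` is standard for the base model, but check the tokenizer config of the release you load.

```python
# Hedged usage sketch for a Llama-3-8B-Instruct fine-tune.
# model_id is an assumption; replace with the id shown on this model page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ASSELab/Diffusion-Llama-3-8B-Instruct"  # assumed repo id


def build_messages(user_prompt: str) -> list[dict]:
    """Chat messages in the Llama-3-Instruct format."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply (requires GPU memory for 8B weights)."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Apply the model's chat template and append the assistant header.
    input_ids = tokenizer.apply_chat_template(
        build_messages(prompt), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Explain adversarial training in one sentence."))
```

`device_map="auto"` and `bfloat16` are one reasonable memory configuration for an 8B model; quantized loading (e.g. via `bitsandbytes`) works as well.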

## Citation

```bibtex
@misc{,
      title={},
      author={},
      year={2026},
      eprint={},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```