Text Generation
Transformers
Safetensors
PyTorch
English
llama
llama-3
DAT
robust
adversarial
conversational
text-generation-inference
JonasDornbusch committed (verified)
Commit 18e868c · 1 Parent(s): 02927e6

Update README.md

Files changed (1)
  1. README.md +11 -7
README.md CHANGED
@@ -16,27 +16,31 @@ tags:
 - robust
 - adversarial
 library_name: transformers
+paper:
+  title: "Closing the Distribution Gap in Adversarial Training for LLMs"
+  url: "https://arxiv.org/abs/2602.15238"
 ---
 
 # DAT - Distributional Adversarial Training
 
-[![arXiv](https://img.shields.io/badge/arXiv-2511.04316-b31b1b.svg)](...)
+[![arXiv](https://img.shields.io/badge/arXiv-2602.15238-b31b1b.svg)](https://arxiv.org/abs/2602.15238)
 [![GitHub](https://img.shields.io/badge/GitHub-DAT-181717?logo=github&logoColor=white)](https://github.com/ASSELab/DAT)
 
 DAT utilizes [continuous adversarial training](https://arxiv.org/abs/2405.15589) on [diffusion-based](https://arxiv.org/abs/2511.00203v1) adversarial examples to close the gap between empirical and population-robust risk.
 We fine-tune [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
 
-For further information, consult our paper []() or repository []()
+For further information, consult our paper [https://arxiv.org/abs/2602.15238](https://arxiv.org/abs/2602.15238) or repository [https://github.com/ASSELab/DAT](https://github.com/ASSELab/DAT)
 
 ## Citation
 
 ```tex
-@misc{,
-title={},
-author={},
+@misc{hu2026closingdistributiongapadversarial,
+title={Closing the Distribution Gap in Adversarial Training for LLMs},
+author={Chengzhi Hu and Jonas Dornbusch and David Lüdke and Stephan Günnemann and Leo Schwinn},
 year={2026},
-eprint={},
+eprint={2602.15238},
 archivePrefix={arXiv},
-primaryClass={cs.LG}
+primaryClass={cs.LG},
+url={https://arxiv.org/abs/2602.15238},
 }
 ```
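The updated card states the model is a fine-tune of meta-llama/Meta-Llama-3-8B-Instruct, so it inherits the Llama 3 instruct chat layout. A minimal sketch of that single-turn prompt format — reproduced from the base model's documented template, and assuming the DAT fine-tune keeps it unchanged; in practice, prefer `tokenizer.apply_chat_template` over hand-building prompts:

```python
# Sketch of the Llama 3 instruct chat layout used by the base model.
# Assumption: the DAT fine-tune keeps the base template unchanged.

def llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 instruct prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama3_prompt(
    "You are a helpful assistant.",
    "What is adversarial training?",
)
print(prompt.startswith("<|begin_of_text|>"))  # True
```

The trailing assistant header leaves the prompt open for the model to continue, which is what `add_generation_prompt=True` produces when using the tokenizer's chat template directly.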