Privacy Collapse: Benign Fine-Tuning Can Break Contextual Privacy in Language Models
Paper · 2601.15220 · Published
Tags: LLM, trustworthy AI, AI security, privacy, calibration, hallucination