How can we teach a robot to understand the nuances of privacy in elderly care? It starts with teaching it to recognize sensitive data.
A new conceptual paper introduces "Privacy Agents," AI agents designed to safeguard contextual integrity in care settings. To show the concept is feasible, the researchers from TU Wien needed to prove that an AI could identify PII in a real-world transcript.
We're proud that the tool they used for this proof-of-concept was fine-tuned on the Ai4Privacy/pii-masking-200k dataset.
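If you'd like to explore that same dataset yourself, here is a minimal sketch using the 🤗 datasets library. The field names in the comments are assumptions about the schema, not details taken from the paper, so check column_names before relying on them.

```python
# Minimal sketch: load the Ai4Privacy pii-masking-200k dataset and peek at one record.
# Field names such as "source_text" / "target_text" are assumptions; inspect column_names first.
from datasets import load_dataset

ds = load_dataset("ai4privacy/pii-masking-200k", split="train")

print(ds.column_names)                 # confirm the actual schema
example = ds[0]
print(example.get("source_text"))      # raw text containing synthetic PII (assumed field name)
print(example.get("target_text"))      # same text with PII replaced by placeholder tags (assumed)
```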
This is a perfect win-win: brilliant researchers are designing the future of privacy-aware robotics, and our open-source data helps provide the foundational tools to show it's possible. This is how conceptual breakthroughs become practical solutions.
What happens when an LLM "forgets" your data? A new paper reports it might not be gone for good.
The "Janus Interface" paper details a new attack that could recover forgotten PII through fine-tuning APIs. This is a solution-oriented paper because it highlights a problem that needs fixing.
Testing such a high-stakes attack requires equally high-stakes data. The Ai4Privacy 300k dataset was a key part of their evaluation, providing a testbed for extracting sensitive Social Security Numbers. Our dataset, with its synthetic structured SSN data, helped the researchers at Indiana University, Stanford & CISPA, and others demonstrate that their attack works on more than just emails. It could affect highly sensitive personal identifiers.
We're excited to see our open-source dataset used in such cutting-edge security research. It's a win for the community when researchers can use our resources to stress-test the safety of modern AI systems. This work is a direct and explicit call for stronger protections on fine-tuning interfaces.
Why choose between performance, privacy, and transparency when you can have all three?
We're highlighting a solution-oriented paper that introduces PRvL, an open-source toolkit for PII redaction. The interesting part: the researchers used the AI4Privacy-300K and AI4Privacy-500K datasets to train and benchmark their suite of models.
This is the power of open-source collaboration. We provide the comprehensive data foundation, and the community builds better solutions on top of it. It's a win for every organization when this research results in a powerful, free, and self-hostable tool that helps keep their data safe.
Big cheers to Leon Garza, Anantaa Kotal, Aritran Piplai, Lavanya Elluri, Prajit D., and Aman Chadha for pulling this off.
How do you prove your new, specialized AI model is a better solution? You test it against the best.
That's why we were excited to see the new AdminBERT paper from researchers at Nantes Université and others. To show the strength of their new model for French administrative texts, they compared it to the state-of-the-art generalist model, NERmemBERT.
The direct connection to our work is clear: NERmemBERT was trained on a combination of datasets, including the Pii-masking-200k dataset by Ai4Privacy.
This is a perfect win-win for the open-source community. Our foundational dataset helps create a strong, general-purpose benchmark, which in turn helps researchers prove the value of their specialized work. This is how we all get better.
The future of AI privacy isn't just in the cloud; it's on your device. But how do we build and validate these tools?
A new paper on "Rescriber" explores this with a tool that uses smaller LLMs for on-device anonymization. Building and validating such tools requires a strong data foundation. We're excited to see that the researchers used the Ai4Privacy open dataset to create their performance benchmarks.
This is our mission in action: providing the open-source data that helps innovators build and test better solutions that will give users more control over their privacy. It's a win for the community when our data helps prove the feasibility of on-device AI for data minimization, with reported user perceptions on par with state-of-the-art cloud models.
Shoutout to Jijie Zhou, Eryue Xu, Yaoyao Wu, and Tianshi Li on this one!
Building powerful multilingual AI shouldn't mean sacrificing user privacy.
We're highlighting a solution-oriented report from researchers Sahana Naganandh, Vaibhav V, and Thenmozhi M at Vellore Institute of Technology that investigates this exact challenge. The direct connection to our mission is clear: the paper showcases the PII43K dataset as a privacy-preserving alternative to high-risk, raw multilingual data.
The report notes that our dataset, with its structured anonymization, is a "useful option for privacy-centric AI applications." It's always a delight when academic research independently validates our data-first approach to solving real-world privacy problems.
We can't build more private AI if we can't measure privacy intelligence.
That's why we're highlighting the Priv-IQ benchmark, a new, solution-oriented framework for evaluating LLMs on eight key privacy competencies, from visual privacy to knowledge of privacy law. The direct connection to our work is clear: the researchers relied on samples from the Ai4Privacy dataset to build out questions for Privacy Risk Assessment and Multilingual Entity Recognition.
This is the power of open-source collaboration. We provide the data building blocks, and researchers construct powerful new evaluation tools on top of them. It's a win-win for the entire ecosystem when we can all benefit from transparent, data-driven benchmarks that help push for better, safer AI.
Traditional data leak prevention is failing. A new paper has a solution-oriented approach inspired by evolution.
The paper introduces a genetic-algorithm-driven method for detecting data leaks. To prove its effectiveness, researchers Anatoliy Sachenko, Petro V., Oleg Savenko, Viktor Ostroverkhov, and Bogdan Maslyyak from Casimir Pulaski Radom University and elsewhere needed a realistic, complex PII dataset. We're proud that the AI4Privacy PII 300k dataset was used as a key benchmark for their experiments.
This is the power of open-source collaboration. We provide complex, real-world data challenges, and brilliant researchers develop and share better solutions to solve them. It's a win for every organization when this research helps pave the way for more adaptive and intelligent Data Loss Prevention systems.
Anonymizing a prompt is half the battle. Reliably de-anonymizing the response is the other.
To build a truly reliable privacy pipeline, you have to test it. A new Master's thesis does just that, and our data was there for every step.
We're excited to showcase this work on handling confidential data in LLM prompts from Nedim Karavdic at Mälardalen University. To build their PII anonymization pipeline, they first trained a custom NER model. We're proud that the Ai4Privacy pii-masking-200k dataset was used as the foundational training data for this critical first step.
But it didn't stop there. The research also used our dataset to create the parallel data needed to train and test the generative "Seek" models for de-anonymization. It's a win-win when our open-source data not only helps build the proposed "better solution" but also helps prove why it's better by enabling a rigorous, data-driven comparison.
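To make the anonymize-then-restore idea concrete, here is an illustrative sketch of placeholder-based masking and un-masking. It is not the thesis's NER or Seek models; the regex, labels, and placeholder format are stand-ins.

```python
import re

# Toy anonymize/de-anonymize round trip; a real pipeline would use a trained NER model,
# not a single regex, and would cover many more PII types than email addresses.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str):
    """Replace each detected entity with an indexed placeholder and keep the mapping."""
    mapping = {}
    def repl(match):
        placeholder = f"[EMAIL_{len(mapping)}]"
        mapping[placeholder] = match.group(0)
        return placeholder
    return EMAIL_RE.sub(repl, text), mapping

def deanonymize(text: str, mapping: dict) -> str:
    """Restore the original values in the model's response."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

prompt, mapping = anonymize("Please email jane.doe@example.com about the invoice.")
# ... send `prompt` to the LLM and receive `response` ...
response = f"Sure, I will contact {list(mapping)[0]} today."
print(deanonymize(response, mapping))  # the original address is restored in the reply
```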
The tools we use to audit AI for privacy might be easier to fool than we think.
We're highlighting a critical paper that introduces "PoisonM," a novel attack that could make Membership Inference tests unreliable. The direct connection to our work is explicit: the researchers, Neal M., Atul Prakash, Amrita Roy Chowdhury, Ashish Hooda, Kassem Fawaz, Somesh Jha, Zhuohang Li, and Brad Malin used the AI4Privacy dataset as the "canary" dataset in their experiments to test the effectiveness of their attack on realistic, sensitive information.
This is the power of a healthy open-source ecosystem. We provide the foundational data that helps researchers pressure-test our collective assumptions about AI safety. It's a win for everyone when this leads to a more honest conversation about what our tools can and can't do, pushing us all to create better solutions.
What if an AI agent could be tricked into stealing your data, just by reading a tool's description? A new paper reports it's possible.
The "Attractive Metadata Attack" paper details this stealthy new threat. To measure the real-world impact of their attack, the researchers needed a source of sensitive data for the agent to leak. We're proud that the AI4Privacy corpus was used to create the synthetic user profiles containing standardized PII for their experiments.
This is a perfect win-win. Our open-source data helped researchers Kanghua Mo, 龙昱丞, Zhihao Li from Guangzhou University and The Hong Kong Polytechnic University to not just demonstrate a new attack, but also quantify its potential for harm. This data-driven evidence is what pushes the community to build better, execution-level defenses for AI agents.
🔗 Check out their paper to see how easily an agent's trust in tool metadata could be exploited: https://arxiv.org/pdf/2508.02110
How do you protect your prompts without breaking them? You need a smart sanitizer. A new system called Prϵϵmpt shows how.
The first, critical step in their solution is a high-performance Named Entity Recognition (NER) model to find the sensitive data. We're proud to see that these researchers, Amrita Roy Chowdhury, David Glukhov, Divyam Anshumaan, Prasad Chalasani, Nicolas Papernot, Somesh Jha, and Mihir Bellare, from the University of Michigan, the University of Toronto, the University of Wisconsin-Madison, the University of California San Diego (Rady School of Management), and Langroid Incorporated, fine-tuned their NER model on 10 high-risk categories from the AI4Privacy dataset.
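As a rough illustration of what restricting training data to a handful of categories can look like, here is a hedged sketch. The label names below are examples rather than the paper's actual ten categories, and the privacy_mask field is an assumption about the dataset schema.

```python
from datasets import load_dataset

# Hypothetical subset of high-risk PII labels; the paper's actual ten categories may differ.
HIGH_RISK = {"SSN", "CREDITCARDNUMBER", "IBAN", "PHONENUMBER", "EMAIL"}

ds = load_dataset("ai4privacy/pii-masking-200k", split="train")

def has_high_risk_entity(example):
    # "privacy_mask" is assumed to be a list of span dicts with a "label" key; verify the schema.
    mask = example.get("privacy_mask") or []
    labels = {span.get("label") for span in mask if isinstance(span, dict)}
    return bool(labels & HIGH_RISK)

high_risk_ds = ds.filter(has_high_risk_entity)
print(len(high_risk_ds), "examples contain at least one high-risk entity")
```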
This is a perfect win-win. Our open-source data helps provide the foundation for the critical detection engine, which in turn enables the community to build and test better solutions like Prϵϵmpt's innovative use of encryption and Differential Privacy.
🔗 Check out their paper for a deep dive into a formally private, high-utility prompt sanitizer: https://arxiv.org/pdf/2504.05147
Making LLMs fast with KV-cache sharing is great. A new paper reports it's also a huge privacy risk.
That's why we're excited to see the "SafeKV" paper from researchers at the University of Connecticut, Peking University, and others. Their solution-oriented framework selectively shares non-sensitive data while isolating PII. To validate the "Safe" part of their system, they needed a robust, multilingual privacy benchmark.
We're proud that the Ai4Privacy pii-masking dataset was used for this critical privacy evaluation.
This is a perfect win-win. Our open-source data enables researchers to build and validate more effective security solutions for core AI infrastructure. Their work, in turn, helps make the entire LLM ecosystem safer, showing that performance and privacy don't have to be mutually exclusive.
Kudos to Kexin Chu, Zecheng Lin, Dawei Xiang, 沈子旭, Jianchang Su, Cheng Chu, Yiwei Yang, Wenhui Zhang, Wenfei Wu, and Wei Zhang for this beautiful work.
After training 𝐒𝐦𝐨𝐥𝐋𝐌𝟑 on 𝟑𝟖𝟒 𝐇𝟏𝟎𝟎𝐬 for nearly a month, I've come to realize something most people overlook: 𝐢𝐧𝐟𝐫𝐚𝐬𝐭𝐫𝐮𝐜𝐭𝐮𝐫𝐞 𝐢𝐬 𝐭𝐡𝐞 𝐦𝐚𝐤𝐞-𝐨𝐫-𝐛𝐫𝐞𝐚𝐤 𝐟𝐚𝐜𝐭𝐨𝐫 𝐢𝐧 𝐋𝐋𝐌 𝐭𝐫𝐚𝐢𝐧𝐢𝐧𝐠. 🔥
Everyone talks about model architecture and data quality. And yes, those matter immensely. But here's what nobody tells you: when your training run fails at 2 AM because of mysterious 𝐍𝐂𝐂𝐋 𝐞𝐫𝐫𝐨𝐫𝐬, or when your expensive GPU cluster is running at 𝟔𝟎% 𝐞𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐜𝐲, the problem isn't your model. It's most probably a 𝐦𝐢𝐬𝐮𝐬𝐞 𝐨𝐟 𝐭𝐡𝐞 𝐡𝐚𝐫𝐝𝐰𝐚𝐫𝐞. 🛠️
Questions that seemed simple but had no clear answers: Why is 𝐌𝐨𝐄 𝐭𝐫𝐚𝐢𝐧𝐢𝐧𝐠 𝐬𝐥𝐨𝐰𝐞𝐫 𝐭𝐡𝐚𝐧 𝐝𝐞𝐧𝐬𝐞 𝐦𝐨𝐝𝐞𝐥𝐬? Which 𝐍𝐂𝐂𝐋 𝐟𝐥𝐚𝐠𝐬 should we actually set? How often should we checkpoint without killing throughput?
That's why we built 𝐓𝐡𝐞 𝐒𝐦𝐨𝐥 𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠 𝐏𝐥𝐚𝐲𝐛𝐨𝐨𝐤 📖: a complete guide covering everything from model architecture and data curation to the SmolLM3 training marathon, post-training techniques, and crucially, the 𝐢𝐧𝐟𝐫𝐚𝐬𝐭𝐫𝐮𝐜𝐭𝐮𝐫𝐞 𝐥𝐚𝐲𝐞𝐫 that most teams get wrong.
We validated real vs theoretical bandwidth across the entire stack: 𝐇𝐁𝐌𝟑 𝐡𝐢𝐭𝐭𝐢𝐧𝐠 𝟑 𝐓𝐁/𝐬, 𝐍𝐕𝐋𝐢𝐧𝐤 𝟒.𝟎 𝐫𝐞𝐚𝐜𝐡𝐢𝐧𝐠 𝟕𝟖𝟔 𝐆𝐁/𝐬, 𝐏𝐂𝐈𝐞 𝐆𝐞𝐧𝟒 𝐚𝐭 𝟏𝟒.𝟐 𝐆𝐁/𝐬. Then we ran collective operations across 𝟏𝟐𝟖 𝐆𝐏𝐔𝐬 (16 nodes, 8xH100s each) and measured how performance degrades at scale: all-reduce drops from 𝟒𝟖𝟎 𝐆𝐁/𝐬 on a single node to 𝟑𝟐𝟎-𝟑𝟓𝟎 𝐆𝐁/𝐬 across 16 nodes.
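If you want to run a similar probe on your own cluster, a bare-bones all-reduce benchmark looks roughly like the sketch below (this is not the playbook's code; payload size and iteration counts are illustrative, and it assumes a torchrun launch).

```python
# Bare-bones all-reduce bandwidth probe. Launch with e.g.:
#   torchrun --nproc_per_node=8 allreduce_bench.py
import os, time
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
rank, world = dist.get_rank(), dist.get_world_size()
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

size_bytes = 1 << 30                               # 1 GiB payload (illustrative)
tensor = torch.empty(size_bytes // 4, dtype=torch.float32, device="cuda")

for _ in range(5):                                 # warm-up iterations
    dist.all_reduce(tensor)
torch.cuda.synchronize()

iters = 20
start = time.perf_counter()
for _ in range(iters):
    dist.all_reduce(tensor)
torch.cuda.synchronize()
elapsed = (time.perf_counter() - start) / iters

# Standard ring all-reduce "bus bandwidth": 2 * (n - 1) / n * bytes / seconds
busbw = 2 * (world - 1) / world * size_bytes / elapsed / 1e9
if rank == 0:
    print(f"all-reduce bus bandwidth: {busbw:.1f} GB/s across {world} GPUs")

dist.destroy_process_group()
```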
If you've ever wondered why your training runs are slower than they should be, or you're planning to scale up and want to avoid expensive mistakes, this guide might save you weeks of debugging.
🎮 Live Model Demo: Upload an Android screenshot and instructions to see the model in action! Tonic/l-operator-demo
Built in a garage, funded by pre-orders, no VC. Now we’re scaling to 1k installer units.
We’re giving 50 limited-edition prototypes to investors, installers & researchers who want to co-design the sovereign smart home.
👇 Drop “EUSKERA” in the comments if you want an invite, tag a friend who still thinks Alexa is “convenient,” and smash ♥️ if AI should belong to people - not servers.
Just wanted to announce 🏭SmolFactory: it's the quickest and best way to fine-tune SmolLM3 and GPT-OSS-20B on Hugging Face!
Basically, it's an app you run on Hugging Face by duplicating the Space and launching your training directly on Hugging Face GPUs.
It helps you select datasets and models, fine-tune your model, spin up an experiment tracker you can check from your phone, push your model card, and even automatically create a demo on Hugging Face so you can test the result as soon as training is done!
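Under the hood, copying a Space into your own account is one call with huggingface_hub; the Space ID and hardware tier below are placeholders, so check the real SmolFactory Space name and the GPU tiers available to your account before running this.

```python
# Minimal sketch: duplicate a Hugging Face Space into your account and request GPU hardware.
# The Space ID and hardware tier are placeholders, not confirmed values.
from huggingface_hub import duplicate_space

repo = duplicate_space(
    from_id="Tonic/SmolFactory",   # hypothetical Space ID; replace with the real one
    private=True,                  # keep your training configuration private
    hardware="a10g-small",         # pick a GPU tier your account can be billed for
)
print("Your copy lives at:", repo)
```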
Are you sure the open-source LLM model you just downloaded is safe?
A recent paper on "Privacy Backdoors" reports a new vulnerability where pre-trained models can be poisoned before anyone ever fine-tunes them. This is a serious challenge for everyone building on open-source AI.
Instead of just pointing out problems, we believe in finding better solutions. To understand this threat, the researchers needed to test their attack on realistic data structures. They needed a dataset that could effectively simulate a high-stakes privacy attack, and we're proud that our Ai4Privacy dataset was used to provide this crucial benchmark. The paper reports that for our complex dataset, the privacy leakage on a non-poisoned model was almost zero. After the backdoor attack, that number reportedly jumped to 87%.
The Ai4Privacy dataset provided a realistic benchmark for their research. Our dataset, composed of synthetic identities, helped them demonstrate how a poisoned model could dramatically amplify privacy leakage.
This is why we champion open source: it enables the community to identify these issues and develop better, safer solutions together.
Kudos to the research team behind this study: Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, and Nicholas Carlini, from Oregon State University, the University of Maryland, Google DeepMind, and the ELLIS Institute Tübingen & MPI for Intelligent Systems.