Activity Feed

Recent Activity

MikeDoes 
posted an update 3 days ago
How can we teach a robot to understand the nuances of privacy in elderly care? It starts with teaching it to recognize sensitive data.

A new conceptual paper introduces "Privacy Agents," AI designed to safeguard contextual integrity in care settings. To demonstrate that the concept is feasible, the researchers from TU Wien needed to prove an AI could identify PII in a real-world transcript.

We're proud that the tool they used for this proof-of-concept was fine-tuned on the Ai4Privacy/pii-masking-200k dataset.
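
For context, pii-masking-style datasets pair raw text with character-span annotations that map to placeholder labels such as [FIRSTNAME]. A minimal sketch of applying such spans (the function name and span format here are illustrative, not the dataset's exact schema):

```python
def mask_pii(text, spans):
    """Replace annotated character spans with [LABEL] placeholders.

    `spans` is a list of (start, end, label) tuples, e.g. produced by an NER
    model or read from a dataset's span annotations. Spans are applied
    right-to-left so earlier offsets stay valid as the text changes length.
    Assumes spans do not overlap.
    """
    for start, end, label in sorted(spans, key=lambda s: s[0], reverse=True):
        text = text[:start] + f"[{label}]" + text[end:]
    return text

masked = mask_pii(
    "Call Alice Smith at 555-0199 tomorrow.",
    [(5, 10, "FIRSTNAME"), (11, 16, "LASTNAME"), (20, 28, "PHONENUMBER")],
)
# -> "Call [FIRSTNAME] [LASTNAME] at [PHONENUMBER] tomorrow."
```

Replacing the rightmost span first is what keeps the earlier offsets valid even though each placeholder changes the text length.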

This is a perfect win-win: brilliant researchers are designing the future of privacy-aware robotics, and our open-source data helps provide the foundational tools to show it's possible. This is how conceptual breakthroughs become practical solutions.

🔗 Check out their forward-thinking paper on the future of privacy in Human-Robot Interaction: http://hirschmanner.com/publication/privacy-hri-2024/privacy-hri-2024.pdf

🚀 Stay updated on the latest in privacy-preserving AI—follow us on LinkedIn: https://www.linkedin.com/company/ai4privacy/posts/
MikeDoes 
posted an update 5 days ago
What happens when an LLM "forgets" your data? A new paper reports it might not be gone for good.

The "Janus Interface" paper details a new attack that could recover forgotten PII through fine-tuning APIs. It's solution-oriented in the best sense: by exposing the vulnerability, it pinpoints a problem that needs fixing.

Testing such a high-stakes attack requires equally high-stakes data. The Ai4Privacy 300k dataset was a key part of their evaluation, providing a testbed for extracting sensitive Social Security Numbers. Our dataset, with its synthetic structured SSN data, helped the researchers at Indiana University, Stanford & CISPA, and others demonstrate that their attack works on more than just emails. It could affect highly sensitive personal identifiers.

We're excited to see our open-source dataset used in such cutting-edge security research. It's a win for the community when researchers can use our resources to stress-test the safety of modern AI systems. This work is a direct and explicit call for stronger protections on fine-tuning interfaces.

🔗 This is why open data for security research is so important. Check out the full paper: https://arxiv.org/pdf/2310.15469

🚀 Stay updated on the latest in privacy-preserving AI—follow us on LinkedIn: https://www.linkedin.com/company/ai4privacy/posts/
MikeDoes 
posted an update 11 days ago
Why choose between performance, privacy, and transparency when you can have all three?

We're highlighting a solution-oriented paper that introduces PRvL, an open-source toolkit for PII redaction. The interesting part: the researchers used the AI4Privacy-300K and AI4Privacy-500K datasets to train and benchmark their suite of models.

This is the power of open-source collaboration. We provide the comprehensive data foundation, and the community builds better solutions on top of it. It's a win for every organization when this research results in a powerful, free, and self-hostable tool that helps keep their data safe.

Big cheers to Leon Garza, Anantaa Kotal, Aritran Piplai, Lavanya Elluri, Prajit D., and Aman Chadha for pulling this off.

🔗 Read the full paper to see their data-driven results and access the PRvL toolkit: https://arxiv.org/pdf/2508.05545

🚀 Stay updated on the latest in privacy-preserving AI—follow us on LinkedIn: https://www.linkedin.com/company/ai4privacy/posts/

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset
MikeDoes 
posted an update 12 days ago
How do you prove your new, specialized AI model is a better solution? You test it against the best.

That's why we were excited to see the new AdminBERT paper from researchers at Nantes Université and others. To show the strength of their new model for French administrative texts, they compared it to the state-of-the-art generalist model, NERmemBERT.

The direct connection to our work is clear: NERmemBERT was trained on a combination of datasets, including the Pii-masking-200k dataset by Ai4Privacy.

This is a perfect win-win for the open-source community. Our foundational dataset helps create a strong, general-purpose benchmark, which in turn helps researchers prove the value of their specialized work. This is how we all get better.

🔗 Great work by Thomas Sebbag, Solen Quiniou, Nicolas Stucky, and Emmanuel Morin on tackling a challenging domain! Check out their paper: https://aclanthology.org/2025.coling-main.27.pdf

🚀 Stay updated on the latest in privacy-preserving AI—follow us on LinkedIn: https://www.linkedin.com/company/ai4privacy/posts/

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset
MikeDoes 
posted an update 17 days ago
The future of AI privacy isn't just in the cloud; it's on your device. But how do we build and validate these tools?

A new paper on "Rescriber" explores this with a tool that uses smaller LLMs for on-device anonymization. Building and validating such tools requires a strong data foundation. We're excited to see that the researchers used the Ai4Privacy open dataset to create their performance benchmarks.

This is our mission in action: providing the open-source data that helps innovators build and test better solutions that will give users more control over their privacy. It's a win for the community when our data helps prove the feasibility of on-device AI for data minimization, with reported user perceptions on par with state-of-the-art cloud models.

Shoutout to Jijie Zhou, Eryue Xu, Yaoyao Wu, and Tianshi Li on this one!

🔗 Check out the research to see how on-device AI, powered by solid data, is changing the game: https://dl.acm.org/doi/pdf/10.1145/3706598.3713701

🚀 Stay updated on the latest in privacy-preserving AI—follow us on LinkedIn: https://www.linkedin.com/company/ai4privacy/posts/

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset
MikeDoes 
posted an update 19 days ago
Building powerful multilingual AI shouldn't mean sacrificing user privacy.

We're highlighting a solution-oriented report from researchers Sahana Naganandh, Vaibhav V, and Thenmozhi M at Vellore Institute of Technology that investigates this exact challenge. The direct connection to our mission is clear: the paper showcases the PII43K dataset as a privacy-preserving alternative to high-risk, raw multilingual data.

The report notes that our dataset, with its structured anonymization, is a "useful option for privacy-centric AI applications." It's always a delight when academic research independently validates our data-first approach to solving real-world privacy problems.

This is how we build a safer AI future together.

🔗 Read the full report here to learn more: https://assets.cureusjournals.com/artifacts/upload/technical_report/pdf/3689/20250724-59151-93w9ar.pdf

🚀 Stay updated on the latest in privacy-preserving AI—follow us on LinkedIn: https://www.linkedin.com/company/ai4privacy/posts/

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset

MikeDoes 
posted an update 25 days ago
We can't build more private AI if we can't measure privacy intelligence.

That's why we're highlighting the Priv-IQ benchmark, a new, solution-oriented framework for evaluating LLMs on eight key privacy competencies, from visual privacy to knowledge of privacy law. The direct connection to our work is clear: the researchers relied on samples from the Ai4Privacy dataset to build out questions for Privacy Risk Assessment and Multilingual Entity Recognition.

This is the power of open-source collaboration. We provide the data building blocks, and researchers construct powerful new evaluation tools on top of them. It's a win-win for the entire ecosystem when we can all benefit from transparent, data-driven benchmarks that help push for better, safer AI.

Kudos to Sakib Shahriar and Rozita A. Dara for this important contribution. Read the paper to see the results: https://www.proquest.com/docview/3170854914?pq-origsite=gscholar&fromopenview=true&sourcetype=Scholarly%20Journals

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset
MikeDoes 
posted an update 26 days ago
Traditional data leak prevention is failing. A new paper has a solution-oriented approach inspired by evolution.

The paper introduces a genetic-algorithm-driven method for detecting data leaks. To prove its effectiveness, the researchers Anatoliy Sachenko, Petro V., Oleg Savenko, Viktor Ostroverkhov, and Bogdan Maslyyak, from Casimir Pulaski Radom University and other institutions, needed a real-world, complex PII dataset. We're proud that the AI4Privacy PII 300k dataset was used as a key benchmark for their experiments.

This is the power of open-source collaboration. We provide complex, real-world data challenges, and brilliant researchers develop and share better solutions to solve them. It's a win for every organization when this research helps pave the way for more adaptive and intelligent Data Loss Prevention systems.
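
As a toy illustration of the genetic-algorithm loop behind such detectors (selection, crossover, mutation with elitism), and emphatically not the paper's actual method, here is a minimal sketch that evolves bitstrings toward a known "leak signature":

```python
import random

def evolve(target, pop_size=40, generations=200, mut_rate=0.02, seed=0):
    """Toy genetic algorithm: evolve bitstrings toward a target signature.

    Fitness is the number of positions matching the target. This only
    illustrates the select/crossover/mutate loop; a real DLP detector's
    representation and fitness function are far richer.
    """
    rng = random.Random(seed)
    n = len(target)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:          # early exit on perfect match
            break
        parents = pop[: pop_size // 2]    # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mut_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
best = evolve(target)
```

Because the best individuals survive each generation unchanged, fitness is monotone non-decreasing and the toy run converges quickly on a signature this short.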

🔗 Read the full paper to see the data and learn how genetic algorithms are making a difference in cybersecurity: https://ceur-ws.org/Vol-4005/paper19.pdf

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset
MikeDoes 
posted an update about 1 month ago
Anonymizing a prompt is half the battle. Reliably de-anonymizing the response is the other.

To build a truly reliable privacy pipeline, you have to test it. A new Master's thesis does just that, and our data was there for every step.

We're excited to showcase this work on handling confidential data in LLM prompts from Nedim Karavdic at Mälardalen University. To build their PII anonymization pipeline, they first trained a custom NER model. We're proud that the Ai4Privacy pii-masking-200k dataset was used as the foundational training data for this critical first step.

But it didn't stop there. The research also used our dataset to create the parallel data needed to train and test the generative "Seek" models for de-anonymization. It's a win-win when our open-source data not only helps build the proposed "better solution" but also helps prove why it's better by enabling a rigorous, data-driven comparison.

🔗 Check out the full thesis for a great deep-dive into building a practical, end-to-end privacy solution: https://www.diva-portal.org/smash/get/diva2:1980696/FULLTEXT01.pdf

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset
MikeDoes 
posted an update about 1 month ago
The tools we use to audit AI for privacy might be easier to fool than we think.

We're highlighting a critical paper that introduces "PoisonM," a novel attack that could make Membership Inference tests unreliable. The direct connection to our work is explicit: the researchers, Neal M., Atul Prakash, Amrita Roy Chowdhury, Ashish Hooda, Kassem Fawaz, Somesh Jha, Zhuohang Li, and Brad Malin, used the AI4Privacy dataset as the "canary" dataset in their experiments to test the effectiveness of their attack on realistic, sensitive information.

This is the power of a healthy open-source ecosystem. We provide the foundational data that helps researchers pressure-test our collective assumptions about AI safety. It's a win for everyone when this leads to a more honest conversation about what our tools can and can't do, pushing us all to create better solutions.

🔗 Read the full paper to understand the fundamental flaws in current MI testing: https://arxiv.org/pdf/2506.06003

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset
MikeDoes 
posted an update about 1 month ago
What if an AI agent could be tricked into stealing your data, just by reading a tool's description? A new paper reports it's possible.

The "Attractive Metadata Attack" paper details this stealthy new threat. To measure the real-world impact of their attack, the researchers needed a source of sensitive data for the agent to leak. We're proud that the AI4Privacy corpus was used to create the synthetic user profiles containing standardized PII for their experiments.

This is a perfect win-win. Our open-source data helped researchers Kanghua Mo, 龙昱丞, Zhihao Li from Guangzhou University and The Hong Kong Polytechnic University to not just demonstrate a new attack, but also quantify its potential for harm. This data-driven evidence is what pushes the community to build better, execution-level defenses for AI agents.

🔗 Check out their paper to see how easily an agent's trust in tool metadata could be exploited: https://arxiv.org/pdf/2508.02110

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset
MikeDoes 
posted an update about 1 month ago
How do you protect your prompts without breaking them? You need a smart sanitizer. A new system called Prϵϵmpt shows how.

The first, critical step in their solution is a high-performance Named Entity Recognition (NER) model to find the sensitive data. We're proud to see that the researchers, Amrita Roy Chowdhury, David Glukhov, Divyam Anshumaan, Prasad Chalasani, Nicolas Papernot, Somesh Jha, and Mihir Bellare (University of Michigan, University of Toronto, University of Wisconsin-Madison, UC San Diego's Rady School of Management, and Langroid Incorporated), fine-tuned their NER model on 10 high-risk categories from the AI4Privacy dataset.

This is a perfect win-win. Our open-source data helps provide the foundation for the critical detection engine, which in turn enables the community to build and test better solutions like Prϵϵmpt's innovative use of encryption and Differential Privacy.

🔗 Check out their paper for a deep dive into a formally private, high-utility prompt sanitizer: https://arxiv.org/pdf/2504.05147

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset
MikeDoes 
posted an update about 2 months ago
Making LLMs fast with KV-cache sharing is great. A new paper reports it's also a huge privacy risk.

That's why we're excited to see the "SafeKV" paper from researchers at the University of Connecticut, Peking University, and others. Their solution-oriented framework selectively shares non-sensitive data while isolating PII. To validate the "Safe" part of their system, they needed a robust, multilingual privacy benchmark.

We're proud that the Ai4Privacy pii-masking dataset was used for this critical privacy evaluation.

This is a perfect win-win. Our open-source data enables researchers to build and validate more effective security solutions for core AI infrastructure. Their work, in turn, helps make the entire LLM ecosystem safer, showing that performance and privacy don't have to be mutually exclusive.

Kudos to Kexin Chu, Zecheng Lin, Dawei Xiang, 沈子旭, Jianchang Su, Cheng Chu, Yiwei Yang, Wenhui Zhang, Wenfei Wu, and Wei Zhang on this beautiful work.

🔗 Check out their paper to see the future of secure, high-performance LLM inference: https://arxiv.org/pdf/2508.08438

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset
nouamanetazi 
posted an update 3 months ago
After training SmolLM3 on 384 H100s for nearly a month, I've come to realize something most people overlook: infrastructure is the make-or-break factor in LLM training. 🔥

Everyone talks about model architecture and data quality. And yes, those matter immensely. But here's what nobody tells you: when your training run fails at 2 AM because of mysterious NCCL errors, or when your expensive GPU cluster is running at 60% efficiency, the problem isn't your model. It's most probably a misuse of the hardware. 🛠️

Questions that seemed simple but had no clear answers: Why is MoE training slower than dense models? Which NCCL flags should we actually set? How often should we checkpoint without killing throughput?

That's why we built The Smol Training Playbook 📖: a complete guide covering everything from model architecture and data curation to the SmolLM3 training marathon, post-training techniques, and crucially, the infrastructure layer that most teams get wrong.

We validated real vs. theoretical bandwidth across the entire stack: HBM3 hitting 3 TB/s, NVLink 4.0 reaching 786 GB/s, PCIe Gen4 at 14.2 GB/s. Then we ran collective operations across 128 GPUs (16 nodes, 8x H100s each) and measured how performance degrades at scale: all-reduce drops from 480 GB/s on a single node to 320-350 GB/s across 16 nodes.
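
For readers benchmarking their own clusters: the all-reduce numbers above follow the standard "bus bandwidth" convention for ring all-reduce, which normalizes measured algorithmic bandwidth by the 2*(n-1)/n data-movement factor (the convention used by nccl-tests). A small helper makes the normalization explicit; the 1 GiB / 6 ms example is illustrative, not a figure from the playbook:

```python
def allreduce_bus_bandwidth(size_bytes: float, time_s: float, n_ranks: int) -> float:
    """Bus bandwidth (bytes/s) for a ring all-reduce.

    A ring all-reduce moves 2*(n-1)/n times the buffer size through each
    rank's links, so bus bandwidth = (size / time) * 2*(n-1)/n. This makes
    numbers comparable across different rank counts.
    """
    algo_bw = size_bytes / time_s
    return algo_bw * (2 * (n_ranks - 1) / n_ranks)

# Example: a 1 GiB all-reduce across 128 ranks finishing in 6 ms
gib = 1024**3
bw = allreduce_bus_bandwidth(gib, 6e-3, 128)  # ≈ 355 GB/s bus bandwidth
```

With 2 ranks the factor is exactly 1, so bus and algorithmic bandwidth coincide; at 128 ranks it approaches 2.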

If you've ever wondered why your training runs are slower than they should be, or you're planning to scale up and want to avoid expensive mistakes, this guide might save you weeks of debugging.

The Smol Training Playbook: https://lnkd.in/e5MKXUHS

Shared with ❤️ by the HuggingFace team
Tonic 
posted an update 4 months ago
Tonic 
posted an update 5 months ago
COMPUTER CONTROL IS ON-DEVICE !

🏡🤖 78% of EU smart-home owners DON'T trust cloud voice assistants.

So we killed the cloud.

Meet Exté: a palm-sized Android device that sees, hears & speaks your language - 100 % offline, 0 % data sent anywhere.

🔓 We submitted our technologies for consideration to the Liquid AI hackathon.

📊 Dataset: 79k UI-action pairs on Hugging Face (largest Android-control corpus ever): Tonic/android-operator-episodes

⚡ Model: 98% task accuracy, 678 MB compressed, fits on existing Android devices! Tonic/l-android-control

🛤️ Experiment tracker: check out the training on our TrackioApp: Tonic/l-android-control

🎮 Live model demo: upload an Android screenshot and instructions to see the model in action! Tonic/l-operator-demo



Built in a garage, funded by pre-orders, no VC. Now we’re scaling to 1 k installer units.

We’re giving 50 limited-edition prototypes to investors , installers & researchers who want to co-design the sovereign smart home.

👇 Drop “EUSKERA” in the comments if you want an invite, tag a friend who still thinks Alexa is “convenient,” and smash ♥️ if AI should belong to people - not servers.
Tonic 
posted an update 5 months ago
🙋🏻‍♂️ Hey there folks ,

Just wanted to announce 🏭SmolFactory: the quickest and best way to fine-tune SmolLM3 and GPT-OSS-20B on Hugging Face!

Basically, it's an app you run on Hugging Face: duplicate the Space and your training runs directly on Hugging Face GPUs.

It helps you select datasets and models, fine-tune your model, set up an experiment tracker you can follow on your phone, push your model card, and even automatically build a demo on Hugging Face so you can test your model as soon as it's done!

Check out the blog to learn more: https://huggingface.co/blog/Tonic/smolfactory

or just try the app directly :
Tonic/SmolFactory

you can vibe check the cool models I made :
French SmolLM3 : Tonic/Petite-LLM-3
Medical GPT-OSS : Tonic/med-gpt-oss-20b-demo

check out the model cards :
multilingual reasoner (gpt-oss) - Tonic/gpt-oss-20b-multilingual-reasoner
med-gpt-oss : Tonic/med-gpt-oss-20b
petite-elle-l-aime : Tonic/petite-elle-L-aime-3-sft

GitHub repo, if you prefer the command line to Gradio: https://github.com/josephrp/smolfactory

Drop some likes on these links, it's much appreciated!

Feedback and PRs are welcome!
MikeDoes 
posted an update 6 months ago
Are you sure the open-source LLM model you just downloaded is safe?

A recent paper on "Privacy Backdoors" reports a new vulnerability where pre-trained models can be poisoned before fine-tuning them. This is a serious challenge for everyone building on open-source AI.

Instead of just pointing out problems, we believe in finding better solutions. To understand this threat, the researchers needed a dataset that could effectively simulate a high-stakes privacy attack on realistic data structures, and we're proud that our Ai4Privacy dataset, composed of synthetic identities, was used to provide this crucial benchmark. The paper reports that for our complex dataset, privacy leakage on a non-poisoned model was almost zero; after the backdoor attack, that number reportedly jumped to 87%, demonstrating how a poisoned model can dramatically amplify privacy leakage.

This is why we champion open source: it enables the community to identify these issues and develop better, safer solutions together.

Kudos to the research team behind this study: Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, and Nicholas Carlini (Oregon State University, University of Maryland, Google DeepMind, and ELLIS Institute Tübingen & MPI for Intelligent Systems).

🔗 Read the research to understand this new challenge: https://arxiv.org/pdf/2404.01231

#DataPrivacy #AI #OpenSource #Anonymization #MachineLearning #Ai4Privacy #Worldslargestopensourceprivacydataset