All HF Hub posts

MikeDoes posted an update 2 days ago
A single lock on a door isn't enough. Real security is about layers.

The same is true for AI privacy. A new paper, "Whispered Tuning", offers a fantastic layered solution that aims to fortify LLMs against privacy infringements.

We're proud that the first, essential layer, a high-precision PII redaction model, was built on the foundation of the Ai4Privacy/pii-65k dataset.

Our dataset provided the necessary training material for their initial anonymization step, which then enabled them to develop further innovations like differential privacy fine-tuning and output filtering. This is a win-win: our data helps create a solid base, and researchers build powerful, multi-stage privacy architectures on top of it.
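As a rough illustration of the redact-then-process idea behind that first layer, here is a minimal sketch. Note this is not the paper's trained redaction model; the regex patterns and placeholder tags are illustrative assumptions only.

```python
import re

# Illustrative regex patterns; a production system (like the paper's)
# uses a trained PII redaction model, not regexes.
PII_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII spans with placeholder tags before the text
    ever reaches the LLM (the anonymization step described above)."""
    for tag, pattern in PII_PATTERNS.items():
        text = pattern.sub(tag, text)
    return text

print(redact("Contact jane.doe@example.com or +1 555-123-4567."))
# → Contact [EMAIL] or [PHONE].
```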

Together, we're making AI safer.

🔗 Read the full paper to see how a strong foundation enables a complete privacy solution: https://www.scirp.org/journal/paperinformation?paperid=130659

🚀 Stay updated on the latest in privacy-preserving AI—follow us on LinkedIn: https://www.linkedin.com/company/ai4privacy/posts/
rajkumarrawal posted an update 2 days ago
I submitted the paper "FlashLabs Chroma 1.0: A Real-Time End-to-End Spoken Dialogue Model with Personalized Voice Cloning" by Tanyu Chen, Tairan Chen, Kai Shen, Zhenghua Bao, Zhihui Zhang, Man Yuan, and Yi Shi from FlashLabs to Daily Papers on Hugging Face.

Chroma 1.0 enables real-time spoken dialogue with personalized voice cloning through discrete speech representations and interleaved text-audio token scheduling.
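To make "interleaved text-audio token scheduling" concrete, here is a toy interleaver. The 1:4 text-to-audio ratio and the token names are assumptions for illustration; Chroma 1.0's actual schedule may differ.

```python
def interleave(text_tokens, audio_tokens, text_n=1, audio_n=4):
    """Toy interleaved schedule: emit text_n text tokens, then audio_n
    audio tokens, repeating until both streams are exhausted.
    (The real Chroma 1.0 ratio/scheduling is an assumption here.)"""
    out, t, a = [], 0, 0
    while t < len(text_tokens) or a < len(audio_tokens):
        out.extend(text_tokens[t:t + text_n]); t += text_n
        out.extend(audio_tokens[a:a + audio_n]); a += audio_n
    return out

print(interleave(["T0", "T1"], ["A0", "A1", "A2", "A3", "A4"]))
# → ['T0', 'A0', 'A1', 'A2', 'A3', 'T1', 'A4']
```

Interleaving the two streams lets a single decoder produce text and audio together, which is what makes low-latency, end-to-end spoken dialogue possible.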

Chroma 1.0 is the world’s first open-source, real-time speech-to-speech model with voice cloning.

FlashLabs Chroma 1.0: A Real-Time End-to-End Spoken Dialogue Model with Personalized Voice Cloning (2601.11141)
danielhanchen posted an update about 15 hours ago
imnotkitty posted an update 2 days ago
The 2025 Chinese LLM Showdown: Western Models Still Dominate Top 4, but China Leads the Open-Source Arena.

🏆 The Champions: Claude-Opus-4.5, Gemini-3-Pro, GPT-5.2, and Gemini-3-Flash sweep the top four spots.
🚀 The Pursuers: Doubao and DeepSeek-V3.2 tie for first place among Chinese models; GLM-4.7, ERNIE-5.0, and Kimi secure their positions in the domestic top five.
🔥 The Biggest Highlight: The top three spots on the open-source leaderboard are entirely held by Team China (DeepSeek, GLM, Kimi), outperforming the best Western open-source models.
kanaria007 posted an update 2 days ago
✅ New Article: Policy Load Balancer

Title:
🧭 Policy Load Balancer: Risk Modes, Degradation, and Kill-Switches
🔗 https://huggingface.co/blog/kanaria007/policy-load-balancer

---

Summary:
Even if you already have *Jumps* (atomic moves), *RML* (effect glue), *EVAL* (experiments), and *ETH* (hard constraints), a real system still needs one practical answer:

*What operational mode is the system allowed to run in right now, for whom, and under which governance?*

This article introduces the *Policy Load Balancer (PoLB)*: a first-class “mode selector” that turns risk signals + ETH/EVAL/ID context into an *active mode descriptor* (risk band, mode name, allowed jump types, allowed RML levels, experiments on/off, engine whitelist), plus *degradation* and *kill-switch* rules.

> PoLB is the goal surface for *how allowed* the system is to act.
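A minimal sketch of what an "active mode descriptor" might look like as data, based on the fields listed above; the field names and types here are assumptions, not the article's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModeDescriptor:
    """Hypothetical shape for PoLB's active mode descriptor."""
    risk_band: str                      # e.g. "low" | "elevated" | "critical"
    mode: str                           # e.g. "ONLINE_SAFE_BASELINE"
    allowed_jump_types: list            # which atomic moves may run
    allowed_rml_levels: list            # which effect levels may run
    experiments_enabled: bool           # EVAL experiments on/off
    engine_whitelist: list = field(default_factory=list)

safe = ModeDescriptor(
    risk_band="low",
    mode="ONLINE_SAFE_BASELINE",
    allowed_jump_types=["read", "plan"],
    allowed_rml_levels=[0, 1],
    experiments_enabled=False,
    engine_whitelist=["baseline-engine"],
)
print(safe.mode, safe.experiments_enabled)
```

Making the mode a first-class value like this is what allows transitions, degradation, and kill-switch decisions to be logged and audited rather than scattered across feature flags.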

---

Why It Matters:
• Replaces scattered feature flags with *structured, auditable modes* (and traceable transitions)
• Ensures *nothing effectful runs raw*—every real-world action passes PoLB + RML gates
• Makes incident handling safer: *pre-declared degradation patterns* (experiment-first shutdown, controller simplification, under-observation downgrade)
• Adds real governance: *kill-switch contracts* (who can stop what, how fast, with multi-party auth + full audit)

---

What’s Inside:
• A mode taxonomy: *OFFLINE / SHADOW / ONLINE_SAFE / ONLINE_EXPERIMENTAL / EMERGENCY*, mapped to SI-Core’s coarse envelope (sandbox|shadow|online|degraded)
• Example mode catalog (e.g., OFFLINE_SANDBOX, SHADOW_PROD, ONLINE_SAFE_BASELINE, ONLINE_EXPERIMENTAL_STRATIFIED, EMERGENCY_FAILSAFE)
• Transition policies driven by telemetry/alarms/ETH incidents, plus *degrade orders* (risk vs capacity shedding)
• Export-friendly policy conventions
• Kill-switch contract + implementation sketch, and how PoLB interlocks with *OBS / ID / ETH / EVAL / MEM*

---

📖 Structured Intelligence Engineering Series
eaddario posted an update 3 days ago
Experimental global target bits-per-weight quantization of mistralai/Ministral-3-14B-Instruct-2512 and mistralai/Ministral-3-14B-Reasoning-2512

Unlike standard llama.cpp quantizations that rely on fixed type heuristics (e.g., Q4_K_M), the Target BPW approach optimizes per-tensor precision where it matters the most, and produces high quality models that meet a precise global file size target.

Key Advantages:
- VRAM Maximization: Can generate high quality models sized exactly to fit hardware constraints (e.g., fitting the model into exactly 24GB VRAM).
- Data-Driven Precision: Quantization mix is determined by actual weight error sensitivity rather than hardcoded rules, often yielding better PPL/KLD size trade-offs.

Full benchmarks (PPL, KLD, ARC, MMLU, etc.) and methodology are in the models' cards.
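To illustrate the idea of fitting a precise global size target, here is a toy greedy allocator. This is not the actual Target BPW search used for these models; the precision levels and sensitivity scores below are made-up assumptions.

```python
def allocate_bpw(tensors, target_bpw, levels=(2.5, 3.5, 4.5, 6.5)):
    """Toy allocator: start every tensor at the lowest precision, then
    greedily upgrade the most error-sensitive tensors while the weighted
    average bits-per-weight stays under the global target.
    tensors: list of (name, n_params, sensitivity) tuples."""
    total = sum(n for _, n, _ in tensors)
    bits = {name: levels[0] for name, _, _ in tensors}
    for name, n, _ in sorted(tensors, key=lambda t: -t[2]):  # most sensitive first
        for lvl in levels[1:]:
            current = sum(bits[m] * k for m, k, _ in tensors)
            if (current + (lvl - bits[name]) * n) / total <= target_bpw:
                bits[name] = lvl
            else:
                break
    return bits

# Made-up example: sensitive attention tensors get more bits than embeddings.
plan = allocate_bpw(
    [("attn", 100, 0.9), ("ffn", 300, 0.5), ("embed", 600, 0.1)],
    target_bpw=4.0,
)
print(plan)
# → {'attn': 6.5, 'ffn': 4.5, 'embed': 2.5}
```

The weighted average here lands at 3.5 bpw, under the 4.0 target; a real search also weighs measured quantization error (PPL/KLD) per tensor rather than a single sensitivity score.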

eaddario/Ministral-3-14B-Instruct-2512-GGUF
eaddario/Ministral-3-14B-Reasoning-2512-GGUF
MonsterMMORPG posted an update about 8 hours ago
SECourses Musubi Trainer upgraded to V27: FLUX 2, FLUX Klein, and Z-Image training added with demo configs, with strong VRAM optimization. Read the news for details.

App is here: https://www.patreon.com/posts/137551634

Full tutorial on how to use and train: https://youtu.be/DPX3eBTuO_Y
alibidaran posted an update about 16 hours ago
I’m excited to share PlaiTO, a reasoning-focused language model built on LLaMA 3.1 (8B) and optimized for humanities and social sciences.

PlaiTO is designed to go beyond surface-level text generation, emphasizing structured reasoning, conceptual clarity, and analytical depth—especially in domains centered on human behavior and social systems.

🎯 Focus Areas
• Psychology
• Management & Organizational Studies
• Sociology

📊 MMLU Benchmark Results (100 samples per domain)
• Professional Psychology: 76%
• Management: 74%
• Sociology: 75%

These results highlight PlaiTO’s strong performance in abstract, theory-heavy, and reasoning-driven tasks.

💡 Why PlaiTO?
• Strong analytical and reasoning capabilities
• Better handling of complex human-centered problems
• Suitable for academic, educational, and research use cases
• Balanced performance across multiple humanities disciplines

PlaiTO is ideal for conceptual analysis, case reasoning, academic discussion, and decision-support scenarios—while still requiring human oversight for high-stakes applications.

📌 Built on LLaMA 3.1, compliant with its licensing terms.
alibidaran/Platio_merged_model
ZennyKenny posted an update about 22 hours ago
🤔 Do you have a Hugging Face Space that you wish you could programmatically restart to induce data refresh or some other behavior?

👉 Try Spaces Scheduler for this use case: https://github.com/kghamilton89/spaces-scheduler

➡️ Lightweight
➡️ Easy to set up
➡️ Just works
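For readers curious about the core of what such a scheduler does, here is a stdlib-only sketch of periodic restarts. The `restart_space` body is a placeholder; in practice you would call `HfApi.restart_space(repo_id)` from the huggingface_hub library, and this sketch is not the Spaces Scheduler's actual implementation.

```python
import sched
import time

def restart_space(repo_id: str) -> str:
    # Placeholder: a real restart would use
    # huggingface_hub.HfApi(token=...).restart_space(repo_id).
    return f"restarted {repo_id}"

def schedule_restarts(repo_id: str, interval_s: float, runs: int):
    """Queue `runs` restarts, `interval_s` apart, then run them."""
    s = sched.scheduler(time.time, time.sleep)
    results = []
    for i in range(runs):
        s.enter(i * interval_s, 1, lambda: results.append(restart_space(repo_id)))
    s.run()  # blocks until all queued restarts have fired
    return results

print(schedule_restarts("your-username/your-space", 0.01, 2))
```

A restart forces the Space to rebuild its runtime state, which is why it works as a cheap data-refresh trigger.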

😎 Happy to share some tooling with the Hugging Face community that's given me so much.
DualityAI-RebekahBogdanoff posted an update 1 day ago
👀 While we're in between Kaggle competitions, users can work on earning this certificate from Duality AI! 🙌

Although these competitions are over, you can still submit results, score above a threshold, and earn the title of 💥OBJECT DETECTION EXPERT💥.

⏳ No time limit
🏃 No competition
🧠 Just you against your own brain

Start here: https://falcon.duality.ai/secure/documentation/certificate-challenge-object-detection?utm_source=linkedin&utm_medium=post&utm_campaign=HF

People who participated in one or more competitions last year already have a leg up! Your previous scores count towards this certificate, and now you just have to collect them all 😽

As always, special thanks to our collaborators LunateAI and 3LC.AI 🫰