TRL v0.29.0 introduces trl-training: an agent-native training skill.
This makes the TRL CLI a structured, agent-readable capability, allowing AI agents to reliably execute training workflows such as:
- Supervised Fine-Tuning (SFT)
- Direct Preference Optimization (DPO)
- Group Relative Policy Optimization (GRPO)
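As a rough sketch of what such a workflow looks like from the command line, here is a minimal SFT invocation via the TRL CLI (the model and dataset names below are illustrative placeholders, not from this post; check the TRL docs for the exact flags in your version):

```shell
# Hedged sketch: launch a supervised fine-tuning run with the TRL CLI.
# Model, dataset, and output paths are placeholder examples.
trl sft \
    --model_name_or_path Qwen/Qwen2.5-0.5B \
    --dataset_name trl-lib/Capybara \
    --output_dir ./sft-output
```

The other methods (DPO, GRPO) have their own trainers and configuration flags; the TRL documentation lists the options for each.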
We're excited to see what the community builds on top of this.
If you're working on AI agents, alignment research, or scalable RL training infrastructure: give TRL v0.29.0 a try!
smolagents v1.21.0 is here! Now with improved safety in the local Python executor: dunder calls are blocked! Still, it is not fully isolated: for untrusted code, use a remote executor instead (Docker, E2B, Wasm). Plus many bug fixes for more reliable code. https://github.com/huggingface/smolagents/releases/tag/v1.21.0
New in smolagents v1.20.0: Remote Python Execution via WebAssembly (Wasm)
We've just merged a major new capability into the smolagents framework: the CodeAgent can now execute Python code remotely in a secure, sandboxed WebAssembly environment!
Powered by Pyodide and Deno, this new WasmExecutor lets your agent-generated Python code run safely, without relying on Docker or local execution.
Why this matters:
- Isolated execution = no host access
- No need for Python on the user's machine
- Safer evaluation of arbitrary code
- Compatible with serverless / edge agent workloads
- Ideal for constrained or untrusted environments
This is just the beginning: a focused initial implementation with known limitations. A solid MVP designed for secure, sandboxed use cases.
We're inviting the open-source community to help evolve this executor:
- Tackle more advanced Python features
- Expand compatibility
- Add test coverage
- Shape the next-gen secure agent runtime
Let's reimagine what agent-driven Python execution can look like: remote-first, wasm-secure, and community-built.
This feature is live in smolagents v1.20.0! Try it out. Break things. Extend it. Give us feedback. Let's build safer, smarter agents, together.
smolagents v1.19.0 is live! This release brings major improvements to agent flexibility, UI usability, streaming architecture, and developer experience, making it easier than ever to build smart, interactive AI agents. Here's what's new:
Agent Upgrades
- Support for managed agents in ToolCallingAgent
- Context manager support for cleaner agent lifecycle handling
- Output formatting now uses XML tags for consistency

UI Enhancements
- GradioUI now supports reset_agent_memory: perfect for fresh starts in dev & demos

Streaming Refactor
- Streaming event aggregation moved off the Model class, for better architecture & maintainability

Output Tracking
- CodeAgent outputs are now stored in ActionStep, giving more visibility and structure to agent decisions

Bug Fixes
- Smarter planning logic
- Cleaner Docker logs
- Better prompt formatting for additional_args
- Safer internal functions and final answer matching

Docs Improvements
- Added quickstart examples with tool usage
- One-click Colab launch buttons
- Expanded reference docs (AgentMemory, GradioUI docstrings)
- Fixed broken links and migrated to .md format
Always surprised that so few people actually read the FineTasks blog, on how to select training evals with the highest signal.
If you're serious about training models without wasting compute on shitty runs, you absolutely should read it!!
A high-signal eval tells you precisely, during training, how well (and what) your model is learning, allowing you to discard the bad runs, bad samplings, etc.!
The blog covers prompt choice, metrics, and datasets in depth, across languages and capabilities. My fave section is "which properties should evals have", to help you select the best evals for your own use case.
New in smolagents v1.16.0:
- Bing support in WebSearchTool
- Custom functions & executor_kwargs in LocalPythonExecutor
- Streaming GradioUI fixes
- Local web agents via api_base & api_key
- Better docs
smolagents v1.14.0 is out!
- MCPClient: a sleek new client for connecting to remote MCP servers, making integrations more flexible and scalable.
- Amazon Bedrock: native support for Bedrock-hosted models.
smolagents is now more powerful, flexible, and enterprise-ready.
If you've followed the progress of robotics in the past 18 months, you've likely noticed how robotics is increasingly becoming the next frontier that AI will unlock.
At Hugging Face, in robotics and across all AI fields, we believe in a future where AI and robots are open-source, transparent, and affordable; community-built and safe; hackable and fun. We've had so much mutual understanding and passion working with the Pollen Robotics team over the past year that we decided to join forces!
You can already find our open-source humanoid robot platform Reachy 2 on the Pollen website, and the Pollen community and people here on the Hub.
The new DeepSite space is really insane for vibe coders: enzostvs/deepsite
With the wave of vibe-coding-optimized LLMs like the latest open-source DeepSeek model (version V3-0324), you can basically prompt out of the box and create any app or game in one shot.
It feels so powerful to me: no more complex frameworks or under-the-hood prompt engineering needed to get a working text-to-app tool.
AI is eating the world and *open-source* AI is eating AI itself!
PS: even more meta, the DeepSite app and the DeepSeek model are both fully open-source code => time to start recursively improving?
PPS: you still need some inference hosting unless you're running the 600B param model at home, so check the very nice list of HF Inference Providers for this model: deepseek-ai/DeepSeek-V3-0324
It's beating Claude 3.7 on (competitive) programming, a domain where Anthropic has historically been really strong, and it's getting close to o1-mini/R1 on olympiad-level coding with just 7B parameters!
And the best part is that we're open-sourcing everything: its training dataset, the new IOI benchmark, and more, in our Open-R1 progress report #3: https://huggingface.co/blog/open-r1/update-3