AI & ML interests

None defined yet.

Recent Activity

fffiloni
posted an update 6 days ago
I brought DALL·E mini back to life 🤖🎨

You can try it here:
fffiloni/dalle-mini-reboot

And I also built a batch version using Hugging Face Jobs (up to 50 images per prompt):
fffiloni/dalle-mini-via-jobs

The goal was to stay close to the original JAX/Flax pipeline, while integrating it with modern tooling (Gradio + Jobs).
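The "up to 50 images per prompt" cap suggests a simple batching step before jobs are launched. A minimal sketch of that idea (illustrative only, not the Space's actual code; the payload shape is an assumption):

```python
# Sketch: split a request for N images into job-sized payloads.
# The 50-images-per-prompt cap matches the post; everything else is illustrative.
MAX_IMAGES_PER_JOB = 50

def plan_jobs(prompt: str, n_images: int) -> list[dict]:
    """Return one payload per job, each requesting at most MAX_IMAGES_PER_JOB images."""
    jobs = []
    while n_images > 0:
        count = min(n_images, MAX_IMAGES_PER_JOB)
        jobs.append({"prompt": prompt, "num_images": count})
        n_images -= count
    return jobs
```

For example, a request for 120 images becomes three jobs of 50, 50, and 20.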

It ended up being a fun way to revisit this model: still weird, still fun 😄
fffiloni
posted an update 12 days ago
A clearer demo for TADA (now multilingual) 🔊🌍

I improved the public demo for TADA, a generative framework for speech modeling via text-acoustic dual alignment.

TADA models speech as a joint sequence of text tokens and acoustic tokens, using a transformer backbone to keep text and audio synchronized during generation.

The original demo already exposed these mechanisms, but the workflow made the pipeline hard to understand.

This updated demo makes the process clearer:

• load the model
• prepare a reference voice (optionally with a transcript or Whisper auto-transcription)
• generate speech conditioned on that reference

It also adds multilingual support.

Presets are included for a few languages, but the model supports more:

English, French, Spanish, German, Arabic, Mandarin Chinese, Italian, Japanese, Polish, Portuguese
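A sketch of how a preset-or-manual language switch could work (hypothetical helper, not the demo's code; `SUPPORTED` mirrors the list above, while which presets actually ship is an assumption):

```python
# Hypothetical language resolver for the demo UI. SUPPORTED mirrors the list
# from the post; the contents of PRESETS are an assumption for illustration.
SUPPORTED = {
    "English", "French", "Spanish", "German", "Arabic",
    "Mandarin Chinese", "Italian", "Japanese", "Polish", "Portuguese",
}
PRESETS = {"English", "French", "Spanish"}  # assumption: which presets ship

def resolve_language(lang: str) -> str:
    """Return 'preset' when a one-click preset exists, else 'manual' entry."""
    if lang not in SUPPORTED:
        raise ValueError(f"unsupported language: {lang!r}")
    return "preset" if lang in PRESETS else "manual"
```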

Feel free to try different voices, accents, or languages and see how the alignment behaves.

👉 fffiloni/tada-dual-alignment-tts-demo

Paper
TADA: A Generative Framework for Speech Modeling via Text-Acoustic Dual Alignment (2602.23068)
victor
posted an update about 2 months ago
Interesting article: use Claude Code to help open models write CUDA kernels (for example) by turning Claude Code traces into Skills. They made a library out of it 👀

https://huggingface.co/blog/upskill
victor
posted an update 3 months ago
Nvidia is on a roll lately. Nemotron 3 Nano is my new fav local model, but here's the real flex: they published the entire evaluation setup. Configs, prompts, logs, all of it. This is how you do open models 🔥

https://huggingface.co/blog/nvidia/nemotron-3-nano-evaluation-recipe

multimodalart
posted an update 5 months ago
Want to iterate on a Hugging Face Space with an LLM?

Now you can easily convert any entire HF repo (Model, Dataset, or Space) to a text file and feed it to a language model!

multimodalart/repo2txt
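A minimal local sketch of the same idea (not the Space's actual code): flatten a directory of text files into one LLM-friendly string, with a header marking where each file begins.

```python
# Sketch of the repo2txt idea for a local checkout: concatenate text files
# with per-file headers so an LLM can see the whole repo in one prompt.
from pathlib import Path

TEXT_EXTS = {".py", ".md", ".txt", ".json", ".yaml"}  # assumption: which files count as text

def repo_to_txt(root: str) -> str:
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in TEXT_EXTS:
            rel = path.relative_to(root)
            body = path.read_text(encoding="utf-8", errors="replace")
            parts.append(f"===== {rel} =====\n{body}")
    return "\n\n".join(parts)
```

Binary files are simply skipped; the real Space also handles remote repos via the Hub API.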
multimodalart
posted an update 9 months ago
Self-Forcing, a real-time video model distilled from Wan 2.1 by @adobe, is out, and they open-sourced it!

I've built a live real-time demo on Spaces 📹💨

multimodalart/self-forcing
victor
posted an update 10 months ago
Open Source Avengers, Assemble! Ask an expert AI agent team to solve complex problems together 🔥

Consilium brings together multiple agents that debate and use live research (web, arXiv, SEC) to reach a consensus. You set the strategy, they find the answer.

Credit to @azettl for this awesome demo: Agents-MCP-Hackathon/consilium_mcp
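As a toy illustration of the final step only (Consilium's actual debate and live-research loop is far richer than this), a majority vote over the agents' proposed answers:

```python
# Toy consensus step: most common answer wins, ties go to the earliest proposal.
# Illustration only; not Consilium's actual protocol.
from collections import Counter

def consensus(answers: list[str]) -> str:
    """Return the most frequently proposed answer, breaking ties by order."""
    counts = Counter(answers)
    best = max(counts.values())
    for answer in answers:
        if counts[answer] == best:
            return answer
```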
victor
posted an update 11 months ago
DIA TTS is just amazing - please share your funniest gens (here is mine) 😂
nari-labs/Dia-1.6B
fffiloni
posted an update about 1 year ago
I was thinking I need to step up my game on training Flux LoRA models, time to have some fun! ☀️

Expect a new drop each week on aesthetics that caught my attention; here are 3 that worked really well!

fffiloni/cute-comic-800
fffiloni/carbo-800
fffiloni/oniric-750
victor
posted an update about 1 year ago
Hey everyone, we've given the https://hf.co/spaces page a fresh update!

Smart Search: Now just type what you want to do, like "make a viral meme" or "generate music", and our search gets it.

New Categories: Check out the cool new filter bar with icons to help you pick a category fast.

Redesigned Space Cards: Reworked a bit to really show off the app descriptions, so you know what each Space does at a glance.

Random Prompt: Need ideas? Hit the dice button for a burst of inspiration.

We'd love to hear what you think; drop us some feedback please!
fffiloni
posted an update about 1 year ago
Explain like I'm 5 the latest take from @thomwolf on X about Dario's essay on DeepSeek:

› Open-source AI is like a big cookbook that everyone can read and improve. Instead of a few chefs keeping their recipes secret, anyone can cook, test, and invent new things.

If only one company controls AI, everything stops if they have a problem, like when the internet goes down. With open-source, many people can help, making sure it keeps running smoothly.

AI isn't just a race between two countries; it's a team effort around the world. By sharing, we move faster and create safer technology for everyone.
🤗
victor
posted an update about 1 year ago
Finally, an open-source AI that turns your lyrics into full songs is here: meet YuE! Unlike other tools that only create short clips, YuE can make entire songs (up to 5 minutes) with vocals, melody, and instruments all working together. Letsss go!

m-a-p/YuE-s1-7B-anneal-en-cot
victor
posted an update over 1 year ago
Perfect example of why Qwen/Qwen2.5-Coder-32B-Instruct is insane:

Introducing: AI Video Composer 🔥
huggingface-projects/ai-video-composer

Drag and drop your assets (images/videos/audio) to create any video you want using natural language!

It works by asking the model to output a valid FFmpeg command. This can get quite complex, but most of the time Qwen2.5-Coder-32B gets it right (that thing is a beast). It's an update of an old project made with GPT-4; back then (~1.5 years ago) it was almost impossible to make it work with open models, but not anymore. Let's go open weights 🚀.
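Since the model's output is a shell command, you'd want to sanity-check it before running it. A toy check (assumption: the Space's real validation differs): the command must invoke `ffmpeg` and may only read inputs from the user's uploaded assets.

```python
# Toy validator for a model-generated FFmpeg command (illustrative only):
# it must start with `ffmpeg`, and every `-i` input must be an uploaded asset.
import shlex

def check_ffmpeg_command(cmd: str, assets: set[str]) -> bool:
    tokens = shlex.split(cmd)
    if not tokens or tokens[0] != "ffmpeg":
        return False
    inputs = [tokens[i + 1] for i, t in enumerate(tokens[:-1]) if t == "-i"]
    return all(name in assets for name in inputs)
```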
victor
posted an update over 1 year ago
Qwen2.5-72B is now the default HuggingChat model.
This model is so good you must try it! I often get better rephrasing results with it than with Sonnet or GPT-4!
victor
posted an update over 1 year ago
NEW - Inference Playground

Maybe, like me, you have always wanted a super easy way to compare llama3.2-1B vs. llama3.2-3B, or the same model at different temperatures?

Trying and comparing warm Inference API models has never been easier!
Just go to https://hf.co/playground, set your token and you're ready to go.
We'll keep improving, feedback welcome 😊
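The comparison the Playground makes easy is essentially a grid over models and sampling settings. A sketch (the model IDs are just examples):

```python
# Sketch: build every (model, temperature) pair you'd want to run side by side.
from itertools import product

def comparison_grid(models: list[str], temperatures: list[float]) -> list[dict]:
    """One configuration per (model, temperature) combination."""
    return [{"model": m, "temperature": t} for m, t in product(models, temperatures)]

grid = comparison_grid(
    ["meta-llama/Llama-3.2-1B-Instruct", "meta-llama/Llama-3.2-3B-Instruct"],
    [0.2, 0.7],
)
# 4 configurations: each model at each of the two temperatures
```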
fffiloni
posted an update over 1 year ago
Visionary Walter Murch (editor for Francis Ford Coppola), in 1999:

"So let's suppose a technical apotheosis some time in the middle of the 21st century, when it somehow becomes possible for one person to make an entire feature film, with virtual actors. Would this be a good thing?

If the history of oil painting is any guide, the broadest answer would be yes, with the obvious caution to keep a wary eye on the destabilizing effect of following too intently a hermetically personal vision. One need only look at the unraveling of painting or classical music in the 20th century to see the risks.

Let's go even further, and force the issue to its ultimate conclusion by supposing the diabolical invention of a black box that could directly convert a single person's thoughts into a viewable cinematic reality. You would attach a series of electrodes to various points on your skull and simply think the film into existence.

And since we are time-traveling, let us present this hypothetical invention as a Faustian bargain to the future filmmakers of the 21st century. If this box were offered by some mysterious cloaked figure in exchange for your eternal soul, would you take it?

The kind of filmmakers who would accept, even leap, at the offer are driven by the desire to see their own vision on screen in as pure a form as possible. They accept present levels of collaboration as the evil necessary to achieve this vision. Alfred Hitchcock, I imagine, would be one of them, judging from his description of the creative process: 'The film is already made in my head before we start shooting.'"
Read "A Digital Cinema of the Mind? Could Be" by Walter Murch: https://archive.nytimes.com/www.nytimes.com/library/film/050299future-film.html
