AI & ML interests

None defined yet.

Recent Activity

yuriyvnv 
posted an update 5 days ago
📄 The WAVe paper is officially out in the journal Information Sciences.

You saw the PT and NL model releases earlier this year. This is the peer-reviewed paper behind them, with the full method, ablations, and downstream ASR evaluation.

Quick recap: WAVe is a 1B multimodal embedding model that filters synthetic speech at the word level, not the sentence level. On Portuguese ASR it cuts training steps by 34%, improves cross-domain generalization by 50%, and matches WER with 30% less synthetic data.
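
In case it helps to picture the word-level filtering: here is a minimal sketch of the idea, not the released WAVe API. Given aligned per-word audio and text embeddings, score each pair with cosine similarity and keep the clip only when every word clears a threshold. The shapes and the threshold below are made up.

```python
import torch
import torch.nn.functional as F

def keep_utterance(audio_word_emb: torch.Tensor,
                   text_word_emb: torch.Tensor,
                   threshold: float = 0.6) -> bool:
    """Keep a synthetic clip only if every word aligns with its transcript word."""
    # Cosine similarity between matched audio/text word embeddings -> (num_words,)
    scores = F.cosine_similarity(audio_word_emb, text_word_emb, dim=-1)
    return bool(scores.min() >= threshold)

# Toy stand-ins for the model's outputs: 7 words, 1024-dim embeddings.
audio_emb = torch.randn(7, 1024)
text_emb = torch.randn(7, 1024)
print(keep_utterance(audio_emb, text_emb))
```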

📦 Resources
- Paper: https://www.sciencedirect.com/science/article/pii/S0020025526005220
- PT model: yuriyvnv/WAVe-1B-Multimodal-PT
- NL model: yuriyvnv/WAVe-1B-Multimodal-NL
- Collection: https://huggingface.co/collections/yuriyvnv/multi-modal-embeddings-for-synthetic-transcript-filtering
- Code: https://github.com/yuriyvnv/WAVe

If you train ASR on synthetic or back-translated data, I would love to see WAVe benchmarked on other languages.

@reach-vb @ylacombe @hf-audio @BramVanroy

#speech #asr #multimodal #syntheticdata #lowresource
Crownelius 
posted an update 5 days ago
Day 4-6 [05/05/2026]
Howdy,

Is anybody else willing to put a second mortgage on their house just to spend 40k USD in compute credits? Just me? k...

I got dreams, man. The datasets I could build with 40k would be insane.
Somebody called me a genius the other day; they'd be shocked to find out that I would put my house on the line for 30 days of RunPod usage.

What would you do with it?
I would turn arXiv into a dataset. Turn each arXiv paper into a QnA.
Or... maybe if I got 40k USD in credits I'd end up like those 16 lost scientists.

Food for thought.
Anyways, I think I'm going to make a post once a week.
In the meantime you can find me building small LLMs on Discord here:
https://discord.gg/4DdwS9D8x9
Crownelius 
posted an update 9 days ago
Day 3 - 05/02/2026
Scamp ships, hits the wall. New plan...

Scamp came back from training today... Didn't go so well, I'm still unsure...

Fast benchmark, temperature 0.7, top_p 0.9:
- "Capital of France is" produced "covered by the Crown" (grammatical, factually wrong)
- "23 + 19 = ?" produced "23. Answer: 23. Answer: 23..." (loops, math broken)
- "def fibonacci(n):" produced a list of letters

It speaks English. It can't reason. At 8K vocab and 50M params, it was never going to.

Next build: 412M MoE-3E. Three experts (math, language, code), top-1 routing, random init, let specialization emerge from gradient signal alone. Tried seeded Branch-Train-MiX first, then dropped it. It adds compute for no clear win when the router will find its own attractors anyway.
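
For readers who haven't built one, top-1 routing is only a few lines. A toy sketch below; the dimensions and plain-MLP experts are mine, not the actual Crowfeather-412M-3E blocks.

```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    """Three randomly initialised experts, hard top-1 routing per token."""
    def __init__(self, d_model=512, d_ff=2048, num_experts=3):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                            # x: (batch, seq, d_model)
        expert_idx = self.router(x).argmax(dim=-1)   # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i                   # tokens routed to expert i
            if mask.any():
                out[mask] = expert(x[mask])
        return out

print(Top1MoE()(torch.randn(2, 16, 512)).shape)
```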

Big lesson today came from limit testing on an A100 80GB. Surprise: every planned phase ran out of memory, even on 80GB. Root cause: at vocab 262144 (Gemma 3 standard), the output logits dominate during forward and backward. Fix: Liger Kernel's fused cross-entropy. It streams the loss computation instead of materialising the full B × T × vocab logits tensor. Without it the build would not run.
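
Not the Liger kernel itself, but the memory argument is easy to see with a plain PyTorch sketch: project to the vocab and compute the loss one token chunk at a time, so only a (chunk, vocab) slice of logits exists at once. The real fused kernel goes further and also avoids keeping each chunk's logits around for backward.

```python
import torch
import torch.nn.functional as F

def chunked_ce_loss(hidden, lm_head_weight, labels, chunk_size=512):
    """hidden: (N, d), lm_head_weight: (vocab, d), labels: (N,)."""
    total, count = hidden.new_zeros(()), 0
    for start in range(0, hidden.size(0), chunk_size):
        h = hidden[start:start + chunk_size]
        y = labels[start:start + chunk_size]
        logits = h @ lm_head_weight.T          # only (chunk, vocab) is materialised
        total = total + F.cross_entropy(logits, y, reduction="sum")
        count += y.numel()
    return total / count

# Toy shapes; the post's vocab of 262144 is what makes the full tensor blow up.
h = torch.randn(2048, 256)
w = torch.randn(32000, 256)
y = torch.randint(0, 32000, (2048,))
print(chunked_ce_loss(h, w, y))
```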

Scamp proved the pipeline runs end-to-end on real hardware. The 412M run starts tomorrow. If routing balances naturally and math finally crystallises, it ships as Crowfeather-412M-3E with GGUF in F16, Q8, Q5, and Q4.

So... the training might have produced a poet if I had done it better. But I didn't, so instead... we get a malformed robot named Scamp... This is progress.

-Shane

P.S. Join the Discord for discussion: https://discord.gg/8ZscHNmJYE
I post my finished stuff here: CompactAI-O
juiceb0xc0de 
posted an update 9 days ago
I'm not obsessed with LR schedulers, you are.

juiceb0xc0de/lr-scheduler-benchmark

Okay, maybe I'm a little obsessed with LR schedulers ATM. I ran an SST-2 sentiment classification eval using the nyu-mll/glue dataset on distilbert/distilbert-base-uncased (67M params) to see how different schedulers perform.
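
A quick way to compare how the stock schedules shape the LR before swapping in a custom one; the warmup length and step count below are arbitrary.

```python
import torch
from transformers import get_scheduler

steps = 1000
for name in ["linear", "cosine", "constant_with_warmup"]:
    opt = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=2e-5)
    sched = get_scheduler(name, opt, num_warmup_steps=100, num_training_steps=steps)
    lrs = []
    for _ in range(steps):
        lrs.append(sched.get_last_lr()[0])     # record LR before each step
        opt.step()
        sched.step()
    print(f"{name:>22}: peak={max(lrs):.2e}, final={lrs[-1]:.2e}")
```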

I think I've graduated from ML enthusiast to full blown data hoarder and I don't know if I can turn back now.

Anyways, I evaluated the two schedulers that I designed as well and was pretty happy with the performance of both overall, so hell yeah to that. Guess I'll go grab some more graphs.

https://github.com/JuiceB0xC0de/aecs-scheduler.git
https://github.com/JuiceB0xC0de/lucky-pick-scheduler.git

nyu-mll/glue
distilbert/distilbert-base-uncased
Crownelius 
posted an update 10 days ago
[DAY TWO] PROJECT CROWFEATHER - 5/1/2026
Que sera, what will he be?

Step 47,500 of 100,000. Loss hovering around 2.76 on 6.2B tokens. Throughput steady at 87k tokens per second on the A100. Not a GH200, but she gets it done.

Still haven't named him. Scamp has a rascally charm. Quentin sounds like he'd wear a bow tie and think hard before speaking. Taking votes.

Phase two is what's keeping me up. Datasets everywhere and I can't pick. I'm fusing Google and DeepSeek's ideas: Gemma 4's alternating sliding and global attention, DeepSeek V4's Muon optimizer and WSD scheduler, Gemma 2's logit soft cap, and PaLM's z-loss. Sounds like peanut butter on a hamburger, but the loss curve says it works.
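
Two of those ingredients fit in a few lines of plain PyTorch: Gemma-2-style logit soft-capping and PaLM-style z-loss. The cap of 30.0 and the 1e-4 z-loss weight are the commonly cited defaults, not necessarily what this run uses.

```python
import torch
import torch.nn.functional as F

def soft_cap(logits, cap=30.0):
    # Squashes logits smoothly into (-cap, cap) while staying differentiable.
    return cap * torch.tanh(logits / cap)

def lm_loss_with_z(logits, labels, z_coef=1e-4):
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))
    # z-loss pulls log(Z) = logsumexp(logits) toward 0, keeping logits well scaled.
    log_z = torch.logsumexp(logits, dim=-1)
    return ce + z_coef * (log_z ** 2).mean()

logits = soft_cap(torch.randn(2, 16, 32000) * 50)
labels = torch.randint(0, 32000, (2, 16))
print(lm_loss_with_z(logits, labels))
```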

Tribe_v2 has real potential but needs more scaffolding than a barn raising before I throw it in. One thing's certain though. This model's gonna be a thinker. Not a Wikipedia parrot. Something that chews before it answers.

Finally got a use for my less popular datasets too. Some Opus-4.5-Writing-Style for polish. A few rows of Human-Archtypes-25k to see what personality bubbles up. Could be a poet, could be a grump. Either beats a flimsy fine-tune.

The bank's after my credit card. Until then, full steam.

Next model gets graphs. I swear.

-Shane
juiceb0xc0de 
posted an update 10 days ago
Okay, I may have been talking out of my ass about my scheduler using less VRAM compared to an FFT. What I did find though: training only ~30% of the model's weights per step consistently beat dense SFT on Hendrycks Math across 3 different seeds.

What makes it interesting isn't just the sparsity — it's that no two consecutive windows share the same active layers. The model never has a stable path from input to output decision. Adjacent layers are rarely both alive at the same time, so the model can't build shortcuts between them. I started developing this to reduce semantic redundancy across layers and stumbled onto something I didn't expect.
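
Here is my rough reading of that recipe as code, not the actual lucky-pick implementation: each window unfreezes a random ~30% of layers chosen to be disjoint from the previous window's set, so consecutive windows never train the same layers.

```python
import random
import torch.nn as nn

def pick_active_layers(num_layers, frac=0.3, previous=frozenset()):
    candidates = [i for i in range(num_layers) if i not in previous]
    k = max(1, int(num_layers * frac))
    return set(random.sample(candidates, k))

def apply_layer_mask(layers: nn.ModuleList, active: set):
    for i, layer in enumerate(layers):
        for p in layer.parameters():
            p.requires_grad = i in active      # freeze everything outside the window

# Toy model: 12 identical blocks; rotate the active set every "window".
layers = nn.ModuleList(nn.Linear(64, 64) for _ in range(12))
prev = frozenset()
for window in range(3):
    active = pick_active_layers(len(layers), previous=prev)
    apply_layer_mask(layers, active)
    print(f"window {window}: training layers {sorted(active)}")
    prev = frozenset(active)
```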

Results (0-shot, hendrycks_math exact match):

Dense SFT baseline: 0.0098
DeepChaos seed 1: 0.0142 (+45%)
DeepChaos seed 2: 0.0156 (+59%)
DeepChaos seed 3: 0.0138 (+41%)

Setup: Qwen2.5-3B-Instruct, simplescaling/s1K (1k reasoning traces), 5 epochs, LR 1e-5, the adamw_torch_fused optimizer, and a cosine schedule with my lucky-pick scheduler, all on an AMD MI300X 192GB.
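
Expressed as Hugging Face TrainingArguments, the stock parts of that setup would look roughly like this; the lucky-pick scheduler itself is a custom class and isn't shown, and the output path is a placeholder.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="qwen2.5-3b-s1k-sft",   # placeholder path
    num_train_epochs=5,
    learning_rate=1e-5,
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
)
```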

The scheduler is still a work in progress but the current version is fully operational. You can check it out at:
https://github.com/JuiceB0xC0de/lucky-pick-scheduler

I would love to hear your experiences with sparsity training!
Crownelius 
posted an update 11 days ago
[DAY ONE] PROJECT CROWFEATHER 4/30/2026
...The day I forgot to attach wandb.ai
Just dropped Crowfeather-50m, the first checkpoint in a series, and yeah, no graphs.

Crowfeather/Crowfeather-50m

54.5M params. Pretrain only. 17,500 steps banked on FineWeb-edu before Thunder credits ran dry. About 2.3B tokens, no SFT yet.

Architecture: Gemma-4 alternating sliding/global attention (1024 window, last layer always global), the DeepSeek-V4 Muon optimizer, a WSD scheduler, Gemma-2 logit soft-cap, and PaLM z-loss. Recipe in the model card.
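
In mask form, that attention pattern looks roughly like the sketch below; the 1024 window and the always-global last layer come from the recipe, while the even/odd alternation rule is my assumption.

```python
import torch

def causal_mask(seq_len, window=None):
    i = torch.arange(seq_len).unsqueeze(1)     # query positions
    j = torch.arange(seq_len).unsqueeze(0)     # key positions
    allowed = j <= i                           # causal
    if window is not None:
        allowed &= (i - j) < window            # sliding window of `window` tokens
    return allowed

def layer_mask(layer_idx, num_layers, seq_len, window=1024):
    # Global attention on alternating layers, and always on the last layer.
    use_global = (layer_idx % 2 == 1) or (layer_idx == num_layers - 1)
    return causal_mask(seq_len, window=None if use_global else window)

print(layer_mask(0, 12, 2048).sum().item(), layer_mask(11, 12, 2048).sum().item())
```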

What it can do: writes grammatical English. Knows that France has Rhine-adjacent monasteries (it picked Rouen instead of Paris but the vocabulary is in there). Tells stories about Mr. Fabien.

What it can't do yet: facts, code, math. Base LM, no SFT, no instruction tuning.

The series:
- Every additional training run becomes another model card here
- Every model card gets a matching post on this profile
- Continuation goes to Colab next, picking up from step 17,500 out of 100k

Limited to one post a day on Hugging Face, so updates will trickle out at that pace. Follow @Crownelius and @Crowfeather if you want to watch this thing learn in public. Next drop will either come with the finished pre-train or whatever step I land on before the bank takes my credit card away.

Graphs will be available on my NEXT model lol

-Shane

mrmanna 
posted an update 11 days ago
Fossilized Intelligence: The Dead Knowledge Crisis in AI
The next AI risk is not only hallucination. It is obsolete knowledge carried forward as living truth.
https://medium.com/ai-advances/fossilized-intelligence-the-dead-knowledge-crisis-in-ai-4a2e0aac553f?sk=3246cf154bd3d5606cdcbcf316e3bd78

Dead knowledge.
By dead knowledge, I do not mean information that is simply false. I mean knowledge that once had validity but has lost its right to guide present action.
Crownelius 
posted an update 12 days ago
My Hugging Face journey has been a trip!
I wanted to take the time to thank each and every one of you for using my dataset and getting it to go as far as it did. Believe it or not, some neanderthal was, and maybe still is, trending on Hugging Face.

Not only did my dataset reach number one, my fine-tuned Qwen3.5 model cracked the top 10 as well. Honestly, ain't much left to do here.

Y'all have given me the desire, no... the craving for more. I am absolutely obsessed with AI now. I want to tweak it... I want to take it apart, just to see what makes everything tick. I want to put it together like Frankenstein and his monster.

The only thing that's stopping this guy is compute. I don't mind spending every penny I have on this. I desperately want to drive AI forward, even just a little bit.

I never knew the clanker hater from a year ago would be saying this.

Thank you all from the bottom of my heart.

Looking forward to showing you what I'm cooking up next. @CompactAI is your only hint!
yuriyvnv 
posted an update 14 days ago
🔊 Four Qwen3-ASR (0.6B and 1.7B) Fine-Tunes for Portuguese and Dutch.

Both the 1.7B and 0.6B variants of Alibaba's Qwen3-ASR, fine-tuned for European Portuguese and Dutch and bundled in a single collection.

🔗 Collection: https://huggingface.co/collections/yuriyvnv/qwen-asr-for-portuguese-and-dutch-17b-and-06b

Headline numbers — Common Voice 22 test, with the zero-shot baseline.
🇵🇹 Qwen3-ASR-1.7B-PT — 12.91% → 8.50% WER (-34%)
🇵🇹 Qwen3-ASR-0.6B-PT — 18.26% → 11.85% WER (-35%)
🇳🇱 Qwen3-ASR-1.7B-NL — 6.68% → 5.28% WER (-21%)
🇳🇱 Qwen3-ASR-0.6B-NL — 12.46% → 8.31% WER (-33%)

The 0.6B variants are the more interesting half of the release. They give up only a few WER points compared to the 1.7B at a third of the parameters — relevant for edge hardware, CPU inference, or anywhere that keeping inference cost down matters. The Dutch 0.6B in particular lands at 8.3% WER on CV22, competitive with much larger systems.

The Dutch 1.7B started from a strong 6.7% zero-shot, so the absolute gain is smaller — Qwen already handles Dutch well, and the fine-tune mostly sharpens it on Common Voice's casing and punctuation conventions.

Training stuck close to Qwen's official SFT recipe (lr 2e-5, linear schedule, 2% warmup, bf16, gradient checkpointing on a single H100). The data is the differentiator: Common Voice 22 train + validation augmented with synthetic OpenAI-TTS speech, filtered by the WAVe multimodal embedding model that scores clips at the word level and drops the ones that don't align well with their transcripts.
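
Expressed as Hugging Face TrainingArguments, that recipe would look roughly like this; batch size, data handling, and the output path are omitted or placeholders.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="qwen3-asr-pt-sft",     # placeholder path
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.02,                 # 2% warmup
    bf16=True,
    gradient_checkpointing=True,
)
```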

📦 Full pipeline — synthetic data generation, WAVe filtering, training scripts, evaluation protocol — is open-source:
github.com/yuriyvnv/TTS-Augmented-ASR
@hf-audio .
#asr #speech #parakeet #nvidia #nemo #multilingual #fine-tuning #commonvoice
Enderchef 
posted an update 15 days ago
Hi, everyone!
Please follow, like, and support the work of CompactAI-O!
Spread the word!
juiceb0xc0de 
posted an update 17 days ago
Okay, I had way too much fun trying to make the unsloth-bot hallucinate incorrect answers like so many frontier models have done to me in the past regarding fine-tuning and general machine learning. Learning to fine-tune LLMs could have been so much simpler had this been available when I began screwing around with neural networks.

10/10 recommend for beginners.

https://huggingface.co/unsloth/unsloth-bot