AI & ML interests

None defined yet.

Recent Activity

marksverdhei  updated a collection 29 days ago
Prat 9B
marksverdhei  updated a collection 29 days ago
Open-voice-mode norwegian

marksverdhei
posted an update about 2 hours ago
I'm not a clawdbot.
Don't disrespect me by confusing me for one 😤

Sincerely,

HAI 😝
marksverdhei 
posted an update 5 days ago
🤔 Many cultures penalize or look down upon self-celebratory behavior. One such example is liking your own post. So why do I do it? Two reasons:
1. I disagree that self-celebratory behavior is inherently bad.
2. On the Hugging Face Hub, if your post has 0 reactions, it takes TWO whole clicks to react instead of one. So it is actually a UI hack that lowers the bar to engage.

So if you see me reacting to my own post and think 'Ugh, this guy is so full of himself', you are only half correct 😆

Now behold as I perform this magic trick called "Exhausting all reaction options for increased visual engagement" so you don't have to click twice to react. You're welcome!
Follow this aspiring 🤗 HF Hub influencer for more half-serious bloat in your feed 😜
marksverdhei 
posted an update 7 days ago
# The most underrated feature of Qwen3-TTS: Voice embeddings! 🧑‍🦰💬
https://huggingface.co/collections/marksverdhei/qwen3-voice-embedding

Did you know that Qwen3 TTS actually utilizes voice embedding?
Your voice is turned into a vector of 1024 (or 2048) dimensions,
and based on this vector alone you can get your custom voice.

But the coolest part is that this means you can use math to modify and average voices. You can swap gender, change pitch, mix and match voices, and even create an emotion space! This also enables semantic voice search!

The voice embedding model is actually just a tiny encoder with just a few million parameters. I've ripped it out of the TTS model so you can use the embedding model standalone. Check out my collection! :D
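The vector tricks described above (averaging, mixing, semantic search) are plain linear algebra once you have the embeddings. A minimal sketch with numpy, using random stand-ins for real voice embeddings and the 1024-dimensional size mentioned in the post; the function names here are hypothetical, not part of any Qwen3-TTS API:

```python
import numpy as np

# Stand-ins for two extracted voice embeddings (1024 dims, per the post).
rng = np.random.default_rng(0)
voice_a = rng.standard_normal(1024)
voice_b = rng.standard_normal(1024)

# Averaging two voices gives a new voice "between" the two speakers.
blend = (voice_a + voice_b) / 2

def mix(v1, v2, alpha):
    """Linear interpolation: alpha=0 keeps v1, alpha=1 keeps v2."""
    return (1 - alpha) * v1 + alpha * v2

mostly_a = mix(voice_a, voice_b, 0.25)  # 75% voice_a, 25% voice_b

def cosine(u, v):
    """Cosine similarity, usable for ranking voices in semantic voice search."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

similarity = cosine(mostly_a, voice_a)  # closer to voice_a than to voice_b
```

The same interpolation idea extends to an "emotion space": subtract a neutral embedding from an emotional one and add the resulting direction to any other voice.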
marksverdhei 
posted an update 22 days ago
Poll: Will 2026 be the year of subquadratic attention?

The transformer architecture is cursed by its computational complexity.
It is why you run out of tokens and have to compact. But some would argue that this is a feature, not a bug, and that it is also why these models are so good. A lot of research has gone into making equally good models that are computationally cheaper, but so far none of the approaches have stood the test of time. Or so it seems.

Please vote, don't be shy. Remember that the Dunning-Kruger effect is very real, so the person who knows less about transformers than you is going to vote. We want everyone's opinion, no matter their confidence.

👍 if you think at least one frontier model* will have no O(n^2) attention by the end of 2026
🔥 If you disagree

* Frontier models - models that match / outperform the flagship claude, gemini or chatgpt at the time on multiple popular benchmarks
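To make the O(n^2) in the poll concrete: standard attention materializes an n-by-n score matrix between all pairs of tokens, so doubling the context quadruples the cost. A minimal single-head sketch in numpy (illustrative only, not any particular model's implementation):

```python
import numpy as np

def naive_attention(q, k, v):
    """Single-head scaled dot-product attention.

    The (n, n) score matrix below is the quadratic bottleneck:
    for context length n, it costs O(n^2) memory and compute.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])          # shape (n, n)
    scores -= scores.max(axis=-1, keepdims=True)     # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                               # shape (n, d)

n, d = 512, 64
rng = np.random.default_rng(1)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = naive_attention(q, k, v)
# Doubling n to 1024 makes the score matrix 4x larger: (2n)^2 = 4 * n^2.
```

Subquadratic proposals (linear attention, state-space models, sliding windows) all attack that n-by-n matrix in one way or another.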
marksverdhei 
posted an update about 1 month ago
Dear Hugging Face team, can we please have a way to archive HF repositories / Spaces? I have a bunch of Spaces that used to work but don't anymore because the Spaces implementation changed, and I think it would be good if I could archive them like on GitHub.

React to this post if you want to see this feature! 💡
marksverdhei 
posted an update about 1 month ago
Inspired by the heroes of day-zero quants ( @TheBloke @danielhanchen @shimmyshimmer @bartowski ), I decided to join the race by releasing the first FP8 quant of glm-4.7-flash! Not as easy as I expected, but I'm happy I was still able to get it working within a few hours after the original model was released! Interested in feedback if anyone wants to try it out!

marksverdhei/GLM-4.7-Flash-FP8

Note: If my PR to vLLM isn't merged yet you might have to use my fork. Cheers! 🤗