Mayx (Mabbs)
AI & ML interests: Everything
Recent Activity
- Updated the Space Mabbs/blog (3 days ago)
- Updated the Space Mabbs/blog (20 days ago)
- New activity in vito95311/Qwen3-Omni-30B-A3B-Thinking-GGUF-INT8FP16 (4 months ago): "Why are the int8 and fp16 model sizes both 31 GB?"
Organizations: None yet