Uzumaki
Narutoouz
AI & ML interests
None yet
Recent Activity
- liked a model 1 day ago: nvidia/Efficient-DLM-4B
- liked a model 3 days ago: Youssofal/Qwen3.6-27B-MTPLX-Optimized
- liked a model 7 days ago: OsaurusAI/Nemotron-3-Nano-Omni-30B-A3B-JANGTQ2
The Ling team has done a wonderful job
❤️🤗 3
#5 opened 9 days ago by Narutoouz
Please add an MLX-native version
👍 1
#6 opened 15 days ago by Narutoouz
Vmlx app is on fire, thanks dev for creating this quant
1
#1 opened 24 days ago by Narutoouz
Awesome quant - great performance on M4 Max
👍 1
2
#1 opened 25 days ago by Narutoouz
Please add support for Apple silicon inference
1
#4 opened 26 days ago by Narutoouz
Can you share benchmarks for this model?
1
#3 opened 26 days ago by Narutoouz
MLX-native multimodal and text-only variants, please
1
#7 opened 26 days ago by Narutoouz
Does this quant support image recognition?
👍 2
10
#1 opened 29 days ago by alexcardo
Thank you for making this open source!!
🤗🔥 22
10
#2 opened 27 days ago by AaryanK
Guys, please add MTP to this model
🔥 5
5
#50 opened 30 days ago by Narutoouz
Why does this 4-bit version have a 32.7 GB file size?
➕ 3
20
#3 opened about 1 month ago by alexcardo
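A rough size check for questions like the one above (a sketch, not taken from the thread): 4-bit group-quantization schemes such as MLX's default store about 4.5 effective bits per weight once per-group scales and biases are counted, so a ~32.7 GB file corresponds to a model of very roughly 58B parameters. The function name and the 4.5 bits/weight figure are illustrative assumptions:

```python
def quantized_size_gb(n_params: float, bits_per_weight: float = 4.5) -> float:
    """Estimate the on-disk size of a quantized model in GB.

    bits_per_weight is an assumed *effective* rate: 4-bit weight values
    plus the per-group scale/bias overhead typical of group quantization
    (e.g. group_size=64 with fp16 scale and bias gives 4.5 bits/weight).
    """
    return n_params * bits_per_weight / 8 / 1e9

# A ~58B-parameter model at ~4.5 effective bits/weight comes out near
# 32.6 GB, close to the 32.7 GB size asked about in the thread.
print(quantized_size_gb(58e9))
```

The estimate ignores unquantized tensors (embeddings or norms are sometimes kept in higher precision), which is one reason real files run slightly larger than this back-of-envelope number.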
Where is minimax 2.7?
🔥 2
9
#54 opened about 1 month ago by devops724
Can we get minimax-m2.7?
🤗 13
5
#49 opened about 2 months ago by CHNtentes
Ideal sampling parameters to reproduce benchmarks
1
#3 opened about 1 month ago by Narutoouz
Feature Request: TFLite Q4/Q6/Q8 Quantizations for Nanbeige4.1-3B
1
#42 opened about 2 months ago by Narutoouz
Need support for MLX inference
1
#1 opened about 2 months ago by Narutoouz
Please upload benchmarks
1
#2 opened about 2 months ago by Narutoouz
mlx-lm support
👍 1
#7 opened 2 months ago by Narutoouz
Any Plans for an Instruct Model?
🤗🔥 6
6
#15 opened 3 months ago by Ashacorporation
Model "thinks" for too long
👍 3
11
#12 opened 3 months ago by Moisha1985