Albert Yu
MRU4913
AI & ML interests
None yet
Recent Activity
- Liked a model 2 days ago: fishaudio/s2-pro
- New activity 3 days ago in llmfan46/Qwen3.5-27B-heretic-v2: "fp8 version thx"
- New activity 3 days ago in edp1096/Huihui-Qwen3.5-27B-abliterated-FP8: "Do you train this on your own?"

Organizations
fp8 version thx · 1 reply · #6 opened 3 days ago by MRU4913
Do you train this on your own? · 1 reply · #1 opened 4 days ago by MRU4913
ltxav_te() got an unexpected keyword argument 'llama_quantization_metadata' · #49 opened about 1 month ago by MRU4913
ltxav_te() got an unexpected keyword argument 'llama_quantization_metadata' · 1 reply · #79 opened about 1 month ago by MRU4913
Share tool parser / thinking parser for vLLM? · #8 opened about 1 month ago by MRU4913
Which RL is used by command-a? · 2 replies · #3 opened 5 months ago by MRU4913
reasoning on / off? · 2 replies · #5 opened about 2 months ago by TahirC
4bit plz · 1 reply · #1 opened about 2 months ago by MRU4913
Do you have any plan to quantize such models to fp8/nvfp4? · 🔥 1 · 2 replies · #4 opened 4 months ago by luoxiao9231
Update chat_template.jinja · 2 replies · #4 opened 4 months ago by alexrs
Incredible work, and just a few questions! · 2 replies · #6 opened 6 months ago by MRU4913
Enable thinking does not work as expected while using vLLM · 1 reply · #16 opened 6 months ago by MRU4913
Does vLLM support this model? · 👍 3 · 1 reply · #2 opened 7 months ago by MRU4913
Just admit you train on the benchmark datasets · 👍 👀 18 · 8 replies · #7 opened 8 months ago by ChuckMcSneed
Reproduce this work · 4 replies · #3 opened 9 months ago by MRU4913
Generation parameters · 2 replies · #15 opened 10 months ago by MRU4913
Release bases of deprecated models · 👍 5 · 2 replies · #8 opened 12 months ago by ChuckMcSneed
Can you provide command-a 2025 gptq? · #9 opened 11 months ago by MRU4913
Can you provide command-a 2025 gptq? · #8 opened 11 months ago by MRU4913