TonoKen3
sakamakismile
AI & ML interests: Lna-Lab inc.
Recent Activity
updated a model about 15 hours ago: sakamakismile/Huihui-Mistral-Medium-3.5-128B-abliterated-NVFP4
updated a collection 1 day ago: MyFavor
updated a collection 2 days ago: MyFavor
Organizations
None yet
Broken chat_template tool calling and thinking
14
#9 opened 18 days ago by livepeer-ren
[Suggestion] Consider AEON-7's BF16 base as an alternative to huihui-ai
4
#1 opened 20 days ago by bemoons
SM120 (RTX 5090D) field report + KVTC container availability
1
#1 opened 17 days ago by bemoons
Weirdly no perf gain
5
#1 opened 22 days ago by Orosius
Docker image lna-lab/gemma4-inference:latest does not exist
1
#1 opened 23 days ago by Vossk123
--enable-prefix-caching and Feedback from HGX B200 Deployment
2
#5 opened 20 days ago by cgelias
Home Lab Recipe on Blackwell GPU
1
#10 opened 19 days ago by SyferO
New activity in huihui-ai/Huihui-Qwen3-Coder-Next-Opus-4.6-Reasoning-Distilled-abliterated 20 days ago
Heads-up: BF16 weights appear to produce degenerate outputs (logits collapsed)
1
#1 opened 20 days ago by sakamakismile
Does it support tool calling?
2
#4 opened 21 days ago by xing120226
RTX PRO 4500 Blackwell results
1
#2 opened 22 days ago by Pulsate1680
abliterated?
2
#8 opened 22 days ago by celikburak
MTP not responding
9
#7 opened 22 days ago by joelafrite
claude code
2
#6 opened 23 days ago by chrisqianz
Qwen3.6-27B-NVFP4 is slower than official FP8 on Blackwell; possible fallback / FLA path mismatch on vLLM
👀 2
8
#5 opened 23 days ago by garrussun
sglang
1
#4 opened 24 days ago by livepeer-ren
vLLM deployment has problems
3
#3 opened 24 days ago by xing120226
HOLY $#!T...
❤️🚀 4
2
#1 opened 24 days ago by Bellesteck
Regression on benches
3
#2 opened 20 days ago by selimaktas
Missing Another Trick?
2
#3 opened 22 days ago by odorizhou