Mistral reasoning parser fails on startup with ValueError · 1 reaction · 3 replies · #27 opened 9 months ago by ArthurGprog
Docker file for vllm has been moved · #25 opened 10 months ago by emanuele-boscari
Update README.md · #23 opened 10 months ago by bullerwins
Add 'transformers' tag · #22 opened 10 months ago by betki
Add 'pytorch' tag · #21 opened 10 months ago by betki
Immaculate · #20 opened 10 months ago by annettedattolo
Max model len is 32768 when serving with vllm and not 40960 · 2 replies · #19 opened 10 months ago by f14
VLLM Reasoning parser · 1 reply · #17 opened 10 months ago by Rictus
Model always ends generation with \boxed{} · 1 reply · #16 opened 10 months ago by cbunivofutah
Model generating non-stop when used in Cline through vLLM · #15 opened 10 months ago by mhwang093
output issue · 3 replies · #14 opened 10 months ago by dnum-ia-unistra
No multimodal :c · 8 reactions · #13 opened 10 months ago by nicolollo
Greek Language · #12 opened 10 months ago by myrulezzz
Think token · 3 reactions · 1 reply · #11 opened 10 months ago by iyanello
Error with vLLM docker image · 2 replies · #10 opened 10 months ago by mhwang093
MMMU-Pro Vision with Magistral Small · 2 reactions · 3 replies · #9 opened 10 months ago by tomrance
GGUFS (correct) and BF16 - HF, Transformers, with correct tokenizers / jsons · 5 reactions · 1 reply · #8 opened 10 months ago by DavidAU
So this is just a SFT "distill" of Magistral-Medium? · 1 reaction · 6 replies · #6 opened 10 months ago by gghfez
tokenizer · 1 reaction · 4 replies · #5 opened 10 months ago by ctranslate2-4you
Missing Tokenizer/Processor for use with Transformers · 1 reaction · 5 replies · #3 opened 10 months ago by mgoin
Cool but where Magistral-Medium-2506 weights? · 18 reactions · 2 replies · #2 opened 10 months ago by celsowm