Please don't forget to release the base model of Qwen3.5-27B
#25 opened about 1 hour ago by Linksome
pure transformers inference code is needed
👀 👍 4
1
#24 opened 1 day ago by maltoseflower
When?
👀 3
#23 opened 1 day ago by AxionLab-official
as a starter
#22 opened 2 days ago by kit17
Upload 3 files
#21 opened 2 days ago by shamamanaeem03
Model overthinking
👍 3
#20 opened 3 days ago by cse2011
vLLM version conflict issue
3
#19 opened 3 days ago by innosynth
Are there detailed metrics for BFCLV4 and tau2?
#18 opened 3 days ago by HuanchangLiu
Thinking Blocks Error
5
#16 opened 5 days ago by TronixAT
Any plans on releasing a 4-bit quantized version?
👀 👍 11
#15 opened 5 days ago by omarsou
Qwen, when is the 8B coming out?
4
#14 opened 5 days ago by crownelius
vLLM serving issue
3
#13 opened 6 days ago by grozatech
Russian language support: bad grammar!
12
#12 opened 6 days ago by alexcardo
Add evaluation results
#11 opened 6 days ago by SaylorTwift
Availability of the Qwen3.5-27B Base model
👀 👍 13
1
#9 opened 6 days ago by Jellyfish042
ValueError: Model architectures ['Qwen3_5ForConditionalGeneration'] are not supported for now (Transformers version 5.3.0.dev0)
👀 13
4
#8 opened 6 days ago by rameshch
Installation and testing video, step by step
🔥 3
#6 opened 6 days ago by fahdmirzac
Amazing model
❤️ 7
2
#5 opened 7 days ago by Tugay31
Is HLE w/ CoT a typo?
#4 opened 7 days ago by xiaoqianWX
Samplers
1
#3 opened 7 days ago by ggnoy
DENSE
🔥 5
2
#2 opened 7 days ago by ox-ox
Please, please don't forget to also open-source Qwen Image 2.0; it would be a huge change for us local users :-)
🧠 1
#1 opened 7 days ago by Hanswalter