freegheist
AI & ML interests
None yet
Recent Activity
- liked a model about 1 month ago: QuantTrio/Qwen3-Coder-Next-E400
- new activity about 2 months ago in zai-org/GLM-4.7-Flash: "Enormous KV-cache size?"
- liked a model 3 months ago: unsloth/GLM-4.7-GGUF

Organizations
Enormous KV-cache size? · 👍➕ 6 · 23 comments · #3 opened about 2 months ago by nephepritou
FP16 Weights · 4 comments · #19 opened 3 months ago by orendar
Int8 Mix · #1 opened 3 months ago by freegheist
2507 version · 2 comments · #1 opened 5 months ago by freegheist
Quant Size · 8 comments · #2 opened 6 months ago by ortegaalfredo
gibberish output · 2 comments · #1 opened 5 months ago by freegheist
Multiple chat template fixes · 🚀❤️ 7 · 14 comments · #2 opened 6 months ago by danielhanchen
Corrected Jinja template with tool support; works with llama.cpp/pull/15186 · 2 comments · #5 opened 7 months ago by xbruce22
UD-Q4_K_XL matches bf16 with 60.9% vs 61.8% on Aider Polyglot benchmark · 🔥👍 9 · 3 comments · #8 opened 8 months ago by Fernanda24
Error loading in vLLM 10.1+ · 👍 1 · 2 comments · #2 opened 7 months ago by freegheist
Larger versions · 2 comments · #1 opened 7 months ago by freegheist
Upgrade to 1M context · 5 comments · #4 opened 7 months ago by freegheist
Is it possible to create a version without the MTP layer to save some VRAM? · 1 comment · #1 opened 8 months ago by adonishong
INT 8 · #2 opened 11 months ago by freegheist
<think> in generation output · 3 comments · #2 opened 11 months ago by wvangils
Tell me how you feel about this model without telling me how you feel about this model · 4 comments · #5 opened about 1 year ago by MrDevolver
Question About VRAM Requirements for Full 256K Context Length · 2 comments · #7 opened about 1 year ago by mullerse
Can you provide the metrics? · 1 comment · #5 opened over 1 year ago by 6cf
BF16 weights? · 👍 7 · 5 comments · #1 opened over 1 year ago by mpasila
WizardLM-8x22B Evaluation failed · 👍➕ 28 · 25 comments · #823 opened over 1 year ago by llama-anon