Kenshiro-28
AI & ML interests
None yet
Recent Activity
- New activity 4 days ago in DavidAU/Qwen3.6-27B-Heretic-Uncensored-FINETUNE-NEO-CODE-Di-IMatrix-MAX-GGUF: "Thank you"
- New activity 7 months ago in Sao10K/Lmao_life_updates: "24/10/25"
- New activity 7 months ago in mistralai/Magistral-Small-2509-GGUF: "[THINK] token not recognized in llama-cpp-python"
Organizations
None yet
New activity in DavidAU/Qwen3.6-27B-Heretic-Uncensored-FINETUNE-NEO-CODE-Di-IMatrix-MAX-GGUF 4 days ago
Thank you
2 reactions · 1 comment
#3 opened 4 days ago by Kenshiro-28
New activity in Sao10K/Lmao_life_updates 7 months ago
24/10/25
#9 opened 7 months ago by Kenshiro-28
New activity in mistralai/Magistral-Small-2509-GGUF 7 months ago
[THINK] token not recognized in llama-cpp-python
2 reactions · 3 comments
#1 opened 8 months ago by Kenshiro-28
Best wishes
❤️ 1
#4 opened 12 months ago by Kenshiro-28
Release a 70B version
#77 opened about 1 year ago by Kenshiro-28
Request: Create distill of Mistral Small 24B
3 comments
#128 opened over 1 year ago by Kenshiro-28
Prompt format
6 comments
#8 opened over 1 year ago by Kenshiro-28
A request for a reasoning model
1 reaction · 2 comments
#1 opened over 1 year ago by santosgamer01
CORRECTION: THIS SYSTEM MESSAGE IS ***PURE GOLD***!!!
17 reactions · 16 comments
#33 opened over 1 year ago by jukofyork
"llama.cpp error: 'error loading model vocabulary: unknown pre-tokenizer type: 'dolphin12b''"
1 reaction · 23 comments
#1 opened almost 2 years ago by jrell
Feedback
13 reactions · 14 comments
#1 opened almost 2 years ago by TravelingMan
It uses too much RAM for a 7B model
3 comments
#1 opened about 2 years ago by Kenshiro-28
Bad EOS
2 comments
#5 opened over 2 years ago by Kenshiro-28
Bad EOS
2 comments
#6 opened over 2 years ago by Kenshiro-28
Bad EOS
#1 opened over 2 years ago by Kenshiro-28
The model doesn't report the ChatML EOS token
2 comments
#1 opened over 2 years ago by Kenshiro-28
ChatML prompt format confusion - please reconsider
9 reactions · 36 comments
#3 opened over 2 years ago by kalomaze
The best model
1 reaction · 3 comments
#1 opened over 2 years ago by Kenshiro-28
Change prompt format to Vicuna v1.1
2 comments
#7 opened almost 3 years ago by Kenshiro-28