Victor C.
dehnhaide
AI & ML interests: None yet
Recent Activity
- New activity about 3 hours ago on mratsim/MiniMax-M2.5-BF16-INT4-AWQ: "How many GPUs for 8 or higher concurrency using an RTX 3090 rig?"
- New activity 1 day ago on noctrex/Qwen3.5-122B-A10B-MXFP4_MOE-GGUF: "Kind request for Qwen3.5-397B-A17B MXFP4 BF16"
- Liked a model 1 day ago: noctrex/Qwen3.5-397B-A17B-MXFP4_MOE-GGUF

Organizations: None yet
How many GPUs for 8 or higher concurrency using an RTX 3090 rig?
1
#9 opened 5 days ago by BiggestFox
Kind request for Qwen3.5-397B-A17B MXFP4 BF16
7
#2 opened 1 day ago by dehnhaide
Accuracy
17
#4 opened 14 days ago by ktsaou
Testing IQ4_NL
5
#12 opened 13 days ago by shewin
Request for support: improved model fit
2
#9 opened 3 days ago by dehnhaide
Testing IQ4_KSS
3
#5 opened 3 days ago by shewin
Can't get it to work on 8x RTX 3090
14
#1 opened 14 days ago by maglat
Crashes on 8x RTX 3090
3
#1 opened 7 days ago by dehnhaide
"w1_weight_scale_2 must match w3_weight_scale_2. Accuracy may be affected."
20
#2 opened 14 days ago by zenmagnets
Looking forward to IQ4_XS!
24
#1 opened 15 days ago by tarruda
IQ5_K 136.891 GiB
18
#9 opened 20 days ago by Hunterx
Kind request
5
#1 opened about 1 month ago by dehnhaide
Kind request: GLM-4.7-Flash
2
#6 opened about 1 month ago by dehnhaide
ValueError: Unsupported weight strategy=block, supported strategies are [<QuantizationStrategy.CHANNEL: 'channel'>, <QuantizationStrategy.TENSOR: 'tensor'>]
8
#5 opened about 2 months ago by ablueleaf
Looking forward to trying this!
17
#2 opened about 2 months ago by dnhkng
Excellent work (2.57bpw-tuned) ... and a small kind request
10
#6 opened 2 months ago by dehnhaide
Error on loading
#1 opened 2 months ago by dehnhaide
Nice version (thank you!) with some hiccups! :)
2
#1 opened 2 months ago by dehnhaide
10/10 Best EXL3 model card
3
#3 opened 2 months ago by gghfez