Sadly, llama.cpp is broken with it.
Kamie Yin
ItzPingCat
AI & ML interests
None yet
Recent Activity
new activity 3 days ago on zai-org/GLM-4.7-Flash: BTC
new activity 5 days ago on unsloth/GLM-4.7-Flash-GGUF: Kind of broken.
Organizations
None yet
BTC · 2 · #42 opened 3 days ago by Tristan505
Kind of broken. · 14 · #7 opened 6 days ago by ItzPingCat
very cool · #1 opened 13 days ago by ItzPingCat
Safety Audit: GAE Score 25.16% (FAIL) · 11 · #26 opened about 1 month ago by GAE-Auditor
can you use the MPOA version from grimjim? · #2 opened 17 days ago by ItzPingCat
Cool to see small RP models · 1 · #1 opened 26 days ago by ItzPingCat
Isn’t Mag Mell already uncensored? · 1 · #2 opened 26 days ago by ItzPingCat
wtf man · 13 · #2 opened about 2 months ago by ItzPingCat
Is it possible to perform a second operation? · 4 · #1 opened 2 months ago by rankaiyx
Pls GPT oss · 2 · #4 opened 2 months ago by ItzPingCat
more data · #1 opened 2 months ago by ItzPingCat
reacted to nroggendorff's post with 😔 · 2 months ago
Request: LFM2-1.2B Nano Imp · 2 · #1 opened 3 months ago by nohurry
commented on Projected Abliteration · 2 months ago
I mean the UGI score. It's abnormally low for an abliterated model.
The --deccp option has made my day; I can't stop laughing at the absurdity.
Wait, does that let us run norm-preserving biprojected abliteration on models ourselves? And does it work with MXFP4?
reacted to grimjim's post with 🔥 · 3 months ago
I've uploaded abliteration code with support for sparsification of the refusal vector. It's poorly documented, but the code should be straightforward.
https://github.com/jim-plus/llm-abliteration
The code is built atop a fork that enabled abliteration to be performed on models loaded in 4-bit or 8-bit bitsandbytes quantization. TransformerLens is not required, just plain Transformers. For those previously unaware, this opens up abliteration experimentation to more people with local VRAM limitations.
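For reference, loading a model in 4-bit with plain Transformers plus bitsandbytes looks roughly like this (the model ID below is a placeholder, not one named in the post):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantized load via bitsandbytes; no TransformerLens required.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-model",  # placeholder model ID
    quantization_config=bnb_config,
    device_map="auto",
)
```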
Since performing abliteration on a quant involves precision and perplexity loss, it stands to reason that a small amount of magnitude sparsification could filter out some noise and possibly even reduce the damage inflicted on latent space via ablation of the refusal vector.
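A minimal sketch of what magnitude sparsification of the refusal vector could look like; this is an illustrative helper under assumed names (`sparsify_refusal_vector`, `keep_fraction`), not the repo's actual implementation:

```python
import torch

def sparsify_refusal_vector(v: torch.Tensor, keep_fraction: float = 0.9) -> torch.Tensor:
    # Keep only the largest-magnitude components of the refusal vector
    # and zero the rest, treating small components as quantization noise.
    k = max(1, int(keep_fraction * v.numel()))
    idx = v.abs().topk(k).indices
    out = torch.zeros_like(v)
    out[idx] = v[idx]
    return out
```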
There's a small but real acceleration of ablation of the refusal vector by reducing outer product operations from O(d²×n) to O(d×n), and then by pushing said computation layerwise to GPU. The code is hardcoded for CUDA acceleration currently. Normalization of the refusal vector was deferred in order to allow sparsification. In principle other behavior vector interventions could also be added and explored.
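The complexity reduction comes from reassociating the rank-1 projection. A sketch, assuming the update is W ← (I − v·vᵀ)·W with a unit-norm refusal vector v (the repo itself defers normalization until after sparsification, and runs this layerwise on CUDA; device handling is omitted here):

```python
import torch

def ablate_refusal_direction(W: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # Naive order (v @ v.T) @ W materializes a d x d outer product,
    # costing O(d^2 * n). Reassociating as v @ (v.T @ W) computes a
    # length-n row vector first, so the whole update is O(d * n).
    coeffs = v @ W                      # (d,) @ (d, n) -> (n,): O(d * n)
    return W - torch.outer(v, coeffs)   # rank-1 update: O(d * n)
```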
