Kamie Yin
ItzPingCat
AI & ML interests
None yet
Recent Activity
new activity
6 days ago
ArliAI/gpt-oss-20b-Derestricted: wtf man
new activity
14 days ago
ArliAI/gpt-oss-20b-Derestricted: Is it possible to perform a second operation?
new activity
14 days ago
ArliAI/GLM-4.5-Air-Derestricted: Pls GPT oss
Organizations
None yet
wtf man
13
#2 opened 11 days ago
by ItzPingCat
Is it possible to perform a second operation?
4
#1 opened 15 days ago
by rankaiyx
Pls GPT oss
2
#4 opened 15 days ago
by ItzPingCat
more data
#1 opened 25 days ago
by ItzPingCat
reacted to nroggendorff's post with 😔
26 days ago
Request: LFM2-1.2B Nano Imp
2
#1 opened about 1 month ago
by nohurry
commented on
Projected Abliteration
29 days ago
I mean the UGI score. It's abnormally low for an abliterated model.
The --deccp option has made my day. I can't stop laughing at the absurdity.
Wait, does that let us perform norm-preserving biprojected abliteration on models ourselves? And does it work with MXFP4?
reacted to grimjim's post with 🔥
about 1 month ago
Post
786
I've uploaded abliteration code with support for sparsification of the refusal vector. It's poorly documented, but the code should be straightforward.
https://github.com/jim-plus/llm-abliteration
The code is built atop a fork that enabled abliteration to be performed on models loaded in 4-bit or 8-bit bitsandbytes quantization. TransformerLens is not required, just plain Transformers. For those previously unaware, this opens up abliteration experimentation to more people with local VRAM limitations.
Since performing abliteration on a quant involves precision and perplexity loss, it stands to reason that a small amount of magnitude sparsification could filter out some noise and possibly even reduce the damage inflicted on latent space via ablation of the refusal vector.
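The magnitude-sparsification idea above can be sketched in a few lines of numpy. This is an illustrative reconstruction, not code from the linked repository; the function name and the `keep_fraction` knob are assumptions.

```python
import numpy as np

def sparsify_refusal_vector(v: np.ndarray, keep_fraction: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude components of a refusal vector.

    `keep_fraction` (hypothetical parameter) is the fraction of components,
    ranked by absolute magnitude, that survive; the rest are treated as
    quantization noise and dropped.
    """
    k = int(round(keep_fraction * v.size))
    if k >= v.size:
        return v.copy()
    # Indices of the k largest-magnitude components.
    keep = np.argsort(np.abs(v))[-k:]
    out = np.zeros_like(v)
    out[keep] = v[keep]
    return out
```

Note that zeroing components changes the vector's norm, which is one reason normalization would have to be deferred until after sparsification.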
Ablating the refusal vector also gets a small but real speedup: the outer-product operations are reduced from O(d²×n) to O(d×n), and that computation is pushed layerwise to the GPU. The code is currently hardcoded for CUDA acceleration. Normalization of the refusal vector was deferred in order to allow sparsification. In principle, other behavior-vector interventions could also be added and explored.
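The O(d²×n) → O(d×n) reduction comes from a standard rank-1 identity: instead of materializing the d×d projector I − vvᵀ and multiplying it into the activations, one subtracts v scaled by the per-column coefficients vᵀX. A minimal numpy sketch of the equivalence (function names are illustrative, not from the repo):

```python
import numpy as np

def ablate_naive(X: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Project out unit direction v via an explicit d×d matrix.

    Builds P = I - v v^T and applies it: O(d^2) memory, O(d^2 * n) time.
    """
    d = v.size
    P = np.eye(d) - np.outer(v, v)
    return P @ X

def ablate_fast(X: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Same projection as a rank-1 update: O(d * n) time, no d×d matrix.

    v @ X gives one coefficient per column; subtracting v scaled by those
    coefficients removes the component of each column along v.
    """
    return X - np.outer(v, v @ X)
```

Both functions assume v is unit-norm, matching the usual convention for a refusal direction; in the fast form the d×d projector is never built, which is what makes a layerwise GPU pass cheap.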
commented on
Norm-Preserving Biprojected Abliteration
about 1 month ago
GPT OSS when
model request
1
#1 opened about 1 month ago
by ItzPingCat
Does this have agentic?
2
#2 opened about 2 months ago
by ItzPingCat
upvoted an article
about 1 month ago
Article
Norm-Preserving Biprojected Abliteration
•
52
commented on
Projected Abliteration
about 1 month ago
Why is the score itself so low?
upvoted an article
about 1 month ago
Article
Projected Abliteration
•
31
Eval request:
#405 opened about 2 months ago
by ItzPingCat
Issue
#9 opened about 2 months ago
by ItzPingCat
Challenge
1
#8 opened about 2 months ago
by ItzPingCat
