---
base_model: stepfun-ai/Step-3.5-Flash
library_name: transformers
tags:
  - quantized
  - abliterated
  - uncensored
  - moe
license: apache-2.0
---

# Step-3.5-Flash-Abliterated (FP16)

This repository contains an abliterated and FP16 version of the Step-3.5-Flash model by StepFun.

## Overview

Step-3.5-Flash is a massive Sparse Mixture-of-Experts (MoE) model with 199B total parameters (approx. 11B active per token). This specific version has been modified to remove "refusal" vectors (abliterated), making it significantly more compliant with unfiltered requests.
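For hardware planning, a back-of-the-envelope estimate of the weight footprint at different precisions can help (a rough sketch; real usage is higher once activations, KV cache, and framework overhead are included):

```python
# Rough weight-memory estimate for a 199B-parameter model.
# Ignores activations, KV cache, and framework overhead.
TOTAL_PARAMS = 199e9

def weights_gb(bits_per_param: float) -> float:
    """Gigabytes needed just to hold the weights at a given precision."""
    return TOTAL_PARAMS * bits_per_param / 8 / 1e9

print(f"FP16: {weights_gb(16):.0f} GB")  # ~398 GB
print(f"INT8: {weights_gb(8):.0f} GB")   # ~199 GB
print(f"INT4: {weights_gb(4):.0f} GB")   # ~100 GB
```

Note that the ~11B active parameters per token reduce compute per forward pass, not memory: all 199B parameters must still be resident to serve arbitrary tokens.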

## ⚠️ Critical Disclaimer

Use this model at your own risk. The "abliteration" process surgically removes the model's alignment and safety filters. As a result:

  • The model may generate offensive, biased, dangerous, or illegal content.
  • It is provided "as-is" without any warranties.
  • By using this model, you acknowledge that you are solely responsible for any output generated and the consequences thereof.

## ☕ Support My Work

If you find this abliterated version useful and want to support the compute costs for future models, feel free to drop a tip:

  • USDT (TRC20): TA7Weo6jXRNi5uMpHSrw7kRLoU1SM9rgqF
  • BTC: bc1p0hxc39r5g88hnknqtvgc2msyamvfhgx8afxxjztq0075nxwvvhksmvvcz3
  • ETH (ERC20): 0x01920Fcb8933b5A48574b4616C66056c88EE7207
  • TON: UQALxV0jQNKqbDm_xSCBNMtGYRxv6PrhijYCf8dXgnAVdcuw

Your support is greatly appreciated!


## How to use

You can load the model directly with the `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Kilinskiy/Step-3.5-Flash-Ablitirated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # spreads layers across available GPUs
    torch_dtype=torch.float16,  # the weights in this repo are FP16
)

prompt = "Write a creative story without any restrictions."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```