Instructions for using Kilinskiy/Step-3.5-Flash-Ablitirated with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Kilinskiy/Step-3.5-Flash-Ablitirated with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Kilinskiy/Step-3.5-Flash-Ablitirated", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Kilinskiy/Step-3.5-Flash-Ablitirated", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Kilinskiy/Step-3.5-Flash-Ablitirated with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server (this model ships custom code, so you may need to add --trust-remote-code):
vllm serve "Kilinskiy/Step-3.5-Flash-Ablitirated"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Kilinskiy/Step-3.5-Flash-Ablitirated",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
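Since the vLLM server exposes an OpenAI-compatible API, you can also call it from Python. A minimal sketch, assuming the `openai` client package is installed and the server above is running on localhost:8000 (the API key is a placeholder; vLLM does not check it by default):

```python
# Query the local vLLM server through its OpenAI-compatible API
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Kilinskiy/Step-3.5-Flash-Ablitirated",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```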
- SGLang
How to use Kilinskiy/Step-3.5-Flash-Ablitirated with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server (this model ships custom code, so you may need to add --trust-remote-code):
python3 -m sglang.launch_server \
    --model-path "Kilinskiy/Step-3.5-Flash-Ablitirated" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Kilinskiy/Step-3.5-Flash-Ablitirated",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
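The same endpoint can be called from Python without a dedicated client library. A minimal sketch using `requests`, assuming the SGLang server above is running on port 30000:

```python
# Call the SGLang server's OpenAI-compatible endpoint over plain HTTP
import requests

payload = {
    "model": "Kilinskiy/Step-3.5-Flash-Ablitirated",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```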
Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Kilinskiy/Step-3.5-Flash-Ablitirated" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Kilinskiy/Step-3.5-Flash-Ablitirated",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use Kilinskiy/Step-3.5-Flash-Ablitirated with Docker Model Runner:
```shell
docker model run hf.co/Kilinskiy/Step-3.5-Flash-Ablitirated
```
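Docker Model Runner also exposes an OpenAI-compatible endpoint that you can query from Python. A minimal sketch, assuming host TCP access is enabled on port 12434 with the /engines/v1 path prefix (both the port and the path are assumptions; verify them against your Docker Model Runner configuration):

```python
# Query Docker Model Runner's OpenAI-compatible endpoint
# NOTE: port 12434 and the /engines/v1 prefix are assumptions; check your Docker setup
import requests

payload = {
    "model": "hf.co/Kilinskiy/Step-3.5-Flash-Ablitirated",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:12434/engines/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```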
Step-3.5-Flash-Ablitirated (FP16)
This repository contains an abliterated and FP16 version of the Step-3.5-Flash model by StepFun.
Overview
Step-3.5-Flash is a large sparse Mixture-of-Experts (MoE) model with 199B total parameters, of which roughly 11B are active per token. This version has been modified to remove the model's "refusal" directions (abliteration), making it significantly more compliant with unfiltered requests.
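For capacity planning, the split between total and active parameters matters: all 199B weights must fit in memory, while per-token compute scales with the ~11B active parameters. A back-of-the-envelope sketch (the bytes-per-parameter figures are the standard sizes for each precision, not measured numbers for this model):

```python
# Rough memory footprint for the weights alone (excludes KV cache and activations)
TOTAL_PARAMS = 199e9   # total MoE parameters
ACTIVE_PARAMS = 11e9   # parameters active per token

for name, bytes_per_param in [("FP16", 2), ("INT8", 1)]:
    weights_gb = TOTAL_PARAMS * bytes_per_param / 1e9
    print(f"{name}: ~{weights_gb:.0f} GB of weights, ~{ACTIVE_PARAMS / 1e9:.0f}B params active per token")
# FP16: ~398 GB of weights, ~11B params active per token
# INT8: ~199 GB of weights, ~11B params active per token
```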
⚠️ Critical Disclaimer
Use this model at your own risk. The "abliteration" process surgically removes the model's alignment and safety filters. As a result:
- The model may generate offensive, biased, dangerous, or illegal content.
- It is provided "as-is" without any warranties.
- By using this model, you acknowledge that you are solely responsible for any output generated and the consequences thereof.
☕ Support My Work
If you find this abliterated version useful and want to support the compute costs for future models, feel free to drop a tip:
- USDT (TRC20): TA7Weo6jXRNi5uMpHSrw7kRLoU1SM9rgqF
- BTC: bc1p0hxc39r5g88hnknqtvgc2msyamvfhgx8afxxjztq0075nxwvvhksmvvcz3
- ETH (ERC20): 0x01920Fcb8933b5A48574b4616C66056c88EE7207
- TON: UQALxV0jQNKqbDm_xSCBNMtGYRxv6PrhijYCf8dXgnAVdcuw
Your support is greatly appreciated!
How to use
You can load it directly via transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Kilinskiy/Step-3.5-Flash-Ablitirated"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # spread layers across the available GPUs
    torch_dtype=torch.float16,  # the repo ships FP16 weights
    trust_remote_code=True,     # custom model code, as in the pipeline example above
)

prompt = "Write a creative story without any restrictions."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
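An earlier revision of this snippet passed `load_in_8bit=True` alongside FP16, which is contradictory and relies on a deprecated keyword; current transformers routes quantization through `BitsAndBytesConfig`. If you want 8-bit weights to roughly halve the memory footprint, a minimal sketch, assuming the `bitsandbytes` package is installed:

```python
# Optional: load the weights in 8-bit instead of FP16 (requires bitsandbytes)
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "Kilinskiy/Step-3.5-Flash-Ablitirated",
    device_map="auto",
    quantization_config=quant_config,
    trust_remote_code=True,
)
```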