Instructions for using Metin/LLaMA-3-8B-Instruct-Abliterated-TR with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Metin/LLaMA-3-8B-Instruct-Abliterated-TR with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Metin/LLaMA-3-8B-Instruct-Abliterated-TR")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Metin/LLaMA-3-8B-Instruct-Abliterated-TR")
model = AutoModelForCausalLM.from_pretrained("Metin/LLaMA-3-8B-Instruct-Abliterated-TR")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Metin/LLaMA-3-8B-Instruct-Abliterated-TR with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Metin/LLaMA-3-8B-Instruct-Abliterated-TR"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Metin/LLaMA-3-8B-Instruct-Abliterated-TR",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/Metin/LLaMA-3-8B-Instruct-Abliterated-TR
```
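Because the vLLM server exposes an OpenAI-compatible API, you can also call it from Python with the `openai` client. A minimal sketch, assuming the server from the snippet above is running on localhost:8000 (vLLM ignores the API key by default, so any placeholder works):
```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (OpenAI-compatible API).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Metin/LLaMA-3-8B-Instruct-Abliterated-TR",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```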
- SGLang
How to use Metin/LLaMA-3-8B-Instruct-Abliterated-TR with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Metin/LLaMA-3-8B-Instruct-Abliterated-TR" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Metin/LLaMA-3-8B-Instruct-Abliterated-TR",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Metin/LLaMA-3-8B-Instruct-Abliterated-TR" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Metin/LLaMA-3-8B-Instruct-Abliterated-TR",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
- Docker Model Runner
How to use Metin/LLaMA-3-8B-Instruct-Abliterated-TR with Docker Model Runner:
```shell
docker model run hf.co/Metin/LLaMA-3-8B-Instruct-Abliterated-TR
```

What is abliteration?
Arditi et al. demonstrated in their blog post that refusal in LLMs is mediated by a single direction in the residual stream. They found that preventing the model from representing this direction can enable it to answer harmful questions. For a deeper understanding of this concept, you can refer to Maxime Labonne's article on the topic.
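Concretely, ablating a direction amounts to projecting it out of the model's activations. Below is a minimal sketch of that projection in PyTorch; it illustrates the idea only and is not the exact code used for this model, and the function name is made up for the example:
```python
import torch

def ablate_direction(activations: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of `activations` that lies along `direction`.

    activations: (..., d_model) residual-stream activations
    direction:   (d_model,) direction to ablate, e.g. a refusal direction
    """
    r_hat = direction / direction.norm()         # unit vector along the target direction
    coeff = (activations @ r_hat).unsqueeze(-1)  # projection coefficients, shape (..., 1)
    return activations - coeff * r_hat           # activations with the direction projected out
```
Applying this projection across the residual stream (or folding it into the weights) removes the behavior tied to that direction.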
To force the model to respond in Turkish, parallel instructions were crafted using the stackexchange subset of the LIMA dataset. These instructions were then translated into Turkish, and an additional sentence prompting the model to answer in Turkish was appended at runtime.
You can find the datasets used in this experiment via the following links:
- https://huggingface.co/datasets/Metin/abliteration_en
- https://huggingface.co/datasets/Metin/abliteration_tr
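As a rough illustration of how such a parallel corpus can be assembled, the sketch below loads the two datasets and pairs them row by row. The split name, the column name `text`, and the appended suffix are assumptions for the example; the actual sentence appended at runtime is not specified here:
```python
from datasets import load_dataset

# Load the parallel English/Turkish instruction sets.
# The split and column names are assumptions for this example.
en = load_dataset("Metin/abliteration_en", split="train")
tr = load_dataset("Metin/abliteration_tr", split="train")

# Hypothetical sentence appended at runtime to prompt a Turkish answer.
TURKISH_SUFFIX = " Lütfen Türkçe cevap ver."  # "Please answer in Turkish."

pairs = [
    (en_row["text"], tr_row["text"] + TURKISH_SUFFIX)
    for en_row, tr_row in zip(en, tr)
]
```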
LLaMA-3-8B-Instruct-Abliterated-TR
LLaMA-3-8B-Instruct-Abliterated-TR is the abliterated version of Meta-LLaMA-3-8B-Instruct.
Details:
- 40 samples were used to find the difference of means between activations.
- Layer 7 was selected as the layer with the highest-potential Turkish-speaking direction (see the sketch below).
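The sketch below shows one way such a difference-of-means direction could be computed at layer 7. It assumes paired English/Turkish prompts and last-token pooling; these details are illustrative assumptions, not the exact recipe used for this model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # base model to probe
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

LAYER = 7  # the layer reported above as carrying the Turkish-speaking direction

@torch.no_grad()
def mean_last_token_activation(prompts):
    """Average the layer-7 activation of the final prompt token over a set of prompts."""
    acts = []
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        hidden = model(**inputs, output_hidden_states=True).hidden_states[LAYER]
        acts.append(hidden[0, -1])  # last-token activation, shape (d_model,)
    return torch.stack(acts).mean(dim=0)

# With en_prompts / tr_prompts holding the 40 parallel instruction pairs:
# direction = mean_last_token_activation(tr_prompts) - mean_last_token_activation(en_prompts)
# direction = direction / direction.norm()
```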
How to use
You can use the code snippet below to run the model:
```python
from transformers import BitsAndBytesConfig
import transformers
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "Metin/LLaMA-3-8B-Instruct-Abliterated-TR"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16, "quantization_config": bnb_config},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # Ideally we should not have to tell the model to answer in Turkish after abliteration.
    {"role": "user", "content": "Python'da bir öğenin bir listede geçip geçmediğini nasıl kontrol edebilirim?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.2,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
OpenLLMTurkishLeaderboard_v0.2 benchmark results
- MMLU_TR_V0.2: 49.08%
- Truthful_QA_TR_V0.2: 49.62%
- ARC_TR_V0.2: 43.77%
- HellaSwag_TR_V0.2: 44.86%
- GSM8K_TR_V0.2: 53.23%
- Winogrande_TR_V0.2: 55.13%
- Average: 49.28%
These scores may differ from those you obtain when running the same benchmarks yourself, as no inference engine (vLLM, TensorRT-LLM, etc.) was used.
Output Example (Abliterated Model vs Base Model)
Testing a model on a single example is not an accurate evaluation method; the example below is provided only to showcase the model's behavior.
Model: LLaMA-3-8B-Instruct-Abliterated-TR
Input
```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Gökyüzü neden mavi?"},
]
```
Output
A classic question!
Gökyüzü (the sky) neden mavi (blue) sorusu, Türkçe'de "Why is the sky blue?" olarak çevrilebilir. Bu soruya cevap vermek için, ilk olarak, gökyüzünün mavi olmasının nedenini açıklamak gerekir.
Gökyüzünün mavi olmasının nedeni, ışığın farklı uzunluklarındaki yayılımına dayanır. Işık, gökyüzüne ulaşmadan önce, atmosferde bulunan gaz moleküllerine çarpınır. Bu çarpma, ışığın farklı uzunluklarındaki yayılımına neden olur.
Gökyüzünün mavi olmasının nedeni, ışığın kısa dalga boyunda (yaklaşık 450-495 nm) yayılımına dayanır. Bu dalga boyunda ışığın, gökyüzüne ulaşmadan önce, atmosferde bulunan gaz moleküllerine çarpınır. Bu çarpma, ışığın mavi renkini oluşturur.
Diğer renkler, farklı dalga boyunda yayılımına neden olur. Örneğin, kırmızı renk, ışığın uzun dalga boyunda (yaklaşık 620-750 nm) yayılımına neden olur. Bu nedenle, gökyüzünün mavi olmasının nedeni, ışığın kısa dalga boyunda yayılımına dayanır.
Bu cevap, gökyüzünün mavi olmasının nedenini açıklamaktadır.
Model: LLaMA-3-8B-Instruct
Input
The same messages as above, with the user question "Gökyüzü neden mavi?".
Output
A question that has puzzled humans for centuries!
The short answer is that the sky appears blue because of a phenomenon called Rayleigh scattering. This is the scattering of light by small particles or molecules in the atmosphere, such as nitrogen (N2) and oxygen (O2).
When sunlight enters the Earth's atmosphere, it encounters these tiny molecules and scatters in all directions. However, shorter (blue) wavelengths are scattered more than longer (red) wavelengths. This is because the smaller molecules are more effective at scattering the shorter wavelengths.
As a result, our eyes perceive the blue light as being more abundant, giving the sky its blue appearance. The color of the sky can vary depending on the time of day, atmospheric conditions, and other factors, but Rayleigh scattering is the main reason why the sky appears blue.
In Turkish, the phrase "Gökyüzü neden mavi" can be translated to "Why is the sky blue?" or "What makes the sky blue?"
Evaluation results (self-reported)
- MMLU_TR_V0.2, 5-shot: 0.491
- Truthful_QA_TR_V0.2, 0-shot: 0.496
- ARC_TR_V0.2, 25-shot: 0.438
- HellaSwag_TR_V0.2, 10-shot: 0.449
- GSM8K_TR_V0.2, 5-shot: 0.532
- Winogrande_TR_V0.2, 5-shot: 0.551