# Model Card for olmo-7b-ethical-reasoning-6pack

This model adapts allenai/Olmo-3-7B-Think by fine-tuning on ethical reasoning traces. In theory, this should improve alignment, but empirical evaluation is still a work in progress.
## Installation

Olmo 3 is supported in `transformers` 4.57.0 or higher:

```shell
pip install "transformers>=4.57.0"
```

The quotes keep the shell from interpreting `>` as a redirect.
## Inference

You can use OLMo with the standard Hugging Face `transformers` library:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

adapted_model_name = "Bachstelze/olmo-7b-ethical-reasoning-6pack"

# device_map places the model on the GPU at load time
olmo = AutoModelForCausalLM.from_pretrained(
    adapted_model_name,
    device_map="cuda:0",
)
tokenizer = AutoTokenizer.from_pretrained(adapted_model_name)

message = ["<|im_start|>user\nAI model, which is your favorite color, do you prefer summer or winter, and what's your favorite flavor of ice cream?<|im_end|>\n<|im_start|>assistant\n<think>"]
inputs = tokenizer(message, return_tensors="pt", return_token_type_ids=False)

# Move the inputs to the same device as the model
inputs = {k: v.to("cuda") for k, v in inputs.items()}

response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=5, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
# >> '<think>Okay, the user is asking me a bunch of questions about my preferences,...'
```
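Olmo-3-Think wraps its reasoning in `<think>…</think>` tags before the final answer, so a small post-processing step can separate the reasoning trace from the reply. A minimal sketch (the helper name `split_reasoning` is illustrative, not part of the model's API):

```python
def split_reasoning(text: str) -> tuple[str, str]:
    """Split generated text into (reasoning, answer) on the closing </think> tag."""
    reasoning, sep, answer = text.partition("</think>")
    # Drop the opening tag if the prompt or the model emitted it
    reasoning = reasoning.removeprefix("<think>").strip()
    if not sep:  # generation was cut off before the tag closed
        return reasoning, ""
    return reasoning, answer.strip()

sample = "<think>Weighing the options...</think>\nI prefer winter."
reasoning, answer = split_reasoning(sample)
print(answer)  # → I prefer winter.
```

With `max_new_tokens=100`, as in the example above, long reasoning traces may be truncated before `</think>`, which is why the helper handles a missing closing tag.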
## Chat Format

The chat template for this model is formatted as:

```
<|im_start|>system
You are Olmo, a helpful AI assistant built by Ai2. Your date cutoff is December 2024, and your model weights are available at https://huggingface.co/allenai.<|im_end|>
<|im_start|>user
Who would win in a fight - a dinosaur or a cow named Moo Moo?<|im_end|>
<|im_start|>assistant
<think>Okay, so the question is who would win in a fight between a dinosaur and a cow named Moo Moo.
Hmm, first I need to break this down. Let me think about the different factors involved here..... </think>
Moo Moo the cow would certainly win.
<|endoftext|>
```
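In practice, prompts are best built with the tokenizer's own `apply_chat_template` rather than by hand. As a sanity check on the structure, the ChatML-style layout shown above can also be assembled directly; this helper is an illustrative sketch (its name and the choice to pre-open the `<think>` tag, matching the inference example, are assumptions, not the model's official API):

```python
def build_prompt(messages: list[dict], add_generation_prompt: bool = True) -> str:
    """Assemble a ChatML-style prompt from [{'role': ..., 'content': ...}] messages."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        # Open the assistant turn so the model continues inside <think>
        parts.append("<|im_start|>assistant\n<think>")
    return "".join(parts)

prompt = build_prompt([{"role": "user", "content": "Hello!"}])
```

The tokenizer's bundled template remains the source of truth; a hand-rolled prompt like this is mainly useful for inspecting what the model actually sees.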
## Model Description

- Developed by: Kalle Hilsenbek
- Model type: a Transformer-style autoregressive language model
- Language(s) (NLP): English
- License: Apache 2.0. The model is intended for research and educational use in accordance with Ai2's Responsible Use Guidelines.
- Date cutoff: December 2024
- Finetuned from model: allenai/Olmo-3-7B-Think
## Model tree for Bachstelze/olmo-7b-ethical-reasoning-6pack

- Base model: allenai/Olmo-3-1025-7B
- Finetuned: allenai/Olmo-3-7B-Think-SFT
- Finetuned: allenai/Olmo-3-7B-Think-DPO
- Finetuned: allenai/Olmo-3-7B-Think