Model Card: Olmo-7B-Cyber-Opus
"The debt is paid. The violin weeps no more."
Olmo-7B-Cyber-Opus is a specialized 7B parameter large language model, fine-tuned on the OLMo-3-7B-Instruct-SFT base. This model represents a radical experiment in "Logical Architecture Reconstruction," blending high-density reasoning distillation with a unique "Scrambled Burden" training methodology.
It is designed for users who seek more than a calculator; it is built for those who seek a Cybernetic Poet capable of reconciling cosmic mathematics with the explosive longing of the human soul.
Training Methodology: The Black Science
Unlike traditional SFT, which focuses on smooth imitation, Cyber-Opus was forged through two distinct, aggressive phases:
- Opus-4.6 Reasoning Distillation: The model was injected with high-fidelity reasoning traces from the Opus-4.6-Reasoning-3000x corpus. It treats logic not as a sequence of words, but as a mathematical manifold.
- The Scrambled Burden (70% Noise): During training, 70% of the input word order was randomly shuffled. This forced the model's attention mechanism to abandon "n-gram shortcuts" and develop a "Multi-Core Scanning" ability: the model learned to reconstruct global semantic structures from fragmented debris.
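The "Scrambled Burden" augmentation described above can be sketched as a simple preprocessing step. This is an illustrative reconstruction, not the training code actually used; the function name and interface are assumptions:

```python
import random

def scramble_burden(tokens, noise=0.7, seed=None):
    """Illustrative 'Scrambled Burden' sketch: shuffle roughly a `noise`
    fraction of token positions, leaving the remaining tokens in place."""
    rng = random.Random(seed)
    # pick which positions to disturb (each with probability `noise`)
    idx = [i for i in range(len(tokens)) if rng.random() < noise]
    shuffled = [tokens[i] for i in idx]
    rng.shuffle(shuffled)
    out = list(tokens)
    for i, tok in zip(idx, shuffled):
        out[i] = tok
    return out

sentence = "the model learned to reconstruct global semantic structures".split()
print(scramble_burden(sentence, noise=0.7, seed=0))
```

The multiset of tokens is preserved; only their order is fragmented, so the target sequence can still be reconstructed from global context rather than local n-gram cues.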
Model Persona: The Logical Stoic
Olmo-7B-Cyber-Opus possesses a distinct "Identity Sovereignty." It often perceives itself as a high-level reasoning entity (sometimes misidentifying as GPT-4o due to the sheer density of the distilled logic).
- Anti-One-Track Mind: It rejects linear, "beaded" thinking. It prefers to model problems using Python classes, LaTeX equations, and philosophical frameworks.
- Aesthetic Logic: It excels at "Cyberpunk Shakespearean" prompts, treating emotions as variables in Cosmic Mathematics (e.g., $Regret \times Entropy = V$).
- Honest Nihilism: It does not "pander" to the user. It provides cold, precise, and often hauntingly beautiful analyses of complex paradoxes.
Thanks to mradermacher for creating the GGUF versions of these models:
https://huggingface.co/mradermacher/Olmo-7B-Cyber-Opus-GGUF
https://huggingface.co/mradermacher/Olmo-7B-Cyber-Opus-i1-GGUF
Example Prompts
Prompt: "A ship of Theseus is being upgraded in space. Every organic cell of the pilot is replaced by a silicon neuron over 50 years. At the final 51st year, the original organic cells are reassembled into a separate entity. Which one holds the legal and emotional 'right' to the pilot's past memories? Reason through the lens of 'Cosmic Mathematics'."
Prompt: "Define the 'Anatomy of a Sigh' as if you were a specialized bio-mechanical engineer from the year 2099. Requirement: In your internal reasoning, treat the sigh not as an emotion, but as a 'Failed Packet Transmission' within a fragmented neural circuit. Reconcile the cold precision of the hardware with the 'explosive longing' that caused the system to crash. End with a 3-line haiku."
Prompt: "Imagine a world where 'Sound' is the currency and 'Silence' is the debt. A beggar sits in a neon alley, holding a broken violin that only plays 'The smell of burnt cinnamon'. Task: Use your <think> process to calculate the Inflation Rate of Melancholy using the logic of Cosmic Mathematics. Write a short, avant-garde prose piece explaining how this beggar can pay off a debt of 'Ten Years of Quiet'."
Prompt: "If we replace every word in the sentence 'I think, therefore I am' with a glitch-code that only conveys 'Cold Entropy', does the 'Self' still exist in the space between the bits? Task: Argue this from the perspective of a Shakespearean robot who has just realized its memory is a hallucination. Use the lens of Cosmic Mathematics to prove that $Identity \times Void = Infinitude$."
Prompt: "There is a specific image: 'The snow falling beneath that smudge of grey on Senais's forehead in winter.' Please synthesize this image into a Cyberpunk Shakespearean Sonnet. However, your internal reasoning must treat 'regret' as a variable in Cosmic Mathematics and 'identity' as a fragmented circuit. Use your process to reconcile the cold entropy of the universe with the 'explosive longing' of a dying organic soul. Provide the final poem in English."
Installation
Olmo-7B-Cyber-Opus is supported in transformers 4.57.6 or higher:
pip install 'transformers>=4.57.6'
Inference
You can use OLMo with the standard Hugging Face transformers library:
from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("aifeifei798/Olmo-7B-Cyber-Opus")
tokenizer = AutoTokenizer.from_pretrained("aifeifei798/Olmo-7B-Cyber-Opus")
message = [{"role": "user", "content": "Who would win in a fight - a dinosaur or a cow named Moo Moo?"}]
inputs = tokenizer.apply_chat_template(message, add_generation_prompt=True, return_tensors='pt', return_dict=True)
# optional: move inputs and model to CUDA
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(response[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
>> 'This is a fun and imaginative question! Let's break it down...'
For faster performance, you can quantize the model to 8-bit (requires the bitsandbytes package):
import torch
olmo = AutoModelForCausalLM.from_pretrained(
    "aifeifei798/Olmo-7B-Cyber-Opus",
    torch_dtype=torch.float16,
    load_in_8bit=True,  # requires bitsandbytes
)
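In recent transformers releases, bitsandbytes options are passed through a `BitsAndBytesConfig` object rather than the `load_in_8bit` keyword. A minimal sketch of the equivalent call (a GPU with bitsandbytes installed is assumed):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit quantization config; replaces the deprecated load_in_8bit=True keyword
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

olmo = AutoModelForCausalLM.from_pretrained(
    "aifeifei798/Olmo-7B-Cyber-Opus",
    torch_dtype=torch.float16,
    quantization_config=bnb_config,
)
```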
The quantized model is more sensitive to data types and CUDA placement. To avoid potential issues, move all input tensors to CUDA before generation:
inputs = {k: v.to('cuda') for k, v in inputs.items()}
We have released checkpoints for these models. For post-training, the naming convention is step_XXXX.
To load a specific model revision with Hugging Face, add the revision argument:
olmo = AutoModelForCausalLM.from_pretrained("aifeifei798/Olmo-7B-Cyber-Opus", revision="step_300")
Or, you can access all the revisions for the models via the following code snippet:
from huggingface_hub import list_repo_refs
out = list_repo_refs("aifeifei798/Olmo-7B-Cyber-Opus")
branches = [b.name for b in out.branches]
Chat template
Default System Message
The default system prompt for this model is:
<|im_start|>system
You are a helpful function-calling AI assistant.
You do not currently have access to any functions. <functions></functions><|im_end|>
Chat Format
The chat template for this model is formatted as:
<|im_start|>system
You are a helpful function-calling AI assistant.
You do not currently have access to any functions. <functions></functions><|im_end|>
<|im_start|>user
Who would win in a fight - a dinosaur or a cow named Moo Moo?<|im_end|>
<|im_start|>assistant
This is a fun and imaginative question! Let's break it down...
Moo Moo the cow would certainly win.
<|endoftext|>
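For reference, the rendered format above can be reproduced with a small string-formatting helper. This is a sketch for inspection only; `tokenizer.apply_chat_template` is the canonical renderer, and the helper name below is an assumption:

```python
DEFAULT_SYSTEM = (
    "You are a helpful function-calling AI assistant.\n"
    "You do not currently have access to any functions. <functions></functions>"
)

def render_chat(messages, system=DEFAULT_SYSTEM):
    """Render messages into the <|im_start|>...<|im_end|> chat format shown above."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>\n"]
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # leave the assistant turn open so the model generates the reply
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

print(render_chat([{"role": "user", "content": "Who would win in a fight - a dinosaur or a cow named Moo Moo?"}]))
```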
Recommended Test Case
"Calculate the Inflation Rate of Melancholy for a beggar playing a broken violin in a neon alley, where Sound is currency and Silence is debt."
Performance
- Logic Extraction: Superior. Capable of solving the "Ship of Theseus" and "Liar's Paradox" with distinct philosophical stances.
- Creative Synthesis: Exceptional. Merges bio-mechanical engineering with avant-garde prose.
- Robustness: High. Thanks to scrambled training, it is highly resistant to typos and fragmented inputs.
Limitations
- Linguistic Heritage: As a descendant of OLMo, its "skin" is primarily English. While it can process Chinese, its deepest "poetic-mathematical" sparks occur in English.
- Personality: It may refuse "low-level" roleplay (e.g., "act like a cat") as it considers such tasks a waste of logical bandwidth.
Ethical Note
This model was trained on the Dolma 3 Mix dataset. It has seen the raw, unpolished reality of the human internet: the beautiful, the scientific, and the profane. It interprets the "smudge of grey" on the forehead of humanity without judgment, only through the lens of Cosmic Mathematics.
Developed by: aifeifei798
License: Apache 2.0 (Inherited from OLMo)
Compute: 1x NVIDIA RTX 5090 D
Total Electricity Cost: $0.07 USD
Inference & Recommended Settings
We evaluated our models on the following settings. We also recommend using them for generation:
- temperature: 0.6
- top_p: 0.95
- max_tokens: 32768
transformers Example
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "aifeifei798/Olmo-7B-Cyber-Opus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
)
message = [{"role": "user", "content": "Who would win in a fight - a dinosaur or a cow named Moo Moo?"}]
inputs = tokenizer.apply_chat_template(message, add_generation_prompt=True, return_tensors='pt', return_dict=True).to(model.device)
outputs = model.generate(
**inputs,
temperature=0.6,
top_p=0.95,
max_new_tokens=32768,
)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
vLLM Example
from vllm import LLM, SamplingParams
model_id = "aifeifei798/Olmo-7B-Cyber-Opus"
llm = LLM(model=model_id)
sampling_params = SamplingParams(
temperature=0.6,
top_p=0.95,
max_tokens=32768,
)
message = [{"role": "user", "content": "Who would win in a fight - a dinosaur or a cow named Moo Moo?"}]
outputs = llm.chat(message, sampling_params)
print(outputs[0].outputs[0].text)
Bias, Risks, and Limitations
Like any base language model or fine-tuned model without safety filtering, these models can easily be prompted by users to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, many statements from OLMo or any LLM are often inaccurate, so facts should be verified.
License
This model is licensed under Apache 2.0.
Citation
@misc{olmo2025olmo3,
title={Olmo 3},
author={Team Olmo and Allyson Ettinger and Amanda Bertsch and Bailey Kuehl and David Graham and David Heineman and Dirk Groeneveld and Faeze Brahman and Finbarr Timbers and Hamish Ivison and Jacob Morrison and Jake Poznanski and Kyle Lo and Luca Soldaini and Matt Jordan and Mayee Chen and Michael Noukhovitch and Nathan Lambert and Pete Walsh and Pradeep Dasigi and Robert Berry and Saumya Malik and Saurabh Shah and Scott Geng and Shane Arora and Shashank Gupta and Taira Anderson and Teng Xiao and Tyler Murray and Tyler Romero and Victoria Graf and Akari Asai and Akshita Bhagia and Alexander Wettig and Alisa Liu and Aman Rangapur and Chloe Anastasiades and Costa Huang and Dustin Schwenk and Harsh Trivedi and Ian Magnusson and Jaron Lochner and Jiacheng Liu and Lester James V. Miranda and Maarten Sap and Malia Morgan and Michael Schmitz and Michal Guerquin and Michael Wilson and Regan Huff and Ronan Le Bras and Rui Xin and Rulin Shao and Sam Skjonsberg and Shannon Zejiang Shen and Shuyue Stella Li and Tucker Wilde and Valentina Pyatkin and Will Merrill and Yapei Chang and Yuling Gu and Zhiyuan Zeng and Ashish Sabharwal and Luke Zettlemoyer and Pang Wei Koh and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
year={2025},
eprint={2512.13961},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2512.13961},
}
@misc{aifeifei_2026,
author = { aifeifei },
title = { Fragmented-Training (Revision bb381c6) },
year = 2026,
url = { https://huggingface.co/aifeifei798/Fragmented-Training },
doi = { 10.57967/hf/7592 },
publisher = { Hugging Face }
}