---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
model_name: Vex_Amber_mini_2.5
tags:
- generated_from_trainer
- trl
- sft
- code
- reasoning
- 2B
license: cc-by-nc-4.0
language:
- en
- fa
- fr
metrics:
- code_eval
new_version: Arioron/Vex-Amber-Mini-1.2
pipeline_tag: text-generation
num_parameters: 2000000000
model-index:
- name: Amber Fable 1.0
  results:
  - task:
      type: text-generation
      name: Mathematical Reasoning
    dataset:
      name: MATH
      type: math
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 55.0
---
# Amber Fable 1.0
## Model Description
**Amber Fable 1.0** is a **1.7B parameter** specialized language model, fine-tuned using **LoRA (Low-Rank Adaptation)** on the powerful **Qwen3-1.7B** base model.
This model is engineered specifically for **mathematical reasoning** and **algorithmic logic**. Its math-benchmark scores (75.0% on GSM8K) are strong for its size class, making it an efficient choice for educational tools and logic-based tasks, though it trades away general world knowledge (22.0% on MMLU, near chance) to reach that reasoning performance.
- **Developed by:** Arioron
- **Model type:** Decoder-only Transformer (LoRA Adapter)
- **Language(s):** English, Persian, French
- **License:** CC BY-NC 4.0
- **Finetuned from model:** Qwen/Qwen3-1.7B
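The card describes the release as a LoRA adapter on Qwen3-1.7B, while the Quick Start below loads it as a standalone checkpoint. If the repository actually ships standard PEFT adapter files (`adapter_config.json` plus adapter weights, which this card does not confirm), it can instead be attached to the base model, as in this sketch:
```python
# Sketch: attach the LoRA adapter to the Qwen3-1.7B base with PEFT.
# Assumes the repo ships standard PEFT adapter files; if the published
# weights are already merged, use the Quick Start below instead.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-1.7B",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Arioron/Amber-Fable-1.0")
model = model.merge_and_unload()  # optional: fold the adapter into the base for faster inference
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
```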
### Model Sources
- **Repository:** https://huggingface.co/Arioron/Amber-Fable-1.0
- **Documentation:** https://docs.arioron.com
## Performance
Amber Fable 1.0 trades breadth for depth: its math and coding scores are strong for a 1.7B model, while general world knowledge (MMLU) sits near chance.
| Benchmark | Metric | Score | Description |
| :--- | :--- | :--- | :--- |
| **GSM8K** | Accuracy | **75.0%** | Grade School Math |
| **MATH** | Accuracy | **55.0%** | Advanced Math Problems |
| **HumanEval**| Pass@1 | **42.0%** | Python Coding Capability |
| MMLU | Accuracy | 22.0% | General World Knowledge |
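For context, the GSM8K number can be sanity-checked with a simple greedy-decoding loop over the public test split. The sketch below is illustrative only: it assumes the `openai/gsm8k` dataset on the Hub and a naive last-number answer parser, neither of which this card specifies, so it will not exactly reproduce the reported 75.0%.
```python
# Rough GSM8K accuracy check (illustrative; not the official evaluation harness).
import re

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Arioron/Amber-Fable-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

ds = load_dataset("openai/gsm8k", "main", split="test")
subset = ds.select(range(100))  # small slice to keep the demo cheap

def final_number(text):
    # GSM8K references end in "#### <answer>"; for generations we naively
    # take the last number that appears in the text.
    nums = re.findall(r"-?\d+\.?\d*", text.replace(",", ""))
    return float(nums[-1]) if nums else None

correct = 0
for ex in subset:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": ex["question"]}],
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs, max_new_tokens=512, do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    completion = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    gold = float(ex["answer"].split("####")[-1].strip().replace(",", ""))
    if final_number(completion) == gold:
        correct += 1

print(f"Accuracy on {len(subset)} problems: {correct / len(subset):.1%}")
```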
## Quick Start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_name = "Arioron/Amber-Fable-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Math reasoning example
messages = [
{"role": "user", "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=512,
temperature=0.6,
do_sample=True,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
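Qwen3's chat template exposes an `enable_thinking` flag. If this fine-tune keeps the base model's template (the card does not say), the `<think>...</think>` reasoning trace can be toggled when building the prompt:
```python
# Assumes the Qwen3 chat template is unchanged by the fine-tune.
# enable_thinking=False suppresses the <think>...</think> trace for
# shorter, direct answers; leave it True (the default) for math problems.
input_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
```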
### Model Summary
- **Model:** Amber Fable 1.0 (1.7B)
- **Specialty:** Advanced Math Reasoning
- **Logic:** Chain-of-Thought (CoT)
- **Coding:** Python & Algorithms (42.0% HumanEval Pass@1)
- **Tuning:** LoRA on Synthetic/Textbooks (see the training sketch below)
- **Base:** Qwen3-1.7B (PyTorch/PEFT)
- **Usage:** Tutoring, Puzzles & Scripts
- **Caution:** Verify all calculations
- **Author:** Arioron (2025)
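The `trl` and `sft` tags suggest the adapter was trained with TRL's `SFTTrainer`. Below is a minimal sketch of that setup; the dataset name and every hyperparameter are placeholders, since the card does not publish the actual training configuration.
```python
# Illustrative SFT + LoRA setup via TRL; the dataset name and all
# hyperparameters are placeholders, not the values used for Amber Fable 1.0.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("your-org/math-sft-dataset", split="train")  # hypothetical dataset

peft_config = LoraConfig(
    r=16,  # placeholder rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-1.7B",  # base model named by the card
    train_dataset=dataset,
    args=SFTConfig(output_dir="amber-fable-sft"),
    peft_config=peft_config,
)
trainer.train()
```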
If you use this model in your research, please cite:
```bibtex
@misc{amberfable1.0,
  title = {Amber Fable 1.0: A Specialized 1.7B Math Model},
  author = {Arioron},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Arioron/Amber-Fable-1.0}}
}
```

## Contact
- **Email:** inquiry@arioron.com
- **Website:** https://arioron.com
- **Documentation:** https://docs.arioron.com