AdvRahul committed · f82a9c8 · verified · 1 parent: 8da3772

Update README.md

Files changed (1):
  1. README.md +69 -12
README.md CHANGED
@@ -1,14 +1,71 @@
 ---
-license: apache-2.0
-base_model:
-- agentica-org/DeepScaleR-1.5B-Preview
-pipeline_tag: text-generation
+license: mit
+base_model: sky-t/DeepScaleR-1.5B-Preview
 tags:
-- legal
-- quantized
-- llm
-- q4_k_m
-- efficient-inference
-language:
-- en
----
+- reasoning
+- math
+- safety-tuned
+- rl
+- axion
+- mit
+---
+
+# AdvRahul/Axion-1.5B-Reasoning
+
+**A safety-enhanced version of the state-of-the-art DeepScaleR-1.5B mathematical reasoning model.** 🧠
+
+`Axion-1.5B-Reasoning` builds upon the exceptional mathematical capabilities of `sky-t/DeepScaleR-1.5B-Preview`, a model renowned for its top-tier performance on complex reasoning tasks like the AIME competition. This version has been specifically fine-tuned to improve safety, making it suitable for a broader range of applications.
+
+## 🚀 Model Details
+
+* **Model Creator:** AdvRahul
+* **Base Model:** [sky-t/DeepScaleR-1.5B-Preview](https://huggingface.co/sky-t/DeepScaleR-1.5B-Preview)
+* **Fine-tuning Focus:** Enhanced Safety & Harmlessness
+* **Core Capability:** Advanced Mathematical & Logical Reasoning
+* **Architecture:** Qwen 1.5 (derived from the base model's lineage)
+* **License:** **MIT License** (permissive for commercial use)
+
+***
+
+## 📝 Model Description
+
+### Fusing Elite Reasoning with Robust Safety
+
+`Axion-1.5B-Reasoning` was developed to bridge the gap between a pure, high-performance research model and a deployable, application-ready AI. It combines two key attributes:
+
+1. **State-of-the-Art Reasoning:** It inherits the powerful reinforcement-learning-based training of its predecessor, allowing it to solve complex mathematical and logical problems with high accuracy.
+2. **Enhanced Safety Alignment:** The model has undergone **extensive red-team testing and safety-focused fine-tuning**. This process was designed to make the model more robust against generating harmful, biased, or inappropriate content, a critical requirement for user-facing systems.
+
+This makes `Axion-1.5B-Reasoning` an ideal choice for educational tools, AI-powered tutors, data analysis assistants, and any system that requires both high-fidelity logical reasoning and a strong safety profile.
+
+***
+
+## 💻 How to Use
+
+This model can be used directly with the `transformers` library. For optimal results on complex problems, it's best to instruct the model to think step by step.
+
+```python
+from transformers import pipeline
+import torch
+
+# Initialize the text-generation pipeline
+pipe = pipeline(
+    "text-generation",
+    model="AdvRahul/Axion-1.5B-Reasoning",
+    torch_dtype=torch.bfloat16,
+    device_map="auto"
+)
+
+# Prepare the prompt using the Qwen chat template format
+messages = [
+    {"role": "system", "content": "You are a helpful assistant that is an expert in mathematical reasoning."},
+    {"role": "user", "content": "There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? Reason step by step."}
+]
+
+prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+# Generate the response
+outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+
+print(outputs[0]["generated_text"])
+```
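
One caveat on the usage snippet in this diff: by default the `transformers` text-generation pipeline returns the prompt concatenated with the continuation, so the final `print` also echoes the chat-template text. A minimal sketch of recovering just the model's answer follows; the helper name `extract_completion` and the sample strings are illustrative, not part of the model card:

```python
# By default the text-generation pipeline returns prompt + continuation;
# this helper strips the prompt so only the model's answer remains.
def extract_completion(prompt: str, generated_text: str) -> str:
    """Return only the newly generated portion of a pipeline output."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):]
    # Text is already prompt-free (e.g. if return_full_text=False was used)
    return generated_text

# Illustrative strings standing in for the real chat template and model output
prompt = "<|im_start|>user\nHow many trees did the workers plant?<|im_end|>\n<|im_start|>assistant\n"
full_output = prompt + "They planted 21 - 15 = 6 trees."

print(extract_completion(prompt, full_output))  # -> They planted 21 - 15 = 6 trees.
```

Alternatively, passing `return_full_text=False` in the pipeline call asks `transformers` to drop the prompt for you.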