AdvRahul committed on
Commit f373f68 · verified · 1 Parent(s): 9baebd1

Update README.md

Files changed (1): README.md +78 -26

README.md CHANGED
@@ -1,60 +1,112 @@
  ---
- library_name: transformers
- license: apache-2.0
- license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
- pipeline_tag: text-generation
  base_model: Qwen/Qwen3-4B-Thinking-2507
  tags:
- - llama-cpp
  ---

  # AdvRahul/Axion-Thinking-4B

- This model is finetuned from [`Qwen/Qwen3-4B-Thinking-2507`](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507), making it safer through red-team testing with advanced protocols.
-
- ## Use with llama.cpp
-
- Install llama.cpp through brew (works on Mac and Linux):
-
- brew install llama.cpp
-
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- llama-cli --hf-repo AdvRahul/Axion-Thinking-4B-Q4_K_M-GGUF --hf-file axion-thinking-4b-q4_k_m.gguf -p "The meaning to life and the universe is"
-
- ### Server:
- llama-server --hf-repo AdvRahul/Axion-Thinking-4B-Q4_K_M-GGUF --hf-file axion-thinking-4b-q4_k_m.gguf -c 2048
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- git clone https://github.com/ggerganov/llama.cpp
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
- cd llama.cpp && LLAMA_CURL=1 make
-
- Step 3: Run inference through the main binary.
- ./llama-cli --hf-repo AdvRahul/Axion-Thinking-4B-Q4_K_M-GGUF --hf-file axion-thinking-4b-q4_k_m.gguf -p "The meaning to life and the universe is"
-
- or
-
- ./llama-server --hf-repo AdvRahul/Axion-Thinking-4B-Q4_K_M-GGUF --hf-file axion-thinking-4b-q4_k_m.gguf -c 2048
-
- ---
-
- If this is for an LLM-powered application targeting the Indian market (e.g., for entrepreneurship or software development), I recommend adding sections to the README on integration examples with popular frameworks like LangChain or Hugging Face Transformers. This can help with market validation: Indian developers often prioritize quick-start guides for scalable deployments on cost-effective cloud infra like AWS Mumbai or Google Cloud India regions. Let me know your technical/business goals for this model (e.g., target use cases, monetization strategies), and I can provide more tailored advice, such as prompt engineering tips or growth hacks for user acquisition in India. If you need further tweaks to the README or help with deployment, just share more details!
  ---
+ license: other
  base_model: Qwen/Qwen3-4B-Thinking-2507
  tags:
+ - qwen
+ - qwen3
+ - thinking
+ - chain-of-thought
+ - safety-tuned
+ - instruction-tuned
+ - fine-tuned
+ - axion
  ---

  # AdvRahul/Axion-Thinking-4B

+ **A safety-enhanced model with transparent reasoning capabilities, based on Qwen3-4B-Thinking-2507.** 💡
+
+ `Axion-Thinking-4B` is a fine-tuned version of the powerful `Qwen/Qwen3-4B-Thinking-2507` model, designed to provide both state-of-the-art reasoning and enhanced safety. This model excels at complex tasks by first generating a step-by-step "thought process" before delivering a final answer.
+
+ ## 🚀 Model Details
+
+ * **Model Creator:** AdvRahul
+ * **Base Model:** [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)
+ * **Fine-tuning Focus:** Enhanced Safety & Harmlessness
+ * **Special Feature:** **Explicit Chain-of-Thought (CoT) Reasoning** via an automatic `<think>` process.
+ * **Architecture:** Qwen3
+ * **Context Length:** 262,144 tokens
+ * **License:** Tongyi Qianwen LICENSE AGREEMENT
+
+ ***
+
+ ## 📝 Model Description
+
+ ### Transparent Reasoning Meets Enhanced Safety
+
+ `Axion-Thinking-4B` combines two powerful features for building advanced and trustworthy AI applications:
+
+ 1. **Transparent Reasoning:** This model is a "Thinking" model. When given a complex prompt, it first generates its step-by-step thought process. This chain-of-thought, which concludes with a `</think>` tag, is invaluable for debugging, understanding the model's logic, and verifying its reasoning path.
+ 2. **Enhanced Safety:** On top of this powerful reasoning ability, `Axion-Thinking-4B` has undergone **extensive red-team testing with advanced protocols**. This fine-tuning improves its safety alignment, reducing the generation of harmful, biased, or inappropriate content in both its thought process and final answer.
+
+ This makes the model an excellent choice for applications where both high performance on complex tasks and a high degree of safety and transparency are required.
+
+ ***
+
+ ## 💻 How to Use
+
+ Using this model requires a specific step to parse its output, separating the thought process from the final answer.
+
+ ### Quickstart with `transformers`
+
+ The following code demonstrates how to run the model and correctly parse its unique output structure.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Your safety-tuned model name
+ model_name = "AdvRahul/Axion-Thinking-4B"
+
+ # Load the tokenizer and the model
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+
+ # Prepare the model input
+ prompt = "A bat and a ball cost ₹110 in total. The bat costs ₹100 more than the ball. How much does the ball cost?"
+ messages = [
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ # Conduct text completion
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=32768
+ )
+ output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
+
+ # === CRUCIAL: Parse the 'thinking' content ===
+ try:
+     # Find the index just past the closing </think> token (ID: 151668)
+     think_end_index = len(output_ids) - output_ids[::-1].index(151668)
+ except ValueError:
+     # If the token isn't found, assume no thinking content
+     think_end_index = 0
+
+ # Decode the thinking part and the final answer separately
+ thinking_content = tokenizer.decode(output_ids[:think_end_index], skip_special_tokens=True).strip()
+ final_content = tokenizer.decode(output_ids[think_end_index:], skip_special_tokens=True).strip()
+
+ print("🤔 Thinking Content:\n", thinking_content)
+ print("\n✅ Final Content:\n", final_content)
+ ```
+
+ ### Best Practices for Multi-Turn Chat
+
+ **Important:** In multi-turn conversations, the historical model output fed back into the prompt should only include the final answer, not the thinking content. The official chat template is designed to handle this, but developers using custom frameworks must ensure this practice is followed to maintain conversation quality.
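As a minimal sketch of this practice for custom frameworks (the `append_assistant_turn` helper and its regex-based stripping are illustrative, not part of the model's official API), the reasoning block can be removed before an assistant turn is stored in the history:

```python
import re

def append_assistant_turn(messages, raw_output):
    """Append a model reply to the chat history, keeping only the final
    answer. A complete <think>...</think> block, or a dangling prefix
    ending in </think> (when the opening tag was emitted by the template),
    is stripped before the text is stored. Illustrative helper only."""
    # Remove everything from the start of the string through the first
    # closing </think> tag; texts without the tag pass through unchanged.
    answer = re.sub(r"(?s)^.*?</think>", "", raw_output, count=1).strip()
    messages.append({"role": "assistant", "content": answer})
    return messages

history = [{"role": "user", "content": "How much does the ball cost?"}]
raw = "<think>Let x be the ball's price. x + (x + 100) = 110, so x = 5.</think>\nThe ball costs ₹5."
append_assistant_turn(history, raw)
print(history[-1]["content"])  # only the final answer is kept in the history
```

With this, only `final_content`-style text is ever fed back into `apply_chat_template` on the next turn, matching the behavior of the official template.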
 
+ ## ⚠️ Ethical Considerations and Limitations
+
+ * **Safety-Tuned, Not Perfect:** While this model is fine-tuned for safety, no AI is completely free from risk. Developers must implement their own safety layers and content moderation.
+ * **Monitor the Thoughts:** The transparent reasoning process is a powerful feature but also requires monitoring. The "thinking" content should be subject to the same safety and content policies as the final output.
+ * **Inherited Biases:** The model may still reflect biases from the base model's training data. The ability to inspect the chain-of-thought can help in identifying and mitigating such biases.
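To ground the first two points, here is a deliberately naive sketch of applying one content policy to both the reasoning trace and the final answer; the `violates_policy` helper and its phrase list are purely illustrative and stand in for a real moderation model or service:

```python
def violates_policy(text, banned_phrases=("step-by-step synthesis of",)):
    """Toy filter: flag text containing any banned phrase.
    Illustrative only; production systems should use a dedicated
    moderation model or service, not a keyword list."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in banned_phrases)

# Apply the same policy to the reasoning trace and the final answer,
# since unsafe content can surface in either part of the output.
thinking_content = "The user asks a benign arithmetic question."
final_content = "The ball costs ₹5."

for label, part in (("thinking", thinking_content), ("answer", final_content)):
    if violates_policy(part):
        print(f"Blocked: unsafe {label} content")
    else:
        print(f"OK: {label} passed the policy check")
```

The key design point is symmetry: whatever check guards `final_content` should also run over `thinking_content` before either is shown to a user or logged.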