SkyAsl committed
Commit 54070a5 · verified · 1 Parent(s): 3f8cc1b

Update README.md

Files changed (1):
  1. README.md +3 -14

README.md CHANGED
@@ -6,7 +6,6 @@ language:
 - en
 base_model:
 - unsloth/phi-4-reasoning
-new_version: SkyAsl/Rust-Master-thinking
 pipeline_tag: text-generation
 library_name: transformers
 tags:
@@ -58,13 +57,13 @@ import torch
 model_id = "SkyAsl/Rust-Master-thinking"
 
 tokenizer = AutoTokenizer.from_pretrained(model_id)
-model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
+model = AutoModelForCausalLM.from_pretrained(model_id, dtype=torch.bfloat16, device_map="auto")
 model.eval()
 
 prompt = "Explain why Rust ownership prevents data races."
 
 input_text = (
-    f"<|user|>\n{test_data[0]['prompt']}\n"
+    f"<|user|>\n{prompt}\n"
     f"<|assistant|>\n<think>\n"
 )
 
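The second hunk above fixes a copy-paste bug: the f-string referenced `test_data[0]['prompt']`, which is undefined in the README snippet, instead of the local `prompt` variable. A minimal standalone sketch of the corrected prompt assembly (the `<|user|>`, `<|assistant|>`, and `<think>` tags come straight from the template in the diff):

```python
# Build the chat-formatted input exactly as the corrected README snippet does.
prompt = "Explain why Rust ownership prevents data races."

# The template places the user turn first, then opens the assistant's
# reasoning block so generation continues inside <think>.
input_text = (
    f"<|user|>\n{prompt}\n"
    f"<|assistant|>\n<think>\n"
)

print(input_text)
```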
@@ -73,7 +72,7 @@ inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
 with torch.no_grad():
     output = model.generate(
         **inputs,
-        max_new_tokens=500,
+        max_new_tokens=3000,
         temperature=0.7,
         top_p=0.9,
         do_sample=True,
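The third hunk raises `max_new_tokens` from 500 to 3000, leaving room for the `<think>` block, and keeps sampling on with `temperature=0.7` and `top_p=0.9`. What those two sampling knobs do can be illustrated on toy logits — this is a pure-stdlib sketch of temperature scaling plus nucleus (top-p) filtering, not transformers' internal implementation:

```python
import math
import random

def sample_top_p(logits, temperature=0.7, top_p=0.9):
    """Sample a token id from logits using temperature + nucleus filtering."""
    # Temperature scales logits before softmax: values < 1 sharpen the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep the smallest set of highest-probability tokens whose mass reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = []
    mass = 0.0
    for i in order:
        kept.append(i)  # the single most likely token is always kept
        mass += probs[i]
        if mass >= top_p:
            break

    # Renormalize over the kept nucleus and draw one token.
    kept_mass = sum(probs[i] for i in kept)
    r = random.random() * kept_mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

# With these toy logits, the 0.9 nucleus contains only the top two tokens,
# so the two low-probability tail tokens can never be sampled.
token = sample_top_p([4.0, 3.5, 0.1, -2.0])
```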
@@ -127,16 +126,6 @@ Includes:
 This dataset improves the model's ability to produce structured and
 accurate explanations for Rust programming tasks.
 
-## 🔍 Notes on Reasoning Tags
-
-This model preserves **hidden reasoning structure**:
-
-- `<think>` content is **internal chain-of-thought**
-- The final output is **placed after the reasoning block**
-
-⚠️ Users should NOT expect the `<think>` content to be revealed; the
-model is aligned to hide reasoning by default.
-
 ## ✨ Acknowledgements
 
 - **Unsloth** for optimized model training
 
After the change, the affected README.md regions read:

- en
base_model:
- unsloth/phi-4-reasoning
pipeline_tag: text-generation
library_name: transformers
tags:

model_id = "SkyAsl/Rust-Master-thinking"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype=torch.bfloat16, device_map="auto")
model.eval()

prompt = "Explain why Rust ownership prevents data races."

input_text = (
    f"<|user|>\n{prompt}\n"
    f"<|assistant|>\n<think>\n"
)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=3000,
        temperature=0.7,
        top_p=0.9,
        do_sample=True,

This dataset improves the model's ability to produce structured and
accurate explanations for Rust programming tasks.

## ✨ Acknowledgements

- **Unsloth** for optimized model training
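The removed "Notes on Reasoning Tags" section described the hidden `<think>` reasoning block. Since the prompt template in this commit opens generation inside `<think>`, callers who want only the final answer typically strip everything up to the closing tag. A minimal sketch, assuming the model emits a matching `</think>` closing tag (the closing-tag name is an assumption inferred from the opening tag in the template):

```python
def extract_final_answer(generated: str, close_tag: str = "</think>") -> str:
    """Return the text after the reasoning block, or the whole string
    if no closing tag was emitted (e.g. generation was cut off)."""
    _, sep, rest = generated.partition(close_tag)
    return rest.strip() if sep else generated.strip()

sample = "Ownership means each value has one owner...\n</think>\nRust prevents data races via ownership and borrowing."
print(extract_final_answer(sample))
```

Falling back to the raw string when no closing tag appears matters in practice: with a small `max_new_tokens` budget the model may be cut off mid-reasoning, which is one motivation for the 500 → 3000 change in this commit.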