MihaiPopa-1 committed
Commit 8ffa926 · verified · 1 Parent(s): 2d7ce46

Update README.md

Files changed (1):
  1. README.md +16 -2

README.md CHANGED
@@ -18,8 +18,22 @@ license: apache-2.0
  I just made a fine-tune of [SmolLM2 135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct) on the [GSM8K](https://huggingface.co/openai/gsm8k) dataset
  and it does improve math performance on some problems.
  ## Evaluation Results
-
  | Metric | Value |
  | :----- | :--------: |
  | **Loss** | 1.284519 |
- | **Steps** | 2805 |
+ | **Steps** | 2805 |
+ ## How to Use
+ This example (written by Gemini 3 Flash) loads the model and generates an answer to a sample question:
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "MihaiPopa-1/SmolLM2-135M-Math" # Replace with your repo path
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ prompt = "Question: If John has 5 apples and eats 2, then buys 4 more, how many does he have?\nAnswer:"
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=50)
+
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
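The README mentions fine-tuning on GSM8K but the commit does not show how training examples were formatted. As a minimal sketch, assuming each GSM8K record was flattened into the same `Question: ... / Answer: ...` template the usage example above expects (the template and helper name are assumptions, not from the commit):

```python
# Hypothetical helper: flatten one GSM8K-style record into the
# "Question: ...\nAnswer: ..." prompt format used in the README's
# usage example. The template is an assumption; the commit does not
# include the actual training code.

def format_gsm8k_example(record: dict) -> str:
    """Turn a {'question': ..., 'answer': ...} record into one training string."""
    return f"Question: {record['question']}\nAnswer: {record['answer']}"

sample = {
    "question": "If John has 5 apples and eats 2, then buys 4 more, how many does he have?",
    "answer": "5 - 2 + 4 = 7. The answer is 7.",
}
print(format_gsm8k_example(sample))
```

Strings produced this way could be tokenized and fed to a standard causal-LM fine-tuning loop; the actual prompt template used for this model may differ.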