joseneto023dev committed
Commit 962dfa7 · verified · 1 Parent(s): e833181

Upload README.md with huggingface_hub

Files changed (1): README.md (+99 −3)
README.md CHANGED (previous content was only the YAML front matter `license: apache-2.0`)
---
license: apache-2.0
base_model: unsloth/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- dyck
- reasoning
- brackets
- fine-tuning
- lora
- unsloth
language:
- en
datasets:
- conversation.jsonl
pipeline_tag: text-generation
---

# Dyck Completion Model (Reasoning)

This model is fine-tuned to **complete Dyck sequences** (balanced bracket sequences) with **step-by-step reasoning**. Given a prefix of opening brackets, it outputs the minimal closing brackets so that the full sequence is a valid Dyck word.

**Response style:** Output follows the **dataset format only** (structured `# Thought N:`, `# Step k: add 'X'.`, then `FINAL ANSWER: <sequence>`). It is not intended to mimic Qwen/DeepSeek-style prose (e.g. no "Wait...", "Let me recount", or conversational commentary). Training and inference prompts enforce this dataset style.

## Task

- **Input:** a prefix of opening brackets (e.g. `[ < (`).
- **Output:** step-by-step reasoning, then the **complete valid Dyck sequence** (e.g. `) > ]` appended).
- **Bracket pairs:** `()`, `[]`, `{}`, `<>`
31
+
32
+ - **Architecture:** [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B) (Unsloth)
33
+ - **Fine-tuning:** LoRA (r=64, alpha=128, dropout=0.05) on q/k/v/o and MLP projections
34
+ - **Training:** Causal LM; loss on assistant tokens only; format: `{reasoning}\n\nFINAL ANSWER: {full_sequence}`
35
+

## Intended Use

- Research and education on formal languages (Dyck) and chain-of-thought reasoning.
- Benchmarking reasoning models on bracket completion.

## How to Use

**Inference:** use the **merged model** (a single load, with base and LoRA weights already merged) or load base + adapter via PEFT. The merged model is a single `AutoModelForCausalLM`, and its computation is equivalent to base + adapter at every layer.
44
+
45
+ ### With merged model (this repo, if uploaded as merged)
46
+
47
+ ```python
48
+ from transformers import AutoModelForCausalLM, AutoTokenizer
49
+
50
+ model_id = "YOUR_USERNAME/YOUR_REPO" # e.g. akashdutta1030/dyck-deepseek-r1-lora
51
+ tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
52
+ model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
53
+
54
+ prompt = """Complete the following Dyck language sequence by adding the minimal necessary closing brackets.
55
+
56
+ Sequence: [ < (
57
+
58
+ Rules:
59
+ - Add only the closing brackets needed to match all unmatched opening brackets
60
+ - Response format (dataset style only): Use "# Thought N: ..." for each step, then "# Step k: add 'X'.", then "FINAL ANSWER: " followed by the complete Dyck sequence. Do not add Qwen/DeepSeek-style prose or conversational commentary."""
61
+
62
+ inputs = tokenizer(prompt, return_tensors="pt")
63
+ outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.05)
64
+ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
65
+ # Parse "FINAL ANSWER: ..." from response for the completed sequence
66
+ ```

### With LoRA adapter (load base + adapter)

```python
from unsloth import FastLanguageModel

# Loading the adapter repo directly also pulls in the base model
# (Unsloth detects the adapter via its adapter_config.json), so a
# separate base-model load is not needed.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="YOUR_USERNAME/YOUR_REPO",  # adapter repo
    max_seq_length=768,
)
# Then generate as above
```

## Training Details

- **Data:** JSONL conversations (user question → assistant reasoning + final answer). Dataset size is configurable (e.g. 60k examples).
- **Split:** ~95% train, ~5% eval.
- **Sequence length:** 768 tokens (run `check_dataset_seq_len.py` to confirm the maximum).
- **Optimization:** AdamW, cosine LR schedule at 6e-6, 25% warmup, max_grad_norm=0.5; 2 epochs is typical.
- **Weighted loss:** tokens from "FINAL ANSWER: " onward get weight 5.0; reasoning tokens get 1.0 (stronger signal on the answer).
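
The weighted-loss scheme above can be sketched as a per-position weight mask. This toy version works on character offsets for clarity; the actual trainer maps the boundary to token positions:

```python
# Build per-position loss weights for a target string: positions from
# "FINAL ANSWER:" onward get `answer_weight`, earlier (reasoning)
# positions get 1.0. Illustrative helper, not the training code itself.
def loss_weights(target_text: str, answer_weight: float = 5.0) -> list[float]:
    idx = target_text.find("FINAL ANSWER:")
    if idx < 0:
        return [1.0] * len(target_text)
    return [1.0] * idx + [answer_weight] * (len(target_text) - idx)

w = loss_weights("# Thought 1: ...\n\nFINAL ANSWER: ( )")
```

In the trainer, these weights multiply the per-token cross-entropy before averaging, so errors in the answer span cost 5× more than errors in the reasoning span.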
91
+
92
+ ## Limitations
93
+
94
+ - Trained on synthetic Dyck data; may not generalize to arbitrary bracket-like tasks.
95
+ - Performance depends on prefix length and bracket vocabulary.
96
+
97
+ ## Citation
98
+
99
+ If you use this model, please cite the base model (DeepSeek-R1-Distill-Qwen) and this fine-tuning setup as appropriate.