alpayH committed on
Commit 2a7721e · verified · 1 Parent(s): d7519c1

Add model card

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -7,16 +7,16 @@ tags:
 - code-generation
 - opencodeinstruct
 license: apache-2.0
-base_model: Qwen/Qwen2-0.5B
+base_model: Qwen/Qwen2-0.5B-Instruct
 ---
 
 # Qwen2-0.5b LoRA Fine-tuned on OpenCodeInstruct
 
-This model is a LoRA fine-tuned version of Qwen/Qwen2-0.5B on the OpenCodeInstruct dataset.
+This model is a LoRA fine-tuned version of Qwen/Qwen2-0.5B-Instruct on the OpenCodeInstruct dataset.
 
 ## Model Details
 
-- **Base Model**: Qwen/Qwen2-0.5B
+- **Base Model**: Qwen/Qwen2-0.5B-Instruct
 - **Fine-tuning Dataset**: OpenCodeInstruct (300 samples)
 - **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
 - **LoRA Rank**: 16
@@ -29,7 +29,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 from peft import PeftModel
 
 # Load base model
-base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")
+base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
 
 # Load LoRA adapters
 model = PeftModel.from_pretrained(base_model, "alpayH/qwen2-0.5b-lora-opencodeinstruct")
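The model card lists "LoRA Rank: 16". For context, a minimal NumPy sketch of the low-rank update that LoRA applies to a frozen weight matrix — the dimensions, scaling factor `alpha`, and initializations here are illustrative assumptions, not this model's actual training configuration or the PEFT library's internals:

```python
import numpy as np

# LoRA replaces a full-rank weight update with a low-rank one:
#   W' = W + (alpha / r) * B @ A
# where A (r x d_in) and B (d_out x r) are the only trained parameters.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 16, 32  # illustrative sizes; r = 16 as in the card

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

delta = (alpha / r) * B @ A                # update has rank at most r
W_adapted = W + delta

# With B zero-initialized, the adapter starts as a no-op on the base model
assert np.allclose(W_adapted, W)
assert np.linalg.matrix_rank(B @ A) <= r
```

This is why the adapter repository only needs to store the small `A` and `B` factors, while `PeftModel.from_pretrained` attaches them to the full base model at load time.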