# StarCoder-15B_for_NTR

We fine-tuned [StarCoder-15B](https://huggingface.co/bigcode/starcoder) on the [Transfer_dataset](https://drive.google.com/drive/folders/1F1BPfTxHDGX-OCBthudCbu_6Qvcg_fbP?usp=drive_link) under the [NTR](https://sites.google.com/view/neuraltemplaterepair) framework for automated program repair (APR) research.

## Model Use

To use this model, first install `transformers`, `peft`, `bitsandbytes`, and `accelerate`:

```bash
pip install transformers peft bitsandbytes accelerate
```

Then run the following script to merge the LoRA adapter into the base StarCoder model:

```bash
bash merge.sh
```

Finally, load the merged model and generate patches for buggy code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('bigcode/starcoderbase', use_auth_token=True)

model = AutoModelForCausalLM.from_pretrained(
    "StarCoder-15B_for_NTR/Epoch_1/-merged",
    use_auth_token=True,
    use_cache=True,
    load_in_8bit=True,
    device_map="auto"
)

model = prepare_model_for_int8_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["c_proj", "c_attn", "q_attn"]
)

model = get_peft_model(model, lora_config)

# an example bug-fix pair
buggy_code = """
public MultiplePiePlot(CategoryDataset dataset){
  super();
  // bug_start
  this.dataset=dataset;
  // bug_end
  PiePlot piePlot=new PiePlot(null);
  this.pieChart=new JFreeChart(piePlot);
  this.pieChart.removeLegend();
  this.dataExtractOrder=TableOrder.BY_COLUMN;
  this.pieChart.setBackgroundPaint(null);
  TextTitle seriesTitle=new TextTitle("Series Title",new Font("SansSerif",Font.BOLD,12));
  seriesTitle.setPosition(RectangleEdge.BOTTOM);
  this.pieChart.setTitle(seriesTitle);
  this.aggregatedItemsKey="Other";
  this.aggregatedItemsPaint=Color.lightGray;
  this.sectionPaints=new HashMap();
}
"""

repair_template = "OtherTemplate"

# reference (ground-truth) fix, shown for illustration only
fixed_code = """
// fix_start
setDataset(dataset);
// fix_end
"""

# model inference
input_text = '<commit_before>\n' + buggy_code + '\n<commit_msg>\n' + repair_template + '\n<commit_after>\n'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

eos_id = tokenizer.convert_tokens_to_ids(tokenizer.eos_token)
generated_ids = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,
    num_beams=10,
    num_return_sequences=10,
    early_stopping=True,
    pad_token_id=eos_id,
    eos_token_id=eos_id
)

for generated_id in generated_ids:
    generated_text = tokenizer.decode(generated_id, skip_special_tokens=False)
    patch = generated_text.split('\n<commit_after>\n')[1]
    patch = patch.replace('<|endoftext|>', '')
    print(patch)
```
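The patch-extraction step in the loop above depends only on the prompt markers, so it can be checked in isolation. The decoded string below is a made-up stand-in for the model's output (the real output will differ); the splitting and end-of-text stripping mirror the code above:

```python
# Hypothetical decoded output in the NTR prompt format used above;
# the actual model output will differ.
generated_text = (
    '<commit_before>\n'
    'this.dataset=dataset;\n'
    '<commit_msg>\n'
    'OtherTemplate\n'
    '<commit_after>\n'
    'setDataset(dataset);<|endoftext|>'
)

# The candidate patch is everything after the <commit_after> marker,
# with the end-of-text token removed.
patch = generated_text.split('\n<commit_after>\n')[1]
patch = patch.replace('<|endoftext|>', '')
print(patch)  # -> setDataset(dataset);
```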

## Model Details

The model is licensed under the BigCode OpenRAIL-M v1 license agreement. The full agreement is available [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).