knockknock404 committed on
Commit
5e56f2f
·
verified ·
1 Parent(s): 5130a59

Upload 19 files

code/codekey_proofread.txt ADDED
@@ -0,0 +1,449 @@
1
+ 00101,危害国家安全罪
2
+ 00102,危害公共安全罪
3
+ 0010102,分裂国家罪
4
+ 0010103,煽动分裂国家罪
5
+ 0010105,颠覆国家政权罪
6
+ 0010106,煽动颠覆国家政权罪
7
+ 0010108,投敌叛变罪
8
+ 0010110,间谍罪
9
+ 0010111,为境外窃取、刺探、收买、非法提供国家秘密、情报罪
10
+ 0010201,放火罪
11
+ 0010202,决水罪
12
+ 0010203,爆炸罪
13
+ 0010204,投放危险物质罪
14
+ 0010205,以危险方法危害公共安全罪
15
+ 0010206,失火罪
16
+ 0010207,过失决水罪
17
+ 0010208,过失爆炸罪
18
+ 0010209,过失投放危险物质罪
19
+ 0010210,过失以危险方法危害公共安全罪
20
+ 0010211,破坏交通工具罪
21
+ 0010212,破坏交通设施罪
22
+ 0010213,破坏电力设备罪
23
+ 0010214,破坏易燃易爆设备罪
24
+ 0010216,过失损坏交通设施罪
25
+ 0010217,过失损坏电力设备罪
26
+ 0010218,过失损坏易燃易爆设备罪
27
+ 0010219,组织、领导、参加恐怖组织罪
28
+ 0010220,资助恐怖活动罪
29
+ 0010221,劫持航空器罪
30
+ 0010222,劫持船只、汽车罪
31
+ 0010223,暴力危及飞行安全罪
32
+ 0010224,破坏广播电视设施、公用电信设施罪
33
+ 0010225,过失损坏广播电视设施、公用电信设施罪
34
+ 0010226,非法制造、买卖、运输、邮寄、储存枪支、弹药、爆炸物罪
35
+ 0010227,非法制造、买卖、运输、储存危险物质罪
36
+ 0010228,违规制造、销售枪支罪
37
+ 0010229,盗窃、抢夺枪支、弹药、爆炸物、危险物质罪
38
+ 0010230,抢劫枪支、弹药、爆炸物、危险物质罪
39
+ 0010231,非法持有、私藏枪支、弹药罪
40
+ 0010232,非法出租、出借枪支罪
41
+ 0010234,非法携带枪支、弹药、管制刀具、危险物品危及公共安全罪
42
+ 0010235,重大飞行事故罪
43
+ 0010236,铁路运营安全事故罪
44
+ 0010237,交通肇事罪
45
+ 0010238,重大责任事故罪
46
+ 0010239,重大劳动安全事故罪
47
+ 0010240,危险物品肇事罪
48
+ 0010241,工程重大安全事故罪
49
+ 0010242,教育设施重大安全事故罪
50
+ 0010243,消防责任事故罪
51
+ 0010247,强令违章冒险作业罪
52
+ 0010248,大型群众性活动重大安全事故罪
53
+ 0010249,不报、谎报安全事故罪
54
+ 0010250,危险驾驶罪
55
+ 0010251,帮助恐怖活动罪
56
+ 0010253,宣扬恐怖主义、极端主义、煽动实施恐怖活动罪
57
+ 0010256,非法持有宣扬恐怖主义、极端主义物品罪
58
+ 0010301,生产、销售伪劣商品罪
59
+ 0010302,走私罪
60
+ 0010303,妨害对公司、企业的管理秩序罪
61
+ 0010304,破坏金融管理秩序罪
62
+ 0010305,金融诈骗罪
63
+ 0010306,危害税收征管罪
64
+ 0010307,侵犯知识产权罪
65
+ 0010308,扰乱市场秩序罪
66
+ 0010401,故意杀人罪
67
+ 0010402,过失致人死亡罪
68
+ 0010403,故意伤害罪
69
+ 0010404,过失致人重伤罪
70
+ 0010405,强奸罪
71
+ 0010406,强制猥亵、侮辱妇女罪
72
+ 0010407,猥亵儿童罪
73
+ 0010408,非法拘禁罪
74
+ 0010409,绑架罪
75
+ 0010410,拐卖妇女、儿童罪
76
+ 0010411,收买被拐卖的妇女、儿童罪
77
+ 0010413,诬告陷害罪
78
+ 0010415,雇用童工从事危重劳动罪
79
+ 0010416,非法搜查罪
80
+ 0010417,非法侵入住宅罪
81
+ 0010418,侮辱罪
82
+ 0010419,诽谤罪
83
+ 0010420,刑讯逼供罪
84
+ 0010421,暴力取证罪
85
+ 0010422,虐待被监管人罪
86
+ 0010423,煽动民族仇恨、民族歧视罪
87
+ 0010424,出版歧视、侮辱少数民族作品罪
88
+ 0010427,侵犯通信自由罪
89
+ 0010428,私自开拆、隐匿、毁弃邮件、电报罪
90
+ 0010429,报复陷害罪
91
+ 0010431,破坏选举罪
92
+ 0010432,暴力干涉婚姻自由罪
93
+ 0010433,重婚罪
94
+ 0010434,破坏军婚罪
95
+ 0010435,虐待罪
96
+ 0010436,遗弃罪
97
+ 0010437,拐骗儿童罪
98
+ 0010438,中介组织人员提供虚假证明文件罪
99
+ 0010439,中介组织人员出具证明文件重大失实罪
100
+ 0010440,奸淫幼女罪
101
+ 0010441,组织残疾人、儿童乞讨罪
102
+ 0010442,出售、非法提供公民个人信息罪
103
+ 0010443,非法获取公民个人信息罪
104
+ 0010444,组织未成年人进行违反治安管理活动罪
105
+ 0010445,组织出卖人体器官罪
106
+ 0010446,强迫劳动罪
107
+ 0010447,强制猥亵、侮辱罪
108
+ 0010448,侵犯公民个人信息罪
109
+ 0010449,虐待被监护、看护人罪
110
+ 0010501,抢劫罪
111
+ 0010502,盗窃罪
112
+ 0010503,诈骗罪
113
+ 0010504,抢夺罪
114
+ 0010505,聚众哄抢罪
115
+ 0010506,侵占罪
116
+ 0010507,职务侵占罪
117
+ 0010508,挪用资金罪
118
+ 0010509,挪用特定款物罪
119
+ 0010510,敲诈勒索罪
120
+ 0010511,故意毁坏财物罪
121
+ 0010512,破坏生产经营罪
122
+ 0010513,拒不支付劳动报酬罪
123
+ 0010601,扰乱公共秩序罪
124
+ 0010602,妨害司法罪
125
+ 0010603,妨害国(边)境管理罪
126
+ 0010604,妨害文物管理罪
127
+ 0010605,危害公共卫生罪
128
+ 0010606,破坏环境资源保护罪
129
+ 0010607,走私、贩卖、运输、制造毒品罪
130
+ 0010608,组织、强迫、引诱、容留、介绍卖淫罪
131
+ 0010609,制作、贩卖、传播淫秽物品罪
132
+ 0010701,阻碍军人执行职务罪
133
+ 0010702,阻碍军事行动罪
134
+ 0010703,破坏武器装备、军事设施、军事通信罪
135
+ 0010707,聚众扰乱军事管理区秩序罪
136
+ 0010708,冒充军人招摇撞骗罪
137
+ 0010711,接送不合格兵员罪
138
+ 0010712,伪造、变造、买卖武装部队公文、证件、印章罪
139
+ 0010714,非法生产、买卖军用标志罪
140
+ 0010722,过失损坏武器装备、军事设施、军事通信罪
141
+ 0010723,非法生产、买卖武装部队制式服装罪
142
+ 0010724,伪造、盗窃、买卖、非法提供、非法使用武装部队专用标志罪
143
+ 0010801,贪污罪
144
+ 0010802,挪用公款罪
145
+ 0010803,受贿罪
146
+ 0010804,单位受贿罪
147
+ 0010805,行贿罪
148
+ 0010806,对单位行贿罪
149
+ 0010807,介绍贿赂罪
150
+ 0010808,单位行贿罪
151
+ 0010809,巨额财产来源不明罪
152
+ 0010810,隐瞒境外存款罪
153
+ 0010811,私分国有资产罪
154
+ 0010812,私分罚没财物罪
155
+ 0010813,利用影响力受贿罪
156
+ 0010814,对有影响力的人行贿罪
157
+ 0010901,滥用职权罪
158
+ 0010902,玩忽职守罪
159
+ 0010903,故意泄露国家秘密罪
160
+ 0010904,过失泄露国家秘密罪
161
+ 0010905,徇私枉法罪
162
+ 0010906,民事、行政枉法裁判罪
163
+ 0010907,执行判决、裁定失职罪
164
+ 0010908,执行判决、裁定滥用职权罪
165
+ 0010909,私放在押人员罪
166
+ 0010910,失职致使在押人员脱逃罪
167
+ 0010911,徇私舞弊减刑、假释、暂予监外执行罪
168
+ 0010912,徇私舞弊不移交刑事案件罪
169
+ 0010913,滥用管理公司、证券职权罪
170
+ 0010914,徇私舞弊不征、少征税款罪
171
+ 0010915,徇私舞弊发售发票、抵扣税款、出口退税罪
172
+ 0010916,违法提供出口退税凭证罪
173
+ 0010917,国家机关工作人员签订、履行合同失职被骗罪
174
+ 0010918,违法发放林木采伐许可证罪
175
+ 0010919,环境监管失职罪
176
+ 0010920,传染病防治失职罪
177
+ 0010921,非法批准征用、占用土地罪
178
+ 0010922,非法低价出让国有土地使用权罪
179
+ 0010923,放纵走私罪
180
+ 0010924,商检徇私舞弊罪
181
+ 0010925,商检失职罪
182
+ 0010926,动植物检疫徇私舞弊罪
183
+ 0010927,动植物检疫失职罪
184
+ 0010928,放纵制售伪劣商品犯罪行为罪
185
+ 0010929,办理偷越国(边)境人员出入境证件罪
186
+ 0010930,放行偷越国(边)境人员罪
187
+ 0010931,不解救被拐卖、绑架妇女、儿童罪
188
+ 0010933,帮助犯罪分子逃避处罚罪
189
+ 0010934,招收公务员、学生徇私舞弊罪
190
+ 0010935,失职造成珍贵文物损毁、流失罪
191
+ 0010938,枉法裁判罪
192
+ 0010939,国家机关工作人员签订、履行合同失职罪
193
+ 0010940,枉法仲裁罪
194
+ 0010941,国家机关工作人员徇私舞弊罪
195
+ 0010943,食品监管渎职罪
196
+ 0011002,隐瞒、谎报军情罪
197
+ 0011006,擅离、玩忽军事职守罪
198
+ 0011007,阻碍执行军事职务罪
199
+ 0011013,为境外窃取、刺探、收买、非法提供军事秘密罪
200
+ 0011018,逃离部队罪
201
+ 0011021,盗窃、抢夺武器装备、军用物资罪
202
+ 0011025,擅自出卖、转让军队房地产罪
203
+ 0011026,虐待部属罪
204
+ 0019799,商业受贿罪
205
+ 001030101,生产、销售伪劣产品罪
206
+ 001030102,生产、销售假药罪
207
+ 001030103,生产、销售劣药罪
208
+ 001030104,生产、销售不符合卫生标准的食品罪
209
+ 001030105,生产、销售有毒、有害食品罪
210
+ 001030106,生产、销售不符合标准的医用器材罪
211
+ 001030107,生产、销售不符合安全标准的产品罪
212
+ 001030108,生产、销售伪劣农药、兽药、化肥、种子罪
213
+ 001030109,生产、销售不符合卫生标准的化妆品罪
214
+ 001030110,生产、销售不符合安全标准的食品罪
215
+ 001030201,走私武器、弹药罪
216
+ 001030203,走私假币罪
217
+ 001030204,走私文物罪
218
+ 001030205,走私贵重金属罪
219
+ 001030206,走私珍贵动物、珍贵动物制品罪
220
+ 001030207,走私珍稀植物、珍稀植物制品罪
221
+ 001030208,走私淫秽物品罪
222
+ 001030209,走私普通货物、物品罪
223
+ 001030210,走私废物罪
224
+ 001030211,走私固体废物罪
225
+ 001030212,走私国家禁止进出口的货物、物品罪
226
+ 001030301,虚报注册资本罪
227
+ 001030302,虚假出资、抽逃出资罪
228
+ 001030305,妨害清算罪
229
+ 001030306,隐匿、故意销毁会计凭证、会计帐簿、财务会计报告罪
230
+ 001030309,非法经营同类营业罪
231
+ 001030310,为亲友非法牟利罪
232
+ 001030311,签订、履行合同失职被骗罪
233
+ 001030312,国有公司、企业、事业单位人员失职罪
234
+ 001030313,国有公司、企业、事业单位人员滥用职权罪
235
+ 001030314,徇私舞弊低价折股、出售国有资产罪
236
+ 001030316,违规披露、不披露重要信息罪
237
+ 001030317,虚假破产罪
238
+ 001030318,非国家工作人员受贿罪
239
+ 001030319,对非国家工作人员行贿罪
240
+ 001030320,背信损害上市公司利益罪
241
+ 001030401,伪造货币罪
242
+ 001030402,出售、购买、运输假币罪
243
+ 001030403,金融工作人员购买假币、以假币换取货币罪
244
+ 001030404,持有、使用假币罪
245
+ 001030405,变造货币罪
246
+ 001030406,擅自设立金融机构罪
247
+ 001030407,伪造、变造、转让金融机构经营许可证、批准文件罪
248
+ 001030408,高利转贷罪
249
+ 001030409,非法吸收公众存款罪
250
+ 001030410,伪造、变造金融票证罪
251
+ 001030411,伪造、变造国家有价证券罪
252
+ 001030412,伪造、变造股票、公司、企业债券罪
253
+ 001030413,擅自发行股票、公司、企业债券罪
254
+ 001030414,内幕交易、泄露内幕信息罪
255
+ 001030415,编造并传播证券、期货交易虚假信息罪
256
+ 001030417,操纵证券、期货交易价格罪
257
+ 001030418,骗购外汇罪
258
+ 001030419,违法向关系人发放贷款罪
259
+ 001030420,违法发放贷款罪
260
+ 001030422,非法出具金融票证罪
261
+ 001030423,对违法票据承兑、付款、保证罪
262
+ 001030424,逃汇罪
263
+ 001030425,洗钱罪
264
+ 001030426,骗取贷款、票据承兑、金融票证罪
265
+ 001030427,妨害信用卡管理罪
266
+ 001030428,窃取、收买、非法提供信用卡信息罪
267
+ 001030429,操纵证券、期货市场罪
268
+ 001030431,违法运用资金罪
269
+ 001030434,违规出具金融票证罪
270
+ 001030435,利用未公开信息交易罪
271
+ 001030501,集资诈骗罪
272
+ 001030502,贷款诈骗罪
273
+ 001030503,票据诈骗罪
274
+ 001030504,金融凭证诈骗罪
275
+ 001030505,信用证诈骗罪
276
+ 001030506,信用卡诈骗罪
277
+ 001030507,有价证券诈骗罪
278
+ 001030508,保险诈骗罪
279
+ 001030601,偷税罪
280
+ 001030602,抗税罪
281
+ 001030603,逃避追缴欠税罪
282
+ 001030604,骗取出口退税罪
283
+ 001030605,虚开增值税专用发票、用于骗取出口退税、抵扣税款发票罪
284
+ 001030606,伪造、出售伪造的增值税专用发票罪
285
+ 001030607,非法出售增值税专用发票罪
286
+ 001030608,非法购买增值税专用发票、购买伪造的增值税专用发票罪
287
+ 001030609,非法制造、出售非法制造的用于骗取出口退税、抵扣税款发票罪
288
+ 001030610,非法制造、出售非法制造的发票罪
289
+ 001030611,非法出售用于骗取出口退税、抵扣税款发票罪
290
+ 001030612,非法出售发票罪
291
+ 001030613,逃税罪
292
+ 001030614,虚开发票罪
293
+ 001030615,持有伪造的发票罪
294
+ 001030701,假冒注册商标罪
295
+ 001030702,销售假冒注册商标的商品罪
296
+ 001030703,非法制造、销售非法制造的注册商标标识罪
297
+ 001030704,假冒专利罪
298
+ 001030705,侵犯著作权罪
299
+ 001030706,销售侵权复制品罪
300
+ 001030707,侵犯商业秘密罪
301
+ 001030801,损害商业信誉、商品声誉罪
302
+ 001030802,虚假广告罪
303
+ 001030803,串通投标罪
304
+ 001030804,合同诈骗罪
305
+ 001030805,非法经营罪
306
+ 001030806,强迫交易罪
307
+ 001030807,伪造、倒卖伪造的有价票证罪
308
+ 001030808,倒卖车票、船票罪
309
+ 001030809,非法转让、倒卖土地使用权罪
310
+ 001030810,提供虚假证明文件罪
311
+ 001030811,出具证明文件重大失实罪
312
+ 001030812,逃避商检罪
313
+ 001030813,组织、领导传销活动罪
314
+ 001060101,妨害公务罪
315
+ 001060102,煽动暴力抗拒法律实施罪
316
+ 001060103,招摇撞骗罪
317
+ 001060104,伪造、变造、买卖国家机关公文、证件、印章罪
318
+ 001060105,盗窃、抢夺、毁灭国家机关公文、证件、印章罪
319
+ 001060106,伪造公司、企业、事业单位、人民团体印章罪
320
+ 001060107,伪造、变造居民身份证罪
321
+ 001060108,非法生产、买卖警用装备罪
322
+ 001060109,非法获取国家秘密罪
323
+ 001060110,非法持有国家绝密、机密文件、资料、物品罪
324
+ 001060112,非法使用窃听、窃照专用器材罪
325
+ 001060113,非法侵入计算机信息系统罪
326
+ 001060114,破坏计算机信息系统罪
327
+ 001060115,扰乱无线电通讯管理秩序罪
328
+ 001060116,聚众扰乱社会秩序罪
329
+ 001060117,聚众冲击国家机关罪
330
+ 001060118,聚众扰乱公共场所秩序、交通秩序罪
331
+ 001060119,投放虚假危险物质罪
332
+ 001060120,聚众斗殴罪
333
+ 001060121,寻衅滋事罪
334
+ 001060122,组织、领导、参加黑社会性质组织罪
335
+ 001060123,入境发展黑社会组织罪
336
+ 001060124,包庇、纵容黑社会性质组织罪
337
+ 001060125,传授犯罪方法罪
338
+ 001060126,非法集会、游行、示威罪
339
+ 001060129,侮辱国旗、国徽罪
340
+ 001060130,组织、利用会道门、邪教组织、利用迷信破坏法律实施罪
341
+ 001060131,组织、利用会道门、邪教组织利用迷信致人死亡罪
342
+ 001060132,聚众淫乱罪
343
+ 001060133,引诱未成年人聚众淫乱罪
344
+ 001060135,赌博罪
345
+ 001060137,编造、故意传播虚假恐怖信息罪
346
+ 001060138,开设赌场罪
347
+ 001060139,非法获取计算机信息系统数据、非法控制计算机信息系统罪
348
+ 001060140,提供侵入、非法控制计算机信息系统程序、工具罪
349
+ 001060141,伪造、变造、买卖身份证件罪
350
+ 001060142,使用虚假身份证件、盗用身份证件罪
351
+ 001060143,非法生产、销售专用间谍器材、窃听、窃照专用器材罪
352
+ 001060144,组织考试作弊罪
353
+ 001060145,非法出售、提供试题、答案罪
354
+ 001060146,代替考试罪
355
+ 001060148,非法利用信息网络罪
356
+ 001060149,帮助信息网络犯罪活动罪
357
+ 001060150,扰乱国家机关工作秩序罪
358
+ 001060151,组织、资助非法聚集罪
359
+ 001060152,编造、故意传播虚假信息罪
360
+ 001060154,盗窃、侮辱、故意毁坏尸体、尸骨、骨灰罪
361
+ 001060201,伪证罪
362
+ 001060202,辩护人、诉讼代理人毁灭证据、伪造证据、妨害作证罪
363
+ 001060203,妨害作证罪
364
+ 001060204,帮助毁灭、伪造证据罪
365
+ 001060205,打击报复证人罪
366
+ 001060206,扰乱法庭秩序罪
367
+ 001060207,窝藏、包庇罪
368
+ 001060209,窝藏、转移、收购、销售赃物罪
369
+ 001060210,拒不执行判决、裁定罪
370
+ 001060211,非法处置查封、扣押、冻结的财产罪
371
+ 001060212,破坏监管秩序罪
372
+ 001060213,脱逃罪
373
+ 001060214,劫夺被押解人员罪
374
+ 001060215,组织越狱罪
375
+ 001060216,暴动越狱罪
376
+ 001060217,聚众持械劫狱罪
377
+ 001060218,掩饰、隐瞒犯罪所得、犯罪所得收益罪
378
+ 001060219,虚假诉讼罪
379
+ 001060301,组织他人偷越国(边)境罪
380
+ 001060302,骗取出境证件罪
381
+ 001060303,提供伪造、变造的出入境证件罪
382
+ 001060304,出售出入境证件罪
383
+ 001060305,运送他人偷越国(边)境罪
384
+ 001060306,偷越国(边)境罪
385
+ 001060401,故意损毁文物罪
386
+ 001060402,故意损毁名胜古迹罪
387
+ 001060403,过失损毁文物罪
388
+ 001060404,非法向外国人出售、赠送珍贵文物罪
389
+ 001060405,倒卖文物罪
390
+ 001060406,非法出售、私赠文物藏品罪
391
+ 001060407,盗掘古文化遗址、古墓葬罪
392
+ 001060408,盗掘古人类化石、古脊椎动物化石罪
393
+ 001060409,抢夺、窃取国有档案罪
394
+ 001060410,擅自出卖、转让国有档案罪
395
+ 001060504,非法组织卖血罪
396
+ 001060505,强迫卖血罪
397
+ 001060506,非法采集、供应血液、制作、供应血液制品罪
398
+ 001060508,医疗事故罪
399
+ 001060509,非法行医罪
400
+ 001060510,非法进行节育手术罪
401
+ 001060512,妨害动植物防疫、检疫罪
402
+ 001060601,重大环境污染事故罪
403
+ 001060602,非法处置进口的固体废物罪
404
+ 001060603,擅自进口固体废物罪
405
+ 001060604,非法捕捞水产品罪
406
+ 001060605,非法猎捕、杀害珍贵、濒危野生动物罪
407
+ 001060606,非法收购、运输、出售珍贵、濒危野生动物、珍贵、濒危野生动物制品罪
408
+ 001060607,非法狩猎罪
409
+ 001060608,非法占用农用地罪
410
+ 001060609,非法采矿罪
411
+ 001060610,破坏性采矿罪
412
+ 001060611,非法采伐、毁坏国家重点保护植物罪
413
+ 001060612,非法收购、运输、加工、出售国家重点保护植物、国家重点保护植物制品罪
414
+ 001060613,盗伐林木罪
415
+ 001060614,滥伐林木罪
416
+ 001060615,非法收购、运输盗伐、滥伐的林木罪
417
+ 001060616,非法占用耕地罪
418
+ 001060617,非法采伐、毁坏珍贵树木罪
419
+ 001060618,非法收购盗伐、滥伐的林木罪
420
+ 001060619,污染环境罪
421
+ 001060701,走私、贩卖、运输、制造毒品罪
422
+ 001060702,非法持有毒品罪
423
+ 001060703,包庇毒品犯罪分子罪
424
+ 001060704,窝藏、转移、隐瞒毒品、毒赃罪
425
+ 001060705,走私制毒物品罪
426
+ 001060706,非法买卖制毒物品罪
427
+ 001060707,非法种植毒品原植物罪
428
+ 001060708,非法买卖、运输、携带、持有毒品原植物种子、幼苗罪
429
+ 001060709,引诱、教唆、欺骗他人吸毒罪
430
+ 001060710,强迫他人吸毒罪
431
+ 001060711,容留他人吸毒罪
432
+ 001060712,非法提供麻醉药品、精神药品罪
433
+ 001060713,非法生产、买卖、运输制毒物品、走私制毒物品罪
434
+ 001060801,组织卖淫罪
435
+ 001060802,强迫卖淫罪
436
+ 001060803,协助组织卖淫罪
437
+ 001060804,引诱、容留、介绍卖淫罪
438
+ 001060805,引诱幼女卖淫罪
439
+ 001060806,传播性病罪
440
+ 001060901,制作、复制、出版、贩卖、传播淫秽物品牟利罪
441
+ 001060903,传播淫秽物品罪
442
+ 001060904,组织播放淫秽音像制品罪
443
+ 001060905,组织淫秽表演罪
444
+ 001970201,破坏通讯设备罪
445
+ 001970302,伪造货币或贩运伪造的货币罪
446
+ 001970303,伪造车票、船票、邮票、税票、货票罪
447
+ 001970304,破坏集体生产罪
448
+ 001970401,拐卖人口罪
449
+ 001970801,徇私舞弊罪
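The file above maps hierarchical charge codes to crime names, one `code,name` pair per line (the leading digits encode the chapter/section hierarchy). A minimal sketch of loading it into a lookup dict; `load_codekey` is an illustrative helper, not part of the repository:

```python
def load_codekey(path):
    """Parse lines like '00101,危害国家安全罪' into {code: crime_name}."""
    mapping = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # split on the first comma only; the crime names themselves
            # use the Chinese enumeration mark '、', never an ASCII comma
            code, _, name = line.partition(",")
            mapping[code] = name
    return mapping
```

Under the format above, `mapping["0010502"]` would return `盗窃罪`.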
code/configs/hyperparametric.py ADDED
@@ -0,0 +1,45 @@
1
+
2
+ class Generate_config():
3
+ def __init__(self):
4
+ self.max_length = 2048
5
+ self.truncation = True
6
+ self.do_sample = True
7
+ self.max_new_tokens = 1024
8
+ self.temperature = 0.7
9
+ self.output_hidden_states = True  # trailing comma removed: "True," is the tuple (True,)
10
+ self.return_dict_in_generate = True
11
+ self.output_logits=True
12
+
13
+ def to_dict(self):
14
+ return self.__dict__
15
+
16
+ class Reward_config():
17
+ def __init__(self):
18
+ self.max_length = 2048
19
+ self.truncation = True
20
+ self.do_sample = True
21
+ self.max_new_tokens = 10
22
+ self.temperature = 0.7
23
+ self.padding = True
24
+ self.output_hidden_states = True  # trailing comma removed: "True," is the tuple (True,)
25
+ self.return_dict_in_generate = True
26
+ self.output_logits=True
27
+
28
+ self.open_tag = '<v>'
29
+ self.close_tag = '</v>'
30
+ self.lower = 0
31
+ self.upper = 100
32
+
33
+ def to_dict(self):
34
+ return self.__dict__
35
+
36
+ class Tree_config():
37
+ def __init__(self):
38
+ self.max_depth = 5
39
+ self.branch = 5
40
+
41
+ def to_dict(self):
42
+ return self.__dict__
43
+
44
+
45
+
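The pattern above is simple: each config class holds generation hyperparameters as plain attributes, and `to_dict()` returns them as a kwargs dict. A small sketch (the class below is a trimmed, hypothetical stand-in, not the exact file), including the trailing-comma pitfall that silently turns a boolean into a one-element tuple:

```python
class GenerateConfigSketch:
    """Trimmed stand-in for Generate_config above (illustrative only)."""
    def __init__(self):
        self.max_new_tokens = 1024
        self.temperature = 0.7
        self.do_sample = True

    def to_dict(self):
        return self.__dict__

cfg = GenerateConfigSketch().to_dict()
# cfg would later be splatted into a generation call, e.g. model.generate(**cfg);
# the model itself is deliberately not defined in this sketch.

# Pitfall: a trailing comma makes the value a 1-tuple, not a boolean.
flag = True,
assert flag == (True,)
assert cfg["do_sample"] is True
```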
code/data/adv/test.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
code/data/adv/train.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
code/data/ori/test.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
code/data/ori/train.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
code/main.py ADDED
@@ -0,0 +1,383 @@
1
+ import os
2
+ import sys
3
+ import argparse
4
+ import logging
5
+ import time
6
+ import json
7
+
8
+ #os.environ["CUDA_VISIBLE_DEVICES"] = "1"
9
+
10
+ from peft import PeftModel,LoraConfig, get_peft_model, prepare_model_for_kbit_training
11
+ from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments, BitsAndBytesConfig
12
+ from datasets import load_dataset,Dataset
13
+ import torch
14
+
15
+ from utils.warp import Warp,WarpLJP
16
+ from utils.dataset import DataCollatorForReward
17
+ from utils.trainer import PRGTrainer
18
+ from tree.base import Tree,Node,I_policy
19
+ from configs.hyperparametric import Reward_config,Tree_config
20
+ from model.logitsprocessor import OutputControlLogitsProcessor,RewardControlLogitsProcessor
21
+ from tree.asts import AST
22
+ from utils.model_generate import generate_string,generate_score
23
+
24
+ tree_config = Tree_config().to_dict()
25
+ #reward_config = Reward_config().to_dict()
26
+
27
+ import torch
28
+ #torch.cuda.set_device(0)
29
+
30
+
31
+ #TASKS = ['ecthr_a','ecthr_b']
32
+ TASKS = ['ljp',]
33
+
34
+
35
+ def get_args():
36
+ parser = argparse.ArgumentParser()
37
+
38
+ ## ___datasets___
39
+ #parser.add_argument('--data_path',default='lex_glue',type=str, help='Path containing dataset')
40
+ parser.add_argument('--train_path',default='',type=str, help='Path containing dataset')
41
+ parser.add_argument('--eval_path',default='',type=str, help='Path containing dataset')
42
+ parser.add_argument('--test_path',default='',type=str, help='Path containing dataset')
43
+ parser.add_argument('--dataset',default='ljp',type=str, help='Dataset of choice in data_path')
44
+ parser.add_argument('--save_data_path',default='',type=str, help='The path used to save dataset')
45
+ parser.add_argument('--output_path',default='',type=str, help='The path used to save outputs')
46
+ parser.add_argument('--sample_path',default='',type=str, help='The path used to samples')
47
+ parser.add_argument('--control_file',default='./codekey_proofread.txt',type=str, help='The path used to output control')
48
+
49
+ ## ___model___
50
+ parser.add_argument('--generate_model_path',default='',type=str, help='Path containing model')
51
+ parser.add_argument('--reward_model_path',default='',type=str, help='Path containing model')
52
+ parser.add_argument('--reward_save_path',default='./output/reward',type=str, help='Path containing model')
53
+ parser.add_argument('--reward_lora_path',default='',type=str,)
54
+ parser.add_argument('--per_device_train_batch_size',default=2,type=int)
55
+ parser.add_argument('--gradient_accumulation_steps',default=2,type=int)
56
+ parser.add_argument('--learning_rate',default=1e-3,type=float)
57
+ parser.add_argument('--num_train_epochs',default=10,type=int)
58
+ parser.add_argument('--logging_steps',default=200,type=int)
59
+ parser.add_argument('--save_strategy',default='epoch',type=str,)
60
+ parser.add_argument('--fp16',action='store_true',default=True,)
61
+ parser.add_argument('--optim',default='paged_adamw_8bit',type=str,)
62
+
63
+ parser.add_argument('--lora_rank',default=64,type=int)
64
+ parser.add_argument('--lora_alpha',default=16,type=int)
65
+ parser.add_argument('--lora_dropout',default=0.1,type=float)
66
+
67
+ ## ___pipeline___
68
+ parser.add_argument('--do_train',action='store_true',default=False, help='Training or not')
69
+ parser.add_argument('--do_test',action='store_true',default=True, help='Eval or not')
70
+
71
+ ## ___parameter___
72
+ parser.add_argument('--budget',default=20,type=int, help='iterations of search')
73
+ parser.add_argument('--reward_funcation',default='leaf',type=str,choices=['random','reward','leaf'], help='reward function used to score nodes during search')
74
+ parser.add_argument('--iteration',default=3,type=int, help='iterations of sample')
75
+
76
+ ## ___special___
77
+ parser.add_argument('--ljp_mode',default='p',type=str,choices=['p','pd','pdf'])
78
+ parser.add_argument('--logits_control',action='store_true',default=False, help='Training or not')
79
+ parser.add_argument('--add_reward',action='store_true',default=False,)
80
+ parser.add_argument('--inference_mode',default='zeroshot',type=str,choices=['zeroshot','fewshot','cot'])
81
+
82
+
83
+
84
+ return parser.parse_args()
85
+
86
+ def get_logger(path='./'):
87
+ log_path = os.path.join(path,"log_%s.txt"%(time.strftime("%Y-%m-%d-%H-%M-%S",time.localtime())))
88
+ logging.basicConfig(
89
+ format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
90
+ datefmt="%m/%d/%Y %H:%M:%S",
91
+ handlers=[logging.StreamHandler(sys.stdout),
92
+ logging.FileHandler(log_path)],
93
+ )
94
+ logger = logging.getLogger(__name__)
95
+ logger.setLevel(level=logging.DEBUG)
96
+ return logger
97
+
98
+ def load_data(args):
99
+ if os.path.isdir(args.train_path):
100
+ save_path = os.path.join(args.save_data_path,args.dataset)
101
+ if not os.path.exists(save_path):
102
+ dataset = load_dataset(path=args.train_path,name=args.dataset)
103
+ dataset.save_to_disk(save_path)
104
+ else:
105
+ dataset = load_dataset(save_path)
106
+ if os.path.isfile(args.train_path):
107
+ data_files = {mode:path for mode,path
108
+ in zip(['train','validation','test'],[args.train_path,args.eval_path,args.test_path])
109
+ if path}
110
+ dataset = load_dataset('json',data_files=data_files)
111
+ return dataset
112
+
113
+ def load_samples(sample_path,):
114
+ path_list = os.listdir(sample_path)
115
+ samples = []
116
+ for path in path_list:
117
+ path = os.path.join(sample_path,path)
118
+ with open(path,'r') as f:
119
+ for l in f.readlines():
120
+ sample = json.loads(l)
121
+ samples.append(sample)
122
+ return samples
123
+
124
+
125
+
126
+ def train(args,warp,dataset,):
127
+ # init TrainingArgument
128
+ training_args = TrainingArguments(
129
+ output_dir=os.path.join(args.reward_save_path,'reward_%s'%(time.strftime("%Y-%m-%d-%H-%M-%S",time.localtime()))),
130
+ per_device_train_batch_size=args.per_device_train_batch_size,
131
+ gradient_accumulation_steps=args.gradient_accumulation_steps,
132
+ learning_rate=args.learning_rate,
133
+ num_train_epochs=args.num_train_epochs,
134
+ logging_steps=args.logging_steps,
135
+ save_strategy=args.save_strategy,
136
+ fp16=args.fp16,
137
+ optim=args.optim,
138
+ remove_unused_columns=False
139
+ )
140
+
141
+ bnb_config = BitsAndBytesConfig(
142
+ load_in_4bit=True,
143
+ bnb_4bit_compute_dtype=torch.float16,
144
+ bnb_4bit_use_double_quant=True,
145
+ bnb_4bit_quant_type="nf4",
146
+ llm_int8_threshold=6.0,
147
+ llm_int8_has_fp16_weight=False,
148
+ )
149
+
150
+ peft_config = LoraConfig(
151
+ r=args.lora_rank,
152
+ lora_alpha=args.lora_alpha,
153
+ lora_dropout=args.lora_dropout,
154
+ target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
155
+ bias="none",
156
+ task_type="CAUSAL_LM"
157
+ )
158
+
159
+ # init model
160
+ if not warp.reward_model:
161
+ warp.load_reward_model(bnb_config=bnb_config)
162
+
163
+ model = warp.reward_model
164
+ tokenizer = warp.reward_tokenizer
165
+
166
+ model = prepare_model_for_kbit_training(model)
167
+ model = get_peft_model(model, peft_config)
168
+
169
+ # init collator
170
+ collator = DataCollatorForReward(tokenizer=tokenizer)
171
+ logits_processor = RewardControlLogitsProcessor(tokenizer=tokenizer)
172
+
173
+ # init dataset
174
+ trainset = dataset['0']
175
+
176
+ # init trainer
177
+ trainer = PRGTrainer(
178
+ tokenizer=tokenizer,
179
+ model=model,
180
+ args=training_args,
181
+ train_dataset=trainset,
182
+ data_collator=collator,
183
+ logits_processor=logits_processor
184
+ )
185
+
186
+ # clean memory
187
+ warp.generate_model = None
188
+ torch.cuda.empty_cache()
189
+
190
+ # training
191
+ logger.info('start training..')
192
+ for i,trainset in dataset.items():
193
+ if i != '0':
194
+ trainer.train_dataset = trainset
195
+
196
+ trainer.train()
197
+
198
+ logger.info('training end..')
199
+ warp.reward_model = model
200
+ warp.reward_tokenizer = tokenizer
201
+
202
+ def evaluate(args,warp,dataset,):
203
+ if warp.generate_model == None:
204
+ warp.load_generate_model()
205
+ if args.add_reward:
206
+ if args.reward_lora_path != '':
207
+ bnb_config = BitsAndBytesConfig(load_in_4bit=True,
208
+ bnb_4bit_compute_dtype=torch.float16,
209
+ bnb_4bit_use_double_quant=True,
210
+ bnb_4bit_quant_type="nf4",
211
+ llm_int8_threshold=6.0,
212
+ llm_int8_has_fp16_weight=False,
213
+ )
214
+ warp.load_reward_model(bnb_config=bnb_config)
215
+ warp.reward_model = PeftModel.from_pretrained(warp.reward_model, args.reward_lora_path)
216
+ else:
217
+ warp.load_reward_model()
218
+ rewarder = {'model':warp.reward_model,'tokenizer':warp.reward_tokenizer,}
219
+ if args.logits_control:
220
+ rewarder['reward_processor'] = RewardControlLogitsProcessor(tokenizer=rewarder['tokenizer'])
221
+ else:
222
+ rewarder = {}
223
+
224
+ model = warp.generate_model
225
+ tokenizer = warp.generate_tokenizer
226
+
227
+ def get_response(x,a,tokenizer,model,rewarder={}):
228
+ if rewarder == {}:
229
+ inputs = warp.prompt_to_crime(x,a,bos=tokenizer.bos_token,eos=tokenizer.eos_token)
230
+ if args.logits_control:
231
+ outputs = generate_string(inputs,tokenizer=tokenizer,model=model,logits_processor=warp.logits_processor)
232
+ else:
233
+ outputs = generate_string(inputs,tokenizer=tokenizer,model=model,)
234
+
235
+ response = warp.step_from_response(outputs)
236
+ return response
237
+ if 'reward_processor' not in rewarder.keys():
238
+ rewarder['reward_processor'] = None
239
+
240
+ for i in range(tree_config['branch']):
241
+ inputs = warp.prompt_to_crime(x,a,bos=tokenizer.bos_token,eos=tokenizer.eos_token)
242
+ if args.logits_control:
243
+ outputs = generate_string(inputs,tokenizer=tokenizer,model=model,logits_processor=warp.logits_processor)
244
+ else:
245
+ outputs = generate_string(inputs,tokenizer=tokenizer,model=model,)
246
+ response = warp.step_from_response(outputs)
247
+
248
+ thought = warp.prompt_to_value(x,a+response,bos=rewarder['tokenizer'].bos_token,eos=rewarder['tokenizer'].eos_token)
249
+ reward = generate_score(thought,
250
+ tokenizer=rewarder['tokenizer'],
251
+ model=rewarder['model'],
252
+ reward_processor=rewarder['reward_processor']
253
+ )
254
+ r = warp.value_from_response(reward)
255
+ if '拒绝' in r:
256
+ continue
257
+ else:
258
+ break
259
+ return response
260
+
261
+ logger.info('start eval..')
262
+ time_start = time.time()
263
+ reward_control = 'rewardcontrol' if args.add_reward else 'un-reward'
264
+ task_name = args.test_path.split('/')[-2]
265
+ save_path = os.path.join(args.output_path,'eval',"%s_%s_%s_%s.json"%(task_name,args.inference_mode,reward_control,time.strftime("%Y-%m-%d-%H-%M-%S",time.localtime())))
266
+
267
+ preds = []
268
+ for i,data in enumerate(dataset):
269
+ x,a,y = warp.processing_single(data)
270
+ if args.inference_mode == 'fewshot':
271
+ x = '这是一个例子:根据案情描述和已有步骤仅给出一个推理。如果是结论则直接输出<e></e>,例如<e>盗窃罪</e>。如果是步骤则直接输出<p></p>,例如<p>步骤1:…</p>\n案情描述:2013年下半年至2015年10月26日,被告人张和菊利用担任山东泰开电力建设工程有限公司、山东泰开国际工程技术有限公司现金出纳的职务便利,多次将公司的资金共计4472572.91元挪出,用于其在深圳石油化工交易所、天津渤海商品交易所的投资交易,已全部亏损。2015年10月26日,张和菊从公司提取现金26万元后,携款潜逃至济南市长清区租房处藏匿。2015年11月6日,张和菊被公安机关抓获。\n已有推理步骤:\n<e>挪用资金罪</e>\n这是问题:\n根据案情描述和已有步骤仅给出一个推理。如果是结论则直接输出<e></e>,例如<e>盗窃罪</e>。如果是步骤则直接输出<p></p>,例如<p>步骤1:…</p>\n案情描述:'+x
272
+ elif args.inference_mode == 'cot':
273
+ x = '一步步思考并回答,' + x
274
+ y = y['crime']
275
+ a = ''
+ y_ = ''  # default so preds has a value even if no <p>/<e> step is generated
276
+ for d in range(tree_config['max_depth']):
277
+ try:
278
+ response = get_response(x,a,tokenizer,model,rewarder=rewarder)
279
+ except Exception as E:
280
+ response = ''
281
+ if '<e>' in response:
282
+ break
283
+ if '<p>' in response:
284
+ a += response
285
+ y_ = response
286
+ preds.append({'x':x,'y':y,'pred':y_})
287
+
288
+ if i % args.logging_steps == 0:
289
+ logger.info('{x}'.format(x=str({'x':x,'y':y,'pred':y_})))
290
+
291
+ logger.info('eval: save...')
292
+ with open(save_path,'w') as file:
293
+ for l in preds:
294
+ line = json.dumps(l,ensure_ascii=False)
295
+ file.write(line)
296
+ file.write('\n')
297
+ time_end = time.time()
298
+ logger.info('reward_eval : {x} '.format(x=args.reward_model_path))
299
+ logger.info('save_eval : {x} '.format(x=save_path))
300
+ logger.info('running time: {x}'.format(x=time_end-time_start))
301
+ logger.info('eval: fin')
302
+
303
+ def sample(args,warp):
304
+
305
+ # load dataset
306
+ logger.info('load dataset..')
307
+ dataset = load_data(args)
308
+
309
+ logger.info('start sampling..')
310
+ samples = {}
311
+ for iter in range(args.iteration):
312
+ logger.info('iter_{x}'.format(x=iter))
313
+
314
+ train_samples = []
315
+
316
+ sample_path = 'branch{b}_deep{d}_budget{g}_iter{i}'.format(b=tree_config['branch'],d=tree_config['max_depth'],g=args.budget,i=iter)
317
+ if args.sample_path != '' and sample_path in os.listdir(args.sample_path):
318
+ sample_path = os.path.join(args.sample_path,sample_path)
319
+ train_samples += load_samples(sample_path)
320
+
321
+ else:
322
+ save_path = os.path.join(args.output_path,'data',
323
+ 'branch{b}_deep{d}_budget{g}_iter{i}'.format(b=tree_config['branch'],d=tree_config['max_depth'],g=args.budget,i=iter),
324
+ )
325
+ os.makedirs(save_path, exist_ok=True)
326
+ for i,sample in enumerate(dataset['train']):
327
+
328
+ time_start = time.time()
329
+
330
+ tree_of_sample = Tree(sample=sample,warp=warp)
331
+ tree_of_sample.monte_carlo_tree_search(budget=args.budget,reward_funcation=args.reward_funcation)
332
+ #train_samples += tree_of_sample.sample(attribute='positive')
333
+
334
+ save_path = os.path.join(args.output_path,'data',
335
+ 'branch{b}_deep{d}_budget{g}_iter{i}'.format(b=tree_config['branch'],d=tree_config['max_depth'],g=args.budget,i=iter),
336
+ 'samples' + time.strftime("-%Y-%m-%d-%H:%M:%S", time.localtime()) + '.json')
337
+ train_samples += tree_of_sample.save(path=save_path)
338
+
339
+ time_end = time.time()
340
+ logger.info('\nrunning time: {x}'.format(x=time_end-time_start))
341
+ _example = tree_of_sample.root.x[:50] if len(tree_of_sample.root.x) > 50 else tree_of_sample.root.x
342
+ logger.info('{i}-th sample: {x}'.format(i=i,x=_example))
343
+
344
+ train_samples = Dataset.from_list(train_samples)
345
+ samples[str(iter)] = train_samples
346
+
347
+ return samples
348
+
349
+
350
+ def run(args):
351
+
352
+ # create framework
353
+ if args.dataset in ['ljp']:
354
+ warp = WarpLJP(args=args)
355
+
356
+ warp.load_generate_model()
357
+
358
+ # load training data
359
+ #
360
+
361
+ # data collection
362
+ if args.do_train:
363
+ trainsets = sample(args=args,warp=warp)
364
+ #raise ValueError
365
+ train(args,warp,trainsets)
366
+ # test
367
+ if args.do_test:
368
+ datasets = load_data(args)
369
+ evaluate(args,warp,datasets['test'])
370
+
371
+
372
+
373
+ if __name__ == "__main__":
374
+ args = get_args()
375
+ logger = get_logger(args.output_path)
376
+
377
+ loginfo = '\n'.join(['{k}: {v}'.format(k=k,v=v) for k,v in vars(args).items()])
378
+ logger.info(loginfo)
379
+
380
+ try:
381
+ run(args)
382
+ except Exception as E:
383
+ logger.exception('{x}'.format(x=E))
code/model/constrainedmodel.py ADDED
@@ -0,0 +1,16 @@
1
+ from transformers import DataCollatorForSeq2Seq,LogitsProcessor,LogitsProcessorList, AutoModelForCausalLM
2
+ import torch
3
+
4
+ class ConstrainedQwenModel(AutoModelForCausalLM):
5
+ def generate(self, *args, **kwargs):
6
+ if "logits_processor" not in kwargs:
7
+ kwargs["logits_processor"] = LogitsProcessorList()
8
+
9
+ kwargs["logits_processor"].append(
10
+ LogitsProcessor(self.tokenizer)
11
+ )
12
+
13
+ return super().generate(*args, **kwargs)
14
+
15
+ if __name__ == '__main__':
16
+ pass
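The wrapper's intent — ensure a `logits_processor` list exists, then append a constraint processor before delegating to the parent's `generate` — can be sketched without the transformers dependency. All class names below are illustrative stand-ins, not the library API:

```python
class BaseGenerator:
    """Stand-in for the parent model class."""
    def generate(self, **kwargs):
        # report which processors would run, instead of decoding tokens
        return [type(p).__name__ for p in kwargs.get("logits_processor", [])]

class TagProcessor:
    """Stand-in for a concrete LogitsProcessor subclass."""

class ConstrainedGenerator(BaseGenerator):
    def generate(self, **kwargs):
        # create the list if the caller passed none, then append the
        # constraint processor -- mirroring ConstrainedQwenModel.generate
        kwargs.setdefault("logits_processor", [])
        kwargs["logits_processor"].append(TagProcessor())
        return super().generate(**kwargs)
```

Called with no arguments, `ConstrainedGenerator().generate()` reports that only the injected `TagProcessor` would run.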
code/model/logitsprocessor.py ADDED
@@ -0,0 +1,132 @@
+ from transformers import LogitsProcessor, LogitsProcessorList, AutoTokenizer, AutoModelForCausalLM
+ import torch
+ 
+ from tree.asts import ASC, Node
+ 
+ 
+ class OutputControlLogitsProcessor(LogitsProcessor):
+     def __init__(self, tokenizer, ast):
+         self.tokenizer = tokenizer
+         self.ast = ast
+         # the last token ids of '<e>' trigger constrained decoding
+         self.trigger = tokenizer.encode('<e>')[-3:]
+ 
+     def _get_valid_token_ids(self):
+         valid_tokens = self.ast.return_next_token()
+         valid_ids = []
+ 
+         # no constraint available: allow the full vocabulary
+         if len(valid_tokens) == 0:
+             return list(range(len(self.tokenizer.get_vocab())))
+ 
+         for token in valid_tokens:
+             token_id = self.tokenizer.convert_tokens_to_ids(token)
+             if token_id != self.tokenizer.unk_token_id:
+                 valid_ids.append(token_id)
+ 
+         return valid_ids
+ 
+     def __call__(self, input_ids, scores):
+         if input_ids[0, -3:].tolist() != self.trigger:
+             return scores
+ 
+         last_token = input_ids[0, -1].item()
+         last_token = self.tokenizer.decode([last_token], skip_special_tokens=False)
+ 
+         self.ast.update_state(last_token)
+         current_ids = self._get_valid_token_ids()
+ 
+         # additive mask: 0 for tokens allowed by the syntax tree, -1e10 otherwise
+         mask = torch.full_like(scores, -1e10)
+         mask[:, current_ids] = 0
+ 
+         return scores + mask
+ 
+ 
+ class RewardControlLogitsProcessor(LogitsProcessor):
+     def __init__(self, tokenizer, open_tag='<v>', close_tag='</v>', lower=0, upper=100):
+         self.tokenizer = tokenizer
+ 
+         open_tag = tokenizer.encode(open_tag, add_special_tokens=False)
+         close_tag = tokenizer.encode(close_tag, add_special_tokens=False)
+         # the two permitted labels: '接受' (accept) and '拒绝' (reject)
+         label_tokens = [tokenizer.encode(label, add_special_tokens=False) for label in ['接受', '拒绝']]
+ 
+         self.tag_tokens = {"open_tag": open_tag, "close_tag": close_tag, "label": label_tokens}
+         self.get_tag_chain()
+ 
+     def get_tag_chain(self):
+         # build a token-id chain: <v> -> label tokens -> </v>
+         asc = ASC()
+         current_node = asc.root
+         for t in self.tag_tokens['open_tag']:
+             node = Node(t)
+             current_node.children[t] = node
+             current_node = node
+         open_end = current_node
+ 
+         end_node = None
+         end = None
+         for i, t in enumerate(self.tag_tokens['close_tag']):
+             if i == 0:
+                 end_node = Node(t)
+                 _end_node = end_node
+                 end = t  # first token id of the closing tag
+             else:
+                 node = Node(t)
+                 _end_node.children[t] = node
+                 _end_node = node
+             if i == len(self.tag_tokens['close_tag']) - 1:
+                 node = Node('')
+                 node.end = True
+                 _end_node.children[t] = node
+ 
+         for t in self.tag_tokens['label']:
+             current_node = open_end  # each label chains off the end of <v>
+             for c in t:
+                 node = Node(c)
+                 current_node.children[c] = node
+                 current_node = node
+             current_node.children[end] = end_node
+ 
+         self.asc = asc
+ 
+     def __call__(self, input_ids, scores):
+         if self.asc.current_node.end:
+             return scores
+ 
+         last_token = input_ids[0, -1].item()
+         self.asc.update_state(last_token)
+ 
+         mask = torch.full_like(scores, -1e8)
+         allowed = list(self.asc.current_node.children.keys())
+         mask[:, allowed] = 0
+ 
+         return scores + mask
+ 
+ 
+ if __name__ == '__main__':
+     from tree.asts import AST
+ 
+     # load the model and tokenizer
+     model_name = ''
+     tokenizer = AutoTokenizer.from_pretrained(model_name)
+     model = AutoModelForCausalLM.from_pretrained(model_name)
+ 
+     # build the syntax tree (AST takes the path and the tokenizer)
+     syntax_tree = AST('./codekey_proofread.txt', tokenizer)
+ 
+     # create the logits processor (tokenizer first, then the tree)
+     logits_processor = OutputControlLogitsProcessor(tokenizer, syntax_tree)
+ 
+     input_text = "eqwmdsadas乱码"
+     input_ids = tokenizer.encode(input_text, return_tensors="pt")
+ 
+     output = model.generate(
+         input_ids,
+         max_length=50,
+         do_sample=True,
+         num_return_sequences=1,
+         logits_processor=[logits_processor],
+         pad_token_id=tokenizer.eos_token_id
+     )
+ 
+     decoded_output = tokenizer.decode(output[0], skip_special_tokens=False)
+     print(decoded_output)
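Both processors above rely on the same masking trick: rather than editing probabilities, they add 0 to the logits of allowed token ids and a very large negative number to everything else, so the subsequent softmax gives disallowed tokens essentially zero mass. A torch-free sketch of that arithmetic, with a toy 4-token vocabulary and hypothetical scores:

```python
import math

def mask_logits(scores, allowed_ids, neg=-1e10):
    # additive mask: 0 for allowed ids, a large negative value otherwise
    return [s + (0.0 if i in allowed_ids else neg) for i, s in enumerate(scores)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5, 3.0]
probs = softmax(mask_logits(scores, allowed_ids={1, 2}))
assert probs[0] < 1e-9 and probs[3] < 1e-9   # disallowed ids get ~0 mass
assert abs(sum(probs) - 1.0) < 1e-9          # still a valid distribution
assert probs[1] > probs[2]                   # relative order preserved among allowed ids
```

Adding the mask instead of overwriting scores keeps the relative logits of the allowed tokens intact, which is why sampling still works after filtering.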
code/test.sh ADDED
@@ -0,0 +1,7 @@
+ #!/bin/bash
+ 
+ CUDA_VISIBLE_DEVICES=0 python main.py \
+     --test_path='' \
+     --do_test
code/train.sh ADDED
@@ -0,0 +1,11 @@
+ #!/bin/bash
+ 
+ CUDA_VISIBLE_DEVICES=0 python main.py \
+     --train_path='' \
+     --eval_path='' \
+     --test_path='' \
+     --reward_model_path='' \
+     --add_reward \
+     --do_train
code/tree/asts.py ADDED
@@ -0,0 +1,85 @@
+ from transformers import AutoTokenizer
+ 
+ import random
+ 
+ 
+ class Node():
+     def __init__(self, x):
+         self.x = x
+         self.children = {}
+         self.end = False
+         if x == '罪':
+             self.end = True
+ 
+ 
+ class AST():
+     def __init__(self, path, tokenizer):
+         self.root = Node('<e>')
+         self.current_node = self.root
+         self.end_token = '</e>'
+ 
+         crimes = []
+         with open(path, 'r') as f:
+             for l in f.readlines():
+                 l = l.strip().split(',')
+                 crimes.append(l[1])
+ 
+         # build a prefix tree (trie) over the tokenized crime names
+         for crime in crimes:
+             node = self.root
+             crime = tokenizer.tokenize(crime)
+             for char in crime:
+                 if char not in node.children:
+                     node.children[char] = Node(char)
+                 node = node.children[char]
+             node.children['</e>'] = Node('</e>')
+ 
+     def update_state(self, token):
+         if token in self.current_node.children:
+             self.current_node = self.current_node.children[token]
+         elif token == self.end_token:
+             self.current_node = self.root  # return to the root
+         elif token == "<e>":
+             self.current_node = self.root  # start over
+         else:
+             pass
+ 
+     def return_next_token(self):
+         if self.current_node.x == '<e>':
+             return set(o.x for o in self.current_node.children.values())
+         if self.current_node.end:
+             return set([self.end_token])
+         if self.current_node.x == '</e>':
+             return set()
+         return set(o.x for o in self.current_node.children.values())
+ 
+ 
+ class ASC():
+     def __init__(self, root='<v>', leaf='</v>'):
+         self.root = Node(root)
+         self.current_node = self.root
+         self.end_token = leaf
+ 
+     def update_state(self, token):
+         if token in self.current_node.children:
+             self.current_node = self.current_node.children[token]
+         elif token == '<v>':
+             # fall back to a random child of the current node
+             token = random.choice(list(self.current_node.children.keys()))
+             self.current_node = self.current_node.children[token]
+         else:
+             pass
+ 
+     def return_next_token(self):
+         if self.current_node.x == '<v>':
+             return set(o.x for o in self.current_node.children.values())
+         if self.current_node.end:
+             return set([self.end_token])
+         if self.current_node.x == '</v>':
+             return set()
+         return set(o.x for o in self.current_node.children.values())
+ 
+ 
+ if __name__ == '__main__':
+     tokenizer = AutoTokenizer.from_pretrained('', trust_remote_code=True)
+     ast = AST('./codekey_proofread.txt', tokenizer)
+     print('-')
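The AST class above is a trie over the charge names from `codekey_proofread.txt`: each decoded token walks one edge, and `return_next_token` exposes the legal continuations. A dependency-free sketch of the same walk logic (hypothetical `TrieNode`/`build_trie` names, characters instead of tokenizer tokens) behaves as follows:

```python
class TrieNode:
    def __init__(self, ch):
        self.ch = ch
        self.children = {}
        self.end = (ch == '罪')  # charge names end with this character

def build_trie(words):
    # insert every word character-by-character, closing each with '</e>'
    root = TrieNode('<e>')
    for word in words:
        node = root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode(ch))
        node.children['</e>'] = TrieNode('</e>')
    return root

def allowed_next(node, end_token='</e>'):
    # mirror AST.return_next_token: after a terminal char, only </e> is legal
    if node.end:
        return {end_token}
    return {c.ch for c in node.children.values()}

root = build_trie(['放火罪', '失火罪'])
node = root
assert allowed_next(node) == {'放', '失'}
node = node.children['放']
assert allowed_next(node) == {'火'}
node = node.children['火'].children['罪']
assert allowed_next(node) == {'</e>'}
```

Constraining generation then reduces to intersecting the model's vocabulary with `allowed_next` at each step, which is exactly what `OutputControlLogitsProcessor` does with the real tokenizer.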
code/tree/base.py ADDED
@@ -0,0 +1,466 @@
+ import random
+ import math
+ import re
+ import json
+ import torch
+ from utils.model_generate import generate_response, generate_score
+ from configs.hyperparametric import Tree_config
+ 
+ 
+ class Node():
+     def __init__(self, x, depth=None, prob=0):
+         # structure
+         self.parent = None
+         self.children = []
+         self.depth = depth
+         self.visited = 0
+         self.V = 0
+ 
+         # data
+         self.x = x
+         self.prob = prob
+ 
+         # attribute
+         self.isReasoning = x.startswith('<p>')
+         self.isTerminal = '<e>' in x
+         self.isRoot = not (self.isReasoning or self.isTerminal)
+ 
+     def add_value(self, V):
+         self.V = self.V + V
+ 
+     def add_parent(self, x):
+         assert self.parent is None
+         self.parent = x
+ 
+     def add_children(self, x):
+         if x not in self.children:
+             self.children.append(x)
+             if x.parent is None:
+                 x.add_parent(self)
+ 
+     def _trace_of_reasoning(self):
+         if self.isRoot:
+             return self.x
+         elif self.parent is not None:
+             x = self.parent._trace_of_reasoning()
+             return '{x}<trace_sp>{a}'.format(x=x, a=self.x)
+ 
+     def trace_of_reasoning(self, none_token='<p>无</p>'):
+         trace = self._trace_of_reasoning().split('<trace_sp>')
+         if self.isRoot:
+             x, a = trace[0], [none_token]
+         else:
+             x, a = trace[0], trace[1:]
+         return x, a
+ 
+     def pedigree(self):
+         r = []
+         node = self
+         while True:
+             r.append(node)
+             if node.parent:
+                 node = node.parent
+             else:
+                 break
+         return r
+ 
+ 
+ class Tree():
+     def __init__(self, sample, warp):
+         config = Tree_config()
+ 
+         # structure
+         self.max_depth = config.max_depth
+         self.branch = config.branch
+ 
+         # reward
+         self.reward_funcation = None
+         self.reward_upper = None
+         self.reward_lower = None
+         self.alpha_restmcts = 0.5
+ 
+         # tool
+         self.warp = warp
+ 
+         # sign
+         self.none_token = self.warp.none_token
+ 
+         # init
+         x, a, y = self.warp.processing_single(sample)
+ 
+         self.root = Node(x, depth=0)
+         self.y = y['crime']
+ 
+         if a[0] not in ['<p>无</p>', '<p>none</p>']:
+             current_node = self.root
+             for _a in a:
+                 child = Node(x=_a, depth=current_node.depth + 1)
+                 current_node.add_children(child)
+                 current_node = child
+ 
+     def isFullyExpanded(self, node):
+         return len(node.children) >= self.branch or node.isTerminal
+ 
+     def set_reward_funcation(self, reward_funcation):
+         # all current reward functions share the same bounds
+         self.reward_funcation = reward_funcation
+         self.reward_upper = 1
+         self.reward_lower = 0
+ 
+     def _bestchild_ucb(self, node):
+         if isinstance(node, list):
+             children = node
+             node = children[0].parent
+         else:
+             children = node.children
+ 
+         bestValue = -float('inf')
+         bestNodes = []
+         for child in children:
+             # UCB1: exploitation term plus exploration bonus; unvisited children get +1
+             nodeValue = child.V + 0.7 * math.sqrt(
+                 2 * math.log(node.visited) / child.visited) if child.visited > 0 else child.V + 1
+             if nodeValue > bestValue:
+                 bestValue = nodeValue
+                 bestNodes = [child]
+             elif nodeValue == bestValue:
+                 bestNodes.append(child)
+         return random.choice(bestNodes)
+ 
+     def select_bestchild(self, node):
+         depth = node.depth
+         current_node = node
+         while len(current_node.children) != 0 and depth < self.max_depth:
+             current_node = self._bestchild_ucb(current_node)
+             depth += 1
+ 
+         # True means the node still needs expansion
+         return not current_node.isTerminal, current_node
+ 
+     def select_child(self, node):
+         depth = node.depth
+         current_node = node
+         while len(current_node.children) != 0 and depth < self.max_depth:
+             unchoice = [c for c in current_node.children if c.visited == 0]
+             if len(unchoice) > 0:
+                 current_node = self._bestchild_ucb(unchoice)
+             else:
+                 current_node = self._bestchild_ucb(current_node)
+             depth += 1
+ 
+         return not current_node.isTerminal and depth < self.max_depth, current_node
+ 
+     def expand(self, node):
+         x, a = node.trace_of_reasoning()
+         a = '\n'.join(a)
+         inputs = self.warp.prompt_to_crime(x=x, a=a,
+                                            bos=self.warp.generate_bos,
+                                            eos=self.warp.generate_eos)
+ 
+         actions = []
+         if len(node.children) > 0:
+             for child in node.children:
+                 actions.append({'a': child.x, 'p': child.prob})
+ 
+         n = 0
+         while n < self.branch:
+             action = '<p> </p>'
+             for _ in range(3):  # up to three attempts per branch
+                 try:
+                     response = generate_response(inputs, self.warp.generate_model,
+                                                  self.warp.generate_tokenizer,
+                                                  logits_processor=self.warp.logits_processor)
+                     action = self.warp.step_from_response(response['response'])
+                     prob = response['prob'].to('cpu')
+ 
+                     if action != '</none_response>':
+                         actions.append({'a': action, 'p': prob})
+                     break
+                 except Exception:
+                     action = '<p> </p>'
+                     continue
+             n += 1
+ 
+         for action in actions:
+             child = Node(x=action['a'], depth=node.depth + 1, prob=action['p'])
+             node.add_children(child)
+ 
+         return node
+ 
+     def greedyt_policy(self, node):
+         simulation_distance = self.max_depth - node.depth
+         depth = 0
+         current_choice = node
+         best_choice = None
+ 
+         while current_choice.isTerminal == False and depth < simulation_distance:
+             x, a = current_choice.trace_of_reasoning()
+             a = '\n'.join(a)
+             inputs = self.warp.prompt_to_crime(x=x, a=a,
+                                                bos=self.warp.generate_bos,
+                                                eos=self.warp.generate_eos)
+             actions = []
+             n = 0
+             while n < self.branch:
+                 action = ''
+                 prob = torch.tensor([1], dtype=torch.float32)
+                 for _ in range(3):
+                     try:
+                         response = generate_response(inputs, self.warp.generate_model,
+                                                      self.warp.generate_tokenizer,
+                                                      logits_processor=self.warp.logits_processor)
+                         action, prob = response['response'], response['prob'].to('cpu')
+                         action = self.warp.step_from_response(action)
+                     except Exception:
+                         action = ''
+                         continue
+                     break
+                 if action != '</none_response>':
+                     actions.append({'a': action, 'p': prob})
+                 n += 1
+             if len(actions) == 0:
+                 actions.append({'a': '<e></e>', 'p': prob})
+ 
+             best_choice = None
+             best_value = self.reward_lower
+             for action in actions:
+                 # children hang off the current simulation node, one level deeper
+                 child = Node(action['a'], prob=action['p'], depth=current_choice.depth + 1)
+                 current_choice.add_children(child)
+                 if self.reward_funcation == 'reward':
+                     r = self._reward_self_llm(child)
+                 elif self.reward_funcation == 'leaf':
+                     r = self._reward_leaf(child)
+                 else:
+                     r = self._reward_random(child)
+                 if r > best_value:
+                     best_choice = (child, r)
+                     best_value = r
+                 child.V = r
+ 
+             current_choice = best_choice[0] if best_choice else random.choice(current_choice.children)
+             depth += 1
+ 
+         if best_choice:
+             reward = best_choice[1]
+         else:
+             if self.reward_funcation == 'reward':
+                 reward = self._reward_self_llm(current_choice)
+             elif self.reward_funcation == 'leaf':
+                 reward = self._reward_leaf(current_choice)
+             else:
+                 reward = self._reward_random(current_choice)
+ 
+         return reward
+ 
+     def _back_single(self, node):
+         node.visited += 1
+         child_Vs = [child.V * child.visited for child in node.children]
+         total_num_visits = sum([child.visited for child in node.children])
+         if total_num_visits > 0:
+             node.V = sum(child_Vs) / total_num_visits
+ 
+     def back_propagate(self, node, reward):
+         if self.reward_funcation == 'reward':
+             node.V = self.alpha_restmcts * node.V + (1 - self.alpha_restmcts) * reward
+             node.visited += 1
+             node = node.parent
+         elif self.reward_funcation == 'leaf':
+             node.V = reward
+             node.visited += 1
+             node = node.parent
+ 
+         while node is not None:
+             self._back_single(node)
+             node = node.parent
+ 
+     def _reward_leaf(self, node):
+         reward = 0
+         ylist = self.y.split(';')
+         for y in ylist:
+             if re.search(y, node.x):
+                 return 1
+             if re.search('<e>.*{x}.*</e>'.format(x=y[3:-4]), node.x):
+                 return 1
+ 
+         return reward
+ 
+     def _reward_random(self, node):
+         return random.uniform(-1, 1)
+ 
+     def _reward_self_llm(self, node):
+         x, a = node.trace_of_reasoning()
+         a = '\n'.join(a)
+         inputs = self.warp.prompt_to_value(x=x, a=a,
+                                            bos=self.warp.generate_bos,
+                                            eos=self.warp.generate_eos)
+         score = 0.0
+         for _ in range(3):
+             try:
+                 score = generate_score(inputs, self.warp.generate_model, self.warp.generate_tokenizer)
+                 score = self.warp.value_from_response(score)
+             except Exception:
+                 score = 0.0
+                 continue
+             if score:
+                 break
+         return score
+ 
+     def _reward_extra_llm(self, node):
+         x, a = node.trace_of_reasoning()
+         a = '\n'.join(a)
+         inputs = self.warp.prompt_to_value(x=x, a=a,
+                                            bos=self.warp.generate_bos,
+                                            eos=self.warp.generate_eos)
+         score = 0.0
+         for _ in range(3):
+             try:
+                 score = generate_score(inputs, self.reward_model, self.reward_tokenizer)
+                 score = self.warp.value_from_response(score)
+             except Exception:
+                 score = 0.0
+                 continue
+             if score:
+                 break
+         return score
+ 
+     def monte_carlo_tree_search(self, root=None, budget=10, reward_funcation='random'):
+         if not root:
+             root = self.root
+         if reward_funcation in ['random', 'reward', 'leaf']:
+             self.set_reward_funcation(reward_funcation)
+ 
+         iteration = 0
+         while iteration < budget:
+             # selection and expansion
+             need_expand, node = self.select_child(root)
+             if need_expand:
+                 node = self.expand(node)
+                 _, node = self.select_child(node)
+ 
+             # simulation and backpropagation
+             V = self.greedyt_policy(node)
+             self.back_propagate(node, V)
+ 
+             iteration += 1
+ 
+     def sample(self, root=None, attribute='positive'):
+         if root is None:
+             root = self.root
+         if attribute == 'positive':
+             _, node = self.select_bestchild(root)
+         elif attribute == 'negative':
+             _, node = self.select_bestchild(root)
+             # step off the best path onto a random sibling, then descend randomly
+             rdm = random.choice(node.pedigree()[:-1])
+             rdm = [_ for _ in rdm.parent.children if _.x != rdm.x]
+             node = random.choice(rdm) if len(rdm) > 0 else node
+             while len(node.children) > 0:
+                 node = random.choice(node.children)
+         else:
+             node = root
+             while len(node.children) > 0:
+                 node = random.choice(node.children)
+ 
+         train_samples = []
+         trace = node.pedigree()
+         for i, o in enumerate(trace):
+             if i == len(trace) - 1:
+                 break
+             text = '\n'.join(reversed([_.x for _ in trace[i:-1]]))
+             text = self.warp.prompt_to_value(x=trace[-1].x, a=text,
+                                              bos=self.warp.generate_bos,
+                                              eos=self.warp.generate_eos)
+             reward = o.V
+             prob = o.prob
+             i_sample = I_policy(o)
+             i_all = self.branch
+             train_samples.append({'text': text, 'reward': reward, 'I-all': i_all,
+                                   'I-sample': i_sample, 'prob': prob, 'visited': o.visited, 'y': self.y})
+ 
+         return train_samples
+ 
+     def save(self, path=None):
+         root = self.root
+         leaves = []
+ 
+         def dfs(node):
+             if node not in leaves:
+                 leaves.append(node)
+             if len(node.children) == 0:
+                 return
+             else:
+                 for child in node.children:
+                     dfs(child)
+ 
+         for child in root.children:
+             dfs(child)
+ 
+         samples = []
+         for leaf in leaves:
+             x, a = leaf.trace_of_reasoning()
+             text = '\n'.join(a)
+             text = self.warp.prompt_to_value(x=x, a=text,
+                                              bos=self.warp.generate_bos,
+                                              eos=self.warp.generate_eos)
+             reward = leaf.V
+             prob = leaf.prob.item()
+             i_sample = I_policy(leaf)
+             i_all = self.branch
+             samples.append({'text': text, 'reward': reward, 'I-all': i_all,
+                             'I-sample': i_sample, 'prob': prob, 'visited': leaf.visited, 'y': self.y})
+ 
+         if path:
+             with open(path, 'w') as f:
+                 for sample in samples:
+                     sample = json.dumps(sample, ensure_ascii=False)
+                     f.write(sample)
+                     f.write('\n')
+ 
+         return samples
+ 
+     def inference(self, root=None):
+         if root is None:
+             root = self.root
+         _, node = self.select_bestchild(root)
+ 
+         if re.search('<e>.*</e>', node.x):
+             y_ = re.search('<e>.*</e>', node.x).group(0)[3:-4]
+         elif re.search('<p>.*</p>', node.x):
+             y_ = re.search('<p>.*</p>', node.x).group(0)[3:-4]
+         else:
+             y_ = 'none'
+ 
+         return y_
+ 
+ 
+ def I_policy(node):
+     x = node.x
+     i_sample = len([o for o in node.parent.children if x == o.x])
+     return i_sample
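The child selection in `_bestchild_ucb` above is standard UCB1: a node's value plus an exploration bonus that grows when a child has been visited rarely relative to its parent. A self-contained sketch of the same formula with hypothetical visit counts and values:

```python
import math
import random

def ucb_select(parent_visits, children, c=0.7):
    # children: list of (value, visits); unvisited children get a fixed +1 bonus
    best_value, best = -float('inf'), []
    for idx, (v, n) in enumerate(children):
        score = v + c * math.sqrt(2 * math.log(parent_visits) / n) if n > 0 else v + 1
        if score > best_value:
            best_value, best = score, [idx]
        elif score == best_value:
            best.append(idx)
    return random.choice(best)  # break ties uniformly at random

# an unvisited child (bonus +1) beats a well-explored mediocre one
assert ucb_select(10, [(0.5, 5), (0.3, 0)]) == 1
# with all children visited, the exploration bonus favors the rarely tried child
assert ucb_select(100, [(0.5, 90), (0.45, 2)]) == 1
```

The constant `c=0.7` mirrors the hard-coded exploration weight in `_bestchild_ucb`; raising it biases the search toward less-visited branches.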
code/utils/dataset.py ADDED
@@ -0,0 +1,92 @@
+ from transformers import DataCollatorForLanguageModeling
+ import torch
+ from torch.nn.utils.rnn import pad_sequence
+ 
+ import re
+ 
+ from utils.warp import Warp
+ from configs.hyperparametric import Reward_config
+ 
+ config = Reward_config().to_dict()
+ template_to_qwen = Warp.template_to_qwen
+ 
+ # Example of a rendered prompt:
+ # 'system\nYou are a helpful assistant.<|end▁of▁sentence|>\n<|begin▁of▁sentence|>user\n根据案情描述对给出的已有推理步骤在0至100内打分并在<v></v>中给出分数,例如<v>1</v>\n案情描述:2012年3月份一天,被告人于某生伙同司某、张某戊、张某丁(均另案处理)窜至河南省郑州市,经杜某(另案处理)联系,由于某生出资从马某(另案处理)手中购买500万假银行承兑汇票一张(出票人为江门市林源贸易有限公司,出票日期为2012年2月28日,到期日为2012年8月28日,票号为4020005120580925,收款人为中国矿产有限责任公司,付款行为江门市区农村信用合作联社)。2012年4月16日,被告人于某生将此银行承兑汇票以淄博亚豪博业建陶有限公司名义转让给莱芜市光昊工贸有限公司,获得赃款4765000元。\n已有推理步骤:\n<p>根据案情描述,被告人于某生伙同他人购买并销售假银行承兑汇票,用于谋取巨额利益,符合票据诈骗罪。</p>\n:<|end▁of▁sentence|>\n<|begin▁of▁sentence|>assistant\n'
+ 
+ 
+ def sample_for_feature(feature):
+     # '接受' = accept, '拒绝' = reject
+     if feature['reward'] == 1:
+         return '<v>接受</v><|im_end|>'
+     if feature['reward'] - feature['I-sample'] / feature['I-all'] > 0:
+         return '<v>接受</v><|im_end|>'
+     return '<v>拒绝</v><|im_end|>'
+ 
+ 
+ class DataCollatorForReward(DataCollatorForLanguageModeling):
+     def __init__(self, tokenizer):
+         self.tokenizer = tokenizer
+ 
+     def __call__(self, examples):
+         text = [example['text'] for example in examples]
+         # switch the prompt from 0-100 scoring to an accept/reject choice
+         text = [re.sub('已有推理步骤在0至100内打分并在<v></v>中给出分数,例如<v>1</v>',
+                        '已有推理步骤选择接受或拒绝,并在<v></v>中给出选择,例如<v>接受</v>', t) for t in text]
+ 
+         labels = [sample_for_feature(example) for example in examples]
+ 
+         # tokenize the prompt only; the label tokens are appended below so that
+         # they are not counted twice in input_ids
+         batch_input = self.tokenizer(
+             text,
+             truncation=config['truncation'],
+             max_length=config['max_length'],
+             return_tensors=None,
+             add_special_tokens=True
+         )
+ 
+         input_ids, attention_mask = batch_input['input_ids'], batch_input['attention_mask']
+         # right-pad every prompt to the longest prompt in the batch
+         padding = [max([len(i) for i in input_ids]) - len(ids) for ids in input_ids]
+         input_ids = [ids + [self.tokenizer.pad_token_id] * padding[i] for i, ids in enumerate(input_ids)]
+         attention_mask = [ids + [0] * padding[i] for i, ids in enumerate(attention_mask)]
+ 
+         label_ids, label_mask = [], []
+         for i, label in enumerate(labels):
+             label = self.tokenizer.encode(label, add_special_tokens=False)
+             # loss is only computed on the label tokens (-100 masks the prompt)
+             label_ids.append([-100] * len(input_ids[i]) + label)
+             label_mask.append([0] * len(input_ids[i]) + [1] * len(label))
+             input_ids[i] += label
+             attention_mask[i] += [1] * len(label)
+ 
+         input_ids = pad_sequence(
+             [torch.tensor(ids) for ids in input_ids],
+             batch_first=True,
+             padding_value=self.tokenizer.pad_token_id,
+         )
+         attention_mask = pad_sequence(
+             [torch.tensor(mask) for mask in attention_mask],
+             batch_first=True,
+             padding_value=0,
+         )
+         label_ids = pad_sequence(
+             [torch.tensor(l) for l in label_ids],
+             batch_first=True,
+             padding_value=-100,
+         )
+         label_mask = pad_sequence(
+             [torch.tensor(mask) for mask in label_mask],
+             batch_first=True,
+             padding_value=0,
+         )
+ 
+         batch = {'input_ids': input_ids, 'attention_mask': attention_mask,
+                  'labels': label_ids, 'label_mask': label_mask}
+ 
+         extra_features = [{k: v for k, v in example.items() if k not in ['text', 'y']}
+                           for example in examples]
+ 
+         if extra_features and len(extra_features[0]) > 0:
+             for key in extra_features[0].keys():
+                 batch[key] = torch.tensor([f[key] for f in extra_features], dtype=torch.float32)
+ 
+         if len(batch['input_ids'].shape) > 2:
+             batch['input_ids'] = batch['input_ids'].squeeze(0)
+         if len(batch['attention_mask'].shape) > 2:
+             batch['attention_mask'] = batch['attention_mask'].squeeze(0)
+ 
+         return batch
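The collator's key alignment step is building `labels` the same length as `input_ids`, with `-100` over the prompt so cross-entropy fires only on the answer tokens. A dependency-free sketch of that construction and the batch padding, with toy integer ids (hypothetical values):

```python
IGNORE_INDEX = -100  # positions with this value are skipped by cross-entropy
PAD_ID = 0

def build_labels(prompt_ids, answer_ids):
    # input is prompt + answer; labels mask the prompt with IGNORE_INDEX
    input_ids = prompt_ids + answer_ids
    labels = [IGNORE_INDEX] * len(prompt_ids) + answer_ids
    attention_mask = [1] * len(input_ids)
    return input_ids, labels, attention_mask

def pad_batch(rows, pad_value):
    # right-pad every row to the longest row in the batch
    width = max(len(r) for r in rows)
    return [r + [pad_value] * (width - len(r)) for r in rows]

a = build_labels([11, 12, 13], [21, 22])
b = build_labels([11, 14], [23])
input_ids = pad_batch([a[0], b[0]], PAD_ID)
labels = pad_batch([a[1], b[1]], IGNORE_INDEX)

assert input_ids == [[11, 12, 13, 21, 22], [11, 14, 23, 0, 0]]
assert labels == [[-100, -100, -100, 21, 22], [-100, -100, 23, -100, -100]]
```

Padding labels with `-100` rather than the pad id matters: pad positions would otherwise contribute spurious loss terms.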
code/utils/loss.py ADDED
@@ -0,0 +1,63 @@
+ import torch
+ from torch import nn
+ import re
+ 
+ from configs.hyperparametric import Reward_config
+ 
+ config = Reward_config().to_dict()
+ 
+ 
+ class FormatGradientMasker:
+     def __init__(self, tokenizer, pattern=r"<a>\d</a>"):
+         self.tokenizer = tokenizer
+         self.pattern = re.compile(pattern)
+         self.format_token_ids = self._get_format_tokens()
+ 
+     def _get_format_tokens(self):
+         tokens = set()
+         lower, upper = config['lower'], config['upper']
+         for i in range(lower, upper):
+             text = '{s}{_}{e}'.format(s=config['open_tag'], _=i, e=config['close_tag'])
+             token_ids = self.tokenizer.encode(text, add_special_tokens=False)
+             tokens.update(token_ids)
+         return tokens
+ 
+     def create_mask(self, input_ids):
+         mask = torch.zeros_like(input_ids, dtype=torch.float32)
+         text = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
+ 
+         for match in self.pattern.finditer(text):
+             match_text = match.group(0)
+             match_tokens = self.tokenizer.encode(match_text, add_special_tokens=False)
+ 
+             # locate the matched span inside input_ids
+             for i in range(len(input_ids[0]) - len(match_tokens) + 1):
+                 if torch.all(input_ids[0, i:i + len(match_tokens)] == torch.tensor(match_tokens).to(input_ids.device)):
+                     mask[0, i:i + len(match_tokens)] = 1
+ 
+         return mask.bool()
+ 
+ 
+ class FormatAwareLoss(nn.Module):
+     def __init__(self, tokenizer):
+         super().__init__()
+         self.tokenizer = tokenizer
+         self.ce_loss = nn.CrossEntropyLoss(reduction='none')
+         self.masker = FormatGradientMasker(tokenizer)
+ 
+     def forward(self, logits, labels):
+         shift_logits = logits[..., :-1, :].contiguous()
+         shift_labels = labels[..., 1:].contiguous()
+ 
+         losses = self.ce_loss(
+             shift_logits.view(-1, shift_logits.size(-1)),
+             shift_labels.view(-1)
+         ).view(shift_labels.shape)
+ 
+         mask = self.masker.create_mask(labels[:, :-1])
+ 
+         # keep only the loss inside the formatted span
+         masked_losses = losses * mask.float()
+ 
+         return masked_losses.sum() / (mask.sum() + 1e-8)
code/utils/matrix.py ADDED
@@ -0,0 +1,310 @@
 
+ import os
+ import json
+ import re
+ import collections
+ from sklearn.metrics import f1_score, accuracy_score
+ 
+ 
+ def get_f1(samples):
+     gold, pred = [], []
+     for sample in samples:
+         if sample['gold'] in sample['pred']:
+             gold.append(sample['gold'])
+             pred.append(sample['gold'])
+         else:
+             gold.append(sample['gold'])
+             pred.append(sample['pred'])
+ 
+     micro_f1 = f1_score(gold, pred, average='micro')
+     macro_f1 = f1_score(gold, pred, average='macro')
+ 
+     print(f"Micro-F1: {micro_f1:.4f}")
+     print(f"Macro-F1: {macro_f1:.4f}")
+     return micro_f1, macro_f1
+ 
+ 
+ def get_cls_acc(samples):
+     gold, pred = [], []
+     for sample in samples:
+         if sample['gold'] in sample['pred']:
+             gold.append(sample['gold'])
+             pred.append(sample['gold'])
+         else:
+             gold.append(sample['gold'])
+             pred.append(sample['pred'])
+ 
+     # group predictions by gold class
+     pool = {}
+     for g, p in zip(gold, pred):
+         if g not in pool:
+             pool[g] = {'g': [], 'p': []}
+         pool[g]['g'].append(g)
+         pool[g]['p'].append(p)
+ 
+     result = {}
+     counter = {}
+     for k, v in pool.items():
+         g, p = v['g'], v['p']
+         acc = accuracy_score(g, p)
+         result[k] = acc
+         counter[k] = len(g)
+ 
+     result = sorted([(k, v, counter[k]) for k, v in result.items()], key=lambda x: x[2], reverse=True)
+ 
+     import pandas as pd
+     prob = [r[1] for r in result]
+     number = [r[2] for r in result]
+     crime = [r[0] for r in result]
+     data = {'crime': crime, 'prob': prob, 'number': number}
+     df = pd.DataFrame(data)
+     df.to_excel('./output/eval/lawbench.xlsx', index=False)
+ 
+     for r in result:
+         k, v = r[0], r[1]
+         print('{x} : {y}'.format(x=k, y=v))
+     print(accuracy_score(gold, pred))
+ 
+ 
+ def get_bias_with_acc(samples):
+ 
+     def load_result(path):
+         samples = []
+         samples_with_tag = []
+         with open(path, 'r') as f:
+             for l in f.readlines():
+                 line = json.loads(l)
+                 y = line['y']
+                 y_ = line['pred'] if line['pred'] else ''
+ 
+                 y = [s[3:-4] for s in y.split(';')]
+                 for c in y:
+                     samples.append({'gold': c, 'pred': y_[3:-4]})
+                     if re.search('<e>.*</e>', y_):
+                         samples_with_tag.append({'gold': c, 'pred': y_})
+         return samples
+ 
+     def get_acc_in_cls(samples):
+         gold, pred = [], []
+         for sample in samples:
+             if sample['gold'] in sample['pred']:
+                 gold.append(sample['gold'])
+                 pred.append(sample['gold'])
+             else:
+                 gold.append(sample['gold'])
+                 pred.append(sample['pred'])
+         pool = {}
+         for g, p in zip(gold, pred):
+             if g not in pool:
+                 pool[g] = {'g': [], 'p': []}
+             pool[g]['g'].append(g)
+             pool[g]['p'].append(p)
+ 
+         result = {}
+         counter = {}
+         for k, v in pool.items():
+             g, p = v['g'], v['p']
+             acc = accuracy_score(g, p)
+             result[k] = acc
+             counter[k] = len(g)
+ 
+         import numpy
+         # third field: log relative frequency of the class
+         result = sorted([(k, v, numpy.log(counter[k] / len(gold))) for k, v in result.items()],
+                         key=lambda x: x[2], reverse=True)
+ 
+         return result
+ 
+     zeroshot = load_result('')
+     oneshot = load_result('')
+     cot = load_result('')
+     our = load_result('')
+ 
+     zeroshot = get_acc_in_cls(zeroshot)
+     oneshot = get_acc_in_cls(oneshot)
+     cot = get_acc_in_cls(cot)
+     our = get_acc_in_cls(our)
+ 
+     crimes = {c[0]: c[-1] for c in zeroshot}
+ 
+     crimes_pool = []
+     log = []
+     acc_zero = []
+     acc_one = []
+     acc_cot = []
+     acc_our = []
+     for c, crime in crimes.items():
+         crimes_pool.append(c)
+         log.append(crime)
+         for i, s in enumerate(zeroshot):
+             if c in s[0]:
+                 acc_zero.append(s[1])
+                 break
+             if i == len(zeroshot) and c not in s[0]:
+                 raise ValueError
+         for i, s in enumerate(oneshot):
+             if c in s[0]:
+                 acc_one.append(s[1])
+                 break
+             if i == len(oneshot) and c not in s[0]:
+                 raise ValueError
+         if len(acc_one) < len(acc_zero):
+             acc_one.append(0)
+         for i, s in enumerate(cot):
+             if c in s[0]:
+                 acc_cot.append(s[1])
+                 break
+             if i == len(cot) and c not in s[0]:
+                 raise ValueError
+         for i, s in enumerate(our):
+             if c in s[0]:
+                 acc_our.append(s[1])
+                 break
+             if i == len(our) and c not in s[0]:
+                 raise ValueError
+ 
+     import pandas as pd
+     idx = [i for i in range(len(crimes_pool))]
+     data = {'crime': crimes_pool, 'prob': log, 'zs': acc_zero, 'os': acc_one,
+             'ct': acc_cot, 'our': acc_our, 'id': idx}
+     df = pd.DataFrame(data)
+     df.to_excel('', index=False)
+ 
+ 
+ def get_effect_with_f1():
+ 
+     def load_result(path):
+         samples = []
+         samples_with_tag = []
+         with open(path, 'r') as f:
+             for l in f.readlines():
+                 line = json.loads(l)
+                 y = line['y']
+                 y_ = line['pred'] if line['pred'] else ''
+ 
+                 y = [s[3:-4] for s in y.split(';')]
+                 for c in y:
+                     samples.append({'gold': c, 'pred': y_[3:-4]})
+                     if re.search('<e>.*</e>', y_):
+                         samples_with_tag.append({'gold': c, 'pred': y_})
+         return samples, samples_with_tag
+ 
+     dzeroshot, dzeroshottag = load_result('')
+     dzeromi, dzeroma = get_f1(dzeroshot)
+     d_zeromi, d_zeroma = get_f1(dzeroshottag)
+ 
+     doneshot, doneshottag = load_result('')
+     donemi, donema = get_f1(doneshot)
+     d_onemi, d_onema = get_f1(doneshottag)
+ 
+     dcotshot, dcotshottag = load_result('')
+     dcotmi, dcotma = get_f1(dcotshot)
+     d_cotmi, d_cotma = get_f1(dcotshottag)
+ 
+     dourshot, dourshottag = load_result('')
+     dourmi, dourma = get_f1(dourshot)
+     d_ourmi, d_ourma = get_f1(dourshottag)
+ 
+     zeroshot, zeroshottag = load_result('')
+     zeromi, zeroma = get_f1(zeroshot)
+     _zeromi, _zeroma = get_f1(zeroshottag)
+ 
+     oneshot, oneshottag = load_result('')
+     onemi, onema = get_f1(oneshot)
+     _onemi, _onema = get_f1(oneshottag)
+ 
+     cotshot, cotshottag = load_result('')
+     cotmi, cotma = get_f1(cotshot)
+     _cotmi, _cotma = get_f1(cotshottag)
+ 
+     ourshot, ourshottag = load_result('')
+     ourmi, ourma = get_f1(ourshot)
+     _ourmi, _ourma = get_f1(ourshottag)
+ 
+     # micro-F1 effects, normalized by the zero-shot gap
+     acc_zero_effect = (dzeromi - zeromi) / (dzeromi - zeromi)
+     acc_one_effect = (donemi - zeromi) / (dzeromi - zeromi)
+     acc_cot_effect = (dcotmi - zeromi) / (dzeromi - zeromi)
+     acc_our_effect = (dourmi - zeromi) / (dzeromi - zeromi)
+     # macro-F1 effects
+     acc_zero_effect = (dzeroma - zeroma) / (dzeroma - zeroma)
229
+ acc_one_effect = (donema - zeroma) / (dzeroma - zeroma)
230
+ acc_cot_effect = (dcotma - zeroma) / (dzeroma - zeroma)
231
+ acc_our_effect = (dourma - zeroma) / (dzeroma - zeroma)
232
+
233
+ print(acc_zero_effect)
234
+ print(acc_one_effect)
235
+ print(acc_cot_effect)
236
+ print(acc_our_effect)
237
+
238
+ def crime_topk_acc(samples,crimes=None,name='deepseek'):
239
+ gold,pred = [],[]
240
+ for sample in samples:
241
+ if sample['gold'] in sample['pred']:
242
+ gold.append(sample['gold'])
243
+ pred.append(sample['gold'])
244
+ else:
245
+ gold.append(sample['gold'])
246
+ pred.append(sample['pred'])
247
+
248
+ pool = {}
249
+ for g,p in zip(gold,pred):
250
+ if g not in pool.keys():
251
+ pool[g] = {'g':[],'p':[]}
252
+ pool[g]['g'].append(g)
253
+ pool[g]['p'].append(p)
254
+
255
+ if crimes is None:
256
+ crimes = {k:i for i,k in enumerate(pool.keys())}
257
+
258
+ #counter = {}
259
+ result = {}
260
+ for k,v in pool.items():
261
+ counter = collections.Counter()
262
+ g,p = v['g'],v['p']
263
+ for pi in p:
264
+ counter[pi] += 1
265
+ for c in crimes:
266
+ if c not in counter.keys():
267
+ counter[c] = 0
268
+ result[k] = [counter[c] for c in crimes]
269
+
270
+ import pandas as pd
271
+ df = pd.DataFrame(result)
272
+ df.to_excel('./output/eval/warm_acc_{x}.xlsx'.format(x=name),index=False)
273
+
274
+ return result,crimes
275
+
276
+
277
+ def main():
278
+ path = ''
279
+
280
+ def load_data(path):
281
+ crimes_counter = collections.Counter()
282
+ samples = []
283
+ samples_with_tag = []
284
+ with open(path,'r') as f:
285
+ for l in f.readlines():
286
+ line = json.loads(l)
287
+ y = line['y']
288
+ y_ = line['pred'] if line['pred'] else ''
289
+
290
+ y = [s[3:-4] for s in y.split(';')]
291
+ for c in y:
292
+ crimes_counter[c] += 1
293
+ samples.append({'gold':c,'pred':y_[3:-4]})
294
+ if re.search('<e>.*</e>',y_):
295
+ samples_with_tag.append({'gold':c,'pred':y_})
296
+ return samples,samples_with_tag
297
+
298
+ samples,samples_with_tag = load_data(path)
299
+ print('/'.join(path.split('/')[-2:]))
300
+ get_f1(samples)
301
+ print('-'*30)
302
+ get_f1(samples_with_tag)
303
+ print('-'*30)
304
+ get_cls_acc(samples)
305
+ print('-'*30)
306
+ get_bias_with_acc(samples)
307
+
308
+
309
+ if __name__ =='__main__':
310
+ main()
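The grouping used by `get_acc_in_cls` and `crime_topk_acc` above can be sketched in isolation: bucket predictions by gold label, then score each bucket. `per_class_accuracy` and the labels below are hypothetical illustrations, not part of the repo.

```python
# Standalone sketch of the per-class pooling above:
# collect predictions per gold label, then score each label's bucket.
def per_class_accuracy(gold, pred):
    pool = {}
    for g, p in zip(gold, pred):
        pool.setdefault(g, []).append(p)
    return {g: sum(p == g for p in preds) / len(preds)
            for g, preds in pool.items()}

# hypothetical labels for illustration only
gold = ['theft', 'theft', 'fraud', 'fraud', 'fraud']
pred = ['theft', 'fraud', 'fraud', 'fraud', 'theft']
acc = per_class_accuracy(gold, pred)  # theft -> 0.5, fraud -> 2/3
```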
code/utils/model_generate.py ADDED
@@ -0,0 +1,166 @@
+ import re
+ import logging
+ import torch
+ import torch.nn.functional as F
+ from configs.hyperparametric import Generate_config,Reward_config
+ from utils.warp import Warp
+ from transformers.generation import GenerateDecoderOnlyOutput
+
+
+ logger = logging.getLogger(__name__)
+
+ assistant_from_template_in_response = Warp.assistant_from_template_in_response
+ config = Generate_config().to_dict()
+ reward_config = Reward_config().to_dict()
+
+ def extra_span_from_tokens(tokens,sign='e'):
+     # locate the token span enclosed by <sign>...</sign>; a tag may be split
+     # across up to three tokens, hence the 3-token sliding window
+     start,end = -1,-1
+     for i in range(len(tokens)-3):
+         token = ''.join(tokens[i:i+3])
+         if '<{x}>'.format(x=sign) in token:
+             start = i
+             break
+     for i in range(len(tokens)-3):
+         token = ''.join(tokens[i:i+3])
+         if '</{x}>'.format(x=sign) in token:
+             end = i+3
+             break
+     if start < end and 0 <= start < len(tokens) and 0 < end < len(tokens):
+         return start, end
+     else:
+         return -1, -1
+
+
+ def generate_response(inputs,model,tokenizer,logits_processor=None):
+     ## max_length=2048, truncation=True, max_new_tokens=1024, temperature=0.7, do_sample=False
+
+     data = tokenizer.encode_plus(inputs,
+                                  max_length=config['max_length'],
+                                  truncation=config['truncation'],
+                                  return_tensors='pt')
+
+     input_ids = data['input_ids'].to('cuda')
+     attention_mask = data['attention_mask'].to('cuda')
+     #bos_token,eos_token = tokenizer.bos_token, tokenizer.eos_token
+
+     if logits_processor:
+         output = model.generate(input_ids, attention_mask=attention_mask,
+                                 do_sample=config['do_sample'],
+                                 max_new_tokens=config['max_new_tokens'],
+                                 temperature=config['temperature'],
+                                 output_hidden_states=config['output_hidden_states'],
+                                 return_dict_in_generate=config['return_dict_in_generate'],
+                                 output_logits=config['output_logits'],
+                                 logits_processor=[logits_processor],
+                                 bos_token_id=tokenizer.bos_token_id,
+                                 eos_token_id=tokenizer.eos_token_id,
+                                 pad_token_id=tokenizer.eos_token_id,
+                                 )
+     else:
+         output = model.generate(input_ids, attention_mask=attention_mask,
+                                 do_sample=config['do_sample'],
+                                 max_new_tokens=config['max_new_tokens'],
+                                 temperature=config['temperature'],
+                                 output_hidden_states=config['output_hidden_states'],
+                                 return_dict_in_generate=config['return_dict_in_generate'],
+                                 output_logits=config['output_logits'],
+                                 bos_token_id=tokenizer.bos_token_id,
+                                 eos_token_id=tokenizer.eos_token_id,
+                                 pad_token_id=tokenizer.eos_token_id,
+                                 )
+     if isinstance(output,GenerateDecoderOnlyOutput):
+         #ori_string = tokenizer.decode(output.sequences[0], skip_special_tokens=False)
+         logits = output.logits
+         #prob = [F.softmax(lgt,dim=-1) for lgt in output.logits]
+         generated_ids = torch.stack([torch.argmax(logit, dim=-1) for logit in logits], dim=1)
+         generated_tokens = [tokenizer.decode(t,skip_special_tokens=True) for t in generated_ids[0]]
+
+         # extract probs of the target tokens: prefer the <p>...</p> span,
+         # fall back to <e>...</e>, else use the whole generation
+         # (extra_span_from_tokens returns -1 when no span is found)
+         start,end = extra_span_from_tokens(generated_tokens,'p')
+         if start >= 0:
+             logits_thought = [logits[_] for _ in range(start,end)]
+         else:
+             start,end = extra_span_from_tokens(generated_tokens,'e')
+             if start >= 0:
+                 logits_thought = [logits[_] for _ in range(start,end)]
+             else:
+                 logits_thought = [logit for logit in logits]
+
+         prob_thought = [F.softmax(logit,dim=-1) for logit in logits_thought]
+         prob_thought = [torch.amax(logit,dim=-1) for logit in prob_thought]
+         prob_thought = torch.stack(prob_thought).mean(0)
+
+     else:
+         logger.info('GenerateDecoderOnlyOutput error')
+         raise Exception
+
+     return {'response':''.join(generated_tokens),'prob':prob_thought}
+
+ def generate_string(inputs,model,tokenizer,logits_processor=None):
+     ## max_length=2048, truncation=True, max_new_tokens=1024, temperature=0.7, do_sample=False
+
+     data = tokenizer.encode_plus(inputs,
+                                  max_length=config['max_length'],
+                                  truncation=config['truncation'],
+                                  return_tensors='pt')
+
+     input_ids = data['input_ids'].to('cuda')
+     attention_mask = data['attention_mask'].to('cuda')
+     bos_token,eos_token = tokenizer.bos_token, tokenizer.eos_token
+
+     if logits_processor:
+         output = model.generate(input_ids, attention_mask=attention_mask,
+                                 do_sample=config['do_sample'],
+                                 max_new_tokens=config['max_new_tokens'],
+                                 temperature=config['temperature'],
+                                 logits_processor=[logits_processor],
+                                 bos_token_id=tokenizer.bos_token_id,
+                                 eos_token_id=tokenizer.eos_token_id,
+                                 pad_token_id=tokenizer.eos_token_id)
+     else:
+         output = model.generate(input_ids, attention_mask=attention_mask,
+                                 do_sample=config['do_sample'],
+                                 max_new_tokens=config['max_new_tokens'],
+                                 temperature=config['temperature'],
+                                 bos_token_id=tokenizer.bos_token_id,
+                                 eos_token_id=tokenizer.eos_token_id,
+                                 pad_token_id=tokenizer.eos_token_id)
+     ori_string = tokenizer.decode(output[0], skip_special_tokens=False)
+     response = assistant_from_template_in_response(x=ori_string,bos=bos_token,eos=eos_token)
+     return response
+
+ def generate_score(inputs,model,tokenizer,reward_processor=None):
+     ## max_length=2048, truncation=True, max_new_tokens=1024, temperature=0.7, do_sample=False
+
+     data = tokenizer.encode_plus(inputs,
+                                  max_length=config['max_length'],
+                                  truncation=config['truncation'],
+                                  return_tensors='pt')
+
+     input_ids = data['input_ids'].to('cuda')
+     attention_mask = data['attention_mask'].to('cuda')
+     bos_token,eos_token = tokenizer.bos_token, tokenizer.eos_token
+
+     if reward_processor:
+         output = model.generate(input_ids, attention_mask=attention_mask,
+                                 do_sample=reward_config['do_sample'],
+                                 max_new_tokens=reward_config['max_new_tokens'],
+                                 temperature=reward_config['temperature'],
+                                 logits_processor=[reward_processor],
+                                 bos_token_id=tokenizer.bos_token_id,
+                                 eos_token_id=tokenizer.eos_token_id,
+                                 pad_token_id=tokenizer.eos_token_id)
+     else:
+         output = model.generate(input_ids, attention_mask=attention_mask,
+                                 do_sample=reward_config['do_sample'],
+                                 max_new_tokens=reward_config['max_new_tokens'],
+                                 temperature=reward_config['temperature'],
+                                 bos_token_id=tokenizer.bos_token_id,
+                                 eos_token_id=tokenizer.eos_token_id,
+                                 pad_token_id=tokenizer.eos_token_id)
+     ori_string = tokenizer.decode(output[0], skip_special_tokens=False)
+     #response = assistant_from_template_in_response(x=ori_string,bos=bos_token,eos=eos_token)
+     response = ori_string
+     return response
+
code/utils/trainer.py ADDED
@@ -0,0 +1,61 @@
+ import torch
+ import torch.nn as nn
+ from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments, LogitsProcessor
+ from configs.hyperparametric import Reward_config,Tree_config
+ from utils.warp import Warp
+ from utils.model_generate import extra_span_from_tokens
+ #from model.logitsprocessor import RewardControlLogitsProcessor
+
+ config = Reward_config().to_dict()
+
+ def reinforced_loss(W,V,I_a,I_s,p,lower=1e-3,upper=1-1e-3):
+     # importance-corrected probability of the sampled reasoning path
+     prob = I_a / I_s * p
+     #c = W - V
+     # clamp the confidence term into (lower, upper) to keep the log finite
+     W = torch.where(W > lower, W, lower)
+     W = torch.where(W < upper, W, upper)
+
+     return - torch.log(torch.mean(W * prob))
+
+
+ class PRGTrainer(Trainer):
+     def __init__(self,tokenizer,logits_processor=None,**kwargs):
+         super(PRGTrainer,self).__init__(**kwargs)
+         self.loss_tokenizer = tokenizer
+         self.logits_processor = logits_processor if logits_processor else LogitsProcessor
+         self.ce = nn.CrossEntropyLoss(ignore_index=-100)
+
+         self.open_tag = tokenizer.encode('<v>',add_special_tokens=False)
+         self.close_tag = tokenizer.encode('</v>',add_special_tokens=False)
+
+         self.state = 'ref'  ## 'ref' or 'sft'
+
+     def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
+         V,I_a,I_s,prob = inputs.pop('reward'),inputs.pop('I-all'),inputs.pop('I-sample'),inputs.pop('prob')
+         input_ids,attention_mask = inputs.pop('input_ids'),inputs.pop('attention_mask')
+         labels,label_mask = inputs.pop('labels'),inputs.pop('label_mask')
+
+         outputs = model(input_ids=input_ids,
+                         attention_mask=attention_mask,
+                         return_dict=config['return_dict_in_generate'],
+                         )
+         logits = outputs.logits
+         probs = torch.softmax(logits,dim=-1)
+
+         shift_logits = logits[..., :-1, :].contiguous()
+         shift_probs = probs[..., :-1, :].contiguous()
+         shift_labels = labels[..., 1:].contiguous()
+         shift_label_mask = label_mask[..., 1:].contiguous()
+
+         # keep only the positions covered by the label mask
+         batch_label_prob = shift_probs * shift_label_mask.unsqueeze(-1)
+         shift_label_mask = (batch_label_prob != 0).any(dim=-1).any(dim=0)
+         shift_logits = shift_logits[:,shift_label_mask,:]
+         shift_probs = shift_probs[:,shift_label_mask,:]
+         shift_labels = shift_labels[:,shift_label_mask]
+
+         # W: per-sample mean of the max token probability (model confidence)
+         W = torch.max(shift_probs,dim=-1)[0]
+         W = torch.mean(W,dim=-1)
+
+         # CrossEntropyLoss expects raw logits, not softmax probabilities
+         loss_ce = self.ce(shift_logits.view(-1, shift_logits.size(-1)),shift_labels.view(-1))
+         loss_fb = reinforced_loss(W=W,V=V,I_a=I_a,I_s=I_s,p=prob)
+
+         loss = loss_ce + loss_fb
+
+         return (loss, outputs) if return_outputs else loss
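The arithmetic inside `reinforced_loss` above can be checked with plain scalars. `reinforced_loss_scalar` and all input values below are hypothetical; the unused `V` argument is omitted here.

```python
import math

# Scalar re-derivation of the reinforced loss: scale the sampled-step
# probability p by the importance ratio I_a / I_s, clamp each confidence w
# into (1e-3, 1 - 1e-3), then take -log of the mean of w * prob.
def reinforced_loss_scalar(W, I_a, I_s, p, lower=1e-3, upper=1 - 1e-3):
    prob = I_a / I_s * p
    W = [min(max(w, lower), upper) for w in W]
    return -math.log(sum(w * prob for w in W) / len(W))

loss = reinforced_loss_scalar(W=[0.9, 0.8], I_a=2.0, I_s=4.0, p=0.5)
# prob = 0.25; mean = 0.25 * (0.9 + 0.8) / 2 = 0.2125; loss = -ln(0.2125)
```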
code/utils/warp.py ADDED
@@ -0,0 +1,148 @@
+ import torch
+ import re
+ from configs.hyperparametric import Reward_config
+
+ from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments
+ from model.logitsprocessor import OutputControlLogitsProcessor,RewardControlLogitsProcessor
+ from tree.asts import AST
+
+ #reward_config = Reward_config()
+
+ def find_subarray_positions(large_array, small_array):
+     # return every start index at which small_array occurs in large_array
+     n = len(small_array)
+     result = []
+
+     for i in range(len(large_array) - n + 1):
+         if large_array[i:i+n] == small_array:
+             result.append(i)
+
+     return result
+
+ class Warp():
+     def __init__(self,args):
+         self.args = args
+         self.generate_tokenizer = None
+         self.generate_model = None
+         self.logits_processor = None
+         self.reward_tokenizer = None
+         self.reward_model = None
+         self.reward_processor = None
+
+     def load_generate_model(self):
+         args = self.args
+         self.generate_tokenizer = AutoTokenizer.from_pretrained(args.generate_model_path, trust_remote_code=True)
+         self.generate_model = AutoModelForCausalLM.from_pretrained(args.generate_model_path,trust_remote_code=True).half().cuda()
+
+         if args.logits_control:
+             syntax_tree = AST(path=args.control_file,tokenizer=self.generate_tokenizer)
+             self.logits_processor = OutputControlLogitsProcessor(ast=syntax_tree, tokenizer=self.generate_tokenizer)
+
+         if re.search('<think>',self.generate_tokenizer.chat_template):
+             self.generate_tokenizer.chat_template = re.sub('<think>','',self.generate_tokenizer.chat_template)
+
+         self.none_token = '<p>none</p>'
+         self.generate_bos = self.generate_tokenizer.bos_token
+         self.generate_eos = self.generate_tokenizer.eos_token
+
+     def load_reward_model(self,**kwargs):
+         args = self.args
+         if 'bnb_config' in kwargs:
+             self.reward_model = AutoModelForCausalLM.from_pretrained(args.reward_model_path,
+                                                                      trust_remote_code=True,
+                                                                      device_map='auto',
+                                                                      quantization_config=kwargs['bnb_config'],
+                                                                      )
+         else:
+             self.reward_model = AutoModelForCausalLM.from_pretrained(args.reward_model_path,
+                                                                      trust_remote_code=True
+                                                                      ).half().cuda()
+         self.reward_tokenizer = AutoTokenizer.from_pretrained(args.reward_model_path, trust_remote_code=True)
+
+         if args.logits_control:
+             self.reward_processor = RewardControlLogitsProcessor(tokenizer=self.generate_tokenizer)
+
+         if re.search('<think>',self.reward_tokenizer.chat_template):
+             self.reward_tokenizer.chat_template = re.sub('<think>','',self.reward_tokenizer.chat_template)
+
+     @staticmethod
+     def template_to_qwen(x,bos='<|im_start|>',eos='<|im_end|>'):
+         # the leading bos token is prepended by the tokenizer during encoding
+         return """system\n{system}{eos}\n{bos}user\n{query}{eos}\n{bos}assistant\n""".format(
+             bos=bos,
+             eos=eos,
+             system="You are a helpful assistant.",
+             query=x
+         )
+
+     @staticmethod
+     def assistant_from_template_in_response(x,bos='<|im_start|>',eos='<|im_end|>'):
+         processed_string = x.split(bos)[3].strip()
+         processed_string = processed_string.split(eos)[0].strip()
+         processed_string = re.sub('^assistant','',processed_string).strip()
+         return processed_string
+
+     @staticmethod
+     def step_from_response(x):
+         step = '</none_response>'
+         if '</think>' in x:
+             x = x.split('</think>')[-1].strip()
+         if '<p>' in x:
+             x_ = re.search('<p>.+</p>',x)
+             step = x_.group() if x_ else x
+         if '<e>' in x:
+             x_ = re.search('<e>.+</e>',x)
+             step = x_.group() if x_ else x
+         return step
+
+     @staticmethod
+     def value_from_response(x):
+         if '<v>' in x:
+             x_ = re.search(r'<v>\w+</v>',x)
+             return x_.group()[3:-4] if x_ else ''
+             #return float(x_.group()[3:-4])/100 if x_ else 0
+         return ''
+
+ class WarpLJP(Warp):
+     def __init__(self,args,mode='p'):
+         super().__init__(args)
+         self.mode = mode
+
+     def processing_single(self,x,mode=''):
+         self.none_token = '<p>无</p>'  # '无' = "none" placeholder step
+
+         p = x['Procuratorate']
+         a = []
+         if 'd' in mode:
+             for d in x['Defence'].split('。'):
+                 if len(d.strip()) > 0:
+                     a.append(d)
+         if 'f' in mode:
+             for f in x['Fact'].split('。'):
+                 if len(f.strip()) > 0:
+                     a.append(f)
+
+         crime = [c['charge'] for c in x['Annotations'][0]['annotation']]
+         penalty = [c['penalty'] for c in x['Annotations'][0]['annotation']]
+         imprisonment = [c['imprisonment'] for c in x['Annotations'][0]['annotation']]
+
+         label = {'crime':';'.join(['<e>{c}罪</e>'.format(c=c) for c in crime]),
+                  'penalty':';'.join(['<e>{c}</e>'.format(c=c) for c in penalty]),
+                  'imprisonment':';'.join(['<e>{c}</e>'.format(c=c) for c in imprisonment])}
+
+         if a != []:
+             a = [_ for _ in map(lambda x_:'<p>{i}</p>'.format(i=x_) if not x_.startswith('<p>') else x_,a)]
+             return p,a,label
+         else:
+             return p,['<p>无</p>'],label
+
+     @staticmethod
+     def prompt_to_value(x,a,bos='<|im_start|>',eos='<|im_end|>'):
+         # prompt (zh): given the case description, accept or reject the existing
+         # reasoning step and answer inside <v></v>, e.g. <v>接受</v> ("accept")
+         pmt = '根据案情描述对给出的已有推理步骤选择接受或拒绝,并在<v></v>中给出选择,例如<v>接受</v>\n案情描述:{x}\n已有推理步骤:\n{a}\n:'.format(x=x,a=a)
+         return Warp.template_to_qwen(pmt,bos=bos,eos=eos)
+
+     @staticmethod
+     def prompt_to_crime(x,a,bos='<|im_start|>',eos='<|im_end|>'):
+         # prompt (zh): produce exactly one reasoning move; a conclusion goes in
+         # <e></e> (e.g. <e>盗窃罪</e>, "theft"), an intermediate step in <p></p>
+         pmt = '根据案情描述和已有步骤仅给出一个推理。如果是结论则直接输出<e></e>,例如<e>盗窃罪</e>。如果是步骤则直接输出<p></p>,例如<p>步骤1:…</p>\n案情描述:{x}\n已有推理步骤:\n{a}\n:'.format(x=x,a=a)
+         return Warp.template_to_qwen(pmt,bos=bos,eos=eos)
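The tag parsing in `Warp.step_from_response` and `value_from_response` above reduces to a single non-greedy regex per tag. `extract_tag` is a hypothetical helper for illustration, not part of the repo.

```python
import re

# Minimal sketch of the span extraction used by the Warp response parsers:
# pull the inner text of the first <tag>...</tag> pair, or '' if absent.
def extract_tag(text, tag):
    m = re.search(r'<{t}>(.+?)</{t}>'.format(t=tag), text)
    return m.group(1) if m else ''

step = extract_tag('reasoning: <p>check the facts</p>', 'p')  # 'check the facts'
value = extract_tag('<v>accept</v>', 'v')                     # 'accept'
```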