rendchevi committed on
Commit 91cbba9 · verified · 1 Parent(s): 258da14

End of training

Files changed (4)
  1. README.md +70 -0
  2. all_results.json +7 -0
  3. train_results.json +7 -0
  4. trainer_state.json +524 -0
README.md ADDED
@@ -0,0 +1,70 @@
+ ---
+ license: cc-by-nc-4.0
+ base_model: mental/mental-roberta-base
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: mental-roberta-base-CD_baseline
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # mental-roberta-base-CD_baseline
+
+ This model is a fine-tuned version of [mental/mental-roberta-base](https://huggingface.co/mental/mental-roberta-base) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.2696
+ - Accuracy: 0.5565
+ - F1: 0.5303
+ - Precision: 0.5330
+ - Recall: 0.5565
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 5
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
+ | 1.5964 | 1.0 | 125 | 1.6031 | 0.4043 | 0.3393 | 0.3328 | 0.4043 |
+ | 1.5226 | 2.0 | 250 | 1.4421 | 0.4739 | 0.4077 | 0.3895 | 0.4739 |
+ | 1.1656 | 3.0 | 375 | 1.3132 | 0.5261 | 0.4795 | 0.4490 | 0.5261 |
+ | 1.1095 | 4.0 | 500 | 1.2819 | 0.5565 | 0.5231 | 0.5156 | 0.5565 |
+ | 1.0974 | 5.0 | 625 | 1.2696 | 0.5565 | 0.5303 | 0.5330 | 0.5565 |
+
+
+ ### Framework versions
+
+ - Transformers 4.38.0
+ - Pytorch 2.8.0+cu128
+ - Datasets 4.2.0
+ - Tokenizers 0.15.2
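The `lr_scheduler_type: linear` setting above, together with 625 total steps, implies the learning rates seen in the training log. A minimal sketch of that schedule, assuming zero warmup steps (the card does not state `warmup_steps`):

```python
def linear_lr(step: int, base_lr: float = 2e-5, total_steps: int = 625) -> float:
    """Linearly decay base_lr to 0 over total_steps (assumes no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# The logged rates match this schedule, e.g. 1.968e-05 at step 10
# and 4e-06 at step 500.
print(linear_lr(10))   # 1.968e-05
print(linear_lr(500))  # 4e-06
```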
all_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "epoch": 5.0,
+ "train_loss": 1.3634769760131835,
+ "train_runtime": 225.3467,
+ "train_samples_per_second": 44.287,
+ "train_steps_per_second": 2.774
+ }
train_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "epoch": 5.0,
+ "train_loss": 1.3634769760131835,
+ "train_runtime": 225.3467,
+ "train_samples_per_second": 44.287,
+ "train_steps_per_second": 2.774
+ }
trainer_state.json ADDED
@@ -0,0 +1,524 @@
+ {
+ "best_metric": 1.269640326499939,
+ "best_model_checkpoint": "mental-roberta-base-CD_baseline/checkpoint-625",
+ "epoch": 5.0,
+ "eval_steps": 500,
+ "global_step": 625,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.08,
+ "grad_norm": 2.921304702758789,
+ "learning_rate": 1.968e-05,
+ "loss": 2.3689,
+ "step": 10
+ },
+ {
+ "epoch": 0.16,
+ "grad_norm": 5.469061851501465,
+ "learning_rate": 1.936e-05,
+ "loss": 2.0549,
+ "step": 20
+ },
+ {
+ "epoch": 0.24,
+ "grad_norm": 5.983363628387451,
+ "learning_rate": 1.904e-05,
+ "loss": 1.9011,
+ "step": 30
+ },
+ {
+ "epoch": 0.32,
+ "grad_norm": 3.8824734687805176,
+ "learning_rate": 1.8720000000000004e-05,
+ "loss": 1.9096,
+ "step": 40
+ },
+ {
+ "epoch": 0.4,
+ "grad_norm": 18.308826446533203,
+ "learning_rate": 1.8400000000000003e-05,
+ "loss": 1.669,
+ "step": 50
+ },
+ {
+ "epoch": 0.48,
+ "grad_norm": 5.767693996429443,
+ "learning_rate": 1.8080000000000003e-05,
+ "loss": 1.8629,
+ "step": 60
+ },
+ {
+ "epoch": 0.56,
+ "grad_norm": 2.5474987030029297,
+ "learning_rate": 1.7760000000000003e-05,
+ "loss": 1.7754,
+ "step": 70
+ },
+ {
+ "epoch": 0.64,
+ "grad_norm": 3.5733895301818848,
+ "learning_rate": 1.7440000000000002e-05,
+ "loss": 1.7045,
+ "step": 80
+ },
+ {
+ "epoch": 0.72,
+ "grad_norm": 3.388791799545288,
+ "learning_rate": 1.7120000000000002e-05,
+ "loss": 1.5388,
+ "step": 90
+ },
+ {
+ "epoch": 0.8,
+ "grad_norm": 2.2617416381835938,
+ "learning_rate": 1.6800000000000002e-05,
+ "loss": 1.6174,
+ "step": 100
+ },
+ {
+ "epoch": 0.88,
+ "grad_norm": 2.040916681289673,
+ "learning_rate": 1.648e-05,
+ "loss": 1.7345,
+ "step": 110
+ },
+ {
+ "epoch": 0.96,
+ "grad_norm": 1.7220678329467773,
+ "learning_rate": 1.616e-05,
+ "loss": 1.5964,
+ "step": 120
+ },
+ {
+ "epoch": 1.0,
+ "eval_accuracy": 0.4043478260869565,
+ "eval_f1": 0.33928429317982867,
+ "eval_loss": 1.603085994720459,
+ "eval_precision": 0.33282864804603934,
+ "eval_recall": 0.4043478260869565,
+ "eval_runtime": 1.7794,
+ "eval_samples_per_second": 129.257,
+ "eval_steps_per_second": 8.43,
+ "step": 125
+ },
+ {
+ "epoch": 1.04,
+ "grad_norm": 15.428503036499023,
+ "learning_rate": 1.584e-05,
+ "loss": 1.7193,
+ "step": 130
+ },
+ {
+ "epoch": 1.12,
+ "grad_norm": 2.470576524734497,
+ "learning_rate": 1.552e-05,
+ "loss": 1.5825,
+ "step": 140
+ },
+ {
+ "epoch": 1.2,
+ "grad_norm": 3.9197633266448975,
+ "learning_rate": 1.5200000000000002e-05,
+ "loss": 1.4732,
+ "step": 150
+ },
+ {
+ "epoch": 1.28,
+ "grad_norm": 6.2479400634765625,
+ "learning_rate": 1.4880000000000002e-05,
+ "loss": 1.6381,
+ "step": 160
+ },
+ {
+ "epoch": 1.36,
+ "grad_norm": 4.600492000579834,
+ "learning_rate": 1.4560000000000001e-05,
+ "loss": 1.5587,
+ "step": 170
+ },
+ {
+ "epoch": 1.44,
+ "grad_norm": 6.798262596130371,
+ "learning_rate": 1.4240000000000001e-05,
+ "loss": 1.8529,
+ "step": 180
+ },
+ {
+ "epoch": 1.52,
+ "grad_norm": 4.2226996421813965,
+ "learning_rate": 1.392e-05,
+ "loss": 1.4755,
+ "step": 190
+ },
+ {
+ "epoch": 1.6,
+ "grad_norm": 5.053036212921143,
+ "learning_rate": 1.3600000000000002e-05,
+ "loss": 1.4778,
+ "step": 200
+ },
+ {
+ "epoch": 1.68,
+ "grad_norm": 4.096879959106445,
+ "learning_rate": 1.3280000000000002e-05,
+ "loss": 1.4701,
+ "step": 210
+ },
+ {
+ "epoch": 1.76,
+ "grad_norm": 18.876264572143555,
+ "learning_rate": 1.2960000000000001e-05,
+ "loss": 1.6647,
+ "step": 220
+ },
+ {
+ "epoch": 1.84,
+ "grad_norm": 5.195082664489746,
+ "learning_rate": 1.2640000000000001e-05,
+ "loss": 1.4843,
+ "step": 230
+ },
+ {
+ "epoch": 1.92,
+ "grad_norm": 12.766644477844238,
+ "learning_rate": 1.232e-05,
+ "loss": 1.4018,
+ "step": 240
+ },
+ {
+ "epoch": 2.0,
+ "grad_norm": 7.949278831481934,
+ "learning_rate": 1.2e-05,
+ "loss": 1.5226,
+ "step": 250
+ },
+ {
+ "epoch": 2.0,
+ "eval_accuracy": 0.47391304347826085,
+ "eval_f1": 0.40774923443789374,
+ "eval_loss": 1.4421453475952148,
+ "eval_precision": 0.38951398452357533,
+ "eval_recall": 0.47391304347826085,
+ "eval_runtime": 1.2912,
+ "eval_samples_per_second": 178.127,
+ "eval_steps_per_second": 11.617,
+ "step": 250
+ },
+ {
+ "epoch": 2.08,
+ "grad_norm": 6.321002960205078,
+ "learning_rate": 1.168e-05,
+ "loss": 1.3278,
+ "step": 260
+ },
+ {
+ "epoch": 2.16,
+ "grad_norm": 5.3338093757629395,
+ "learning_rate": 1.136e-05,
+ "loss": 1.2814,
+ "step": 270
+ },
+ {
+ "epoch": 2.24,
+ "grad_norm": 4.96796178817749,
+ "learning_rate": 1.1040000000000001e-05,
+ "loss": 1.4562,
+ "step": 280
+ },
+ {
+ "epoch": 2.32,
+ "grad_norm": 10.834942817687988,
+ "learning_rate": 1.072e-05,
+ "loss": 1.39,
+ "step": 290
+ },
+ {
+ "epoch": 2.4,
+ "grad_norm": 8.158452033996582,
+ "learning_rate": 1.04e-05,
+ "loss": 1.3208,
+ "step": 300
+ },
+ {
+ "epoch": 2.48,
+ "grad_norm": 6.426680088043213,
+ "learning_rate": 1.008e-05,
+ "loss": 1.3216,
+ "step": 310
+ },
+ {
+ "epoch": 2.56,
+ "grad_norm": 7.6382904052734375,
+ "learning_rate": 9.760000000000001e-06,
+ "loss": 1.3789,
+ "step": 320
+ },
+ {
+ "epoch": 2.64,
+ "grad_norm": 7.195258617401123,
+ "learning_rate": 9.440000000000001e-06,
+ "loss": 1.4057,
+ "step": 330
+ },
+ {
+ "epoch": 2.72,
+ "grad_norm": 6.953582763671875,
+ "learning_rate": 9.12e-06,
+ "loss": 1.328,
+ "step": 340
+ },
+ {
+ "epoch": 2.8,
+ "grad_norm": 7.441910743713379,
+ "learning_rate": 8.8e-06,
+ "loss": 1.3708,
+ "step": 350
+ },
+ {
+ "epoch": 2.88,
+ "grad_norm": 10.070989608764648,
+ "learning_rate": 8.48e-06,
+ "loss": 1.288,
+ "step": 360
+ },
+ {
+ "epoch": 2.96,
+ "grad_norm": 7.511960029602051,
+ "learning_rate": 8.16e-06,
+ "loss": 1.1656,
+ "step": 370
+ },
+ {
+ "epoch": 3.0,
+ "eval_accuracy": 0.5260869565217391,
+ "eval_f1": 0.479516001356244,
+ "eval_loss": 1.313248872756958,
+ "eval_precision": 0.448993952375781,
+ "eval_recall": 0.5260869565217391,
+ "eval_runtime": 1.2954,
+ "eval_samples_per_second": 177.546,
+ "eval_steps_per_second": 11.579,
+ "step": 375
+ },
+ {
+ "epoch": 3.04,
+ "grad_norm": 14.974254608154297,
+ "learning_rate": 7.840000000000001e-06,
+ "loss": 1.1554,
+ "step": 380
+ },
+ {
+ "epoch": 3.12,
+ "grad_norm": 6.2412238121032715,
+ "learning_rate": 7.520000000000001e-06,
+ "loss": 1.0172,
+ "step": 390
+ },
+ {
+ "epoch": 3.2,
+ "grad_norm": 8.691516876220703,
+ "learning_rate": 7.2000000000000005e-06,
+ "loss": 1.3717,
+ "step": 400
+ },
+ {
+ "epoch": 3.28,
+ "grad_norm": 8.422670364379883,
+ "learning_rate": 6.88e-06,
+ "loss": 1.187,
+ "step": 410
+ },
+ {
+ "epoch": 3.36,
+ "grad_norm": 6.4102396965026855,
+ "learning_rate": 6.560000000000001e-06,
+ "loss": 1.0076,
+ "step": 420
+ },
+ {
+ "epoch": 3.44,
+ "grad_norm": 8.417737007141113,
+ "learning_rate": 6.24e-06,
+ "loss": 1.1928,
+ "step": 430
+ },
+ {
+ "epoch": 3.52,
+ "grad_norm": 9.579270362854004,
+ "learning_rate": 5.92e-06,
+ "loss": 1.0687,
+ "step": 440
+ },
+ {
+ "epoch": 3.6,
+ "grad_norm": 11.200490951538086,
+ "learning_rate": 5.600000000000001e-06,
+ "loss": 1.11,
+ "step": 450
+ },
+ {
+ "epoch": 3.68,
+ "grad_norm": 9.393120765686035,
+ "learning_rate": 5.28e-06,
+ "loss": 1.1811,
+ "step": 460
+ },
+ {
+ "epoch": 3.76,
+ "grad_norm": 9.762161254882812,
+ "learning_rate": 4.960000000000001e-06,
+ "loss": 1.0488,
+ "step": 470
+ },
+ {
+ "epoch": 3.84,
+ "grad_norm": 10.627289772033691,
+ "learning_rate": 4.6400000000000005e-06,
+ "loss": 1.1537,
+ "step": 480
+ },
+ {
+ "epoch": 3.92,
+ "grad_norm": 13.637901306152344,
+ "learning_rate": 4.32e-06,
+ "loss": 1.032,
+ "step": 490
+ },
+ {
+ "epoch": 4.0,
+ "grad_norm": 13.00786018371582,
+ "learning_rate": 4.000000000000001e-06,
+ "loss": 1.1095,
+ "step": 500
+ },
+ {
+ "epoch": 4.0,
+ "eval_accuracy": 0.5565217391304348,
+ "eval_f1": 0.5231441442614202,
+ "eval_loss": 1.2819163799285889,
+ "eval_precision": 0.5155867764196465,
+ "eval_recall": 0.5565217391304348,
+ "eval_runtime": 1.2834,
+ "eval_samples_per_second": 179.211,
+ "eval_steps_per_second": 11.688,
+ "step": 500
+ },
+ {
+ "epoch": 4.08,
+ "grad_norm": 9.446922302246094,
+ "learning_rate": 3.6800000000000003e-06,
+ "loss": 0.88,
+ "step": 510
+ },
+ {
+ "epoch": 4.16,
+ "grad_norm": 9.645947456359863,
+ "learning_rate": 3.3600000000000004e-06,
+ "loss": 0.9479,
+ "step": 520
+ },
+ {
+ "epoch": 4.24,
+ "grad_norm": 6.6881184577941895,
+ "learning_rate": 3.04e-06,
+ "loss": 1.0034,
+ "step": 530
+ },
+ {
+ "epoch": 4.32,
+ "grad_norm": 13.422988891601562,
+ "learning_rate": 2.7200000000000002e-06,
+ "loss": 1.0929,
+ "step": 540
+ },
+ {
+ "epoch": 4.4,
+ "grad_norm": 9.193038940429688,
+ "learning_rate": 2.4000000000000003e-06,
+ "loss": 0.9017,
+ "step": 550
+ },
+ {
+ "epoch": 4.48,
+ "grad_norm": 8.080782890319824,
+ "learning_rate": 2.08e-06,
+ "loss": 0.9509,
+ "step": 560
+ },
+ {
+ "epoch": 4.56,
+ "grad_norm": 10.08934497833252,
+ "learning_rate": 1.76e-06,
+ "loss": 0.9522,
+ "step": 570
+ },
+ {
+ "epoch": 4.64,
+ "grad_norm": 8.592775344848633,
+ "learning_rate": 1.44e-06,
+ "loss": 0.9903,
+ "step": 580
+ },
+ {
+ "epoch": 4.72,
+ "grad_norm": 10.12938117980957,
+ "learning_rate": 1.12e-06,
+ "loss": 0.9153,
+ "step": 590
+ },
+ {
+ "epoch": 4.8,
+ "grad_norm": 13.647997856140137,
+ "learning_rate": 8.000000000000001e-07,
+ "loss": 1.0831,
+ "step": 600
+ },
+ {
+ "epoch": 4.88,
+ "grad_norm": 15.894301414489746,
+ "learning_rate": 4.800000000000001e-07,
+ "loss": 1.1632,
+ "step": 610
+ },
+ {
+ "epoch": 4.96,
+ "grad_norm": 11.063687324523926,
+ "learning_rate": 1.6e-07,
+ "loss": 1.0974,
+ "step": 620
+ },
+ {
+ "epoch": 5.0,
+ "eval_accuracy": 0.5565217391304348,
+ "eval_f1": 0.5303004793794206,
+ "eval_loss": 1.269640326499939,
+ "eval_precision": 0.532996572682937,
+ "eval_recall": 0.5565217391304348,
+ "eval_runtime": 1.2978,
+ "eval_samples_per_second": 177.219,
+ "eval_steps_per_second": 11.558,
+ "step": 625
+ },
+ {
+ "epoch": 5.0,
+ "step": 625,
+ "total_flos": 1434186246250944.0,
+ "train_loss": 1.3634769760131835,
+ "train_runtime": 225.3467,
+ "train_samples_per_second": 44.287,
+ "train_steps_per_second": 2.774
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 625,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 5,
+ "save_steps": 500,
+ "total_flos": 1434186246250944.0,
+ "train_batch_size": 16,
+ "trial_name": null,
+ "trial_params": null
+ }
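The throughput figures above are internally consistent with 125 optimizer steps per epoch at batch size 16. A rough sanity-check sketch (the training-set size is an inference from runtime × throughput, not stated anywhere in this commit):

```python
import math

# Figures logged in train_results.json / trainer_state.json above.
train_runtime = 225.3467   # seconds
samples_per_second = 44.287
num_epochs = 5
train_batch_size = 16

# runtime * throughput ~= total samples processed; dividing by epochs
# estimates the per-epoch training-set size (an inference -- the model
# card says the dataset is unknown).
est_train_size = train_runtime * samples_per_second / num_epochs
print(round(est_train_size))  # ~1996

# ~1996 samples at batch size 16 matches the 125 steps/epoch in the log
# (first eval fires at step 125, last at global_step 625).
print(math.ceil(round(est_train_size) / train_batch_size))  # 125
```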