{
    "paper_id": "2021",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T03:35:46.006594Z"
    },
    "title": "Related Named Entities Classification in the Economic-Financial Context",
    "authors": [
        {
            "first": "Daniel",
            "middle": [
                "de",
                "los"
            ],
            "last": "Reyes",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Pontifical Catholic University of Rio Grande do Sul",
                "location": {
                    "addrLine": "University of \u00c9vora",
                    "country": "Portugal"
                }
            },
            "email": "daniel.reyes@edu.pucrs.br"
        },
        {
            "first": "Allan",
            "middle": [],
            "last": "Barcelos",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Pontifical Catholic University of Rio Grande do Sul",
                "location": {
                    "addrLine": "University of \u00c9vora",
                    "country": "Portugal"
                }
            },
            "email": ""
        },
        {
            "first": "Renata",
            "middle": [],
            "last": "Vieira",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Pontifical Catholic University of Rio Grande do Sul",
                "location": {
                    "addrLine": "University of \u00c9vora",
                    "country": "Portugal"
                }
            },
            "email": "renatav@uevora.pt"
        },
        {
            "first": "Isabel",
            "middle": [
                "H"
            ],
            "last": "Manssour",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Pontifical Catholic University of Rio Grande do Sul",
                "location": {
                    "addrLine": "University of \u00c9vora",
                    "country": "Portugal"
                }
            },
            "email": "isabel.manssour@pucrs.br"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "The present work uses the Bidirectional Encoder Representations from Transformers (BERT) to process a sentence and its entities and indicate whether two named entities present in a sentence are related or not, constituting a binary classification problem. It was developed for the Portuguese language, considering the financial domain and exploring deep linguistic representations to identify a relation between entities without using other lexical-semantic resources. The results of the experiments show a prediction accuracy of 86%.",
    "pdf_parse": {
        "paper_id": "2021",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "The present work uses the Bidirectional Encoder Representations from Transformers (BERT) to process a sentence and its entities and indicate whether two named entities present in a sentence are related or not, constituting a binary classification problem. It was developed for the Portuguese language, considering the financial domain and exploring deep linguistic representations to identify a relation between entities without using other lexical-semantic resources. The results of the experiments show a prediction accuracy of 86%.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "In the context of the financial market, news brings information about economic sectors, industrial policies, acquisitions and partnerships of companies, among others. The analysis of this data, in the form of financial reports, headlines and corporate announcements, can support personal and corporate economic decision making (Zhou and Zhang, 2018). However, thousands of news items are published every day, and this number continues to increase, which makes it impossible to use and interpret this huge amount of data manually.",
                "cite_spans": [
                    {
                        "start": 327,
                        "end": 349,
                        "text": "(Zhou and Zhang, 2018)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Information Extraction (IE) can provide tools that allow these news items to be monitored faster and with less effort, by automating the extraction and structuring of information. IE is a natural-language-based technology that receives text as input and generates results in a predefined format (Cvita\u0161, 2011). Among the tasks of the IE area, it is possible to highlight both Named Entity Recognition (NER) and Relation Extraction (RE). For example, it is possible to extract that a given organization (first entity) was purchased (relation) by another organization (second entity) (Sarawagi, 2008).",
                "cite_spans": [
                    {
                        "start": 295,
                        "end": 309,
                        "text": "(Cvita\u0161, 2011)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 582,
                        "end": 598,
                        "text": "(Sarawagi, 2008)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "A model based on the BERT language model (Devlin et al., 2018) is proposed to classify whether a sentence containing an entity pair (e1, e2) expresses a relation between the two entities. Leveraging the power of BERT networks, the semantics of the sentence can be captured without enhanced feature selection or other external resources.",
                "cite_spans": [
                    {
                        "start": 41,
                        "end": 62,
                        "text": "(Devlin et al., 2018)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The contribution of this work is an approach for extracting entity relations in Portuguese texts in the financial context.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The remainder of this work is organized as follows. Section 2 presents news processing for the Competitive Intelligence (CI) area. Section 3 presents the related work. Section 4 provides a detailed description of the proposed solution. Section 5 explains the experimental process in detail, followed by Section 6, which shows the relevant experimental results. Finally, Section 7 presents our conclusions, as well as future work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Some of the largest companies in the financial segment have a Competitive Intelligence (CI) sector where information from different sources is strategically analyzed, making it possible to anticipate market trends and to evolve the business relative to its competitors. This sector is usually formed by one or more professionals dedicated specifically to monitoring the movements of the competition. In a time when competitiveness is based on knowledge and innovation, CI allows companies to be proactive. The conclusions obtained through this process allow the company to know whether it really remains competitive and whether its business model is sustainable. CI can provide several advantages to the companies that use it, such as: minimizing surprises from competitors, identifying opportunities and threats, obtaining relevant knowledge for strategic planning, and understanding the repercussions of their actions in the market, among others.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Competitive Intelligence and News Processing",
                "sec_num": "2"
            },
            {
                "text": "The process of capturing information through news still requires a lot of manual effort, as it often depends on a professional responsible for carefully reading numerous news items about organizations to highlight possible market movements, and who also retains this knowledge. It is thus estimated that a system that automatically filters the relations between financial market entities can reduce the effort and time spent on these tasks. Another benefit is that such a system can feed Business Intelligence (BI) systems and thus establish a historical database of market events. In this way, knowledge about market movements can be stored and organized more efficiently.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Competitive Intelligence and News Processing",
                "sec_num": "2"
            },
            {
                "text": "RE is a task that has been the subject of many studies, especially now that information and communication technologies allow the storage and processing of massive data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "3"
            },
            {
                "text": "Zhang (Zhang et al., 2017) proposes to incorporate the position of words and entities into an approach employing combinations of N-grams for extracting relations. Presenting a different methodology to extract the relations, Wu (Wu and He, 2019) proposed to use a pre-trained BERT language model and the entity types for RE on the English language. In order to circumvent the problem of lack of memory for very large sequences in convolutional networks, some authors (Li et al., 2018; Florez et al., 2019; Pandey et al., 2017) have adopted an approach using memory cells for neural networks, Long short-term memory (LSTM). In this sense, Qingqing's Li work (Li et al., 2018) uses a Bidirectional Long Short-Term Memory (Bi-LSTM) network, which are an extension of traditional LSTMs, for its multitasking model, and features a version with attention that considerably improves the results in all tested datasets. Also using Bi-LSTM networks, Florez (Florez et al., 2019) differs from other authors in that it uses types of entities and the words of the entities being considered for a relation in addition to using information such as number of entities and distances, measured by the number of words and phrases between the pair of entities. The entry of the Bi-LSTM layer is concatenation of words and relations, with all words between the candidate entities (included), provided by a pre-trained interpolation layer. Yi (Yi and Hu, 2019) proposes to join a BERT language model and a Bidirectional Gated Recurrent Unit (Bi-GRU) network, which is a version of Bi-LSTM with a lower computational cost. Finally, they train their model based on a pre-trained BERT network, instead of training from the beginning, to speed up coverage. Some works (Qin et al., 2017; GAN et al., 2019; Zhou and Zhang, 2018) use attention mechanisms to improve the performance of their neural network models. Such mechanisms assist in the automatic information filtering step that helps to find the most appropriate sentence section to distinguish named entities. Thus, it is possible that even in a very long sentence, and due to its size being considered complex, the model can capture the context information of each token in the sentence, being able to concentrate more in these terms the weights of influence. Pengda Qin (Qin et al., 2017) proposes a method using Bi-GRU with an attention mechanism that can automatically focus on valuable words, also using the pairs of entities and adding information related to them.",
                "cite_spans": [
                    {
                        "start": 6,
                        "end": 26,
                        "text": "(Zhang et al., 2017)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 227,
                        "end": 244,
                        "text": "(Wu and He, 2019)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 466,
                        "end": 483,
                        "text": "(Li et al., 2018;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 484,
                        "end": 504,
                        "text": "Florez et al., 2019;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 505,
                        "end": 525,
                        "text": "Pandey et al., 2017)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 656,
                        "end": 673,
                        "text": "(Li et al., 2018)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 947,
                        "end": 968,
                        "text": "(Florez et al., 2019)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 1421,
                        "end": 1438,
                        "text": "(Yi and Hu, 2019)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 1742,
                        "end": 1760,
                        "text": "(Qin et al., 2017;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 1761,
                        "end": 1778,
                        "text": "GAN et al., 2019;",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 1779,
                        "end": 1800,
                        "text": "Zhou and Zhang, 2018)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 2302,
                        "end": 2320,
                        "text": "(Qin et al., 2017)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "3"
            },
            {
                "text": "Tao Gan (GAN et al., 2019) also addresses RE with an attention method to capture important parts of the sentence and for that, it uses an LSTM attention network for entities at the subsequent level. In this way, it focuses more on important contextual information between two entities. Zhou (Zhou and Zhang, 2018) also implements a model based on RNN Bi-GRU with an attention mechanism to focus on the most important assumptions of the sentences for the financial market.",
                "cite_spans": [
                    {
                        "start": 4,
                        "end": 26,
                        "text": "Gan (GAN et al., 2019)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 291,
                        "end": 312,
                        "text": "(Zhou and Zhang, 2018",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "3"
            },
            {
                "text": "Despite having great importance, the financial domain, specifically, has been little explored in the literature. The authors in (Zhou and Zhang, 2018) created a corpus by manually collecting 3,000 sentence records from the main news sites, which was used for entity recognition and relation extraction, treating learning and training as a whole.",
                "cite_spans": [
                    {
                        "start": 128,
                        "end": 150,
                        "text": "(Zhou and Zhang, 2018)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "3"
            },
            {
                "text": "Most studies present RE solutions for English texts, and, in this way, it is also possible to identify a larger number of data sets in this language. There are few data sets available in the Portuguese language, such as the Golden Collection HAREM, which is widely used in the literature (Chaves, 2008; Cardoso, 2008; Collovini et al., 2016) . HAREM is a joint assessment event for the Portuguese language, organized by Linguateca (Santos and Cardoso, 2007) . Its objective is to evaluate systems that recognize NE (Santos and Cabral, 2009) . The Golden Collection (GC) is a subset of the HAREM collection, and is used for the task of evaluating systems that deal with Named Entity Recognition.",
                "cite_spans": [
                    {
                        "start": 288,
                        "end": 302,
                        "text": "(Chaves, 2008;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 303,
                        "end": 317,
                        "text": "Cardoso, 2008;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 318,
                        "end": 341,
                        "text": "Collovini et al., 2016)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 431,
                        "end": 457,
                        "text": "(Santos and Cardoso, 2007)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 515,
                        "end": 540,
                        "text": "(Santos and Cabral, 2009)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "3"
            },
            {
                "text": "The lack of this type of resource forces researchers to develop their own research corpus. In most cases, when the classification is supervised, it is necessary to first create a set of sentences and annotate them in order to proceed with the RE task. Besides, the lack of public data sets also makes it difficult to fairly compare related work, and requires more time and effort from the researcher.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "3"
            },
            {
                "text": "It is possible to observe that there are works that discuss the task of extracting relations between NE and that already employ machine learning techniques for this purpose. However, although we found some works for the RE task, few of them are suitable for the Portuguese language, and none of them are related to the financial context. Considering other languages, the work of Zhou (Zhou and Zhang, 2018) was the one that came closest to our goals. However, there is a gap in the literature for works that address such tasks using deep learning techniques and Portuguese as the main language, especially in the financial-economic context as addressed in this work.",
                "cite_spans": [
                    {
                        "start": 384,
                        "end": 406,
                        "text": "(Zhou and Zhang, 2018)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "3"
            },
            {
                "text": "In this section, we present our BERT-based model in detail. As shown in Figure 1 , it contains three parts: (1) Input layer; (2) BERT layer; and (3) Output layer, which is composed of a Sigmoid activation function and two neurons that represent the classes to be predicted.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 72,
                        "end": 80,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Architecture",
                "sec_num": "4"
            },
            {
                "text": "The input layer consists of a BERT encoder used for input sentence tokenization and produces a tuple of arrays (token, mask, sequence ids), which were used as input to the second layer that is the Portuguese BERT language model (Souza et al., 2020) 1 from Huggingface python package 2 (Wolf et al., 2020) . Figure 2 illustrates the input layer of the proposed model. The entry consists of (1) the original sentence with the mentioned entities and (2) the entities to be verified concatenated. A special token [cls] and a token [sep] are added at the beginning and end of the input string respectively, as mentioned in the original BERT implementation (Devlin et al., 2018) .",
                "cite_spans": [
                    {
                        "start": 285,
                        "end": 304,
                        "text": "(Wolf et al., 2020)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 527,
                        "end": 532,
                        "text": "[sep]",
                        "ref_id": null
                    },
                    {
                        "start": 651,
                        "end": 672,
                        "text": "(Devlin et al., 2018)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 307,
                        "end": 315,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Architecture",
                "sec_num": "4"
            },
            {
                "text": "The third layer of the model architecture is identified as the output layer. This layer is fully connected with a tangent activation function. The output of this layer is propagated to a new fully connected layer, with a Sigmoid activation function, whose characteristic is the mapping of input values to 0 or 1. In this model, these values represent non-relation and relation, respectively. As shown in Figure 1 , this layer still has two output neurons, which indicate the respective classes to be predicted by the model. In the end, we added a dropout layer with a 0.1 rate to avoid model overfitting, which happens when the model memorizes the training data and thereby loses the power of generalization.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 404,
                        "end": 412,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Architecture",
                "sec_num": "4"
            },
            {
                "text": "The purpose of this section is to verify the proposed model performance thought experiments on the financial domain corpus. The proposed study follows the classic methodology of Knowledge Discovery in Databases (KDD) (Fayyad et al., 1996) , which contains 5 phases that range from data collection to the evaluation of the results.",
                "cite_spans": [
                    {
                        "start": 217,
                        "end": 238,
                        "text": "(Fayyad et al., 1996)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "5"
            },
            {
                "text": "The following subsections aim to indicate how each step of the methodology was applied in the context of our work. Subsection 5.1 refers to the Selection step and seeks to indicate what data will be used during the experiments for the RE task. Subsection 5.2 addresses the Pre-processing step, indicating procedures for quality checking, cleaning, correction, or removal of inconsistent or missing data. Subsection 5.3 reports the Transformation phase, where the transformation processes applied to the data set in the context of our work are explored. Subsection 5.4 brings the penultimate phase, of Mining, where the data mining process is presented. Finally, the last phase of the methodology is presented in the subsection 5.5, which consists of evaluating the performance of the model applied on top of the data that were not used in the training or mining phase.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "5"
            },
            {
                "text": "As indicated in section 3, there was no evidence of open data sets in the context of extracting relations in the financial field for the Portuguese language. Therefore, for this work, a corpus was created with 3,288 tuples annotated manually. These tuples originate from more than 4,000 paragraphs of financial : Examples of data transformations in the input layer of the model. The entities to be evaluated appear in bold, and the text that represents the semantic relation between them is underlined. market news, provided by a partner company that collected them in various communication vehicles such as financial market websites, newspapers, and corporate balance sheets. Sentences that include co-referral are also removed because co-reference treatment would require additional processing.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Selection",
                "sec_num": "5.1"
            },
            {
                "text": "The next step concerns the data pre-processing and cleaning. This step occurs through the manual process of spelling correction of each sentence. Acronyms are also extended, as well as the standardization of different ways of indicating the same named entity.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pre-processing",
                "sec_num": "5.2"
            },
            {
                "text": "The standardization can be done manually, but in a real work scenario, this task becomes massive and can be automated by creating a base of named entities and their acronyms. Thus, it is possible to elaborate a process that validates the acronyms contained in the sentence and replace them with their extensions or even with an approach that focuses on only a few specific entities informed by the CI analyst himself.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pre-processing",
                "sec_num": "5.2"
            },
            {
                "text": "The data cleaning process is also done manually, where special characters and acronyms that follow the description itself are removed. Sentences containing less than 4 tokens will also be removed, as they can be considered irrelevant to the context of the approach. At the end of this cleaning step, just over 2500 sentences are filtered.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pre-processing",
                "sec_num": "5.2"
            },
            {
                "text": "In this same phase, the identification of named entities will also occur, through a single NER tool, called SpaCy 3 , ensuring that the same criterion was used for all sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pre-processing",
                "sec_num": "5.2"
            },
            {
                "text": "The named entities in question are those related to the categories person, location, and organization. The focal point is information about the organizations, as well as its relations with other organizations, persons, and locations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pre-processing",
                "sec_num": "5.2"
            },
            {
                "text": "After identifying all named entities, sentences that have less than 2 entities are discarded. At the end of this new disposal, the corpus consists of 1292 unique sentences that move on to the next stage.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pre-processing",
                "sec_num": "5.2"
            },
            {
                "text": "With the identification of the Named Entities in the previous phase, a combination of all the entities present in the sentence is made and a triple (sentence, entity, entity) is formed for each combination, which can generate several records for the same sentence. After this creation of records with the combination of entities, manual annotation of records that have a semantic relation between the highlighted named entities is made manually.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transformation",
                "sec_num": "5.3"
            },
            {
                "text": "After the end of the manual annotation of the relations between the entities, the corpus consists of 3288 records. Of this total, 1485 (45%) are positive tuples, that is, it contains a relation between the highlighted entities, and 1803 (55%) are negative tuples, where there is no relation between the entities. Finally, the two named entities are concatenated at the end of the sentence. The data set is available at https://github.com/DanielReeyes/ relation-extraction-deep-learning.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transformation",
                "sec_num": "5.3"
            },
            {
                "text": "The relation annotating process did not consider the past defined classes or relations. A positive tuple is considered when there is any semantic relation between two named entities of the categories defined in 5.1. Here are some examples of positive annotated tuples that contain relation between 3 https://spacy.io/ named entities of type organization:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transformation",
                "sec_num": "5.3"
            },
            {
                "text": "\u2022 A Abra\u00e7o\u00e9 uma Institui\u00e7\u00e3o Particular de Solidariedade Social.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transformation",
                "sec_num": "5.3"
            },
            {
                "text": "\u2022 A Caixa\u00e9 controladora do Pan , ao lado do BTG , com 32,8% do neg\u00f3cio.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transformation",
                "sec_num": "5.3"
            },
            {
                "text": "\u2022 A Havanna fecha parceria com o Santander para inaugurar um novo modelo de neg\u00f3cios.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transformation",
                "sec_num": "5.3"
            },
            {
                "text": "\u2022 A partir de agora , a NET est\u00e1 na Claro.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transformation",
                "sec_num": "5.3"
            },
            {
                "text": "As sentences are naturally composed of words and characters, then the transformation step in the present study also consists of transforming the tokens into numerical representations by the BERT encoder. As stated in past sections, the special tokens [CLS] and [SEP] are also added and encoded properly on each sentence, finalizing the composition of the input layer.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transformation",
                "sec_num": "5.3"
            },
            {
                "text": "The predictive task is characterized by the search for a behavioral pattern that can predict the behavior of a future entity (Fayyad et al., 1996) . The corpus data are randomly divided into two parts, 80% of which are used for training the model and 20% for testing. The part for the test is still divided equally into 2, where they are used as validation and test sets to test the generalization of the model. The first set is used so that the algorithm can search for this particular pattern in the data concerning the relation label. Thus, after the training stage where the model can recognize this pattern, it is possible to apply it to the validation data and later on the test set, simulating a real environment. In this step, the original balance level is also maintained in all sets created, being able to rule out that the model contains any bias to learn a certain type of complexity.",
                "cite_spans": [
                    {
                        "start": 125,
                        "end": 146,
                        "text": "(Fayyad et al., 1996)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mining",
                "sec_num": "5.4"
            },
            {
                "text": "The adjustment of hyper-parameters of the BERT used was due to the combination of all values indicated by Jacob Devlin in (Devlin et al., 2018) , in addition to the standard values for the Simple Transformers library model. In this work, Jacob used most of the hyper-parameters with default values except for the lot size, learning rate, and the number of training epochs. The dropout rate was always maintained at 0.1. Thus, the values tested for this task were:",
                "cite_spans": [
                    {
                        "start": 122,
                        "end": 143,
                        "text": "(Devlin et al., 2018)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mining",
                "sec_num": "5.4"
            },
            {
                "text": "\u2022 Batch Size: 16, 32;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mining",
                "sec_num": "5.4"
            },
            {
                "text": "Hyper-parameter Value Batch Size 32 Learning Rate 5e-5 Epochs 4 \u2022 Learning Rate (AdamW): 5e-5, 3e-5, 2e-5;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mining",
                "sec_num": "5.4"
            },
            {
                "text": "\u2022 Epochs: 2, 3, 4, 5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mining",
                "sec_num": "5.4"
            },
            {
                "text": "In the end, we did a total of 24 experiments with all the possible combinations of the above described parameters. After analyzing the results, the model with the values was selected according to Table 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 196,
                        "end": 203,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Mining",
                "sec_num": "5.4"
            },
            {
                "text": "To evaluate the model, metrics such as Accuracy, Recall, Precision, and F1-Measure were provided. According to Table 2 , each set maintained the original imbalance of the data set according to the target variable, in this case, indicating whether or not there is a relation between the entities assessed. In this way, the model is evaluated for the ability to indicate whether a given pair of entities contained in a sentence has a relation or not, configuring a binary classification problem, whose positive class refers to entities that have a semantic relation.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 111,
                        "end": 118,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "5.5"
            },
            {
                "text": "After the training stage of the model, it was applied to the test data set. In this evaluation step, the model obtained reasonable results, achieving an overall accuracy and F-Measure of 86%. An important observation to make is that results are also good when it comes to the target class, that is, when the label is positive, as can be seen in Table  3 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 345,
                        "end": 353,
                        "text": "Table  3",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "As indicated in Section 3, the vast majority of studies present RE solutions for texts in English or a domain other than finance. Thus, it is difficult to compare the results of the proposed method with state-of-the-art approaches.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "Nevertheless, it is shown that the proposed model was able to recognize patterns and indicate when two entities are semantically related in the same sentence in the financial domain.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "The process of finding the best parameters for BERT is time-consuming as the predictions made by the network. The time might not be a constraint to using the RE task model applied to the context of the financial domain considering that this demand does not require the processing time to be real-time.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "We believe that if the data set is increased with more samples, the model may have a performance gain. Also, we can notice that the data set has a small unbalanced distribution rate, with a greater number of negative samples.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "This imbalance can help explain the difference in precision and F-measure between the positive and negative class indicated in Table 3 , where it is possible to see that the model gets more right when the tested entities had no relation in the sentence.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 127,
                        "end": 134,
                        "text": "Table 3",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "Regarding Recall, the study indicates that, even with the imbalance of the data, the proposed model achieved a very good performance of approximately 90% when it comes to the positive class (it has a relation). That is, when it really belongs to the positive class, in approximately 90% of the cases, it identifies correctly.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "It is also possible to carry out tests with adjustments of more hyper-parameters such as loss function, optimizers, among others. In addition to adjustments to the hyper-parameters of the approach, more contextual information of the samples can be added, such as the type of the named entity, whether it is an organization, person, or place, and scope adopted for the task being worked on. In this way, it is possible to delimit the types of relations between 2 entities, excluding, for example, an acquisition relation between two entities of the person type.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "The present work proposed an approach to extract relations between named entities, in the financialeconomic context, based on the Portuguese BERT language model, to our best knowledge, different from what is already in the literature. Thus, it provides an insight into the use of pre-trained deep language models for extracting relations for the Portuguese language financial market.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future works",
                "sec_num": "7"
            },
            {
                "text": "From the related work section, it is possible to verify that there is little research on the technology for extracting the relation between named entities for the financial domain, for the Portuguese language. This domain lacks practical solutions, given a large amount of information in the financial field, and manual analysis becomes difficult to meet the needs and make full use of that information.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future works",
                "sec_num": "7"
            },
            {
                "text": "A model of classification of relations between named entities based on BERT was proposed, which replaces explicit linguistic resources, required by previous methods. This approach uses the information from the sentence and the concatenated entity pair, which allows more than one entry to be sent since a sentence can have N pairs of named entities. Therefore, the adopted approach allows the sentence and the pair of entities to be inferred to be sent separately.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future works",
                "sec_num": "7"
            },
            {
                "text": "The results demonstrate that the approach used can bring satisfactory results, reaching an accuracy of 86%. During the discussion of results, some adjustments were made to try to improve accuracy, such as testing other combinations of hyper-parameters and also the increase in the corpus. However, the development of memory improvements and optimizations are still in need, especially in the training period, due to the complexity of the pre-trained BERT model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future works",
                "sec_num": "7"
            },
            {
                "text": "As a natural continuation of this work, we will proceed with tests with other combinations of hyper-parameters as indicated in Section 6. To try to reduce the chance of the model being surprised with some non-standard samples, new data will be annotated and added to the research corpus. Thus, the model can be trained with a greater amount of data and a greater diversity of data patterns.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future works",
                "sec_num": "7"
            },
            {
                "text": "As a continuity, a second model will also be developed, with sequential classification, so that it is possible to highlight the parts of the sentences that represent or describe the relation between the named entities verified. To achieve this goal, this second model will be trained only with the tuples that contain the annotated relation. Thus, the output of the model proposed in this work will be the input of the sequential classifier model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future works",
                "sec_num": "7"
            },
            {
                "text": "Available at https://simpletransformers. ai/ 2 Available at https://github.com/ huggingface/transformers",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "This work was partially funded by the Portuguese Foundation for Science and Technology, project UIDB/00057/2020.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Rembrandt-reconhecimento de entidades mencionadas baseado em rela\u00e7oes e an\u00e1lise detalhada do texto. quot; Encontro do Segundo HAREM",
                "authors": [
                    {
                        "first": "Nuno",
                        "middle": [],
                        "last": "Cardoso",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nuno Cardoso. 2008. Rembrandt-reconhecimento de entidades mencionadas baseado em rela\u00e7oes e an\u00e1lise detalhada do texto. quot; Encontro do Se- gundo HAREM (Universidade de Aveiro Portugal 7 de Setembro de 2008).",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Geo-ontologias e padr\u00f5es para reconhecimento de locais e de suas rela\u00e7\u00f5es em textos: o sei-geo no segundo harem. quot",
                "authors": [
                    {
                        "first": "Marc\u00edrio",
                        "middle": [],
                        "last": "Chaves",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Marc\u00edrio Chaves. 2008. Geo-ontologias e padr\u00f5es para reconhecimento de locais e de suas rela\u00e7\u00f5es em tex- tos: o sei-geo no segundo harem. quot; In Cristina Mota;",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Desafios na avalia\u00e7\u00e3o conjunta do reconhecimento de entidades mencionadas: O Segundo HAREM Linguateca",
                "authors": [
                    {
                        "first": "Diana",
                        "middle": [],
                        "last": "Santos",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diana Santos (ed) Desafios na avalia\u00e7\u00e3o con- junta do reconhecimento de entidades mencionadas: O Segundo HAREM Linguateca 2008.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "A sequence model approach to relation extraction in portuguese",
                "authors": [
                    {
                        "first": "Sandra",
                        "middle": [],
                        "last": "Collovini",
                        "suffix": ""
                    },
                    {
                        "first": "Gabriel",
                        "middle": [],
                        "last": "Machado",
                        "suffix": ""
                    },
                    {
                        "first": "Renata",
                        "middle": [],
                        "last": "Vieira",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
                "volume": "",
                "issue": "",
                "pages": "1908--1912",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sandra Collovini, Gabriel Machado, and Renata Vieira. 2016. A sequence model approach to relation extrac- tion in portuguese. In Proceedings of the Tenth In- ternational Conference on Language Resources and Evaluation (LREC'16), pages 1908-1912.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Relation extraction from text documents",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Cvita\u0161",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "2011 Proceedings of the 34th International Convention MIPRO",
                "volume": "",
                "issue": "",
                "pages": "1565--1570",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A Cvita\u0161. 2011. Relation extraction from text docu- ments. In 2011 Proceedings of the 34th Interna- tional Convention MIPRO, pages 1565-1570. IEEE.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
                "authors": [
                    {
                        "first": "Jacob",
                        "middle": [],
                        "last": "Devlin",
                        "suffix": ""
                    },
                    {
                        "first": "Ming-Wei",
                        "middle": [],
                        "last": "Chang",
                        "suffix": ""
                    },
                    {
                        "first": "Kenton",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Kristina",
                        "middle": [],
                        "last": "Toutanova",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1810.04805"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "From data mining to knowledge discovery in databases",
                "authors": [
                    {
                        "first": "Usama",
                        "middle": [],
                        "last": "Fayyad",
                        "suffix": ""
                    },
                    {
                        "first": "Gregory",
                        "middle": [],
                        "last": "Piatetsky-Shapiro",
                        "suffix": ""
                    },
                    {
                        "first": "Padhraic",
                        "middle": [],
                        "last": "Smyth",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "AI magazine",
                "volume": "17",
                "issue": "3",
                "pages": "",
                "other_ids": {
                    "DOI": [
                        "10.1609/aimag.v17i3.1230"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth. 1996. From data mining to knowl- edge discovery in databases. AI magazine, 17(3):37. GS Search.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Deep learning for identification of adverse drug reaction relations",
                "authors": [
                    {
                        "first": "Edson",
                        "middle": [],
                        "last": "Florez",
                        "suffix": ""
                    },
                    {
                        "first": "Frederic",
                        "middle": [],
                        "last": "Precioso",
                        "suffix": ""
                    },
                    {
                        "first": "Romaric",
                        "middle": [],
                        "last": "Pighetti",
                        "suffix": ""
                    },
                    {
                        "first": "Michel",
                        "middle": [],
                        "last": "Riveill",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 2019 International Symposium on Signal Processing Systems",
                "volume": "",
                "issue": "",
                "pages": "149--153",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Edson Florez, Frederic Precioso, Romaric Pighetti, and Michel Riveill. 2019. Deep learning for identifica- tion of adverse drug reaction relations. In Proceed- ings of the 2019 International Symposium on Signal Processing Systems, pages 149-153.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Subsequence-level entity attention lstm for relation extraction",
                "authors": [
                    {
                        "first": "Tao",
                        "middle": [],
                        "last": "Gan",
                        "suffix": ""
                    },
                    {
                        "first": "Yunqiang",
                        "middle": [],
                        "last": "Gan",
                        "suffix": ""
                    },
                    {
                        "first": "Yanmin",
                        "middle": [],
                        "last": "He",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "2019 16th International Computer Conference on Wavelet Active Media Technology and Information Processing",
                "volume": "",
                "issue": "",
                "pages": "262--265",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "TAO GAN, YUNQIANG GAN, and YANMIN HE. 2019. Subsequence-level entity attention lstm for re- lation extraction. In 2019 16th International Com- puter Conference on Wavelet Active Media Tech- nology and Information Processing, pages 262-265. IEEE.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "A multi-task learning based approach to biomedical entity relation extraction",
                "authors": [
                    {
                        "first": "Qingqing",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Zhihao",
                        "middle": [],
                        "last": "Yang",
                        "suffix": ""
                    },
                    {
                        "first": "Ling",
                        "middle": [],
                        "last": "Luo",
                        "suffix": ""
                    },
                    {
                        "first": "Lei",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Yin",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Hongfei",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "Jian",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Liang",
                        "middle": [],
                        "last": "Yang",
                        "suffix": ""
                    },
                    {
                        "first": "Kan",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    },
                    {
                        "first": "Yijia",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)",
                "volume": "",
                "issue": "",
                "pages": "680--682",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Qingqing Li, Zhihao Yang, Ling Luo, Lei Wang, Yin Zhang, Hongfei Lin, Jian Wang, Liang Yang, Kan Xu, and Yijia Zhang. 2018. A multi-task learning based approach to biomedical entity relation extrac- tion. In 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 680- 682. IEEE.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Improving rnn with attention and embedding for adverse drug reactions",
                "authors": [
                    {
                        "first": "Chandra",
                        "middle": [],
                        "last": "Pandey",
                        "suffix": ""
                    },
                    {
                        "first": "Zina",
                        "middle": [],
                        "last": "Ibrahim",
                        "suffix": ""
                    },
                    {
                        "first": "Honghan",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "Ehtesham",
                        "middle": [],
                        "last": "Iqbal",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Dobson",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 2017 International Conference on Digital Health",
                "volume": "",
                "issue": "",
                "pages": "67--71",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chandra Pandey, Zina Ibrahim, Honghan Wu, Ehte- sham Iqbal, and Richard Dobson. 2017. Improving rnn with attention and embedding for adverse drug reactions. In Proceedings of the 2017 International Conference on Digital Health, pages 67-71.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Designing an adaptive attention mechanism for relation classification",
                "authors": [
                    {
                        "first": "Pengda",
                        "middle": [],
                        "last": "Qin",
                        "suffix": ""
                    },
                    {
                        "first": "Weiran",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    },
                    {
                        "first": "Jun",
                        "middle": [],
                        "last": "Guo",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "2017 International Joint Conference on Neural Networks (IJCNN)",
                "volume": "",
                "issue": "",
                "pages": "4356--4362",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pengda Qin, Weiran Xu, and Jun Guo. 2017. De- signing an adaptive attention mechanism for rela- tion classification. In 2017 International Joint Con- ference on Neural Networks (IJCNN), pages 4356- 4362. IEEE.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Gikiclef: Crosscultural issues in an international setting: asking non-english-centered questions to wikipedia",
                "authors": [
                    {
                        "first": "Diana",
                        "middle": [],
                        "last": "Santos",
                        "suffix": ""
                    },
                    {
                        "first": "Lu\u00eds Miguel",
                        "middle": [],
                        "last": "Cabral",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "In Francesca Borri; Alessandro Nardi",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diana Santos and Lu\u00eds Miguel Cabral. 2009. Giki- clef: Crosscultural issues in an international setting: asking non-english-centered questions to wikipedia. In Francesca Borri; Alessandro Nardi;",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Cross Language Evaluation Forum: Working notes for CLEF",
                "authors": [
                    {
                        "first": "Carol",
                        "middle": [],
                        "last": "Peters",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "",
                "volume": "30",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Carol Peters (ed) Cross Language Evaluation Fo- rum: Working notes for CLEF 2009 (Corfu 30",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Reconhecimento de entidades mencionadas em portugu\u00eas: Documenta\u00e7\u00e3o e actas do harem",
                "authors": [
                    {
                        "first": "Diana",
                        "middle": [],
                        "last": "Santos",
                        "suffix": ""
                    },
                    {
                        "first": "Nuno",
                        "middle": [],
                        "last": "Cardoso",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diana Santos and Nuno Cardoso. 2007. Reconhec- imento de entidades mencionadas em portugu\u00eas: Documenta\u00e7\u00e3o e actas do harem, a primeira avalia\u00e7\u00e3o conjunta na\u00e1rea.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Information extraction",
                "authors": [
                    {
                        "first": "Sunita",
                        "middle": [],
                        "last": "Sarawagi",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sunita Sarawagi. 2008. Information extraction. Now Publishers Inc.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "BERTimbau: pretrained BERT models for Brazilian Portuguese",
                "authors": [
                    {
                        "first": "F\u00e1bio",
                        "middle": [],
                        "last": "Souza",
                        "suffix": ""
                    },
                    {
                        "first": "Rodrigo",
                        "middle": [],
                        "last": "Nogueira",
                        "suffix": ""
                    },
                    {
                        "first": "Roberto",
                        "middle": [],
                        "last": "Lotufo",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "9th Brazilian Conference on Intelligent Systems",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "F\u00e1bio Souza, Rodrigo Nogueira, and Roberto Lotufo. 2020. BERTimbau: pretrained BERT models for Brazilian Portuguese. In 9th Brazilian Conference on Intelligent Systems, BRACIS, Rio Grande do Sul, Brazil, October 20-23 (to appear).",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Transformers: State-of-the-art natural language processing",
                "authors": [
                    {
                        "first": "Thomas",
                        "middle": [],
                        "last": "Wolf",
                        "suffix": ""
                    },
                    {
                        "first": "Lysandre",
                        "middle": [],
                        "last": "Debut",
                        "suffix": ""
                    },
                    {
                        "first": "Victor",
                        "middle": [],
                        "last": "Sanh",
                        "suffix": ""
                    },
                    {
                        "first": "Julien",
                        "middle": [],
                        "last": "Chaumond",
                        "suffix": ""
                    },
                    {
                        "first": "Clement",
                        "middle": [],
                        "last": "Delangue",
                        "suffix": ""
                    },
                    {
                        "first": "Anthony",
                        "middle": [],
                        "last": "Moi",
                        "suffix": ""
                    },
                    {
                        "first": "Pierric",
                        "middle": [],
                        "last": "Cistac",
                        "suffix": ""
                    },
                    {
                        "first": "Tim",
                        "middle": [],
                        "last": "Rault",
                        "suffix": ""
                    },
                    {
                        "first": "R\u00e9mi",
                        "middle": [],
                        "last": "Louf",
                        "suffix": ""
                    },
                    {
                        "first": "Morgan",
                        "middle": [],
                        "last": "Funtowicz",
                        "suffix": ""
                    },
                    {
                        "first": "Joe",
                        "middle": [],
                        "last": "Davison",
                        "suffix": ""
                    },
                    {
                        "first": "Sam",
                        "middle": [],
                        "last": "Shleifer",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Von Platen",
                        "suffix": ""
                    },
                    {
                        "first": "Clara",
                        "middle": [],
                        "last": "Ma",
                        "suffix": ""
                    },
                    {
                        "first": "Yacine",
                        "middle": [],
                        "last": "Jernite",
                        "suffix": ""
                    },
                    {
                        "first": "Julien",
                        "middle": [],
                        "last": "Plu",
                        "suffix": ""
                    },
                    {
                        "first": "Canwen",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    },
                    {
                        "first": "Teven",
                        "middle": [
                            "Le"
                        ],
                        "last": "Scao",
                        "suffix": ""
                    },
                    {
                        "first": "Sylvain",
                        "middle": [],
                        "last": "Gugger",
                        "suffix": ""
                    },
                    {
                        "first": "Mariama",
                        "middle": [],
                        "last": "Drame",
                        "suffix": ""
                    },
                    {
                        "first": "Quentin",
                        "middle": [],
                        "last": "Lhoest",
                        "suffix": ""
                    },
                    {
                        "first": "Alexander",
                        "middle": [
                            "M"
                        ],
                        "last": "Rush",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
                "volume": "",
                "issue": "",
                "pages": "38--45",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Enriching pretrained language model with entity information for relation classification",
                "authors": [
                    {
                        "first": "Shanchan",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "Yifan",
                        "middle": [],
                        "last": "He",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management",
                "volume": "",
                "issue": "",
                "pages": "2361--2364",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shanchan Wu and Yifan He. 2019. Enriching pre- trained language model with entity information for relation classification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2361-2364.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Pre-trained bert-gru model for relation extraction",
                "authors": [
                    {
                        "first": "Rongli",
                        "middle": [],
                        "last": "Yi",
                        "suffix": ""
                    },
                    {
                        "first": "Wenxin",
                        "middle": [],
                        "last": "Hu",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 2019 8th International Conference on Computing and Pattern Recognition",
                "volume": "",
                "issue": "",
                "pages": "453--457",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rongli Yi and Wenxin Hu. 2019. Pre-trained bert-gru model for relation extraction. In Proceedings of the 2019 8th International Conference on Computing and Pattern Recognition, pages 453-457.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "A convolutional neural network method for relation classification",
                "authors": [
                    {
                        "first": "Qin",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Jianhua",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Ying",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Zhixiong",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "2017 International Conference on Progress in Informatics and Computing (PIC)",
                "volume": "",
                "issue": "",
                "pages": "440--444",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Qin Zhang, Jianhua Liu, Ying Wang, and Zhixiong Zhang. 2017. A convolutional neural network method for relation classification. In 2017 Interna- tional Conference on Progress in Informatics and Computing (PIC), pages 440-444. IEEE.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Research on entity relationship extraction in financial and economic field based on deep learning",
                "authors": [
                    {
                        "first": "Zhenyu",
                        "middle": [],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "Haiyang",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "2018 IEEE 4th International Conference on Computer and Communications (ICCC)",
                "volume": "",
                "issue": "",
                "pages": "2430--2435",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhenyu Zhou and Haiyang Zhang. 2018. Research on entity relationship extraction in financial and eco- nomic field based on deep learning. In 2018 IEEE 4th International Conference on Computer and Com- munications (ICCC), pages 2430-2435. IEEE.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "num": null,
                "text": "Complete model architecture with its 3 layers: (1) Input layer; (2) BERT layer; (3) Output layer.",
                "type_str": "figure"
            },
            "FIGREF1": {
                "uris": null,
                "num": null,
                "text": "Figure 2: Examples of data transformations in the input layer of the model. The entities to be evaluated appear in bold, and the text that represents the semantic relation between them is underlined.",
                "type_str": "figure"
            },
            "TABREF0": {
                "content": "<table><tr><td>Set</td><td>Samples</td><td>Positive Class Distribution (%)</td><td>Positive Samples</td></tr><tr><td>Original</td><td>3288</td><td>45.16</td><td>1485</td></tr><tr><td>Training</td><td>2630</td><td>45.17</td><td>1188</td></tr><tr><td>Validation</td><td>329</td><td>45.28</td><td>149</td></tr><tr><td>Test</td><td>329</td><td>45.98</td><td>148</td></tr></table>",
                "num": null,
                "type_str": "table",
                "html": null,
                "text": "Combination of hyper-parameters that presented better results."
            },
            "TABREF1": {
                "content": "<table/>",
                "num": null,
                "type_str": "table",
                "html": null,
                "text": "Sample composition of each data set used in the experiments."
            },
            "TABREF3": {
                "content": "<table/>",
                "num": null,
                "type_str": "table",
                "html": null,
                "text": "Precision, Recall and F-Measure calculated for each class and Accuracy and general F-Measure of the model."
            }
        }
    }
}