{
"paper_id": "P15-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:11:51.709299Z"
},
"title": "Low-Rank Regularization for Sparse Conjunctive Feature Spaces: An Application to Named Entity Classification",
"authors": [
{
"first": "Audi",
"middle": [],
"last": "Primadhanty",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universitat Polit\u00e8cnica de Catalunya",
"location": {}
},
"email": "primadhanty@cs.upc.edu"
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": "",
"affiliation": {},
"email": "xavier.carreras@xrce.xerox.com"
},
{
"first": "Ariadna",
"middle": [],
"last": "Quattoni",
"suffix": "",
"affiliation": {},
"email": "ariadna.quattoni@xrce.xerox.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Entity classification, like many other important problems in NLP, involves learning classifiers over sparse high-dimensional feature spaces that result from the conjunction of elementary features of the entity mention and its context. In this paper we develop a low-rank regularization framework for training max-entropy models in such sparse conjunctive feature spaces. Our approach handles conjunctive feature spaces using matrices and induces an implicit low-dimensional representation via low-rank constraints. We show that when learning entity classifiers under minimal supervision, using a seed set, our approach is more effective in controlling model capacity than standard techniques for linear classifiers.",
"pdf_parse": {
"paper_id": "P15-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "Entity classification, like many other important problems in NLP, involves learning classifiers over sparse high-dimensional feature spaces that result from the conjunction of elementary features of the entity mention and its context. In this paper we develop a low-rank regularization framework for training max-entropy models in such sparse conjunctive feature spaces. Our approach handles conjunctive feature spaces using matrices and induces an implicit low-dimensional representation via low-rank constraints. We show that when learning entity classifiers under minimal supervision, using a seed set, our approach is more effective in controlling model capacity than standard techniques for linear classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many important problems in NLP involve learning classifiers over sparse high-dimensional feature spaces that result from the conjunction of elementary features. For example, to classify an entity in a document, it is standard to exploit features of the left and right context in which the entity occurs as well as spelling features of the entity mention itself. These sets of features can be grouped into vectors which we call elementary feature vectors. In our example, there will be one elementary feature vector for the left context, one for the right context and one for the features of the mention. Observe that, when the elementary vectors consist of binary indicator features, the outer product of any pair of vectors represents all conjunctions of the corresponding elementary features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Ideally, we would like to train a classifier that can leverage all conjunctions of elementary features, since among them there might be some that are discriminative for the classification task at hand. However, allowing for such expressive high dimensional feature space comes at a cost: data sparsity becomes a key challenge and controlling the capacity of the model is crucial to avoid overfitting the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem of data sparsity is even more severe when the goal is to train classifiers with minimal supervision, i.e. small training sets. For example, in the entity classification setting we might be interested in training a classifier using only a small set of examples of each entity class. This is a typical scenario in an industrial setting, where developers are interested in classifying entities according to their own classification schema and can only provide a handful of examples of each class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A standard approach to control the capacity of a linear classifier is to use \u21131 or \u21132 regularization on the parameter vector. However, this type of regularization does not seem to be effective when dealing with sparse conjunctive feature spaces. The main limitation is that \u21131 and \u21132 regularization cannot let the model give weight to conjunctions that have not been observed at training. Without such ability it is unlikely that the model will generalize to novel examples, where most of the conjunctions will be unseen in the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Of course, one could impose a strong prior on the weight vector so that it assigns weight to unseen conjunctions, but how can we build such a prior? What kind of reasonable constraints can we put on unseen conjunctions?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another common approach to handle high dimensional conjunctive feature spaces is to manually design the feature function so that it includes only a subset of \"relevant\" conjunctions. But designing such a feature function can be time-consuming, and one might need to design a new feature function for each classification task. Ideally, we would have a learning algorithm that does not require such feature engineering and that can automatically leverage rich conjunctive feature spaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present a solution to this problem by developing a regularization framework specifically designed for sparse conjunctive feature spaces. Our approach results in a more effective way of controlling model capacity and it does not require feature engineering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our strategy is based on:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Employing tensors to define the scoring function of a max-entropy model as a multilinear form that computes weighted inner products between elementary vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Forcing the model to induce low-dimensional embeddings of elementary vectors via low-rank regularization on the tensor parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The proposed regularization framework is based on a simple conceptual trick. The standard approach to handle conjunctive feature spaces in NLP is to regard the parameters of the linear model as long vectors computing an inner product with a high dimensional feature representation that lists explicitly all possible conjunctions. Instead, the parameters of our model will be tensors and the compatibility score between an input pattern and a class will be defined as the sum of multilinear functions over elementary vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We then show that the rank of the tensor has a very natural interpretation. It can be seen as the intrinsic dimensionality of a latent embedding of the elementary feature vectors. Thus by imposing a low-rank penalty on the tensor parameters we are encouraging the model to induce a low-dimensional projection of the elementary feature vectors. Using the rank itself as a regularization constraint in the learning algorithm would result in a non-convex optimization. Instead, we follow a standard approach which is to use the nuclear norm as a convex relaxation of the rank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary the main contributions of this paper are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We develop a new regularization framework for training max-entropy models in high-dimensional sparse conjunctive feature spaces. Since the proposed regularization implicitly induces a low-dimensional embedding of feature vectors, our algorithm can also be seen as a way of implicitly learning a latent variable model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We present a simple convex learning algorithm for training the parameters of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We conduct experiments on learning entity classifiers with minimal supervision. Our results show that the proposed regularization framework is better for sparse conjunctive feature spaces than standard \u21132 and \u21131 regularization. These results make us conclude that encouraging the max-entropy model to operate on a low-dimensional space is an effective way of controlling the capacity of the model and ensuring good generalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The formulation we develop in this paper applies to any prediction task whose inputs are some form of tuple. We focus on classification of entity mentions, or entities in the context of a sentence. Formally, our input objects are tuples x = \u27e8l, e, r\u27e9 consisting of an entity e, a left context l and a right context r. The goal is to classify x into one entity class in the set Y. We will use log-linear models of the form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Classification with Log-linear Models",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\Pr(y \\mid x; \\theta) = \\frac{\\exp\\{s_\\theta(x, y)\\}}{\\sum_{y'} \\exp\\{s_\\theta(x, y')\\}}",
"eq_num": "(1)"
}
],
"section": "Entity Classification with Log-linear Models",
"sec_num": "2"
},
{
"text": "where s_\u03b8 : X \u00d7 Y \u2192 R is a scoring function of entity tuples with a candidate class, and \u03b8 are the parameters of this function, to be specified below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Classification with Log-linear Models",
"sec_num": "2"
},
{
"text": "In the literature it is common to employ a feature-based linear model. That is, one defines a feature function \u03c6 : X \u2192 {0, 1}^n that represents entity tuples in an n-dimensional binary feature space, and the model has a weight vector for each class,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Classification with Log-linear Models",
"sec_num": "2"
},
{
"text": "\u03b8 = {w_y}_{y\u2208Y}. Then s_\u03b8(x, y) = \u03c6(x) \u00b7 w_y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Classification with Log-linear Models",
"sec_num": "2"
},
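The log-linear model of Eq. (1) with linear scores s_\u03b8(x, y) = \u03c6(x) \u00b7 w_y can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the paper; the function name `class_probs` and the toy dimensions are invented here.

```python
import numpy as np

def class_probs(phi_x, W):
    """Log-linear model of Eq. (1) with linear scores s(x, y) = phi(x) . w_y.

    phi_x: (n,) binary feature vector for one tuple x.
    W:     (|Y|, n) matrix with one weight vector per class (rows).
    Returns a (|Y|,) vector of class probabilities.
    """
    scores = W @ phi_x           # s(x, y) for every class y
    scores -= scores.max()       # subtract the max to stabilize the softmax
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

# Toy usage: 3 classes, 4 binary features.
phi = np.array([1.0, 0.0, 1.0, 0.0])
W = np.random.default_rng(0).normal(size=(3, 4))
p = class_probs(phi, W)
```

The max-subtraction does not change the distribution (it cancels between numerator and denominator) but avoids overflow for large scores.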
{
"text": "In this section we propose a specific family of models for classifying entity tuples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Low-rank Entity Classification Models",
"sec_num": "3"
},
{
"text": "We start from the observation that when representing tuple objects such as x = \u27e8l, e, r\u27e9 with features, we often begin with a feature representation of each element of the tuple. Hence, let \u03c6_l and \u03c6_r be two feature functions representing left and right contexts, with binary dimensions d_1 and d_2 respectively. For now, we will define a model that ignores the entity mention e and makes predictions using context features. It is natural to define conjunctions of left and right features. Hence, in its most general form, one can define a matrix W_y \u2208 R^{d_1\u00d7d_2} for each class, such that \u03b8 = {W_y}_{y\u2208Y} and the score is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Low-rank Model of Left-Right Contexts",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s_\\theta(\\langle l, e, r \\rangle, y) = \\phi_l(l)^\\top W_y \\phi_r(r) .",
"eq_num": "(2)"
}
],
"section": "A Low-rank Model of Left-Right Contexts",
"sec_num": "3.1"
},
{
"text": "Note that this corresponds to a feature-based linear model operating in the product space of \u03c6 l and \u03c6 r , that is, the score has one term for each pair of features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Low-rank Model of Left-Right Contexts",
"sec_num": "3.1"
},
{
"text": "\u2211_{i,j} \u03c6_l(l)[i] \u03c6_r(r)[j] W_y[i, j].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Low-rank Model of Left-Right Contexts",
"sec_num": "3.1"
},
{
"text": "Note also that it is trivial to include elementary features of \u03c6_l and \u03c6_r, in addition to conjunctions, by having a constant dimension in each of the two representations set to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Low-rank Model of Left-Right Contexts",
"sec_num": "3.1"
},
{
"text": "In all, the model in Eq. (2) is very expressive, with the caveat that it can easily overfit the data, especially when we work only with a handful of labeled examples. The standard way to control the capacity of a linear model is via \u21131 or \u21132 regularization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Low-rank Model of Left-Right Contexts",
"sec_num": "3.1"
},
{
"text": "Regarding our parameters as matrices allows us to control the capacity of the model via regularizers that favor parameter matrices with low rank. To see the effect of these regularizers, consider that W_y has rank k, and let W_y = U_y \u03a3_y V_y^\u22a4 be the singular value decomposition, where U_y \u2208 R^{d_1\u00d7k} and V_y \u2208 R^{d_2\u00d7k} are orthonormal projections and \u03a3_y \u2208 R^{k\u00d7k} is a diagonal matrix of singular values. We can rewrite the score function as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Low-rank Model of Left-Right Contexts",
"sec_num": "3.1"
},
{
"text": "s_\u03b8(\u27e8l, e, r\u27e9, y) = (\u03c6_l(l)^\u22a4 U_y) \u03a3_y (V_y^\u22a4 \u03c6_r(r)) . (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Low-rank Model of Left-Right Contexts",
"sec_num": "3.1"
},
{
"text": "In words, the rank k is the intrinsic dimensionality of the inner product behind the score function. A low-rank regularizer will favor parameter matrices that have low intrinsic dimensionality. Below we describe a convex optimization for low-rank models using nuclear norm regularization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Low-rank Model of Left-Right Contexts",
"sec_num": "3.1"
},
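To make Eq. (3) concrete, the following NumPy sketch builds a rank-k parameter matrix W_y = U_y \u03a3_y V_y^\u22a4 and checks that the full bilinear score of Eq. (2), which sums over all feature conjunctions, equals the weighted inner product of the two k-dimensional projections. All names and dimensions here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2, k = 6, 5, 2

# Build a rank-k matrix W_y = U_y Sigma_y V_y^T with orthonormal factors.
U, _ = np.linalg.qr(rng.normal(size=(d1, k)))
V, _ = np.linalg.qr(rng.normal(size=(d2, k)))
Sigma = np.diag([3.0, 1.5])
W_y = U @ Sigma @ V.T

# Binary elementary feature vectors for left and right contexts.
phi_l = rng.integers(0, 2, size=d1).astype(float)
phi_r = rng.integers(0, 2, size=d2).astype(float)

# Full bilinear score over all d1 x d2 feature conjunctions (Eq. 2)...
score_full = phi_l @ W_y @ phi_r
# ...equals a weighted inner product of two k-dimensional embeddings (Eq. 3).
score_lowrank = (phi_l @ U) @ Sigma @ (V.T @ phi_r)
```

The second form never materializes the d1 x d2 conjunction space: the contexts are first projected to k dimensions, which is exactly the implicit embedding the low-rank penalty encourages.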
{
"text": "The model above classifies entities based only on the context. Here we propose an extension to make use of features of the entity. Let T be a set of possible entity feature tags, i.e. tags that describe an entity, such as ISCAPITALIZED, CONTAINSDIGITS, SINGLETOKEN, . . . Let \u03c6_e be a feature function representing entities. For this case, to simplify our expression, we will use a set notation and denote by \u03c6_e(e) \u2286 T the set of feature tags that describe e. Our model will be defined with one parameter matrix per feature tag and class label, i.e. \u03b8 = {W_{t,y}}_{t\u2208T, y\u2208Y}. The model form is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Entity Features",
"sec_num": "3.2"
},
{
"text": "s_\u03b8(\u27e8l, e, r\u27e9, y) = \u2211_{t\u2208\u03c6_e(e)} \u03c6_l(l)^\u22a4 W_{t,y} \u03c6_r(r) . (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Entity Features",
"sec_num": "3.2"
},
{
"text": "In this section we describe a convex procedure to learn models of the above form that have low rank. We will define an objective that combines a loss and a regularization term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Low-rank Constraints",
"sec_num": "3.3"
},
{
"text": "Our first observation is that our parameters are a tensor with up to four axes, namely left and right context representations, entity features, and entity classes. While a matrix has a clear definition of rank, it is not the case for general tensors, and there exist various definitions in the literature. The technique that we use is based on matricization of the tensor, that is, turning the tensor into a matrix that has the same parameters as the tensor but organized in two axes. This is done by partitioning the tensor axes into two sets, one for matrix rows and another for columns. Once the tensor has been turned into a matrix, we can use the standard definition of matrix rank. A main advantage of this approach is that we can make use of standard routines like singular value decomposition (SVD) to decompose the matricized tensor. This is the main reason behind our choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Low-rank Constraints",
"sec_num": "3.3"
},
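The matricization step described above amounts to a reshape that keeps one chosen axis (here, the left-context axis, as the paper chooses) as rows and flattens all remaining axes into columns; matrix rank is then well defined on the result. A minimal sketch under assumed toy dimensions:

```python
import numpy as np

# A 4-axis parameter tensor: left context x right context x entity tag x class.
d1, d2, T, Y = 4, 3, 2, 5
theta = np.arange(d1 * d2 * T * Y, dtype=float).reshape(d1, d2, T, Y)

# Matricize: left-context axis as rows, the remaining axes flattened as columns.
W = theta.reshape(d1, d2 * T * Y)

# Standard matrix rank (and SVD) now applies to the matricized tensor.
rank = np.linalg.matrix_rank(W)
```

Because the left-context axis is already the leading axis, a plain row-major reshape suffices; for a different row/column partition one would first transpose the tensor axes and then reshape.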
{
"text": "In general, different ways of partitioning the tensor axes will lead to different notions of intrinsic dimensions. In our case we choose the left context axes as the row dimension, and the rest of axes as the column dimension. 3 In this section, we will denote as W the matricized version of the parameters \u03b8 of our models.",
"cite_spans": [
{
"start": 227,
"end": 228,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Low-rank Constraints",
"sec_num": "3.3"
},
{
"text": "The second observation is that minimizing the rank of a matrix is a non-convex problem. We make use of a convex relaxation based on the nuclear norm (Srebro and Shraibman, 2005). The nuclear norm of a matrix W, denoted \u2016W\u2016\u2217, is the sum of its singular values:",
"cite_spans": [
{
"start": 149,
"end": 177,
"text": "(Srebro and Shraibman, 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Low-rank Constraints",
"sec_num": "3.3"
},
{
"text": "\u2016W\u2016\u2217 = \u2211_i \u03a3_{i,i}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Low-rank Constraints",
"sec_num": "3.3"
},
{
"text": "where W = U\u03a3V^\u22a4 is the singular value decomposition of W. This norm has been used in several applications in machine learning as a convex surrogate for imposing low rank, e.g. (Srebro et al., 2004) .",
"cite_spans": [
{
"start": 176,
"end": 197,
"text": "(Srebro et al., 2004)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Low-rank Constraints",
"sec_num": "3.3"
},
{
"text": "Thus, the nuclear norm is used as a regularizer. With this, we define our objective as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Low-rank Constraints",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\operatorname*{argmin}_{W} \\; L(W) + \\tau R(W) ,",
"eq_num": "(5)"
}
],
"section": "Learning with Low-rank Constraints",
"sec_num": "3.3"
},
{
"text": "where L(W) is a convex loss function, R(W) is a regularizer, and \u03c4 is a constant that trades off error and capacity. In experiments we will compare nuclear norm regularization with \u21131 and \u21132 regularizers. In all cases we use the negative log-likelihood as loss function, denoting the training data as D:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Low-rank Constraints",
"sec_num": "3.3"
},
{
"text": "L(W) = \u2211_{(\u27e8l,e,r\u27e9, y)\u2208D} \u2212log Pr(y | \u27e8l, e, r\u27e9; W) . (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Low-rank Constraints",
"sec_num": "3.3"
},
{
"text": "To solve the objective in Eq. (5) we use a simple optimization scheme known as forward-backward splitting (FOBOS) (Duchi and Singer, 2009). In a series of iterations, this algorithm performs a gradient update followed by a proximal projection of the parameters. The projection depends on the regularizer used: for \u21131 it thresholds the parameters; for \u21132 it scales them; and for nuclear-norm regularization it thresholds the singular values. This means that, for nuclear-norm regularization, each iteration requires decomposing W using SVD. See (Madhyastha et al., 2014) for details about this optimization for a related application.",
"cite_spans": [
{
"start": 114,
"end": 138,
"text": "(Duchi and Singer, 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with Low-rank Constraints",
"sec_num": "3.3"
},
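The proximal step that FOBOS applies for the nuclear norm can be sketched as singular-value soft-thresholding: decompose with SVD, shrink each singular value toward zero, and recompose. This is an illustrative NumPy sketch; the name `prox_nuclear` and the constants are ours, not the paper's.

```python
import numpy as np

def prox_nuclear(W, tau_eta):
    """Proximal operator of the nuclear norm: soft-threshold the singular
    values of W by tau_eta (regularization constant times the step size)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_shrunk = np.maximum(s - tau_eta, 0.0)  # shrink; small values become 0
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 6))            # parameters after a gradient update
W_prox = prox_nuclear(W, tau_eta=1.0)  # projected (low-rank-biased) parameters
```

Singular values below the threshold are zeroed out, so larger tau_eta values push the matricized parameters toward lower rank, which is exactly how the regularizer controls capacity.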
{
"text": "The main aspect of our approach is the use of a spectral penalty (i.e., the rank) to control the capacity of multilinear functions parameterized by matrices or tensors. used nuclear-norm regularization to learn latentvariable max-margin sequence taggers. Madhyastha et al. (2014) defined bilexical distribu- Lei et al. (2014) also use low-rank tensor learning in the context of dependency parsing, where like in our case dependencies are represented by conjunctive feature spaces. While the motivation is similar, their technical solution is different. We use the technique of matricization of a tensor combined with a nuclear-norm relaxation to obtain a convex learning procedure. In their case they explicitly look for a low-dimensional factorization of the tensor using a greedy alternating optimization. Also recently, have framed entity classification as a low-rank matrix completion problem. The idea is based on the fact that if two entities (in rows) have similar descriptions (in columns) they should have similar classes. The low-rank structure of the matrix defines intrinsic representations of entities and feature descriptions. The same idea was applied to relation extraction , using a matrix of entity pairs times descriptions that corresponds to a matricization of an entity-entity-description tensor. Very recently Singh et al. (2015) explored alternative ways of applying low-rank constraints to tensor-based relation extraction.",
"cite_spans": [
{
"start": 308,
"end": 325,
"text": "Lei et al. (2014)",
"ref_id": "BIBREF6"
},
{
"start": 1332,
"end": 1351,
"text": "Singh et al. (2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Another aspect of this paper is training entity classification models using minimal supervision, which has been addressed by multiple works in the literature. A classical successful approach for this problem is to use co-training (Blum and Mitchell, 1998) : learn two classifiers that use different views of the data by using each other's predictions. In the same line, Collins and Singer (1999) trained entity classifiers by bootstrapping from an initial set of seeds, using a boosting version of co-training. Seed sets have also been exploited by graphical model approaches. Haghighi and Klein (2006) define a graphical model that is soft-constrained such that the prediction for an unlabeled example agrees with the labels of seeds that are distributionally similar. Li et al. (2010) present a Bayesian approach to expand an initial seed set, with the goal of creating a gazetteer.",
"cite_spans": [
{
"start": 230,
"end": 255,
"text": "(Blum and Mitchell, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 370,
"end": 395,
"text": "Collins and Singer (1999)",
"ref_id": "BIBREF3"
},
{
"start": 577,
"end": 602,
"text": "Haghighi and Klein (2006)",
"ref_id": "BIBREF5"
},
{
"start": 770,
"end": 786,
"text": "Li et al. (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Another approach to entity recognition that, like in our case, learns projections of contextual features is the method by Ando and Zhang (2005) . [Table caption: For each entity class, the seed of entities for the 10-30 set, together with the number of mentions in the training data that involve entities in the seed, for various sizes of the seeds.]",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "Ando and Zhang (2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "They define a set of auxiliary tasks, which can be supervised using unlabeled data, and find a projection of the data that works well as input representation for the auxiliary tasks. This representation is then used for the target task. More recently Neelakantan and Collins (2014) presented another approach to gazetteer expansion using an initial seed. A novel aspect is the use of Canonical Correlation Analysis (CCA) to compute embeddings of entity contexts, which are then used by the named entity classifier. Like in our case, their method learns a compressed representation of contexts that helps prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In this section we evaluate our regularization framework for training models in highdimensional sparse conjunctive feature spaces. We run experiments on learning entity classifiers with minimal supervision. We focus on classification of unseen entities to highlight the ability of the regularizer to generalize over conjunctions that are not observed at training. We simulate minimal supervision using the CoNLL-2003 Shared Task data (Tjong Kim Sang and De Meulder, 2003) , and compare the performance to 1 and 2 regularizers.",
"cite_spans": [
{
"start": 434,
"end": 471,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We use a minimal supervision setting where we provide the algorithm a seed of entities for each class, that is, a list of entities that is representative for that class. The assumption is that any mention of an entity in the seed is a positive example for the corresponding class. Given unlabeled data and a seed of entities for each class, the goal is to learn a model that correctly classifies mentions of entities that are not in the seed. In addition to standard entity classes, we also consider a special non-entity class, which is part of the classification but is excluded from evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimal Supervision Task",
"sec_num": "5.1"
},
{
"text": "Note that named entity classification for unseen entities is a challenging problem. Even in the standard fully-supervised scenario, when we measure the performance of state-of-the-art methods on unseen entities, the F1 values are in the range of 60%. This represents a significant drop with respect to the standard metrics for named entity recognition, which consider all entity mentions of the test set irrespective of whether they appear in the training data or not, and where F1 values at 90% levels are obtained (e.g. (Ratinov and Roth, 2009) ). This suggests that part of the success of state-of-the-art models is in storing known entities together with their type (in the form of gazetteers or directly in lexicalized parameters of the model).",
"cite_spans": [
{
"start": 522,
"end": 546,
"text": "(Ratinov and Roth, 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Minimal Supervision Task",
"sec_num": "5.1"
},
{
"text": "We use the CoNLL-2003 English data, which is annotated with four types: person (PER), location (LOC), organization (ORG), and miscellaneous (MISC). In addition, the data is tagged with partsof-speech (PoS), and we compute word clusters running the Brown clustering algorithm (Brown et al., 1992) on the words in the training set.",
"cite_spans": [
{
"start": 275,
"end": 295,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "5.2"
},
{
"text": "We consider annotated entity phrases as candidate entities, and all single nouns that are not part of an entity as candidate non-entities (O). Both candidate entities and non-entities will be referred to as candidates in the remaining of this section. We lowercase all candidates and remove the am- biguous ones (i.e., those with more than one label in different mentions). 5 To simulate a minimal supervision, we create supervision seeds by picking the n most frequent training candidates for entity types, and the m most frequent candidate non-entities. We create seeds of various sizes n-m, namely 10-30, 40-120, 640-1920, as well as all of the candidates. For each seed, the training set consists of all training mentions that involve entities in the seed. Table 1 shows the smaller seed, as well as the number of mentions for each seed size.",
"cite_spans": [
{
"start": 374,
"end": 375,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 761,
"end": 768,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Setting",
"sec_num": "5.2"
},
{
"text": "For evaluation we use the development and test sections of the data, but we remove the instances of candidates in the training data (i.e., that are in the all seed). We do not remove instances that are ambiguous in the tests. 6 As evaluation metric we use the average F1 score computed over all entity types, excluding the non-entity type. ",
"cite_spans": [
{
"start": 226,
"end": 227,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "5.2"
},
{
"text": "We refer to context as the sequence of tokens before (left context) and after (right context) a candidate mention in a sentence. Different classifiers can be built using different representations of the contexts. For example we can change the window size of the context sequence (i.e., for a window size of 1 we only use the last token before the mention and the first token after the mention). We can treat the left and right contexts independently of each other, we can treat them as a unique combination, or we can use both. We can also choose to use the word form of a token, its PoS tag, a word cluster, or a combination of these. we will use the elementary features that are more predictive and compact: clusters and PoS tags in windows of size at most 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Representations",
"sec_num": "5.3"
},
{
"text": "We compare the performance of models trained using the nuclear norm regularizer with models trained using 1 and 2 regularizers. To train each model, we validate the regularization parameter and the number of iterations on development data, trying a wide range of values. The best performing configuration is then used for the comparison. Figure 1 shows results on the development set for different feature sets. We started representing context using cluster labels, as it is the most compact representation obtaining good results in preliminary experiments. We tried several conjunctions: a conjunction of the left and right context, as well as conjunctions of left and right contexts and features of the candidate entity. We also tried all different conjunction combinations of the contexts and the candidate entity features, as well as adding PoS tags to represent contexts. To represent an entity candidate we use standard traits of the spelling of the mention, such as capitalization, ation. Using our richest feature set, the model obtains 76.76 of accuracy in the development, for the task of classifing entities with correct boundaries. If we add features capturing the full entity and its tokens, then the accuracy is 87.63, which is similar to state-of-the-art performance (the best results in literature typically exploit additional gazetteers). Since our evaluation focuses on unknown entities, our features do not include information about the word tokens of entites. the existence of symbols, as well as the number of tokens in the candidate. See Table 3 for the definition of the features describing entity candidates. We observe that for most conjunction settings our regularizer performs better than the 1 and 2 regularizers. Using the best model from each regularizer, we evaluated on the test set. Table 4 shows the test results. For all seed sets, the nuclear norm regularizer obtains the best average F1 performance. 
This shows that encouraging the max-entropy model to operate on a low-dimensional space is effective. Moreover, Figure 2 shows model performance as a function of the number of dimensions of the intrinsic projection. The model obtains good performance even when only a few intrinsic dimensions are used. [Figure caption: Parameter matrix of the low-rank model in Figure 1f trained with the 10-30 seed, with respect to observations of the associated features in training and development. Non-white conjunctions correspond to non-zero weights: black for conjunctions seen in both the training and development sets; blue for those seen in training but not in development; red for those observed only in development; yellow for those observed in neither.]",
"cite_spans": [],
"ref_spans": [
{
"start": 338,
"end": 346,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1560,
"end": 1567,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 1816,
"end": 1824,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 2049,
"end": 2058,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 2241,
"end": 2250,
"text": "Figure 1f",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparing Regularizers",
"sec_num": "5.4"
},
{
"text": "rank model in Figure 1f trained with the 10-30 seed, with respect to observed features in training and development data. Many of the conjunctions of the development set were never observed in the training set. Our regularizer framework is able to propagate weights from the conjunctive features seen in training to unseen conjunctive features that are close to each other in the projected space (these are the yellow and red cells in the matrix). In contrast, 1 and 2 regularization techniques can not put weight on unseen conjunctions.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 23,
"text": "Figure 1f",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparing Regularizers",
"sec_num": "5.4"
},
{
"text": "We have developed a low-rank regularization framework for training max-entropy models in sparse conjunctive feature spaces. Our formulation is based on using tensors to parameterize classifiers. We control the capacity of the model using the nuclear-norm of a matricization of the tensor. Overall, our formulation results in a convex procedure for training model parameters. We have experimented with these techniques in the context of learning entity classifiers. Compared to 1 and 2 penalties, the low-rank model obtains better performance, without the need to manually specify feature conjunctions. In our analysis, we have illustrated how the low-rank approach can assign non-zero weights to conjunctions that were unobserved at training, but are similar to observed conjunctions with respect to the low-dimensional projection of their elements. We have used matricization of a tensor to define its rank, using a fixed transformation of the tensor into a matrix. Future work should explore how to combine efficiently different transformations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "There are many ways of defining the rank of a tensor. In this paper we matricize tensors into matrices and use the rank of the resulting matrix. Matricization is also referred to as unfolding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In general, all models in this paper accept real-valued feature functions. But we focus on binary indicator features because in practice these are the standard type of features in NLP classifiers, and the ones we use here. In fact, in this paper we develop feature spaces based on products of elementary feature functions, in which case the resulting representations correspond to conjunctions of the elementary features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In preliminary experiments we tried variations, such as having right prefixes in the columns, and left prefixes, entity tags and classes in the rows. We only observer minor, nonsignificant variations in the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Also known as the trace norm. tions parameterized by matrices which result lexical embeddings tailored for a particular linguistic relation. Like in our case, the low-dimensional latent projections in these papers are learned implicitly by imposing low-rank constraints on the predictions of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the CoNLL-2003 English training set, only 235 candidates are ambiguous out of 13,441 candidates, i.e. less than 2%. This suggests that in this data the difficulty behind the task is in recognizing and classifying unseen entities, and not in disambiguating known entities in a certain context.6 After removing the ambiguous candidates from the training data, and removing candidates seen in the training from the development and test sets, this is the number of mentions (and number of unique candidates in parenthesis) in the data used in our experiments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "regularization, using the 10-30 seed. We use the lexical representation (the word itself) and a word cluster representation of the context tokens and use a window size of one to three. We use two types of features: bag-of-words features (1grams of tokens in the specified window) and ngram features (with n smaller or equal to the window size). The performance of using word clusters is comparable, and sometimes better, to using lexical representations. Moreover, using a longer window, in this case, does not necessarily result in better performance.7 In the rest of the experiments 7 Our learner and feature configuration, using 2 regularization, obtains state-of-the-art results on the standard evalu-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "regularization. Each plot corresponds to a different conjunctive feature space with respect to window size (1 or 2), context representation (cluster with/out PoS), using entity features or not, and combining or not full conjunctions with lower-order conjunctions and elementary features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Gabriele Musillo and the anonymous reviewers for their helpful comments and suggestions. This work has been partially funded by the Spanish Government through the SKATER project (TIN2012-38584-C06-01) and an FPI predoctoral grant for Audi Primadhanty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A framework for learning predictive structures from multiple tasks and unlabeled data",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Kubota",
"suffix": ""
},
{
"first": "Ando",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "J. Mach. Learn. Res",
"volume": "6",
"issue": "",
"pages": "1817--1853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Kubota Ando and Tong Zhang. 2005. A frame- work for learning predictive structures from multi- ple tasks and unlabeled data. J. Mach. Learn. Res., 6:1817-1853, December.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Combining labeled and unlabeled data with co-training",
"authors": [
{
"first": "Avrim",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Eleventh Annual Conference on Computational Learning Theory, COLT' 98",
"volume": "",
"issue": "",
"pages": "92--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, COLT' 98, pages 92-100, New York, NY, USA. ACM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Jenifer",
"middle": [
"C"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Peter V. deSouza, Robert L. Mer- cer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural lan- guage. Computational Linguistics, 18:467-479.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised models for named entity classification",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Yoram Singer. 1999. Unsuper- vised models for named entity classification. In Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Cor- pora.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Efficient online and batch learning using forward backward splitting",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Machine Learning Research",
"volume": "10",
"issue": "",
"pages": "2899--2934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi and Yoram Singer. 2009. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, 10:2899- 2934.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Prototype-driven learning for sequence models",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL '06",
"volume": "",
"issue": "",
"pages": "320--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of the Main Conference on Human Language Tech- nology Conference of the North American Chap- ter of the Association of Computational Linguistics, HLT-NAACL '06, pages 320-327, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Low-rank tensors for scoring dependency structures",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Xin",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1381--1391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scor- ing dependency structures. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1381-1391, Baltimore, Maryland, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Distributional similarity vs. pu learning for entity set expansion",
"authors": [
{
"first": "Xiao-Li",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "See-Kiong",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 Conference Short Papers, ACLShort '10",
"volume": "",
"issue": "",
"pages": "359--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao-Li Li, Lei Zhang, Bing Liu, and See-Kiong Ng. 2010. Distributional similarity vs. pu learn- ing for entity set expansion. In Proceedings of the ACL 2010 Conference Short Papers, ACLShort '10, pages 359-364, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning Task-specific Bilexical Embeddings",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Pranava Swaroop Madhyastha",
"suffix": ""
},
{
"first": "Ariadna",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Quattoni",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "161--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranava Swaroop Madhyastha, Xavier Carreras, and Ariadna Quattoni. 2014. Learning Task-specific Bilexical Embeddings. In Proceedings of COLING 2014, the 25th International Conference on Compu- tational Linguistics: Technical Papers, pages 161- 171, Dublin, Ireland, August. Dublin City Univer- sity and Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mallet: A machine learning for language toolkit",
"authors": [
{
"first": "Andrew",
"middle": [
"K"
],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew K. McCallum. 2002. Mallet: A machine learning for language toolkit.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning dictionaries for named entity recognition using minimal supervision",
"authors": [
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "452--461",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arvind Neelakantan and Michael Collins. 2014. Learning dictionaries for named entity recognition using minimal supervision. In Proceedings of the 14th Conference of the European Chapter of the As- sociation for Computational Linguistics, pages 452- 461, Gothenburg, Sweden, April. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Spectral regularization for max-margin sequence tagging",
"authors": [
{
"first": "Ariadna",
"middle": [],
"last": "Quattoni",
"suffix": ""
},
{
"first": "Borja",
"middle": [],
"last": "Balle",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International Conference on Machine Learning (ICML-14)",
"volume": "",
"issue": "",
"pages": "1710--1718",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ariadna Quattoni, Borja Balle, Xavier Carreras, and Amir Globerson. 2014. Spectral regularization for max-margin sequence tagging. In Tony Jebara and Eric P. Xing, editors, Proceedings of the 31st Inter- national Conference on Machine Learning (ICML- 14), pages 1710-1718. JMLR Workshop and Con- ference Proceedings.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Design challenges and misconceptions in named entity recognition",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009)",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design chal- lenges and misconceptions in named entity recog- nition. In Proceedings of the Thirteenth Confer- ence on Computational Natural Language Learning (CoNLL-2009), pages 147-155, Boulder, Colorado, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Relation extraction with matrix factorization and universal schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"M"
],
"last": "Marlin",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "74--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 74-84, Atlanta, Georgia, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Towards Combined Matrix and Tensor Factorization for Universal Schema Relation Extraction",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2015,
"venue": "NAACL Workshop on Vector Space Modeling for NLP (VSM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Singh, Tim Rockt\u00e4schel, and Sebastian Riedel. 2015. Towards Combined Matrix and Tensor Fac- torization for Universal Schema Relation Extraction. In NAACL Workshop on Vector Space Modeling for NLP (VSM).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Rank, tracenorm and max-norm",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Srebro",
"suffix": ""
},
{
"first": "Adi",
"middle": [],
"last": "Shraibman",
"suffix": ""
}
],
"year": 2005,
"venue": "Learning Theory",
"volume": "",
"issue": "",
"pages": "545--560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Srebro and Adi Shraibman. 2005. Rank, trace- norm and max-norm. In Learning Theory, pages 545-560. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Maximum-margin matrix factorization",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Srebro",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Rennie",
"suffix": ""
},
{
"first": "Tommi",
"middle": [
"S"
],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2004,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1329--1336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Srebro, Jason Rennie, and Tommi S Jaakkola. 2004. Maximum-margin matrix factorization. In Advances in neural information processing systems, pages 1329-1336.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recog- nition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Universal schema for entity type prediction",
"authors": [
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC '13",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Limin Yao, Sebastian Riedel, and Andrew McCallum. 2013. Universal schema for entity type prediction. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC '13, pages 79-84, New York, NY, USA. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Avg. F1 on development for increasing dimensions, using the low-rank model inFigure 1etrained with all seeds.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "shows the parameter matrix of the low-Full parameter matrix of the low-rank model. The ticks in x-axis indicate the space for different entity types, while the ticks in y-axis indicate the space for different prefix context representations.a ll \u2212 c a p 1 = 1 a ll \u2212 c a p 1 = 0 a ll \u2212 c a p 2 = 1 a ll \u2212 c a p 2 = 0 a ll \u2212 lo w = 1 a ll \u2212 lo w = n b \u2212 to k = 1 n b \u2212 to k = 2 n b \u2212 to k > 2 c a p = 1 c a p = 0 d u m m y prefix=NNP (b) The subblock for PER entity type and PoS representation of the prefixes. The ticks in x-axis indicate the space of the entity features used, while the tick in y-axis indicates an example of a frequently observed prefix for this entity type.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Parameter matrix of the low-rank model in",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table/>",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"text": "Average-F1 of classification of unseen entity candidates on development data, using the 10-30 training seed and 2 regularization, for different conjunctive spaces (elementary only, full conjunctions, all). Bag-of-words elementary features contain all clusters/PoS in separate windows to the left and to the right of the candidate. N-grams elementary features contain all n-grams of clusters/PoS in separate left and right windows (e.g. for size 3 it includes unigrams, bigrams and trigrams on each side).",
"content": "<table/>",
"num": null
},
"TABREF5": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>compares different context represen-</td></tr><tr><td>tations and their performance in classifying un-</td></tr><tr><td>seen candidates using maximum-entropy classi-</td></tr><tr><td>fiers trained with Mallet (McCallum, 2002) with</td></tr></table>",
"num": null
},
"TABREF6": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>cap2 are from</td></tr></table>",
"num": null
},
"TABREF7": {
"html": null,
"type_str": "table",
"text": "Results on the test for models trained with different sizes of the seed, using the parameters and features that obtain the best evaluation results the development set. NN refers to nuclear norm regularization, L1 and L2 refer to 1 and 2 regularization. Only test entities unseen at training are considered. Avg. F1 is over PER, LOC, ORG and MISC, excluding O.",
"content": "<table/>",
"num": null
}
}
}
} |