{
"paper_id": "Q17-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:12:12.051876Z"
},
"title": "Domain-Targeted, High Precision Knowledge Extraction",
"authors": [
{
"first": "Bhavana",
"middle": [],
"last": "Dalvi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": {
"addrLine": "2157 N Northlake Way Suite 110",
"postCode": "98103",
"settlement": "Seattle",
"region": "WA"
}
},
"email": "bhavanad@allenai.org"
},
{
"first": "Mishra",
"middle": [
"Niket"
],
"last": "Tandon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": {
"addrLine": "2157 N Northlake Way Suite 110",
"postCode": "98103",
"settlement": "Seattle",
"region": "WA"
}
},
"email": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": {
"addrLine": "2157 N Northlake Way Suite 110",
"postCode": "98103",
"settlement": "Seattle",
"region": "WA"
}
},
"email": "peterc@allenai.org"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Our goal is to construct a domain-targeted, high precision knowledge base (KB), containing general (subject,predicate,object) statements about the world, in support of a downstream question-answering (QA) application. Despite recent advances in information extraction (IE) techniques, no suitable resource for our task already exists; existing resources are either too noisy, too named-entity centric, or too incomplete, and typically have not been constructed with a clear scope or purpose. To address these, we have created a domaintargeted, high precision knowledge extraction pipeline, leveraging Open IE, crowdsourcing, and a novel canonical schema learning algorithm (called CASI), that produces high precision knowledge targeted to a particular domain-in our case, elementary science. To measure the KB's coverage of the target domain's knowledge (its \"comprehensiveness\" with respect to science) we measure recall with respect to an independent corpus of domain text, and show that our pipeline produces output with over 80% precision and 23% recall with respect to that target, a substantially higher coverage of tuple-expressible science knowledge than other comparable resources. We have made the KB publicly available 1 .",
"pdf_parse": {
"paper_id": "Q17-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "Our goal is to construct a domain-targeted, high precision knowledge base (KB), containing general (subject,predicate,object) statements about the world, in support of a downstream question-answering (QA) application. Despite recent advances in information extraction (IE) techniques, no suitable resource for our task already exists; existing resources are either too noisy, too named-entity centric, or too incomplete, and typically have not been constructed with a clear scope or purpose. To address these, we have created a domaintargeted, high precision knowledge extraction pipeline, leveraging Open IE, crowdsourcing, and a novel canonical schema learning algorithm (called CASI), that produces high precision knowledge targeted to a particular domain-in our case, elementary science. To measure the KB's coverage of the target domain's knowledge (its \"comprehensiveness\" with respect to science) we measure recall with respect to an independent corpus of domain text, and show that our pipeline produces output with over 80% precision and 23% recall with respect to that target, a substantially higher coverage of tuple-expressible science knowledge than other comparable resources. We have made the KB publicly available 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "While there have been substantial advances in knowledge extraction techniques, the availability of high precision, general knowledge about the world, remains elusive. Specifically, our goal is a large, high precision body of (subject,predicate,object) statements relevant to elementary science, to support a downstream QA application task. Although there are several impressive, existing resources that can contribute to our endeavor, e.g., NELL (Carlson et al., 2010) , ConceptNet (Speer and Havasi, 2013) , WordNet (Fellbaum, 1998) , WebChild (Tandon et al., 2014) , Yago (Suchanek et al., 2007) , FreeBase (Bollacker et al., 2008) , and ReVerb-15M (Fader et al., 2011) , their applicability is limited by both",
"cite_spans": [
{
"start": 446,
"end": 468,
"text": "(Carlson et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 482,
"end": 506,
"text": "(Speer and Havasi, 2013)",
"ref_id": "BIBREF28"
},
{
"start": 517,
"end": 533,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 545,
"end": 566,
"text": "(Tandon et al., 2014)",
"ref_id": "BIBREF31"
},
{
"start": 574,
"end": 597,
"text": "(Suchanek et al., 2007)",
"ref_id": "BIBREF30"
},
{
"start": 609,
"end": 633,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 651,
"end": 671,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 limited coverage of general knowledge (e.g.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "FreeBase and NELL primarily contain knowledge about Named Entities; WordNet uses only a few (< 10) semantic relations) \u2022 low precision (e.g., many ConceptNet assertions express idiosyncratic rather than general knowledge)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal in this work is to create a domain-targeted knowledge extraction pipeline that can overcome these limitations and output a high precision KB of triples relevant to our end task. Our approach leverages existing techniques of open information extraction (Open IE) and crowdsourcing, along with a novel schema learning algorithm. There are three main contributions of this work. First, we present a high precision extraction pipeline able to extract (subject,predicate,object) tuples relevant to a domain with precision in excess of 80%. The input to the pipeline is a corpus, a sensedisambiguated domain vocabulary, and a small set of entity types. The pipeline uses a combination of text filtering, Open IE, Turker annotation on samples, and precision prediction to generate its output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Second, we present a novel canonical schema induction method (called CASI) that identifies clusters of similar-meaning predicates, and maps them to the most appropriate general predicate that captures that canonical meaning. Open IE, used in the early part of our pipeline, generates triples containing a large number of predicates (expressed as verbs or verb phrases), but equivalences and generalizations among them are not captured. Synonym dictionaries, paraphrase databases, and verb taxonomies can help identify these relationships, but only partially so because the meaning of a verb often shifts as its subject and object vary, something that these resources do not explicitly model. To address this challenge, we have developed a corpus-driven method that takes into account the subject and object of the verb, and thus can learn argument-specific mapping rules, e.g., the rule \"(x:Animal,found in,y:Location) \u2192 (x:Animal,live in,y:Location)\" states that if some animal is found in a location then it also means the animal lives in the location. Note that 'found in' can have very different meaning in the schema \"(x:Substance,found in,y:Material). The result is a KB whose general predicates are more richly populated, still with high precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we contribute the science KB itself as a resource publicly available 2 to the research community. To measure how \"complete\" the KB is with respect to the target domain (elementary science), we use an (independent) corpus of domain text to characterize the target science knowledge, and measure the KB's recall at high (>80%) precision over that corpus (its \"comprehensiveness\" with respect to science). This measure is similar to recall at the point P=80% on the PR curve, except measured against a domain-specific sample of data that reflects the distribution of the target domain knowledge. Comprehensiveness thus gives us an approximate notion of the completeness of the KB for (tuple-expressible) facts in our target domain, something that has been lacking in earlier KB construction research. We show that our KB has comprehensiveness (recall of domain facts at >80% precision) of 23% with respect to science, a substantially higher coverage of tuple-expressible science knowledge than other comparable resources. We are making the KB publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We discuss the related work in Section 2. In Section 3, we describe the domain-targeted pipeline, including how the domain is characterized to the algorithm and the sequence of filters and predictors used. In Section 4, we describe how the relationships between predicates in the domain are identified and the more general predicates further populated. Finally in Section 5, we evaluate our approach, including evaluating its comprehensiveness (high-precision coverage of science knowledge).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Outline",
"sec_num": null
},
{
"text": "There has been substantial, recent progress in knowledge bases that (primarily) encode knowledge about Named Entities, including Freebase (Bollacker et al., 2008) , Knowledge Vault (Dong et al., 2014) , DBPedia (Auer et al., 2007) , and others that hierarchically organize nouns and named entities, e.g., Yago (Suchanek et al., 2007) . While these KBs are rich in facts about named entities, they are sparse in general knowledge about common nouns (e.g., that bears have fur). KBs covering general knowledge have received less attention, although there are some notable exceptions constructed using manual methods, e.g., WordNet (Fellbaum, 1998) , crowdsourcing, e.g., ConceptNet (Speer and Havasi, 2013) , and, more recently, using automated methods, e.g., WebChild (Tandon et al., 2014) . While useful, these resources have been constructed to target only a small set of relations, providing only limited coverage for a domain of interest.",
"cite_spans": [
{
"start": 138,
"end": 162,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 181,
"end": 200,
"text": "(Dong et al., 2014)",
"ref_id": "BIBREF9"
},
{
"start": 211,
"end": 230,
"text": "(Auer et al., 2007)",
"ref_id": "BIBREF0"
},
{
"start": 310,
"end": 333,
"text": "(Suchanek et al., 2007)",
"ref_id": "BIBREF30"
},
{
"start": 629,
"end": 645,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 680,
"end": 704,
"text": "(Speer and Havasi, 2013)",
"ref_id": "BIBREF28"
},
{
"start": 767,
"end": 788,
"text": "(Tandon et al., 2014)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To overcome relation sparseness, the paradigm of Open IE (Banko et al., 2007; Soderland et al., 2013) extracts knowledge from text using an open set of relationships, and has been used to successfully build large-scale (arg1,relation,arg2) resources such as ReVerb-15M (containing 15 million general triples) (Fader et al., 2011) . Although broad coverage, however, Open IE techniques typically produce noisy output. Our extraction pipeline can be viewed as an extension of the Open IE paradigm: we start with targeted Open IE output, and then apply a sequence of filters to substantially improve the output's precision, and learn and apply relationships between predicates.",
"cite_spans": [
{
"start": 57,
"end": 77,
"text": "(Banko et al., 2007;",
"ref_id": "BIBREF1"
},
{
"start": 78,
"end": 101,
"text": "Soderland et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 309,
"end": 329,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The task of finding and exploiting relationships between different predicates requires identifying both equivalence between relations (e.g., clustering to find paraphrases), and implication (hierarchical organization of relations). One class of approach is to use existing resources, e.g., verb taxonomies, as a source of verbal relationships, e.g., (Grycner and Weikum, 2014) , (Grycner et al., 2015) . However, the hierarchical relationship between verbs, out of context, is often unclear, and some verbs, e.g., \"have\", are ambiguous. To address this, we characterize semantic relationships not only by a verb but also by the types of its arguments. A second class of approach is to induce semantic equivalence from data, e.g., using algorithms such as DIRT (Lin and Pantel, 2001 ), RESOLVER (Yates and Etzioni, 2009) , WiseNet (Moro and Navigli, 2012) , and AMIE (Gal\u00e1rraga et al., 2013) . These allow relational equivalences to be inferred, but are also noisy. In our pipeline, we combine these two approaches together, by clustering relations using a similarity measure computed from both existing resources and data.",
"cite_spans": [
{
"start": 350,
"end": 376,
"text": "(Grycner and Weikum, 2014)",
"ref_id": "BIBREF16"
},
{
"start": 379,
"end": 401,
"text": "(Grycner et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 760,
"end": 781,
"text": "(Lin and Pantel, 2001",
"ref_id": "BIBREF18"
},
{
"start": 794,
"end": 819,
"text": "(Yates and Etzioni, 2009)",
"ref_id": "BIBREF34"
},
{
"start": 830,
"end": 854,
"text": "(Moro and Navigli, 2012)",
"ref_id": "BIBREF21"
},
{
"start": 866,
"end": 890,
"text": "(Gal\u00e1rraga et al., 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A novel feature of our approach is that we not only cluster the (typed) relations, but also identify a canonical relation that all the other relations in a cluster can be mapped to, without recourse to human annotated training data or a target relational vocabulary (e.g., from Freebase). This makes our problem setting different from that of universal schema (Riedel et al., 2013) where the clusters of relations are not explicitly represented and mapping to canon-ical relations can be achieved given an existing KB like Freebase. Although no existing methods can be directly applied in our problem setting, the AMIEbased schema clustering method of (Gal\u00e1rraga et al., 2014) can be modified to do this also. We have implemented this modification (called AMIE*, described in Section 5.3), and we use it as a baseline to compare our schema clustering method (CASI) against.",
"cite_spans": [
{
"start": 360,
"end": 381,
"text": "(Riedel et al., 2013)",
"ref_id": "BIBREF25"
},
{
"start": 652,
"end": 676,
"text": "(Gal\u00e1rraga et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, interactive methods have been used to create common sense knowledge bases, for example ConceptNet (Speer and Havasi, 2013; Liu and Singh, 2004) includes a substantial amount of knowledge manually contributed by people through a Web-based interface, and used in numerous applications (Faaborg and Lieberman, 2006; Dinakar et al., 2012) . More recently there has been work on interactive methods (Dalvi et al., 2016; Wolfe et al., 2015; Soderland et al., 2013) , which can be seen as a \"machine teaching\" approach to KB construction. These approaches focus on human-in-theloop methods to create domain specific knowledge bases. Such approaches are proven to be effective on domains where expert human input is available. In contrast, our goal is to create extraction techniques that need little human supervision, and result in comprehensive coverage of the target domain.",
"cite_spans": [
{
"start": 107,
"end": 131,
"text": "(Speer and Havasi, 2013;",
"ref_id": "BIBREF28"
},
{
"start": 132,
"end": 152,
"text": "Liu and Singh, 2004)",
"ref_id": "BIBREF19"
},
{
"start": 292,
"end": 321,
"text": "(Faaborg and Lieberman, 2006;",
"ref_id": "BIBREF10"
},
{
"start": 322,
"end": 343,
"text": "Dinakar et al., 2012)",
"ref_id": "BIBREF8"
},
{
"start": 403,
"end": 423,
"text": "(Dalvi et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 424,
"end": 443,
"text": "Wolfe et al., 2015;",
"ref_id": null
},
{
"start": 444,
"end": 467,
"text": "Soderland et al., 2013)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We first describe the overall extraction pipeline. The pipeline is a chain of filters and transformations, outputting (subject,predicate,object) triples at the end. It uses a novel combination of familiar technologies, plus a novel schema learning module, described in more detail in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Extraction Pipeline",
"sec_num": "3"
},
{
"text": "Unlike many prior efforts, our goal is a domainfocused KB. To specify the KB's extent and focus, we use two inputs: 1. A domain vocabulary listing the nouns and verbs relevant to the domain. In our particular application, the domain is Elementary science, and the domain vocabulary is the typical vocabulary of a Fourth Grader (\u223c10 year old child), augmented with additional science terms from 4th Grade Science texts, comprising of about 6000 nouns, 2000 verbs, 2000 adjectives, and 600 adverbs. 2. A small set of types for the nouns, listing the primary types of entity relevant to the domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inputs and Outputs",
"sec_num": "3.1"
},
{
"text": "In our domain, we use a manually constructed inventory of 45 types (animal, artifact, body part, measuring instrument, etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inputs and Outputs",
"sec_num": "3.1"
},
{
"text": "In addition, the pipeline also uses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inputs and Outputs",
"sec_num": "3.1"
},
{
"text": "3. a large, searchable text corpus to provide sentences for knowledge extraction. In our case, we use the Web via a search engine (Bing), followed by filters to extract clean sentences from search results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inputs and Outputs",
"sec_num": "3.1"
},
{
"text": "Although, in general, nouns are ambiguous, in a targeted domain there is typically a clear, primary sense that can be identified. For example, while in general the word \"pig\" can refer to an animal, a person, a mold, or a block of metal, in 4th Grade Science it universally refers to an animal 3 . We leverage this for our task by assuming one sense per noun in the domain vocabulary, and notate these senses by manually assigning each noun to one of the entity types in the type inventory. Verbs are more challenging, because even within a domain they are often polysemous out of context (e.g., \"have\"). To handle this, we refer to verbs along with their argument types, the combination expressed as a verbal schema, e.g., (Animal,\"have\",BodyPart). This allows us to distinguish different contextual uses of a verb without introducing a proliferation of verb sense symbols. Others have taken a similar approach of using type restrictions to express verb semantics (Pantel et al., 2007; Del Corro et al., 2014) .",
"cite_spans": [
{
"start": 965,
"end": 986,
"text": "(Pantel et al., 2007;",
"ref_id": "BIBREF22"
},
{
"start": 987,
"end": 1010,
"text": "Del Corro et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Senses",
"sec_num": "3.2"
},
{
"text": "The pipeline is sketched in Figure 1 and exemplified in Table 1 , and consists of six steps:",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 36,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 56,
"end": 63,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Pipeline",
"sec_num": "3.3"
},
{
"text": "The first step is to construct a collection of (loosely) domain-appropriate sentences from the larger corpus. There are multiple ways this could be done, but in our case we found the most effective way was as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "3.3.1"
},
{
"text": "a. List the core topics in the domain of interest (science), here producing 81 topics derived from syllabus guides. b. For each topic, author 1-3 query templates, parameterized using one or more of the 45 domain types. For example, for the topic \"animal adapation\", a template was \"[Animal] adaptation environment\", parameterized by the type Animal. The purpose of query templates is to steer the search engine to domain-relevant text. c. For each template, automatically instantiate its type(s) in all possible ways using the domain vocabulary members of those types. d. Use each instantiation as a search query over the corpus, and collect sentences in the top (here, 10) documents retrieved. In our case, this resulted in a generally domainrelevant corpus of 7M sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "3.3.1"
},
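{
"text": "To make step (c) concrete, the following is a minimal Python sketch of the query-template instantiation (illustrative only, not the authors' code; the vocabulary, templates, and search API are assumed stand-ins):\nfrom itertools import product\n\ntype_members = {'Animal': ['pig', 'bird', 'frog'], 'Location': ['pond', 'desert']}\ntemplates = ['[Animal] adaptation environment', '[Animal] live [Location]']\n\ndef instantiate(template):\n    # Fill each type slot with every vocabulary member of that type.\n    slots = [t for t in type_members if '[' + t + ']' in template]\n    for combo in product(*(type_members[t] for t in slots)):\n        query = template\n        for t, word in zip(slots, combo):\n            query = query.replace('[' + t + ']', word, 1)\n        yield query\n\nqueries = [q for tpl in templates for q in instantiate(tpl)]\n# Each query is issued to the search engine; clean sentences from the\n# top-10 retrieved documents form the ~7M-sentence working corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "3.3.1"
},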
{
"text": "Second, we run an open information extraction system over the sentences to generate an initial set of (np, vp, np) tuples. In our case, we use OpenIE 4.2 (Soderland et al., 2013; Mausam et al., 2012) .",
"cite_spans": [
{
"start": 154,
"end": 178,
"text": "(Soderland et al., 2013;",
"ref_id": "BIBREF27"
},
{
"start": 179,
"end": 199,
"text": "Mausam et al., 2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tuple Generation",
"sec_num": "3.3.2"
},
{
"text": "Third, the np arguments are replaced with their headwords, by applying a simple headword filtering utility. We discard tuples with infrequent vps or verbal schemas (here vp frequency < 10, schema frequency < 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Headword Extraction and Filtering",
"sec_num": "3.3.3"
},
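{
"text": "A minimal sketch of this step, assuming spaCy for head-noun extraction (an assumption; the paper only mentions a simple headword filtering utility):\nfrom collections import Counter\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\n\ndef headword(np):\n    # The syntactic root of the noun phrase is its headword (lemmatized).\n    return next(tok.lemma_ for tok in nlp(np) if tok.head == tok)\n\ndef filter_tuples(tuples, min_vp=10, min_schema=5):\n    # tuples: (np1, vp, np2, type1, type2); types come from the inventory.\n    heads = [(headword(a), vp, headword(b), t1, t2) for a, vp, b, t1, t2 in tuples]\n    vp_count = Counter(vp for _, vp, _, _, _ in heads)\n    schema_count = Counter((t1, vp, t2) for _, vp, _, t1, t2 in heads)\n    return [h for h in heads\n            if vp_count[h[1]] >= min_vp and schema_count[(h[3], h[1], h[4])] >= min_schema]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Headword Extraction and Filtering",
"sec_num": "3.3.3"
},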
{
"text": "Pipeline Example Outputs: Inputs: corpus + vocabulary + types 1. Sentence selection:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Headword Extraction and Filtering",
"sec_num": "3.3.3"
},
{
"text": "\"In addition, green leaves have chlorophyll.\") 2. Tuple Generation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Headword Extraction and Filtering",
"sec_num": "3.3.3"
},
{
"text": "(\"green leaves\" \"have\" \"chlorophyll\") 3. Headword Extraction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Headword Extraction and Filtering",
"sec_num": "3.3.3"
},
{
"text": "(\"leaf\" \"have\" \"chlorophyll\") 4. Refinement and Scoring:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Headword Extraction and Filtering",
"sec_num": "3.3.3"
},
{
"text": "(\"leaf\" \"have\" \"chlorophyll\") @0.89 (score) 5. Phrasal tuple generation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Headword Extraction and Filtering",
"sec_num": "3.3.3"
},
{
"text": "(\"leaf\" \"have\" \"chlorophyll\") @0.89 (score) (\"green leaf\" \"have\" \"chlorophyll\") @0.89 (score) 6. Relation Canonicalization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Headword Extraction and Filtering",
"sec_num": "3.3.3"
},
{
"text": "(\"leaf\" \"have\" \"chlorophyll\") @0.89 (score) (\"green leaf\" \"have\" \"chlorophyll\") @0.89 (score) (\"leaf\" \"contain\" \"chlorophyll\") @0.89 (score) (\"green leaf\" \"contain\" \"chlorophyll\") @0.89 (score) Table 1 : Illustrative outputs of each step of the pipeline for the term \"leaf\".",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 201,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Headword Extraction and Filtering",
"sec_num": "3.3.3"
},
{
"text": "Fourth, to improve precision, Turkers are asked to manually score a proportion (in our case, 15%) of the tuples, then a model is constructed from this data to score the remainder. For the Turk task, Turkers were asked to label each tuple as true or false/nonsense. Each tuple is labeled 3 times, and a majority vote is applied to yield the overall label. The semantics we apply to tuples (and which we explain to Turkers) is one of plausibility: if the fact is true for some of the arg1's, then score it as true. For example, if it is true that some birds lay eggs, then the tuple (bird, lay, egg) should be marked true. The degree of manual vs. automated can be selected here depending on the precision/cost constraints of the end application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Refinement and Scoring",
"sec_num": "3.3.4"
},
{
"text": "We then build a model using this data to predict scores on other tuples. For this model, we use logistic regression applied to a set of tuple features. These tuple features include normalized count features, schema and type level features, PMI statistics and semantic features. Normalized count features are based on the number of occurrences of tuples, and the number of unique sentences the tuple is extracted from. Schema and type level features are derived from the subject and object type, and frequency of schema in the corpus. Semantic features are based on whether subject and object are ab-stract vs. concrete (using Turney et al's abstractness database (Turney et al., 2011)), and whether there are any modal verbs (e.g. may, should etc.) in the original sentence. PMI features are derived from the count statistics of subject, predicate, object and entire triple in the Google n-gram corpus (Brants and Franz, 2006) .",
"cite_spans": [
{
"start": 902,
"end": 926,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Refinement and Scoring",
"sec_num": "3.3.4"
},
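{
"text": "A hedged sketch of this scoring model (scikit-learn stands in here; the feature set follows the description above, but the dictionary keys and learner configuration are illustrative assumptions):\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\ndef features(t):\n    # One row per tuple: normalized counts, schema/type features, PMI\n    # statistics, and semantic (abstractness, modality) features. The\n    # keys are illustrative stand-ins for the real lookups.\n    return [t['norm_count'], t['n_unique_sentences'], t['schema_freq'],\n            t['subj_abstractness'], t['obj_abstractness'],\n            t['has_modal'], t['pmi_triple']]\n\ndef train_scorer(labeled):\n    # labeled: the Turker-annotated sample (~15% of tuples), label in {0, 1}.\n    X = np.array([features(t) for t, y in labeled])\n    y = np.array([y for t, y in labeled])\n    return LogisticRegression(max_iter=1000).fit(X, y)\n\ndef score(model, tuples):\n    # The probability of the 'true' class is used as the tuple's score.\n    return model.predict_proba(np.array([features(t) for t in tuples]))[:, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Refinement and Scoring",
"sec_num": "3.3.4"
},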
{
"text": "Fifth, for each headword tuple (n, vp, n), retrieve the original phrasal triples (np, vp, np) it was derived from, and add sub-phrase versions of these phrasal tuples to the KB. For example, if a headword tuple (cat, chase, mouse) was derived from (A black furry cat, chased, a grey mouse) then the algorithm considers adding (black cat, chase, mouse) (black furry cat, chase, mouse) (black cat, chase, grey mouse) (black furry cat, chase, grey mouse)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Tuple Generation",
"sec_num": "3.3.5"
},
{
"text": "Valid noun phrases are those following a pattern \"<Adj>* <Noun>+\". The system only retains constructed phrasal tuples for which both subject and object phrases satisfy PMI and count thresholds 4 , computed using the Google N-gram corpus (Brants and Franz, 2006) . In general, if the headword tuple is scored as correct and the PMI and count thresholds are met, then the phrasal originals and variants are also correct. (We evaluate this in Section 5.2).",
"cite_spans": [
{
"start": 237,
"end": 261,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Tuple Generation",
"sec_num": "3.3.5"
},
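{
"text": "A minimal sketch of the sub-phrase generation (simplified to a single head noun per phrase; ngram_ok is a hypothetical stand-in for the Google N-gram PMI/count thresholds):\nfrom itertools import combinations\n\ndef subphrases(np_tokens, head):\n    # Keep the head noun and any order-preserving subset of its modifiers,\n    # approximating the '<Adj>* <Noun>+' pattern.\n    mods = [t for t in np_tokens if t != head]\n    for r in range(len(mods) + 1):\n        for keep in combinations(mods, r):\n            yield ' '.join(list(keep) + [head])\n\ndef phrasal_variants(head_tuple, phrasal_tuple, ngram_ok):\n    subj_np, vp, obj_np = phrasal_tuple\n    subj_head, _, obj_head = head_tuple\n    for s in subphrases(subj_np.split(), subj_head):\n        for o in subphrases(obj_np.split(), obj_head):\n            if ngram_ok(s) and ngram_ok(o):\n                yield (s, vp, o)\n\n# e.g. phrasal_variants(('cat', 'chase', 'mouse'),\n#                       ('black furry cat', 'chase', 'grey mouse'), ngram_ok)\n# yields (black cat, chase, mouse), (black furry cat, chase, grey mouse), etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Tuple Generation",
"sec_num": "3.3.5"
},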
{
"text": "Finally, we induce a set of schema mapping rules over the tuples that identify clusters of equivalent and similar relations, and map them to a canonical, generalized relation. These canonical, generalized relations are referred to as canonical schemas, and the induction algorithm is called CASI (Canonical Schema Induction). The rules are then applied to the tuples, resulting in additional general tuples being added to the KB. The importance of this step is that generalizations among seemingly disparate tuples are made explicit. While we could then discard tuples that are mapped to a generalized form, we instead retain them in case a query is made to the KB that requires the original fine-grained distinctions. In the next section, we describe how these schema mapping rules are learned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Induction",
"sec_num": "3.3.6"
},
{
"text": "4 Canonical Schema Induction (CASI) 4.1 Task: Induce schema mapping rules",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Induction",
"sec_num": "3.3.6"
},
{
"text": "The role of the schema mapping rules is to make generalizations among seemingly disparate tuples explicit in the KB. To do this, the system identifies clusters of relations with similar meaning, and maps them to a canonical, generalized relation. The mappings are expressed using a set of schema mapping rules, and the rules can be applied to infer additional, general triples in the KB. Informally, mapping rules should combine evidence from both external resources (e.g., verb taxonomies) and data (tuples in the KB). This observation allows us to formally define an objective function to guide the search for mapping rules. We define:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Induction",
"sec_num": "3.3.6"
},
{
"text": "\u2022 a schema is a structure (type1,verb phrase,type2) here the types are from the input type inventory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Induction",
"sec_num": "3.3.6"
},
{
"text": "\u2022 a schema mapping rule is a rule of the form schema i \u2192 schema j stating that a triple using schema i can be reexpressed using schema j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Induction",
"sec_num": "3.3.6"
},
{
"text": "\u2022 a canonical schema is a schema that does not occur on the left-hand side of any mapping rule, i.e., it does not point to any other schema.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Induction",
"sec_num": "3.3.6"
},
{
"text": "To learn a set of schema mapping rules, we select from the space of possible mapping rules so as to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Induction",
"sec_num": "3.3.6"
},
{
"text": "\u2022 maximize the quality of the selected mapping rules, i.e., maximize the evidence that the selected rules express valid paraphrases or generalization. That is we are looking for synonymous and type-of edges between schemas. This evidence is drawn from both existing resources (e.g., WordNet) and from statistical evidence (among the tuples themselves).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Induction",
"sec_num": "3.3.6"
},
{
"text": "\u2022 satisfy the constraint that every schema points to a canonical schema, or is itself a canonical schema.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Induction",
"sec_num": "3.3.6"
},
{
"text": "We can view this task as a subgraph selection problem in which the nodes are schemas, and directed edges are possible mapping rules between schemas. The learning task is to select subgraphs such that all nodes in a subgraph are similar, and point to a single, canonical node (Figure 2 ). We refer to the blue nodes in Figure 2 as induced canonical schemas.",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 284,
"text": "(Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 318,
"end": 326,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Canonical Schema Induction",
"sec_num": "3.3.6"
},
{
"text": "To solve this selection problem, we formulate it as as a linear optimization task and solve it using integer linear programming (ILP), as we now describe. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Induction",
"sec_num": "3.3.6"
},
{
"text": "To assess the quality of candidate mapping rules, we combine features from the following sources: Moby, WordNet, association rules and statistical features from our corpus. These features indicate synonymy or type-of links between schemas. For each schema S i e.g. (Animal, live in, Location) we define the relation r i as being the verb phrase (e.g. \"live in\"), and v i as the root verb of r i (e.g. \"live\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},
{
"text": "\u2022 Moby: We also use verb phrase similarity scores derived from the Moby thesaurus. Moby score M ij for a schema pair is computed by a lookup in this dataset for relation pair r i , r j or root verb pair v i , v j . This is also a directed feature, i.e. M ij = M ji . \u2022 WordNet: If there exists a troponym link path from schema r i to r j , then we define the Word-Net score W ij for this schema pair as the inverse of the number of edges that need to be Type Use which parts of schema? What kind of relations do they encode? Feature source semantic distributional subject predicate object synonym type-of temporal implication Moby WordNet AMIE-typed AMIE-untyped ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},
{
"text": "X ij \u03bb 1 * M ij + \u03bb 2 * Wij+\u03bb 3 * AT ij +\u03bb 4 * AU ij +\u03bb 5 * S ij \u2212 \u03b4 * X 1 subject to, X ij \u2208 {0, 1}, \u2200 i, j X ij are boolean. X ij + X ji \u2264 1, \u2200i, j schema mapping relation is asymmetric. j X ij \u2264 1, \u2200i select at most one parent per schema. X ij + X jk \u2212 X ik \u2264 1, \u2200 i, j, k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},
{
"text": "schema mapping relation is transitive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},
{
"text": "(1) Figure 3 : The ILP used for canonical schema induction traveled to reach r j from r i . If such a path does not exist, then we look for a path from v i to v j . Since we do not know the exact Word-Net synset applicable for each schema, we consider all possible synset choices and pick the best score as W ij . This is a directed feature i.e., W ij = W ji . Note that even though Word-Net is a high quality resource, it is not completely sufficient for our purposes. Out of 955 unique relations (verb phrases) in our KB, only 455 (47%) are present in WordNet. We can deal with these out of WordNet verb phrases by relying on other sets of features described next. \u2022 AMIE: AMIE is an association rule mining system that can produce association rules of the form: \"?a eat ?b \u2192 ?a consume ?b\". We have two sets of AMIE features: typed and untyped. Untyped features are of the form r i \u2192 r j , e.g., eat \u2192 consume, whereas typed features are of the form S i \u2192 S j , e.g., (Animal, eat, F ood) \u2192 (Animal, consume, F ood). AMIE produces real valued scores 5 between 0 to 1 for each rule. We define AU ij and AT ij as untyped and typed AMIE rule scores respectively. 5 We use PCA confidence scores produced by AMIE.",
"cite_spans": [
{
"start": 1163,
"end": 1164,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 4,
"end": 12,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},
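{
"text": "A sketch of the WordNet feature using NLTK (an assumed implementation, not the authors'): since troponymy is the inverse of verb hypernymy in WordNet, we walk up the hypernym closure of v_i's verb synsets and score 1/path-length when we reach a synset of v_j, taking the best score over synset choices:\nfrom nltk.corpus import wordnet as wn\n\ndef wordnet_score(v_i, v_j):\n    # W_ij = inverse of the number of troponym edges traveled from v_i\n    # to v_j, maximized over all synset choices; 0.0 if no path exists.\n    targets = set(wn.synsets(v_j, pos=wn.VERB))\n    best = 0.0\n    for s in wn.synsets(v_i, pos=wn.VERB):\n        frontier, depth = {s}, 0\n        while frontier:\n            depth += 1\n            frontier = {h for f in frontier for h in f.hypernyms()}\n            if frontier & targets:\n                best = max(best, 1.0 / depth)\n                break\n    return best\n\n# The feature is directed: wordnet_score(a, b) and wordnet_score(b, a)\n# generally differ, matching W_ij != W_ji above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},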
{
"text": "\u2022 Specificity: We define specificity of each relation as its IDF score in terms of the number of argument pairs it occurs with, compared to total number of argument type pairs in the corpus. The specificity score of a schema mapping rule favors more general predicates on the parent side of the rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},
{
"text": "specif icity(r) = IDF (r) SP (r) = specif icity(r) max r specif icity(r )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},
{
"text": "S ij = SP (r i ) \u2212 SP (r j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},
{
"text": "Further, we have a small set of very generic relations like \"have\" and \"be\" that are considered as relation stopwords by setting their SP (r) scores to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},
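{
"text": "A small sketch of the specificity feature under one concrete reading of the definitions above (the exact IDF formula is an assumption):\nimport math\n\ndef sp_scores(pair_counts, total_pairs, stopwords=('have', 'be')):\n    # pair_counts: relation -> number of distinct argument-type pairs it\n    # occurs with; specificity(r) = IDF(r) = log(total_pairs / count).\n    idf = {r: math.log(total_pairs / c) for r, c in pair_counts.items()}\n    top = max(idf.values()) or 1.0\n    sp = {r: v / top for r, v in idf.items()}\n    for r in stopwords:\n        sp[r] = 1.0   # generic relation stopwords are pinned to SP(r) = 1\n    return sp\n\ndef s_ij(sp, r_i, r_j):\n    # S_ij = SP(r_i) - SP(r_j): large when a specific child maps to a\n    # more general parent, the direction the ILP objective should favor.\n    return sp[r_i] - sp[r_j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},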
{
"text": "These features encode different aspects of similarity between schemas as described in Table 2 . In this work we combine semantic high-quality features from WordNet, Moby thesaurus with weak distributional similarity features from AMIE to generate schema mapping rules. We have observed that thesaurus features are very effective for predicates which are less ambiguous e.g. eat, consume, live in. Association rule features on the other hand have evidence for predicates which are very ambiguous e.g. have, be. Thus these features are complementary.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 93,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},
{
"text": "Further, these features indicate different kinds of relations between two schemas: synonymy, typeof and temporal implication (refer Table 2 ). In this work, we want to learn the schema mapping rules that capture synonymy and type-of relations and discard the temporal implications. This makes our problem setting different from that of knowledge base completion methods e.g., (Socher et al., 2013) . Our proposed method CASI uses an ensemble of semantic and statistical features enabling us to promote the synonymy and type-of edges, and to select the most general schema as canonical schema per cluster.",
"cite_spans": [
{
"start": 376,
"end": 397,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Features for learning schema mapping rules",
"sec_num": "4.2"
},
{
"text": "The features described in Section 4.2 provide partial support for possible schema mapping rules in our dataset. The final set of rules we select needs to comply with asymmetry, transitive closure and at most one parent per schema constraints. We use an integer linear program to find the optimal set of schema mapping rules that satisfy these constraints, shown formally in Figure 3 . We decompose the schema mapping problem into multiple independent sub-problems by considering schemas related to a pair of argument types, e.g, all schemas that have domain or range types Animal, Location would be considered as a separate sub-problem. This way we can scale our method to large sets of schemas. The ILP for each sub-problem is presented in Equation 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 374,
"end": 382,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "ILP model used in CASI",
"sec_num": "4.3"
},
{
"text": "In Equation 1, each X ij is a boolean variable representing whether we pick the schema mapping rule S i \u2192 S j . As described in Section 4.2, M ij , W ij , AT ij , AU ij , S ij represent the scores produced by Moby, WordNet, AMIE-typed, AMIEuntyped and Specificity features respectively for the schema mapping rule S i \u2192 S j . The objective function maximizes the weighted combination of these scores. Further, the solution picked by this ILP satisfies constraints such as asymmetry, transitive closure and at most one parent per schema. We also apply an L 1 sparsity penalty on X, retaining only those schema mapping edges for which the model is reasonably confident.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP model used in CASI",
"sec_num": "4.3"
},
{
"text": "For n schemas, there are O(n 3 ) transitivity constraints which make the ILP very inefficient. Berant et al. (2011) proposed two approximations to handle a large number of transitivity rules by decomposing the ILP or solving it in an incremental way. Instead we re-write the ILP rules in such a way that we can efficiently solve our mapping problem without introducing any approximations. The last two constraints of this ILP can be rewritten as follows:",
"cite_spans": [
{
"start": 95,
"end": 115,
"text": "Berant et al. (2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ILP model used in CASI",
"sec_num": "4.3"
},
{
"text": "j X ij \u2264 1, \u2200i AND X ij + X jk \u2212 X ik \u2264 1, \u2200 i, j, k =\u21d2 If(X ij = 1) then X jk = 0 \u2200k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP model used in CASI",
"sec_num": "4.3"
},
{
"text": "This results in O(n 2 ) constraints and makes the ILP efficient. Impact of this technique in terms of runtime is described in Section 5.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP model used in CASI",
"sec_num": "4.3"
},
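{
"text": "To make Equation 1 and this O(n^2) rewrite concrete, here is a hedged sketch using the PuLP library (the paper solves the ILP with SCPSolver, mentioned next; score(i, j) stands in for the lambda-weighted feature combination):\nimport pulp\n\ndef induce_mapping_rules(schemas, score, delta=0.1):\n    n = len(schemas)\n    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]\n    prob = pulp.LpProblem('CASI', pulp.LpMaximize)\n    X = {p: pulp.LpVariable('x_%d_%d' % p, cat='Binary') for p in pairs}\n    # Objective: total rule quality minus the L1 sparsity penalty delta.\n    prob += pulp.lpSum(X[i, j] * (score(i, j) - delta) for i, j in pairs)\n    for i, j in pairs:\n        prob += X[i, j] + X[j, i] <= 1        # asymmetry\n        # O(n^2) rewrite of transitivity: if i maps to j, then j is\n        # canonical, i.e. j itself maps to nothing.\n        prob += X[i, j] + pulp.lpSum(X[j, k] for k in range(n) if k != j) <= 1\n    for i in range(n):\n        # select at most one parent per schema\n        prob += pulp.lpSum(X[i, j] for j in range(n) if j != i) <= 1\n    prob.solve()\n    return [(schemas[i], schemas[j]) for i, j in pairs if X[i, j].value() > 0.5]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP model used in CASI",
"sec_num": "4.3"
},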
{
"text": "We then use an off-the-shelf ILP optimization engine called SCPSolver (Planatscher and Schober, 2015) to solve the ILP problems. The output of our ILP model is the schema mapping rules. We then apply these rules onto KB tuples to generate additional, general tuples. Some examples of the learned rules are:",
"cite_spans": [
{
"start": 70,
"end": 101,
"text": "(Planatscher and Schober, 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ILP model used in CASI",
"sec_num": "4.3"
},
{
"text": "(Organism, have, Phenomenon) \u2192 (Organism, undergo, Phenomenon) (Animal, have, Event) \u2192 (Animal, experience, Event) (Bird, occupy, Location) \u2192 (Bird, inhabit, Location)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP model used in CASI",
"sec_num": "4.3"
},
{
"text": "5 Evaluation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP model used in CASI",
"sec_num": "4.3"
},
{
"text": "Our overall goal is a high-precision KB that has reasonably \"comprehensive\" coverage of facts in the target domain, on the grounds that these are the facts that a domain application is likely to query about. This notion of KB comprehensiveness is an important but under-discussed aspect of knowledge bases. For example, in the automatic KB construction literature, while a KB's size is often reported, this does not reveal whether the KB is near-complete or merely a drop in the ocean of that required (Razniewski et al., 2016; Stanovsky and Dagan, 2016) . More formally, we define comprehensiveness as: recall at high (> 80%) precision of domainrelevant facts. This measure is similar to recall at the point P=80% on the PR curve, except recall is measured with respect to a different distribution of facts (namely facts about elementary science) rather than a held-out sample of data used to build the KB. is important is that the same precision point is used when comparing results. We choose 80% as subjectively reasonable; at least 4 out of 5 queries to the KB should be answered correctly. There are several ways this target distribution of required facts can be modeled. To fully realize the ambition of this metric, we would directly identify a sample of required end-task facts, e.g., by manual analysis of questions posed to the end-task system, or from logs of the interaction between the endtask system and the KB. However, given the practical challenges of doing this at scale, we take a simpler approach and approximate this end-task distribution using facts extracted from an (independent) domain-specific text corpus (we call this a reference corpus). Note that these facts are only a sample of domain-relevant facts, not the entirety. Otherwise, we could simply run our extractor over the reference corpus and have all we need. Now we are in a strong position, because the reference corpus gives us a fixed point of reference to measure comprehensiveness: we can sample facts from it and measure what fraction the KB \"knows\", i.e., can answer as true (Figure 4 ).",
"cite_spans": [
{
"start": 502,
"end": 527,
"text": "(Razniewski et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 528,
"end": 554,
"text": "Stanovsky and Dagan, 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 2068,
"end": 2077,
"text": "(Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "KB Comprehensiveness",
"sec_num": "5.1"
},
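{
    "text": "A minimal Python sketch of this measurement (the variable names are ours, not from the released code): sample facts from the reference corpus and report the fraction the KB can answer as true. Under the definition above, the resulting recall only counts as comprehensiveness if the KB separately maintains > 80% precision.

def comprehensiveness(kb, reference_facts):
    # kb: set of (subject, predicate, object) headword triples.
    # reference_facts: facts sampled from the reference corpus.
    known = sum(1 for fact in reference_facts if fact in kb)
    return known / len(reference_facts)",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "KB Comprehensiveness",
    "sec_num": "5.1"
},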
{
"text": "For our specific task of elementary science QA, we have assembled a reference corpus 6 of \u223c1.2M sentences comprising of multiple elementary science textbooks, multiple dictionary definitions of all fourth grade vocabulary words, and simple Wikipedia pages for all fourth grade vocabulary words (where such pages exist). To measure our KB's comprehensiveness (of facts within the expressive power of our KB), we randomly sampled 4147 facts, expressed as headword tuples, from the reference corpus. These were generated semiautomatically using parts of our pipeline, namely information extraction then Turker scoring to obtain true facts 7 . We call these facts the Reference KB 8 . To the extent our tuple KB contains facts in this Reference KB (and under the simplifying assumption that these facts are representative of the science knowledge our QA application needs), we say our tuple KB is comprehensive. Doing this yields a value of 23% comprehensiveness for our KB (Table 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KB Comprehensiveness",
"sec_num": "5.1"
},
{
"text": "We also measured the precision and science coverage of other, existing fact KBs. For precision, we took a random sample of 1000 facts in each KB, and followed the same methodology as earlier so that the comparison is valid: Turkers label each fact as true or false/nonsense, each fact is labeled 3 times, and the majority label is the overall label. The precisions are shown in Table 3 . For ConceptNet, we used only the subset of facts with frequency > 1, as frequency=1 facts are particularly noisy (thus the precision of the full ConceptNet would be lower).",
"cite_spans": [],
"ref_spans": [
{
"start": 378,
"end": 385,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "KB Comprehensiveness",
"sec_num": "5.1"
},
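{
    "text": "The following sketch (ours; the data layout is an assumption) shows this majority-vote precision estimate: three Turker labels per sampled fact, with the majority label taken as the fact's overall label.

from collections import Counter

def precision_estimate(labels_per_fact):
    # labels_per_fact: one 3-element list per sampled fact,
    # e.g. ['true', 'false', 'true'].
    correct = 0
    for labels in labels_per_fact:
        majority, _ = Counter(labels).most_common(1)[0]
        if majority == 'true':
            correct += 1
    return correct / len(labels_per_fact)",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "KB Comprehensiveness",
    "sec_num": "5.1"
},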
{
"text": "We also computed the science coverage (= comprehensiveness, if p>80%) using our reference KB. Note that these other KBs were not designed with elementary science in mind and so, not surprisingly, they do not cover many of the relations in our domain. To make the comparison as fair as possible, given these other KBs use different relational vocabularies, we first constructed a list of 20 very general relations (similar to the ConceptNet relations, e.g., causes, uses, part-of, requires), and then mapped relations used in both our reference facts, and in the other KBs, to these 20 relations. To compare if a reference fact is in one of these other KBs, only the general relations need to match, and only the subject and object headwords need to match. This allows substantial linguistic variation to be permitted during evaluation (e.g., \"contain\",. \"comprise\", \"part of\" etc. would all be considered matching). In other words, this is a generous notion of \"a KB containing a fact\", in order to be as fair as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KB Comprehensiveness",
"sec_num": "5.1"
},
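{
    "text": "A minimal sketch of this generous matching (ours; general_relation and headword are assumed lookup functions): both relations are first mapped into the shared list of ~20 general relations, and only the subject and object headwords must agree.

def matches(ref_fact, kb_fact, general_relation, headword):
    # general_relation: surface relation -> one of ~20 general relations,
    # e.g. 'contain', 'comprise', 'part of' all map to 'part-of'.
    rs, rr, ro = ref_fact
    ks, kr, ko = kb_fact
    return (general_relation(rr) == general_relation(kr)
            and headword(rs) == headword(ks)
            and headword(ro) == headword(ko))",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "KB Comprehensiveness",
    "sec_num": "5.1"
},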
{
"text": "As Table 4 illustrates, these other KBs cover very little of the target science knowledge. In the case of WebChild and NELL, the primary reason for low recall is low overlap between their target and ours. NELL has almost no predicate overlap with our Reference KB, reflecting it's Named Entity centric content. WebChild is rich in part-of and location information, and covers 60% of part-of and location facts in our Reference KB. However, these are only 4.5% of all the facts in the Reference KB, resulting in an overall recall (and comprehensiveness) of 3%. In contrast, ConceptNet and ReVerb-15M have substantially more relational overlap with our Reference KB, hence their recall numbers are higher. However, both have lower precision, limiting their utility.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "KB Comprehensiveness",
"sec_num": "5.1"
},
{
"text": "This evaluation demonstrates the limited science coverage of existing resources, and the degree to which we have overcome this limitation. The extraction methods used to build these resources are not directly comparable since they are starting with different input/output settings and involve significantly different degrees of supervision. Rather, the results suggest that general-purpose KBs (e.g., NELL) may have limited coverage for specific domains, and that our domain-targeted extraction pipeline can significantly alleviate this in terms of precision and coverage when that domain is known. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KB Comprehensiveness",
"sec_num": "5.1"
},
{
"text": "In addition, we measured the average precision of facts present in the KB after every stage of the pipeline (Table 4) . We can see that the pipeline take as input 7.5M OpenIE tuples with precision of 54% and produces a good quality science KB of over 340K facts with 80.6% precision organized into 15K schemas. The Table also shows that precision is largely preserved as we introduce phrasal triples and general tuples.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 117,
"text": "(Table 4)",
"ref_id": "TABREF4"
},
{
"start": 315,
"end": 325,
"text": "Table also",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance of the Extraction Pipeline",
"sec_num": "5.2"
},
{
"text": "In this section we will focus on usefulness and correctness of our canonical schema induction method. The parameters of the ILP model (see Equation 1) i.e., \u03bb 1 . . . \u03bb 5 and \u03b4 are tuned based on sample accuracy of individual feature sources and using a small schema mapping problem with schemas applicable to vocabulary types Animal and Body-Part. \u03bb 1 = 0.7, \u03bb 2 = 0.9, \u03bb 3 = 0.3, \u03bb 4 = 0.1, \u03bb 5 = 0.2, \u03b4 = 0.7 Further, with O(n 3 ) transitivity constraints we could not successfully solve a single ILP problem with 100 schemas within a time limit of 1 hour. Whereas, when we rewrite them with O(n 2 ) constraints as explained in Section 4.3, we could solve 443 ILP sub-problems within 6 minutes with average runtime per ILP being 800 msec.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Canonical Schema Induction",
"sec_num": "5.3"
},
{
"text": "Canonical schema induction method Comprehensiveness None 20.0% AMIE* 20.9% CASI (our method)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Canonical Schema Induction",
"sec_num": "5.3"
},
{
"text": "23.2% Table 5 : Use of the CASI-induced schemas significantly (at the 99% confidence level) improves comprehensiveness of the KB.",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 13,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation of Canonical Schema Induction",
"sec_num": "5.3"
},
{
"text": "As discussed in Section 2, we not only cluster the (typed) relations, but also identify a canonical relation that all the other relations in a cluster can be mapped to, without recourse to human annotated training data or a target relational vocabulary. Although no existing methods do this directly, the AMIE-based schema clustering method of (Gal\u00e1rraga et al., 2014) can be extended to do this by incorporating the association rules learned by AMIE (both typed and untyped) inside our ILP framework to output schema mapping rules. We call this extension AMIE*, and use it as a baseline to compare the performance of CASI against.",
"cite_spans": [
{
"start": 344,
"end": 368,
"text": "(Gal\u00e1rraga et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Canonical Schema Induction",
"sec_num": "5.3"
},
{
"text": "The purpose of canonicalization is to allow equivalence between seemingly different schema to be recognized. For example, the KB query (\"polar bear\", \"reside in\", \"tundra\")? 9 can be answered by a KB triple (\"polar bear\", \"inhabit\", \"tundra\") if schema mapping rules map one or both to the same canonical form e.g., (\"polar bear\", \"live in\", \"tundra\") using the rules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Usefulness",
"sec_num": "5.3.1"
},
{
"text": "(Animal, inhabit, Location) \u2192 (Animal, live in, Location) (Animal, reside in, Location) \u2192 (Animal, live in, Location)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Usefulness",
"sec_num": "5.3.1"
},
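{
    "text": "A minimal sketch of this query-time use of the mapping rules (ours; rules, type_of, and the KB layout are assumptions): both the query and the stored triples are rewritten to their canonical forms before comparison.

def canonicalize(triple, rules, type_of):
    subj, pred, obj = triple
    schema = (type_of(subj), pred, type_of(obj))
    if schema in rules:          # e.g. (Animal, reside in, Location)
        pred = rules[schema][1]  #   -> live in
    return (subj, pred, obj)

def kb_answers(query, kb, rules, type_of):
    canon_query = canonicalize(query, rules, type_of)
    return any(canonicalize(t, rules, type_of) == canon_query for t in kb)",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Canonical Schema Usefulness",
    "sec_num": "5.3.1"
},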
{
"text": "One way to quantitatively evaluate this is to measure the impact of schema mapping on the comprehensiveness metric. Table 5 shows that, before applying any canonical schema induction method, the comprehensiveness score of our KB was 20%. The AMIE* method improves this score to 20.9%, whereas our method achieves a comprehensiveness of 23.2%. This latter improvement over the original KB is statistically significant at the 99% confidence 9 e.g., posed by a QA system trying to answer the question \"Which is the likely location in which a polar bear to reside in? (A) Tundra (B) Desert (C) Grassland\" level (sample size is the 4147 facts sampled from the reference corpus).",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 123,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Canonical Schema Usefulness",
"sec_num": "5.3.1"
},
{
"text": "A second metric of interest is the correctness of the schema mapping rules (just because comprehensiveness improves does not imply every mapping rule is correct). We evaluate correctness of schema mapping rules using following metric: Precision of schema mapping rules: We asked Turkers to directly assess whether particular schema mapping rules were correct, for a random sample of rules. To make the task clear, Turkers were shown the schema mapping rule (expressed in English) along with an example fact that was rewritten using that rule (to give a concrete example of its use), and they were asked to select one option \"correct or incorrect or unsure\" for each rewrite rule. We asked this question to three different Turkers and considered the majority vote as final evaluation 10 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Schema Correctness",
"sec_num": "5.3.2"
},
{
"text": "The comparison results are shown in Table 6 . Starting with 15.8K schemas, AMIE* canonicalized only 822 of those into 102 canonical schemas (using 822 schema mapping rules). In contrast, our method CASI canonicalized 4.2K schemas into 2.5K canonical schemas. We randomly sampled 500 schema mapping rules generated by each method and asked Turkers to evaluate their correctness, as described earlier. As shown in Table 6 , the precision of rules produced was CASI is 68%, compared with AMIE* which achieved 59% on this metric. Thus CASI could canonicalize five times more schemas with 9% more precision.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 43,
"text": "Table 6",
"ref_id": null
},
{
"start": 412,
"end": 419,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Canonical Schema Correctness",
"sec_num": "5.3.2"
},
{
"text": "Next, we identify some of the limitations of our approach and directions for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "5.4"
},
{
"text": "1. Extracting Richer Representations of Knowledge: While triples can capture certain kinds of knowledge, there are other kinds of information, e.g. detailed descriptions of events or processes, that cannot be easily represented by a set of independent tuples. An extension of this work would be to extract event frames, capable of representing a richer set of Table 6 : CASI canonicalizes five times more schemas than AMIE*, and also achieves a small (9%) increase in precision, demonstrating how additional knowledge resources can help the canonicalization process (Section 4.2). Precision estimates are within +/-4% with 95% confidence interval.",
"cite_spans": [],
"ref_spans": [
{
"start": 360,
"end": 367,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "5.4"
},
{
"text": "roles in a wider context compared to a triple fact. For example in the news domain, while representing an event \"public shooting\", one would like to store the shooter, victims, weapon used, date, time, location and so on. Building high-precision extraction techniques that can go beyond binary relations towards event frames is a potential direction of future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "5.4"
},
{
"text": "2. Richer KB Organization: Our approach organizes entities and relations into flat entity types and schema clusters. An immediate direction for extending this work could be a better KB organization with deep semantic hierarchies for predicates and arguments, allowing inheritance of knowledge among entities and triples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "5.4"
},
{
"text": "3. Improving comprehensiveness beyond 23%: Our comprehensiveness score is currently at 23% indicating 77% of potentially useful science facts are still missing from our KB. There are multiple ways to improve this coverage including but not limited to 1) processing more science corpora through our extraction pipeline, 2) running standard KB completion methods on our KB to add the facts that are likely to be true given the existing facts, and 3) improving our canonical schema induction method further to avoid cases where the query fact is present in our KB but with a slight linguistic variation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "5.4"
},
{
"text": "4. Quantification Sharpening: Similar to other KBs, our tuples have the semantics of plausibility: If the fact is generally true for some of the arg1s, then score it as true. Although frequency filtering typically removes facts that are rarely true for the arg1s, there is still variation in the quantifier strength of facts (i.e., does the fact hold for all, most, or some arg1s?) that can affect downstream inference. We are exploring methods for quantification sharpening, e.g., (Gordon and Schubert, 2010) , to address this.",
"cite_spans": [
{
"start": 482,
"end": 509,
"text": "(Gordon and Schubert, 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "5.4"
},
{
"text": "5. Can the pipeline be easily adapted to a new domain? Our proposed extraction pipeline expects high-quality vocabulary and types information as input. In many domains, it is easy to import types from existing resources like WordNet or FreeBase. For other domains like medicine, legal it might require domain experts to encode this knowledge. However, we believe that manually encoding types is a much simpler task as compared to manually defining all the schemas relevant for an individual domain. Further, various design choices, e.g., precision vs. recall tradeoff of final KB, the amount of expert input available, etc. would depend on the domain and end task requirements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "5.4"
},
{
"text": "Our goal is to construct, a domain-targeted, high precision knowledge base of (subject,predicate,object) triples to support an elementary science application. We have presented a scalable knowledge extraction pipeline that is able to extract a large number of facts targeted to a particular domain. The pipeline leveraging Open IE, crowdsourcing, and a novel schema learning algorithm, and has produced a KB of over 340,163 facts at 80.6% precision for elementary science QA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We have also introduced a metric of comprehensiveness for measuring KB coverage with respect to a particular domain. Applying this metric to our KB, we have achieved a comprehensiveness of over 23% of science facts within the KB's expressive power, substantially higher than the science coverage of other comparable resources. Most importantly, the pipeline offers for the first time a viable way of extracting large amounts of high-quality knowledge targeted to a specific domain. We have made the KB publicly available at http://data.allenai. org/tuple-kb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This KB named as \"Aristo Tuple KB\" is available for download at http://data.allenai.org/tuple-kb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Aristo Tuple KB is available for download at http:// allenai.org/data/aristo-tuple-kb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "There are exceptions, e.g., in 4th Grade Science \"bat\" can refer to either the animal or the sporting implement, but these cases are rare.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "e.g., \"black bear\" is a usable phrase provided it occurs > k1 times in the N-gram corpus and log[p(\"black bear\")/p(\"black\").p(\"bear\")] > k2 in the N-gram corpus, where constants k1 and k2 were chosen to optimize performance on a small test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This corpus named as \"Aristo MINI Corpus\" is available for download at http://allenai.org/data/ aristo-tuple-kb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This method will of course miss many facts in the reference corpus, e.g., when extraction fails or when the fact is in a nonsentential form, e.g., a table. However, we only assume that the distribution of extracted facts is representative of the domain.8 These 4147 test facts are published with the dataset at http://allenai.org/data/aristo-tuple-kb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We discarded the unsure votes. For more than 95% of the rules, at least 2 out of 3 Turkers reached clear consensus on whether the rule is \"correct vs. incorrect\", indicating that the Turker task was clearly defined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to Paul Allen whose long-term vision continues to inspire our scientific endeavors. We would also like to thank Peter Turney and Isaac Cowhey for their important contributions to this project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "DBpedia: A nucleus for a web of open data",
"authors": [
{
"first": "S",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bizer",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kobilarov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Cyganiak",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ives",
"suffix": ""
}
],
"year": 2007,
"venue": "ISWC/ASWC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Auer, C. Bizer, J. Lehmann, G. Kobilarov, R. Cyga- niak, and Z. Ives. 2007. DBpedia: A nucleus for a web of open data. In In ISWC/ASWC.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Open information extraction from the web",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2007,
"venue": "In IJCAI",
"volume": "7",
"issue": "",
"pages": "2670--2676",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI, vol- ume 7, pages 2670-2676.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Global learning of typed entailment rules",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global learning of typed entailment rules. In ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Freebase: A collaboratively created graph database for structuring human knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "SIGMOD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collabo- ratively created graph database for structuring human knowledge. In SIGMOD.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Web 1T 5-gram version 1 LDC2006T13. Philadelphia: Linguistic Data Consortium",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T 5- gram version 1 LDC2006T13. Philadelphia: Linguis- tic Data Consortium.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Toward an architecture for never-ending language learning",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Kisiel",
"suffix": ""
},
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "Tom M",
"middle": [],
"last": "Estevam R Hruschka",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2010,
"venue": "AAAI",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for never-ending lan- guage learning. In AAAI, volume 5, page 3.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "IKE -An Interactive Tool for Knowledge Extraction",
"authors": [
{
"first": "Bhavana",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Bhakthavatsalam",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Groeneveld",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhavana Dalvi, Sumithra Bhakthavatsalam, Chris Clark, Peter Clark, Oren Etzioni, Anthony Fader, and Dirk Groeneveld. 2016. IKE -An Interactive Tool for Knowledge Extraction. In AKBC@NAACL-HLT.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Werdy: Recognition and disambiguation of verbs and verb phrases with syntactic and semantic pruning",
"authors": [
{
"first": "Luciano",
"middle": [],
"last": "Del Corro",
"suffix": ""
},
{
"first": "Rainer",
"middle": [],
"last": "Gemulla",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2014,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "374--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luciano Del Corro, Rainer Gemulla, and Gerhard Weikum. 2014. Werdy: Recognition and disambigua- tion of verbs and verb phrases with syntactic and se- mantic pruning. In 2014 Conference on Empirical Methods in Natural Language Processing, pages 374- 385. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Common sense reasoning for detection, prevention, and mitigation of cyberbullying",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Dinakar",
"suffix": ""
},
{
"first": "Birago",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Lieberman",
"suffix": ""
},
{
"first": "Rosalind",
"middle": [],
"last": "Picard",
"suffix": ""
}
],
"year": 2012,
"venue": "ACM Transactions on Interactive Intelligent Systems (TiiS)",
"volume": "2",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Dinakar, Birago Jones, Catherine Havasi, Henry Lieberman, and Rosalind Picard. 2012. Common sense reasoning for detection, prevention, and mitiga- tion of cyberbullying. ACM Transactions on Interac- tive Intelligent Systems (TiiS), 2(3):18.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Knowledge vault: a web-scale approach to probabilistic knowledge fusion",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Geremy",
"middle": [],
"last": "Heitz",
"suffix": ""
},
{
"first": "Wilko",
"middle": [],
"last": "Horn",
"suffix": ""
},
{
"first": "Ni",
"middle": [],
"last": "Lao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Strohmann",
"suffix": ""
},
{
"first": "Shaohua",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: a web-scale approach to probabilistic knowl- edge fusion. In KDD.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A goaloriented web browser",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Faaborg",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Lieberman",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the SIGCHI conference on Human Factors in computing systems",
"volume": "",
"issue": "",
"pages": "751--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Faaborg and Henry Lieberman. 2006. A goal- oriented web browser. In Proceedings of the SIGCHI conference on Human Factors in computing systems, pages 751-760. ACM.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Identifying relations for open information extraction",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing, pages 1535-1545. Association for Computational Linguis- tics. ReVerb-15M available at http://openie. cs.washington.edu.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "AMIE: association rule mining under incomplete evidence in ontological knowledge bases",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Gal\u00e1rraga",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Teflioudi",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Hose",
"suffix": ""
},
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2013,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Gal\u00e1rraga, Christina Teflioudi, Katja Hose, and Fabian M. Suchanek. 2013. AMIE: association rule mining under incomplete evidence in ontological knowledge bases. In WWW.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Canonicalizing open knowledge bases",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Gal\u00e1rraga",
"suffix": ""
},
{
"first": "Geremy",
"middle": [],
"last": "Heitz",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2014,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Gal\u00e1rraga, Geremy Heitz, Kevin Murphy, and Fabian M. Suchanek. 2014. Canonicalizing open knowledge bases. In CIKM.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Quantificational sharpening of commonsense knowledge",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Gordon",
"suffix": ""
},
{
"first": "Lenhart",
"middle": ["K"],
"last": "Schubert",
"suffix": ""
}
],
"year": 2010,
"venue": "AAAI Fall Symposium: Commonsense Knowledge",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Gordon and Lenhart K Schubert. 2010. Quan- tificational sharpening of commonsense knowledge. In AAAI Fall Symposium: Commonsense Knowledge.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Harpy: Hypernyms and alignment of relational paraphrases",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Grycner",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Grycner and Gerhard Weikum. 2014. Harpy: Hy- pernyms and alignment of relational paraphrases. In COLING.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "RELLY: Inferring hypernym relationships between relational phrases",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Grycner",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Pujara",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Foulds",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Grycner, Gerhard Weikum, Jay Pujara, James R. Foulds, and Lise Getoor. 2015. RELLY: Inferring hy- pernym relationships between relational phrases. In EMNLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "DIRT -discovery of inference rules from text",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "323--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Patrick Pantel. 2001. DIRT -discov- ery of inference rules from text. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 323- 328. ACM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "ConceptNet: a practical commonsense reasoning tool-kit",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Push",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2004,
"venue": "BT technology journal",
"volume": "22",
"issue": "4",
"pages": "211--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Liu and Push Singh. 2004. ConceptNet: a prac- tical commonsense reasoning tool-kit. BT technology journal, 22(4):211-226.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Open language learning for information extraction",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Bart",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In EMNLP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "WiSeNet: building a wikipedia-based semantic network with ontologized relations",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Moro",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2012,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Moro and Roberto Navigli. 2012. WiSeNet: building a wikipedia-based semantic network with on- tologized relations. In CIKM.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "ISP: Learning inferential selectional preferences",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Bhagat",
"suffix": ""
},
{
"first": "Bonaventura",
"middle": [],
"last": "Coppola",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Chklovski",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2007,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "564--571",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Pantel, Rahul Bhagat, Bonaventura Coppola, Timothy Chklovski, and Eduard H Hovy. 2007. ISP: Learning inferential selectional preferences. In HLT- NAACL, pages 564-571.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "SCP solver",
"authors": [
{
"first": "Hannes",
"middle": [],
"last": "Planatscher",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schober",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannes Planatscher and Michael Schober. 2015. SCP solver. http://scpsolver.org.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "But what do we actually know?",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Razniewski",
"suffix": ""
},
{
"first": "Fabian",
"middle": ["M"],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Werner",
"middle": [],
"last": "Nutt",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. AKBC'16",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Razniewski, Fabian M Suchanek, and Werner Nutt. 2016. But what do we actually know? In Proc. AKBC'16.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Relation extraction with matrix factorization and universal schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"M"
],
"last": "Marlin",
"suffix": ""
}
],
"year": 2013,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In HLT- NAACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Reasoning with neural tensor networks for knowledge base completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural ten- sor networks for knowledge base completion. In NIPS.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Open Information Extraction to KBP Relations in 3 Hours",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Gilmer",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bart",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Soderland, John Gilmer, Robert Bart, Oren Et- zioni, and Daniel S. Weld. 2013. Open Information Extraction to KBP Relations in 3 Hours. In TAC.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Conceptnet 5: A large semantic network for relational knowledge",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2013,
"venue": "The Peoples Web Meets NLP",
"volume": "",
"issue": "",
"pages": "161--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Speer and Catherine Havasi. 2013. Concept- net 5: A large semantic network for relational knowl- edge. In The Peoples Web Meets NLP, pages 161-176. Springer.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Creating a large benchmark for open information extraction",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In EMNLP.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Yago: A Core of Semantic Knowledge",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A Core of Semantic Knowl- edge. In WWW.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "WebChild: Harvesting and Organizing Commonsense Knowledge from the Web",
"authors": [
{
"first": "Niket",
"middle": [],
"last": "Tandon",
"suffix": ""
},
{
"first": "Gerard",
"middle": [],
"last": "De Melo",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2014,
"venue": "WSDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niket Tandon, Gerard de Melo, Fabian Suchanek, and Gerhard Weikum. 2014. WebChild: Harvesting and Organizing Commonsense Knowledge from the Web. In WSDM.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Literal and metaphorical sense identification through concrete and abstract context",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Neuman",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney, Yair Neuman, Dan Assaf, and Yohai Co- hen. 2011. Literal and metaphorical sense identifica- tion through concrete and abstract context. In EMNLP.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Unsupervised methods for determining object and relation synonyms on the web",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Yates and Oren Etzioni. 2009. Unsupervised methods for determining object and relation synonyms on the web. Journal of Artificial Intelligence Research.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The extraction pipeline. A vocabulary-guided sequence of open information extraction, crowdsourcing, and learning predicate relationships are used to produce high precision tuples relevant to the domain of interest."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Learning schema mapping rules can be viewed as a subgraph selection problem, whose result (illustrated) is a set of clusters of similar schemas, all pointing to a single, canonical form."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Comprehensiveness (frequency-weighted coverage C of the required facts D) can be estimated using coverage A of a reference KB B as a surrogate sampling of the target distribution."
},
"TABREF0": {
"html": null,
"text": "The different features used in relation canonicalization capture different aspects of similarity.",
"content": "<table><tr><td>maximize {Xij }</td><td>i,j</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "Precision and coverage of tuple-expressible elementary science knowledge by existing resources vs. our KB. Precision estimates are within +/-3% with 95% confidence interval.",
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "",
"content": "<table><tr><td>: Evaluation of KB at different stages of extrac-tion. Precision estimates are within +/-3% with 95% con-fidence interval.</td></tr></table>",
"num": null,
"type_str": "table"
}
}
}
} |