{
"paper_id": "P96-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:03:04.505866Z"
},
"title": "INVITED TALK Head Automata and Bilingual Tiling: Translation with Minimal Representations",
"authors": [
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Research",
"location": {
"addrLine": "600 Mountain Avenue",
"postCode": "07974",
"settlement": "Murray Hill",
"region": "NJ",
"country": "USA"
}
},
"email": "hiyan@research.att.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a language model consisting of a collection of costed bidirectional finite state automata associated with the head words of phrases. The model is suitable for incremental application of lexical associations in a dynamic programming search for optimal dependency tree derivations. We also present a model and algorithm for machine translation involving optimal \"tiling\" of a dependency tree with entries of a costed bilingual lexicon. Experimental results are reported comparing methods for assigning cost functions to these models. We conclude with a discussion of the adequacy of annotated linguistic strings as representations for machine translation.",
"pdf_parse": {
"paper_id": "P96-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a language model consisting of a collection of costed bidirectional finite state automata associated with the head words of phrases. The model is suitable for incremental application of lexical associations in a dynamic programming search for optimal dependency tree derivations. We also present a model and algorithm for machine translation involving optimal \"tiling\" of a dependency tree with entries of a costed bilingual lexicon. Experimental results are reported comparing methods for assigning cost functions to these models. We conclude with a discussion of the adequacy of annotated linguistic strings as representations for machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Until the advent of statistical methods in the mainstream of natural language processing, syntactic and semantic representations were becoming progressively more complex. This trend is now reversing itself, in part because statistical methods reduce the burden of detailed modeling required by constraint-based grammars, and in part because statistical models for converting natural language into complex syntactic or semantic representations is not well understood at present. At the same time, lexically centered views of language have continued to increase in popularity. We can see this in lexicalized grammatical theories, head-driven parsing and generation, and statistical disambiguation based on lexical associations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These themes --simple representations, statistical modeling, and lexicalism --form the basis for the models and algorithms described in the bulk of this paper. The primary purpose is to build effective mechanisms for machine translation, the oldest and still the most commonplace application of nonsuperficial natural language processing. A secondary motivation is to test the extent to which a non-trivial language processing task can be carried out without complex semantic representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2 we present reversible mono-lingual models consisting of collections of simple automata associated with the heads of phrases. These head automata are applied by an algorithm with admissible incremental pruning based on semantic association costs, providing a practical solution to the problem of combinatoric disambiguation (Church and Patil 1982) . The model is intended to combine the lexical sensitivity of N-gram models (Jelinek et al. 1992) and the structural properties of statistical context free grammars (Booth 1969) without the computational overhead of statistical lexicalized treeadjoining grammars (Schabes 1992 , Resnik 1992 .",
"cite_spans": [
{
"start": 336,
"end": 359,
"text": "(Church and Patil 1982)",
"ref_id": null
},
{
"start": 436,
"end": 457,
"text": "(Jelinek et al. 1992)",
"ref_id": "BIBREF13"
},
{
"start": 525,
"end": 537,
"text": "(Booth 1969)",
"ref_id": null
},
{
"start": 623,
"end": 636,
"text": "(Schabes 1992",
"ref_id": null
},
{
"start": 637,
"end": 650,
"text": ", Resnik 1992",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For translation, we use a model for mapping dependency graphs written by the source language head automata. This model is coded entirely as a bilingual lexicon, with associated cost parameters. The transfer algorithm described in Section 4 searches for the lowest cost 'tiling' of the target dependency graph with entries from the bilingual lexicon. Dynamic programming is again used to make exhaustive search tractable, avoiding the combinatoric explosion of shake-and-bake translation (Whitelock 1992 , Brew 1992 .",
"cite_spans": [
{
"start": 487,
"end": 502,
"text": "(Whitelock 1992",
"ref_id": "BIBREF17"
},
{
"start": 503,
"end": 514,
"text": ", Brew 1992",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 5 we present a general framework for associating costs with the solutions of search processes, pointing out some benefits of cost functions other than log likelihood, including an error-minimization cost function for unsupervised training of the parameters in our translation application. Section 6 briefly describes an English-Chinese translator employing the models and algorithms. We also present experimental results comparing the performance of different cost assignment methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we return to the more general discussion of representations for machine translation and other natural language processing tasks, arguing the case for simple representations close to natural language itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Head automata mono-lingual language models consist of a lexicon, in which each entry is a pair (w, m) of a word w from a vocabulary V and a head automaton m (defined below), and a parameter table giving an assignment of costs to events in a generative process involving the automata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexieal and Dependency Parameters",
"sec_num": "2.1"
},
{
"text": "We first describe the model in terms of the familiar paradigm of a generative statistical model, presenting the parameters as conditional probabilities. This gives us a stochastic version of dependency grammar (Hudson 1984) .",
"cite_spans": [
{
"start": 210,
"end": 223,
"text": "(Hudson 1984)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexieal and Dependency Parameters",
"sec_num": "2.1"
},
{
"text": "Each derivation in the generative statistical model produces an ordered dependency tree, that is, a tree in which nodes dominate ordered sequences of left and right subtrees and in which the nodes have labels taken from the vocabulary V and the arcs have labels taken from a set R of relation symbols. When a node with label w immediately dominates a node with label w' via an arc with label r, we say that w' is an r-dependent of the head w. The interpretation of this directed arc is that relation r holds between particular instances of w and w'. (A word may have several or no r-dependents for a particular relation r.) A recursive left-parent-right traversal of the nodes of an ordered dependency tree for a derivation yields the word string for the derivation. A head automaton m of a lexical entry (w, m) defines possible ordered local trees immediately dominated by w in derivations. Model parameters for head automata, together with dependency parameters and lexical parameters, give a probability distribution for derivations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexieal and Dependency Parameters",
"sec_num": "2.1"
},
{
"text": "A dependency parameter P ( L w'lw, r') is the probability, given a head w with a dependent arc with label r', that w' is the r'-dependent for this arc.",
"cite_spans": [
{
"start": 25,
"end": 38,
"text": "( L w'lw, r')",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexieal and Dependency Parameters",
"sec_num": "2.1"
},
{
"text": "A lexical parameter P (m, qlr, t, w) is the probability that a local tree immediately dominated by an r-dependent w is derived by starting in state q of some automaton m in a lexieal entry (w, m). The model also includes lexieal parameters P(w,m, qlt>) for the probability that w is the head word for an entire derivation initiated from state q of automaton m.",
"cite_spans": [
{
"start": 22,
"end": 36,
"text": "(m, qlr, t, w)",
"ref_id": null
},
{
"start": 240,
"end": 252,
"text": "P(w,m, qlt>)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexieal and Dependency Parameters",
"sec_num": "2.1"
},
{
"text": "A head automaton is a weighted finite state machine that writes (or accepts) a pair of sequences of relation symbols from R:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head Automata",
"sec_num": "2.2"
},
{
"text": "((rl...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head Automata",
"sec_num": "2.2"
},
{
"text": "These correspond to the relations between a head word and the sequences of dependent phrases to its left and right (see Figure 1 ). The machine consists of a finite set q0, \u2022 \u2022 \", qs of states and an action table specifying the finite cost (non-zero probability)",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 128,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "r,)).",
"sec_num": null
},
{
"text": "actions the automaton can undergo. There are three types of action for an automaton m: left transitions, right transitions, and stop actions. These actions, together with associated probabilistic model parameters, are as follows. \u2022 Left transition: if in state qi-1, m can write a symbol r onto the right end of the current left sequence and enter state qi with probability P (~, qi, rlqi-1, m) . \u2022 Right transition: if in state qi-1, m can write a symbol r onto the left end of the current right sequence and enter state qi with probability P (--* , qi, rlqi-1, m) .",
"cite_spans": [
{
"start": 376,
"end": 394,
"text": "(~, qi, rlqi-1, m)",
"ref_id": null
},
{
"start": 544,
"end": 565,
"text": "(--* , qi, rlqi-1, m)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "r,)).",
"sec_num": null
},
{
"text": "\u2022 Stop: if in state q, m can stop with probability P(t31q , m), at which point the sequences are considered complete.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "r,)).",
"sec_num": null
},
{
"text": "For a consistent probabilistic model, the probabilities of all transitions and stop actions from a state q must sum to unity. Any state of a head automaton can be an initial state, the probability of a particular initial state in a derivation being specified by lexical parameters. A derivation of a pair of symbol sequence thus corresponds to the selection of an initial state, a sequence of zero or more transitions (writing the symbols) and a stop action. The probability, given an initial state q, that automaton m will a generate a pair of sequences, i.e. P ((rl'.. rk) ",
"cite_spans": [
{
"start": 563,
"end": 574,
"text": "((rl'.. rk)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "r,)).",
"sec_num": null
},
{
"text": ", (rk+l\"'' rn)Ira, q)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "r,)).",
"sec_num": null
},
{
"text": "is the product of the probabilities of the actions taken to generate the sequences. The case of zero transitions will yield empty sequences, corresponding to a leaf node of the dependency tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "r,)).",
"sec_num": null
},
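{
"text": "To make the action table concrete, here is a minimal Python sketch (ours, not the paper's implementation; all names are invented) of a head automaton as a generator of a pair of relation sequences. The transition tables mirror the parameters P(<-, qi, r | qi-1, m), P(->, qi, r | qi-1, m), and P(stop | q, m) above:\nclass HeadAutomaton:\n    # 'left' and 'right' map (state, next_state, relation) to transition\n    # probabilities; 'stop' maps a state to its stop probability.\n    def __init__(self, left, right, stop):\n        self.left, self.right, self.stop = left, right, stop\n\n    def run(self, q, actions):\n        # Follow one derivation, a list of ('L'|'R', next_state, relation)\n        # steps, from initial state q; return the pair of relation\n        # sequences written and the probability of the derivation.\n        p, left_seq, right_seq = 1.0, [], []\n        for side, q_next, r in actions:\n            table = self.left if side == 'L' else self.right\n            p *= table.get((q, q_next, r), 0.0)\n            if side == 'L':\n                left_seq.append(r)      # right end of the left sequence\n            else:\n                right_seq.insert(0, r)  # left end of the right sequence\n            q = q_next\n        return left_seq, right_seq, p * self.stop.get(q, 0.0)\n\n# The two-state a^n b^n automaton mentioned below coordinates one left\n# write with one right write per loop:\nm = HeadAutomaton(left={('q0', 'q1', 'a'): 0.5},\n                  right={('q1', 'q0', 'b'): 1.0},\n                  stop={'q0': 0.5})\nprint(m.run('q0', [('L', 'q1', 'a'), ('R', 'q0', 'b')]))  # (['a'], ['b'], 0.25)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head Automata",
"sec_num": null
},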
{
"text": "From a linguistic perspective, head automata allow for a compact, graded, notion of lexical subcategorization (Gazdar et al. 1985) and the linear order of a head and its dependent phrases. Lexical parameters can control the saturation of a lexical item (for example a verb that is both transitive and intransitive) by starting the same automaton in different states. Head automata can also be used to code a grammar in which states of an automaton for word w corresponds to X-bar levels (Jaekendoff 1977) for phrases headed by w.",
"cite_spans": [
{
"start": 110,
"end": 130,
"text": "(Gazdar et al. 1985)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "r,)).",
"sec_num": null
},
{
"text": "Head automata are formally more powerful than finite state automata that accept regular languages in the following sense. Each head automaton defines a formal language with alphabet R whose strings are the concatenation of the left and right sequence pairs written by the automaton. The class of languages defined in this way clearly includes all regular languages, since strings of a regular language can be generated, for example, by a head automaton that only writes a left sequence. Head automata can also accept some non-regular languages requiring coordination of the left and right sequences, for example the language anb ~ (requiring two states), and the language of palindromes over a finite alphabet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "r,)).",
"sec_num": null
},
{
"text": "Let the probability of generating an ordered dependency subtree D headed by an r-dependent word w be P(D]w, r). The recursive process of generating this subtree proceeds as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation Probability",
"sec_num": "2.3"
},
{
"text": "1. Select an initial state q of an automaton m for w with lexical probability P (m, q[r, ~, w) .",
"cite_spans": [
{
"start": 80,
"end": 94,
"text": "(m, q[r, ~, w)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation Probability",
"sec_num": "2.3"
},
{
"text": "2. Run the automaton m0 with initial state q to generate a pair of relation sequences with probability P((rl... rk), (rk+l-\"\" r,,)lm, q).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation Probability",
"sec_num": "2.3"
},
{
"text": "3. For each relation ri in these sequences, select a dependent word wi with dependency probability P (l, wi[w, ri) .",
"cite_spans": [
{
"start": 101,
"end": 114,
"text": "(l, wi[w, ri)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation Probability",
"sec_num": "2.3"
},
{
"text": "4. For each dependent wi, recursively generate a subtree with probability P(D~ Iwi, ri).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation Probability",
"sec_num": "2.3"
},
{
"text": "We can now express the probability P(Do) for an entire ordered dependency tree derivation Do headed by a word w0 as P(Do) = P(wo, too, q0[ 1>) P ( (rl . . . rl,) , (rk+l \" . . rnl Imo, qo) YIl <i<n P(l, wilwo, ri)P( Di Iwi, ri).",
"cite_spans": [
{
"start": 145,
"end": 161,
"text": "( (rl . . . rl,)",
"ref_id": null
},
{
"start": 164,
"end": 188,
"text": "(rk+l \" . . rnl Imo, qo)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation Probability",
"sec_num": "2.3"
},
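{
"text": "Continuing the sketch above, the four generative steps can be read off as a recursive probability computation; the dictionaries start, lex, and dep are hypothetical encodings of the parameters P(w, m, q | start), P(m, q | r, down, w), and P(down w' | w, r):\ndef subtree_probability(node, r, lex, dep, automata):\n    # P(D | w, r): 'node' carries its word, automaton id m, initial\n    # state q, the action list of its derivation, and its dependent\n    # subtrees listed in the order of the relations r1 ... rn.\n    w, m, q = node['word'], node['m'], node['q']\n    p = lex.get((m, q, r, w), 0.0)                            # step 1\n    left, right, p_seq = automata[m].run(q, node['actions'])\n    p *= p_seq                                                # step 2\n    for ri, child in zip(left + right, node['deps']):\n        p *= dep.get((w, ri, child['word']), 0.0)             # step 3\n        p *= subtree_probability(child, ri, lex, dep, automata)  # step 4\n    return p\n\ndef derivation_probability(root, start, lex, dep, automata):\n    # P(D0): the root uses the start-of-derivation parameter\n    # P(w0, m0, q0 | start) in place of a lexical parameter.\n    w, m, q = root['word'], root['m'], root['q']\n    left, right, p_seq = automata[m].run(q, root['actions'])\n    p = start.get((w, m, q), 0.0) * p_seq\n    for ri, child in zip(left + right, root['deps']):\n        p *= dep.get((w, ri, child['word']), 0.0)\n        p *= subtree_probability(child, ri, lex, dep, automata)\n    return p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation Probability",
"sec_num": "2.3"
},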
{
"text": "In the translation application we search for the highest probability derivation (or more generally, the Nhighest probability derivations). For other purposes, the probability of strings may be of more interest. The probability of a string according to the model is the sum of the probabilities of derivations of ordered dependency trees yielding the string. In practice, the number of parameters in a head automaton language model is dominated by the dependency parameters, that is, O(]V]2]RI) parameters. This puts the size of the model somewhere in between 2-gram and 3-gram model. The similarly motivated link grammar model (Lafferty, Sleator and Temperley 1992) has O([VI 3) parameters. Unlike simple N-gram models, head automata models yield an interesting distribution of sentence lengths. For example, the average sentence length for Monte-Carlo generation with our probabilistic head automata model for ATIS was 10.6 words (the average was 9.7 words for the corpus it was trained on).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation Probability",
"sec_num": "2.3"
},
{
"text": "Head automaton models admit efficient lexically driven analysis (parsing) algorithms in which partial analyses are costed incrementally as they are constructed. Put in terms of the traditional parsing issues in natural language understanding, \"semantic\" associations coded as dependency parameters are applied at each parsing step allowing semantically suboptimal analyses to be eliminated, so the analysis with the best semantic score can be identified without scoring an exponential number of syntactic parses. Since the model is lexical, linguistic constructions headed by lexical items not present in the input are not involved in the search the way they are with typical top-down or predictive parsing strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.1"
},
{
"text": "We will sketch an algorithm for finding the lowest cost ordered dependency tree derivation for an input string in polynomial time in the length of the string. In our experimental system we use a more general version of the algorithm to allow input in the form of word lattices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.1"
},
{
"text": "The algorithm is a bottom-up tabular parser (Younger 1967 , Early 1970 in which constituents are constructed \"head-outwards\" (Kay 1989, Sata and Stock 1989) . Since we are analyzing bottomup with generative model automata, the algorithm 'runs' the automata backwards. Edges in the parsing lattice (or \"chart\") are tuples representing partial or complete phrases headed by a word w from position i to position j in the string:",
"cite_spans": [
{
"start": 44,
"end": 57,
"text": "(Younger 1967",
"ref_id": null
},
{
"start": 58,
"end": 70,
"text": ", Early 1970",
"ref_id": "BIBREF7"
},
{
"start": 125,
"end": 144,
"text": "(Kay 1989, Sata and",
"ref_id": null
},
{
"start": 145,
"end": 156,
"text": "Stock 1989)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.1"
},
{
"text": "(w,t,i,j,m,q,c).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.1"
},
{
"text": "Here m is the head automaton for w in this derivation; the automaton is in state q; t is the dependency tree constructed so far, and c is the cost of the partial derivation. We will use the notation C(zly ) for the cost of a model event with probability P(zIy); the assignment of costs to events is discussed in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.1"
},
{
"text": "For each word w in the input between positions i and j, the lattice is initialized with phrases {w, {},i,j,m,q$,c$) for any lexical entry (w, m) and any final state q! of the automaton m in the entry. A final state is one for which the stop action cost c! = C(DJq!, m) is finite.",
"cite_spans": [
{
"start": 100,
"end": 115,
"text": "{},i,j,m,q$,c$)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization:",
"sec_num": null
},
{
"text": "Transitions: Phrases are combined bottom-up to form progressively larger phrases. There are two types of combination corresponding to left and right transitions of the automaton for the word acting as the head in the combination. We will specify left combination; right combination is the mirror image of left combination. If the lattice contains two phrases abutting at position k in the string: (Wl, tl, i, k, ml, ql, Cl) (W2, t2, k, j, ra2, q2, c2), and the parameter table contains the following finite costs parameters (a left v-transition of m2, a lexical parameter for wl, and an r-dependency parameter):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization:",
"sec_num": null
},
{
"text": "c3 = C(~---, q2, rlq~, m2) c4 = C(ml, qiir, ~, Wx) c5 = C(l, wllw2, r),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization:",
"sec_num": null
},
{
"text": "then build a new phrase headed by w2 with a tree t~ formed by adding tl to t~ as an r-dependent of w2: (w2, t~, i, j, m2, q~, cl + c2 + c3 + c4 -4-cs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization:",
"sec_num": null
},
{
"text": "When no more combinations are possible, for each phrase spanning the entire input we add the appropriate start of derivation cost to these phrases and select the one with the lowest total cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization:",
"sec_num": null
},
{
"text": "The dynamic programming condition for pruning suboptimal partial analyses is as follows. Whenever there are two phrases p: (w,t,i,j,m,q,c) p' = (w, t', i, j, m, q, c') , and c ~ is greater than c, then we can remove p~ because for any derivation involving p~ that spans the entire string, there will be a lower cost derivation involving p. This pruning condition is effective at curbing a combinatorial explosion arising from, for example, prepositional phrase attachment ambiguities (coded in the alternative trees t and t').",
"cite_spans": [
{
"start": 123,
"end": 167,
"text": "(w,t,i,j,m,q,c) p' = (w, t', i, j, m, q, c')",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning:",
"sec_num": null
},
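{
"text": "The left combination and the pruning condition can be sketched as follows (a simplified version under an invented dict encoding: left_costs indexes left transitions backwards, by the state they enter, and lex_costs and dep_costs stand in for the c4 and c5 parameter lookups):\ndef combine_left(p1, p2, left_costs, lex_costs, dep_costs):\n    # p1 = (w1, t1, i, k, m1, q1, c1) abuts p2 = (w2, t2, k, j, m2, q2, c2);\n    # w1's phrase becomes an r-dependent of the head w2. Parsing is\n    # bottom-up, so left_costs[(m2, q2)] maps (q2_prev, r) to the cost c3\n    # of the generative transition from q2_prev into q2 writing r.\n    w1, t1, i, k, m1, q1, c1 = p1\n    w2, t2, _, j, m2, q2, c2 = p2\n    for (q2_prev, r), c3 in left_costs.get((m2, q2), {}).items():\n        c4 = lex_costs.get((m1, q1, r, w1))  # q1 must be an initial state\n        c5 = dep_costs.get((w2, r, w1))\n        if c4 is None or c5 is None:\n            continue\n        t2_new = t2 + ((r, (w1, t1)),)       # add t1 as an r-dependent\n        yield (w2, t2_new, i, j, m2, q2_prev, c1 + c2 + c3 + c4 + c5)\n\ndef add_phrase(chart, phrase):\n    # Pruning: of all phrases sharing the signature (w, i, j, m, q), only\n    # the cheapest can take part in an optimal derivation; keep just it.\n    w, t, i, j, m, q, c = phrase\n    key = (w, i, j, m, q)\n    if key not in chart or c < chart[key][6]:\n        chart[key] = phrase",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning:",
"sec_num": null
},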
{
"text": "The worst case asymptotic time complexity of the analysis algorithm is O(min(n 2, IY12)n3), where n is the length of an input string and IVI is the size of the vocabulary. This limit can be derived in a similar way to cubic time tabular recognition algorithms for context free grammars (Younger 1967) with the grammar related term being replaced by the term min(n 2, IVI 2) since the words of the input sentence also act as categories in the head automata model. In this context \"recognition\" refers to checking that the input string can be generated from the grammar. Note that our algorithm is for analysis (in the sense of finding the best derivation) which, in general, is a higher time complexity problem than recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning:",
"sec_num": null
},
{
"text": "By generation here we mean determining the lowest cost linear surface ordering for the dependents of each word in an unordered dependency structure resulting from the transfer mapping described in Section 4. In general, the output of transfer is a dependency graph and the task of the generator involves a search for a backbone dependency tree for the graph, if necessary by adding dependency edges to join up unconnected components of the graph. For each graph component, the main steps of the search process, described non-deterministically, are 1. Select a node with word label w having a finite start of derivation cost C(w, m, ql t>).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3.2"
},
{
"text": "2. Execute a path through the head automaton m starting at state q and ending at state q' with a finite stop action cost C(Olq' , m). When making a transition with relation ri in the path, select a graph edge with label ri from w to some previously unvisited node wi with finite dependency cost C (~,wilw, ri) . Include the cost of the transition (e.g. C(---% ql, rilqi-1, m)) in the running total for this derivation.",
"cite_spans": [
{
"start": 297,
"end": 309,
"text": "(~,wilw, ri)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3.2"
},
{
"text": "3. For each dependent node wi, select a lexical entry with cost C(mi, qilri, J., wi), and recursively apply the machine rni from state ql as in step 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3.2"
},
{
"text": "4. Perform a left-parent-right traversal of the nodes of the resulting dependency tree, yielding a target string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3.2"
},
{
"text": "The target string resulting from the lowest cost tree that includes all nodes in the graph is selected as the translation target string. The independence assumptions implicit in head automata models mean that we can select lowest cost orderings of local dependency trees, below a given relation r, independently in the search for the lowest cost derivation. When the generator is used as part of the translation system, the dependency parameter costs are not, in fact, applied by the generator. Instead, because these parameters are independent of surface order, they are applied earlier by the transfer component, influencing the choice of structure passed to the generator. The transfer parameter table specifies costs for the application of transfer entries. In a contextindependent model, each entry has a single cost parameter. In context-dependent transfer models, the cost function takes into account the identities of the labels of the arcs and nodes dominating wi in the source graph. (Context dependence is discussed further in Section 5.) The set of transfer parameters may also include costs for the null transfer entries for wi, for use in derivations in which wi is translated by the entry for another word v. For example, the entry for v might be for translating an idiom involving wi as a modifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3.2"
},
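{
"text": "Step 4 above, the left-parent-right traversal, is the only step that touches surface order directly; a small sketch (the triple encoding of ordered trees is ours):\ndef yield_string(node):\n    # node = (word, left_dependents, right_dependents), each dependent\n    # itself such a triple; a left-parent-right traversal of the ordered\n    # dependency tree yields the target string.\n    word, lefts, rights = node\n    words = []\n    for d in lefts:\n        words.extend(yield_string(d))\n    words.append(word)\n    for d in rights:\n        words.extend(yield_string(d))\n    return words\n\ntree = ('flights', [('the', [], [])], [('to', [], [('Boston', [], [])])])\nprint(' '.join(yield_string(tree)))  # the flights to Boston",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3.2"
},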
{
"text": "Each entry in the bilingual lexicon specifies a way of mapping part of a dependency tree, specifically that part \"matching\" (as explained below) the source fragment of the entry, into part of a target graph, as indicated by the target fragment. Entry mapping functions specify how the set of target fragments for deriving a translation are to be combined: whenever an entry is applied, a global node-mapping function is extended to include the entry mapping function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3.2"
},
{
"text": "Transfer mapping takes a source dependency tree S from analysis and produces a minimum cost derivation of a target graph T and a (possibly partial) function f from source nodes to target nodes. In fact, the transfer model is applicable to certain types of source dependency graphs that are more general than trees, although the version of the head automata model described here only produces trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},
{
"text": "We will say that a tree fragment H matches an unordered dependency tree S if there is a function g (a matching function) from the nodes of H to the nodes of S such that \u2022 g is a total one-one function;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},
{
"text": "\u2022 if a node n of H has a label, and that label is word w, then the word label for g(n) is also w;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},
{
"text": "\u2022 for every arc in H with label r from node nl to node n2, there is an arc with label r from g(nz) to g(n2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},
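{
"text": "The three conditions translate directly into a checking function; the graph encoding below (a 'label' dict from nodes to words or None, and a set of (n1, r, n2) arc triples) is our own, for illustration:\ndef fragment_matches(H, S, g):\n    # Does tree fragment H match unordered dependency tree S under the\n    # candidate matching function g (a dict from H-nodes to S-nodes)?\n    if set(g) != set(H['label']) or len(set(g.values())) != len(g):\n        return False    # g must be total and one-one\n    for n, w in H['label'].items():\n        if w is not None and S['label'].get(g[n]) != w:\n            return False    # a labelled node must map to the same word\n    return all((g[n1], r, g[n2]) in S['arcs']\n               for n1, r, n2 in H['arcs'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},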
{
"text": "Unlike first order unification, this definition of matching is not commutative and is not deterministic in that there may be multiple matching functions for applying a bilingual entry to an input source tree. A particular match of an entry against a dependency tree can be represented by the matching function g, a set of arcs A in S, and the (possibly context dependent) cost c of applying the entry. \u2022 k is the number of nodes in the source tree S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},
{
"text": "\u2022 Each Ei, 1 < i ~ k, is a bilingual entry (wi, Hi, hi, Gi, fil matching S with function gi (see Figure 2 ) and arcs Ai. A tiling of S yields a costed derivation of a target dependency graph T as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 105,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},
{
"text": "\u2022 The cost of the derivation is the sum of the costs ci for each match in the tiling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},
{
"text": "\u2022 The nodes and arcs of T are composed of the nodes and arcs of the target fragments Gi for the entries Ei.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},
{
"text": "\u2022 Let fi and fj be the mapping functions for entries Ei and Ej. For any node n of S for which target nodes fi(g[l(n)) and fj(g~l(n)) are defined, these two nodes are identified as a single node f(n) in T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},
{
"text": "The merging of target fragment nodes in the last condition has the effect of joining the target fragments in a consistent fashion. The node mapping function f for the entire tree thus has a different role from the alignment function in the IBM statistical translation model (Brown et al. 1990 (Brown et al. , 1993 ; the role of the latter includes the linear ordering of words in the target string. In our approach, target word order is handled exclusively by the target monolingual model.",
"cite_spans": [
{
"start": 274,
"end": 292,
"text": "(Brown et al. 1990",
"ref_id": "BIBREF0"
},
{
"start": 293,
"end": 313,
"text": "(Brown et al. , 1993",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},
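{
"text": "One way to realise the merge is with a union-find over target nodes; in this sketch (encoding ours) matches is a list of (ci, fi) pairs, one per applied entry, with fi the source-to-target node mapping contributed by entry Ei:\ndef derive_target_mapping(matches):\n    parent = {}\n    def find(x):                # union-find representative of a target node\n        parent.setdefault(x, x)\n        if parent[x] != x:\n            parent[x] = find(parent[x])\n        return parent[x]\n    cost, f = 0.0, {}\n    for ci, fi in matches:\n        cost += ci              # derivation cost: sum of the match costs\n        for n, t in fi.items():\n            if n in f:\n                parent[find(t)] = find(f[n])  # identify both images of n\n            else:\n                f[n] = t\n    return cost, {n: find(t) for n, t in f.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, Tiling, and Derivation",
"sec_num": "4.2"
},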
{
"text": "The main transfer search is preceded by a bilingual lexicon matching phase. This leads to greater efficiency as it avoids repeating matching operations during the search phase, and it allows a static analysis of the matching entries and source tree to identify subtrees for which the search phase can safely prune out suboptimal partial translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer Algorithm",
"sec_num": "4.3"
},
{
"text": "Transfer Configurations In order to apply target language model relation costs incrementally, we need to distinguish between complete and incomplete arcs: an arc is complete if both its nodes have labels, otherwise it is incomplete. The output of the lexicon matching phrase, and the partial derivations manipulated by the search phase are both in the form of transfer configurations (S,R,T,P,f,c,I) where S is the set of source nodes and arcs consumed so far in the derivation, R the remaining source nodes and arcs, f the mapping function built so far, T the set of nodes and complete arcs of the target graph, P the set of incomplete target arcs, c the partial derivation cost, and I a set of source nodes for which entries have yet to be applied.",
"cite_spans": [
{
"start": 384,
"end": 399,
"text": "(S,R,T,P,f,c,I)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer Algorithm",
"sec_num": "4.3"
},
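{
"text": "The seven components are conveniently carried around as a small record; a sketch:\nfrom dataclasses import dataclass\n\n@dataclass\nclass Configuration:\n    S: frozenset  # source nodes and arcs consumed so far\n    R: frozenset  # remaining source nodes and arcs\n    T: frozenset  # target nodes and complete arcs\n    P: frozenset  # incomplete target arcs\n    f: dict       # partial source-node to target-node mapping\n    c: float      # partial derivation cost\n    I: frozenset  # source nodes with entries yet to be applied",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer Algorithm",
"sec_num": "4.3"
},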
{
"text": "Lexical matching phase The algorithm for lexical matching has a similar control structure to standard unification algorithms, except that it can result in multiple matches. We omit the details. The lexicon matching phase returns, for each source node i, a set of runtime entries. There is one runtime entry for each successful match and possibly a null entry for the node if the word label for i is included in successful matches for other entries. Runtime entries are transfer configurations of the form (Hi, \u00a2, Gi, Pi, fi, ci, {i}) in which Hi is the source fragment for the entry with each node replaced by its image under the applicable matching function; Gi the target fragment for the entry, except for the incomplete arcs Pi of this fragment; fi the composition of mapping function for the entry with the inverse of the matching function; ci the cost of applying the entry in the context of its match with the source graph plus the cost in the target model of the arcs in Gi.",
"cite_spans": [
{
"start": 505,
"end": 533,
"text": "(Hi, \u00a2, Gi, Pi, fi, ci, {i})",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer Algorithm",
"sec_num": "4.3"
},
{
"text": "Transfer Search Before the transfer search proper, the resulting runtime entries together with the source graph are analyzed to determine decomposition nodes. A decomposition node n is a source tree node for which it is safe to prune suboptimal translations of the subtree dominated by n. Specifically, it is checked that n is the root node of all source fragments Hn of runtime entries in which both n and its node label are included, and that fn(n) is not dominated by (i.e. not reachable via directed arcs from) another node in the target graph Gn of such entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer Algorithm",
"sec_num": "4.3"
},
{
"text": "Transfer search maintains a set M of active runtime entries. InitiMly, this is the set of runtime entries resulting from the lexicon matching phase. Overall search control is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer Algorithm",
"sec_num": "4.3"
},
{
"text": "1. Determine the set of decomposition nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer Algorithm",
"sec_num": "4.3"
},
{
"text": "that if nl dominates n2 in S then n2 precedes nl in D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sort the decomposition nodes into a list D such",
"sec_num": "2."
},
{
"text": "3. If D is empty, apply the subtree transfer search (given below) to S, return the lowest cost solution, and stop. 4. Remove the first decomposition node n from D and apply the subtree transfer search to the subtree S ~ dominated by n, to yield solutions (s', \u00a2, T', \u00a2, f', c', \u00a2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sort the decomposition nodes into a list D such",
"sec_num": "2."
},
{
"text": "5. Partition these solutions into subsets with the same word label for the node fl(n), and select the solution with lowest cost c' from each subset. 6. Remove from M the set of runtime entries for nodes in S ~. 7. For each selected subtree solution, add to M a new runtime entry (S', \u00a2, T', f', c', {n}). 8. Repeat from step 3. The subtree transfer search maintains a queue Q of configurations corresponding to partial derivations for translating the subtree. Control follows a standard non-deterministic search paradigm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sort the decomposition nodes into a list D such",
"sec_num": "2."
},
{
"text": "1. Initialize Q to contain a single configuration (\u00a2, R0, \u00a2, \u00a2, \u00a2, 0, I0) with the input subtree R0 and the set of nodes I0 in R0. 2. If Q is empty, return the lowest cost solution found and stop. 3. Remove a configuration iS, R, T, P, f, c, I) from the queue. 4. If R is empty, add the configuration to the set of subtree solutions. 5. Select a node i from I. 6. For each runtime entry (Hi, \u00a2, Gi, Pi, fi, cl, {i}) for i, if Hi is a subgraph of R, add to Q a configuration iS 0 Hi, T O Gi 0 G', fO fi, c +ci +cv, , , where G' is the set of newly completed arcs (those in P t3 Pi with both node labels in T U Gi O P 0 Pi) and cg, is the cost of the arcs G' in the target language model. 7. For any source node n for which f(n) and fi(n)",
"cite_spans": [
{
"start": 387,
"end": 415,
"text": "(Hi, \u00a2, Gi, Pi, fi, cl, {i})",
"ref_id": null
},
{
"start": 483,
"end": 495,
"text": "T O Gi 0 G',",
"ref_id": null
},
{
"start": 496,
"end": 502,
"text": "fO fi,",
"ref_id": null
},
{
"start": 503,
"end": 513,
"text": "c +ci +cv,",
"ref_id": null
},
{
"start": 514,
"end": 515,
"text": ",",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sort the decomposition nodes into a list D such",
"sec_num": "2."
},
{
"text": "are both defined, merge these two target nodes. 8. Repeat from step 2. Keeping the arcs P separate in the configuration allows efficient incremental application of target dependency costs cv, during the search, so these costs are taken into account in the pruning step of the overall search control. This way we can keep the benefits of monolingual/bilingual modularity (Isabelle and Macklovitch 1986) without the compu-tationM overhead of transfer-and-filter (Alshawi et al. 1992) .",
"cite_spans": [
{
"start": 460,
"end": 481,
"text": "(Alshawi et al. 1992)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sort the decomposition nodes into a list D such",
"sec_num": "2."
},
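{
"text": "A compressed sketch of the subtree search loop; subgraph testing, arc completion, and the node merging of step 7 are passed in as helper functions, and all names are ours:\ndef subtree_search(R0, I0, entries_for, completed, arc_cost):\n    # Configurations are tuples (S, R, T, P, f, c, I) over frozensets;\n    # entries_for(i) yields runtime entries (Hi, Gi, Pi, fi, ci) for node i.\n    queue = [(frozenset(), frozenset(R0), frozenset(), frozenset(),\n              {}, 0.0, frozenset(I0))]\n    solutions = []\n    while queue:                                    # step 2\n        S, R, T, P, f, c, I = queue.pop()           # step 3\n        if not R:\n            solutions.append((S, R, T, P, f, c, I)) # step 4\n            continue\n        if not I:\n            continue                                # dead end\n        i = next(iter(I))                           # step 5\n        for Hi, Gi, Pi, fi, ci in entries_for(i):   # step 6\n            if not Hi <= R:\n                continue\n            G_new = completed(P | Pi, T | Gi)       # newly completed arcs\n            queue.append((S | Hi, R - Hi, T | Gi | G_new, (P | Pi) - G_new,\n                          {**f, **fi}, c + ci + arc_cost(G_new), I - {i}))\n    return min(solutions, key=lambda s: s[5], default=None)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer Algorithm",
"sec_num": "4.3"
},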
{
"text": "It is possible to apply the subtree search directly to the whole graph starting with the initial runtime entries from lexical matching. However, this would result in an exponential search, specifically a search tree with a branching factor of the order of the number of matching entries per input word. Fortunately, long sentences typically have several decomposition nodes, such as the heads of noun phrases, so the search as described is factored into manageable components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sort the decomposition nodes into a list D such",
"sec_num": "2."
},
{
"text": "Cost Functions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "The head automata model and transfer model were originally conceived as probabilistic models. In order to take advantage of more of the information available in our training data, we experimented with cost functions that make use of incorrect translations as negative examples and also to treat the correctness of a translation hypothesis as a matter of degree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Costed Search Processes",
"sec_num": "5.1"
},
{
"text": "To experiment with different models, we implemented a general mechanism for associating costs to solutions of a search process. Here, a search process is conceptualized as a non-deterministic computation that takes a single input string, undergoes a sequence of state transitions in a non-deterministic fashion, then outputs a solution string. Process states are distinct from, but may include, head automaton states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Costed Search Processes",
"sec_num": "5.1"
},
{
"text": "A cost function for a search process is a real valued function defined on a pair of equivalence classes of process states. The first element of the pair, a context c, is an equivalence class of states before transitions. The second element, an event e, is an equivalence class of states after transitions. (The equivalence relations for contexts and events may be different.) We refer to an event-context pair as a choice, for which we use the notation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Costed Search Processes",
"sec_num": "5.1"
},
{
"text": "borrowed from the special case of conditional probabilities. The cost of a derivation of a solution by the process is taken to be the sum of costs of choices involved in the derivation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(efc)",
"sec_num": null
},
{
"text": "We represent events and contexts by finite sequences of symbols (typically words or relation symbols in the translation application). We write C(al'\"anlbl'\"bk) for the cost of the event represented by (al ..-a,~) in the context represented by(b1 ..-bk).",
"cite_spans": [
{
"start": 143,
"end": 159,
"text": "C(al'\"anlbl'\"bk)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(efc)",
"sec_num": null
},
{
"text": "\"Backed off\" costs can be computed by averaging over larger equivalence classes (represented by shorter sequences in which positions are eliminated systematically). A similar smoothing technique has been applied to the specific case of prepositional phrase attachment by Collins and Brooks (1995) . We have used backed off costs in the translation application for the various cost functions described be-low. Although this resulted in some improvement in testing, so far the improvement has not been statistically significant.",
"cite_spans": [
{
"start": 271,
"end": 296,
"text": "Collins and Brooks (1995)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(efc)",
"sec_num": null
},
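{
"text": "A crude sketch of one way to realise backing off, dropping context positions from the right until a parameter is found (the elimination order and the flat default penalty are our choices, not the paper's):\nDEFAULT_COST = 20.0  # arbitrary penalty for completely unseen choices\n\ndef backed_off_cost(cost_table, event, context):\n    # cost_table maps (event, context_prefix) pairs to costs computed over\n    # the corresponding equivalence classes; context is a tuple of symbols.\n    for k in range(len(context), -1, -1):\n        c = cost_table.get((event, context[:k]))\n        if c is not None:\n            return c\n    return DEFAULT_COST",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Costed Search Processes",
"sec_num": null
},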
{
"text": "Taken together, the events, contexts, and cost function constitute a process cost model, or simply a model. The cost function specifies the model parameters; the other components are the model structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Cost Functions",
"sec_num": "5.2"
},
{
"text": "We have experimented with a number of model types, including the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Cost Functions",
"sec_num": "5.2"
},
{
"text": "In this model we assume a probability distribution on the possible events for a context, that is, E~ P(elc) = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model:",
"sec_num": null
},
{
"text": "The cost parameters of the model are defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model:",
"sec_num": null
},
{
"text": "C(elc) = -ln(P(elc)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model:",
"sec_num": null
},
{
"text": "Given a set of solutions from executions of a process, let n+(e]e) be the number of times choice (e[c) was taken leading to acceptable solutions (e.g. correct translations) and n+(c) be the number of times context c was encountered for these solutions. We can then estimate the probabilistic model costs with C(elc ) ~ ln(n+(c)) -ln(n+(elc)). Mean distance model: In the mean distance model, we make use of some measure of goodness of a solution ts for some input s by comparing it against an ideal solution is for s with a distance metric h: h(t,,i,) ~ d in which d is a non-negative real number. A parameter for choice (e]c) in the distance model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model:",
"sec_num": null
},
{
"text": "C(elc) = Eh(elc)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model:",
"sec_num": null
},
{
"text": "is the mean value of h(t~,t~) for solutions t, produced by derivations including the choice (eIc).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model:",
"sec_num": null
},
{
"text": "The mean distance model does not use the constraint that a particular choice faced by a process is always a choice between events with the same context. It is also somewhat sensitive to peculiarities of the distance function h. With the same assumptions we made for the mean distance model, let that is, the ratio of the expected distance for derivations involving the choice and the expected distance for all derivations involving the context for that choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized distance model:",
"sec_num": null
},
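{
"text": "The three cost functions reduce to small estimation formulas over pooled counts and distances; a sketch, with the normalized model written as the ratio described above:\nimport math\n\ndef probabilistic_cost(n_good_choice, n_good_context):\n    # C(e|c) estimated as ln n+(c) - ln n+(e|c)\n    return math.log(n_good_context) - math.log(n_good_choice)\n\ndef mean_distance_cost(choice_distances):\n    # C(e|c) = Eh(e|c): mean distance over solutions whose derivations\n    # took the choice (e|c)\n    return sum(choice_distances) / len(choice_distances)\n\ndef normalized_distance_cost(choice_distances, context_distances):\n    # C(e|c) = Eh(e|c) / Eh(c)\n    mean = lambda xs: sum(xs) / len(xs)\n    return mean(choice_distances) / mean(context_distances)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized distance model:",
"sec_num": null
},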
{
"text": "Reflexive Training If we have a manually translated corpus, we can apply the mean and normalized distance models to translation by taking the ideal solution t~ for translating a source string s to be the manual translation for s. In the absence of good metrics for comparing translations, we employ a heuristic string distance metric to compare word selection and word order in t~ and ~s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized distance model:",
"sec_num": null
},
{
"text": "In order to train the model parameters without a manually translated corpus, we use a \"reflexive\" training method (similar in spirit to the \"wakesleep\" algorithm, Hinton et al. 1995) . In this method, our search process translates a source sentence s to ts in the target language and then translates t~ back to a source language sentence #. The original sentence s can then act as the ideal solution of the overall process. For this training method to be effective, we need a reasonably good initial model, i.e. one for which the distance h(s, #) is inversely correlated with the probability that t~ is a good translation of s.",
"cite_spans": [
{
"start": 163,
"end": 182,
"text": "Hinton et al. 1995)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized distance model:",
"sec_num": null
},
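{
"text": "A sketch of the reflexive training loop; translate and back_translate are assumed, for illustration, to return both an output string and the list of choices used in its derivation:\ndef reflexive_distances(sentences, translate, back_translate, h):\n    # Round-trip each sentence and pool the distance h(s, s2) onto every\n    # choice used along the way; the pools feed the mean or normalized\n    # distance cost estimates above.\n    pooled = {}\n    for s in sentences:\n        t_s, out_choices = translate(s)\n        s2, back_choices = back_translate(t_s)\n        d = h(s, s2)\n        for choice in out_choices + back_choices:\n            pooled.setdefault(choice, []).append(d)\n    return pooled",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized distance model:",
"sec_num": null
},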
{
"text": "We have built an experimental translation system using the monolingual and translation models described in this paper. The system translates sentences in the ATIS domain (Hirschman et al. 1993) between English and Mandarin Chinese. The translator is in fact a subsystem of a speech translation prototype, though the experiments we describe here are for transcribed spoken utterances. (We informally refer to the transcribed utterances as sentences.) The average time taken for translation of sentences (of unrestricted length) from the ATIS corpus was around 1.7 seconds with approximately 0.4 seconds being taken by the analysis algorithm and 0.7 seconds by the transfer algorithm. English and Chinese lexicons of around 1200 and 1000 words respectively were constructed. Altogether, the entries in these lexicons made reference to around 200 structurally distinct head automata. The transfer lexicon contained around 3500 paired graph fragments, most of which were used in both transfer directions. With this model structure, we tried a number of methods for assigning cost functions. The nature of the training methods and their corresponding cost functions meant that different amounts of training data could be used, as discussed further below.",
"cite_spans": [
{
"start": 170,
"end": 193,
"text": "(Hirschman et al. 1993)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental System",
"sec_num": "6"
},
{
"text": "The methods make use of a supervised training set and an unsupervised training set, both sets being chosen at random from the 20,000 or so ATIS sentences available to us. The supervised training set comprised around 1950 sentences. A subcollection of 1150 of these sentences were translated by the system, and the resulting translations manually classified as 'good' (800 translations) or 'bad' (350 translations). The remaining 800 supervised training set sentences were hand-tagged for prepositional attachment points. (Prepositional phrase attachment is a major cause of ambiguity in the ATIS corpus, and moreover can affect English-Chinese translation, see Chen and Chen 1992 .) The attachment information was used to generate additional negative and positive counts for dependency choices. The unsupervised training set consisted of approximately 13,000 sentences; it was used for automatic training (as described under 'Reflexive Training' above) by translating the sentences into Chinese and back to English.",
"cite_spans": [
{
"start": 661,
"end": 679,
"text": "Chen and Chen 1992",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental System",
"sec_num": "6"
},
{
"text": "In this model, all choices were assigned the same cost except for irregular events (such as unknown words or partial analyses) which were all assigned a high penalty cost. This model gives an indication of performance based solely on model structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A. Qualitative Baseline:",
"sec_num": null
},
{
"text": "Counts for choices leading to good translations for sentences of the supervised training corpus, together with counts from the manually assigned attachment points, were used to compute negated log probability costs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B. Probabilistic:",
"sec_num": null
},
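{
"text": "A minimal sketch of the cost computation for this method, assuming positive counts are collected in a dictionary n_plus mapping (event, context) pairs to counts; the cost of a choice is its negated log probability relative to all choices observed in the same context.\n\nimport math\nfrom collections import defaultdict\n\ndef probabilistic_costs(n_plus):\n    # Total positive count per context, for normalization.\n    context_totals = defaultdict(int)\n    for (e, c), n in n_plus.items():\n        context_totals[c] += n\n    # C(e|c) = -ln( n+(e|c) / n+(c) )\n    return {(e, c): -math.log(n / context_totals[c])\n            for (e, c), n in n_plus.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B. Probabilistic:",
"sec_num": null
},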
{
"text": "The positive counts as in the probabilistic method, together with corresponding negative counts from bad translations or incorrect attachment choices, were used to compute log likelihood ratio costs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C. Discriminative:",
"sec_num": null
},
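{
"text": "A minimal sketch of the discriminative costs under the same count-dictionary assumption, now with negative counts n_minus from bad translations and incorrect attachment choices; the add-one smoothing is an added assumption, used here only to keep costs finite for events seen solely positively or solely negatively.\n\nimport math\n\ndef discriminative_costs(n_plus, n_minus):\n    # C(e|c) = ln(n-(e|c)) - ln(n+(e|c)): high cost for choices that mostly\n    # led to negative solutions, negative cost for reliable choices.\n    events = set(n_plus) | set(n_minus)\n    return {ec: math.log(n_minus.get(ec, 0) + 1) - math.log(n_plus.get(ec, 0) + 1)\n            for ec in events}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C. Discriminative:",
"sec_num": null
},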
{
"text": "In this fully automatic method, normalized distance costs were computed from reflexive translation of the sentences in the unsupervised training corpus. The translation runs were carried out with parameters from method A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D. Normalized Distance:",
"sec_num": null
},
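{
"text": "A minimal sketch of the cost computation for this method, consuming the output of the reflexive training pass sketched earlier: each (event, context) pair is assigned the mean round-trip distance of the solutions whose choice sequences contained it, so choices that tend to produce faithful round trips receive low costs.\n\nfrom collections import defaultdict\n\ndef normalized_distance_costs(scored):\n    dist_sum, count = defaultdict(float), defaultdict(int)\n    for choices, d in scored:\n        for ec in set(choices):  # each (event, context) pair in the solution\n            dist_sum[ec] += d\n            count[ec] += 1\n    return {ec: dist_sum[ec] / count[ec] for ec in dist_sum}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D. Normalized Distance:",
"sec_num": null
},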
{
"text": "The same as method D except that the system used to carry out the reflexive translation was running with parameters from method C. Table 1 shows the results of evaluating the performance of these models for translating 200 unrestricted length ATIS sentences into Chinese. This was a previously unseen test set not included in any of the training sets. Two measures of translation acceptability are shown, as judged by a Chinese speaker. (In separate experiments, we verified that the judgments of this speaker were near the average of five Chinese speakers). The first measure, \"meaning and grammar\", gives the percentage of sentence translations judged to preserve meaning without the introduction of grammatical errors. For the second measure, \"meaning preservation\", grammatical errors were allowed if they did not interfere with meaning (in the sense of misleading the hearer). In the table, we have grouped together methods A and D for which the parameters were derived without human supervision effort, and methods B, C, and E which depended on the same amount of human supervision effort. This means that side by side comparison of these methods has practical relevance, even though the methods exploited different amounts of data. In the case of E, the supervision effort was used only as an oracle during training, not directly in the cost computations.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "E. Bootstrapped Normalized Distance:",
"sec_num": null
},
{
"text": "We can see from Table 1 that the choice of method affected translation quality (meaning and grammar) more than it affected preservation of meaning. A possible explanation is that the model structure was adequate for most lexical choice decisions because of the relatively low degree of polysemy in the ATIS corpus. For the stricter measure, the differences were statistically significant, according to the sign test at the 5% significance level, for the following comparisons: C and E each outperformed B and D, and B and D each outperformed A.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "E. Bootstrapped Normalized Distance:",
"sec_num": null
},
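{
"text": "For concreteness, a minimal sketch of the paired sign test used for these comparisons, assuming per-sentence acceptability judgments (True/False) for two methods over the same test set; ties are discarded and the two-sided binomial tail gives the p-value.\n\nfrom math import comb\n\ndef sign_test(judgments_x, judgments_y):\n    wins_x = sum(1 for x, y in zip(judgments_x, judgments_y) if x and not y)\n    wins_y = sum(1 for x, y in zip(judgments_x, judgments_y) if y and not x)\n    n, k = wins_x + wins_y, min(wins_x, wins_y)\n    # Two-sided tail probability under a fair-coin null hypothesis.\n    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n\n    return min(p, 1.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E. Bootstrapped Normalized Distance:",
"sec_num": null
},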
{
"text": "Language Processing and Semantic Representations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "The translation system we have described employs only simple representations of sentences and phrases. Apart from the words themselves, the only symbols used are the dependency relations R. In our experimental system, these relation symbols are themselves natural language words, although this is not a necessary property of our models. Information coded explicitly in sentence representations by word senses and feature constraints in our previous work (Alshawi 1992 ) is implicit in the models used to derive the dependency trees and translations. In particular, dependency parameters and context-dependent transfer parameters give rise to an implicit, graded notion of word sense. For language-centered applications like translation or summarization, for which we have a large body of examples of the desired behavior, we can think of the task in terms of the formal problem of modeling a relation between strings based on exampies of that relation. By taking this viewpoint, we seem to be ignoring the intuition that most interesting natural language processing tasks (translation, summarization, interfaces) are semantic in nature.",
"cite_spans": [
{
"start": 454,
"end": 467,
"text": "(Alshawi 1992",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "It is therefore tempting to conclude that an adequate treatment of these tasks requires the manipulation of artificial semantic representation languages with well-understood formal denotations. While the intuition seems reasonable, the conclusion might be too strong in that it rules out the possibility that natural language itself is adequate for manipulating semantic denotations. After all, this is the primary function of natural language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "The main justification for artificial semantic representation languages is that they are unambiguous by design. This may not be as critical, or useful, as it might first appear. While it is true that natural language is ambiguous and under-specified out of context, this uncertainty is greatly reduced by context to the point where further resolution (e.g. full scoping) is irrelevant to the task, or even the intended meaning. The fact that translation is insensitive to many ambiguities motivated the use of unresolved quasi-logical form for transfer (Alshawi et al. 1992) .",
"cite_spans": [
{
"start": 553,
"end": 574,
"text": "(Alshawi et al. 1992)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "To the extent that contextual resolution is necessary, context may be provided by the state of the language processor rather than complex semantic representations. Local context may include the state of local processing components (such as our head automata) for capturing grammatical constraints, or the identity of other words in a phrase for capturing sense distinctions. For larger scale context, I have argued elsewhere (Alshawi 1987 ) that memory activation patterns resulting from the process of carrying out an understanding task can act as global context without explicit representations of discourse. Under this view, the challenge is how to exploit context in performing a task rather than how to map natural language phrases to expressions of a formalism for coding meaning independently of context or intended use.",
"cite_spans": [
{
"start": 425,
"end": 438,
"text": "(Alshawi 1987",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "There is now greater understanding of the formal semantics of under-specified and ambiguous representations. In Alshawi 1996, I provide a denotational semantics for a simple under-specified language and argue for extending this treatment to a formal semantics of natural language strings as expressions of an under-specified representation. In this paradigm, ordered dependency trees can be viewed as natural language strings annotated so that some of the implicit relations are more explicit. A milder form of this kind of annotation is a bracketed natural language string. We are not advocating an approach in which linguistic structure is ignored (as it is in the IBM translator described by Brown et al. 1990 ), but rather one in which the syntactic and semantic structure of a string is implicit in the way it is processed by an interpreter.",
"cite_spans": [
{
"start": 695,
"end": 712,
"text": "Brown et al. 1990",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
},
{
"text": "One important advantage of using representations that are close to natural language itself is that it reduces the degrees of freedom in specifying language and task models, making these models easier to ac-quire automatically. With these considerations in mind, we have started to experiment with a version of the translator described here with even simpler representations and for which the model structure, not just the parameters, can be acquired automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work on cost functions and training methods was carried out jointly with Adam Buchsbaum who also customized the English model to ATIS and integrated the translator into our speech translation prototype. Jishen He constructed the Chinese ATIS language model and bilingual lexicon and identified many problems with early versions of the transfer component. I am also grateful for advice and help from Don Hindle, Fernando Pereira, Chi-Lin Shih, Richard Sproat, and Bin Wu.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Letting the Cat out of the Bag: Generation for Shake-and-Bake MT",
"authors": [
{
"first": "H",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cambridge",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "England",
"suffix": ""
},
{
"first": "H",
"middle": [
"; K"
],
"last": "Alshawi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Van Deemter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Publications",
"suffix": ""
},
{
"first": "California",
"middle": [],
"last": "Stanford",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Carter",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Gamback",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rayner ; Brown",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cocks",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rossin",
"suffix": ""
}
],
"year": 1969,
"venue": "Proceedings of COL-ING92, the International Conference on Computational Linguistics",
"volume": "16",
"issue": "",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alshawi, H. 1987. Memory and Context for Language Interpretation. Cambridge University Press, Cambridge, England. Alshawi, H. 1996. \"Underspecified First Order Log- ics\". In Semantic Ambiguity and Underspecification, edited by K. van Deemter and S. Peters, CSLI Publi- cations, Stanford, California. Alshawi, H. 1992. The Core Language Engine. MIT Press, Cambridge, Massachusetts. Alshawi, H., D. Carter, B. Gamback and M. Rayner. 1992. \"Swedish-English QLF Translation\". In H. A1- shawi (ed.) The Core Language Engine. MIT Press, Cambridge, Massachusetts. Booth, T. 1969. \"Probabilistic Representation of For- real Languages\". Tenth Annual IEEE Symposium on Switching and Automata Theory. Brew, C. 1992. \"Letting the Cat out of the Bag: Gen- eration for Shake-and-Bake MT'. Proceedings of COL- ING92, the International Conference on Computational Linguistics, Nantes, France. Brown, P., J. Cocks, S. Della Pietra, V. Della Pietra, F. Jelinek, J. Lafferty, R. Mercer and P. Rossin. 1990. \"A Statistical Approach to Machine Translation\". Com- putational Linguistics 16:79-85.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "19",
"issue": "",
"pages": "263--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P.F., S.A. Della Pietra, V.J. Della Pietra, and R.L. Mercer. 1993. \"The Mathematics of Statistical Machine Translation: Parameter Estimation\". Compu- tational Linguistics 19:263-312.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Attachment and Transfer of Prepositional Phrases with Constraint Propagation",
"authors": [
{
"first": "K",
"middle": [
"H"
],
"last": "Chen",
"suffix": ""
},
{
"first": "H",
"middle": [
"H"
],
"last": "Chen",
"suffix": ""
}
],
"year": 1992,
"venue": "Computer Processing of Chinese and Oriental Languages",
"volume": "6",
"issue": "2",
"pages": "123--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, K.H. and H. H. Chen. 1992. \"Attachment and Transfer of Prepositional Phrases with Constraint Prop- agation\". Computer Processing of Chinese and Oriental Languages, Vol. 6, No. 2, 123-142.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Coping with Syntactic Ambiguity or How to Put the Block in the Box on the Table",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Path",
"suffix": ""
}
],
"year": 1982,
"venue": "Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "139--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church K. and R. PatH. 1982. \"Coping with Syntactic Ambiguity or How to Put the Block in the Box on the Table\". Computational Linguistics 8:139-149.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Prepositional Phrase Attachment through a Backed-Off Model",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Brooks",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Third Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "27--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, M. and J. Brooks. 1995. \"Prepositional Phrase Attachment through a Backed-Off Model.\" Pro- ceedings of the Third Workshop on Very Large Corpora, Cambridge, Massachusetts, ACL, 27-38.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Machine Translation Divergences: A Formal Description and Proposed Solution",
"authors": [
{
"first": "B",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "",
"pages": "597--634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dorr, B.J. 1994. \"Machine Translation Divergences: A Formal Description and Proposed Solution\". Compu- tational Linguistics 20:597-634.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Accurate Methods for Statistics of Surprise and Coincidence",
"authors": [
{
"first": "T",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "61--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunning, T. 1993. \"Accurate Methods for Statistics of Surprise and Coincidence.\" Computational Linguistics. 19:61-74.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An Efficient Context-Free Parsing Algorithm",
"authors": [
{
"first": "J",
"middle": [],
"last": "Early",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Gazdar",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "G",
"middle": [
"K"
],
"last": "Pullum",
"suffix": ""
},
{
"first": "I",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
}
],
"year": 1970,
"venue": "Communications of the ACM",
"volume": "14",
"issue": "",
"pages": "453--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Early, J. 1970. \"An Efficient Context-Free Parsing Algorithm\". Communications of the ACM 14: 453-60. Gazdar, G., E. Klein, G.K. Pullum, and I.A.Sag.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The 'Wake-Sleep' Algorithm for Unsupervised Neural Networks",
"authors": [
{
"first": "G",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Dayan",
"suffix": ""
},
{
"first": "B",
"middle": [
"J"
],
"last": "Frey",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Neal",
"suffix": ""
}
],
"year": 1995,
"venue": "Science",
"volume": "268",
"issue": "",
"pages": "1158--1161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinton, G.E., P. Dayan, B.J. Frey and R.M. Neal. 1995. \"The 'Wake-Sleep' Algorithm for Unsupervised Neural Networks\". Science 268:1158-1161.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word Grammar. Blackwell, Oxford",
"authors": [
{
"first": "R",
"middle": [
"A"
],
"last": "Hudson",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hudson, R.A. 1984. Word Grammar. Blackwell, Ox- ford.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multi-Site Data Collection and Evaluation in Spoken Language Understanding",
"authors": [
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Dahl",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Garofolo",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pallett",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hunicke-Smith",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Price",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rudnicky",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Tzoukermann",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Human Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "19--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirschman, L., M. Bates, D. Dahl, W. Fisher, J. Garo- folo, D. Pallett, K. Hunicke-Smith, P. Price, A. Rud- nicky, and E. Tzoukermann. 1993. \"Multi-Site Data Collection and Evaluation in Spoken Language Under- standing\". In Proceedings of the Human Language Tech- nology Workshop, Morgan Kaufmann, San Francisco, 19-24.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "X-bar Syntax: A Study of Phrase Structure",
"authors": [
{
"first": "P",
"middle": [],
"last": "Isabelle",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Macklovitch",
"suffix": ""
}
],
"year": 1977,
"venue": "Eleventh International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "115--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabelle, P. and E. Macklovitch. 1986. \"Transfer and MT Modularity\", Eleventh International Conference on Computational Linguistics, Bonn, Germany, 115-117. Jackendoff, R.S. 1977. X-bar Syntax: A Study of Phrase Structure. MIT Press, Cambridge, Mas- sachusetts.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Principles of Lexical Language Modeling for Speech Recognition",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 199P AAAI Fall Symposium on Probabilistic Approaches to Natural Language",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, F., R.L. Mercer and S. Roukos. 1992. \"Prin- ciples of Lexical Language Modeling for Speech Recog- nition\". In S. Furui and M.M. Sondhi (eds.), Advances in Speech Signal Processing, Marcel Dekker, New York. Lafferty, J., D. Sleator and D. Temperley. 1992. \"Grammatical Trigrams: A Probabilistic Model of Link Grammar\". In Proceedings of the 199P AAAI Fall Sym- posium on Probabilistic Approaches to Natural Language, 89-97.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Complex Transfer in MT: A Survey of Examples",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the Workshop on Parsing Technologies, Pittsburg",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kay, M. 1989. \"Head Driven Parsing\". In Proceed- ings of the Workshop on Parsing Technologies, Pitts- burg, 1989. Lindop, J. and 3. Tsujii. 1991. \"Complex Transfer in MT: A Survey of Examples\". Technical Report 91/5, Centre for Computational Linguistics, UMIST, Manch- ester, UK.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Probabilistic Tree-Adjoining Grammar as a Framework for Statistical Natural Language Processing",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of COLING-9P",
"volume": "",
"issue": "",
"pages": "418--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik, P. 1992. \"Probabilistic Tree-Adjoining Gram- mar as a Framework for Statistical Natural Language Processing\". In Proceedings of COLING-9P, Nantes, France, 418-424.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stochastic Lexicalized Tree-Adjoining Grammars",
"authors": [
{
"first": "G",
"middle": [],
"last": "Sata",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Stock",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "426--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sata, G. and O. Stock. 1989. \"Head-Driven Bidi- rectional Parsing\". In Proceedings of the Workshop on Parsing Technologies, Pittsburg, 1989. Schabes, Y. 1992. \"Stochastic Lexicalized Tree- Adjoining Grammars\". In Proceedings of COLING-9P, Nantes, France, 426-432.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Recognition and Parsing of Context-Free Languages in Time n 3. Information and Control",
"authors": [
{
"first": "P",
"middle": [
"J"
],
"last": "Whitelock",
"suffix": ""
}
],
"year": 1967,
"venue": "Proceedings of COLING92, the International Conference on Computational Linguistics",
"volume": "10",
"issue": "",
"pages": "189--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Whitelock, P.J. 1992. \"Shake-and-Bake Translation\". Proceedings of COLING92, the International Conference on Computational Linguistics, Nantes, France. Younger, D. 1967. Recognition and Parsing of Context-Free Languages in Time n 3. Information and Control, 10, 189-208.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Head automaton m scans left and right sequences of relations ri for dependents wi of w.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "mapping function, i.e. a (possibly partial) function from the nodes of Hi to the nodes of Gi.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "tiling of a source graph with respect to a transfer model is a set of entry matches {(El, gz, A1, cl), \u2022 \u2022 \", (E~, gk, At, ck)} which is such that gi Transfer matching and mapping functions",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "For primary nodes nl and nj of two distinct entries Ei and Ej, gi(ni) and gi(nj) are distinct. \u2022 The sets of edges Ai form a partition of the edges of S. \u2022 The images gi(Li) form a partition of the nodes of S, where Li is the set of labeled source nodes in the source fragment Hi of Ei. \u2022 ci is the cost of the match specified by the parameter table.",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "Discriminative model: The costs in this model are likelihood ratios comparing positive and negative solutions, for example correct and incorrect translations. (See Dunning 1993 on the application of likelihood ratios in computational linguistics.) Let n-(elc ) be the count for choice (e]c) leading to negative solutions. The cost function for the discriminative model is estimated as C(elc) ~ In(n-(elc)) -ln(n+(ele)).",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "Eh(c) be the average of h(t~, ts) for solutions derived from sequences of choices including the context c. The cost parameter for (elc) in the normalized distance model is C(elc) = Bh(c) '",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>4</td><td>Transfer Maps</td></tr><tr><td colspan=\"2\">4.1 Transfer Model Bilingual Lexicon</td></tr><tr><td colspan=\"2\">The transfer model defines possible mappings, with</td></tr><tr><td colspan=\"2\">associated costs, of dependency trees with source-</td></tr><tr><td colspan=\"2\">language word node labels into ones with target-</td></tr><tr><td colspan=\"2\">language word labels. Unlike the head automata</td></tr><tr><td colspan=\"2\">monolingual models, the transfer model operates</td></tr><tr><td colspan=\"2\">with unordered dependency trees, that is, it treats</td></tr><tr><td colspan=\"2\">the dependents of a word as an unordered bag. The</td></tr><tr><td colspan=\"2\">model is general enough to cover the common trans-</td></tr><tr><td colspan=\"2\">lation problems discussed in the literature (e.g. Lin-</td></tr><tr><td colspan=\"2\">dop and Tsujii 1991 and Dorr 1994) including many-</td></tr><tr><td colspan=\"2\">to-many word mapping, argument switching, and</td></tr><tr><td colspan=\"2\">head switching.</td></tr><tr><td/><td>A transfer model consists of a bilingual lexicon</td></tr><tr><td colspan=\"2\">and a transfer parameter table. The model uses de-</td></tr><tr><td colspan=\"2\">pendency tree fragments, which are the same as un-</td></tr><tr><td colspan=\"2\">ordered dependency trees except that some nodes</td></tr><tr><td colspan=\"2\">may not have word labels. In the bilingual lexicon,</td></tr><tr><td colspan=\"2\">an entry for a source word wi (see top portion of</td></tr><tr><td colspan=\"2\">Figure 2) has the form</td></tr><tr><td/><td>(wi, Hi, hi, Gi, fi)</td></tr></table>",
"num": null,
"html": null,
"text": "Hi is a source language tree fragment, ni (the primary node) is a distinguished node of Hi with label wi, Gi is a target tree fragment, and fi is a"
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Method</td><td>Meaning and</td><td>Meaning</td></tr><tr><td/><td>Grammar (%)</td><td>Preservation (%)</td></tr><tr><td>A'</td><td>29</td><td>71</td></tr><tr><td>D</td><td>37</td><td>71</td></tr><tr><td>B</td><td>46</td><td>82</td></tr><tr><td>C</td><td>52</td><td>83</td></tr><tr><td>E</td><td>54</td><td>83</td></tr></table>",
"num": null,
"html": null,
"text": "Translation performance of different cost assignment methods"
}
}
}
} |