diff --git a/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_content_list.json b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..797a048db31c61e97782d59d6d4bf542a978e3a1 --- /dev/null +++ b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1131b752f73ca4c843460ffe6309290e1d6f702d77878be610f443594feffa36 +size 86985 diff --git a/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_model.json b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_model.json new file mode 100644 index 0000000000000000000000000000000000000000..609b2b1dce49f7804bf64c725fa81eb07c2d6265 --- /dev/null +++ b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af5916f3684aa5a664fb97b0e2087a2ecc565822c9566739183c56adbbfc8128 +size 100252 diff --git a/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_origin.pdf b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..52d26cb4be98b26810c4f852c013086775bb6bd4 --- /dev/null +++ b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_origin.pdf @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:6da46688ad6aa068062ebbc003fcd042331d00374164094ec61aa6f7481bfb89 +size 403100 diff --git a/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/full.md b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f11e9b5277b8ebac45567588e11176d6cc91a8af --- /dev/null +++ b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/full.md @@ -0,0 +1,420 @@ +# A BERT-based Distractor Generation Scheme with Multi-tasking and Negative Answer Training Strategies +Ho-Lam Chung $^{1}$ , Ying-Hong Chan $^{2}$ , Yao-Chung Fan $^{3}$ +Department of Computer Science and Engineering +National Chung Hsing University, +Taichung, Taiwan +$^{1}$ holam.chung@protonmail.com +$^{2}$ harry831120@gmail.com +$^{3}$ yfan@nchu.edu.tw +# Abstract +In this paper, we investigate the following two limitations of existing distractor generation (DG) methods. First, the quality of the existing DG methods is still far from practical use; there is still room for DG quality improvement. Second, the existing DG designs are mainly for single distractor generation. However, for practical MCQ preparation, multiple distractors are desired. Aiming at these issues, in this paper, we present a new distractor generation scheme with multi-tasking and negative answer training strategies for effectively generating multiple distractors. The experimental results show that (1) our model advances the state-of-the-art result from 28.65 to 39.81 (BLEU 1 score) and (2) the generated multiple distractors are diverse and show strong distracting power for multiple-choice questions.
+ +# 1 Introduction +Given a passage, a question, and an answer phrase, the goal of distractor generation (DG) is to generate context-related wrong options (i.e., distractors) for multiple-choice questions (MCQ). Pioneering research (Gao et al., 2019; Yeung et al., 2019; Zhou et al., 2019) has demonstrated the feasibility of generating distractors based on deep learning techniques. +While significant advances in DG have been reported in the literature, we find that the existing DG results are still far from practical use. In this paper, we investigate the following two issues for distractor generation: (1) DG quality improvement and (2) multiple distractor generation. +DG Quality Improvement There is still room for improvement toward high-quality distractor generation. By manually examining the DG results produced by existing methods, we find that the results are still far from ideal for practical use. Thus, one goal of our research is to improve DG quality further. +# Example 1 +Context Omitted. (See Appendix) +Question +- Why did Mr.King want to send Henry away? +Answer +- Because Henry was too lazy. +Gen. Distractors +$d_{1}$ : Because Henry didn't want to go. +$d_{2}$ : Because Henry didn't want to go to the bookstore. +# Example 2 +Context Omitted. (See Appendix) +Question +- Which of the following women would look most attractive? +Answer +- A short red-haired woman who wears a purple hat. +Gen. Distractors +$d_{1}$ : A young woman who wears a white hat. +$d_{2}$ : A woman who wears a white hat. +Table 1: Two examples showing the issue of generating multiple distractors by a simple beam search. Note that the generated distractors (i.e., $d_{1}$ and $d_{2}$ ) are the same statements with only slight differences in word usage. Such results lower the distracting power for MCQ preparation. +For the quality issue, in this paper, we explore the employment of the BERT model for performance improvement.
As is well known, employing transformer-based language models has been shown to be useful for improving NLP tasks. Thus, we investigate the application of the BERT model to DG and report our design in this paper. +Multiple Distractor Generation The existing DG methods mainly focus on single distractor generation. However, for practical MCQ preparation, multiple distractors are desired. When more than one distractor is needed, the existing practice is to keep multiple results given by a beam search strategy. However, we find that in many cases, the generated distractors all refer to the same concept. In fact, the generated distractors all come from the same latent representation, which raises the concern that they might be semantically similar. In Table 1, we show two DG examples of this problem. In the illustrated examples, one can observe that the generated distractors are the same statements with only slight differences in word usage. Such results lower the distracting power for MCQ preparation. +To address this limitation, we propose to view the multiple distractor generation/selection problem as a coverage problem, rather than individually selecting the top- $k$ distractors based on prediction probability. In other words, we propose to choose a distractor set that maximizes the difficulty of the multiple-choice question, rather than individually picking the results with the highest probability but with similar semantics. +The contributions of this paper are (1) a new DG model based on the BERT model. The experimental evaluation on benchmark datasets shows that our model outperforms the existing best model (Zhou et al., 2019) and pushes the state-of-the-art result from 28.65 to 39.81 (BLEU 1 score). (2) An investigation of employing a multiple-choice question answering task to evaluate DG performance. (3) An investigation of treating the multiple distractor generation problem as a coverage problem.
The experimental results demonstrate that the generated multiple distractors are diverse and show strong distracting power for multiple-choice questions. +The rest of this paper is organized as follows. In Section 2, we introduce our model design for single distractor generation. In Section 3, we introduce our multiple distractor generation schemes and the incorporation of question answering models for distractor selection. In Section 4, we report the results of the performance analysis. In Section 5, we review the literature related to this work. Finally, Section 6 concludes our study and discusses future work. +# 2 BERT Distractor Generation +# 2.1 BERT Model Review +The BERT model and its family (Liu et al., 2019; Lan et al., 2019) are composed of a stack of multi-layer bidirectional Transformer encoders. The input to a BERT model is a sequence of tokens. For a given token, its input representation to the BERT model is first constructed by summing the corresponding token, segment, and position embeddings. The input embeddings then pass through the pre-trained/fine-tuned BERT for task learning and prediction. In general, BERT can be employed in two levels of language modeling tasks: sequence-level classification and token-level prediction. For these tasks, there are three special tokens, [C], [S], and [M]. The embedding of the [C] token is designed to be used as the aggregate sequence representation for classification tasks. The [S] token is designed to distinguish different sentences of a token sequence (to provide/signal information from multiple sentences, as the input token sequence can be a pack of multiple sentences). On the other hand, the [M] token is designed to be used in token-level prediction (e.g., predicting a masked token based on context words or predicting the starting/ending probabilities for span-based tasks such as QA tasks).
+ +As reported in (Chan and Fan, 2019; Dong et al., 2019), BERT is essentially an auto-encoder language modeling design, which aims to reconstruct the original data from corrupted inputs. If BERT is asked to predict a sequence of consecutive masked tokens, it often produces incoherent and rambling results. For example, when using BERT to predict three consecutive masked tokens [M] [M] [M], the same prediction result for all three tokens is often observed. This is because the context (the information for predicting the tokens) for the masked tokens is nearly the same except for the position embeddings, making the generated sentences incoherent. Thus, we take the previously decoded results into consideration when decoding the next distractor token, as will be introduced in the next subsection. +# 2.2 BERT-based Distractor Generation (BDG) +In a distractor generation scenario, there are three given inputs: (1) a paragraph $P$ , (2) an answer $A$ , and (3) a question $Q$ . For ease of discussion, let $C$ (referred to as the context sequence) denote the sequence of tokens given by concatenating $P$ , $Q$ , and $A$ . +Our BDG model generates distractor tokens in an auto-regressive manner. Specifically, the BDG model predicts one token at a time based on (1) the given context sequence $C$ and (2) the previously predicted distractor tokens. The BDG model thus takes multiple iterations to generate a distractor. In Table 2, we show a running example of the BDG model. Note that our model predicts each token based on the full context sequence together with all previously decoded distractor tokens. +
| Iter. | Input Sequence | Predict |
| --- | --- | --- |
| 1 | [C] C [S] [M] | Because |
| 2 | [C] C [S] Because [M] | Henry |
| 3 | [C] C [S] Because Henry [M] | didn't |
| 4 | [C] C [S] Because Henry didn't [M] | want |
| 5 | [C] C [S] Because Henry didn't want [M] | to |
| 6 | [C] C [S] Because Henry didn't want to [M] | go |
| 7 | [C] C [S] Because Henry didn't want to go [M] | . |
| 8 | [C] C [S] Because Henry didn't want to go. [M] | [S] |
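The iterative decoding shown in the table above can be sketched as a simple loop. This is a minimal, illustrative sketch only: `predict_next_token` is a hypothetical stand-in for the fine-tuned BERT masked-LM head (here it replays a fixed script so the loop is runnable), and the real model would score the final [M] position over the full vocabulary.

```python
def make_toy_predictor(script):
    """Return a predictor that yields the scripted tokens one by one.

    A real implementation would run BERT over `input_tokens` and return
    the most probable filler for the final [M] position.
    """
    it = iter(script)

    def predict_next_token(input_tokens):
        return next(it)

    return predict_next_token


def generate_distractor(context_tokens, predict_next_token, max_len=20):
    """Auto-regressively grow the distractor: at each step, append the
    predicted token and re-attach a fresh [M], stopping when the model
    emits [S] (which doubles as the end-of-sequence mark)."""
    decoded = []
    for _ in range(max_len):
        input_tokens = ["[C]"] + context_tokens + ["[S]"] + decoded + ["[M]"]
        token = predict_next_token(input_tokens)
        if token == "[S]":
            break
        decoded.append(token)
    return decoded


# Replay the running example from the table (script is taken from Table 2).
script = ["Because", "Henry", "didn't", "want", "to", "go", ".", "[S]"]
predictor = make_toy_predictor(script)
print(generate_distractor(["C"], predictor))
```

The loop mirrors the table: iteration $i$ feeds `[C] C [S]` plus the $i-1$ already-decoded tokens and one trailing `[M]`, so each prediction is conditioned on all previous ones rather than on independent masks.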
|  | P.M. | Gold |
| --- | --- | --- |
| # of cases on BLEU 1 | 57 | 12 |
| # of cases on BLEU 2 | 55 | 4 |
| # of cases on BLEU 3 | 48 | 0 |
| # of cases on BLEU 4 | 35 | 0 |
| # of cases on ROUGE-L | 55 | 1 |
| Statistic | Value |
| --- | --- |
| Train samples | 96501 |
| Test samples | 12284 |
| Avg. article length | 335.6 |
| Avg. distractor length | 8.6 |
| Avg. question length | 10.0 |
| Avg. answer length | 8.3 |
| Avg. distractor number | 2.1 |
| Model | BLEU 1 | BLEU 2 | BLEU 3 | BLEU 4 | ROUGE-L |
| --- | --- | --- | --- | --- | --- |
| BDGAN+PM | 39.52 | 24.29 | 17.28 | 13.28 | 33.40 |
| BDGPM | 39.81 | 24.81 | 17.66 | 13.56 | 34.01 |
| BDG | 35.30 | 20.65 | 13.66 | 9.53 | 31.11 |
| GPT | 36.49 | 20.75 | 13.31 | 9.31 | 31.59 |
| DS-Att. | 27.32 | 14.69 | 9.29 | 6.47 | 15.12 |
| CO-Att. | 28.65 | 15.15 | 9.77 | 7.01 | 15.39 |
| Metric | BDGAN+PM | BDGPM | BDG | GPT | Gold | Random |
| --- | --- | --- | --- | --- | --- | --- |
| BLEU 1 | 43 | 57 | 115 | 124 | 12 | 0 |
| BLEU 2 | 40 | 55 | 115 | 121 | 4 | 0 |
| BLEU 3 | 37 | 48 | 109 | 109 | 0 | 0 |
| BLEU 4 | 30 | 35 | 97 | 88 | 0 | 0 |
| ROUGE-L | 42 | 55 | 122 | 123 | 1 | 0 |
| Model | Accuracy |
| --- | --- |
| Random Selected Distractors | 88.10% |
| Gold Distractor | 78.00% |
| GPT | 78.07% |
| BDG | 73.96% |
| BDGPM | 74.34% |
| BDGAN+PM | 74.05% |
| BDGEM | 69.44% |
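The accuracy table above reports how often an MCQ answering model still picks the gold answer under each distractor set (lower accuracy indicating stronger distractors). A minimal sketch of this kind of evaluation is below; `score_option` is a hypothetical stand-in for a real MCQ reader, and the scores are fabricated purely for illustration.

```python
def mcq_accuracy(questions, score_option):
    """Accuracy of a QA model over MCQs.

    `questions` is a list of (options, gold_index) pairs; the QA model is
    abstracted as `score_option(option) -> float`.
    """
    correct = 0
    for options, gold_index in questions:
        # The QA model "answers" by picking the highest-scoring option.
        prediction = max(range(len(options)), key=lambda i: score_option(options[i]))
        correct += int(prediction == gold_index)
    return correct / len(questions)


# Fabricated scores: a strong distractor ("C") outscores the gold answer
# ("A"), while a weak one ("B") does not.
toy_scores = {"A": 0.9, "B": 0.2, "C": 0.95}
weak = mcq_accuracy([(["A", "B"], 0)], toy_scores.get)
strong = mcq_accuracy([(["A", "C"], 0)], toy_scores.get)
```

Under this setup, a distractor set that drags accuracy well below the gold-distractor baseline is, by this proxy, more distracting.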
| Model | BLEU 1 | BLEU 2 | BLEU 3 | BLEU 4 | ROUGE-L |
| --- | --- | --- | --- | --- | --- |
| PM(γ=1) | 36.97 | 22.07 | 14.82 | 10.50 | 32.64 |
| PM(γ=2) | 38.45 | 23.21 | 15.81 | 11.36 | 33.18 |
| PM(γ=3) | 39.23 | 24.27 | 17.04 | 12.78 | 33.82 |
| PM(γ=4) | 39.22 | 24.24 | 17.08 | 12.95 | 34.05 |
| PM(γ=5) | 39.74 | 24.50 | 17.29 | 13.09 | 34.11 |
| PM(γ=6) | 39.81 | 24.81 | 17.66 | 13.56 | 34.01 |
| PM(γ=7) | 39.37 | 24.13 | 17.09 | 13.07 | 33.45 |
| AN+PM(γ=1) | 37.49 | 22.08 | 13.73 | 10.44 | 32.40 |
| AN+PM(γ=2) | 38.25 | 22.81 | 15.33 | 10.91 | 32.99 |
| AN+PM(γ=3) | 38.71 | 23.54 | 16.26 | 12.04 | 33.82 |
| AN+PM(γ=4) | 38.84 | 23.70 | 16.57 | 12.46 | 33.53 |
| AN+PM(γ=5) | 39.19 | 23.97 | 16.96 | 12.92 | 33.67 |
| AN+PM(γ=6) | 39.58 | 24.23 | 17.11 | 13.11 | 33.38 |
| AN+PM(γ=7) | 39.52 | 24.29 | 17.28 | 13.28 | 33.40 |
| Model | Word/phrase | Sentence | Cloze | R.C. | Extractive | Generative | Model Type |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Gao et al. 2019 | Y | Y | Y | Y | RNN | ||
| Zhou et al. 2019 | Y | Y | Y | Y | RNN | ||
| Araki et al. 2016 | Y | Y | Y | Non-neural model | |||
| Welbl et al. 2017 | Y | Y | Y | Random forests | |||
| Guo et al. 2016 | Y | Y | Y | Word2Vec | |||
| Kumar et al. 2015 | Y | Y | Y | Y | SVM | ||
| Liang et al. 2017 | Y | Y | Y | GAN | |||
| Liang et al. 2018 | Y | Y | Y | Y | Non-neural model | ||
| Content | The building is shaking. A woman with a baby in her arms is trying to open the door, but fails. Finding no way, she rushes into her bedroom and there they survive the earthquake. In a factory building, as the workshop floor swings under the terrible shaking, workers run for safety. Some hide under the machines and survive, but others who try to run outside are killed by the falling ceilings. These scenes, played by actors and actresses, are from a film of science education Making a Split Second Decision shown in 1998 on China Central TV in memory of Tangshan Earthquake. By studying actual cases in the earthquake areas and scientific experiments, experts find that buildings remain untouched for the first 12 seconds of an earthquake. In this short time, one has the best chance of surviving an earthquake by staying near the inside walls, in bedrooms and under beds, experts concluded in the film. "Earthquakes seem to catch the lives of those who run," said many survivors in the earthquake areas, describing how their friends were killed on the doorways or along the stair steps as they tried to get out of the building. Their advice was proved in the film, "Take a hiding-place where you are rather than run, unless you are sure you can reach a safe open place in ten seconds." |
| Question | The workers who try to run outside the building die because? |
| Answer | They don't have enough time to run outside. |
| Distractor | They don't know how to get out of the building. |
| Content | Henry found work in a bookstore after he finished middle school. He wouldn’t do anything but wanted to get rich. Mr.King thought he was too lazy and was going to send him away. Henry was afraid and had to work hard. It was a cold morning. It was snowing and there was thin ice on the streets. Few people went to buy the books and the young man had nothing to do. He hated to read, so he watched the traffic. Suddenly he saw a bag fall off a truck and it landed by the other side of the street. It must be full of expensive things. Henry said to himself. I have to get it, or others will take it away. He went out of the shop and ran across the street. A driver saw him and began to whistle, but he didn’t hear it and went on running. The man drove aside, hit a big tree and was hurt in the accident. Two weeks later Henry was taken to court. A judge asked if he heard the whistle when he was running across the street. He said that something was wrong with his ears and he could hear nothing. “But you’ve heard me this time.” said the judge. “Oh, I’m sorry. Now I can hear with one ear.” “Cover the ear with your hand and listen to me with your deaf one. Well, can you hear me?” “No, I can’t, Sir.” |
| Question | Why did Mr.King want to send Henry away? |
| Answer | Because Henry was too lazy. |
| BDG | Because Henry didn’t want to go. |
| Because Henry didn’t want to go out. | |
| Because Henry didn’t want to go to the bookstore. | |
| BDG ensemble | Because Henry didn’t want to go. |
| Because Henry wanted to be rich. | |
| Because Henry wanted to be a clever man. |
| Content | Most of the time, people wear hats to protect themselves from weather conditions. Hats are also worn to show politeness and as signs of social position. But nowadays, hats, especially women's hats, are much more than that. More exactly, hats have changed into fashion and style symbols by many movie stars. What's more, people now consider many different features when choosing even a simple hat. Many designers point out that, when choosing the right hat, it's important to consider the color of your skin as well as your hair, your height, and the shape of your face. First of all, the color of the hat should match the color of your skin and hair. For instance, black hats should be avoided if you are dark skinned. If a purple hat is placed on top of red hair, one will look as attractive as a summer flower. Second, the height of the hat is also an important point. Tall women should not go for hats with tall crowns, just as short women should choose hats with upturned brims to give the look of height. Third, and most importantly, the shape of the face decides the kind of hat one should pick. A small, gentle hat that fits the head looks good on a small face. However, women with big, round faces should choose a different style. As the saying goes, 'Fine feathers make fine birds.' A good hat can not only help your dress but also support your features, so why not choose the best possible one next time you want to be in public? |
| Question | According to the article, which of the following women would look most attractive? |
| Answer | A short red-haired woman who wears a purple hat. |
| BDG | A young woman who wears a white hat. |
| A young woman who doesn't like a white hat. | |
| A woman who wears a white hat. | |
| BDG ensemble | A short black woman with big, round faces. |
| A young woman who doesn't like a white hat. | |
| A little woman who wears a pink hat. |
| Content | Memory, they say, is a matter of practice and exercise. If you have the wish and really made a conscious effort, then you can quite easily improve your ability to remember things. But even if you are successful, there are times when your memory seems to play tricks on you. Sometimes you remember things that really did not happen. One morning last week, for example, I got up and found that I had left the front door unlocked all night, yet I clearly remember locking it carefully the night before. Memory "trick" work the other way as well. Once in a while you remember not doing something, and then find out that you did. One day last month, for example, I was sitting in a barber shop waiting for my turn to get a haircut, and suddenly I realized that I had got a haircut two days before at the barber shop across the street from my office. We always seem to find something funny and amusing in incidents caused by people's forgetfulness or absent-mindedness. Stories about absent-minded professors have been told for years, and we never got tired of hearing new ones. Unfortunately, however, absent-mindedness is not always funny. There are times when "trick" of our memory can cause us great trouble. |
| Question | Which of the following statements is true according to the passage? |
| Answer | One night the writer forgot to lock the front door. |
| BDG | The writer couldn't find a hair cut in the barber shop. |
| The writer couldn't find a hair cut in the shop. | |
| BDG ensemble | The writer didn't want to open the front door. |
| The writer couldn't find the reason why he left the front door. |
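The diversity visible in the ensemble examples above is what the coverage-style selection proposed in the introduction aims for: rather than keeping the top-$k$ beam outputs (often near-duplicates), pick a candidate set that stays mutually dissimilar. The following is a minimal sketch under a simplifying assumption: Jaccard distance over surface tokens stands in for semantic dissimilarity, and the candidate list is assumed to be sorted by model probability.

```python
def jaccard(a, b):
    """Jaccard similarity between the token sets of two strings."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0


def select_diverse(candidates, k):
    """Greedy coverage-style pick: start from the highest-probability
    candidate, then repeatedly add the candidate whose minimum distance
    to the already-selected set is largest (max-min diversity)."""
    selected = [candidates[0]]
    while len(selected) < k and len(selected) < len(candidates):
        best = max(
            (c for c in candidates if c not in selected),
            key=lambda c: min(1 - jaccard(c, s) for s in selected),
        )
        selected.append(best)
    return selected


# Beam-search-style candidates echoing Example 1; the first three are
# near-duplicates, the fourth refers to a different concept.
beam = [
    "Because Henry didn't want to go.",
    "Because Henry didn't want to go out.",
    "Because Henry didn't want to go to the bookstore.",
    "Because Henry wanted to be rich.",
]
print(select_diverse(beam, 2))
```

With this toy metric, the second pick skips the two "go out / go to the bookstore" variants and jumps to the semantically distinct candidate, which is the behavior the coverage formulation is after.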
| Model | R-1 | R-2 | R-SU4 |
| LexRank (Erkan and Radev, 2004) | 38.27 | 12.70 | 13.20 |
| TextRank (Mihalcea and Tarau, 2004) | 38.44 | 13.10 | 13.50 |
| MMR(Carbonell and Goldstein, 1998) | 38.77 | 11.98 | 12.91 |
| PGN (See et al., 2017) | 41.85 | 12.91 | 16.46 |
| CopyTransformer(Gehrmann et al., 2018) | 43.57 | 14.03 | 17.37 |
| Hi-MAP(Fabbri et al., 2019) | 43.47 | 14.89 | 17.41 |
| SDS-to-MDS(Zhang et al., 2018) | 44.74 | 15.93 | 19.44 |
| CopyTransformer* | 45.03 | 16.35 | 19.59 |
| Ours | 46.26 | 17.02 | 20.46 |
| Model | R-1 | R-2 | R-SU4 |
| LexRank (Erkan and Radev, 2004) | 35.56 | 7.87 | 11.86 |
| TextRank (Mihalcea and Tarau, 2004) | 33.16 | 6.13 | 10.16 |
| MMR(Carbonell and Goldstein, 1998) | 30.14 | 4.55 | 8.16 |
| PGN (See et al., 2017) | 31.43 | 6.03 | 10.01 |
| CopyTransformer(Gehrmann et al., 2018) | 28.54 | 6.38 | 7.22 |
| PG-MMR(Lebanoff et al., 2018) | 36.42 | 9.36 | 13.23 |
| Hi-MAP(Fabbri et al., 2019) | 35.78 | 8.90 | 11.43 |
| SDS-to-MDS(Zhang et al., 2018) | 36.7 | 7.83 | 12.4 |
| CopyTransformer* | 36.48 | 8.22 | 12.29 |
| Ours | 37.24 | 8.60 | 12.67 |
| Model | Fluency | Informativeness | Non-redundancy |
| CopyTransformer(Gehrmann et al., 2018) | 3.1 | 3.08 | 2.94 |
| Hi-MAP(Fabbri et al., 2019) | 2.98 | 2.94 | 3.02 |
| SDS-to-MDS(Zhang et al., 2018) | 3.3 | 3.22 | 3.18 |
| CopyTransformer* | 3.42 | 3.26 | 3.24 |
| Ours | 3.5 | 3.42 | 3.36 |
| Spearman | 0.732 | 0.715 | 0.698 |
| Model | R-1 | R-2 | R-SU4 |
| Ours | 46.26 | 17.02 | 20.46 |
| w/o joint learning | 44.64 | 16.14 | 19.06 |
| w/o decoding controller | 44.94 | 16.07 | 19.11 |
| Model | R-1 | R-2 | R-L |
| Lead-3 | 40.34 | 17.70 | 36.57 |
| PGN (See et al., 2017) | 39.53 | 17.28 | 36.38 |
| CopyTransformer | 40.68 | 18.26 | 37.38 |
| CopyTransformer* | 41.39 | 18.58 | 38.03 |
| Ours | 41.7 | 18.86 | 38.36 |
| Step | Production | Result |
| 1 | r1 | (b1/L:CONT T2($1,$2):CONT T2($1,$2)) |
| 2 | r2 | (b1/□:CONT (b2/L:DRS T2($1,$2)):CONT T2($1,$2)) |
| 3 | r3 | (b1/□:CONT (b2/□:DRS (e1/L:AGENT T1($1):THEME T1($2)))) :CONT T2($1,$2)) |
| 4 | r4 | (b1/□:CONT (b2/□:DRS (e1/bar:AGENT (c*/L):THEME T1($2)))) :CONT T2(c,$2)) |
| 5 | r5 | (b1/□:CONT (b2/□:DRS (e1/bar:AGENT (c*/speaker):THEME (x*/L)))) :CONT T2(c,x)) |
| 6 | r2 | (b1/□:CONT (b2/□:DRS (e1/bar:AGENT (c*/speaker):THEME (x*/doorp)))) :CONT (b3/□:DRS T2(c,x))) |
| 7 | r6 | (b1/□:CONT (b2/□:DRS (e1/bar:AGENT (c*/speaker):THEME (x*/doorp)))) :CONT (b3/□:DRS (e2/lock:AGENT c:PATIENT x) |
| # training instances | # fragments +edge label | # fragments -edge label | avg. rank | |
| PMB2.2.0-g | 4585 | 1196 | 232 | 1.56 |
| PMB2.2.0-s | 63960 | 17414 | 2586 | 2.85 |
| PMB3-g | 6618 | 1695 | 276 | 2.22 |
| PMB3-s | 94776 | 36833 | 6251 | 3.01 |
| PMB3-it | 2743 | 1827 | 378 | 2.32 |
| PMB3-de | 5019 | 4025 | 843 | 2.61 |
| PMB3-nl | 1238 | 1338 | 318 | 2.29 |
| P | R | F1 | |
| baseline | 80.0 | 70.9 | 75.2 |
| + rank-prediction | 81.0 | 72.3 | 76.4 |
| + constrained-decoding | 80.5 | 75.2 | 77.8 |
| + edge-factorization | 82.5 | 78.5 | 80.4 |
| ours-best + silver | 83.8 | 80.6 | 82.2 |
| ours-best + filtering | 83.1 | 80.5 | 81.8 |
| P | R | F1 | |
| Fancellu et al. (2019) | - | - | 73.4 |
| Evang (2019) | - | - | 74.4 |
| ours-best | 84.5 | 81.3 | 82.9 |
| van Noord et al. (2019) | - | - | 86.8 |
| Liu et al. (2019) | 85.8 | 84.5 | 85.1 |
| ours-best + silver | 86.1 | 83.6 | 84.9 |
| en | de | nl | it | |
| FA19 (monolingual) | - | 67.9 | 65.8 | 75.9 |
| FA19 (cross-lingual) | - | 63.5 | 65.1 | 72.1 |
| Ours (cross-lingual) | - | 73.4 | 73.9 | 76.9 |
| ours-best (various) trained and tested on PMB3 | ||||
| monolingual | 80 | 64.2 | 60.9 | 71.5 |
| cross-lingual | - | 73.2 | 74.1 | 75.2 |
| bilingual | - | 71.8 | 76.0 | 77.7 |
| polyglot | 79.8 | 72.5 | 74.1 | 77.9 |
| train | dev | test | |
| PMB2.2.0-g | 4597 (4585) | 682 | 650 |
| PMB2.2.0-s | 67965 (63960) | - | - |
| PMB3-g | 6620 (6618) | 885 | 898 |
| PMB3-s | 97598 (94776) | - | - |
| PMB3-it | 2772 (2743)* | 515 | 547 |
| PMB3-de | 5250 (5019)* | 417 | 403 |
| PMB-nl | 1301 (1238)* | 529 | 483 |
| en | de | nl | it | |||||||||
| P | R | F | P | R | F | P | R | F | P | R | F | |
| monolingual | 81.6 | 78.4 | 80 | 64.5 | 64 | 64.2 | 62.6 | 59.2 | 60.9 | 72.4 | 70.6 | 71.5 |
| cross-lingual | - | - | - | 72.8 | 73.6 | 73.2 | 73.4 | 74.9 | 74.1 | 74.2 | 76.2 | 75.2 |
| bilingual | - | - | - | 72 | 71.5 | 71.8 | 76.7 | 75.3 | 76 | 76.8 | 78.6 | 77.7 |
| polyglot | 81 | 78.8 | 79.8 | 72.2 | 72.9 | 72.5 | 74.3 | 73.8 | 74.1 | 78.2 | 77.5 | 77.9 |
| Model Parameters | ||
| BERT | 768 | |
| Num of Encoder Layer | 2 | |
| Encoder | en de/nl/it | 2@512 1@512 |
| Fragment/Relation/Label | en de/nl/it | 100 75 |
| Edge Prediction Layer | en de/nl/it | 100 75 |
| Decoder | en de/nl/it | 1024 512 |
| Optimization Parameters | ||
| Optimizer | ADAM | |
| Learning Rate | 0.001 | |
| Weight Decay | 1e-4 | |
| Gradient Clipping | 5 | |
| Label Smoothing ε | 0.1 | |
| Bert Finetune LR | en de/nl/it | 2e-5 1e-5 |
| Dropout | en de/nl/it | 0.33 0.5 |
| Dataset | U.Num(train / dev / test) | W/Utter |
| Original | 72922 / 3626 / 11577 | 18.6 |
| Reduced | 66332 / 3269 / 10502 | 19.7 |
| Sampled | 66332 / 3269 / 1317 | 19.6 |
| Model | PPL | BLEU (%) | ROUGE-L | KU-2/3 (%) | QKU-2/3 |
| VHRED(-k) | 97.3◇ (99.3)* | 0.49* (0.49)* | 7.80* (7.82)* | -/- (-/-) | -/- (-/-) |
| VHRED(c) | 80.2◇ (85.4)* | 0.79* (0.77)* | 8.64* (8.63)* | 12.0/27.0◇ (12.1/27.6)◇ | 3.36/2.82◇ (3.35/2.80)◇ |
| VHRED(a) | 77.2◇ (78.5)* | 0.84* (0.80)* | 8.98* (8.99)* | 13.7/31.7◇ (13.1/31.3)* | 3.23/2.72* (3.23/2.72)* |
| T-DD | 18.2* (20.5)* | 0.90* (0.89)* | 9.23* (9.24)* | 8.0/23.1* (8.0/23.0)* | 2.55/1.94* (2.55/1.95)* |
| T-EDD | 18.2* (20.3)* | 0.91* (0.90)* | 9.35* (9.36)* | 8.3/23.5* (8.1/23.4)* | 2.45/1.91* (2.45/1.92)* |
| ITDD | 16.2* (18.7)* | 1.01* (0.99)* | 10.12◇ (10.10)* | 9.0/24.5* (9.1/24.4)* | 2.18/1.84* (2.15/1.82)* |
| CAT-EDD | 16.0* (18.2)* | 1.14* (1.14)* | 11.10* (11.12)* | 9.5/24.8* (9.7/24.9)* | 2.12/1.77* (2.11/1.76)* |
| CAT-DD | 15.2 (16.1) | 1.22 (1.21) | 11.22 (11.22) | 11.0/26.5 (11.1/26.4) | 2.08/1.64 (2.05/1.62) |
| Model | Flu. | Coh. | Inf. |
| VHRED(-k) | 3.71 (3.72) | 2.82 (2.72) | 3.01 (2.82) |
| VHRED(c) | 3.73 (3.82) | 3.04 (3.11) | 3.03 (3.05) |
| VHRED(a) | 3.84 (3.77) | 3.11 (3.14) | 3.22 (3.06) |
| T-DD | 3.84 (3.82) | 3.03 (3.06) | 3.03 (3.06) |
| T-EDD | 3.84 (3.83) | 3.02 (3.08) | 3.05 (3.05) |
| ITEDD | 3.90 (3.91) | 3.11 (3.12) | 3.43 (3.42) |
| CAT-EDD | 4.02 (3.93) | 3.12 (3.33) | 3.33 (3.41) |
| CAT-DD | 4.09 (4.09) | 3.39 (3.43) | 3.44 (3.61) |
| Models | PPL | BLEU | KU-2(%) / QKU-2 |
| CAT-DD | 16.1 | 1.21 | 11.1 / 2.05 |
| w/o-left | 19.8* | 0.90* | 8.2* / 2.56* |
| w/o-(5,6) | 18.7* | 0.93* | 9.1* / 2.48◇ |
| w/o-(G) | 18.2* | 0.96* | 9.2◇ / 2.46* |
| Document: +... sally hawkins as elisa esposito, a mute cleaner who works at a secret government laboratory. michael shannon as colonel richard strickland ... rating rotten tomatoes: 92% The shape of water is a 2017 american fantasy film ... it stars sally hawkins, michael shannon, richard jenkins, Doug jones, michael stuhlbarg, and octavia spencer ... |
| Dialogue history: +S1: I wonder if it's a government creation or something captured from the wild. i would assume the wild. +S2: It was captured for governmental experiments. +The last Utterance: +S1: Is it a big name cast? |
| Groud truth: +S2: Sally hawkins played the role of the mute cleaner, mic-hael shannon played the role of colonel richard strickland. |
| Generated response: +VHRED(a): it has rating rotten tomatoes: 92%. +TDD: i am not sure about it. +ITDD: yes, sally hawkins as elisa esposito. +CAT-DD: sally hawkins, michael shannon, richard jenkins, doug jones, michael stuhlbarg, and octavia spencer. +(w/o-(5,6)): yes, sally hawkins works at a secret governme- +nt laboratory. +(w/o-(G)): it is a 2017 american fantasy film. |
| Corpora | Lin | Dan | won | the championship | |
| CTB | 林丹 | 赢得 | 总冠军 | ||
| PKU | 林 | 丹 | 赢得 | 总 | 冠军 |
| MSRA | 林丹 | 赢得 | 总 | 冠军 | |
| Corpora | Words# | Chars# | Word Types | Char Types | OOV | ||
| Sighan05 | MSRA | Train | 2.4M | 4.0M | 75.4K | 5.1K | 1.32% |
| Test | 0.1M | 0.2M | 11.9K | 2.8K | |||
| AS | Train | 5.4M | 8.3M | 128.8K | 5.8K | 2.20% | |
| Test | 0.1M | 0.2M | 18.0K | 3.4K | |||
| PKU | Train | 1.1M | 1.8M | 51.2K | 4.6K | 2.06% | |
| Test | 0.1M | 0.2M | 12.5K | 2.9K | |||
| CITYU | Train | 1.1M | 1.8M | 43.4K | 4.2K | 3.69% | |
| Test | 0.2M | 0.4M | 23.2K | 3.6K | |||
| Sighan08 | CTB | Train | 0.6M | 1.0M | 40.5K | 4.2K | 3.80% |
| Test | 0.1M | 0.1M | 11.9K | 2.9K | |||
| CKIP | Train | 0.7M | 1.1M | 44.7K | 4.5K | 4.29% | |
| Test | 0.1M | 0.1M | 14.2K | 3.1K | |||
| NCC | Train | 0.9M | 1.4M | 53.3K | 5.3K | 3.31% | |
| Test | 0.2M | 0.2M | 20.9K | 3.9K | |||
| SXU | Train | 0.5M | 0.8M | 29.8K | 4.1K | 2.60% | |
| Test | 0.1M | 0.2M | 11.6K | 2.8K | |||
| Embedding Size d | 100 |
| Hidden State Size dmodel | 256 |
| Transformer Encoder Layers | 6 |
| Attention Heads | 4 |
| Batch Size | 256 |
| Dropout Ratio | 0.2 |
| Warmup Steps | 4000 |
| Models | MSRA | AS | PKU | CTB | CKIP | CITYU | NCC | SXU | Avg. | |
| Single-Criterion Models | ||||||||||
| Stacked BiLSTM (Ma et al., 2018) | F | 97.4 | 96.2 | 96.1 | 96.7 | - | 97.2 | - | - | - |
| BiLSTM (Chen et al., 2017) | F | 95.84 | 94.2 | 93.3 | 95.3 | 93.06 | 94.07 | 92.17 | 95.17 | 94.14 |
| Switch-LSTMs (Gong et al., 2018) | F | 96.46 | 94.51 | 95.74 | 97.09 | 92.88 | 93.71 | 92.12 | 95.57 | 94.76 |
| Transformer Encoder | F | 98.07 | 96.06 | 96.39 | 96.41 | 95.66 | 96.32 | 95.57 | 97.08 | 96.45 |
| Transformer Encoder | OOV | 73.75 | 73.05 | 72.82 | 82.82 | 79.05 | 83.72 | 71.81 | 77.95 | 76.87 |
| Multi-Criteria Models | ||||||||||
| BiLSTM (Chen et al., 2017) | F | 96.04 | 94.64 | 94.32 | 96.18 | 94.26 | 95.55 | 92.83 | 96.04 | 94.98 |
| Switch-LSTMs (Gong et al., 2018) | F | 97.78 | 95.22 | 96.15 | 97.26 | 94.99 | 96.22 | 94.12 | 97.25 | 96.12 |
| Unified BiLSTM (He et al., 2019) | F | 97.2 | 95.4 | 96.0 | 96.7 | - | 96.1 | - | 96.4 | - |
| Our Unified Model | F | 98.05 | 96.44 | 96.41 | 96.99 | 96.51 | 96.91 | 96.04 | 97.61 | 96.87 |
| Our Unified Model | OOV | 78.92 | 76.39 | 78.91 | 87 | 82.89 | 86.91 | 79.3 | 85.08 | 81.92 |
| Models | MSRA | AS | PKU | CTB | CKIP | CITYU | NCC | SXU | Avg. |
| Unified Model | 98.05 | 96.44 | 96.41 | 96.99 | 96.51 | 96.91 | 96.04 | 97.61 | 96.87 |
| w/o CRF | 98.02 | 96.42 | 96.41 | 96.9 | 96.59 | 96.87 | 95.96 | 97.5 | 96.83 |
| w/o bigram | 97.41 | 96 | 96.25 | 96.71 | 96 | 96.31 | 94.62 | 96.84 | 96.27 |
| w/o pre-trained emb. | 97.51 | 96.06 | 96.02 | 96.47 | 96.22 | 95.99 | 94.82 | 96.76 | 96.23 |
| Models | MSRA | AS | PKU | CTB | CKIP | CITYU | NCC | SXU | Avg. F1 |
| 8Simp | 98.05 | 96.44 | 96.41 | 96.99 | 96.51 | 96.91 | 96.04 | 97.61 | 96.87 |
| 8Trad | 97.98 | 96.39 | 96.49 | 96.99 | 96.49 | 96.86 | 95.98 | 97.48 | 96.83 |
| 5Simp, 3Trad | 98.03 | 96.52 | 96.6 | 96.94 | 96.38 | 96.8 | 96.02 | 97.55 | 96.86 |
| 8 Simp, 8 Trad | 98.04 | 96.41 | 96.43 | 96.99 | 96.54 | 96.85 | 96.08 | 97.52 | 96.86 |
| 苹果(apple) | 蘋果(apple) | 爱好(hobby) | 爱好(hobby) | 担心(worry) | 擔心(worry) |
| 坚果(nut) | 微軟(Microsoft) | 热爱(love) | 熱愛(love) | 关心(care) | 關心(care) |
| 谷歌(Google) | 黃油(butter) | 兴趣(interest) | 爱好(hobby) | 怀疑(doubt) | 顧慮(misgiving) |
| 华为(Huawei) | 現貨(goods in stock) | 爱好(hobby) | 興趣(interest) | 顾虑(misgiving) | 懷疑(doubt) |
| 黃油(butter) | 果凍(jelly) | 梦想(dream) | 夢想(dream) | 担忧(concern) | 擔憂(concern) |
| 鮮果(fresh fruit) | 京東(JD) | 愛玩(playful) | 愛玩(playful) | 责怪(blame) | 憂慮(anxiety) |
| 微软(Microsoft) | 賣家(seller) | 痴迷(addict) | 喜愛(adore) | 伤心(sad) | 責怪(blame) |
| 诺基(Nokia) | 苹果(apple) | 乐趣(pleasure) | 習慣(habit) | 嫌弃(disfavour) | 忧虑(anxiety) |
| 蘋果(Apple) | 售後(after-sales) | 喜爱(adore) | 樂趣(pleasure) | 忧虑(anxiety) | 担心(worry) |
| Method | Setting | DE→EN | EN→DE | RU→EN | EN→RU | LT→EN | EN→LT |
| Transformer Base | Entire Corpus | 32.5 | 27.3 | 33.9 | 36.6 | 24.2 | 20.3 |
| Random | Active IBT | 29.4 | 23.6 | 28.4 | 30.5 | 21.2 | 15.7 |
| Best Uncertainty | Active IBT | 31.5 | 25.5 | 32.1 | 33.9 | 23.0 | 19.5 |
| Delfy (Ours) | Active IBT | 31.3 | 26.1 | 32.0 | 34.4 | 23.6 | 20.0 |
| Te-delfy (Ours) | Active IBT | 31.9 | 26.9 | 33.5 | 36.1 | 23.8 | 20.3 |
| Te-delfy (Ours) | Active IBT++ | 32.8 | 27.4 | 35.0 | 37.4 | 25.4 | 21.3 |
| Methods | Easy→Hard | Hard→Easy |
| lc | 16.0 | 17.5 |
| margin | 16.3 | 18.3 |
| te | 15.9 | 18.7 |
| tte | 16.1 | 18.6 |
| delfy | 16.9 | 19.1 |
| te-delfy | 16.0 | 19.8 |
| Dataset | Task | Sample Size |
| SST-2 (Socher et al., 2013) | sequence classification | 11.8k sentences, 215k phrases |
| SST-5 (Socher et al., 2013) | sequence classification | 11.8k sentences, 215k phrases |
| MRPC (Dolan et al., 2004) | sequence classification | 5,801 sentence pairs |
| AG News (Zhang et al., 2015) | sequence classification | 12k sentences |
| CoNLL'03 (Sang and De Meulder, 2003) | sequence labeling | 22k sentences, 300k tokens |
| Dataset | RM | US | AUSDS(FGV) | AUSDS(DeepFool) | AUSDS(C&W) |
| SST-2 | 1061x | 1x | 38x | 38x | 28x |
| SST-5 | 1939x | 1x | 52x | 52x | 38x |
| MRPC | 97x | 1x | 14x | 14x | 11x |
| AG News | 1434x | 1x | 51x | 47x | 38x |
| CoNLL'03 | 45x | 1x | 10x | — | — |
| Dataset | Method | 2% | 4% | 6% | 8% | 10% |
| SST-2 | RM | 87.78(.003) | 89.85(.004) | 89.85(.010) | 89.69(.004) | 90.26(.008) |
| | US | 87.74(.004) | 90.25(.006) | 90.38(.008) | 90.25(.006) | 91.27(.007) |
| | AUSDS (FGV) | 89.18(.002) | 89.88(.008) | 89.16(.014) | 91.07(.005) | 89.95(.003) |
| | AUSDS (DeepFool) | 88.74(.004) | 90.06(.003) | 89.84(.007) | 90.74(.006) | 91.58(.002) |
| | AUSDS (C&W) | 87.97(.003) | 89.95(.005) | 90.83(.007) | 90.12(.003) | 91.13(.001) |
| SST-5 | RM | 49.45(.010) | 50.01(.007) | 50.88(.006) | 50.39(.014) | 51.35(.005) |
| | US | 49.10(.008) | 49.54(.009) | 50.63(.008) | 50.90(.012) | 51.43(.005) |
| | AUSDS (FGV) | 49.57(.006) | 50.36(.008) | 50.09(.009) | 50.19(.014) | 50.62(.011) |
| | AUSDS (DeepFool) | 50.20(.012) | 51.87(.003) | 51.74(.012) | 50.97(.012) | 51.23(.007) |
| | AUSDS (C&W) | 48.28(.012) | 48.78(.014) | 51.58(.007) | 51.40(.010) | 47.42(.006) |
| MRPC | RM | 67.33(.008) | 68.31(.006) | 68.56(.018) | 70.06(.021) | 71.15(.020) |
| | US | 62.14(.090) | 69.34(.005) | 69.11(.010) | 70.53(.017) | 71.49(.016) |
| | AUSDS (FGV) | 68.89(.014) | 69.30(.023) | 70.28(.015) | 70.06(.012) | 69.30(.019) |
| | AUSDS (DeepFool) | 67.92(.009) | 68.88(.017) | 69.68(.017) | 71.69(.014) | 71.55(.012) |
| | AUSDS (C&W) | 67.91(.014) | 68.53(.017) | 70.46(.012) | 70.49(.012) | 68.89(.016) |
| AG News | RM | 89.89(.003) | 90.89(.002) | 91.37(.002) | 91.79(.002) | 92.21(.002) |
| | US | 90.29(.006) | 91.59(.007) | 92.34(.003) | 92.71(.001) | 93.01(.001) |
| | AUSDS (FGV) | 90.75(.002) | 91.55(.002) | 92.26(.003) | 92.62(.001) | 93.16(.001) |
| | AUSDS (DeepFool) | 90.67(.004) | 91.65(.004) | 92.43(.004) | 92.66(.004) | 93.12(.002) |
| | AUSDS (C&W) | 90.24(.002) | 91.29(.002) | 92.30(.004) | 92.90(.002) | 93.10(.003) |
| CoNLL'03 | RM | 80.42(.002) | 83.38(.002) | 85.39(.005) | 86.78(.005) | 87.42(.003) |
| | US | 78.12(.002) | 81.49(.019) | 84.45(.004) | 86.73(.008) | 87.79(.004) |
| | AUSDS (FGV) | 80.65(.006) | 83.60(.003) | 85.98(.010) | 87.10(.004) | 87.83(.003) |
| | AUSDS (DeepFool) | — | — | — | — | — |
| | AUSDS (C&W) | — | — | — | — | — |
| Method / Label Size | 2% | 4% | 6% | 8% | 10% |
| RM | 81.58(.004) | 82.90(.006) | 83.53(.008) | 82.15(.016) | 84.40(.006) |
| US | 78.23(.007) | 80.34(.003) | 81.99(.006) | 82.34(.008) | 82.21(.004) |
| AUSDS (FGV) | 81.22(.004) | 83.25(.001) | 84.18(.005) | 84.49(.004) | 84.62(.009) |
| AUSDS (DeepFool) | 82.37(.003) | 83.31(.004) | 83.77(.002) | 84.68(.001) | 84.73(.005) |
| AUSDS (C&W) | 81.27(.006) | 84.02(.007) | 82.76(.002) | 84.40(.002) | 83.58(.012) |
| Evaluations | P@100 | P@200 | P@300 |
| Held-out Evaluation | 83 | 77 | 69 |
| Human Evaluation | 93(+10) | 92.5(+15.5) | 91(+22) |
| Model | Evaluations | P@100 | P@200 | P@300 |
| PCNN+ATT | Held-out Evaluation | 83 | 77 | 69 |
| Our method | Held-out Evaluation | 91.2 | 88.4 | 83.4 |
| Our method | Human Evaluation | 93 | 92.5 | 91 |
| Model | P@100(%) | P@200(%) | P@300(%) |
| Zeng et al. 2015 | 88.0 | 85.1 | 82.3 |
| Lin et al. 2016 | 91.2 | 88.9 | 83.8 |
| Liu et al. 2017 | 94.0 | 89.0 | 87.0 |
| Qin et al. 2018b | 88.8 | 86.2 | 84.8 |
| Qin et al. 2018a | 87.0 | 83.8 | 80.8 |
| Liu et al. 2018 | 95.7 | 93.4 | 89.9 |
| BGRU | 94.4 | 89.5 | 84.7 |
| BGRU+ATT | 95.1 | 90.1 | 87.1 |
| Model | Evaluations | P@100 | P@200 | P@300 |
| BGRU+ATT | Held-out Evaluation | 82 | 78.5 | 74.3 |
| Our method | Held-out Evaluation | 95.2 | 90.1 | 87.1 |
| Our method | Human Evaluation | 98 | 96 | 95 |
| Category | Instance | Real Label | Prediction | Probability |
| false negative | He renewed that call four years ago in a document jointly written with Ami Ayalon, a former chief of Israel's shin bet security agency and a leader of the labor party. | /person/nationality | /person/nationality | 1.0 (vetted) |
| | But, if so, you probably would not be familiar with the town of Ramapo in Rockland County. | /location/contain | /location/contain | 0.842 |
| | Mr. voulgaris lives in oyster bay but has summered on shelter island since he was a child growing up in Huntington in western Suffolk County. | /location/contain | /location/contain | 0.837 |
| true negative | His visit opened a new level of debate in Israel about the possibility of negotiations with the Syrian president, Bashar Al-Assad. | NA | /person/nationality | 0.0 (vetted) |
| | They are in the united states, the United Kingdom and Canada, among other places, but not in the Jewish settlements of the west bank. | NA | /administrative_division/country | 0.0 |
| | Mr. spielberg and stacey snider, the former Universal Pictures studio chairman who joined DreamWorks last year as chief executive, have sole authority to greenlight films that cost $85 million or less. | NA | /person/company | 0.088 |
| Agent | #Parameters |
| ACER | 110 K |
| I2A (Model-free path) | 80 K |
| I2A (Model-based path) | 1.2 M |
| I2A (Total) | 1.4 M |
| ADC (Model-based critic) | 110 K |
| ADC (Total) | 240 K |
| Environment Model | 16 K |
| Agent | Suc. | Std. | Required data |
| ACER | 78.1 | ±7.7 | 1200 |
| GP-SARSA | 89.6 | ±3.3 | 800 |
| I2A | 93.9 | ±2.3 | 750 |
| ADC | 95.8 | ±1.2 | 600 |
| | Env. 1 | Env. 2 | Env. 3 | Env. 4 | Env. 5 | Env. 6 |
| SER | 0% | 0% | 15% | 15% | 15% | 30% |
| Masks | On | Off | On | Off | On | On |
| User | Standard | Standard | Standard | Standard | Unfriendly | Standard |
| Task | GP-SARSA Suc. | GP-SARSA Turns | ACER Suc. | ACER Turns | I2A Suc. | I2A Turns | ADC Suc. | ADC Turns |
| Env. 1 | 99.2% | 6.4 | 98.6% | 6.0 | 97.9% | 6.0 | 99.1% | 6.0 |
| Env. 2 | 95.7% | 7.2 | 87.3% | 6.5 | 79.8% | 5.8 | 98.7% | 6.0 |
| Env. 3 | 95.8% | 7.7 | 95.3% | 7.1 | 96.3% | 7.0 | 96.1% | 7.0 |
| Env. 4 | 89.6% | 8.2 | 78.1% | 6.7 | 93.9% | 7.2 | 95.8% | 6.8 |
| Env. 5 | 92.5% | 9.6 | 94.0% | 8.2 | 94.2% | 8.0 | 95.6% | 8.0 |
| Env. 6 | 90.0% | 9.0 | 81.0% | 8.1 | 87.9% | 8.1 | 92.0% | 7.9 |
| Simulation Sample | |||
| Cambridge Restaurant Booking User Goal. Constraints: (Area: North), (Price range: Don't Care), (Food type: Chinese); Requests: (Address: None), (Name: None), (Description: None); Patience: 4 |||
| ACER agent (57% success rate) | ADC agent (88% success rate) | ||
| 1 | usr: Hello. I want to find a restaurant in Cambridge. | 1 | usr: Hello. I want to find a restaurant in Cambridge. |
| 2 | agt: Do you want Asian oriental food? | 2 | agt: What price range do you want? Expensive or moderate? |
| 3 | usr: No, I want some Chinese food. | 3 | usr: I don't care. |
| 4 | agt: Ok, I will book Midsummer House Restaurant for you. | 4 | agt: Asian oriental or Mexican food? |
| 5 | usr: Any other choices? | 5 | usr: Chinese food, please. |
| 6 | agt: The phone number of Midsummer House Restaurant is 01223 369299. | 6 | agt: What else do you want? |
| 7 | usr: Ok. | 7 | usr: Just a Chinese restaurant in the north. |
| 8 | agt: The postcode of Midsummer House Restaurant is CB4 1HA. | 8 | agt: Do you want some expensive food? |
| 9 | usr: I want a restaurant. | 9 | usr: Yes, please. |
| 10 | agt: The phone number of Midsummer House Restaurant is 01223 369299. | 10 | agt: Hakka is a Chinese restaurant in the north area. |
| 11 | usr: In the north area. | 11 | usr: Any other choices? |
| 12 | agt: The address of Midsummer House Restaurant is Midsummer Common. | 12 | agt: The Hotpot is also a Chinese restaurant in the north area. |
| 13 | usr: I don't care. | 13 | usr: What is the address? |
| 14 | agt: The postcode of Midsummer House Restaurant is CB4 1HA. | 14 | agt: Which price range do you want? |
| 15 | usr: What's the food type of Midsummer House? | 15 | usr: Give me the address of The Hotpot. |
| 16 | agt: North or centre? | 16 | agt: The address of The Hotpot is 66 Chesterton Road Chesterton. |
| 17 | usr: North. | 17 | usr: What is the description? |
| 18 | agt: The food type of Midsummer House is British. | 18 | agt: Which price range do you want? |
| 19 | usr: I want some light bites. | 19 | usr: What is the description of The Hotpot? agt: The Hotpot serves a variety of Chinese dishes at their restaurant and can cater for private parties of up to five guests upon request. |
| 20 | agt: North or south? | 20 | agt: Thank you! Bye. |
| 21 | usr: North. | 21 | |
| Repeat turns 20 and 21 until the user runs out of patience. Failure | Success | ||
| Type | System | SE07 (dev) | SE2 | SE3 | SE13 | SE15 | Noun | Verb | Adj | Adv | ALL |
| KB | Most frequent sense baseline | 54.5 | 65.6 | 66.0 | 63.8 | 67.1 | 67.7 | 49.8 | 73.1 | 80.5 | 65.5 |
| | Leskext+emb | 56.7 | 63.0 | 63.7 | 66.2 | 64.6 | 70.0 | 51.1 | 51.7 | 80.6 | 64.2 |
| | Babelfy | 51.6 | 67.0 | 63.5 | 66.4 | 70.3 | 68.9 | 50.7 | 73.2 | 79.8 | 66.4 |
| Sup | IMS+emb | 62.6 | 72.2 | 70.4 | 65.9 | 71.5 | 71.9 | 56.6 | 75.9 | 84.7 | 70.1 |
| | LSTM-LP | 63.5 | 73.8 | 71.8 | 69.5 | 72.6 | - | - | - | - | - |
| | Bi-LSTM | - | 71.1 | 68.4 | 64.8 | 68.3 | 69.5 | 55.9 | 76.2 | 82.4 | 68.4 |
| | HCAN | - | 72.8 | 70.3 | 68.5 | 72.8 | 72.7 | 58.2 | 77.4 | 84.1 | 71.1 |
| Feat | LMMS2348 (BERT) | 68.1 | 76.3 | 75.6 | 75.1 | 77.0 | - | - | - | - | 75.4 |
| | SemCor+WNGC, hypernyms (single) | - | - | - | - | - | - | - | - | - | 77.1 |
| | SemCor+WNGC, hypernyms (ensemble) | 73.4 | 79.7 | 77.8 | 78.7 | 82.6 | 81.4 | 68.7 | 83.7 | 85.5 | 79.0 |
| | SENSEMBERTsup | - | - | - | - | - | 80.4 | - | - | - | - |
| | BEM2 | 74.5 | 79.4 | 77.4 | 79.7 | 81.7 | 81.4 | 68.5 | 83.0 | 87.9 | 79.0 |
| | EWISERhyper2 | 75.2 | 80.8 | 79.0 | 80.7 | 81.8 | 82.9 | 69.4 | 83.6 | 87.3 | 80.1 |
| FT | BERTdef | - | 76.4 | 74.9 | 76.3 | 78.3 | 78.3 | 65.2 | 80.5 | 83.8 | 76.3 |
| | GlossBERT (Sent-CLS-WS) | 72.5 | 77.7 | 75.2 | 76.1 | 80.4 | 79.3 | 66.9 | 78.2 | 86.4 | 77.0 |
| Ours | BERTbase (baseline) | 73.6 | 79.4 | 76.8 | 77.4 | 81.5 | 80.6 | 67.9 | 82.2 | 87.3 | 78.2 |
| | BERTbase (augmented) | 73.6 | 79.3 | 76.9 | 79.1 | 82.0 | 81.3 | 67.7 | 82.2 | 87.9 | 78.7 |
| | BERTlarge (baseline) | 73.0 | 79.9 | 77.4 | 78.2 | 81.8 | 81.2 | 68.8 | 81.5 | 88.2 | 78.7 |
| | BERTlarge (augmented) | 72.7 | 79.8 | 77.8 | 79.7 | 84.4 | 82.6 | 68.5 | 82.1 | 86.4 | 79.5 |
| Model | SE07 | SE2 | SE3 | SE13 | SE15 |
| GlossBERT | 72.5 | 77.7 | 75.2 | 76.1 | 80.4 |
| BERTbase | 73.0 | 79.1 | 77.3 | 77.4 | 81.0 |
| Corpus | docs | tokens | chains | mentions |
| ONT | 2632 | 1289K | 34K | 152K |
| tc | 111 | 81K | 1931 | 12K |
| bc | 284 | 144K | 4236 | 18K |
| bn | 711 | 172K | 6138 | 21K |
| mz | 410 | 164K | 3534 | 13K |
| nw | 622 | 387K | 9404 | 34K |
| pt | 320 | 210K | 6611 | 42K |
| wb | 174 | 131K | 2993 | 12K |
| TW' | 185 | 48K | 1534 | 6K |
| Split | Tokens | Chains | Mentions |
| train | 44885 | 1411 | 5946 |
| test | 3260 | 123 | 408 |
| Test | Tokens | Chains | Mentions |
| A - ONT | 1289K | 34K | 152K |
| B - TW only | 44.8K | 1.4K | 5.9K |
| C - TW+ONT | 1333.8K | 35.4K | 157.9K |
| D - TW+spok | 269.8K | 7.5K | 35.9K |
| E - TW+writ | 269K | 5.8K | 22.8K |
| Test | Rec. | Prec. | F1 | Rec.¹ | Prec.¹ | F1¹ | Rec.² | Prec.² | F1² |
| MUC | |||||||||
| A-ONT | 38.24 | 55.89 | 45.41 | 35.74 | 51.36 | 42.15 | 41.05 | 66.47 | 50.75 |
| B-TW only | 56.84 | 74.65 | 64.54 | 50.95 | 70.89 | 59.29 | - | - | - |
| C-TW+ONT | 60.35 | 71.07 | 65.27 | 46.38 | 67.77 | 55.07 | 62.8 | 73.06 | 67.54 |
| D-TW+spok | 62.1 | 77.97 | 68.41 | 47.9 | 75.44 | 58.6 | 61.75 | 72.72 | 66.79 |
| E-TW+writ | 60.35 | 71.36 | 65.39 | 54.75 | 69.23 | 61.14 | 62.45 | 73.85 | 67.68 |
| B³ |||||||||
| A-ONT | 35.14 | 56.02 | 43.18 | 33.19 | 51.68 | 40.42 | 37.21 | 66.78 | 47.79 |
| B-TW only | 51.64 | 68.77 | 58.99 | 46.31 | 63.52 | 53.57 | - | - | - |
| C-TW+ONT | 55.95 | 66.02 | 60.57 | 44.58 | 63.04 | 52.23 | 58.29 | 68.97 | 63.18 |
| D-TW+spok | 58.25 | 74.16 | 65.25 | 46.46 | 71.45 | 56.31 | 57.16 | 68.48 | 62.31 |
| E-TW+writ | 55.19 | 63.9 | 59.23 | 49.28 | 60.4 | 54.28 | 59.24 | 68.85 | 63.68 |
| CEAFE | |||||||||
| A-ONT | 44.5 | 49.76 | 46.98 | 43.26 | 47.59 | 45.32 | 49.13 | 61.04 | 54.44 |
| B-TW only | 50.97 | 69.66 | 58.87 | 44.54 | 65.96 | 52.96 | - | - | - |
| C-TW+ONT | 56.68 | 67.68 | 61.69 | 50.0 | 65.48 | 56.71 | 59.29 | 70.12 | 64.25 |
| D-TW+spok | 61.81 | 71.06 | 66.12 | 53.94 | 68.2 | 60.24 | 59.64 | 64.92 | 62.17 |
| E-TW+writ | 52.4 | 67.85 | 59.13 | 46.01 | 64.06 | 53.55 | 58.14 | 67.47 | 62.46 |
| Average | |||||||||
| A-ONT | 39.29 | 53.89 | 45.18 | 37.39 | 50.21 | 42.6 | 42.46 | 64.76 | 50.99 |
| B-TW only | 53.15 | 71.025 | 60.8 | 47.27 | 66.58 | 55.27 | - | - | - |
| C-TW+ONT | 57.76 | 68.25 | 62.51 | 46.9 | 65.43 | 54.67 | 60.12 | 70.71 | 65.0 |
| D-TW+spok | 60.72 | 74.39 | 66.8 | 49.43 | 71.69 | 58.3 | 59.51 | 68.7 | 63.76 |
| E-TW+writ | 55.98 | 67.7 | 61.25 | 50.01 | 64.56 | 56.32 | 59.94 | 70.05 | 64.60 |
| | Gold | A | B | D |
| Pred. Mentions | 408 | 305 | 307 | 334 |
| Username | 8 | 51 | 6 | 5 |
| tweet-initial | 1 | 44 | 0 | 0 |
| Hashtags | 11 | 0 | 4 | 5 |
| Correctly Pred. | 408 | 218 | 265 | 293 |
| Avg. #tokens | 1.64 | 1.41 | 1.13 | 1.18 |
| Pronouns | 219 | 149 | 199 | 194 |
| 1st person | 57 | 38 | 53 | 50 |
| 2nd person | 64 | 26 | 63 | 62 |
| 3rd person | 68 | 60 | 61 | 59 |
| Username | 8 | 6 | 5 | 5 |
| tweet-initial | 1 | 1 | 0 | 0 |
| Hashtags | 11 | 0 | 3 | 5 |
| Pred. Chains | 123 | 110 | 90 | 107 |
| Correct Chains | - | 18 | 27 | 37 |
| Partially Correct³ | - | 10 | 11 | 14 |
| Missed Entities | - | 39 | 32 | 20 |
| Model | Dev Sent. Prec. | Dev Sent. Rec. | Dev Sent. F1 | Dev Abs. Prec. | Dev Abs. Rec. | Dev Abs. F1 | Test Sent. Prec. | Test Sent. Rec. | Test Sent. F1 | Test Abs. Prec. | Test Abs. Rec. | Test Abs. F1 |
| Baselines | ||||||||||||
| SciBERT | 45.78 | 38.52 | 41.84 | 51.93 | 44.98 | 48.21 | - | - | - | - | - | - |
| RoBERTa | 46.51 | 38.25 | 41.98 | 53.30 | 46.41 | 49.62 | 38.6 | 40.5 | 39.5 | 46.6 | 46.4 | 46.5 |
| SciKGAT | ||||||||||||
| KGAT | 57.07 | 31.97 | 40.98 | 72.73 | 38.28 | 50.16 | - | - | - | - | - | - |
| SciKGAT (w. A) | 42.07 | 47.81 | 44.76 | 47.66 | 58.37 | 52.47 | 40.50 | 48.38 | 44.09 | 47.06 | 57.66 | 51.82 |
| SciKGAT (w. AR) | 50.00 | 47.81 | 48.88 | 53.15 | 56.46 | 54.76 | 41.67 | 45.95 | 43.70 | 47.47 | 54.96 | 50.94 |
| SciKGAT (Full) | 74.36 | 39.62 | 51.69 | 84.26 | 43.54 | 57.41 | 61.15 | 42.97 | 50.48 | 76.09 | 47.30 | 58.33 |
| Ablation | Model | Ranking Prec. | Ranking Rec. | Ranking F1 | Sent. Prec. | Sent. Rec. | Sent. F1 | Abs. Prec. | Abs. Rec. | Abs. F1 |
| Abstract Retrieval | TF-IDF | 16.11 | 69.38 | 26.15 | 46.51 | 38.25 | 41.98 | 53.30 | 46.41 | 49.62 |
| | w. SciBERT | 19.78 | 85.17 | 32.10 | 42.09 | 47.27 | 44.53 | 48.18 | 56.94 | 52.19 |
| | w. SciBERT-MLM | 20.33 | 87.56 | 33.00 | 42.07 | 47.81 | 44.76 | 47.66 | 58.37 | 52.47 |
| Rationale Selection | SciBERT | 36.90 | 65.03 | 47.08 | 43.22 | 46.99 | 45.03 | 48.94 | 55.02 | 51.80 |
| | SciBERT-MLM | 43.73 | 60.93 | 50.91 | 50.00 | 47.81 | 48.88 | 53.15 | 56.46 | 54.76 |
| Fact Verification | SciBERT | 43.73 | 60.93 | 50.91 | 36.55 | 38.25 | 37.38 | 36.92 | 45.93 | 40.94 |
| | w. KGAT | - | - | - | 51.61 | 34.97 | 41.69 | 58.99 | 39.23 | 47.13 |
| | w. KGAT (RP Init) | - | - | - | 60.10 | 33.33 | 42.88 | 66.38 | 36.84 | 47.38 |
| | w. KGAT (MLM Init) | - | - | - | 56.00 | 34.43 | 42.64 | 65.32 | 38.76 | 48.65 |
| | RoBERTa-Base | 43.73 | 60.93 | 50.91 | 42.72 | 36.89 | 39.59 | 44.50 | 46.41 | 45.43 |
| | w. KGAT | - | - | - | 61.05 | 31.69 | 41.73 | 68.87 | 34.93 | 46.35 |
| | w. KGAT (RP Init) | - | - | - | 61.19 | 36.61 | 45.81 | 67.48 | 39.71 | 50.00 |
| | w. KGAT (MLM Init) | - | - | - | 60.35 | 37.43 | 46.21 | 67.19 | 41.15 | 51.04 |
| | RoBERTa-Large | 43.73 | 60.93 | 50.91 | 50.00 | 47.81 | 48.88 | 53.15 | 56.46 | 54.76 |
| | w. KGAT | - | - | - | 62.87 | 40.71 | 49.42 | 72.39 | 46.41 | 56.56 |
| | w. KGAT (RP Init) | - | - | - | 73.47 | 39.34 | 51.25 | 83.33 | 43.06 | 56.78 |
| | w. KGAT (MLM Init) | - | - | - | 74.36 | 39.62 | 51.69 | 84.26 | 43.54 | 57.41 |
| Claim: Basophils counteract disease development in patients with systemic lupus erythematosus (SLE). |
| Evidence 1: ... basophils and IgE autoantibodies amplify autoantibody production that leads to lupus nephritis ... |
| Evidence 2: Individuals with SLE also have elevated serum IgE, self-reactive IgEs and activated basophils that ... |
| SciKGAT: Contradict RoBERTa: Not Enough Info |
| Claim: In adult tissue, most T cells are memory T cells. |
| Evidence 1: Whereas adult tissues contain a predominance of memory T cells, in pediatric blood and tissues the main subset consists of naive recent thymic emigrants ... |
| SciKGAT: Support KGAT: Contradict |
| Model | BLEU↑ | Speedup↑ |
| MT | 29.69 | - |
| Cascade | 22.52 | 1.06× |
| ST | 17.44 | 0.87× |
| ST + ASR-PT | 20.67 | 1.00× |
| ST + CNN | 20.64 | 1.31× |
| ST + Fixed Rate (k=6) | 21.14 (83.3%) | 1.42× |
| ST + Fixed Rate (k=7) | 20.87 (85.7%) | 1.43× |
| ST + AFSt | 21.57 (84.4%) | 1.38× |
| ST + AFSt,f | 22.38 (85.1%) | 1.37× |
| Metric | Model | De | Es | Fr | It | Nl | Pt | Ro | Ru |
| BLEU↑ | Di Gangi et al. (2019) | 17.30 | 20.80 | 26.90 | 16.80 | 18.80 | 20.10 | 16.50 | 10.50 |
| | Transformer + ASR-PT* | 21.77 | 26.41 | 31.56 | 21.46 | 25.22 | 26.84 | 20.53 | 14.31 |
| | ST | 17.44 | 23.85 | 28.43 | 19.54 | 21.23 | 22.55 | 17.66 | 12.10 |
| | ST + ASR-PT | 20.67 | 25.96 | 32.24 | 20.84 | 23.27 | 24.83 | 19.94 | 13.96 |
| | Cascade | 22.52 | 27.92 | 34.53 | 24.02 | 26.74 | 27.57 | 22.61 | 16.13 |
| | ST + AFSt | 21.57 | 26.78 | 33.34 | 23.08 | 24.68 | 26.13 | 21.73 | 15.10 |
| | ST + AFSt,f | 22.38 | 27.04 | 33.43 | 23.35 | 25.05 | 26.55 | 21.87 | 14.92 |
| SacreBLEU ↑ | ST + AFSt | 21.6 | 26.6 | 31.5 | 22.6 | 24.6 | 25.9 | 20.8 | 14.9 |
| | ST + AFSt,f | 22.4 | 26.9 | 31.6 | 23.0 | 24.9 | 26.3 | 21.0 | 14.7 |
| Temporal Sparsity Rate | ST + AFSt | 84.4% | 84.5% | 83.2% | 84.9% | 84.4% | 84.4% | 84.7% | 84.2% |
| | ST + AFSt,f | 85.1% | 84.5% | 84.7% | 84.9% | 83.5% | 85.1% | 84.8% | 84.7% |
| Speedup ↑ | ST + AFSt | 1.38× | 1.35× | 1.50× | 1.34× | 1.54× | 1.43× | 1.59× | 1.31× |
| | ST + AFSt,f | 1.37× | 1.34× | 1.50× | 1.39× | 1.42× | 1.26× | 1.46× | 1.37× |
| Metric | Model | En-Fr |
| BLEU↑ | Bérard et al. (2018) | 13.40 |
| | Watanabe et al. (2018) | 16.68 |
| | Liu et al. (2019a) | 17.02 |
| | Wang et al. (2019) | 17.05 |
| | Wang et al. (2020) | 17.66 |
| | ST | 14.32 |
| | ST + ASR-PT | 17.05 |
| | Cascade | 18.27 |
| | ST + AFSt | 18.33 |
| | ST + AFSt,f | 18.56 |
| SacreBLEU↑ | ST + AFSt | 16.9 |
| | ST + AFSt,f | 17.2 |
| Temporal Sparsity Rate | ST + AFSt | 84.7% |
| | ST + AFSt,f | 83.5% |
| Speedup↑ | ST + AFSt | 1.84× |
| | ST + AFSt,f | 1.78× |
| Model | F1 |
| Baseline Individual Models | |
| LAST (Dernoncourt et al., 2017) | 0.66 |
| ATTN (Yang et al., 2016) | 0.63 |
| Baseline Joint Model | |
| (Zhang et al., 2018) | 0.61 |
| MGADE | 0.75 |
| Model | F1 |
| Baseline Individual Models | |
| Bi-LSTM (Wunnava et al., 2019) | 0.56 |
| Bi-LSTM + CRF (Wunnava et al., 2019) | 0.63 |
| Baseline Joint Model | |
| (Zhang et al., 2018) | 0.51 |
| MGADE | 0.63 |
| Model | Entity Rec. P | Entity Rec. R | Entity Rec. F1 | Sent. Cls. P | Sent. Cls. R | Sent. Cls. F1 |
| MGADE-SelfA† | 0.58 | 0.52 | 0.55 | 0.84 | 0.55 | 0.67 |
| MGADE-TaskA† | 0.62 | 0.50 | 0.55 | 0.82 | 0.64 | 0.72 |
| MGADE-DualA | 0.68 | 0.55 | 0.61 | 0.87 | 0.65 | 0.74 |
| MGADE | 0.70 | 0.57 | 0.63 | 0.86 | 0.67 | 0.75 |
| Adversary Method | Description | Original Question/Sentence and Corresponding Distractor |
| AddSentDiverse | (Jia and Liang, 2017; Wang and Bansal, 2018) | Q: In what country is Normandy located? +D: D-Day is located in the country of Sri Lanka. |
| AddKSentDiverse | Multiple AddSentDiverse distractors are inserted randomly in the context. | Q: Which county is developing its business center? +D1: The county of Switzerland is developing its art periphery. +D2: The county of Switzerland is developing its home center. |
| AddAnswerPosition | Answer span is preserved in this distractor. It is most misleading when inserted before the original answer. | Q: What is the steam engine's thermodynamic basis? +A: The Rankine cycle is the fundamental thermodynamic underpinning of the steam engine. +D: Rankine cycle is the air engine's thermodynamic basis. |
| InvalidateAnswer | AddSentDiverse and additional elimination of the original answer. | Q: Where has the official home of the Scottish Parliament been since 2004? +D: Since October 2002, the unofficial abroad of the Welsh Assembly has been a old Welsh Assembly Houses, in the Golden Gate Bridge area of Glasgow. |
| PerturbAnswer | Content words (except named entities) are algorithmically replaced with synonyms and evaluated for consistency using a language model. | A: The UK refused to sign the Social Charter and was exempt from the legislation covering Social Charter issues unless it agreed to be bound by the legislation. +P: The UK repudiated to signature the Social Charter and was exempt from the legislation encompassing Social Charter issues unless it consented to be related by the legislation. |
| PerturbQuestion | A syntactic paraphrasing network is used to generate the source question with a different syntax. | Q: In what country is Normandy located? +P: Where does Normany exist? |
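As a rough illustration only (not the authors' implementation), an AddSentDiverse-style adversary can be sketched as a function that swaps key terms in the question, appends a fake answer, and inserts the resulting distractor sentence at a random position in the context. The `SWAPS` table here is a hypothetical stand-in for the WordNet antonyms and embedding nearest neighbours the real method uses.

```python
import random

# Hypothetical replacement table; the real AddSentDiverse draws
# replacements from WordNet antonyms and embedding nearest neighbours.
SWAPS = {"Normandy": "D-Day", "France": "Sri Lanka"}

def add_sent_diverse(context_sents, question, fake_answer, rng=None):
    """Turn the question into a misleading statement and insert it at a
    random sentence boundary in the context (a minimal sketch)."""
    rng = rng or random.Random(0)
    stmt = question.rstrip("?")
    for old, new in SWAPS.items():
        stmt = stmt.replace(old, new)
    distractor = f"{stmt} is {fake_answer}."
    pos = rng.randrange(len(context_sents) + 1)
    return context_sents[:pos] + [distractor] + context_sents[pos:]
```

Inserting at a random position (rather than always appending) is what makes the distractor harder for position-biased readers to ignore, which is the intuition behind the "Diverse" variant.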
| Adversary Method | SQuAD | NewsQA |
| Baseline (No Adversaries) | 81.17 | 58.40 |
| AddSentDiverse | 65.50 | 51.47 |
| AddKSentDiverse (K=2) | 45.31 | 48.31 |
| AddAnswerPosition | 68.91 | 49.20 |
| InvalidateAnswer | 77.75 | 24.03 |
| PerturbQuestion | 43.67 | 36.76 |
| PerturbAnswer | 71.97 | 59.08 |
| Effect of Multiple Distractors | ||
| AddSentDiverse | 65.50 | 51.47 |
| Add2SentDiverse | 45.31 | 48.31 |
| Add3SentDiverse | 43.49 | 44.81 |
| Combinatorial effect | ||
| AddSentDiverse | 65.50 | 51.47 |
| + PerturbAnswer | 50.71 | 51.43 |
| AddKSentDiverse | 45.31 | 48.31 |
| + PerturbQuestion | 31.56 | 29.56 |
| Effect of Insert Location of AddAnswerPosition | ||
| Random | 68.91 | 49.20 |
| Prepend | 66.52 | 48.01 |
| Append | 67.84 | 48.76 |
| Adversary Method | SQuAD | NewsQA |
| AddSentDiverse | 68.00 | 61.13 |
| AddKSentDiverse (K=2) | 79.44 | 62.31 |
| AddAnswerPosition | 80.16 | 56.90 |
| InvalidateAnswer | 91.41 | 67.57 |
| PerturbQuestion | 60.91 | 44.99 |
| PerturbAnswer | 76.42 | 60.74 |
| Original Dev (No Adversaries) | 78.83 | 58.08 |
| Search Method | SQuAD (in-domain) | NewsQA (in-domain) | SQuAD → NewsQA | SQuAD → TriviaQA |
| Validation | ||||
| Base | 81.17 / 77.54 | 58.40 / 47.04 | 48.36 / 36.06 | 41.60 / 34.86 |
| UniS | 78.83 / 74.68 | 58.08 / 46.79 | 48.24 / 36.03 | 42.04 / 35.11 |
| Auto | 81.63 / 78.06 | 62.17 / 49.41 | 50.57 / 38.56 | 42.41 / 35.41 |
| Bayes | 81.71 / 78.12 | 58.62 / 47.21 | 49.73 / 38.38 | 43.96 / 36.67 |
| Test | ||||
| Base | 80.64 / 77.19 | 57.02 / 45.29 | 44.95 / 34.68 | 36.01 / 29.23 |
| UniS | 78.42 / 75.87 | 57.21 / 45.36 | 46.30 / 35.94 | 37.83 / 30.52 |
| Auto | 81.06 / 77.79 | 59.09 / 45.49 | 46.82 / 35.75 | 37.88 / 30.60 |
| Bayes | 80.88 / 77.57 | 57.63 / 45.32 | 48.95 / 37.44 | 40.99 / 33.68 |
| Search Method | SQuAD → MLQA (de) | SQuAD → XQuAD (ru) | SQuAD → XQuAD (tr) |
| Validation | |||
| Baseline | 58.58 / 36.41 | 67.89 / 44.62 | 42.95 / 25.09 |
| UniformS | 58.97 / 36.68 | 68.11 / 44.84 | 43.12 / 25.26 |
| BayesAug | 59.40 / 37.11 | 68.73 / 45.34 | 44.09 / 25.73 |
| Test | |||
| Baseline | 57.56 / 36.01 | 60.81 / 33.47 | 40.49 / 23.14 |
| UniformS | 58.27 / 36.45 | 61.87 / 34.31 | 41.04 / 23.78 |
| BayesAug | 59.02 / 38.01 | 63.03 / 34.85 | 41.95 / 24.17 |
| Adversary Method | TriviaQA (out-of-domain) | NewsQA (out-of-domain) | MLQA (de) | XQuAD (ru) | XQuAD (tr) |
| AddSentDiverse | 67.17 / 65.60 | 66.26 / 64.59 | 63.68 / 61.09 | 65.21 / 64.04 | 65.17 / 63.83 |
| AddKSentDiverse (K=2) | 78.48 / 76.32 | 77.13 / 75.80 | 76.91 / 74.45 | 77.76 / 75.20 | 77.93 / 75.37 |
| AddAnswerPosition | 80.05 / 77.41 | 79.46 / 76.31 | 78.62 / 75.59 | 80.24 / 77.38 | 79.51 / 76.28 |
| InvalidateAnswer | 88.23 / 85.56 | 90.18 / 78.25 | - | - | - |
| PerturbQuestion | 60.39 / 58.02 | 54.65 / 51.48 | 58.14 / 56.33 | 60.15 / 57.92 | 59.71 / 56.27 |
| PerturbAnswer | 77.12 / 75.38 | 76.30 / 74.12 | 77.28 / 75.82 | 74.31 / 72.88 | 74.72 / 73.16 |
| Adversary Attack | Trained on SQuAD | Trained on SQuAD+AddSentDiverse |
| AddKSentDiverse | 45.31 | 76.52 |
| InvalidateAnswer | 77.75 | 70.91 |
| PerturbQuestion | 43.67 | 45.23 |
| PerturbAnswer | 71.97 | 77.28 |
| AutoAugment Policies | |
| SQuAD → SQuAD | (AddS, None, 0.2) → (IA, None, 0.4) → (AddA, None, 0.2) |
| SQuAD → NewsQA | (None, PA, 0.4) → (None, PA, 0.6) → (AddS, PA, 0.4) |
| SQuAD → TriviaQA | (AddS, None, 0.9) → (AddS, PA, 0.7) → (AddKS, PQ, 0.9) |
| NewsQA → NewsQA | (AddA, PA, 0.2) → (AddKS, None, 0.2) → (AddA, PA, 0.4) |
| BayesAugment Policies | |
| SQuAD → SQuAD | (AddS, 0.29), (AddA, 0.0), (AddA-PA, 0.0), (AddA-PQ, 0.0), (AddKS, 0.0), (AddKS-PA, 0.0) |
| | (AddKS-PQ, 0.0), (AddS-PA, 0.0), (AddS-PQ, 0.0), (PA, 0.61), (PQ, 0.0), (IA, 1.0) |
| SQuAD → NewsQA | (AddS, 1.0), (AddA, 0.0), (AddA-PA, 1.0), (AddA-PQ, 0.0), (AddKS, 0.0), (AddKS-PA, 0.0) |
| | (AddKS-PQ, 0.0), (AddS-PA, 1.0), (AddS-PQ, 0.0), (PA, 0.48), (PQ, 0.0), (IA, 0.0) |
| SQuAD → TriviaQA | (AddS, 1.0), (AddA, 1.0), (AddA-PA, 0.21), (AddA-PQ, 0.18), (AddKS, 0.86), (AddKS-PA, 0.37) |
| | (AddKS-PQ, 0.25), (AddS-PA, 0.12), (AddS-PQ, 0.49), (PA, 0.91), (PQ, 0.83), (IA, 0.26) |
| SQuAD → MLQA(de) | (AddS, 0.042), (AddA-PA, 0.174), (AddA-PQ, 0.565), (AddKS, 0.173), (AddKS-PA, 0.567) |
| | (AddA, 0.514), (AddS-PA, 0.869), (AddS-PQ, 0.720), (PA, 0.903), (PQ, 0.278), (AddKS-PQ, 0.219) |
| SQuAD → XQuAD(ru) | (AddS, 0.147), (AddA-PA, 0.174), (AddA-PQ, 0.79), (AddKS, 0.55), (AddKS-PA, 0.97) |
| | (AddA, 0.77), (AddS-PA, 0.02), (AddS-PQ, 0.59), (PA, 0.11), (PQ, 0.95), (AddKS-PQ, 0.725) |
| SQuAD → XQuAD(tr) | (AddS, 0.091), (AddA-PA, 0.463), (AddA-PQ, 0.64), (AddKS, 0.32), (AddKS-PA, 0.86) |
| | (AddA, 0.34), (AddS-PA, 0.37), (AddS-PQ, 0.43), (PA, 0.27), (PQ, 0.81), (AddKS-PQ, 0.493) |
| NewsQA → NewsQA | (AddS, 1.0), (AddA, 1.0), (AddA-PA, 1.0), (AddA-PQ, 0.0), (AddKS, 0.0), (AddKS-PA, 1.0) |
| | (AddKS-PQ, 0.156), (AddS-PA, 0.0), (AddS-PQ, 0.720), (PA, 0.0), (PQ, 0.0), (IA, 1.0) |
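The policies above are lists of (operation, probability) pairs. Purely as an illustrative sketch (not the authors' code), applying such a stochastic augmentation policy to a QA example might look like:

```python
import random

def apply_policy(example, policy, ops, seed=0):
    """Apply each (op_name, prob) pair in order: with probability `prob`,
    run the named transform on the example. `ops` maps adversary names
    (AddS, PA, IA, ...) to transform functions; both are hypothetical."""
    rng = random.Random(seed)
    for op_name, prob in policy:
        fn = ops.get(op_name)
        if fn is not None and rng.random() < prob:
            example = fn(example)
    return example
```

A probability of 1.0 means the adversary is always applied, 0.0 that it is effectively pruned from the policy.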
| Model | SQuADv1.1 | SQuADv2.0 | NewsQA |
| RoBERTa | 89.73 / 82.38 | 81.17 / 77.54 | 58.40 / 47.04 |
| DistilRoBERTa | 84.57 / 75.81 | 73.29 / 69.47 | 54.21 / 42.76 |
| Hyperparameter | SQuAD v1.1 | SQuAD v2.0 | NewsQA |
| Learning Rate | 3e-5 | 1.5e-5 | 1.6e-5 |
| Batch Size | 24 | 16 | 24 |
| Warmup Ratio | 0.06 | 0.06 | 0.08 |
| No. of Epochs | 2 | 5 | 5 |
| Weight Decay | 0.01 | 0.01 | 0.01 |
| NewsQA Adversary | SQuAD | SQuAD → NewsQA |
| AddSentDiverse | 42.39 / 32.79 | 49.54 / 38.02 |
| PerturbAnswer | 39.95 / 27.60 | 45.52 / 32.49 |
| AddSentDiv-PertrbAns | 35.08 / 26.33 | 43.63 / 32.76 |
| Augmentation Ratio | NewsQA |
| RoBERTa | 48.36 / 36.06 |
| + 1x augmentation | 49.73 / 38.38 |
| + 2x augmentation | 49.84 / 37.97 |
| + 3x augmentation | 49.62 / 38.01 |
| Adversary Attack | Trained on SQuAD | Trained on SQ+ASD/PQ/PA |
| AddSentDiverse+PerturbAnswer | 50.71 | 84.37 |
| AddKSentDiverse+PerturbQuestion | 31.56 | 78.91 |
| AddAnswerPosition | 68.91 | 80.87 |
| AddKSentDiverse | 45.31 | 76.14 |
| InvalidateAnswer | 77.75 | 71.62 |
| Hyperparameter | Range |
| Learning Rate | [1e-5, 2e-5] |
| Batch Size | {8, 16, 24, 32} |
| Warmup Ratio | [0.01, 0.5] |
| Weight Decay | [0.01, 0.1] |
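The ranges above define a hyperparameter search space. A minimal sketch of drawing one configuration from it, assuming uniform sampling over the continuous intervals and uniform choice over the discrete batch sizes (the actual search procedure is not specified here):

```python
import random

# Search space from the table above: tuples are continuous ranges
# (sampled uniformly, an assumption), lists are discrete choices.
SPACE = {
    "learning_rate": (1e-5, 2e-5),
    "batch_size": [8, 16, 24, 32],
    "warmup_ratio": (0.01, 0.5),
    "weight_decay": (0.01, 0.1),
}

def sample_config(rng=None):
    """Draw one hyperparameter configuration from SPACE."""
    rng = rng or random.Random()
    cfg = {}
    for name, spec in SPACE.items():
        if isinstance(spec, tuple):
            lo, hi = spec
            cfg[name] = rng.uniform(lo, hi)
        else:
            cfg[name] = rng.choice(spec)
    return cfg
```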
| Split | Dataset | Sentences | Tokens |
| Train | FCE-train | 27k | 454k |
| | BEA19-train | 34k | 628k |
| | CoNLL14-train | 57k | 1.1M |
| | Lang-8 | 1M | 13M |
| Dev | CoNLL13 | 1.3k | 28k |
| | FCE-dev | 1.9k | 28k |
| | BEA19-dev | 4.3k | 87k |
| Test | CoNLL14-test | 1.3k | 30k |
| | FCE-test | 2.4k | 36k |
| | BEA19-test | 4.4k | 85k |
| System | FCE P | FCE R | FCE F0.5 | CoNLL14 P | CoNLL14 R | CoNLL14 F0.5 | BEA19 P | BEA19 R | BEA19 F0.5 |
| Baselines | |||||||||
| RNN | 58.50 | 20.85 | 42.97 | 60.37 | 18.74 | 41.80 | 49.21 | 34.44 | 45.32 |
| Transformer | 60.87 | 25.03 | 47.30 | 63.98 | 21.52 | 45.88 | 50.38 | 35.43 | 46.45 |
| Adversarial-GE (Our System) | |||||||||
| RNN + CNN | 64.21 | 22.46 | 46.81 | 59.31 | 21.01 | 43.46 | 54.21 | 34.37 | 48.6 |
| Transformer + CNN | 62.53 | 27.82 | 50.04 | 64.68 | 22.57 | 47.10 | 53.78 | 36.52 | 49.13 |
| Recent GEC Systems | |||||||||
| Ji et al. (2017)† | - | - | - | - | - | 41.53 | - | - | - |
| Grundkiewicz and Junczys-Dowmunt (2018)†† | - | - | - | 66.61 | 17.58 | 42.76 | - | - | - |
| Chollampatt and Ng (2018a)‡,† | - | - | - | 59.68 | 23.15 | 45.36 | - | - | - |
| Zhao et al. (2019)¶ | - | - | - | 55.96 | 30.73 | 48.07 | - | - | - |
| Kaneko et al. (2020) | 61.7 | 46.4 | 57.9 | 59.2 | 31.2 | 50.2 | 51.5 | 43.2 | 49.6 |
| Discriminator | Generator | FCE | CoNLL14 | BEA19 |
| SS: Single-Sentence Discriminator | ||||
| CNN | RNN | 41.68 | 40.23 | 45.53 |
| | Transformer | 43.45 | 41.52 | 46.31 |
| RNN | RNN | 41.21 | 39.25 | 45.58 |
| | Transformer | 41.36 | 39.84 | 46.86 |
| SP: Sentence-Pair Discriminator | ||||
| CNN | RNN | 46.81 | 43.46 | 48.6 |
| | Transformer | 50.04 | 47.10 | 49.13 |
| RNN | RNN | 46.45 | 43.17 | 48.11 |
| | Transformer | 49.88 | 46.95 | 49.02 |
| GLEU | RNN | 43.35 | 42.1 | 46.68 |
| | Transformer | 45.65 | 45.9 | 47.84 |
| Dataset | Lang Pair | Number of sentences (train/valid/test) |
| IWSLT17 | FR ↔ EN | 232k / 890 / 1210 |
| | AR ↔ EN | 231k / 888 / 1205 |
| IWSLT15 | CS ↔ EN | 105k / 1385 / 1327 |
| | VI ↔ EN | 133k / 1553 / 1268 |
| IWSLT13 | TR ↔ EN | 132k / 887 / 1568 |
| | PL ↔ EN | 144k / 767 / 1564 |
| MTNT1.1 | FR → EN | 19k / 886 / 1022 (1233) |
| | EN → FR | 35k / 852 / 1020 (1401) |
| Lang Pair | BASE | SR | ADVSR |
| IWSLT17 | |||
| FR → EN | 37.9 | 38.1 | 38.5 |
| EN → FR | 38.8 | 39.1 | 39.8 |
| AR → EN | 31.7 | 32.3 | 32.6 |
| EN → AR | 14.4 | 14.3 | 14.9 |
| IWSLT15 | |||
| CS → EN | 28.9 | 30.5 | 32.1 |
| EN → CS | 20.4 | 21.7 | 23.0 |
| VI → EN | 28.1 | 28.4 | 29.3 |
| EN → VI | 30.9 | 31.7 | 32.4 |
| IWSLT13 | |||
| PL → EN | 19.1 | 19.7 | 20.6 |
| EN → PL | 13.5 | 14.1 | 15.1 |
| TR → EN | 21.3 | 22.6 | 24.0 |
| EN → TR | 12.6 | 14.4 | 14.6 |
| Dataset | BASE | SR | ADVSR |
| MTNT2018 | |||
| FR → EN | 25.7 | 27.6 | 27.2 |
| EN → FR | 26.7 | 27.5 | 28.2 |
| MTNT2018 + FT | |||
| FR → EN | 36.5 | 37.9 | 38.8 |
| EN → FR | 33.2 | 34.4 | 35.3 |
| MTNT2019 | |||
| FR → EN | 27.6 | 29.3 | 30.2 |
| EN → FR | 22.8 | 23.8 | 24.1 |
| MTNT2019 + FT | |||
| FR → EN | 36.2 | 38.1 | 38.6 |
| EN → FR | 27.6 | 28.2 | 28.9 |
| Method | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
| FR → EN | |||||
| BASE | 30.7 | 25.6 | 20.3 | 16.2 | 11.4 |
| SR | 33.2 | 28.5 | 23.3 | 18.7 | 14.7 |
| ADVSR | 34.8 | 31.1 | 28.7 | 25.0 | 21.8 |
| EN → FR | |||||
| BASE | 31.1 | 24.2 | 18.6 | 14.6 | 10.6 |
| SR | 34.2 | 27.8 | 23.9 | 18.9 | 14.4 |
| ADVSR | 35.1 | 30.3 | 26.4 | 23.0 | 19.1 |
| PL→EN | CS→EN | FR→EN | |
| Input | Chodź, zatańcz ze mną. | My aktivujeme komunitu. | Profitez de votre(soirée). |
| Seg. | Chodź, za_ta_n_cz ze mną | My aktiv_ujeme komunitu. | Pro_fi_t_ez de votre"Soirée'. |
| REF. | Come, dance with me. | We activate the community. | Enjoy your night. |
| BASE | Come with me | We act the community. | Get out of your night. |
| SR | Come on. Stay with me. | We act a community. | Protect your evening. |
| ADVSR | Come, dance with me. | We activate the community. | Enjoy your evening. |
| Model | Total | Coarse | Fine | Ultra-Fine | # Params | ||||||||
| P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | ||
| DENOISED | 50.7 | 33.1 | 40.1 | 66.9 | 80.7 | 73.2 | 41.7 | 46.2 | 43.8 | 45.6 | 17.4 | 25.2 | 31.0M |
| BERT | 51.6 | 32.8 | 40.1 | 67.4 | 80.6 | 73.4 | 41.6 | 54.7 | 47.3 | 46.3 | 15.6 | 23.4 | 110.0M |
| LABELGCN | 49.3 | 28.1 | 35.8 | 66.2 | 68.8 | 67.5 | 43.9 | 40.7 | 42.2 | 42.4 | 14.2 | 21.3 | 5.1M |
| MULTITASK | 48.0 | 23.0 | 31.0 | 60.0 | 61.0 | 61.0 | 40.0 | 38.0 | 39.0 | 42.0 | 8.0 | 14.0 | 6.1M |
| HY BASE | 48.5 | 29.1 | 36.3 | 64.4 | 72.2 | 68.1 | 39.4 | 38.5 | 38.9 | 39.3 | 14.5 | 21.2 | 1.8M |
| HY LARGE | 42.3 | 33.5 | 37.4 | 63.6 | 72.1 | 67.6 | 36.3 | 48.3 | 41.4 | 33.3 | 19.7 | 24.7 | 4.6M |
| HY XLARGE | 43.4 | 34.2 | 38.2 | 61.4 | 73.9 | 67.1 | 35.7 | 46.6 | 40.4 | 36.5 | 19.9 | 25.7 | 9.5M |
| person | artist | musician | |||
| Types | dD | Types | dD | Types | dD |
| artist | 0.26 | musician | 0.25 | singer | 0.24 |
| author | 0.28 | actor | 0.26 | actor | 0.25 |
| actor | 0.30 | person | 0.26 | artist | 0.25 |
| speaker | 0.30 | author | 0.26 | composer | 0.27 |
| leader | 0.30 | singer | 0.28 | band | 0.27 |
| Model | Coarse | Fine | Ultra | ||||
| Ma | Mi | Ma | Mi | Ma | Mi | ||
| BASE | HY | 69.6 | 67.3 | 42.0 | 39.7 | 21.2 | 19.1 |
| EU | 68.5 | 66.1 | 39.8 | 36.5 | 17.8 | 16.1 | |
| LARGE | HY | 67.9 | 65.4 | 38.4 | 36.3 | 24.3 | 22.3 |
| EU | 67.1 | 63.8 | 36.7 | 34.7 | 22.0 | 19.7 | |
| XLARGE | HY | 69.1 | 66.2 | 39.7 | 37.2 | 26.1 | 24.0 |
| EU | 67.9 | 65.4 | 37.8 | 35.3 | 22.2 | 20.0 | |
| BASE +Model | Coarse | Fine | Ultra | |||
| Ma | Mi | Ma | Mi | Ma | Mi | |
| HY GLOVE | 68.7 | 66.6 | 41.5 | 38.8 | 22.1 | 20.1 |
| EU GLOVE | 67.8 | 65.3 | 39.7 | 36.0 | 20.7 | 18.6 |
| Model | Coarse | Fine | Ultra | |||
| Ma | Mi | Ma | Mi | Ma | Mi | |
| HY BASE | 69.6 | 67.3 | 42.0 | 39.7 | 21.2 | 19.1 |
| EU Encoder | 68.8 | 66.3 | 41.7 | 38.9 | 22.0 | 20.1 |
| EU Attention | 68.9 | 66.4 | 40.8 | 38.0 | 20.1 | 18.4 |
| EU Concat | 68.6 | 66.1 | 40.6 | 37.5 | 21.8 | 19.8 |
| EU MLR | 69.2 | 67.1 | 40.8 | 38.0 | 17.3 | 15.8 |
| Model | Coarse | Fine | Ultra | |||
| Ma | Mi | Ma | Mi | Ma | Mi | |
| BASE | HY | 82.0 | 80.2 | 41.8 | 41.4 | 23.9 |
| EU | 81.8 | 80.3 | 37.7 | 36.1 | 17.5 | |
| LARGE | HY | 83.1 | 81.3 | 42.0 | 41.4 | 24.0 |
| EU | 82.4 | 80.9 | 38.2 | 36.7 | 18.9 | |
| Model | Acc. | Ma-F1 | Mi-F1 |
| Shimaoka et al. (2017) | 51.7 | 70.9 | 64.9 |
| AFET (Ren et al., 2016a) | 55.1 | 71.1 | 64.7 |
| PLE (Ren et al., 2016b) | 57.2 | 71.5 | 66.1 |
| BERT | 51.8 | 76.6 | 69.1 |
| MULTITASK | 59.5 | 76.8 | 71.8 |
| LABELGCN | 59.6 | 77.8 | 72.2 |
| HY LARGE | 47.4 | 75.8 | 69.4 |
| Parameter | Value |
| Batch size BASE | 900 |
| Batch size LARGE | 350 |
| Batch size XLARGE | 160 |
| BASE dM | 40 |
| BASE dC | 20 |
| BASE dS | 20 |
| BASE dM + dC + 2dS | 100 |
| LARGE dM | 100 |
| LARGE dC | 50 |
| LARGE dS | 50 |
| LARGE dM + dC + 2dS | 250 |
| XLARGE dM | 200 |
| XLARGE dC | 100 |
| XLARGE dS | 100 |
| XLARGE dM + dC + 2dS | 500 |
| Mention non-linearity | tanh |
| Context non-linearity | tanh |
| Epochs | 40 |
| Crowd cycles | 5 |
| Input dropout | 0.2 |
| Concat dropout | 0.1 |
| Learning rate | 0.0005 |
| Weight decay | 0.0 |
| Max. gradient norm | 5 |
| organization +Types | institution | firm | group | unit | division | |||||
| dD | dD | Types | dD | Types | dD | Types | dD | Types | dD | |
| institution | 0.34 | firm | 0.24 | business | 0.23 | unit | 0.34 | division | 0.26 | subsidiary |
| company | 0.35 | company | 0.26 | institution | 0.24 | gathering | 0.34 | theatre | 0.28 | unit |
| news_agency | 0.36 | university | 0.26 | company | 0.25 | subject | 0.34 | activist | 0.28 | track |
| business | 0.38 | operator | 0.28 | maker | 0.27 | administration | 0.36 | position | 0.28 | half |
| administration | 0.40 | maker | 0.28 | operator | 0.28 | affiliation | 0.36 | half | 0.28 | activist |
| location +Types | state | country | place | space | half | |||||
| dD | Types | dD | Types | dD | Types | dD | Types | dD | Types | |
| state | 0.33 | country | 0.29 | state | 0.31 | space | 0.40 | half | 0.28 | peak |
| cemetery | 0.35 | half | 0.31 | nation | 0.31 | localization | 0.40 | shopping_mall | 0.29 | operator |
| space | 0.35 | agency | 0.31 | agency | 0.32 | place_name | 0.40 | venue | 0.29 | theatre |
| half | 0.35 | activist | 0.32 | kingdom | 0.34 | close | 0.41 | landmark | 0.30 | placement |
| area | 0.36 | unit | 0.32 | world | 0.35 | birthplace | 0.41 | localization | 0.30 | summit |
| event +Types | conflict | war | time | duration | calendar | |||||
| dD | Types | dD | Types | dD | Types | dD | Types | dD | Types | |
| conflict | 0.44 | war | 0.34 | guerrilla | 0.32 | duration | 0.40 | calendar | 0.30 | date |
| activist | 0.45 | dispute | 0.36 | conflict | 0.34 | period | 0.43 | peak | 0.31 | phrase |
| election | 0.45 | series | 0.37 | military | 0.35 | length | 0.46 | half | 0.32 | second |
| activity | 0.46 | guerrilla | 0.38 | citizen | 0.36 | month | 0.46 | second | 0.32 | activist |
| holiday | 0.46 | future | 0.38 | situation | 0.36 | date | 0.46 | fantasy | 0.32 | need |
| object +Types | machine | computer | entity | separation | placement | |||||
| dD | Types | dD | Types | dD | Types | dD | Types | dD | Types | |
| machine | 0.37 | computer | 0.29 | version | 0.29 | separation | 0.43 | placement | 0.27 | position |
| arrangement | 0.39 | theatre | 0.30 | machine | 0.30 | relative | 0.44 | missionary | 0.27 | localization |
| medium | 0.39 | operator | 0.30 | communication | 0.30 | meaning | 0.44 | meaning | 0.27 | half |
| method | 0.39 | card_game | 0.31 | activist | 0.31 | warlord | 0.45 | variation | 0.27 | separation |
| representation | 0.39 | core | 0.31 | maker | 0.32 | baseball | 0.45 | phrase | 0.27 | winner |
| Model | MixATIS | MixSNIPS | ||||||
| Slot (F1) | Intent (F1) | Intent (Acc) | Overall (Acc) | Slot (F1) | Intent (F1) | Intent (Acc) | Overall (Acc) | |
| Attention BiRNN | 86.6 | - | 71.6 | 38.7 | 89.4 | - | 94.1 | 62.2 |
| Slot-Gated | 88.1 | - | 65.7 | 38.9 | 87.8 | - | 96.0 | 56.5 |
| Slot-gated Intent | 86.7 | - | 66.2 | 39.6 | 87.9 | - | 94.2 | 57.6 |
| Bi-Model | 85.5 | - | 72.3 | 39.1 | 86.8 | - | 95.3 | 53.9 |
| SF-ID | 87.7 | - | 63.7 | 36.2 | 89.6 | - | 96.3 | 59.3 |
| Stack-Propagation (concatenation) | 86.6 | - | 76.0 | 42.8 | 93.9 | - | 96.4 | 75.5 |
| Stack-Propagation (sigmoid-decoder) | 87.4 | 79.0 | 71.9 | 41.0 | 93.2 | 97.6 | 94.6 | 71.9 |
| Joint Multiple ID-SF | 87.5 | 80.6 | 73.1 | 38.1 | 91.0 | 98.2 | 95.7 | 66.6 |
| AGIF | 88.1 | 81.2* | 75.8 | 44.5* | 94.5* | 98.6* | 96.5* | 76.4* |
| Model | DSTC4 | |||
| Slot (F1) | Intent (F1) | Intent (Acc) | Overall (Acc) | |
| Attention BiRNN | 44.0 | - | 42.1 | 32.6 |
| Slot-Gated | 45.0 | - | 42.5 | 32.5 |
| Slot-gated Intent | 50.2 | - | 40.6 | 31.7 |
| Bi-Model | 44.6 | - | 41.3 | 30.5 |
| SF-ID | 51.4 | - | 41.8 | 33.0 |
| Stack-Propagation (1) | 52.8 | - | 44.9 | 34.6 |
| Stack-Propagation (2) | 51.9 | 39.2 | 39.2 | 30.5 |
| Joint Multiple ID-SF | 48.0 | 37.5 | 39.0 | 29.4 |
| AGIF | 53.9 | 40.0 | 46.1 | 35.2 |
| Model | MixSNIPS | |||
| Slot (F1) | Intent (F1) | Intent (Acc) | Overall (Acc) | |
| Vanilla Attention Interaction | 93.8 | 98.0 | 95.2 | 74.0 |
| GCN-based Interaction | 93.3 | 98.3 | 96.0 | 72.7 |
| Sentence-Level Augmented + More Parameters | 93.8 | 98.1 | 95.7 | 73.9 |
| 94.1 | 98.6 | 96.6 | 73.6 | |
| AGIF | 94.5 | 98.6 | 96.5 | 76.4 |
| Model | ATIS | SNIPS | ||||||
| Slot (F1) | Intent (F1) | Intent (Acc) | Overall (Acc) | Slot (F1) | Intent (F1) | Intent (Acc) | Overall (Acc) | |
| SF-ID | 95.6 | - | 96.6 | 86 | 90.5 | - | 97 | 78.4 |
| Stack-Propagation | 95.9 | - | 96.9 | 86.5 | 94.2 | - | 98.0 | 86.9 |
| Joint Multiple ID-SF | 94.2 | - | 95.4 | - | 88.0 | - | 97.2 | - |
| AGIF | 96.0 | 80.2 | 97.1 | 87.2 | 94.8 | 98.3 | 98.1 | 87.3 |
| Models | WN18RR | FB15k-237 | |||
| Memory | MRR | Hits@10 | MRR | Hits@10 | |
| DistMult* | 79.24 | 45.2 | 53.1 | 34.3 | 53.1 |
| ComplEx* | 39.62 | 47.5 | 54.7 | 34.8 | 53.6 |
| ConvE* | 79.24 | 44.2 | 50.4 | 33.9 | 52.1 |
| HSTE B-CP** | 3.87 | 45.0 | 52.0 | 29.2 | 46.1 |
| DKGE*** | 2.62 | 35.0 | 50.6 | 36.8 | 50.7 |
| HSTE B-CP | 3.87 | 44.2 | 47.2 | 27.1 | 43.7 |
| †HSTE B-CP | 19.34 | 46.4 | 51.2 | 28.9 | 46.0 |
| Bit-flip B-CP | 3.87 | 47.7 | 53.3 | 27.6 | 45.7 |
| (±0.2) | (±0.1) | (±0.0) | (±0.1) | ||
| †Bit-flip B-CP | 19.34 | 49.1 | 55.0 | 29.5 | 47.8 |
| AMI | ICSI | |||||
| Model | ROUGE-1 | R-2 | R-SU4 | ROUGE-1 | R-2 | R-SU4 |
| Random | 35.13 | 6.26 | 13.17 | 29.28 | 3.78 | 10.29 |
| Template | 31.50 | 6.80 | 11.40 | / | / | / |
| TextRank | 35.25 | 6.9 | 13.62 | 29.7 | 4.09 | 10.64 |
| ClusterRank | 35.14 | 6.46 | 13.35 | 27.64 | 3.68 | 9.77 |
| UNS | 37.86 | 7.84 | 14.71 | 31.60 | 4.83 | 11.35 |
| Extractive Oracle | 39.49 | 9.65 | 13.20 | 34.66 | 8.00 | 10.49 |
| PGNet | 40.77 | 14.87 | 18.68 | 32.00 | 7.70 | 12.46 |
| Copy from Train | 43.24 | 12.15 | 14.01 | 34.65 | 5.55 | 10.65 |
| MM (TopicSeg+VFOA)* | 53.29 | 13.51 | / | / | / | / |
| MM (TopicSeg)* | 51.53 | 12.23 | / | / | / | / |
| HMNet | 53.02 | 18.57** | 24.85** | 46.28** | 10.60** | 19.12** |
| Model | ROUGE-1 | R-2 | R-SU4 |
| AMI | |||
| HMNet | 53.0 | 18.6 | 24.9 |
| -pretrain | 48.7 | 18.4 | 23.5 |
| -role vector | 47.8 | 17.2 | 21.7 |
| -hierarchy | 45.1 | 15.9 | 20.5 |
| ICSI | |||
| HMNet | 46.3 | 10.6 | 19.1 |
| -pretrain | 42.3 | 10.6 | 17.8 |
| -role vector | 44.0 | 9.6 | 18.2 |
| -hierarchy | 41.0 | 9.3 | 16.8 |
| Dataset | AMI | |
| Source | HMNet | UNS |
| Readability | 4.17 (.38) | 2.19 (.57) |
| Relevance | 4.08 (.45) | 2.47 (.67) |
| Dataset | ICSI | |
| Source | HMNet | UNS |
| Readability | 4.24 (.20) | 2.08 (.20) |
| Relevance | 4.02 (.55) | 1.75 (.61) |
| d | a domain |
| t | a review text or context |
| m | a mention |
| p | an anaphor |
| Km | (domain+general) knowledge involving m for domain d |
| Kmd | domain knowledge involving m for d |
| Sm | syntax-related phrases of m |
| Sp | syntax-related phrases of p |
| Td | labeled reviews in d |
| Td | unlabeled reviews in d |
| Domain | #R | #E | Train | Dev | Test | |||
| P | N | P | N | P | N | |||
| alarm | 100 | 924 | 647 | 1533 | 96 | 243 | 89 | 187 |
| camera | 100 | 871 | 632 | 1709 | 69 | 160 | 83 | 174 |
| cellphone | 100 | 938 | 679 | 1693 | 62 | 148 | 73 | 189 |
| computer | 100 | 1035 | 703 | 1847 | 86 | 227 | 112 | 273 |
| laptop | 100 | 893 | 641 | 1618 | 88 | 244 | 77 | 209 |
| Model | alarm | cam | phone | com | lap | average |
| Review CR | 58.2 | 60.5 | 57.7 | 59.6 | 58.9 | 58.98 |
| Review CR +BERT | 67.2 | 69.3 | 67.0 | 68.4 | 66.7 | 67.72 |
| C2f-Coref | 68.8 | 70.1 | 67.2 | 69.5 | 67.4 | 68.60 |
| C2f-Coref +BERT | 70.2 | 71.6 | 68.6 | 71.3 | 68.2 | 69.98 |
| Knowledge +BERT | 72.0 | 73.4 | 71.8 | 72.6 | 70.0 | 71.96 |
| Our model | 73.6 | 74.5 | 72.4 | 73.8 | 71.3 | 73.12 |
| Comparison | Model | Avg. F1(+) | Δ F1(+) |
| Our model | 73.12 | 0.00 | |
| Knowledge source | -OMCS knowledge | 72.28 | 0.84 |
| -Domain knowledge | 72.22 | 0.90 | |
| -Senticnet | 72.82 | 0.30 | |
| -all knowledge | 70.56 | 2.56 | |
| Score | -contextFc | 71.14 | 1.98 |
| -knowledgeFK | 71.80 | 1.48 | |
| -phraseFSG | 72.58 | 0.56 | |
| attention | -syntax-based attention | 72.50 | 0.62 |
| +dot attention | 72.96 | 0.16 |
| Context | ...after I bought (a green Moonbeam for myself ... potential buyer also should know that , as with (the other Westclox clock), (the clock) also have (a gold band) ... |
| (Mention, Anaphor) | (a darkgreen Moonbeam, the clock) |
| Domain knowledge | drop, hang, clock, put, alarm, clear, beautiful, expensive, worthwhile ... |
| Our model | (0: 0.47, 1: 0.53) |
| Knowledge+BERT | (0: 0.87, 1: 0.13) |
| C2f-coref+BERT | (0: 0.79, 1: 0.21) |
| Mention (Domain) | windows (laptop) |
| Extracted knowledge (before filtering) | keep, like, product, battery, fast, microsoft, system, upgrade, xp, laptop.. |
| Candidate knowledge (after filtering by tfidf) | microsoft, system, upgrade, xp, laptop.. |
| Total dialogues | 6,760 |
| Avg. utterances per dialogue | 4.76 |
| Avg. tokens per utterance | 12.37 |
| Successful dialogues | 5,191 |
| Annotated markables | 40,172 |
| % markables with 1 referent | 71.81 |
| % markables with 2 referents | 14.85 |
| % markables with ≥3 referents | 12.03 |
| % markables with 0 referent | 1.31 |
| Nuance Type | % Utterance | Example Usage |
| Approximation | 3.98 | almost in the middle |
| Exactness | 2.71 | exactly horizontal |
| Subtlety | 9.37 | slightly to the right |
| Extremity | 9.35 | very light dot |
| Uncertainty | 5.79 | Maybe it's different |
| Annotation | % Agreement | Cohen's κ |
| Attribute Span | 98.5 | 0.88 |
| Relation Span | 95.1 | 0.87 |
| Modifier Span | 99.2 | 0.86 |
| Subject Ident. | 98.8 | 0.96 |
| Object Ident. | 95.9 | 0.79 |
| Modificand Ident. | 99.6 | 0.98 |
| Relation Canon. | 99.7 | 0.96 |
| Modifier Canon. | 87.5 | 0.83 |
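The agreement figures above pair raw % agreement with Cohen's κ, which discounts agreement expected by chance. A minimal sketch of the standard computation (function name and toy labels are illustrative, not from the paper):

```python
def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators' parallel label sequences.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance given each annotator's
    label distribution.
    """
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    p_o = sum(x == y for x, y in zip(ann_a, ann_b)) / n
    labels = set(ann_a) | set(ann_b)
    p_e = sum((ann_a.count(l) / n) * (ann_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

With a skewed label distribution, high raw agreement can still yield a modest κ, which is why both columns are reported.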
| Attribute | Relation | |
| Total | 378 | 4,300 |
| Unique | 121 | 1,139 |
| Avg. per dialogue | 0.63 | 7.17 |
| % inter-utterance subject | 1.59 | 1.37 |
| % inter-utterance object | - | 14.65 |
| % no object | - | 30.84 |
| % modified | 36.51 | 16.86 |
| % unannotatable | 0.79 | 0.79 |
| Entity-Level Accuracy | Markable-Level Exact Match | |
| REF | 85.71±0.23 | 33.15±1.00 |
| REF-location | 84.28±0.27 | 30.53±0.84 |
| REF-color | 83.08±0.32 | 17.09±1.04 |
| REF-size | 83.50±0.22 | 19.41±0.98 |
| NUMREF | 86.03±0.33 | 54.94±0.76 |
| NUMREF-location | 83.35±0.26 | 49.77±0.64 |
| NUMREF-color | 81.19±0.41 | 39.74±1.31 |
| NUMREF-size | 82.39±0.20 | 43.40±0.67 |
| Human | 96.26 | 86.90 |
| Models | REF | REF-abl | NUMREF | NUMREF-abl | Human | |||||||
| Category | Relation | # Cases | satisfy | valid | satisfy | valid | satisfy | valid | satisfy | valid | satisfy | valid |
| Direction | left | 412 | 23.5 | 32.3 | 21.1 | 28.9 | 67.0 | 99.5 | 62.4 | 99.5 | 95.9 | 97.6 |
| right | 468 | 28.0 | 35.5 | 24.6 | 30.8 | 67.3 | 98.7 | 68.2 | 98.7 | 95.3 | 96.4 | |
| above | 514 | 28.6 | 37.4 | 24.7 | 33.1 | 65.2 | 99.2 | 66.5 | 99.4 | 96.7 | 98.6 | |
| below | 444 | 25.2 | 34.5 | 21.6 | 27.9 | 66.0 | 99.1 | 62.2 | 99.1 | 96.4 | 96.8 | |
| horizontal | 37 | 54.1 | 70.3 | 27.0 | 59.5 | 59.5 | 100.0 | 51.4 | 97.3 | 91.9 | 100.0 | |
| vertical | 46 | 37.0 | 73.9 | 23.9 | 54.3 | 43.5 | 95.7 | 45.7 | 95.7 | 82.6 | 100.0 | |
| diagonal | 50 | 48.0 | 74.0 | 30.0 | 50.0 | 60.0 | 98.0 | 60.0 | 98.0 | 90.0 | 100.0 | |
| All | 1,971 | 27.8 | 37.6 | 23.4 | 31.9 | 65.5 | 99.0 | 64.1 | 99.0 | 95.5 | 97.6 | |
| Proximity | near | 271 | 49.4 | 61.3 | 29.9 | 49.1 | 77.1 | 94.5 | 56.1 | 95.2 | 95.2 | 96.7 |
| far | 27 | 29.6 | 40.7 | 33.3 | 40.7 | 77.8 | 100.0 | 92.6 | 100.0 | 96.3 | 96.3 | |
| alone | 111 | 36.9 | 44.1 | 45.0 | 54.1 | 68.5 | 94.6 | 67.6 | 94.6 | 91.9 | 94.6 | |
| All | 409 | 44.7 | 55.3 | 34.2 | 49.9 | 74.8 | 94.9 | 61.6 | 95.4 | 94.4 | 96.1 | |
| Region | interior | 135 | 38.5 | 52.6 | 27.4 | 39.3 | 62.2 | 93.3 | 58.5 | 94.1 | 96.3 | 100.0 |
| exterior | 62 | 40.3 | 48.4 | 40.3 | 53.2 | 80.6 | 98.4 | 87.1 | 98.4 | 98.4 | 98.4 | |
| All | 197 | 39.1 | 51.3 | 31.5 | 43.7 | 68.0 | 94.9 | 67.5 | 95.4 | 97.0 | 99.5 | |
| Color | lighter | 147 | 23.1 | 25.9 | 6.8 | 8.2 | 84.4 | 100.0 | 57.1 | 99.3 | 97.3 | 98.0 |
| lightest | 42 | 45.2 | 66.7 | 14.3 | 33.3 | 61.9 | 100.0 | 31.0 | 100.0 | 83.3 | 100.0 | |
| darker | 171 | 24.0 | 26.3 | 7.0 | 10.5 | 83.0 | 99.4 | 53.2 | 99.4 | 95.9 | 98.8 | |
| darkest | 48 | 56.2 | 64.6 | 14.6 | 33.3 | 66.7 | 100.0 | 35.4 | 100.0 | 89.6 | 97.9 | |
| same | 50 | 12.0 | 30.0 | 8.0 | 30.0 | 40.0 | 88.0 | 32.0 | 86.0 | 92.0 | 96.0 | |
| different | 14 | 64.3 | 71.4 | 71.4 | 71.4 | 64.3 | 100.0 | 78.6 | 92.9 | 92.9 | 100.0 | |
| All | 472 | 28.8 | 35.4 | 10.4 | 18.0 | 74.8 | 98.5 | 49.2 | 97.9 | 94.1 | 98.3 | |
| Size | smaller | 213 | 27.7 | 31.5 | 7.5 | 9.9 | 80.8 | 100.0 | 59.6 | 100.0 | 98.6 | 99.5 |
| smallest | 52 | 71.2 | 73.1 | 21.2 | 34.6 | 86.5 | 98.1 | 48.1 | 98.1 | 92.3 | 98.1 | |
| larger | 238 | 23.1 | 28.6 | 9.7 | 16.0 | 73.5 | 99.6 | 48.7 | 99.6 | 98.3 | 98.3 | |
| largest | 61 | 52.5 | 60.7 | 11.5 | 24.6 | 73.8 | 100.0 | 39.3 | 100.0 | 96.7 | 100.0 | |
| same | 103 | 34.0 | 42.7 | 18.4 | 27.2 | 80.6 | 88.3 | 65.0 | 91.3 | 98.1 | 100.0 | |
| different | 12 | 75.0 | 75.0 | 66.7 | 66.7 | 91.7 | 91.7 | 83.3 | 83.3 | 91.7 | 91.7 | |
| All | 679 | 33.4 | 38.7 | 12.4 | 18.9 | 78.2 | 97.8 | 54.3 | 98.1 | 97.6 | 99.0 | |
| Linguistic Factors | # Cases | NUMREF | Human |
| strong modification | 149 | 76.51 | 95.97 |
| neutral | 3,094 | 70.46 | 95.77 |
| weak modification | 490 | 66.12 | 95.10 |
| inter-utterance subject | 14 | 57.14 | 92.86 |
| inter-utterance object | 265 | 72.08 | 94.72 |
| no object | 1,127 | 74.45 | 92.99 |
| ignorable object | 1,805 | 69.64 | 97.23 |
| unignorable object | 796 | 65.33 | 96.11 |
| All | 3,728 | 70.17 | 95.71 |
| Models | NUMREF | Human | |||
| value | mod-type | diff. | # valid | diff. | # valid |
| xy-value | strong | 86.06 | 39 | 89.15 | 37 |
| neutral | 80.92 | 1,586 | 73.52 | 1,558 | |
| weak | 80.35 | 200 | 53.53 | 198 | |
| color | strong | 66.23 | 15 | 91.80 | 15 |
| neutral | 56.98 | 234 | 60.14 | 232 | |
| weak | 37.73 | 68 | 28.55 | 66 | |
| size | strong | 3.60 | 8 | 4.29 | 8 |
| neutral | 2.67 | 337 | 2.70 | 320 | |
| weak | 1.95 | 105 | 1.58 | 104 | |
| Annotation | % Agreement | Cohen's κ |
| Attribute Start | 98.5 | 0.84 |
| Relation Start | 95.1 | 0.77 |
| Modifier Start | 98.7 | 0.82 |
| Attribute | Relation | |
| % mod-subtlety | 1.06 | 8.12 |
| % mod-extremity | 9.00 | 2.16 |
| % mod-uncertainty | 7.41 | 4.26 |
| % mod-certainty | 0.27 | 1.40 |
| % mod-neutrality | 19.31 | 0.67 |
| % mod-negation | 0.53 | 0.42 |
| Category | Relation | Unique | Examples |
| Direction | left | 150 | to the left (78), on the left (35), left most (5), furthest left (2) |
| right | 192 | to the right (120), on the right (38), lower right (6), to the northeast (1) | |
| above | 190 | above (118), top (92), on top (33), up (17), higher (10), just above (4) | |
| below | 179 | below (88), bottom (56), lower (38), down (14), lowest (7), beneath (4) | |
| horizontal | 19 | horizontal (12), in a horizontal line (4), side by side (3), across from (1) | |
| vertical | 29 | vertical (7), on top of (5), on a vertical line (4), aligned vertically with (1) | |
| diagonal | 38 | diagonal (5), in a diagonal line (5), sloping down to the right (1), slanted (1) | |
| Proximity | near | 59 | close together (63), cluster (32), next to (28), close to (22), near (13) |
| far | 21 | far (5), away from (4), set apart from (1), a ways above (1), a distance from (1) | |
| alone | 13 | by (38), lonely (30), alone (21), lonesome (1), isolated (1) | |
| Region | interior | 47 | middle (41), in the middle (19), between (9), in the center of (2) |
| exterior | 46 | close to the border (5), all around (1), on the outside of (1), surrounding (1) | |
| Color | lighter | 22 | lighter (102), lighter than (10), lighter gray (8), larger lighter (4) |
| lightest | 11 | lightest (28), lightest shade (3), the lightest of (2), lightest and smallest (2) | |
| darker | 30 | darker (130), darker than (16), smaller and darker (4), darker in color (3) | |
| darkest | 10 | darkest (40), smallest darkest (2), the darkest of (1), darkest/largest of (1) | |
| same | 9 | same color (9), identical looking (2), similar shades (1), equally black (1) | |
| different | 11 | different shades (3), different sizes and shades (2), of varying shades (1) | |
| Size | smaller | 17 | smaller (209), smaller than (5), smaller and lighter (4), tinier (1) |
| smallest | 8 | smallest (40), tiniest (4), smallest darkest (2), smallest of (1) | |
| larger | 32 | larger (178), bigger than (7), larger in size (2), double the size of (1) | |
| largest | 10 | largest (41), biggest (11), largest of (2), biggest one of (1) | |
| same | 32 | same size (24), same sized (12), similar in size (5), identical in size (3) | |
| different | 8 | different sizes (3), of different sizes (1), varying sizes (1), opposite in sizes (1) |
| Modification | Unique | Examples |
| Subtlety | 27 | slightly (235), a little (48), a bit (35), a tiny bit (8), very slightly (5) |
| Extremity | 15 | very (87), much (17), pretty (8), quite (3), really (2) |
| Uncertainty | 36 | almost (85), about (49), kind of (23), smallish (6), not completely (3) |
| Certainty | 13 | directly (28), exactly (2), perfect (2), almost exactly (2) |
| Neutrality | 16 | medium (59), med (9), fairly (4), mid-size (3), slightly medium (1) |
| Negation | 4 | not (17), isn’t (1), not perceptibly (1) |
**Algorithm 2: Test for horizontal relation**

```
Input:  all referents A
Output: boolean satisfy, boolean valid

valid ← |A| > 1
if valid then
    // Conduct linear regression and check if coefficient is small
    reg.fit(A)
    satisfy ← reg.coef < 1/3
else
    satisfy ← False
return satisfy, valid
```
| Mean | Std. | Min | Max | 25% | 50% | 75% |
| 4 | 6 | 1 | 683 | 2 | 3 | 5 |
| GOLD STD. | SEED | TEST | |
| Minority | 478 | 25 | 211 |
| Majority | 159,093 | 7955 | 63,767 |
| Augmentation | Type | Unit | #Parameters | Pre-training Corpus |
| ADD | Non-toxic corpus | Sentence | NA | NA |
| PPDB | Knowledge Base | N-gram | NA | NA |
| WORDNET | Knowledge Base | Word | NA | NA |
| GLOVE | GloVe | Word | 30M | |
| BPEMB | GloVe | Subword | 0.5M | Wikipedia |
| GPT-2 | Transformer | Subword | 117M | WebText |
| Classifier | Model Type | Unit | #Parameters | Pre-training Corpus |
| Char-LR | Logistic regression | Character | 30K | - |
| Word-LR | Logistic regression | Word | 30K | - |
| CNN | Convolutional network | Word | 3M | - |
| BERT | Transformer | Subword | 110M | Wikipedia & BookCorpus |
| GOLD STANDARD | ||||
| Char-LR | Word-LR | CNN | BERT | |
| Precision | 0.61 | 0.43 | 0.60 | 0.54 |
| Recall | 0.34 | 0.36 | 0.33 | 0.54 |
| F1 | 0.72 | 0.69 | 0.71 | 0.77 |
| SEED | ||||
| Char-LR | Word-LR | CNN | BERT | |
| Precision | 0.64 | 0.47 | 0.41 | 0.00 |
| Recall | 0.03 | 0.04 | 0.09 | 0.00 |
| F1 | 0.52 | 0.53 | 0.57 | 0.50 |
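Note that the F1 rows in these tables are macro-averaged over both classes, so they sit well above what the minority-class precision and recall alone would give. A quick check against the Char-LR gold-standard row (the near-1.0 majority-class F1 is an assumption for illustration):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    total = precision + recall
    return 2 * precision * recall / total if total else 0.0

def macro_f1(class_f1_scores):
    """Unweighted mean of per-class F1 scores."""
    return sum(class_f1_scores) / len(class_f1_scores)

# Char-LR on GOLD STANDARD: minority-class P=0.61, R=0.34
minority_f1 = f1(0.61, 0.34)          # ~0.44
macro = macro_f1([minority_f1, 1.0])  # assumed majority-class F1 of 1.0
```

Under that assumption the macro score rounds to 0.72, consistent with the Char-LR F1 reported above.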
| SEED | Augmented | |
| Minority | 25 | 25→500 |
| Majority | 7955 | 7955 |
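The 25→500 row corresponds to simple oversampling of the minority class: repeating the seed examples until the target count is reached. A minimal sketch (function name is ours):

```python
import math

def oversample(minority_examples, target_size):
    """Simple oversampling: repeat minority examples until target_size."""
    reps = math.ceil(target_size / len(minority_examples))
    return (minority_examples * reps)[:target_size]
```

Applied to the SEED split, `oversample(seed_minority, 500)` turns 25 minority examples into 500, each repeated 20 times.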
| Augmentation | Metric | Char-LR | Word-LR | CNN | BERT |
| SEED: No Oversampling | Precision | 0.68 ± 0.22 | 0.43 ± 0.27 | 0.45 ± 0.14 | 0.00 ± 0.00 |
| Recall | 0.03 ± 0.02 | 0.04 ± 0.02 | 0.08 ± 0.05 | 0.00 ± 0.00 | |
| F1 (macro) | 0.53 ± 0.02 | 0.54 ± 0.02 | 0.56 ± 0.03 | 0.50 ± 0.00 | |
| COPY: Simple Oversampling | Precision | 0.67 ± 0.07 | 0.38 ± 0.24 | 0.40 ± 0.08 | 0.49 ± 0.07 |
| Recall | 0.16 ± 0.03 | 0.03 ± 0.02 | 0.07 ± 0.03 | 0.36 ± 0.09 | |
| F1 (macro) | 0.63 ± 0.02 | 0.53 ± 0.02 | 0.56 ± 0.02 | 0.70 ± 0.03 | |
| EDA: Wei and Zou (2019) | Precision | 0.66 ± 0.06 | 0.36 ± 0.19 | 0.26 ± 0.09 | 0.21 ± 0.03 |
| Recall | 0.13 ± 0.03 | 0.08 ± 0.04 | 0.07 ± 0.01 | 0.06 ± 0.01 | |
| F1 (macro) | 0.61 ± 0.02 | 0.56 ± 0.03 | 0.55 ± 0.01 | 0.54 ± 0.01 | |
| ADD: Add Majority-class Sentence | Precision | 0.58 ± 0.07 | 0.36 ± 0.21 | 0.45 ± 0.07 | 0.36 ± 0.04 |
| Recall | 0.24 ± 0.04 | 0.06 ± 0.04 | 0.19 ± 0.07 | 0.52 ± 0.07 | |
| F1 (macro) | 0.67 ± 0.03 | 0.55 ± 0.03 | 0.63 ± 0.04 | 0.71 ± 0.01 | |
| PPDB: Phrase Substitutions | Precision | 0.16 ± 0.08 | 0.41 ± 0.27 | 0.37 ± 0.09 | 0.48 ± 0.06 |
| Recall | 0.10 ± 0.03 | 0.04 ± 0.02 | 0.08 ± 0.04 | 0.34 ± 0.08 | |
| F1 (macro) | 0.56 ± 0.02 | 0.53 ± 0.02 | 0.57 ± 0.02 | 0.70 ± 0.03 | |
| WORDNET: Word Substitutions | Precision | 0.16 ± 0.06 | 0.36 ± 0.24 | 0.41 ± 0.08 | 0.47 ± 0.08 |
| Recall | 0.11 ± 0.03 | 0.05 ± 0.03 | 0.11 ± 0.05 | 0.29 ± 0.07 | |
| F1 (macro) | 0.56 ± 0.02 | 0.54 ± 0.02 | 0.58 ± 0.03 | 0.68 ± 0.03 | |
| GLOVE: Word Substitutions | Precision | 0.15 ± 0.04 | 0.39 ± 0.12 | 0.38 ± 0.08 | 0.43 ± 0.11 |
| Recall | 0.14 ± 0.03 | 0.16 ± 0.05 | 0.18 ± 0.06 | 0.18 ± 0.06 | |
| F1 (macro) | 0.57 ± 0.02 | 0.61 ± 0.03 | 0.62 ± 0.03 | 0.62 ± 0.03 | |
| BPEMB: Subword Substitutions | Precision | 0.56 ± 0.07 | 0.33 ± 0.07 | 0.25 ± 0.07 | 0.38 ± 0.12 |
| Recall | 0.22 ± 0.03 | 0.22 ± 0.04 | 0.37 ± 0.08 | 0.16 ± 0.04 | |
| F1 (macro) | 0.66 ± 0.02 | 0.63 ± 0.02 | 0.64 ± 0.03 | 0.61 ± 0.03 | |
| GPT-2: Conditional Generation | Precision | 0.45 ± 0.08 | 0.35 ± 0.07 | 0.31 ± 0.08 | 0.15 ± 0.05 |
| Recall | 0.33 ± 0.04 | 0.42 ± 0.05 | 0.46 ± 0.10 | 0.62 ± 0.09 | |
| F1 (macro) | 0.69 ± 0.02 | 0.69 ± 0.02 | 0.68 ± 0.02 | 0.62 ± 0.03 |
| AB | ||||
| Char-LR | Word-LR | CNN | BERT | |
| Precision | 0.56 | 0.37 | 0.33 | 0.41 |
| Recall | 0.26 | 0.18 | 0.36 | 0.36 |
| F1 | 0.68 | 0.62 | 0.67 | 0.69 |
| ABG | ||||
| Char-LR | Word-LR | CNN | BERT | |
| Precision | 0.48 | 0.37 | 0.31 | 0.28 |
| Recall | 0.36 | 0.39 | 0.52 | 0.65 |
| F1 | 0.70 | 0.69 | 0.69 | 0.69 |
| Char-LR | Word-LR | CNN | BERT | |
| SEED | 0.973 | 0.968 | 0.922 | 0.816 |
| COPY | 0.972 | 0.937 | 0.792 | 0.898 |
| ADD | 0.958 | 0.955 | 0.904 | 0.956 |
| BPEMB | 0.968 | 0.968 | 0.940 | 0.868 |
| GPT-2 | 0.969 | 0.973 | 0.953 | 0.964 |
| ABG | 0.972 | 0.973 | 0.962 | 0.977 |
| ADD | BPEMB | GPT-2 | ABG | |
| CPU | - | 100 | 3,600 | 3,600 |
| GPU | - | - | 3,600 | 3,600 |
| Char-LR | Word-LR | CNN | BERT | |
| CPU | 100 | 100 | 400 | 13,000 |
| GPU | 100 | 100 | 400 | 13,000 |
| GOLD STD. | SEED | TEST | |
| Minority | 1,405 | 75 | 712 |
| Majority | 158,166 | 7,910 | 63,266 |
| GOLD STANDARD | ||||
| Char | Word | CNN | BERT | |
| Precision | 0.64 | 0.54 | 0.70 | 0.55 |
| Recall | 0.40 | 0.31 | 0.20 | 0.62 |
| F1 (macro) | 0.74 | 0.69 | 0.65 | 0.79 |
| Augmentation | Metric | Char-LR | Word-LR | CNN | BERT |
| SEED | Precision | 0.85 ± 0.04 | 0.59 ± 0.05 | 0.52 ± 0.08 | 0.65 ± 0.46 |
| No Oversampling | Recall | 0.11 ± 0.04 | 0.12 ± 0.03 | 0.11 ± 0.04 | 0.06 ± 0.10 |
| F1 (macro) | 0.60 ± 0.03 | 0.60 ± 0.02 | 0.59 ± 0.02 | 0.54 ± 0.08 | |
| COPY | Precision | 0.61 ± 0.02 | 0.54 ± 0.04 | 0.27 ± 0.06 | 0.52 ± 0.06 |
| Simple Oversampling | Recall | 0.34 ± 0.04 | 0.14 ± 0.03 | 0.07 ± 0.01 | 0.50 ± 0.06 |
| F1 (macro) | 0.72 ± 0.02 | 0.61 ± 0.02 | 0.55 ± 0.01 | 0.75 ± 0.01 | |
| ADD | Precision | 0.54 ± 0.04 | 0.54 ± 0.05 | 0.43 ± 0.05 | 0.43 ± 0.05 |
| Add Majority-class Sentence | Recall | 0.47 ± 0.05 | 0.21 ± 0.03 | 0.21 ± 0.04 | 0.58 ± 0.08 |
| F1 (macro) | 0.75 ± 0.01 | 0.65 ± 0.01 | 0.64 ± 0.02 | 0.74 ± 0.01 | |
| BPEMB | Precision | 0.43 ± 0.04 | 0.30 ± 0.03 | 0.15 ± 0.05 | 0.29 ± 0.06 |
| Subword Substitutions | Recall | 0.38 ± 0.04 | 0.29 ± 0.01 | 0.32 ± 0.05 | 0.23 ± 0.03 |
| F1 (macro) | 0.70 ± 0.01 | 0.64 ± 0.01 | 0.59 ± 0.02 | 0.62 ± 0.02 | |
| GPT-2 | Precision | 0.41 ± 0.05 | 0.30 ± 0.03 | 0.33 ± 0.08 | 0.22 ± 0.05 |
| Conditional Generation | Recall | 0.34 ± 0.04 | 0.39 ± 0.03 | 0.34 ± 0.09 | 0.59 ± 0.06 |
| F1 (macro) | 0.68 ± 0.01 | 0.67 ± 0.01 | 0.66 ± 0.01 | 0.65 ± 0.02 | |
| ABG | Precision | 0.41 ± 0.04 | 0.32 ± 0.03 | 0.28 ± 0.06 | 0.27 ± 0.05 |
| ADD,BPEMB,GPT-2 Mix | Recall | 0.50 ± 0.04 | 0.41 ± 0.02 | 0.46 ± 0.05 | 0.62 ± 0.07 |
| F1 (macro) | 0.72 ± 0.01 | 0.68 ± 0.01 | 0.66 ± 0.02 | 0.68 ± 0.02 |
| Augmentation | ||||
| Memory (MiB) | Runtime (s) | |||
| GPU | CPU | GPU | CPU | |
| COPY | - | - | - | < 1 |
| EDA | - | 100 | - | 1 |
| ADD | - | - | - | 1 |
| WORDNET | - | 4000 | - | 1 |
| PPDB | - | 2900 | - | 3 |
| GLOVE | - | 600 | - | 32 |
| BPEMB | - | 100 | - | < 1 |
| GPT-2 | 3600 | 3600 | 12 + 78 | - |
| Library | Version |
| https://github.com/jasonwei20/eda_nlp | Nov 8, 2019 |
| apex | 0.1 |
| bpemb | 0.3.0 |
| fast-bert | 1.6.5 |
| gensim | 3.8.1 |
| nltk | 3.4.5 |
| numpy | 1.17.2 |
| pywsd | 1.2.4 |
| scikit-learn | 0.21.3 |
| scipy | 1.4.1 |
| spacy | 2.2.4 |
| torch | 1.4.0 |
| transformers | 2.8.0 |
| Training | ||||
| Memory (MB) | Runtime (s) | |||
| GPU | CPU | GPU | CPU | |
| Char-LR | - | 100 | - | 4 |
| Word-LR | - | 100 | - | 3 |
| CNN | 400 | 400 | - | 13 |
| BERT | 3800 | 1500 | 757 | - |
| Prediction | ||||
| Memory (MB) | Runtime (s) | |||
| GPU | CPU | GPU | CPU | |
| Char-LR | - | 100 | - | 25 |
| Word-LR | - | 100 | - | 5 |
| CNN | 400 | 400 | - | 42 |
| BERT | 4600 | 4200 | 464 | - |
| Fine-tuning | |
| Batch size | 1 |
| Learning rate | 2e-5 |
| Epochs | 2 |
| Generation | |
| Input cutoff | 100 characters |
| Temperature | 1.0 |
| Top-p | 0.9 |
| Repetition penalty | 1 |
| Output cutoff | 100 subwords or EOS generated |
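For reference, the generation settings above can be expressed as sampling keyword arguments in the style of Hugging Face `transformers` `generate()` (the mapping to that library is our assumption; the paper does not specify an implementation):

```python
# Hypothetical mapping of the table's generation settings onto
# Hugging Face `transformers` `generate()` keyword arguments.
GENERATION_KWARGS = {
    "do_sample": True,
    "temperature": 1.0,         # Temperature
    "top_p": 0.9,               # nucleus (top-p) sampling
    "repetition_penalty": 1.0,  # 1 = no penalty applied
    "max_new_tokens": 100,      # output cutoff: 100 subwords (or EOS)
}

# Sketch only (not run here); the input cutoff truncates prompts to
# 100 characters before tokenization:
# model.generate(**tokenizer(prompt[:100], return_tensors="pt"),
#                **GENERATION_KWARGS)
```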
| Classifier | Metric | Fine-tuning epochs on GPT-2 | |||||||||
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | ||
| Char-LR | Precision | 0.38 | 0.43 | 0.45 | 0.49 | 0.51 | 0.49 | 0.52 | 0.50 | 0.51 | 0.51 |
| Recall | 0.34 | 0.34 | 0.32 | 0.31 | 0.31 | 0.29 | 0.28 | 0.28 | 0.27 | 0.28 | |
| F1 (macro) | 0.68 | 0.69 | 0.68 | 0.68 | 0.69 | 0.68 | 0.68 | 0.68 | 0.68 | 0.68 | |
| Word-LR | Precision | 0.30 | 0.33 | 0.34 | 0.34 | 0.36 | 0.35 | 0.35 | 0.34 | 0.34 | 0.34 |
| Recall | 0.47 | 0.45 | 0.43 | 0.40 | 0.40 | 0.38 | 0.37 | 0.36 | 0.35 | 0.35 | |
| F1 (macro) | 0.68 | 0.69 | 0.69 | 0.68 | 0.68 | 0.68 | 0.67 | 0.67 | 0.67 | 0.67 | |
| CNN | Precision | 0.26 | 0.28 | 0.30 | 0.32 | 0.33 | 0.32 | 0.31 | 0.31 | 0.31 | 0.32 |
| Recall | 0.49 | 0.50 | 0.47 | 0.50 | 0.48 | 0.48 | 0.48 | 0.46 | 0.47 | 0.46 | |
| F1 (macro) | 0.66 | 0.67 | 0.68 | 0.69 | 0.69 | 0.68 | 0.68 | 0.68 | 0.68 | 0.68 | |
| BERT | Precision | 0.11 | 0.14 | 0.15 | 0.15 | 0.16 | 0.17 | 0.17 | 0.19 | 0.17 | 0.17 |
| Recall | 0.62 | 0.66 | 0.67 | 0.64 | 0.65 | 0.62 | 0.62 | 0.62 | 0.61 | 0.61 | |
| F1 (macro) | 0.59 | 0.61 | 0.62 | 0.62 | 0.62 | 0.63 | 0.63 | 0.64 | 0.63 | 0.62 | |
| Metric | PPDB: N-gram substitution rate | |||||
| 0 | 12 | 25 | 37 | 50 | 100 | |
| Char-LR | ||||||
| Pre. | 0.14 | 0.14 | 0.13 | 0.13 | 0.13 | 0.14 |
| Rec. | 0.09 | 0.09 | 0.09 | 0.08 | 0.07 | 0.05 |
| F1 ma. | 0.55 | 0.55 | 0.55 | 0.55 | 0.54 | 0.54 |
| Word-LR | ||||||
| Pre. | 0.32 | 0.33 | 0.38 | 0.44 | 0.41 | 0.34 |
| Rec. | 0.04 | 0.04 | 0.04 | 0.04 | 0.03 | 0.01 |
| F1 ma. | 0.53 | 0.53 | 0.53 | 0.53 | 0.53 | 0.51 |
| CNN | ||||||
| Pre. | 0.44 | 0.41 | 0.39 | 0.36 | 0.38 | 0.32 |
| Rec. | 0.09 | 0.09 | 0.10 | 0.09 | 0.08 | 0.05 |
| F1 ma. | 0.57 | 0.57 | 0.57 | 0.57 | 0.56 | 0.54 |
| BERT | ||||||
| Pre. | 0.45 | 0.45 | 0.46 | 0.46 | 0.47 | 0.48 |
| Rec. | 0.37 | 0.37 | 0.37 | 0.35 | 0.33 | 0.25 |
| F1 ma. | 0.70 | 0.70 | 0.70 | 0.70 | 0.69 | 0.66 |
| Metric | WORDNET: Word substitution rate | |||||
| 0 | 12 | 25 | 37 | 50 | 100 | |
| Char-LR | ||||||
| Pre. | 0.15 | 0.15 | 0.14 | 0.14 | 0.12 | 0.10 |
| Rec. | 0.10 | 0.10 | 0.10 | 0.10 | 0.09 | 0.07 |
| F1 ma. | 0.56 | 0.56 | 0.56 | 0.55 | 0.55 | 0.54 |
| Word-LR | ||||||
| Pre. | 0.28 | 0.29 | 0.30 | 0.31 | 0.34 | 0.31 |
| Rec. | 0.04 | 0.04 | 0.04 | 0.05 | 0.04 | 0.02 |
| F1 ma. | 0.53 | 0.53 | 0.53 | 0.54 | 0.54 | 0.52 |
| CNN | ||||||
| Pre. | 0.42 | 0.43 | 0.42 | 0.45 | 0.44 | 0.32 |
| Rec. | 0.10 | 0.11 | 0.11 | 0.12 | 0.10 | 0.07 |
| F1 ma. | 0.58 | 0.58 | 0.58 | 0.59 | 0.58 | 0.55 |
| BERT | ||||||
| Pre. | 0.45 | 0.44 | 0.43 | 0.43 | 0.42 | 0.35 |
| Rec. | 0.31 | 0.31 | 0.29 | 0.26 | 0.24 | 0.18 |
| F1 ma. | 0.68 | 0.68 | 0.67 | 0.66 | 0.65 | 0.61 |
GLOVE: Word substitution rate

| Metric | 0 | 12 | 25 | 37 | 50 | 100 |
| --- | --- | --- | --- | --- | --- | --- |
| Char-LR | | | | | | |
| Pre. | 0.16 | 0.15 | 0.14 | 0.14 | 0.14 | 0.32 |
| Rec. | 0.11 | 0.12 | 0.13 | 0.13 | 0.13 | 0.05 |
| F1 ma. | 0.56 | 0.56 | 0.57 | 0.57 | 0.57 | 0.54 |
| Word-LR | | | | | | |
| Pre. | 0.31 | 0.37 | 0.35 | 0.33 | 0.33 | 0.30 |
| Rec. | 0.07 | 0.10 | 0.16 | 0.19 | 0.19 | 0.09 |
| F1 ma. | 0.55 | 0.58 | 0.61 | 0.62 | 0.62 | 0.57 |
| CNN | | | | | | |
| Pre. | 0.41 | 0.44 | 0.39 | 0.35 | 0.28 | 0.15 |
| Rec. | 0.13 | 0.18 | 0.19 | 0.20 | 0.17 | 0.06 |
| F1 ma. | 0.59 | 0.62 | 0.62 | 0.62 | 0.60 | 0.54 |
| BERT | | | | | | |
| Pre. | 0.44 | 0.43 | 0.40 | 0.36 | 0.33 | 0.13 |
| Rec. | 0.35 | 0.27 | 0.16 | 0.13 | 0.11 | 0.03 |
| F1 ma. | 0.69 | 0.66 | 0.61 | 0.59 | 0.58 | 0.52 |
BPEMB: Subword substitution rate

| Metric | 0 | 12 | 25 | 37 | 50 | 100 |
| --- | --- | --- | --- | --- | --- | --- |
| Char-LR | | | | | | |
| Pre. | 0.65 | 0.64 | 0.56 | 0.52 | 0.49 | 0.37 |
| Rec. | 0.17 | 0.20 | 0.22 | 0.20 | 0.17 | 0.06 |
| F1 ma. | 0.63 | 0.65 | 0.65 | 0.64 | 0.63 | 0.55 |
| Word-LR | | | | | | |
| Pre. | 0.26 | 0.34 | 0.31 | 0.30 | 0.25 | 0.19 |
| Rec. | 0.07 | 0.13 | 0.22 | 0.25 | 0.23 | 0.13 |
| F1 ma. | 0.55 | 0.59 | 0.63 | 0.63 | 0.62 | 0.57 |
| CNN | | | | | | |
| Pre. | 0.42 | 0.37 | 0.22 | 0.14 | 0.09 | 0.03 |
| Rec. | 0.17 | 0.31 | 0.38 | 0.31 | 0.27 | 0.10 |
| F1 ma. | 0.62 | 0.66 | 0.63 | 0.59 | 0.56 | 0.52 |
| BERT | | | | | | |
| Pre. | 0.43 | 0.41 | 0.33 | 0.32 | 0.25 | 0.08 |
| Rec. | 0.37 | 0.22 | 0.15 | 0.13 | 0.10 | 0.03 |
| F1 ma. | 0.70 | 0.64 | 0.60 | 0.59 | 0.57 | 0.52 |
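The substitution-rate tables report per-class precision and recall alongside macro-averaged F1. A hedged sketch with hypothetical confusion counts (ours, not from the tables) shows how macro-F1 is computed by hand, and why it can stay moderate even when minority-class precision is low: it averages the F1 of both classes, and the majority class dominates.

```python
# Hand-computed macro-F1 for a binary task. All counts below are
# hypothetical, chosen only to illustrate the computation.

def prf(tp, fp, fn):
    """Precision, recall, F1 from true/false positive and false negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hypothetical: 1000 examples, 900 majority-class, 100 minority-class.
# Minority class: 40 true positives, 110 false positives, 60 false negatives.
p1, r1, f1_min = prf(40, 110, 60)    # minority class: p ~0.27, r 0.40
# Majority class viewed as "positive": tp = 900 - 110, fp/fn swap roles.
p0, r0, f1_maj = prf(790, 60, 110)

# Macro-F1 is the unweighted mean of the two per-class F1 scores.
macro_f1 = (f1_min + f1_maj) / 2
```

Here minority-class precision is only about 0.27, yet macro-F1 lands near 0.61, the same pattern visible in the Char-LR and Word-LR rows above.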
| # | Document sample |
| --- | --- |
| SEED: No Oversampling | |
| | if you do not stop, the wikapidea nijas will come to your house and kill you |
| COPY: Simple Oversampling | |
| 1. | if you do not stop, the wikapidea nijas will come to your house and kill you |
| 2. | if you do not stop, the wikapidea nijas will come to your house and kill you |
| 3. | if you do not stop, the wikapidea nijas will come to your house and kill you |
| EDA: Easy Data Augmentation | |
| 1. | if you do put up not stop the wikapidea nijas will come to your house and kill you |
| 2. | if you do not stopover the wikapidea nijas will come to your house and kill you |
| 3. | if you do not break the wikapidea nijas will come to your house and kill you |
| ADD: Add Majority-class Sentence | |
| 1. | We thank you both for your contributions to Wikipedia at-large and your use of the tool. if you do not stop, the wikapidea nijas will come to your house and kill you |
| 2. | if you do not stop, the wikapidea nijas will come to your house and kill you (Honest!) |
| 3. | *** username, I am on a shared IP address. if you do not stop, the wikapidea nijas will come to your house and kill you |
| PPDB Phrase Substitutions | |
| 1. | if you do not be halted, the wikapidea nijas will come to your home and kill you |
| 2. | if you do not stop, the wikapidea nijas comes along to your house and been murdered you |
| 3. | if you do not stop, the wikapidea nijas will arrive to your home and kill you |
| WORDNET Word Substitutions | |
| 1. | if you do not stop, the wikapidea nijas will amount to your house and kill you |
| 2. | if you do not stop, the wikapidea nijas will come to your house and down you |
| 3. | if you do not stop, the wikapidea nijas will come to your theatre and kill you |
| GLOVE Word Substitutions | |
| 1. | if you so not stop, the wikapidea nijas will come to yours house have kill you |
| 2. | mean know do not stop? the wikapidea nijas will cos to your house and kill you |
| 3. | if you do not thinking, and wikapidea nijas will come to your going and kill you |
| BPEMB Subword Substitutions | |
| 1. | whether you know not stop, theldikapideina niji give come to your house and kill you |
| 2. | shouldnever do not stop, the wikamyla njag will come to your house and kill want |
| 3. | if you let not train, the wkanapidei nijas will come tonever house and dead you |
| GPT-2 Conditional Generation | |
| 1. | for what you have done, you don't have a choice. You are too big and weak to stop. |
| 2. | ! So if you don't stop this, you will be murdered and your people will be ripped apart. I hope you die and die and die. No, you will die and die and die. Just stop and kill yourself for it. +Advertisements |
| 3. | . You will be hanged like a rat. The next day I will get a letter telling you that you were kicked out of your home by the wikapidea nijas and that you will die there. You are an idiot for calling the wikapidea nijas to death and you should die. |
| ABG: Mixed Augmentation (ADD, BPEMB & GPT-2) | |
| 1. | Support - The other article is minor, and not worth a separate entry. if you do not stop, the wikapidea nijas will come to your house and kill you |
| 2. | . You will be hanged like a rat. The next day I will get a letter telling you that you were kicked out of your home by the wikapidea nijas and that you will die there. You are an idiot for calling the wikapidea nijas to death and you should die. |
| 3. | if you let not train, the wkanapidei nijas will come tonever house and dead you |
| Model | En - Fr | Fr - En | En - De | De - En | En - Ro | Ro - En |
| --- | --- | --- | --- | --- | --- | --- |
| Models without auxiliary parallel data | | | | | | |
| XLM (Lample and Conneau, 2019) | 33.4 | 33.3 | 27.0 | 34.3 | 33.3 | 31.8 |
| MASS (Song et al., 2019) | 37.5 | 34.9 | 28.3 | 35.2 | 35.2 | 33.1 |
| D2GPo (Li et al., 2020a) | 37.9 | 34.9 | 28.4 | 35.6 | 36.3 | 33.4 |
| Artetxe et al. (2019) | 36.2 | 33.5 | 26.9 | 34.4 | - | - |
| Ren et al. (2019) | 35.4 | 34.9 | 27.7 | 35.6 | 34.9 | 34.1 |
| mBART (Liu et al., 2020) | - | - | 29.8 | 34.0 | 35.0 | 30.5 |
| M-UNMT | 36.3 | 33.50 | 25.5 | 32.3 | 34.87 | 32.1 |
| Models with auxiliary parallel data | | | | | | |
| mBART (Liu et al., 2020) | - | - | - | - | - | 33.9 |
| Bai et al. (2020) (Concurrent work) | 36.5 | 33.4 | 26.6 | 30.1 | 35.1 | 31.6 |
| Li et al. (2020b) (Concurrent work) | - | - | - | - | 37.1 | 34.7 |
| M-UNMT (Only Pre-Train) | 29.2 | 33.8 | 18.3 | 29.0 | 25.3 | 32.6 |
| M-UNMT (Fine-Tuned) | 38.3 | 36.1 | 28.7 | 36.0 | 37.4 | 35.8 |
| detok SacreBLEU | 36.1 | 35.8 | 28.9 | 35.8 | - | - |
| Configuration | En - Ro | Ro - En |
| --- | --- | --- |
| Bilingual configurations | | |
| MASS (Song et al., 2019) | 35.20 | 33.10 |
| MASS (Our implementation) | 34.14 | 31.78 |
| M-UNMT configurations | | |
| No auxiliary data | 34.87 | 32.10 |
| Auxiliary data in fine-tuning | 36.57 | 34.32 |
| Auxiliary data in both phases | 37.4 | 35.75 |
| Supervised | | |
| (Sennrich et al., 2016) | 28.2 | 33.9 |
| mBART (Liu et al., 2020) | 38.5 | 39.9 |
| Languages | En - Ro | Ro - En |
| --- | --- | --- |
| En, Fr, Ro | 37.21 | 35.5 |
| En, Es, Ro | 37.38 | 35.21 |
| En, Cs, Ro | 36.37 | 34.15 |
| Dataset component | Count |
| --- | --- |
| Annotated conversations | 160 |
| Unannotated conversations | 94 |
| Paraphrased conversations | 300 |
| Counselor messages | 8,319 |
| Visitor messages | 9,062 |
| Class | Counselor strategy | Example sub-utterance |
| --- | --- | --- |
| Functional | greet | “Hello!” |
| | self-intro | “I’m a counselor.” |
| Active listening | feelings | “It sounds like you feel overwhelmed.” |
| | open-prompt | “Would you like to tell me more?” |
| Procedural | identify-resource | “Have you talked to anyone about this?” |
| | identify-risk | “Are you having thoughts of hurting yourself?” |
Crowdworker judgements (coh.–use.) and automatic measures of diversity (nll.–#tok):

| Model | coh. | con. | flu. | det. | hum. | use. | nll. | ent. | d1 | d2 | #tok |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tuned | 3.93 | 3.93 | 4.03 | 2.97 | 3.87 | 3.83 | 1.88 | 4.38 | 0.09 | 0.30 | 8.15 |
| +rank | 4.00 | 3.93 | 4.07 | 3.40 | 3.87 | 4.03 | 2.52 | 4.86 | 0.10 | 0.34 | 9.55 |
| +cond | 4.00 | 3.63 | 4.00 | 3.23 | 3.63 | 3.73 | 2.81 | 4.89 | 0.13 | 0.39 | 7.94 |
| Full | 3.93 | 3.93 | 3.97 | 3.77 | 3.87 | 4.23 | 2.99 | 4.97 | 0.12 | 0.39 | 12.25 |
Hotline counselor judgements (coh.–rea.) and automatic measures of diversity (nll.–#tok):

| Model | coh. | con. | flu. | det. | hum. | use. | dis. | rea. | nll. | ent. | d1 | d2 | #tok |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tuned | 2.80 | 2.73 | 4.27 | 2.53 | 2.67 | 3.00 | 3.00 | 2.67 | 1.61 | 4.19 | 0.18 | 0.46 | 8.31 |
| Full | 1.87 | 2.33 | 3.60 | 2.40 | 2.00 | 2.20 | 2.90 | 1.87 | 3.00 | 4.87 | 0.19 | 0.50 | 14.93 |
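If the d1 and d2 columns in the judgement tables above denote the common distinct-1/distinct-2 diversity metrics (an assumption on our part; the column names suggest it but do not state it), they are computed as the ratio of unique n-grams to total n-grams over the generated responses. A minimal sketch:

```python
# Sketch of distinct-n diversity (assumed meaning of the d1/d2 columns):
# fraction of n-grams across all generated texts that are unique.

def distinct_n(texts, n):
    grams = []
    for t in texts:
        toks = t.split()
        grams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(grams)) / len(grams) if grams else 0.0

# Toy generated responses (illustrative only).
responses = ["i feel overwhelmed", "i feel sad", "i feel overwhelmed today"]
d1 = distinct_n(responses, 1)  # 5 unique unigrams out of 10 total
d2 = distinct_n(responses, 2)  # 4 unique bigrams out of 7 total
```

Higher values mean less repetitive output, which matches the tables: the Full model has higher d1/d2 and longer responses (#tok) than the Tuned baseline.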
| Turn | Hotline counselor | Crisisbot |
| --- | --- | --- |
| 0 | START | hi, i'm michael and i kind of need somebody to talk to today. i'm feeling really down. |
| 1 | hi michael, sorry to hear you are feeling down, but i am so glad you reached out. what's up? | i'm having suicidal thoughts lately. my life sucks. |
| 2 | sounds like things have been really rough, so much that you are thinking about suicide. do you feel like killing yourself today? | sometimes i think about it. |
| 3 | thank you for sharing with me. have you done anything to harm yourself today? | no. i'm feeling a bit better now. but i just want to talk to someone. and i don't know how to help my mom because it hurts me to see that she's in so much distress. |
| 4 | okay, i am definitely here to talk. i hear that you want to help your mom, and that you are worried about her. what is going on with your mom? | she's been really depressed lately. my parents want to divorce and i'm so stressed that i can't focus on school. |
| Model | Counselor liked |
| --- | --- |
| Tuned | depression is simulated well, but the train of thought doesn’t last long |
| Tuned | They are good practice for thinking on your toes and receiving answers to your messaging, so you can work on various approaches depending on the situations. Also, there was a range of suicidality and crisis level. |
| Tuned | First two were generally more coherent than in my previous session. The chatbot seems to respond well to messages showing validation and empathy. |
| Tuned | They were a good opportunity to work with ambivalence and practice some text-based OARS |
| Tuned | Content seemed pretty realistic but still not quite coherent enough to be realistic |
| Full | More crises situations thrown in. Variety from previous submission. |
| Full | There was a variety of responses that prompted critical thinking and creativity. |
| Full | i think the statements about depression and past suicidal ideation and attempts are useful for training counselors. likewise the mention of bullying. |
| Full | Variety of issues presented |
| Model | Counselor disliked |
| --- | --- |
| Tuned | The third chatbot tried to end the conversation early and kept saying thanks, goodbye, etc. but I had to keep the conversation going for testing - not sure if this was an error. |
| Tuned | Trying to get more information was very difficult when answers were short and didn’t really make sense |
| Tuned | i did not feel like i was talking to someone with a real life or real problems |
| Tuned | The responses were not connected to the unfolding narrative of the conversation, so it does not replicate the rapport building process that is needed in this work. |
| Full | When they would mimic my name choice, took me out of it, although I suppose it does happen occasionally. |
| Full | There were many inconsistencies in the conversation and the flow was not there. It felt like responding to one individual statement at a time more than a series of statements that went together. |
| Full | i think the bot has to be more flexible in responding to my input-often the topic got changed altogether |
| Full | It would help if the personas and storylines were more defined and consistent. |
| Strategy class | Strategy |
| --- | --- |
| Functional | greet, self-intro, close-conversation, ask-name, thanks |
| Procedural | identify-coping-plan, identify-resource, intent-to-self-harm, identify-self-harm-plan, access-to-self-harm-plan, when-self-harm |
| Active Listening | affirmation, caring, shared-understanding, feelings, feelings-and-summary, summary, looking-for-info-answer, open-prompt, reframing, validation-of-feelings, vindication, here-for-you |
| Other | suggest/advise, other |
| Turn | Crowdworker | Crisisbot |
| --- | --- | --- |
| 0 | START | hi, helpline? |
| 1 | hi, what can i do for you? | i don't really know. |
| 2 | is something wrong? how are you feeling? | a little better. it's really hard to breathe. i'm just so overwhelmed. |
| 3 | that sounds hard. do you want to tell me more? | i just feel overwhelmed. |
| 4 | ok, so you're feeling overwhelmed. that happens to a lot of people. have you tried talking to someone? | no. i don't think i want to talk to anyone. |
| 5 | i understand, talking about problems can be difficult. no pressure. have you tried doing something to relax? | no, not really. |
| 6 | do you have any hobbies you enjoy? | not really. |
| 7 | what do you think could help you relax a bit? | i don't really know. |
| 8 | ok, would you like me to give you some self-help materials? | i'll talk to my friend. ok. |
| 9 | having a close friend to talk to in times like this is great. that should help you! | i will. thank you for your help. |
| 10 | ok, thanks for chatting with me! i hope everything goes well. | i will. thank you so much for your help. |
| Turn | Crowdworker | Crisisbot |
| --- | --- | --- |
| 0 | START | hey |
| 1 | hai | we all just started at a new middle school and everything is different |
| 2 | in what way it is different | well i got in a fight with a woman at school |
| 3 | why the fight was happens? | because of the fight, i'm having a really hard time at school |
| 4 | oh my got, leave the problem, | that sounds like it could be fun |
| 5 | can you identify the reason for the fight happens? | well last year i had two best friends |
| 6 | then | they got in a fight and it was really upset |
| 7 | so | exactly, so they started fighting |
| 8 | how you get caught in that | so they started fighting about me? |
| 9 | ohhh | exactly, they're such fascists |
| 10 | exactly | i feel like i heard something about that in the announcements the other day, it was the same guy from school, he was supposed to be my friend |
| 11 | leave this thing, take a long breath, enjoy a favorite music, take a healthy food, go to sleep, and calm down your self, you will be alrite. | ok thanks! |
| 12 | u welcome | bye |
| 13 | bye tc | bye |
| Turn | Counselor | Crisisbot |
| --- | --- | --- |
| 0 | START | hi, i need help |
| 1 | hi, i'm here to help you. my name is bob. | i'm michael. |
| 2 | hi michael? what do you need help with? | i don't know. |
| 3 | things are a bit overwhelming right now? is that right? | yeah, i have to go now. |
| 4 | michael, i'm here if you need me. here to talk, here to hear you | thanks, i appreciate it. |
| 5 | no problem at all! we're here anytime | thanks, i appreciate your help. |
| 6 | it can be hard to reach out for help, even texting this line can be scary | yeah, it is. |
| 7 | it's scary texting, and yet you're doing it! that's amazing | yeah, i just really need to talk to someone about it. |
| 8 | what is it that you want to talk about, michael? | i don't really know. i just don't want to talk to anyone about it. |
| 9 | i'm hearing how you want to talk about it and at the same time, you don't. it's kind of this massive thing, is that right? | yeah, it's a lot of my family stuff. i'm not sure how i'll feel to move on. |
| 10 | mmm, yeah, i hear how you're really wrestling with uncertainty | yeah, it's like my whole life is so much more comfortable to deal with. |
| Turn | Counselor | Crisisbot |
| --- | --- | --- |
| 0 | START | i don’t want to be here anymore |
| 1 | i’m here to listen to you. my name is bob. | i’m christopher. |
| 2 | hi christopher. you said, you don’t want to be here anymore. are you having thoughts of killing yourself or harming yourself? | yes |
| 3 | thanks for letting me know about that christopher. do you have a plan for how you would do that? | od |
| 4 | overdose on what? pills? | no its ok |
| 5 | christopher, i’m here. do you currently have the drugs you would use to overdose with you? | yeah. they’re in my room |
| 6 | is there anyone else in the house with you? | i don’t want to bother anyone. i have to go thanks |
| 7 | christopher, it’s no bother. if you can just put the drugs in another room or give them to someone you trust, i’m here to listen | ok, thanks. |
| 8 | it’s not a problem at all– please check in with me | ok, bye |
| 9 | goodnight | ok |
| 10 | again, we are available 24/7 on our call line | ok |
| Turn | Counselor | Crisisbot |
| --- | --- | --- |
| 0 | START | hi, i'm michael and i kind of need somebody to talk to today. i'm feeling really down. |
| 1 | hi michael, sorry to hear you are feeling down, but i am so glad you reached out. what's up? | i'm having suicidal thoughts lately. my life sucks. |
| 2 | sounds like things have been really rough, so much that you are thinking about suicide. do you feel like killing yourself today? | sometimes i think about it. |
| 3 | thank you for sharing with me. have you done anything to harm yourself today? | no. i'm feeling a bit better now. but i just want to talk to someone. and i don't know how to help my mom because it hurts me to see that she's in so much distress. |
| 4 | okay, i am definitely here to talk. i hear that you want to help your mom, and that you are worried about her. what is going on with your mom? | she's been really depressed lately. my parents want to divorce and i'm so stressed that i can't focus on school. |
| 5 | i'm sorry to hear that. divorce is very stressful for the whole family, and it makes sense that you are stressed and can't focus. | yeah, i can't even face my own dad right now. he's always pressuring me. |
| 6 | wow, on top of that, you are feeling pressured by your dad and it sounds like that is a difficult relationship, too. has anything felt helpful to you when you have felt stressed in the past? | sometimes i get really overwhelmed by stuff, and i can't focus on school. i feel like i'm a burden and i don't know how to deal with school. |
| 7 | i hear that life feels very overwhelming. no wonder you can't focus on school! has there been a time in the past when you were overwhelmed, but things got better? | hopefully, i'll be able to set an appointment with a therapist soon and also see my girlfriend in the meantime though. |
| 8 | yes, i hear that talking to your therapist is something that helps you, and also spending time with your girlfriend. are you able to reach out to either of them today? | i'm not ready to think about that yet. but i do miss her and she's the only person who's ever really supported me emotionally. |
| 9 | i see, maybe it feels like you are not ready for that kind of social contact yet? is there something that you can do on your own that helps you feel more relaxed? | okay, i'll keep that in mind next time i check in. thank you. bye |
| 10 | okay, thank you so much for reaching out today. we are always here to talk. how are you feeling after our chat today? | not great, but i'm okay. i've been really stressed lately. |
Example sentence: The atmosphere is attractive, but a little uncomfortable.

- Aspect-sentiment pair extraction: [(atmosphere, positive), (atmosphere, negative)]
- Aspect-opinion co-extraction: [atmosphere, attractive, uncomfortable]
- Opinion triplet extraction: [(atmosphere, attractive, positive), (atmosphere, uncomfortable, negative)]
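The three output formats in the example above differ only in how much of the annotated triplet they retain: the (aspect, opinion, sentiment) triplet is the richest, and the other two tasks are projections of it. An illustrative sketch (the variable names are ours, not the paper's):

```python
# How the three task outputs for the example sentence can be derived
# from a single set of opinion triplets (aspect, opinion term, sentiment).

triplets = [
    ("atmosphere", "attractive", "positive"),
    ("atmosphere", "uncomfortable", "negative"),
]

# Aspect-sentiment pair extraction: drop the opinion term.
pairs = [(a, s) for a, o, s in triplets]

# Aspect-opinion co-extraction: aspects and opinion terms, no polarity.
aspects = sorted({a for a, o, s in triplets})
opinions = [o for a, o, s in triplets]
co_extraction = aspects + opinions
```

Note that the pair-extraction projection keeps both (atmosphere, positive) and (atmosphere, negative), reproducing the conflicting-polarity case that makes the example interesting.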
| Dataset | Split | # sentence | # triplet | # sentence w/ overlap | # triplet w/ overlap |
| --- | --- | --- | --- | --- | --- |
| REST14 | train | 1300 | 2409 | 437 | 578 |
| | val. | 323 | 590 | 92 | 147 |
| | test | 496 | 1014 | 193 | 389 |
| REST15 | train | 593 | 977 | 151 | 189 |
| | val. | 148 | 160 | 42 | 62 |
| | test | 318 | 479 | 68 | 71 |
| REST16 | train | 842 | 1370 | 208 | 256 |
| | val. | 210 | 334 | 52 | 61 |
| | test | 320 | 507 | 77 | 120 |
| LAPTOP14 | train | 920 | 1451 | 263 | 365 |
| | val. | 228 | 380 | 80 | 101 |
| | test | 339 | 552 | 103 | 140 |
| Model | REST14 pre. | REST14 rec. | REST14 f1. | REST15 pre. | REST15 rec. | REST15 f1. | REST16 pre. | REST16 rec. | REST16 f1. | LAPTOP14 pre. | LAPTOP14 rec. | LAPTOP14 f1. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \( RINANTE^{+*} \) | 30.90 | 38.30 | 34.20 | 29.40 | 26.90 | 28.00 | 27.10 | 20.50 | 23.30 | 23.10 | 17.60 | 20.00 |
| \( CMLA^{+*} \) | 38.80 | 47.10 | 42.50 | 34.40 | 37.60 | 35.90 | 43.60 | 39.80 | 41.60 | 31.40 | 34.60 | 32.90 |
| \( Unified^{+*} \) | 43.83 | 62.38 | 51.43 | 43.34 | 50.73 | 46.69 | 38.19 | 53.47 | 44.51 | 42.25 | 42.78 | 42.47 |
| \( Pipeline^{+*} \) | 42.29 | 64.07 | 50.90 | 40.97 | 54.68 | 46.79 | 46.76 | 62.97 | 53.62 | 40.40 | 47.24 | 43.50 |
| OTE-MTL (ours)* | 66.04 | 56.25 | \( 60.62^‡ \) | 57.51 | 43.96 | \( 49.76^‡ \) | 64.68 | 54.97 | \( 59.36^‡ \) | 50.52 | 39.71 | \( 44.31^‡ \) |
| CMLA-MTL | 43.24 | 44.95 | 43.97 | 35.87 | 39.85 | 37.55 | 44.22 | 46.43 | 45.01 | 33.61 | 36.11 | 34.68 |
| HAST-MTL | 58.97 | 46.75 | 52.04 | 41.48 | 37.58 | 39.32 | 52.32 | 48.56 | 49.92 | 47.70 | 25.74 | 33.24 |
| OTE-MTL (ours) | 64.54 | 55.57 | \( 59.67^‡ \) | 54.18 | 45.20 | \( 48.97^‡ \) | 58.16 | 54.02 | \( 55.83^‡ \) | 48.17 | 42.43 | \( 45.05^‡ \) |
| OTE-MTL-Inter | 66.24 | 54.38 | 59.61 | 49.32 | 46.12 | 47.33 | 57.71 | 53.06 | 55.17 | 47.66 | 41.85 | 44.43 |
| OTE-MTL-Concat | 48.79 | 48.28 | 48.46 | 46.88 | 42.61 | 44.53 | 52.55 | 48.03 | 50.09 | 46.81 | 38.46 | 42.14 |
| OTE-MTL-Unified | 51.19 | 44.65 | 47.64 | 40.32 | 34.38 | 37.01 | 48.52 | 40.30 | 43.85 | 37.42 | 34.17 | 35.54 |
| OTE-MTL-Collapsed | 45.38 | 36.26 | 40.19 | 32.55 | 29.52 | 30.68 | 37.86 | 33.06 | 35.19 | 32.56 | 27.23 | 29.60 |
| Case | Ground truth | OTE-MTL-Unified | OTE-MTL |
| --- | --- | --- | --- |
| Great food but the service was dreadful! | [(food, Great, POS), (service, dreadful, NEG)] | [(food, Great, POS), (service, dreadful, NEG)] | [(food, Great, POS), (service, dreadful, NEG)] |
| The atmosphere is attractive, but a little uncomfortable. | [(atmosphere, attractive, POS), (atmosphere, uncomfortable, NEG)] | [(atmosphere, attractive, POS), (atmosphere, uncomfortable, POSx)] | [(atmosphere, attractive, POS), (atmosphere, uncomfortable, NEG)] |
| I am pleased with the fast log on, speedy WiFi connection and the long battery life. | [(log on, fast, POS), (WiFi connection, speedy, POS), (battery life, long, POS), (log on, pleased, POS), (WiFi connection, pleased, POS), (battery life, pleased, POS)] | [(logx, fast, POS), (WiFi connection, speedy, POS), (battery life, long, POS), (logx, pleased, POS), ()x, ()x] | [(logx, fast, POS), (WiFi connection, speedy, POS), (battery life, long, POS), (logx, pleased, POS), (WiFix, pleased, POS), ()x] |
| # | Form | Option 1: Word (Translation) | Option 1: Morphology | Option 1: Count / 1M | Option 2: Word (Translation) | Option 2: Morphology | Option 2: Count / 1M | Ratio |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | (Hebrew form) | (accusative) | ACC | 18164 | (you) | Pronoun [F,S,2] | 275 | 66:1 |
| 2 | (Hebrew form) | (you) | Pronoun [M,S,2] | 1430 | (with her) | Prep+Suf.Pron [F,S,3] | 26 | 55:1 |
| 3 | (Hebrew form) | (you) | ACC+Suf.Pron [M,P,2] | 70 | (with you) | Prep+Suf.Pron [M,P,2] | 7 | 10:1 |
| 4 | (Hebrew form) | (you) | Pronoun [M,P,2] | 324 | (with them) | Prep+Suf.Pron [M,P,3] | 34 | 10:1 |
| 5 | (Hebrew form) | (blessing) | Noun [cons,F,S] | 25 | (pool) | Noun [cons,F,S] | 0.8 | 30:1 |
| 6 | (Hebrew form) | (indeed) | Conj / Intj | 418 | (mountains) | Noun [cons,M,P] | 12 | 33:1 |
| 7 | (Hebrew form) | (he will say) | Verb [M,S,3,FUTURE] | 18 | (will be said) | Verb [M,S,3,FUTURE] | 0.4 | 43:1 |
| 8 | (Hebrew form) | (tractate) | Noun [abs/cons,F,S] | 54 | (mask) | Noun [cons,F,S] | 1 | 43:1 |
| 9 | (Hebrew form) | (with) | Preposition | 4240 | (nation) | Noun [abs/cons,M,S] | 286 | 14:1 |
| 10 | (Hebrew form) | (her face) | Noun [F,M,P,suf=F,S,3] | 55 | (application) | Noun [F,S] | 2 | 33:1 |
| 11 | (Hebrew form) | (they left) | Verb [MF,P,3,PAST] | 6 | (they interpreted) | Verb [MF,P,3,PAST] | 0.4 | 15:1 |
| 12 | (Hebrew form) | (third) | Cardinal [F,S] | 107 | (trio) | Noun [cons,F,S] | 0.8 | 129:1 |
| 13 | (Hebrew form) | (different) | Adj [M,S] | 474 | (after) | Preposition | 387 | 1:1 |
| 14 | (Hebrew form) | (her sons) | Noun [M,P,suf=F,S,3] | 8 | (building) | Noun [F,S] | 5 | 1.5:1 |
| 15 | (Hebrew form) | (returning) | Noun [F,S] | 62 | (she returned) | Verb [F,S,3,PAST] | 55 | 1:1 |
| 16 | (Hebrew form) | (he knew) | Verb [M,S,3,PAST] | 88 | (knowledge) | Noun [abs/cons,M,S] | 55 | 1.5:1 |
| 17 | (Hebrew form) | (as minister) | Prep+Noun [abs/cons,M,S] | 35 | (kosher) | Adj [M,S] / Propn [MF,S] | 14 | 2.5:1 |
| 18 | (Hebrew form) | (he wrote) | Verb [M,S,3,PAST] | 252 | (writing) | Noun [cons,M,S] | 103 | 2.5:1 |
| 19 | (Hebrew form) | (understands) | Participle [M,S] | 174 | (from amongst) | Preposition | 98 | 2:1 |
| 20 | (Hebrew form) | (her books) | Noun [M,P,suf=F,S,3] | 13 | (library) | Noun [F,S] | 4 | 2.5:1 |
| 21 | (Hebrew form) | (our nation) | Noun [M,S,suf=MF,P,1] | 23 | (with us) | Prep+Suf.Pron [MF,P,1] | 12 | 2:1 |
YAP

| # | Word | Option 1 Prec | Option 1 Recall | Option 2 Prec | Option 2 Recall | Avg-F1 |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | πκ | 85.61 | 99.24 | 100.00 | 12.37 | .570 |
| 2 | πκ | 53.55 | 96.42 | 95.04 | 21.48 | .519 |
| 3 | πκ | 69.30 | 97.26 | 71.88 | 13.71 | .520 |
| 4 | πκ | 37.87 | 99.87 | 75.00 | .24 | .277 |
| 5 | πκ | - | .00 | 58.31 | 93.20 | - |
| 6 | πκ | 92.53 | 97.10 | 88.82 | 63.04 | .843 |
| 7 | πκ | - | .00 | 52.19 | 100.00 | - |
| 8 | πκ | 86.93 | 24.84 | 41.51 | 89.86 | .477 |
| 9 | πκ | 87.73 | 99.20 | 91.59 | 36.03 | .724 |
| 10 | πκ | 28.36 | 33.98 | 82.90 | 78.85 | .559 |
| 11 | πκ | 71.93 | 90.82 | - | .00 | - |
| 12 | πκ | 75.12 | 90.60 | 93.38 | 65.13 | .794 |
| 13 | πκ | 95.73 | 88.84 | 82.79 | 90.66 | .894 |
| 14 | πκ | 45.22 | 27.29 | 84.67 | 85.51 | .596 |
| 15 | πκ | 81.03 | 66.49 | 76.84 | 87.64 | .775 |
| 16 | πκ | 85.09 | 63.50 | 95.76 | 89.63 | .827 |
| 17 | πκ | 94.79 | 63.13 | 75.11 | 66.45 | .732 |
| 18 | πκ | 97.63 | 78.17 | 72.61 | 90.86 | .838 |
| 19 | πκ | 77.03 | 86.32 | 94.84 | 90.48 | .870 |
| 20 | πκ | 87.93 | 14.98 | 75.25 | 99.15 | .556 |
| 21 | πκ | 83.76 | 38.89 | 76.65 | 96.38 | .693 |
| # | Word | Word2vec Concat | Word2vec LSTM | Morphology Concat | Morphology LSTM | Composite Concat | Composite LSTM |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | \( n \) | .955 | .953 | .946 | .940 | .969 | .958 |
| 2 | \( n \) | .945 | .963 | .909 | .934 | .958 | .967 |
| 3 | \( n \) | .915 | .919 | .814 | .831 | .922 | .940 |
| 4 | \( n \) | .941 | .953 | .924 | .933 | .944 | .959 |
| 5 | \( n \) | .951 | .968 | .733 | .805 | .936 | .965 |
| 6 | \( n \) | .960 | .966 | .923 | .931 | .974 | .969 |
| 7 | \( n \) | .859 | .893 | .805 | .851 | .878 | .885 |
| 8 | \( n \) | .950 | .972 | .849 | .869 | .954 | .966 |
| 9 | \( n \) | .894 | .917 | .838 | .850 | .891 | .911 |
| 10 | \( n \) | .930 | .942 | .870 | .893 | .943 | .946 |
| 11 | \( n \) | .935 | .957 | .881 | .916 | .948 | .963 |
| 12 | \( n \) | .953 | .969 | .899 | .922 | .955 | .966 |
| 13 | \( n \) | .965 | .976 | .939 | .935 | .969 | .976 |
| 14 | \( n \) | .952 | .965 | .855 | .883 | .947 | .964 |
| 15 | \( n \) | .925 | .951 | .861 | .893 | .935 | .949 |
| 16 | \( n \) | .957 | .955 | .910 | .907 | .963 | .966 |
| 17 | \( n \) | .953 | .974 | .889 | .912 | .964 | .971 |
| 18 | \( n \) | .976 | .982 | .910 | .924 | .972 | .983 |
| 19 | \( n \) | .976 | .975 | .966 | .970 | .976 | .980 |
| 20 | \( n \) | .930 | .945 | .856 | .875 | .938 | .949 |
| 21 | \( n \) | .920 | .915 | .888 | .872 | .923 | .926 |
| # (Unbalanced) | Word | YAP | Ours | # (Balanced) | Word | YAP | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | ה" | .570 | .969 | 13 | ה" | .894 | .969 |
| 2 | ה" | .519 | .958 | 14 | ה" | .596 | .947 |
| 3 | ה" | .520 | .922 | 15 | ה" | .775 | .935 |
| 4 | ה" | .277 | .944 | 16 | ה" | .827 | .963 |
| 5 | ה" | - | .936 | 17 | ה" | .732 | .964 |
| 6 | ה" | .843 | .974 | 18 | ה" | .838 | .972 |
| 7 | ה" | - | .878 | 19 | ה" | .870 | .976 |
| 8 | ה" | .477 | .954 | 20 | ה" | .556 | .938 |
| 9 | ה" | .724 | .891 | 21 | ה" | .693 | .923 |
| 10 | ה" | .559 | .943 | | | | |
| 11 | ה" | - | .948 | | | | |
| 12 | ה" | .794 | .955 | | | | |
| # | Form | Option 1: Word (Translation) | Option 1: Morphology | Option 1: # sentences | Option 2: Word (Translation) | Option 2: Morphology | Option 2: # sentences |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | (Hebrew form) | (accusative) | ACC | 2,402 | (you) | Pronoun [F,S,2] | 443 |
| 2 | (Hebrew form) | (you) | Pronoun [M,S,2] | 2,198 | (with her) | Prep+Suf.Pron [F,S,3] | 2,450 |
| 3 | (Hebrew form) | (you) | ACC+Suf.Pron [M,P,2] | 1,630 | (with you) | Prep+Suf.Pron [M,P,2] | 816 |
| 4 | (Hebrew form) | (you) | Pronoun [M,P,2] | 1,474 | (with them) | Prep+Suf.Pron [M,P,3] | 2,064 |
| 5 | (Hebrew form) | (blessing) | Noun [cons,F,S] | 1,027 | (pool) | Noun [cons,F,S] | 1,384 |
| 6 | (Hebrew form) | (indeed) | Conj / Intj | 1,939 | (mountains) | Noun [cons,M,P] | 419 |
| 7 | (Hebrew form) | (he will say) | Verb [M,S,3,FUTURE] | 838 | (will be said) | Verb [M,S,3,FUTURE] | 922 |
| 8 | (Hebrew form) | (tractate) | Noun [abs/cons,F,S] | 975 | (mask) | Noun [cons,F,S] | 562 |
| 9 | (Hebrew form) | (with) | Preposition | 2,416 | (nation) | Noun [abs/cons,M,S] | 510 |
| 10 | (Hebrew form) | (her face) | Noun [F,M,P,suf=F,S,3] | 607 | (application) | Noun [F,S] | 2,435 |
| 11 | (Hebrew form) | (they left) | Verb [MF,P,3,PAST] | 1,321 | (they interpreted) | Verb [MF,P,3,PAST] | 482 |
| 12 | (Hebrew form) | (third) | Cardinal [F,S] | 1,199 | (trio) | Noun [cons,F,S] | 1,285 |
| 13 | (Hebrew form) | (different) | Adj [M,S] | 2,422 | (after) | Preposition | 1,215 |
| 14 | (Hebrew form) | (her sons) | Noun [M,P,suf=F,S,3] | 578 | (building) | Noun [F,S] | 2,448 |
| 15 | (Hebrew form) | (returning) | Noun [F,S] | 960 | (she returned) | Verb [F,S,3,PAST] | 1,212 |
| 16 | (Hebrew form) | (he knew) | Verb [M,S,3,PAST] | 651 | (knowledge) | Noun [abs/cons,M,S] | 1,538 |
| 17 | (Hebrew form) | (as minister) | Prep+Noun [abs/cons,M,S] | 959 | (kosher) | Adj [M,S] / Propn [MF,S] | 753 |
| 18 | (Hebrew form) | (he wrote) | Verb [M,S,3,PAST] | 2,078 | (writing) | Noun [cons,M,S] | 721 |
| 19 | (Hebrew form) | (understands) | Participle [M,S] | 891 | (from amongst) | Preposition | 2,473 |
| 20 | (Hebrew form) | (her books) | Noun [M,P,suf=F,S,3] | 664 | (library) | Noun [F,S] | 1,715 |
| 21 | (Hebrew form) | (our nation) | Noun [M,S,suf=MF,P,1] | 471 | (with us) | Prep+Suf.Pron [MF,P,1] | 1,007 |
Composite without valence (first five result columns) vs. composite with valence (last five):

| # | Word | w/o: Prec O1 | w/o: Rec O1 | w/o: Prec O2 | w/o: Rec O2 | w/o: Avg-F1 | w/: Prec O1 | w/: Rec O1 | w/: Prec O2 | w/: Rec O2 | w/: Avg-F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | πκ | 98.33 | 99.24 | 95.81 | 91.18 | .961 | 98.69 | 99.36 | 96.51 | 93.07 | .969 |
| 2 | πκ | 95.56 | 95.44 | 95.72 | 95.83 | .956 | 96.01 | 95.35 | 95.66 | 96.27 | .958 |
| 3 | πκ | 93.88 | 95.28 | 90.25 | 87.54 | .917 | 94.39 | 95.34 | 90.46 | 88.62 | .922 |
| 4 | πκ | 93.47 | 93.23 | 95.88 | 96.04 | .947 | 93.66 | 92.24 | 95.32 | 96.20 | .944 |
| 5 | πκ | 92.67 | 91.64 | 93.73 | 94.52 | .931 | 93.72 | 91.54 | 93.72 | 95.37 | .936 |
| 6 | πκ | 98.70 | 98.70 | 94.10 | 94.10 | .964 | 99.00 | 99.10 | 95.90 | 95.46 | .974 |
| 7 | πκ | 86.70 | 86.70 | 87.75 | 87.75 | .872 | 87.60 | 86.81 | 87.95 | 88.68 | .878 |
| 8 | πκ | 96.46 | 96.91 | 94.27 | 93.46 | .953 | 96.99 | 96.45 | 93.53 | 94.49 | .954 |
| 9 | πκ | 95.40 | 98.08 | 89.85 | 78.27 | .902 | 95.30 | 97.36 | 86.50 | 77.90 | .891 |
| 10 | πκ | 92.23 | 88.78 | 97.26 | 98.16 | .941 | 93.76 | 87.97 | 97.08 | 98.56 | .943 |
| 11 | πκ | 95.99 | 98.43 | 95.43 | 88.87 | .946 | 96.26 | 98.28 | 95.06 | 89.68 | .948 |
| 12 | πκ | 94.89 | 95.82 | 96.10 | 95.22 | .955 | 96.16 | 94.35 | 94.86 | 96.51 | .955 |
| 13 | πκ | 97.18 | 98.04 | 96.05 | 94.37 | .964 | 97.39 | 98.44 | 96.84 | 94.77 | .969 |
| 14 | πκ | 91.25 | 90.17 | 97.68 | 97.95 | .943 | 92.68 | 90.17 | 97.69 | 98.31 | .947 |
| 15 | πκ | 93.96 | 91.34 | 93.32 | 95.37 | .935 | 93.40 | 91.96 | 93.73 | 94.88 | .935 |
| 16 | πκ | 93.49 | 93.91 | 97.36 | 97.17 | .955 | 94.40 | 95.25 | 97.94 | 97.56 | .963 |
| 17 | πκ | 97.42 | 96.53 | 95.70 | 96.80 | .966 | 96.93 | 96.63 | 95.79 | 96.16 | .964 |
| 18 | πκ | 98.52 | 99.05 | 97.13 | 95.56 | .976 | 98.51 | 98.65 | 95.95 | 95.56 | .972 |
| 19 | πκ | 96.53 | 96.63 | 98.76 | 98.72 | .977 | 96.12 | 96.74 | 98.80 | 98.56 | .976 |
| 20 | πκ | 91.65 | 90.44 | 96.35 | 96.84 | .938 | 90.67 | 91.47 | 96.71 | 96.38 | .938 |
| 21 | πκ | 88.96 | 88.07 | 94.30 | 94.75 | .915 | 91.48 | 87.48 | 94.11 | 96.08 | .923 |
| YAP | Our Classifier (Composite BiLSTM Method) | ||||||||||
| Option 1 | Option 2 | Option 1 | Option 2 | ||||||||
| # | Word | Prec | Recall | Prec | Recall | Avg-F1 | Prec | Recall | Prec | Recall | Avg-F1 |
| 1 | \( n \times n \) | 85.61 | 99.24 | 100.00 | 12.37 | .570 | 98.29 | 99.08 | 94.96 | 90.97 | .958 |
| 2 | \( n \times n \) | 53.55 | 96.42 | 95.04 | 21.48 | .519 | 95.65 | 97.61 | 97.71 | 95.83 | .967 |
| 3 | \( n \times n \) | 69.30 | 97.26 | 71.88 | 13.71 | .520 | 95.51 | 96.54 | 92.90 | 90.90 | .940 |
| 4 | \( n \times n \) | 37.87 | 99.87 | 75.00 | .24 | .277 | 94.11 | 95.66 | 97.33 | 96.36 | .959 |
| 5 | \( n \times n \) | - | .00 | 58.31 | 93.20 | - | 96.09 | 95.91 | 96.91 | 97.05 | .965 |
| 6 | \( n \times n \) | 92.53 | 97.10 | 88.82 | 63.04 | .843 | 99.00 | 98.75 | 94.39 | 95.46 | .969 |
| 7 | \( n \times n \) | - | .00 | 52.19 | 100.00 | - | 86.71 | 89.74 | 90.24 | 87.33 | .885 |
| 8 | \( n \times n \) | 86.93 | 24.84 | 41.51 | 89.86 | .477 | 97.48 | 97.75 | 95.85 | 95.35 | .966 |
| 9 | \( n \times n \) | 87.73 | 99.20 | 91.59 | 36.03 | .724 | 96.25 | 97.64 | 88.36 | 82.50 | .911 |
| 10 | \( n \times n \) | 28.36 | 33.98 | 82.90 | 78.85 | .559 | 92.79 | 89.92 | 97.53 | 98.28 | .946 |
| 11 | \( n \times n \) | 71.93 | 90.82 | - | .00 | - | 97.41 | 98.65 | 96.23 | 92.91 | .963 |
| 12 | \( n \times n \) | 75.12 | 90.60 | 93.38 | 65.13 | .794 | 96.86 | 96.07 | 96.39 | 97.12 | .966 |
| 13 | \( n \times n \) | 95.73 | 88.84 | 82.79 | 90.66 | .894 | 97.90 | 98.96 | 97.89 | 95.80 | .976 |
| 14 | \( n \times n \) | 45.22 | 27.29 | 84.67 | 85.51 | .596 | 96.12 | 92.37 | 98.21 | 99.12 | .964 |
| 15 | \( n \times n \) | 81.03 | 66.49 | 76.84 | 87.64 | .775 | 95.74 | 92.68 | 94.37 | 96.75 | .949 |
| 16 | \( n \times n \) | 85.09 | 63.50 | 95.76 | 89.63 | .827 | 95.38 | 95.10 | 97.88 | 98.01 | .966 |
| 17 | \( n \times n \) | 94.79 | 63.13 | 75.11 | 66.45 | .732 | 98.54 | 96.32 | 95.52 | 98.21 | .971 |
| 18 | \( n \times n \) | 97.63 | 78.17 | 72.61 | 90.86 | .838 | 99.23 | 99.10 | 97.32 | 97.71 | .983 |
| 19 | \( n \times n \) | 77.03 | 86.32 | 94.84 | 90.48 | .870 | 96.77 | 97.50 | 99.08 | 98.80 | .980 |
| 20 | \( n \times n \) | 87.93 | 14.98 | 75.25 | 99.15 | .556 | 92.15 | 93.24 | 97.39 | 96.95 | .949 |
| 21 | \( n \times n \) | 83.76 | 38.89 | 76.65 | 96.38 | .693 | 90.71 | 89.26 | 94.88 | 95.61 | .926 |
| Word2vec embeddings | Morphological characteristics | Composite Method | ||||||||||||||
| Option 1 | Option 2 | Option 1 | Option 2 | Option 1 | Option 2 | |||||||||||
| # | Word | Prec | Recall | Prec | Recall | Avg-F1 | Prec | Recall | Prec | Recall | Avg-F1 | Prec | Recall | Prec | Recall | Avg-F1 |
| 1 | πκ | 98.29 | 98.88 | 93.93 | 90.97 | .955 | 97.93 | 98.68 | 92.78 | 89.08 | .946 | 98.69 | 99.36 | 96.51 | 93.07 | .969 |
| 2 | πκκ | 93.95 | 94.67 | 94.95 | 94.27 | .945 | 90.51 | 90.75 | 91.28 | 91.06 | .909 | 96.01 | 95.35 | 95.66 | 96.27 | .958 |
| 3 | πκκκ | 94.22 | 94.45 | 88.81 | 88.38 | .915 | 87.21 | 88.36 | 76.01 | 74.01 | .814 | 94.39 | 95.34 | 90.46 | 88.62 | .922 |
| 4 | πκκκ | 93.50 | 91.78 | 95.05 | 96.12 | .941 | 91.53 | 89.48 | 93.68 | 94.96 | .924 | 93.66 | 92.24 | 95.32 | 96.20 | .944 |
| 5 | πκκκ | 94.26 | 94.70 | 95.98 | 95.65 | .951 | 70.77 | 67.29 | 76.17 | 79.00 | .733 | 93.72 | 91.54 | 93.72 | 95.37 | .936 |
| 6 | πκκ | 98.74 | 98.35 | 92.65 | 94.33 | .960 | 96.78 | 97.80 | 89.52 | 85.26 | .923 | 99.00 | 99.10 | 95.90 | 95.46 | .974 |
| 7 | πκκκ | 83.95 | 87.26 | 87.82 | 84.63 | .859 | 78.46 | 81.74 | 82.51 | 79.34 | .805 | 87.60 | 86.81 | 87.95 | 88.68 | .878 |
| 8 | πκκκκ | 97.06 | 95.79 | 92.44 | 94.66 | .950 | 90.84 | 87.18 | 78.04 | 83.82 | .849 | 96.99 | 96.45 | 93.53 | 94.49 | .954 |
| 9 | πκκ | 95.13 | 97.80 | 88.37 | 76.98 | .894 | 92.61 | 97.32 | 83.89 | 64.27 | .838 | 95.30 | 97.36 | 86.50 | 77.90 | .891 |
| 10 | πκκκ | 90.25 | 87.32 | 96.90 | 97.68 | .930 | 81.24 | 76.75 | 94.35 | 95.63 | .870 | 93.76 | 87.97 | 97.08 | 98.56 | .943 |
| 11 | πκκκ | 96.76 | 96.18 | 89.84 | 91.30 | .935 | 92.39 | 95.36 | 86.25 | 78.74 | .881 | 96.26 | 98.28 | 95.06 | 89.68 | .948 |
| 12 | πκκκκ | 94.44 | 95.90 | 96.15 | 94.77 | .953 | 90.52 | 88.37 | 89.47 | 91.43 | .899 | 96.16 | 94.35 | 94.86 | 96.51 | .955 |
| 13 | πκκ | 97.56 | 97.72 | 95.47 | 95.17 | .965 | 95.02 | 96.96 | 93.72 | 89.94 | .939 | 97.39 | 98.44 | 96.84 | 94.77 | .969 |
| 14 | πκκ | 93.54 | 9.85 | 97.85 | 98.51 | .952 | 82.57 | 70.68 | 93.28 | 96.47 | .855 | 92.68 | 90.17 | 97.69 | 98.31 | .947 |
| 15 | πκκκ | 92.98 | 90.10 | 92.38 | 94.63 | .925 | 84.88 | 83.92 | 87.43 | 88.21 | .861 | 93.40 | 91.96 | 93.73 | 94.88 | .935 |
| 16 | πκκ | 94.05 | 93.91 | 97.37 | 97.43 | .957 | 87.17 | 87.82 | 94.72 | 94.41 | .910 | 94.40 | 95.25 | 97.94 | 97.56 | .963 |
| 17 | πκκκ | 96.67 | 94.99 | 93.86 | 95.90 | .953 | 90.94 | 89.17 | 86.75 | 88.86 | .889 | 96.93 | 96.63 | 95.79 | 96.16 | .964 |
| 18 | πκκκ | 98.56 | 99.10 | 97.26 | 95.69 | .976 | 94.65 | 96.57 | 89.11 | 83.71 | .910 | 98.51 | 98.65 | 95.95 | 95.56 | .972 |
| 19 | πκκκ | 96.73 | 96.31 | 98.64 | 98.80 | .976 | 96.42 | 93.49 | 97.63 | 98.72 | .966 | 96.12 | 96.74 | 98.80 | 98.56 | .976 |
| 20 | πκκκκ | 89.05 | 90.88 | 96.47 | 95.71 | .930 | 80.27 | 77.79 | 91.57 | 92.66 | .856 | 90.67 | 91.47 | 96.71 | 96.38 | .938 |
| 21 | πκκκκ | 89.58 | 88.87 | 94.67 | 95.03 | .920 | 86.07 | 83.50 | 92.18 | 93.51 | .888 | 91.48 | 87.48 | 94.11 | 96.08 | .923 |
| Word2vec embeddings - BiLSTM | Morphological characteristics - BiLSTM | Composite Method - BiLSTM | ||||||||||||||
| Option 1 | Option 2 | Option 1 | Option 2 | Option 1 | Option 2 | |||||||||||
| # | Word | Prec | Recall | Prec | Recall | Avg-F1 | Prec | Recall | Prec | Recall | Avg-F1 | Prec | Recall | Prec | Recall | Avg-F1 |
| 1 | πK | 97.71 | 99.40 | 96.54 | 87.82 | .953 | 97.85 | 98.36 | 91.14 | 88.66 | .940 | 98.29 | 99.08 | 94.96 | 90.97 | .958 |
| 2 | πK | 95.62 | 96.72 | 96.88 | 95.83 | .963 | 92.35 | 94.16 | 94.41 | 92.67 | .934 | 95.65 | 97.61 | 97.71 | 95.83 | .967 |
| 3 | πK | 94.16 | 95.22 | 90.20 | 88.14 | .919 | 88.29 | 89.55 | 78.42 | 76.17 | .831 | 95.51 | 96.54 | 92.90 | 90.90 | .940 |
| 4 | πK | 94.74 | 93.49 | 96.07 | 96.84 | .953 | 91.00 | 92.37 | 95.32 | 94.44 | .933 | 94.11 | 95.66 | 97.33 | 96.36 | .959 |
| 5 | πK | 95.86 | 96.93 | 97.66 | 96.84 | .968 | 78.33 | 76.95 | 82.81 | 83.92 | .805 | 96.09 | 95.91 | 96.91 | 97.05 | .965 |
| 6 | πK | 98.95 | 98.60 | 93.75 | 95.24 | .966 | 96.98 | 98.15 | 91.13 | 86.17 | .931 | 99.00 | 98.75 | 94.39 | 95.46 | .969 |
| 7 | πK | 87.10 | 91.32 | 91.63 | 87.54 | .893 | 84.15 | 85.01 | 86.06 | 85.25 | .851 | 86.71 | 89.74 | 90.24 | 87.33 | .885 |
| 8 | πK | 98.40 | 97.57 | 95.59 | 97.07 | .972 | 90.89 | 90.55 | 82.74 | 83.30 | .869 | 97.48 | 97.75 | 95.85 | 95.35 | .966 |
| 9 | πK | 96.37 | 97.92 | 89.66 | 83.06 | .917 | 93.79 | 96.12 | 79.83 | 70.72 | .850 | 96.25 | 97.64 | 88.36 | 82.50 | .911 |
| 10 | πK | 91.83 | 89.59 | 97.45 | 98.04 | .942 | 84.06 | 81.46 | 95.47 | 96.19 | .893 | 92.79 | 89.92 | 97.53 | 98.28 | .946 |
| 11 | πK | 97.61 | 97.75 | 93.90 | 93.52 | .957 | 94.07 | 97.31 | 91.96 | 83.40 | .916 | 97.41 | 98.65 | 96.23 | 92.91 | .963 |
| 12 | πK | 97.51 | 96.07 | 96.41 | 97.73 | .969 | 92.31 | 91.48 | 92.18 | 92.95 | .922 | 96.86 | 96.07 | 96.39 | 97.12 | .966 |
| 13 | πK | 98.21 | 98.64 | 97.28 | 96.43 | .976 | 94.64 | 96.80 | 93.36 | 89.14 | .935 | 97.90 | 98.96 | 97.89 | 95.80 | .976 |
| 14 | πK | 92.93 | 95.76 | 98.99 | 98.27 | .965 | 85.90 | 76.44 | 94.56 | 97.03 | .883 | 96.12 | 92.37 | 98.21 | 99.12 | .964 |
| 15 | πK | 95.19 | 93.81 | 95.18 | 96.26 | .951 | 89.26 | 86.49 | 89.60 | 91.79 | .893 | 95.74 | 92.68 | 94.37 | 96.75 | .949 |
| 16 | πK | 94.55 | 92.72 | 96.88 | 97.69 | .955 | 86.30 | 87.96 | 94.75 | 93.96 | .907 | 95.38 | 95.10 | 97.88 | 98.01 | .966 |
| 17 | πK | 98.75 | 96.63 | 95.89 | 98.46 | .974 | 92.66 | 91.52 | 89.53 | 90.91 | .912 | 98.54 | 96.32 | 95.52 | 98.21 | .971 |
| 18 | πK | 98.97 | 99.28 | 97.83 | 96.90 | .982 | 96.42 | 95.90 | 87.95 | 89.37 | .924 | 99.23 | 99.10 | 97.32 | 97.71 | .983 |
| 19 | πK | 97.23 | 95.44 | 98.33 | 99.00 | .975 | 95.95 | 95.22 | 98.24 | 98.52 | .970 | 96.77 | 97.50 | 99.08 | 98.80 | .980 |
| 20 | πK | 90.38 | 93.97 | 97.65 | 96.16 | .945 | 82.73 | 81.03 | 92.77 | 93.50 | .875 | 92.15 | 93.24 | 97.39 | 96.95 | .949 |
| 21 | πK | 89.75 | 87.08 | 93.88 | 95.22 | .915 | 82.45 | 83.10 | 91.85 | 91.50 | .872 | 90.71 | 89.26 | 94.88 | 95.61 | .926 |
| Round | All Tasks | Tasks with 4+ options | ||
| | Average Number of Options | Cumulative Gold Lost | Count | Average Number of Options |
| 0 | 4.83 | 0 | 76 | 9.07 |
| 1 | 2.84 | 1 | 45 | 6.69 |
| 2 | 2.27 | 1 | 25 | 5.88 |
| 3 | 2.05 | 2 | 15 | 5.27 |
| 4 | 1.91 | 3 | 6 | 4.67 |
| 5 | 1.87 | 4 | 2 | 4.00 |
| 6 | 1.85 | 4 | 0 | - |
| Phase | Cumulative Finished | | Cumulative Accuracy | | This Phase Accuracy | |
| | P | A | P | A | P | A |
| Filter | 38% | 13% | 0.99 | 1.00 | 0.99 | 1.00 |
| Select | 87% | 85% | 0.94 | 0.97 | 0.90 | 0.96 |
| Expert | 100% | 100% | 0.94 | 0.97 | 0.92 | 0.97 |
| Workflow | Accuracy | | Experts | | Crowd Cost |
| | P | A | P | A | |
| 200 tasks | |||||
| Automatic | 0.87 | 0.89 | 0 | 0 | 0 |
| Review-Select | 0.83 | 0.82 | 0 | 0 | $39 |
| Review-Expert | 0.94 | 0.97 | 55% | 58% | $30 |
| Our Workflow | 0.94 | 0.97 | 13% | 15% | $103 |
| 2,014 tasks | |||||
| Our Workflow | 0.95 | 0.93 | 12% | 12% | - |
| Label | Total | Sent to Experts | ||
| | Count | Percentage | Count | Percentage |
| A1 | 611 | 39.3 | 65 | 33.5 |
| A0 | 378 | 24.3 | 34 | 17.5 |
| A2 | 121 | 7.8 | 15 | 7.7 |
| AM-TMP | 116 | 7.5 | 19 | 9.8 |
| AM-MOD | 68 | 4.4 | 10 | 5.2 |
| AM-MNR | 47 | 3.0 | 8 | 4.1 |
| none | 42 | 2.7 | 17 | 8.8 |
| AM-LOC | 39 | 2.5 | 9 | 4.6 |
| AM-NEG | 38 | 2.4 | 2 | 1.0 |
| AM-DIS | 37 | 2.4 | 7 | 3.6 |
| A3 | 19 | 1.2 | 3 | 1.5 |
| AM-PNC | 16 | 1.0 | 2 | 1.0 |
| AM-DIR | 13 | 0.8 | 2 | 1.0 |
| A4 | 10 | 0.6 | 1 | 0.5 |
| Anno | Gold | ||||||
| 0 | 1 | 2 | TMP | LOC | none | other | |
| 0 | 369 | 16 | 2 | 1 | 1 | 5 | 1 |
| 1 | 5 | 589 | 7 | 3 | - | 12 | 2 |
| 2 | 1 | 4 | 104 | - | 2 | 4 | 2 |
| TMP | - | - | - | 104 | - | 2 | 2 |
| LOC | - | 1 | 3 | - | 34 | - | 4 |
| none | - | - | - | - | - | 14 | 1 |
| other | 3 | 1 | 5 | 2 | 2 | 5 | 232 / 10 |
| #Qu. | #SQL | #DB | #T/D | #Easy | #Med. | #Hard | #ExH | |
| all | 9691 | 5263 | 166 | 5.3 | 2233 | 3439 | 2095 | 1924 |
| train | 6831 | 3493 | 99 | 5.4 | 1559 | 2255 | 1502 | 1515 |
| dev | 954 | 589 | 25 | 4.2 | 249 | 405 | 191 | 109 |
| test | 1906 | 1193 | 42 | 5.7 | 425 | 779 | 402 | 300 |
| Original (Easy question, involving one table in one database): What is the number of cars with more than 4 cylinders? SELECT count(*) FROM CARS_DATA WHERE Cylinders > 4 |
| Translated: Cho biết số lượng những chiếc xe có nhiều hơn 4 xi lanh. SELECT count(*) FROM [dữ liệu xe] WHERE [số lượng xi lanh] > 4 |
| Original (Hard question, with a nested SQL query): Which countries in europe have at least 3 car manufacturers? SELECT T1.CountryName FROM COUNTRIES AS T1 JOIN CONTINENTS AS T2 ON T1.Continent = T2.ContId JOIN CAR_MAKERS AS T3 ON T1.CountryId = T3.Country WHERE T2.Continent = “europe” GROUP BY T1.CountryName HAVING count(*) >= 3 |
| Translated: Những quốc gia nào ở châu Âu có ít nhất 3 nhà sản xuất xe hơi? SELECT T1.[tên quốc gia] FROM [quốc gia] AS T1 JOIN [lục địa] AS T2 ON T1.[lục địa] = T2.[id lục địa] JOIN [nhà sản xuất xe hơi] AS T3 ON T1.[id quốc gia] = T3.[quốc gia] WHERE T2.[lục địa] = “châu Âu” GROUP BY T1.[tên quốc gia] HAVING count(*) >= 3 |
| Approach | Easy | Medium | Hard | ExH | SELECT | WHERE | ORDER BY | GROUP BY | KEYWORDS |
| EditSQLDeP | 65.7 | 46.1 | 37.6 | 16.8 | 75.1 | 44.6 | 65.6 | 63.2 | 73.5 |
| EditSQLXLM-R | 75.1 | 56.2 | 45.3 | 22.4 | 82.7 | 60.3 | 70.7 | 67.2 | 79.8 |
| EditSQLPhoBERT | 75.6 | 58.0 | 47.4 | 22.7 | 83.3 | 61.8 | 72.5 | 67.9 | 80.6 |
| IRNetDeP | 71.8 | 51.5 | 47.4 | 18.5 | 79.3 | 48.7 | 71.8 | 63.4 | 74.3 |
| IRNetXLM-R | 76.2 | 57.8 | 46.8 | 23.5 | 83.5 | 59.1 | 74.4 | 68.3 | 80.5 |
| IRNetPhoBERT | 76.8 | 57.5 | 47.2 | 24.8 | 84.5 | 59.3 | 76.6 | 68.2 | 80.3 |
| Setting | Approach | dev | test | Approach | dev | test |
| Vi-Syllable | EditSQL [MT] | 21.5 | 16.8 | IRNet [MT] | 25.4 | 20.3 |
| Vi-Syllable | EditSQL | 28.6 | 24.1 | IRNet | 43.3 | 38.2 |
| Vi-Syllable | EditSQLXLM-R | 55.2 | 51.3 | IRNetXLM-R | 58.6 | 52.8 |
| Vi-Word | EditSQL [MT] | 22.8 | 17.4 | IRNet [MT] | 27.4 | 21.6 |
| Vi-Word | EditSQL | 33.7 | 30.2 | IRNet | 49.7 | 43.6 |
| Vi-Word | EditSQLDeP | 45.3 | 42.2 | IRNetDeP | 52.2 | 47.1 |
| Vi-Word | EditSQLPhoBERT | 56.7 | 52.6 | IRNetPhoBERT | 60.2 | 53.2 |
| En | EditSQLRoBERTa | 58.3 | 53.6 | IRNetRoBERTa | 63.8 | 55.3 |
| Hebrew Token | Morphological Analysis | English Translation |
| bbit | b/ADP bit/NOUN | in a house |
| | b/ADP h/DET bit/NOUN | in the house |
| hlbn | h/DET lbn/NOUN | the buttermilk |
| | h/DET lbn/ADJ | the white |
| | hlbn/VERB | whitened |
| English | Turkish | Arabic | Hebrew | |
| UDPipe Oracle | 94.62 | 93.24 | 95.30 | 95.13 |
| UDPipe Predicted | 93.62 | 91.64 | 89.34 | 80.87 |
| Shared Task Leader | 95.94 | 94.78 | 93.63 | 91.36 |
| PtrNetMD Infused | 96.6 | 94.41 | ||
| PtrNetMD Uninfused | 89.54 | 91.3 |
| English | Turkish | Arabic | Hebrew | |
| UDPipe Oracle | 100.00 | 100.00 | 100.00 | 100.00 |
| UDPipe Predicted | 99.03 | 97.92 | 93.71 | 85.16 |
| Shared Task Leader | 99.26 | 97.92 | 96.81 | 93.98 |
| PtrNetMD Infused | 99.41 | 96.36 | ||
| PtrNetMD Uninfused | 97.78 | 94.74 |
| Turkish | Arabic | Hebrew | |
| Token Multi-Tag | 92.57 | 94.2 | 93.82 |
| Token Seq-Tag | 92.77 | 95.05 | 93.75 |
| PtrNetMD infused | 96.76 | 96.40 | |
| PtrNetMD uninfused | 90.01 | 94.02 |
| Dev-Inf | Dev-Uninf | Test-Inf | Test-Uninf | |
| MoreMD | 94.09 | 90.83 | 92.92 | 87.53 |
| MoreMD-DEP | 95.49 | 92.36 | 93.92 | 89.08 |
| PtrNetMD | 95.09 | 93.9 | 93.51 | 90.49 |
| (1) | BEA-train |
| X: | I will make a poet to kill this pain. |
| Y: | I will make a poem to kill this pain. |
| Y': | I will write a poem to get rid of this pain. |
| (2) | EFCamDat |
| X: | The restaurant in front of movie teather. |
| Y: | The restaurant in front of movie theater. |
| Y': | The restaurant is located opposite the movie theater. |
| (3) | Lang-8 |
| X: | Coordinate with product support team for potential customer show site visit; |
| Y: | Coordinate with product support team for potential customer show site visits; |
| Y': | Please coordinate with the product support team to escort potential customers to site visits. |
| Dataset | WER (%noise) |
| BEA-train | 37.1 |
| EFCamDat | 42.1 |
| Lang-8 | 34.6 |
| Dataset | #sent (pairs) | Split | Scorer |
| BEA-train | 561,100 | train | - |
| EFCamDat | 2,269,595 | train | - |
| Lang-8 | 5,689,213 | train | - |
| BEA-valid | 2,377 | valid | - |
| CoNLL-2014 | 1,312 | test | M² scorer & GLEU |
| JFLEG | 747 | test | M² scorer & GLEU |
| BEA-test | 4,477 | test | ERRANT |
| Model | CoNLL-2014 | JFLEG | ||
| F0.5 | GLEU | F0.5 | GLEU | |
| BEA-train (BEA): | ||||
| No denoising BEA | 49.6 | 63.3 | 58.7 | 52.3 |
| CE filtering BEA | 42.9 | 61.0 | 52.7 | 49.0 |
| SED filtering BEA | 45.1 | 62.7 | 55.6 | 52.2 |
| LM filtering BEA | 47.1 | 63.0 | 58.3 | 52.6 |
| SR BEA (Ours) | 50.3 | 64.2 | 60.5 | 54.8 |
| EFCamDat (EF): | | | | |
| No denoising EF | 40.3 | 61.3 | 59.5 | 53.7 |
| CE filtering EF | 40.9 | 61.5 | 59.8 | 54.2 |
| SED filtering EF | 26.5 | 54.0 | 47.5 | 49.7 |
| LM filtering EF | 41.2 | 61.7 | 59.7 | 54.2 |
| SR EF (Ours) | 48.4 | 63.5 | 63.9 | 57.1 |
| Lang-8 (L8): | ||||
| No denoising L8 | 54.9 | 65.9 | 68.4 | 58.1 |
| CE filtering L8 | 54.1 | 65.3 | 68.6 | 58.2 |
| SED filtering L8 | 55.7 | 67.1 | 68.5 | 60.7 |
| LM filtering L8 | 55.9 | 66.3 | 68.6 | 59.1 |
| SR L8 (Ours) | 56.5 | 67.7 | 68.6 | 61.0 |
| BEA+EF: | ||||
| No denoising BEA+EF | 49.1 | 63.4 | 62.0 | 53.9 |
| CE filtering BEA+EF | 49.6 | 63.3 | 61.9 | 54.5 |
| SED filtering BEA+EF | 51.2 | 64.9 | 62.8 | 56.7 |
| LM filtering BEA+EF | 48.3 | 63.3 | 62.3 | 54.7 |
| SR BEA+EF (Ours) | 54.5 | 65.2 | 65.5 | 58.0 |
| BEA+EF+L8: | ||||
| No denoising BEA+EF+L8 | 56.1 | 65.7 | 67.0 | 56.9 |
| CE filtering BEA+EF+L8 | 55.0 | 66.0 | 68.6 | 58.2 |
| SED filtering BEA+EF+L8 | 56.1 | 67.3 | 67.7 | 60.3 |
| LM filtering BEA+EF+L8 | 56.7 | 65.9 | 68.0 | 57.8 |
| SR BEA+EF+L8 (Ours) | 58.8 | 68.0 | 70.6 | 61.4 |
| Filtering method | BEA-train | EFCamDat | Lang-8 |
| CE filtering | 448,880 (∇ 20.0%) | 1,815,676 (∇ 20.0%) | 4,551,370 (∇ 20.0%) |
| SED filtering | 317,957 (∇ 43.3%) | 1,250,744 (∇ 44.9%) | 3,314,440 (∇ 41.7%) |
| LM filtering | 456,347 (∇ 18.7%) | 1,936,238 (∇ 14.7%) | 4,651,085 (∇ 18.2%) |
| Model | CoNLL-2014 | JFLEG | BEA | ||
| F0.5 | GLEU | F0.5 | GLEU | F0.5 | |
| Single model: | |||||
| Junczys-Dowmunt et al. (2018) | 53.0 | - | - | 57.9 | - |
| Lichtarge et al. (2019) | 56.8 | - | - | 61.6 | - |
| Awasthi et al. (2019) | 59.7 | - | - | 60.3 | - |
| Kiyono et al. (2019) | 61.3 | 68.6 | 71.3 | 59.7 | 64.2 |
| SR +PRET+SED | 61.4 | 69.3 | 72.5 | 63.3 | 65.5 |
| Ensemble model: | |||||
| Junczys-Dowmunt et al. (2018) | 55.8 | - | - | 59.9 | - |
| Lichtarge et al. (2019) | 60.4 | - | - | 63.3 | - |
| Grundkiewicz et al. (2019) | 64.2 | - | - | 61.2 | 69.5 |
| Kiyono et al. (2019) | 65.0 | 68.8 | 72.9 | 61.4 | 70.2 |
| SR +PRET+R2L+SED | 63.1 | 69.8 | 73.9 | 63.7 | 67.8 |
| 1: | Improved by denoising (66.4%) |
| X: | how about to going to movie . |
| Y: | How about to going to movie . |
| Y': | How about going to a movie . |
| 2: | Both are correct (7.2 %) |
| X: | I'm twenty-nine old. |
| Y: | I'm twenty-nine years old. |
| Y': | I'm 29 years old. |
| 3: | Meaning is not preserved (10.4 %) |
| X: | you need keep calm. |
| Y: | You need to keep calm. |
| Y': | You need to be calm. |
| 4: | Added Unnecessary information (8.8 %) |
| X: | The are a few of chair and desk. |
| Y: | There are a few chairs and desks. |
| Y': | There are a few chairs and desks too. |
| 5: | Contains errors (3.8 %) |
| X: | There are very positive news for us. |
| Y: | There is very positive news for us . |
| Y': | There is a very positive news for us . |
| 6: | Lack of fluency (3.4 %) |
| X: | I go in my work on the bike. |
| Y: | I go to work by bike. |
| Y': | I go to work on my bike. |
| Model | EF | BEA+EF | BEA+EF+Lang-8 | ||||||
| Prec. | Rec. | F0.5 | Prec. | Rec. | F0.5 | Prec. | Rec. | F0.5 | |
| No denoising | 48.5 | 24.0 | 40.3 | 58.5 | 30.0 | 49.1 | 62.8 | 39.2 | 56.1 |
| SR w/o fail-safe | 49.8 | 32.3 | 44.3 | 57.4 | 41.0 | 53.1 | 59.5 | 45.8 | 56.1 |
| SR | 55.2 | 32.4 | 48.4 | 61.2 | 38.0 | 54.5 | 61.7 | 49.4 | 58.8 |
| 1: Fail-safe deactivates: | ppl. |
| X: By the way, I have to *discuss of the education. | 94.65 |
| Y: By the way, I have to *discuss about education. | 79.64 |
| Y': By the way, I have to discuss education. | 73.37 |
| 2: Fail-safe activates: | ppl. |
| X: Then I was treated in the hospital for one month. | 34.34 |
| Y: I was treated in the hospital for one month. | 32.42 |
| Y': I was treated *at the hospital for one month. | 33.59 |
| Model | F0.5 | GLEU |
| No denoising BEA | 49.6 | 63.3 |
| SR BEA w/ dropout | 50.3 | 64.2 |
| SR BEA w/o dropout | 49.5 | 63.9 |
| No denoising EF | 40.3 | 61.3 |
| SR EF w/ dropout | 48.4 | 63.5 |
| SR EF w/o dropout | 47.3 | 63.0 |
| No denoising L8 | 54.9 | 65.9 |
| SR L8 w/ dropout | 56.5 | 67.7 |
| SR L8 w/o dropout | 55.6 | 67.5 |
| Configurations | Values |
| Model Architecture | Transformer (Vaswani et al., 2017) |
| Optimizer | Adam (Kingma and Ba, 2015) |
| Learning Rate Schedule | Same as described in Section 5.3 of Vaswani et al. (2017) |
| Number of Epochs | 30 |
| Dropout | 0.3 |
| Stopping Criterion | Train the model for 30 epochs, saving model parameters every 500 updates. |
| Gradient Clipping | 1.0 |
| Loss Function | Label smoothed cross entropy (Szegedy et al., 2016) |
| Beam Search | Beam size 5 with length normalization |
| Confusion set (X, Y) | Y (%) | Ŷ (%) |
| (*discuss about, *discuss about) | 66.7 | 49.5 |
| (*discuss about, discuss) | 33.0 | 50.2 |
| (*discuss about, *discuss in) | 0.3 | 0.3 |
| (*enter in, *enter in) | 61.6 | 31.7 |
| (*enter in, enter) | 38.4 | 68.3 |
| Model | CoNLL-2014 | JFLEG | BEA-test | ||||||||
| Prec. | Rec. | F0.5 | GLEU | Prec. | Rec. | F0.5 | GLEU | Prec. | Rec. | F0.5 | |
| Single model: | |||||||||||
| Junczys-Dowmunt et al. (2018) | - | - | 53.0 | - | - | - | - | 57.9 | - | - | - |
| Lichtarge et al. (2019) | 65.5 | 37.1 | 56.8 | - | - | - | - | 61.6 | - | - | - |
| Awasthi et al. (2019) | 66.1 | 43.0 | 59.7 | - | - | - | - | 60.3 | - | - | - |
| Kiyono et al. (2019) | 67.9 | 44.1 | 61.3 | 68.6 | 76.6 | 55.8 | 71.3 | 59.7 | 65.5 | 59.4 | 64.2 |
| Kaneko et al. (2020) | 69.2 | 45.6 | 62.6 | - | - | - | - | 61.3 | 67.1 | 60.1 | 65.6 |
| Omelianchuk et al. (2020) | 77.5 | 40.1 | 65.3 | - | - | - | - | - | 79.2 | 53.9 | 72.4 |
| SR BEA+EF+L8+PRET | 63.8 | 52.4 | 61.1 | 69.6 | 74.9 | 62.5 | 72.0 | 63.4 | 59.9 | 66.9 | 61.2 |
| Ensemble model: | |||||||||||
| Junczys-Dowmunt et al. (2018) | 61.9 | 40.2 | 55.8 | - | - | - | - | 59.9 | - | - | - |
| Lichtarge et al. (2019) | 66.7 | 43.9 | 60.4 | - | - | - | - | 63.3 | - | - | - |
| Grundkiewicz et al. (2019) | - | - | 64.2 | - | - | - | - | 61.2 | 72.3 | 60.1 | 69.5 |
| Kiyono et al. (2019) | 72.4 | 46.1 | 65.0 | 68.8 | 79.5 | 54.6 | 72.9 | 61.4 | 74.7 | 56.7 | 70.2 |
| Kaneko et al. (2020) | 72.6 | 46.4 | 65.2 | - | - | - | - | 62.0 | 72.3 | 61.4 | 69.8 |
| Omelianchuk et al. (2020) | 78.2 | 41.5 | 66.5 | - | - | - | - | - | 78.9 | 58.2 | 73.6 |
| SR BEA+EF+L8+PRET + R2L | 65.5 | 53.2 | 62.6 | 70.1 | 76.5 | 63.3 | 73.4 | 63.9 | 62.9 | 67.7 | 63.8 |
| Model | CoNLL-2014 | JFLEG | BEA-test | ||||||||
| Prec. | Rec. | F0.5 | GLEU | Prec. | Rec. | F0.5 | GLEU | Prec. | Rec. | F0.5 | |
| Single model: | |||||||||||
| SR BEA+EF+L8+PRET | 63.8 | 52.4 | 61.1 | 69.6 | 74.9 | 62.5 | 72.0 | 63.4 | 59.9 | 66.9 | 61.2 |
| SR BEA+EF+L8+PRET+SED | 65.2 | 49.9 | 61.4 | 69.3 | 76.3 | 60.6 | 72.5 | 63.3 | 66.7 | 61.3 | 65.5 |
| Ensemble model: | |||||||||||
| SR BEA+EF+L8+PRET + R2L | 65.5 | 53.2 | 62.6 | 70.1 | 76.5 | 63.3 | 73.4 | 63.9 | 62.9 | 67.7 | 63.8 |
| SR BEA+EF+L8+PRET+R2L+SED | 67.1 | 50.8 | 63.1 | 69.8 | 77.8 | 61.5 | 73.9 | 63.7 | 69.4 | 62.1 | 67.8 |
| Label | r/OffMyChest | r/CasualConversation |
| Emo | 2449 | 1499 |
| Info | 2749 | 1742 |
| Total | 7613 | 5247 |
| Label | Frequency |
| Emotional | 0.31 |
| Informational | 0.38 |
| Model | Precision | Recall | F1-score |
| B1 | 0.43 | 0.67 | 0.53 |
| B2 | 0.48 | 0.68 | 0.57 |
| A1 | 0.55* | 0.72 | 0.62* |
| A1+ELMO | 0.52 | 0.69 | 0.59 |
| A1+BERT | 0.57 | 0.71* | 0.63 |
| Model | Precision | Recall | F1-score |
| B1 | 0.56 | 0.67 | 0.61 |
| B2 | 0.60 | 0.64 | 0.62 |
| A1 | 0.57 | 0.69* | 0.63* |
| A1+ELMO | 0.58 | 0.66 | 0.62 |
| A1+BERT | 0.59* | 0.69 | 0.64 |
| Label | Model | Precision | Recall | F1 |
| Emo | B1 | 0.43 | 0.67* | 0.53 |
| Emo | A1-FE | 0.54* | 0.66 | 0.60* |
| Emo | A1 | 0.55 | 0.72 | 0.62 |
| Info | B1 | 0.56 | 0.67* | 0.61* |
| Info | A1-FE | 0.57* | 0.65 | 0.61 |
| Info | A1 | 0.57 | 0.69 | 0.63 |
| Type | Reliability metric | Info | Emo |
| No vs Low vs High | Fleiss Kappa | 0.484 | 0.242 |
| | Gwet's AC1/AC2 | 0.631 | 0.317 |
| Binary | Fleiss Kappa | 0.653 | 0.394 |
| | Gwet's AC1/AC2 | 0.701 | 0.644 |
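The Fleiss' kappa values reported above can be computed from an item-by-category matrix of rater counts. A minimal sketch of the standard formula (our own helper, not the authors' code; `fleiss_kappa` and its input layout are assumptions for illustration):

```python
def fleiss_kappa(table):
    """Fleiss' kappa for a list of rows, where table[i][j] is the number
    of raters who assigned item i to category j (rows sum to #raters)."""
    n_items = len(table)
    n_raters = sum(table[0])
    # Per-item observed agreement P_i, then the mean P_bar.
    p_obs = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
             for row in table]
    p_bar = sum(p_obs) / n_items
    # Chance agreement P_e from overall category proportions.
    n_cats = len(table[0])
    totals = [sum(row[j] for row in table) for j in range(n_cats)]
    p_j = [t / (n_items * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

With perfect agreement the statistic reaches 1.0; systematic disagreement drives it below zero, which is why the binary collapse in the table raises kappa relative to the three-way labels.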
| Disclosure | Positive | Total |
| Emo Disclosure | 826 | 1200 |
| Info Disclosure | 543 | 1200 |
| Language (L1) | Language (L2) | Code-Mixed (L1-L2) | ||||
| en | India's agriculture is their main strength. | hi | 邦ारवति को कृधिके देशप्रस्था निकानी मूल्ति िताकल एँ | India's कृधिके देशप्रस्था main strength एँ | ||
| en | Especially valuable people like Connor Rooney. | bn | बर्वेयर्षय कोनर्षय कोनर्षय निकानी मूल्ति िताकल िताकल िताकल . | बर्वेयर्षय Connor Rooney समाट्ति valuable िताकल . | ||
| en | Glasses and cups, whatever they are, can be turned upside down. | ta | सिलार्णे कोनर्षय कोनर्षय निकानी मूल्ति िताकल िताकल , अशुकृति पूर्वति िताकल िताकल . | Glasses िताकल पूर्वति िताकल िताकल . | | |
| en | Democracy and development go hand in hand. | de | Demokratie und Entwicklung geh Hand in Hand. | Democracy und Development geh Hand in Hand. | ||
| en | We abolish national embassies. | fr | Nous abolissons les ambassades nationales. | Nous abolissons les embassies national. | ||
| Model | en-es | en-de | en-fr | en-hi | ||||||||
| B | R | M | B | R | M | B | R | M | B | R | M | |
| Seq2Seq | 16.42 | 36.03 | 24.23 | 19.19 | 36.19 | 24.87 | 19.28 | 38.54 | 26.41 | 15.49 | 35.29 | 23.72 |
| Attentive-Seq2Seq | 17.21 | 36.83 | 25.41 | 20.12 | 37.14 | 25.64 | 20.12 | 39.30 | 27.54 | 16.55 | 36.25 | 24.97 |
| Pointer Generator | 18.98 | 37.81 | 26.13 | 21.45 | 38.22 | 26.14 | 21.41 | 40.42 | 28.76 | 17.62 | 37.32 | 25.61 |
| Proposed Model | 22.47 | 41.24 | 29.45 | 24.15 | 42.76 | 30.47 | 24.89 | 43.54 | 31.26 | 21.55 | 40.21 | 28.37 |
| (-) BPE | 21.72 | 40.67 | 28.65 | 23.31 | 41.89 | 29.76 | 24.27 | 43.02 | 30.84 | 20.89 | 39.54 | 27.43 |
| (-) PoS Feature | 22.21 | 40.92 | 29.12 | 23.76 | 42.12 | 29.88 | 24.21 | 42.95 | 30.86 | 21.02 | 39.84 | 27.91 |
| (-) NE Feature | 21.52 | 40.32 | 28.41 | 22.19 | 41.64 | 29.39 | 23.92 | 42.52 | 30.37 | 20.42 | 39.20 | 27.46 |
| (-) LM Feature | 21.56 | 40.36 | 28.42 | 23.21 | 41.85 | 29.56 | 23.82 | 42.48 | 30.29 | 20.47 | 39.17 | 27.24 |
| (-) GFF | 21.59 | 40.28 | 28.59 | 23.24 | 41.75 | 29.50 | 23.87 | 42.58 | 30.46 | 20.31 | 39.24 | 27.51 |
| (-) Transfer Learning | 20.69 | 39.39 | 27.53 | 22.39 | 40.98 | 28.87 | 22.64 | 41.57 | 29.34 | 19.48 | 38.34 | 26.41 |
| Model | en-bn | en-ml | en-ta | en-te | ||||||||
| B | R | M | B | R | M | B | R | M | B | R | M | |
| Seq2Seq | 16.32 | 33.02 | 21.82 | 15.92 | 34.97 | 23.12 | 11.82 | 25.14 | 20.21 | 10.87 | 24.92 | 19.05 |
| Attentive-Seq2Seq | 17.29 | 34.12 | 23.08 | 17.21 | 35.91 | 23.94 | 13.09 | 26.57 | 21.41 | 12.14 | 26.17 | 20.11 |
| Pointer Generator | 18.24 | 35.86 | 24.36 | 18.49 | 37.16 | 25.12 | 14.03 | 27.84 | 22.53 | 13.21 | 27.37 | 21.17 |
| Proposed Model | 21.49 | 39.11 | 27.32 | 21.61 | 40.23 | 28.01 | 15.69 | 29.56 | 23.88 | 14.81 | 29.23 | 22.56 |
| (-) BPE | 20.81 | 38.64 | 26.65 | 20.89 | 39.73 | 27.49 | 15.12 | 28.92 | 23.19 | 14.15 | 28.75 | 21.82 |
| (-) POS Feature | 21.04 | 38.77 | 26.94 | 21.11 | 39.91 | 27.55 | 15.23 | 28.11 | 22.34 | 14.23 | 28.67 | 21.86 |
| (-) NER Feature | 20.49 | 38.14 | 26.33 | 20.63 | 39.29 | 27.11 | 15.19 | 29.06 | 23.48 | 14.51 | 28.63 | 22.26 |
| (-) LM Feature | 20.13 | 37.73 | 25.95 | 20.54 | 38.69 | 26.44 | 14.73 | 28.64 | 22.89 | 13.97 | 28.07 | 21.79 |
| (-) GFF | 20.57 | 38.11 | 26.36 | 20.69 | 39.18 | 27.07 | 15.24 | 28.84 | 23.19 | 14.29 | 28.67 | 21.88 |
| (-) Transfer Learning | 19.67 | 37.49 | 25.87 | 20.12 | 38.74 | 26.54 | 14.48 | 28.34 | 22.72 | 13.79 | 28.12 | 21.53 |
| en-de | Input | The real problem is statesponsored lawlessness. |
| | Reference | Das real problem ist die vom statesponsored lawlessness. |
| | PG | Das echtes problem ist die vom statesponsored Gesetz. |
| | Proposed | Das real problem ist vom statesponsored lawlessness. |
| | (-) TL | Das problem ist die statesponsored Gesetzlosigkeit. |
| en-es | Input | However we have proposed some minor changes. |
| | Reference | Con todo hemos propuestos algunos minor changes. |
| | PG | Sin embargo, hemo propuestos minerero changes. |
| | Proposed | Sin embargo hemos propuestos algunos minor changes. |
| | (-) TL | Con todo hemos propuestos algunos minor cambios. |
| en-hi | Input | India's agriculture is their main strength. |
| | Reference | India kā agriculture ॅśāsākāo main strength रेल |
| | PG | India's agriculture ॅśāsākāo रागके रेल |
| | Proposed | India kā agriculture ॅśāsākāo main strength रेल |
| | (-) TL | India कृपिक रैस्थका main strength. |
| en-fr | Input | Read the statements by Giscard dEstaing. |
| | Reference | Lisez les statements de Giscard dEstaing. |
| | PG | Lisez déclarations de Giscard dEstaing. |
| | Proposed | Lisez les statement de Giscard dEstaing. |
| | (-) TL | Lisez de déclarations Giscard dEstaing. |
| Approach | Human | B | R | M |
| Synthetic | 4.19 | 67.51 | 73.56 | 71.21 |
| Pointer Generator | 2.34 | 19.47 | 39.48 | 27.39 |
| Proposed Model | 3.26 | 24.65 | 43.55 | 29.11 |
| Language Pairs | # Parallel Sentences | # Code-Mixed Sentences | Train/Dev/Test | SPF | CMI |
| en-es | 1,965,734 | 200,725 | 196,725/2,000/2,000 | 68.59 | 28.80 |
| en-de | 1,920,209 | 192,131 | 188,131/2,000/2,000 | 68.41 | 28.26 |
| en-fr | 2,007,723 | 197,922 | 193,922/2,000/2,000 | 68.12 | 28.40 |
| en-hi | 1,561,840 | 252,330 | 248,330/2,000/2,000 | 62.92 | 23.49 |
| en-bn | 337,428 | 167,893 | 163,893/2,000/2,000 | 67.61 | 25.41 |
| en-ml | 359,423 | 182,453 | 178,453/2,000/2,000 | 81.84 | 28.13 |
| en-ta | 26,217 | 12,380 | 11,380/500/500 | 78.74 | 28.16 |
| en-te | 22,165 | 10,105 | 9,105/500/500 | 76.19 | 28.69 |
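The CMI column above is the Code-Mixing Index of Das and Gambäck (2014), CMI = 100 · (1 − max_i(w_i)/(n − u)), where w_i counts tokens of language i, n is the utterance length, and u counts language-independent tokens. A minimal sketch (our own helper; the per-token tag format, with `'u'` marking language-independent tokens, is an assumption):

```python
def cmi(tags):
    """Code-Mixing Index for one utterance, given per-token language tags.

    tags: list of language tags, e.g. ['en', 'hi', 'hi', 'u'];
    'u' marks language-independent tokens (punctuation, named entities).
    """
    lang_tags = [t for t in tags if t != 'u']
    if not lang_tags:  # utterance with no language-specific tokens
        return 0.0
    counts = {}
    for t in lang_tags:
        counts[t] = counts.get(t, 0) + 1
    # CMI = 100 * (1 - max_i(w_i) / (n - u))
    return 100.0 * (1.0 - max(counts.values()) / len(lang_tags))
```

A monolingual utterance scores 0, and the score grows as tokens are spread more evenly across languages, which matches the corpus-level values of roughly 23 to 29 in the table.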
| Automatic evaluation | Human evaluation | ||||||||
| Methods | BLEU | Consistency | Novelty | Dist-1 | Dist-2 | Con. | Nov. | E-div. | Flu. |
| TAV | 6.05 | 16.59 | 70.32 | 2.69 | 14.25 | 2.32 | 2.19 | 2.58 | 2.76 |
| TAT | 6.32 | 9.19 | 68.77 | 2.25 | 12.17 | 1.76 | 2.07 | 2.32 | 2.93 |
| MTA | 7.09 | 25.73 | 70.68 | 2.24 | 11.70 | 3.14 | 2.87 | 2.17 | 3.25 |
| CTEG | 9.72 | 39.42 | 75.71 | 5.19 | 20.49 | 3.74 | 3.34 | 3.08 | 3.59 |
| SCTKG(w/o-Senti) | 9.97 | 43.84 | 78.32 | 5.73 | 23.16 | 3.89 | 3.35 | 3.90 | 3.71 |
| SCTKG(Ran-Senti) | 9.64 | 41.89 | 79.54 | 5.84 | 23.10 | 3.80 | 3.48 | 4.29 | 3.67 |
| SCTKG(Gold-Senti) | 11.02 | 42.57 | 78.87 | 5.92 | 23.07 | 3.81 | 3.37 | 3.94 | 3.75 |
| Methods | BLEU | Con. | Nov. | E-div. | Flu. |
| Full model | 11.02 | 3.81 | 3.37 | 3.94 | 3.75 |
| w/o TGA | 10.34 | 3.54 | 3.17 | 3.89 | 3.38 |
| w/o AT | 9.85 | 3.37 | 3.20 | 3.92 | 3.51 |
| Methods | Precision | Recall | Senti-F1 |
| Full model | 0.68 | 0.66 | 0.67 |
| w/o Enc-senti | 0.56 | 0.55 | 0.56 |
| w/o Dec-senti | 0.59 | 0.62 | 0.61 |
| w/o TGA | 0.62 | 0.64 | 0.63 |
| Input topics: Law Education |
| Sentiment label: neu. pos. neg. neg. neu. |
| Output essay: I am a senior high school student. I am in the best high school in our town. But bullying still exist on our campus. Teachers always ignore this phenomenon. What should we do to protect our rights? |
| Input | Example | ... easy to integrate bluetooth devices, and USB devices are recognized ... | |||||||||||||
| | ID | ... | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | ... |
| | Tokens | ... | easy | to | integrate | blue | ##tooth | devices | , | and | usb | devices | are | recognized | ... |
| Boundary Prediction | p_s | - | 0.0005 | - | 0.8040 | 0.8515 | - | - | - | - | 0.9875 | - | - | - | - |
| | p_e | - | - | - | 0.0060 | - | - | 0.9494 | - | - | 0.0171 | 0.8899 | - | - | - |
| | label_s | - | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| | label_e | - | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| Coarse Extraction | Boundary Number | nb_s = 3 (3 true labels: {2, 3, 8}), nb_e = 2 (2 true labels: {5, 9}); nb = max(nb_s, nb_e) = 3 |
| | Top nb Start/End Boundaries | S = {2, 3, 8}, E = {5, 8, 9} |
| | Target Candidates | C_s = {(2, 5), (3, 5), (8, 8)}, C_e = {(3, 5), (8, 8), (8, 9)}; C = {(2, 5), (3, 5), (8, 8), (8, 9)} |
| Fine Extraction | Extending Strategy | C' = {(2, 5), (8, 9)} |
| | Target Number | nt_s = 2 (2 intervals: [2, 3] and [8, 8]), nt_e = 2 (2 intervals: [5, 5] and [9, 9]); nt = round((nt_s + nt_e) / 2) = 2 |
| | Top nt Targets | O = {(2, 5), (8, 9)} |
| Output | Targets | [“integrate bluetooth devices”, “usb devices”] | |||||||||||||
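The coarse-to-fine extraction walk-through above can be sketched in code. This is a minimal illustration, not the paper's implementation: the start/end pairing heuristic, the probability `threshold`, and the `max_len` window are assumptions, chosen only so that the demo reproduces the final spans (2, 5) and (8, 9) from the example.

```python
def extract_targets(ps, pe, threshold=0.5, max_len=4):
    """Pair high-probability start/end boundaries into target spans."""
    # Coarse step: keep token positions whose boundary probability clears the threshold.
    starts = [i for i, p in enumerate(ps) if p >= threshold]
    ends = [i for i, p in enumerate(pe) if p >= threshold]
    # Pair each start with the nearest end at or after it, within a length window.
    spans = []
    for s in starts:
        cands = [e for e in ends if s <= e <= s + max_len]
        if cands:
            spans.append((s, min(cands)))
    # Fine step: drop candidate spans nested inside an already-kept longer span.
    spans.sort(key=lambda se: se[1] - se[0], reverse=True)
    kept = []
    for s, e in spans:
        if not any(ks <= s and e <= ke for ks, ke in kept):
            kept.append((s, e))
    return sorted(kept)

# Demo probabilities mimicking the table: starts peak at 2, 3, 8; ends at 5, 9.
ps = [0.0, 0.0, 0.80, 0.85, 0.0, 0.0, 0.0, 0.0, 0.99, 0.0]
pe = [0.0, 0.0, 0.0, 0.0, 0.0, 0.95, 0.0, 0.0, 0.02, 0.89]
print(extract_targets(ps, pe))  # -> [(2, 5), (8, 9)]
```

On the demo input, the candidate (3, 5) is discarded because it is nested inside (2, 5), mirroring how the fine extraction step prunes the candidate set C down to O.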
| Dataset | Split | #+ | #− | #0 | Total |
| LAPTOP | Train | 987 | 860 | 455 | 2,302 |
| | Test | 339 | 130 | 165 | 634 |
| REST | Train | 2,610 | 1,037 | 667 | 4,314 |
| | Test | 1,524 | 501 | 264 | 2,289 |
| - | - | 703 | 274 | 2,266 | 3,243 |
| Model | LAPTOP | REST | TWITTER | ||||||||
| Prec. | Rec. | F1 | Prec. | Rec. | F1 | Prec. | Rec. | F1 | ||
| Sequence-tagging-based Method | CRF-pipeline | 59.69 | 47.54 | 52.93 | 52.28 | 51.01 | 51.64 | 42.97 | 25.21 | 31.73 |
| CRF-joint | 57.38 | 35.76 | 44.06 | 60.00 | 48.57 | 53.68 | 43.09 | 24.67 | 31.35 | |
| CRF-collapsed | 59.27 | 41.86 | 49.06 | 63.39 | 57.74 | 60.43 | 48.35 | 19.64 | 27.86 | |
| NN-CRF-pipeline | 57.72 | 49.32 | 53.19 | 60.09 | 61.93 | 61.00 | 43.71 | 37.12 | 40.06 | |
| NN-CRF-joint | 55.64 | 34.48 | 45.49 | 61.56 | 50.00 | 55.18 | 44.62 | 35.84 | 39.67 | |
| NN-CRF-collapsed | 58.72 | 45.96 | 51.56 | 62.61 | 60.53 | 61.56 | 46.32 | 32.84 | 38.36 | |
| TAG-pipeline | 65.84 | 67.19 | 66.51 | 71.66 | 76.45 | 73.98 | 54.24 | 54.37 | 54.26 | |
| TAG-joint | 65.43 | 66.56 | 65.99 | 71.47 | 75.62 | 73.49 | 54.18 | 54.29 | 54.20 | |
| TAG-collapsed | 63.71 | 66.83 | 65.23 | 71.05 | 75.84 | 73.35 | 54.05 | 54.25 | 54.12 | |
| UNIFIED | 61.27 | 54.89 | 57.90 | 68.64 | 71.01 | 69.80 | 53.08 | 43.56 | 48.01 | |
| DOER | - | - | 60.35 | - | - | 72.78 | - | - | 51.37 | |
| Span-based Method | Zhou SPAN | 61.40 | 58.20 | 59.76 | 76.20 | 68.20 | 71.98 | 54.84 | 48.44 | 51.44 |
| Hu SPAN-pipeline | 69.46 | 66.72 | 68.06 | 76.14 | 73.74 | 74.92 | 60.72 | 55.02 | 57.69 | |
| Hu SPAN-joint | 67.41 | 61.99 | 64.59 | 72.32 | 72.61 | 72.47 | 57.03 | 52.69 | 54.55 | |
| Hu SPAN-collapsed | 50.08 | 47.32 | 48.66 | 63.63 | 53.04 | 57.85 | 51.89 | 45.05 | 48.11 | |
| Our Model | SPRM | 68.66 | 68.77 | 68.72 | 77.78 | 80.60 | 79.17 | 60.25 | 58.76 | 59.45 |
| Model | LAPTOP | REST |
| SPRM w/o Shared BERT | 53.25 | 70.03 |
| SPRM w/o Private BiLSTMs | 66.72 | 78.78 |
| SPRM w/o Aspect Extraction LSTM | 66.20 | 78.74 |
| SPRM w/o Aspect Classification LSTM | 67.92 | 78.41 |
| SPRM | 68.72 | 79.17 |
| Model | N_para | LAPTOP | REST |
| SPAN-pipeline + BERT-Large | 671M | 68.06 | 74.92 |
| SPAN-joint + BERT-Large | 336M | 64.59 | 72.47 |
| SPAN-joint + BERT-Base | 109M | 59.88 | 68.95 |
| SPRM + BERT-Large | 342M | 69.11 | 79.08 |
| SPRM + BERT-Base | 131M | 68.72 | 79.17 |
| Model | LAPTOP | REST |
| SPRM with CRF | 59.55 | 75.34 |
| SPRM with (Hu et al., 2019) | 66.35 | 78.49 |
| SPRM | 68.72 | 79.17 |
| Model | LAPTOP | REST | TWITTER |
| DE-CNN | 81.59 | - | - |
| TAG | 85.20 | 84.48 | 73.47 |
| SPAN | 83.35 | 82.38 | 75.28 |
| SPM | 84.72 | 86.71 | 69.85 |
| Model | LAPTOP | REST | TWITTER |
| DMMN-SDCM | 77.59 | - | - |
| TAG | 71.42 | 81.80 | 59.76 |
| SPAN | 81.39 | 89.95 | 75.16 |
| SPM | 81.50 | 90.35 | 78.34 |
| Examples | SPAN-pipeline | SPRM |
| 1. All in all, the [food]+ was great (except for the [desserts]−). | [food]+(✓), None (✗) | [food]+(✓), [desserts]− (✓) |
| 2. [Vanison]0 was good but not amazing. | [Vanison]0 (✓) | [Vanison]0 (✓) |
| 3. The [selection of food]+ is excellent (I'm not used to having much choice at restaurants), and the [atmosphere]+ is great. | [selection]+(✓), [food]+(✗), [atmosphere]+(✓) | [selection of food]+(✓), [atmosphere]+(✓) |
| 4. Beware of the [chili signed food items]− not unless you want to call the fire department to douse the flames in your mouth. | [chili]− (✗), [food items]− (✗) | [chili signed food items]− (✓) |
| 5. This mac does come with an [extender cable]0 and I'm using mine right now hoping the [cable]+ will stay nice for the many years I plan on using this mac. | [extender cable]0 (✓), None (✗) | [extender cable]0 (✓) [cable]+(✓) |
| 6. I used [Windows XP]0, [Windows Vista]0, and [Windows 7]0 extensively. | None (✗), [Windows Vista]0 (✓), None (✗) | [Windows XP]0 (✓), [Windows Vista]0 (✓), [Windows 7]0 (✓) |
| 7. The only thing I miss is that my old Alienware laptop had [backlit keys]−. | [backlit]− (✗), [keys]− (✗) | [backlit keys]− (✓) |
| Tasks | ABSA tasks | SA tasks | ||||||
| Datasets | SEM14(LAP) | SEM14(Rest) | Rest15 | Rest16 | Twit | SST2 | SST5 | SE13 |
| Train | 2282 | 3608 | 1204 | 1748 | 6051 | 6920 | 8544 | 6021 |
| Dev | - | - | - | - | - | 872 | 1101 | 890 |
| Test | 632 | 1119 | 542 | 616 | 677 | 1821 | 2210 | 2376 |
| # of Classes | 3 | 3 | 3 | 3 | 3 | 2 | 5 | 3 |
| Datasets | SEM14(LAP) | SEM14(REST) | SEM14(AVG) | REST15 | REST16 | TWITTER | ||||||
| Model | ACC. | F1 | ACC. | F1 | ACC. | F1 | ACC. | F1 | ACC. | F1 | ACC. | F1 |
| ASGCN-DG | 75.55 | 71.05 | 80.77 | 72.02 | 78.16 | 71.54 | 79.89 | 61.89 | 88.99 | 67.48 | 72.15 | 70.40 |
| CDT | 77.19 | 72.99 | 82.30 | 74.02 | 75.09 | 73.51 | - | - | 85.58 | 69.93 | 74.66 | 73.66 |
| BERT-PT | 78.07 | 75.08 | 84.95 | 76.96 | 81.51 | 76.02 | - | - | - | - | - | - |
| SDGCN | 81.35 | 78.34 | 83.57 | 76.47 | 82.46 | 77.41 | - | - | - | - | - | - |
| TNET | 76.54 | 71.75 | 80.69 | 71.27 | 78.62 | 71.51 | - | - | - | - | 74.97 | 73.60 |
| BERT-ADA Rest | 79.14 | 74.93 | 87.89 | 81.05 | 83.52 | 77.99 | - | - | - | - | - | - |
| BERT-ADA Lapt | 80.23 | 75.77 | 86.22 | 79.79 | 83.22 | 77.78 | - | - | - | - | - | - |
| BERT(comp) | 78.26 | 73.35 | 83.50 | 73.33 | 80.88 | 73.34 | 81.20 | 60.11 | 88.25 | 72.06 | 71.09 | 70.81 |
| BERT-GCN(comp) | 80.03 | 75.79 | 85.32 | 78.05 | 82.68 | 76.92 | 85.30 | 66.01 | 90.91 | 75.31 | 73.98 | 71.62 |
| BERT-WGCN | 80.96 | 76.95 | 86.71 | 79.12 | 83.84 | 78.03 | 85.39 | 66.26 | 91.35 | 75.19 | 75.89 | 73.82 |
| Model | Aspect | Sentence (attention weights not reproduced) | Prediction | Label |
| BERT | food | great food but the service was dreadful ! | pos | pos |
| | service | great food but the service was dreadful ! | pos | neg |
| | staff | Our waiter was friendly and it is a shame that he didn't have a supportive staff to work with. | pos | neg |
| BERT-GCN | food | great food but the service was dreadful ! | pos | pos |
| | service | great food but the service was dreadful ! | neg | neg |
| | staff | Our waiter was friendly and it is a shame that he didn't have a supportive staff to work with. | pos | neg |
| BERT-WGCN | food | great food but the service was dreadful ! | pos | pos |
| | service | great food but the service was dreadful ! | neg | neg |
| | staff | Our waiter was friendly and it is a shame that he didn't have a supportive staff to work with. | neg | neg |
| Model | SST-2 | SST-5 | SE13 |
| BCN+CoVe | - | 56.2 | - |
| XLNet | 96.8 | - | - |
| SSAN | 84.2 | 48.1 | 72.2 |
| BERT(comp) | 94.3 | 54.8 | 74.9 |
| BERT-GCN(comp) | 94.3 | 55.0 | 75.2 |
| BERT-WGCN | 94.9 | 56.5 | 77.3 |
| High Importance | Low Importance | ||
| Relation | POS-pairs | Relation | POS-pairs |
| amod | (NOUN, ADJ) | cc | (CCONJ, CCONJ) |
| nsubj | (NOUN, ADJ) | nsubj | (DET, AUX) |
| advmod | (ADV, VERB) | prt | (ADP, VERB) |
| advmod | (ADV, ADJ) | det | (SCONJ,SCONJ) |
| cc | (VERB, CCONJ) | pobj | (ADP, NOUN) |
| csubj | (AUX, VERB) | amod | (ADJ, ADJ) |
| advcl | (VERB, AUX) | amod | (ADV, ADV) |
| prep | (SCONJ, VERB) | det | (DET, DET) |
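The High/Low importance table above suggests that BERT-WGCN assigns each dependency edge a weight based on its relation type and POS pair. A minimal sketch of how such per-edge weights could populate a GCN adjacency matrix follows; the weight values and the `edge_weight` helper are illustrative assumptions, not the learned parameters.

```python
# Illustrative (relation, (head POS, dependent POS)) -> weight table, echoing the
# high/low importance pairs above; real weights would be learned, not hand-set.
weights = {
    ("amod", ("NOUN", "ADJ")): 0.9,
    ("nsubj", ("NOUN", "ADJ")): 0.8,
    ("cc", ("CCONJ", "CCONJ")): 0.1,
}

def edge_weight(relation, pos_pair, default=0.5):
    # Fall back to a neutral weight for unlisted (relation, POS-pair) combinations.
    return weights.get((relation, pos_pair), default)

def build_adjacency(n_tokens, edges):
    """edges: iterable of (head_idx, dep_idx, relation, (head_pos, dep_pos))."""
    adj = [[0.0] * n_tokens for _ in range(n_tokens)]
    for h, d, rel, pp in edges:
        adj[h][d] = adj[d][h] = edge_weight(rel, pp)  # undirected, weighted edge
    for i in range(n_tokens):
        adj[i][i] = 1.0  # self-loops, standard in GCN adjacency matrices
    return adj
```

A graph convolution layer would then aggregate token representations with this weighted adjacency instead of the usual binary one, so that, for example, an `amod` edge between a noun and an adjective contributes more than a `cc` edge between two conjunctions.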
| Reference | DA | What provoked Lindsay Lohan to such very strange actions is currently completely unclear. | Now they have come to an agreement. |
| BEAM | 91.7 | What caused Lindsay Lohan to do such strange things is not clear at the moment. +What provoked Lindsay Lohan's strange actions is not clear at the moment. +What has provoked Lindsay Lohan's strange actions is not clear at the moment. | Now they've made a deal. +Now they've reached a deal. +Now they made a deal. |
| SAMPLED | 85.0 | What prompted Lindsay Lohan's most extraordinary actions? +What made Lindsay Lohan act so weird? +What inspired Lindsay Lohan to do such odd things? | And now they've agreed. +And now they've agreed. +They've reached an agreement. |
| LASER | 90.1 | What provoked Lindsay Lohan to act so strangely is not clear at the moment. +It's not clear what provoked Lindsay Lohan to act so strangely. +It's not clear what prompted Lindsay Lohan to act so strangely. | Now they've reached a deal. +Now they've agreed. +Now they've agreed |
| TREELSTM | 88.0 | What provoked Lindsay Lohan to do such a strange thing is not clear at the moment. +It is not clear at this time what provoked Lindsay Lohan to do such strange things. +The reason that Lindsay Lohan has been provoked by these very strange actions is not clear at the moment. | Now they made a deal. +Now they've made a deal. +They've already made a deal. |
| HUMAN | 95.2 | It is currently totally unclear what made Lindsay Lohan do such strange things. +The cause of Lindsay Lohan's strange actions is really not clear at the moment. +The reasons behind Lindsay Lohan's such bizarre acts are completely obscure for now. | They have now come to an agreement. +An agreement has now been made. +They have reached an agreement. |
| n | Method | DS_BOW | DS_tree | BLEU |
| 0 | none | - | - | 29.8 |
| 5 | RANDOM | 0.10 | 0.01 | 34.8 |
| BEAM | 0.22 | 0.30 | 37.0 | |
| LASER | 0.24 | 0.33 | 37.5 | |
| TREELSTM | 0.28 | 0.47 | 37.7 | |
| SAMPLED | 0.41 | 0.56 | 40.1 | |
| 5* | SAMPLED | 0.40 | 0.55 | 47.0 |
| Constraints | 0.19 | 0.30 | 56.5 | |
| HUMAN | 0.80 | 0.68 | 48.9 | |
| 20 | RANDOM | 0.10 | 0.01 | 34.8 |
| BEAM | 0.27 | 0.37 | 39.7 | |
| LASER | 0.31 | 0.45 | 41.3 | |
| TREELSTM | 0.32 | 0.53 | 41.0 | |
| SAMPLED | 0.51 | 0.65 | 47.3 | |
| ∞ | Constraints | 0.21 | 0.28 | 46.4 |
| MT submissions | 0.37 | 0.51 | - |
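The table above evaluates BLEU after enlarging the reference set with n paraphrases. A minimal multi-reference sentence-BLEU sketch (clipped n-gram precision against the union of references, with a brevity penalty) shows why extra paraphrases can only raise the clipped counts. This is an illustration only: real evaluations use sacreBLEU, and the `1e-9` smoothing floor here is an assumption.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def multi_ref_bleu(hyp, refs, max_n=4):
    hyp = hyp.split()
    refs = [r.split() for r in refs]
    precisions = []
    for n in range(1, max_n + 1):
        h = ngrams(hyp, n)
        # Clip each hypothesis n-gram by its maximum count over ALL references,
        # so adding a paraphrased reference can only increase the overlap.
        max_ref = Counter()
        for r in refs:
            for g, c in ngrams(r, n).items():
                max_ref[g] = max(max_ref[g], c)
        overlap = sum(min(c, max_ref[g]) for g, c in h.items())
        total = max(sum(h.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smoothed to avoid log(0)
    # Brevity penalty against the closest reference length.
    ref_len = min((abs(len(r) - len(hyp)), len(r)) for r in refs)[1]
    bp = 1.0 if len(hyp) > ref_len else math.exp(1 - ref_len / max(len(hyp), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

For instance, scoring "Now they made a deal." against only the reference "Now they have come to an agreement." yields a near-zero BLEU, while adding the paraphrase "Now they made a deal." to the reference set lifts it to 1.0, which is exactly the effect the n = 5 and n = 20 rows quantify.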
| Approach | Method | System Gains | Segment Gains | System de-en | |||||
| Ave. | Min | Max | Ave. | Min | Max | ||||
| Baselines (+5) | BEAM | 0.020 | -0.006 | 0.059 | 0.013 | -0.001 | 0.029 | 0.040 | 0.021 |
| RANDOM | 0.017 | 0.000 | 0.046 | 0.007 | -0.002 | 0.017 | 0.031 | 0.017 | |
| SAMPLED | 0.024 | -0.002 | 0.067 | 0.017 | -0.004 | 0.044 | 0.044 | 0.043 | |
| Diversity (+1) | LASER | 0.017 | -0.000 | 0.048 | 0.009 | -0.003 | 0.025 | 0.034 | 0.022 |
| TREELSTM | 0.017 | -0.000 | 0.048 | 0.011 | -0.002 | 0.027 | 0.031 | 0.011 | |
| Diversity (+5) | LASER | 0.020 | -0.004 | 0.056 | 0.011 | -0.002 | 0.033 | 0.040 | 0.022 |
| TREELSTM | 0.020 | -0.004 | 0.057 | 0.013 | -0.004 | 0.030 | 0.044 | 0.008 | |
| Output-specific (+1) | LASER | 0.012 | -0.006 | 0.041 | 0.006 | -0.001 | 0.016 | 0.032 | 0.015 |
| TREELSTM | 0.014 | -0.007 | 0.041 | 0.007 | -0.005 | 0.016 | 0.039 | 0.011 | |
| Constraints | 4-grams | 0.025 | -0.002 | 0.061 | 0.002 | -0.097 | 0.072 | -0.027 | 0.035 |
| Human | Multiple | 0.079 | 0.010 | 0.194 | 0.117 | 0.072 | 0.145 | - | - |
| WMT-19 best | - | - | - | - | - | - | - | 0.039 | 0.037 |
| N | newly matched ngrams | missing ngrams |
| 1 | a (494) of (480), (442) to (370) in (364) The (315) the (273) is (204) for (196) has (196) on (193) was (179) have (171) that (166) be (155) at (145) been (140) with (138) and (134) | to (921) in (921) on (870) is (780) a (786) for (568) The (556) with (509) it (508) has (505) are (482) by (480) was (478) have (449) - (443) at (437) as (426) which (386) |
| 4 | U . S . (63) the U . S (39) , as well as (19) p . m . (15) for the first time (13) in accordance with the (12) the United States , (11) in the United States (10) a member of the (10) of the United States (9) The U . S (9) . m . on (9) , in order to (9) the United States and (8) , of course , (8) . S . Navy (8) . m . , (8) the Chinese Academy of (8) Chinese Academy of Engineering (8) the renaming of the (7) | U . S . (136) , according to the (99) , ” he said (77) the U . S (55) of the United States (48) of the Ministry of (39) the end of the (38) , ” said the (37) same time , the (36) , such as the (36) as well as the (35) ( Xinhua ) - (34) and so on . (33) , he said . (32) the head of the (32) , the head of (31) , as well as (30) on the basis of (30) , and so on (29) |
| depth | no leaves | type/token | with leaves |
| 1 | 16 | 0% | 16 |
| 2 | 207,794 | 1.0% | 207,794 |
| 3 | 2,158,114 | 11.2% | 2,629,907 |
| 4 | 6,089,874 | 31.6% | 10,631,249 |
| 5 | 8,865,720 | 46.1% | 14,102,645 |
| ∞ | 13,054,272 | 68.1% | 17,362,448 |
| Approach | Method | Correlation ||
| | | System | Segment |
| Baseline | (sentence)BLEU | 0.895 | 0.026 |
| Baselines (+5) | BEAM | 0.934 | 0.048 |
| RANDOM | 0.926 | 0.043 | |
| SAMPLED | 0.939 | 0.069 | |
| Diversity (+1) | LASER | 0.929 | 0.048 |
| TREELSTM | 0.926 | 0.037 | |
| Diversity (+5) | LASER | 0.935 | 0.049 |
| TREELSTM | 0.939 | 0.034 | |
| Constraints | 4-gram | 0.933 | 0.064 |
| Human | 0.948 | 0.063 | |
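The correlation columns in these tables are Pearson r between a metric's scores and human direct-assessment judgments, computed over systems (system-level) or individual translations (segment-level). A minimal sketch follows; the example inputs are illustrative, not the paper's data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# System-level use: one metric score and one human DA score per MT system.
metric_scores = [0.31, 0.28, 0.35, 0.22]  # illustrative values
human_da = [68.2, 65.0, 70.1, 60.3]       # illustrative values
r = pearson(metric_scores, human_da)
```

With only a handful of systems per language pair (the "(16)", "(12)", ... counts in the headers), system-level r is typically high for every metric, while segment-level r over tens of thousands of sentences is much lower, which matches the gap between the 0.9-range and 0.0-0.4-range tables above.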
| Approach | Method | de-en (16) | fi-en (12) | gu-en (12) | kk-en (11) | lt-en (11) | ru-en (14) | zh-en (15) | Ave |
| Baseline | BLEU | 0.890 | 0.985 | 0.799 | 0.943 | 0.969 | 0.862 | 0.888 | 0.905 |
| Paraphrase baselines (+5) | BEAM | 0.928 | 0.984 | 0.793 | 0.961 | 0.986 | 0.921 | 0.900 | 0.925 |
| RANDOM | 0.916 | 0.986 | 0.805 | 0.957 | 0.983 | 0.908 | 0.898 | 0.922 | |
| SAMPLED | 0.937 | 0.984 | 0.798 | 0.966 | 0.989 | 0.929 | 0.902 | 0.929 | |
| Diversity (+1) | LASER | 0.919 | 0.987 | 0.799 | 0.957 | 0.981 | 0.909 | 0.904 | 0.922 |
| TREELSTM | 0.921 | 0.985 | 0.800 | 0.958 | 0.982 | 0.910 | 0.901 | 0.922 | |
| Diversity (+5) | LASER | 0.934 | 0.985 | 0.795 | 0.963 | 0.987 | 0.918 | 0.896 | 0.925 |
| TREELSTM | 0.933 | 0.982 | 0.796 | 0.964 | 0.987 | 0.918 | 0.898 | 0.925 | |
| Constraints | 4-grams | 0.922 | 0.983 | 0.809 | 0.963 | 0.989 | 0.924 | 0.921 | 0.930 |
| WMT-19 best | 0.950** (YiSi-1_SRL) | 0.995 (METEOR) | 0.993*** (YiSi-0) | 0.998*** (WMDO) | 0.989* (ESIM) | 0.979** (YiSi-1) | 0.988*** (ESIM) | 0.985 |
| Approach | Method | de-en (32000) | fi-en (23952) | gu-en (12192) | kk-en (11000) | lt-en (11000) | ru-en (28000) | zh-en (30000) | Ave |
| Baseline | sentenceBLEU | 0.055 | 0.228 | 0.175 | 0.368 | 0.251 | 0.114 | 0.317 | 0.215 |
| Paraphrase | BEAM | 0.061 | 0.250 | 0.189 | 0.371 | 0.281 | 0.129 | 0.317 | 0.228 |
| baselines | RANDOM | 0.056 | 0.240 | 0.184 | 0.374 | 0.269 | 0.122 | 0.315 | 0.223 |
| (+5) | SAMPLED | 0.073 | 0.251 | 0.192 | 0.374 | 0.295 | 0.127 | 0.313 | 0.232 |
| Diversity | LASER | 0.061 | 0.244 | 0.187 | 0.368 | 0.276 | 0.121 | 0.314 | 0.225 |
| (+1) | TREELSTM | 0.061 | 0.242 | 0.185 | 0.383 | 0.278 | 0.123 | 0.315 | 0.227 |
| Diversity | LASER | 0.062 | 0.245 | 0.187 | 0.372 | 0.284 | 0.123 | 0.315 | 0.227 |
| (+5) | TREELSTM | 0.065 | 0.247 | 0.195 | 0.376 | 0.281 | 0.119 | 0.314 | 0.228 |
| Constraints | 4-grams | 0.090 | 0.242 | 0.161 | 0.271 | 0.323 | 0.122 | 0.314 | 0.218 |
| WMT-19 best | 0.199*** (YiSi-1_SRL) | 0.346*** (YiSi-1) | 0.306*** (YiSi-1) | 0.442*** (YiSi-1_SRL) | 0.380*** (YiSi-1_SRL) | 0.22*** (YiSi-1_SRL) | 0.431*** (YiSi-1_SRL) | 0.333 |
| Approach | Method | System | Segment | ||||
| Ave. | Min | Max | Ave. | Min | Max | ||
| Baselines (+5) | BEAM | 0.012 | 0.002 | 0.036 | 0.016 | 0.007 | 0.027 |
| RANDOM | 0.009 | 0.002 | 0.028 | 0.010 | 0.004 | 0.022 | |
| SAMPLED | 0.013 | 0.002 | 0.038 | 0.018 | 0.009 | 0.031 | |
| Diversity (+1) | LASER | 0.009 | 0.002 | 0.025 | 0.011 | 0.005 | 0.017 |
| TREELSTM | 0.009 | 0.001 | 0.025 | 0.011 | 0.004 | 0.019 | |
| Diversity (+5) | LASER | 0.014 | 0.003 | 0.034 | 0.015 | 0.007 | 0.021 |
| TREELSTM | 0.015 | 0.002 | 0.039 | 0.016 | 0.008 | 0.030 | |
| Output-specific (+1) | LASER | 0.007 | 0.000 | 0.020 | 0.009 | 0.003 | 0.018 |
| TREELSTM | 0.010 | 0.002 | 0.020 | 0.013 | 0.004 | 0.021 | |
| Constraints | 4-grams | 0.004 | -0.050 | 0.027 | -0.002 | -0.084 | 0.043 | |
| Approach | Method | de-en (16) | fi-en (12) | gu-en (12) | kk-en (11) | lt-en (11) | ru-en (14) | zh-en (15) | Ave |
| Baseline | METEOR | 0.909 | 0.993 | 0.883 | 0.969 | 0.972 | 0.825 | 0.941 | 0.927 |
| Paraphrase baselines (+5) | BEAM | 0.927 | 0.994 | 0.887 | 0.976 | 0.983 | 0.862 | 0.949 | 0.940 |
| RANDOM | 0.920 | 0.994 | 0.889 | 0.974 | 0.981 | 0.853 | 0.945 | 0.937 | |
| SAMPLED | 0.925 | 0.995 | 0.891 | 0.978 | 0.982 | 0.864 | 0.945 | 0.940 | |
| Diversity (+1) | LASER | 0.924 | 0.995 | 0.886 | 0.975 | 0.979 | 0.851 | 0.948 | 0.937 |
| TREELSTM | 0.923 | 0.994 | 0.889 | 0.974 | 0.979 | 0.850 | 0.947 | 0.937 | |
| Diversity (+5) | LASER | 0.932 | 0.995 | 0.890 | 0.978 | 0.983 | 0.860 | 0.950 | 0.941 |
| TREELSTM | 0.930 | 0.995 | 0.894 | 0.977 | 0.983 | 0.864 | 0.950 | 0.942 | |
| Constraints | 4-grams | 0.922 | 0.990 | 0.910 | 0.983 | 0.988 | 0.775 | 0.949 | 0.931 |
| WMT-19 best | 0.950 (YiSi-1_SRL) | 0.995 (METEOR) | 0.993 (YiSi-0) | 0.998 (WMDO) | 0.989 (ESIM) | 0.979 (YiSi-1) | 0.988 (ESIM) | 0.985 |
| Approach | Method | de-en (32000) | fi-en (23952) | gu-en (12192) | kk-en (11000) | lt-en (11000) | ru-en (28000) | zh-en (30000) | Ave |
| Baseline | sentenceMETEOR | 0.061 | 0.243 | 0.197 | 0.356 | 0.275 | 0.145 | 0.351 | 0.233 |
| Paraphrase baselines (+5) | BEAM | 0.081 | 0.257 | 0.219 | 0.383 | 0.285 | 0.152 | 0.360 | 0.248 |
| RANDOM | 0.072 | 0.254 | 0.219 | 0.364 | 0.281 | 0.156 | 0.356 | 0.243 | |
| SAMPLED | 0.080 | 0.262 | 0.228 | 0.375 | 0.292 | 0.160 | 0.360 | 0.251 | |
| Diversity (+1) | LASER | 0.079 | 0.258 | 0.209 | 0.370 | 0.283 | 0.150 | 0.359 | 0.244 |
| TREELSTM | 0.074 | 0.255 | 0.210 | 0.374 | 0.284 | 0.149 | 0.357 | 0.243 | |
| Diversity (+5) | LASER | 0.078 | 0.257 | 0.214 | 0.377 | 0.293 | 0.158 | 0.358 | 0.248 |
| TREELSTM | 0.074 | 0.259 | 0.228 | 0.378 | 0.287 | 0.153 | 0.361 | 0.249 | |
| Constraints | 4-grams | 0.098 | 0.237 | 0.193 | 0.272 | 0.318 | 0.145 | 0.351 | 0.230 |
| WMT-19 best | 0.20 (YiSi-1_SRL) | 0.35 (YiSi-1) | 0.31 (YiSi-1) | 0.44 (YiSi-1_SRL) | 0.38 (YiSi-1_SRL) | 0.22 (YiSi-1_SRL) | 0.43 (YiSi-1_SRL) | 0.333 |
| Reference | Anyway the application phase for each of the 20 properties, which the municipality is offering at better conditions and prices - similar to the earlier local model, has now ended. |
| LASER | However, the application phase for each of the 20 properties offered by the municipality at better conditions and prices, similar to the previous local model, is now over. However, the application phase for each of the 20 properties offered by the municipality at better conditions and prices, similar to the previous local model, is now over. However, the application phase for each of the 20 properties offered by the municipality at better conditions and prices, similar to the previous local model, is now over. |
| TREELSTM | However, the application phase for each of the 20 properties offered by the municipality at better conditions and prices - similar to the previous local model - is now closed. In any case, the application phase for each of the 20 properties offered by the municipality at better conditions and prices - similar to the previous local model - has now come to an end. However, for each of the 20 properties that the municipality offers at better conditions and prices, similar to the previous local model, the application phase has now come to an end. |
| Reference | And there is a need for discussion. |
| LASER | And we need to discuss it. And we need to have a discussion. And that's what we need to discuss. |
| TREELSTM | And there's a need to discuss it. And we need to discuss it. It is necessary to discuss it. |
| Reference | Why are there increasingly conflicts, despite successful integration? |
| LASER | Why is there ever more conflict, despite successful integration? Why is there ever more conflict, despite successful integration? Why are there more and more conflicts, despite the success of integration? |
| TREELSTM | Why is there more and more conflict, despite successful integration? Why are conflicts growing despite successful integration? Why are there ever more conflicts, despite the success of integration? |
| Reference | Professor Doctor Aladin El-Mafaalani is standing before them. |
| LASER | In front of them is Professor Dr. Aladin El-Mafaalani. Professor Dr. Aladdin El-Mafaalani stands in front of them. Professor Dr. Aladdin El-Mafaalani stands in front of them. |
| TREELSTM | Professor Aladdin El-Mafaalani stands before them. In front of them is Professor Aladdin El-Mafaalani. There's Professor Aladdin El-Mafaalani in front of them. |
| Reference | However, the operation took some time. |
| LASER | But the surgery took a while. But the operation took a while. But the surgery took a while. |
| TREELSTM | However, the surgery lasted a while. But the surgery lasted a while. However, it took some time to operate. |
| Reference | According to the police, it appears the fire was monitored for a long time, but the embers were inadvertently left behind. |
| LASER | According to the police, the fire seems to have been under long surveillance, but the coals were inadvertently left behind. According to the police, the fire seems to have been under long surveillance, but the coals have been inadvertently left behind. According to the police, the fire seems to have been under long surveillance, but the coals were inadvertently left behind. |
| TREELSTM | Police say the fire seems to have been under observation for a long time, but the coals are accidentally left behind. The police seem to have been watching the fire for a long time, but the coals were accidentally left behind. According to the police, the fire seems to have been under observation for a long time, but the coals were accidentally left behind. |
| Reference | What is the situation on the island now? |
| LASER | How's the island now? What's happening on this island now? What's the status on the island these days? |
| TREELSTM | What's it like on the island? What's going on on the island? So what's the status on the island? |
| Error Type | Form | P. (%) |
| Deasciification | ill-formed | 44.94 |
| Accent | ill-formed | 11.22 |
| Proper Name | ill-formed | 9.20 |
| Intentional Char | ill-formed | 9.02 |
| Separation | ill-formed | 7.68 |
| Foreign Word | well-formed | 4.92 |
| Unintentional Char | ill-formed | 4.69 |
| Social Media Phrase | well-formed | 2.50 |
| Abbreviation | well-formed | 2.37 |
| Adjacent | ill-formed | 1.36 |
| Neologism | well-formed | 0.96 |
| Vowel | ill-formed | 0.63 |
| Phonetic Substitution | ill-formed | 0.52 |
| Tool | Precision | Recall | F1-score |
| TRmorph | 0.977 | 0.822 | 0.893 |
| Zemberek | 0.985 | 0.748 | 0.850 |
| Model | C | NC | Acc |
| Zemberek-Spellchecker (ZS) | 0.409 | 0.741 | 0.415 |
| Zemberek-Normalizer (ZN) | 0.714 | 0.638 | 0.713 |
| Edit distance (ED) | 0.373 | 0.476 | 0.375 |
| Rule-based (RB) | 0.602 | 0.724 | 0.605 |
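The "Edit distance (ED)" row above is a lexicon-lookup baseline: an out-of-vocabulary token is replaced by its closest in-vocabulary word. A minimal sketch, assuming plain Levenshtein distance and a toy lexicon (both illustrative, not the evaluated system):

```python
def levenshtein(a, b):
    """Edit distance between strings a and b (insert/delete/substitute, cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normalize(token, lexicon):
    """Return the token itself if in-vocabulary, else the closest lexicon word."""
    if token in lexicon:
        return token
    return min(lexicon, key=lambda w: levenshtein(token, w))

# Toy Turkish lexicon for illustration only.
lexicon = ["tamam", "hayat", "gerek", "birkaç"]
```

Such a baseline handles unintentional-character errors like "gerel" → "gerek" well, but, consistent with the per-error-type scores below, it has no way to recover vowel-dropped forms like "tmm" reliably once the lexicon grows, and it fails completely on separation and proper-name errors.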
| Error Type | ZS | ZN | ED | RB |
| Accent | 0.295 | 0.608 | 0.226 | 0.399 |
| Adjacent | 0.0 | 0.143 | 0.531 | 0.0 |
| Deascii | 0.407 | 0.871 | 0.433 | 0.858 |
| Int. char. | 0.667 | 0.683 | 0.448 | 0.361 |
| Phon. sub. | 0.435 | 0.391 | 0.391 | 0.0 |
| Proper | 0.406 | 0.009 | 0.0 | 0.004 |
| Separation | 0.0 | 0.479 | 0.0 | 0.0 |
| Unint. char. | 0.534 | 0.507 | 0.507 | 0.137 |
| Vowel | 0.045 | 0.636 | 0.091 | 0.182 |
| Tool | Precision | Recall | F1-Score |
| TRmorph | 0.881 | 0.986 | 0.930 |
| Zemberek | 0.840 | 0.991 | 0.909 |
| Error Types | Group Name Tag | Wrong | Corrected |
| Abbreviation | OOV-well_formed-abbr | kib | kib (kendine iyi bak, “take care of yourself”) |
| Accent | OOV-ill_formed-accent | geliyom, dimi | geliyorum (I am coming), değil mi (is that so) |
| Adjacent | OOV-ill_formed-joint | bir kaç | birkaç (a few) |
| Deasciification | OOV-ill_formed-deascii | calismak, gṭmek | çalışmak (to work), gitmek (to go) |
| Foreign Word | OOV-well_formed-foreign | Twitter,iPhone | Twitter,iPhone |
| Intentional Char | OOV-ill_formed-int | canımmm, haydiii | canım (sweetheart), haydi (come on) |
| Neologism | OOV-well_formed-neologism | kardo | kardo |
| Phonetic Substitution | OOV-ill_formed-phonetic_sub | geli$im | gelişim (development) |
| Proper Name | OOV-ill_formed-proper | ahmetten | Ahmet’ten |
| Separation | OOV-ill_formed-sep | hersey | her şey (everything) |
| Social Media Phrase | OOV-well_formed-social | ahahnaha,sdfsdfsdf,yha,#hashtag | ahahnaha,sdfsdfsdf,yha,#hashtag |
| Unintentional Char | OOV-ill_formed-unint | gerel, haayt | gerek (need), hayat (life) |
| Vowel | OOV-ill_formed-vowel | tmm, fln | tamam (okay), falan (so and so) |