{
"paper_id": "S17-2016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:30:16.394046Z"
},
"title": "HCTI at SemEval-2017 Task 1: Use convolutional neural network to evaluate Semantic Textual Similarity",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Shao",
"suffix": "",
"affiliation": {},
"email": "yang.shao.kn@hitachi.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our convolutional neural network (CNN) system for the Semantic Textual Similarity (STS) task. We calculated the semantic similarity score between two sentences by comparing their semantic vectors. We generated a semantic vector by max pooling over every dimension of all word vectors in a sentence. There are two key design tricks used by our system. One is that we trained a CNN to transfer GloVe word vectors to a more proper form for the STS task before pooling. Another is that we trained a fully-connected neural network (FCNN) to transfer the difference of two semantic vectors to the probability distribution over similarity scores. All hyperparameters were empirically tuned. In spite of the simplicity of our neural network system, we achieved good accuracy and ranked 3rd on the primary track of SemEval-2017.",
"pdf_parse": {
"paper_id": "S17-2016",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our convolutional neural network (CNN) system for the Semantic Textual Similarity (STS) task. We calculated the semantic similarity score between two sentences by comparing their semantic vectors. We generated a semantic vector by max pooling over every dimension of all word vectors in a sentence. There are two key design tricks used by our system. One is that we trained a CNN to transfer GloVe word vectors to a more proper form for the STS task before pooling. Another is that we trained a fully-connected neural network (FCNN) to transfer the difference of two semantic vectors to the probability distribution over similarity scores. All hyperparameters were empirically tuned. In spite of the simplicity of our neural network system, we achieved good accuracy and ranked 3rd on the primary track of SemEval-2017.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic Textual Similarity (STS) is the task of determining the degree of semantic similarity between two sentences. The STS task is a building block of many natural language processing (NLP) applications and has therefore received a significant amount of attention in recent years. STS tasks at SemEval have been held from 2012 to 2017 (Cer et al., 2017). Successfully estimating the degree of semantic similarity between two sentences requires a very deep understanding of both sentences. Well-performing STS methods can be applied to many other natural language understanding tasks, including paraphrasing, entailment detection, answer selection, hypothesis evidencing, machine translation (MT) evaluation and quality estimation, summarization, question answering (QA) and short answer grading.",
"cite_spans": [
{
"start": 335,
"end": 353,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Measuring sentence similarity is challenging for two reasons. One is the variability of linguistic expression and the other is the limited amount of annotated training data. Therefore, conventional NLP approaches, such as sparse, hand-crafted features, are difficult to use. However, neural network systems (He et al., 2015a; He and Lin, 2016) can alleviate data sparseness with pre-training and distributed representations. We propose a convolutional neural network system with 5 components: 1) Enhance GloVe word vectors by adding hand-crafted features. 2) Transfer the enhanced word vectors to a more proper form with a convolutional neural network. 3) Max-pool over every dimension of all word vectors to generate a semantic vector. 4) Generate a semantic difference vector by concatenating the element-wise absolute difference and the element-wise multiplication of the two semantic vectors. 5) Transfer the semantic difference vector to the probability distribution over similarity scores with a fully-connected neural network.",
"cite_spans": [
{
"start": 306,
"end": 324,
"text": "(He et al., 2015a;",
"ref_id": "BIBREF10"
},
{
"start": 325,
"end": 342,
"text": "He and Lin, 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1 provides an overview of our system. The two sentences to be semantically compared are first pre-processed as described in subsection 2.1. Then the CNN described in subsection 2.2 combines the word vectors from each sentence into an appropriate sentence-level embedding. After that, the methods described in subsection 2.3 are used to compute representations that compare paired sentence-level embeddings. Then, a fully-connected neural network (FCNN) described in subsection 2.4 transfers the semantic difference vector to a probability distribution over similarity scores. All hyperparameters in our system were empirically tuned for the STS task and are shown in Table 1. We implemented our neural network system using Keras 1 (Chollet, 2015) and TensorFlow 2 (Abadi et al., 2016).",
"cite_spans": [
{
"start": 758,
"end": 773,
"text": "(Chollet, 2015)",
"ref_id": null
},
{
"start": 792,
"end": 812,
"text": "(Abadi et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 21,
"end": 29,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 690,
"end": 697,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "Several text preprocessing operations were performed before feature engineering: 1) All punctuation is removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-process",
"sec_num": "2.1"
},
{
"text": "2) All words are lower-cased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-process",
"sec_num": "2.1"
},
{
"text": "3) All sentences are tokenized with the Natural Language Toolkit (NLTK) (Bird et al., 2009). 4) All words are replaced by pre-trained GloVe word vectors (Common Crawl, 840B tokens) (Pennington et al., 2014). Words that do not exist in the pre-trained embeddings are set to the zero vector.",
"cite_spans": [
{
"start": 66,
"end": 85,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF7"
},
{
"start": 176,
"end": 201,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-process",
"sec_num": "2.1"
},
{
"text": "5) All sentences are padded to a static length l = 30 with zero vectors (He et al., 2015a).",
"cite_spans": [
{
"start": 72,
"end": 90,
"text": "(He et al., 2015a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-process",
"sec_num": "2.1"
},
{
"text": "Several hand-crafted features are added to enhance the GloVe word vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-process",
"sec_num": "2.1"
},
{
"text": "1) If a word appears in both sentences, add a TRUE flag to the word vector, otherwise, add a FALSE flag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-process",
"sec_num": "2.1"
},
{
"text": "2) If a word is a number, and the same number appears in the other sentence, add a TRUE flag to the word vector of the matching number in each sentence, otherwise, add a FALSE flag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-process",
"sec_num": "2.1"
},
{
"text": "3) The part-of-speech (POS) tag of every word according to NLTK is added as a one-hot vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-process",
"sec_num": "2.1"
},
{
"text": "Our CNN consists of n = 300 one-dimensional filters. The length of the filters is set to be the same as the dimension of the enhanced word vectors. The activation function of the CNN is set to relu (Nair and Hinton, 2010). We did not use any regularization or dropout. Early stopping, triggered by model performance on validation data, was used to avoid overfitting. The number of layers is set to 1. We used the same model weights to transfer each of the words in a sentence. Sentence-level embeddings are calculated by max pooling (Scherer et al., 2010) over every dimension of the transformed word-level embeddings.",
"cite_spans": [
{
"start": 201,
"end": 224,
"text": "(Nair and Hinton, 2010)",
"ref_id": "BIBREF13"
},
{
"start": 539,
"end": 561,
"text": "(Scherer et al., 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional neural network (CNN)",
"sec_num": "2.2"
},
{
"text": "To calculate the semantic similarity score of two sentences, we generate a semantic difference vector by concatenating the element-wise absolute difference and the element-wise multiplication of the corresponding paired sentence-level embeddings. It is computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of semantic vectors",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "SDV = (|SV_1 \u2212 SV_2|, SV_1 \u2022 SV_2)",
"eq_num": "(1)"
}
],
"section": "Comparison of semantic vectors",
"sec_num": "2.3"
},
{
"text": "Here, SDV is the semantic difference vector, SV_1 and SV_2 are the semantic vectors of the two sentences, and \u2022 is the Hadamard product, which generates the element-wise multiplication of the two semantic vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of semantic vectors",
"sec_num": "2.3"
},
{
"text": "An FCNN is used to transfer the semantic difference vector (600 dimensions) to a probability distribution over the six similarity labels used by STS. The number of layers is set to 2. The first layer uses 300 units with a tanh activation function. The second layer produces the similarity label probability distribution with 6 units and a softmax activation function. We train without using regularization or dropout.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully-connected neural network (FCNN)",
"sec_num": "2.4"
},
{
"text": "We randomly split each dataset file of SemEval 2012-2015 (Agirre et al., 2012, 2013, 2014, 2015) into ten parts. We used the data preparation from (Baudis et al., 2016). We used 90% of the pairs in each individual dataset file for training and the other 10% for validation. We tested our model on the English dataset of SemEval-2016 (Agirre et al., 2016). Our objective function is the Pearson correlation coefficient computed over each batch. ADAM was used as the gradient descent optimization method, with all parameters set to the values suggested by (Kingma and Ba, 2015); he_uniform (He et al., 2015b) was used as the initial function of the layers. We ran the experiment 8 times and chose the model that achieved the best performance on the validation dataset. Our system obtained a Pearson correlation coefficient of 0.7192\u00b10.0062. We also used the same model design to take part in all tracks of SemEval-2017. We submitted two runs: one with machine translation (MT) and another without (non-MT). In the MT run, we translated all the other languages in the test dataset into English with Google Translate 3 and used the English model to evaluate all similarity scores. For the monolingual tracks, we also tried the non-MT run, in which we trained the models directly on the English, Spanish and Arabic data. Here, we independently trained another English model for each run. The difference in English-English performance between the MT and non-MT runs is caused by the random shuffling of data during training.",
"cite_spans": [
{
"start": 57,
"end": 77,
"text": "(Agirre et al., 2012",
"ref_id": "BIBREF5"
},
{
"start": 78,
"end": 100,
"text": "(Agirre et al., , 2013",
"ref_id": "BIBREF4"
},
{
"start": 101,
"end": 123,
"text": "(Agirre et al., , 2014",
"ref_id": "BIBREF2"
},
{
"start": 124,
"end": 146,
"text": "(Agirre et al., , 2015",
"ref_id": "BIBREF1"
},
{
"start": 198,
"end": 219,
"text": "(Baudis et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 385,
"end": 406,
"text": "(Agirre et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 593,
"end": 611,
"text": "(He et al., 2015b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "3"
},
{
"text": "We also trained another English model with the same design to evaluate the STS benchmark dataset (Cer et al., 2017) 4 . We used only the Train part for training and the Dev. part for fine-tuning. We also ran our system without any hand-crafted features. The pure sentence-representation system also achieved good accuracy.",
"cite_spans": [
{
"start": 93,
"end": 113,
"text": "(Cer et al., 2017) 4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "3"
},
{
"text": "The differences between our model's performance and that of the best participating system are relatively small for all tracks except tracks 4b and 6. We note that the sentences in track 4b are significantly longer than the sentences in other tracks. We speculate that the results of our system in track 4b were pulled down by the decision to use static padding of length 30 within our model. Another observable trend is that the results of the non-MT runs were likely harmed by the smaller amounts of available training data. We had over 10,000 training pairs for English, but only 1,634 pairs in Spanish and 1,104 in Arabic. Correspondingly, for our non-MT models, we achieved our best Pearson correlation scores on English, with diminished results on Spanish and our worst results on Arabic. Notably, the results obtained by combining our English model with MT to handle Spanish and Arabic were not affected by the limited amount of training data for these two languages and provided better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "We proposed a simple convolutional neural network system for the STS task. First, it uses a convolutional neural network to transfer GloVe word vectors enhanced with hand-crafted features. Then, it calculates a semantic vector representation of each sentence by max pooling over every dimension of its transformed word vectors. After that, it generates a semantic difference vector between two paired sentences by concatenating the element-wise absolute difference and the element-wise multiplication of their semantic vectors. Next, it uses a fully-connected neural network to transfer the semantic difference vector to a probability distribution over similarity scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In spite of the simplicity of our neural network system, the difference in performance between our proposed model and the best performing systems that participated in the STS shared task is less than 0.1 absolute in almost all STS tracks, and our model ranked 3rd on the primary track of SemEval STS 2017.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "1 http://github.com/fchollet/keras 2 http://github.com/tensorflow/tensorflow",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://translate.google.com 4 http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As of April 17, 2017",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Tensorflow: A system for large-scale machine learning",
"authors": [
{
"first": "Mart\u00edn",
"middle": [],
"last": "Abadi",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Barham",
"suffix": ""
},
{
"first": "Jianmin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Ghemawat",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Irving",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Manjunath",
"middle": [],
"last": "Kudlur",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Levenberg",
"suffix": ""
},
{
"first": "Rajat",
"middle": [],
"last": "Monga",
"suffix": ""
},
{
"first": "Sherry",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"G"
],
"last": "Murray",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tucker",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Vasudevan",
"suffix": ""
},
{
"first": "Pete",
"middle": [],
"last": "Warden",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wicke",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation. USENIX Association",
"volume": "16",
"issue": "",
"pages": "265--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mart\u00edn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation. USENIX Association, Berkeley, CA, USA, OSDI'16, pages 265-283.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Montse",
"middle": [],
"last": "Maritxalar",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Larraitz",
"middle": [],
"last": "Uria",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "252--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation. pages 252-263.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semeval-2014 task 10: Multilingual semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "81--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation. pages 81-91.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "497--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation. Association for Computational Linguistics, San Diego, California, pages 497-511.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sem 2013 shared task: Semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity",
"volume": "",
"issue": "",
"pages": "32--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. Sem 2013 shared task: Semantic textual similarity. In Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity. pages 32-43.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semeval-2012 task 6: A pilot on semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "385--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics. pages 385-393.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Sentence pair scoring: Towards unified framework for text comprehension",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Baudis",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Pichl",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Vyskocil",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Sedivy",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.06127"
]
},
"num": null,
"urls": [],
"raw_text": "Petr Baudis, Jan Pichl, Tomas Vyskocil, and Jan Sedivy. 2016. Sentence pair scoring: Towards unified framework for text comprehension. arXiv preprint arXiv:1603.06127.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly Media.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancouver, Canada, pages 1-14. http://www.aclweb.org/anthology/S17-2001.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multi-perspective sentence similarity modelling with convolutional neural networks",
"authors": [
{
"first": "Hua",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1576--1586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hua He, Kevin Gimpel, and Jimmy Lin. 2015a. Multi-perspective sentence similarity modelling with convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1576-1586.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Pairwise word interaction modelling with deep neural networks for semantic similarity measurement",
"authors": [
{
"first": "Hua",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hua He and Jimmy Lin. 2016. Pairwise word interaction modelling with deep neural networks for semantic similarity measurement. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015b. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the International Conference on Computer Vision (ICCV).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Rectified linear units improve restricted boltzmann machines",
"authors": [
{
"first": "Vinod",
"middle": [],
"last": "Nair",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 27th International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinod Nair and Geoffrey Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing. pages 1532-1543.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P."
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Evaluation of pooling operations in convolutional architectures for object recognition",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Scherer",
"suffix": ""
},
{
"first": "Andreas",
"middle": [
"C"
],
"last": "Muller",
"suffix": ""
},
{
"first": "Sven",
"middle": [],
"last": "Behnke",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of 20th International Conference on Artificial Neural Networks (ICANN)",
"volume": "",
"issue": "",
"pages": "92--101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominik Scherer, Andreas C. Muller, and Sven Behnke. 2010. Evaluation of pooling operations in convolutional architectures for object recognition. In Proceedings of 20th International Conference on Artificial Neural Networks (ICANN). pages 92-101.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Overview of system",
"num": null,
"uris": null
},
"TABREF0": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td colspan=\"2\">Table 1: Hyperparameters</td></tr><tr><td>Sentence pad length</td><td>30</td></tr><tr><td>Dimension of GloVe vectors</td><td>300</td></tr><tr><td>Number of CNN layers</td><td>1</td></tr><tr><td>Dimension of CNN filters</td><td>1</td></tr><tr><td>Number of CNN filters</td><td>300</td></tr><tr><td>Activation function of CNN</td><td>relu</td></tr><tr><td>Initial function of CNN</td><td>he_uniform</td></tr><tr><td>Number of FCNN layers</td><td>2</td></tr><tr><td>Dimension of input layer</td><td>600</td></tr><tr><td>Dimension of first layer</td><td>300</td></tr><tr><td>Dimension of second layer</td><td>6</td></tr><tr><td>Activation of first layer</td><td>tanh</td></tr><tr><td>Activation of second layer</td><td>softmax</td></tr><tr><td>Initial function of layers</td><td>he_uniform</td></tr><tr><td>Optimizer</td><td>ADAM</td></tr><tr><td>Batch size</td><td>339</td></tr><tr><td>Max epoch</td><td>6</td></tr><tr><td>Run times</td><td>8</td></tr><tr><td colspan=\"2\">suggested by (Kingma and Ba, 2015): learning rate is 0.001, \u03b21 is 0.9, \u03b22 is 0.999, \u03b5 is 1e-08.</td></tr></table>",
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "Pearson correlation coefficient against the gold standard of the 2017 test dataset",
"html": null,
"content": "<table><tr><td>Tracks</td><td>CNN</td><td>Best</td><td>Diff.(Rank)</td></tr><tr><td>STS 2016</td><td>0.7192</td><td colspan=\"2\">0.7781 0.0589 (14th)</td></tr><tr><td/><td>\u00b10.0062</td><td/><td/></tr><tr><td colspan=\"2\">STS 2017 (MT)</td><td/><td/></tr><tr><td>Primary</td><td>0.6598</td><td colspan=\"2\">0.7316 0.0718 (3rd)</td></tr><tr><td>1 AR-AR</td><td>0.7130</td><td colspan=\"2\">0.7543 0.0413 (6th)</td></tr><tr><td>2 AR-EN</td><td>0.6836</td><td colspan=\"2\">0.7493 0.0657 (3rd)</td></tr><tr><td>3 SP-SP</td><td>0.8263</td><td colspan=\"2\">0.8559 0.0296 (4th)</td></tr><tr><td>4a SP-EN</td><td>0.7621</td><td colspan=\"2\">0.8302 0.0681 (5th)</td></tr><tr><td>4b SP-EN</td><td>0.1483</td><td colspan=\"2\">0.3407 0.1924 (7th)</td></tr><tr><td>5 EN-EN</td><td>0.8113</td><td colspan=\"2\">0.8547 0.0434 (8th)</td></tr><tr><td>6 EN-TR</td><td>0.6741</td><td colspan=\"2\">0.7706 0.0965 (3rd)</td></tr><tr><td colspan=\"2\">STS 2017 (non-MT)</td><td/><td/></tr><tr><td>1 AR-AR</td><td>0.4373</td><td colspan=\"2\">0.7543 0.3170 (15th)</td></tr><tr><td>3 SP-SP</td><td>0.6709</td><td colspan=\"2\">0.8559 0.1850 (15th)</td></tr><tr><td>5 EN-EN</td><td>0.8156</td><td colspan=\"2\">0.8547 0.0391 (7th)</td></tr><tr><td colspan=\"3\">STS benchmark (hand-craft)</td><td/></tr><tr><td>Dev.</td><td>0.8343</td><td colspan=\"2\">0.8470 0.0127 (4th)</td></tr><tr><td>Test</td><td>0.7842</td><td colspan=\"2\">0.8100 0.0258 (4th)</td></tr><tr><td colspan=\"3\">STS benchmark (no hand-craft)</td><td/></tr><tr><td>Dev.</td><td>0.8236</td><td colspan=\"2\">0.8470 0.0234 (4th)</td></tr><tr><td>Test</td><td>0.7833</td><td colspan=\"2\">0.8100 0.0267 (4th)</td></tr><tr><td colspan=\"4\">also got a good accuracy. The results are shown in Table 2. Our model achieves 4th place on the STS benchmark 5.</td></tr></table>",
"num": null
}
}
}
}