| { |
| "paper_id": "O16-1009", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:04:55.162015Z" |
| }, |
| "title": "Crowdsourcing Experiment Designs for Chinese Word Sense Annotation", |
| "authors": [ |
| { |
| "first": "Tzu-Yun", |
| "middle": [], |
| "last": "\u9ec3\u8cc7\u52fb",
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "\u570b\u7acb\u81fa\u7063\u5927\u5b78\u8a9e\u8a00\u5b78\u7814\u7a76\u6240 Graduate Institute of Linguistics National Taiwan University",
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "\u570b\u7acb\u81fa\u7063\u5927\u5b78\u8a9e\u8a00\u5b78\u7814\u7a76\u6240 Graduate Institute of Linguistics National Taiwan University",
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Chia-Chen", |
| "middle": [], |
| "last": "\u674e\u4f73\u81fb", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Taiwan University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Taiwan University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "\u674e\u51a0\u7def", |
| "middle": [], |
| "last": "Guan-Wei", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Taiwan University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Hsiao-Han", |
| "middle": [], |
| "last": "\u5433\u5c0f\u6db5",
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Taiwan University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Shao-Man", |
| "middle": [], |
| "last": "\u674e\u97f6\u66fc", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Taiwan University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Shu-Kai", |
| "middle": [], |
| "last": "\u8b1d\u8212\u51f1", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper presents our exploratory efforts to tackle the \"high accuracy-low quantity\" problem of human word sense annotation in Chinese, with the ultimate goal of automatic word sense annotation. Our proposed annotation architecture consists of explicit and implicit crowdsourcing approaches. The explicit method focuses on general issues of crowdsourcing and makes adjustments to the current MTurk framework. The implicit method centers on a Game with a Purpose (GWAP) design inspired by the well-known video game Super Mario.",
| "pdf_parse": { |
| "paper_id": "O16-1009", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper presents our exploratory efforts to tackle the \"high accuracy-low quantity\" problem of human word sense annotation in Chinese, with the ultimate goal of automatic word sense annotation. Our proposed annotation architecture consists of explicit and implicit crowdsourcing approaches. The explicit method focuses on general issues of crowdsourcing and makes adjustments to the current MTurk framework. The implicit method centers on a Game with a Purpose (GWAP) design inspired by the well-known video game Super Mario.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Sense-aware systems have become central to many NLP and related intelligent systems.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The core technique involved is Word Sense Disambiguation (WSD), which determines the proper sense of each word in varied contexts. Current WSD models rely largely on gold-standard data from manual annotation, which achieves high accuracy but suffers from low quantity and low efficiency. This paper aims to sketch a preliminary blueprint of a (word) sense annotation service by resorting to crowdsourcing (CS) approaches tailored for the Chinese WSD task.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Over the past years, crowdsourcing has emerged as a collaborative way to collect annotated corpus data and other language resources, with the advantage of greatly increasing quantity and reducing time cost by distributing the work to the public. Current implementations of crowdsourcing platforms include microtask markets (e.g., Amazon Mechanical Turk and CrowdFlower), Games with a Purpose (GWAP), and altruistic (or volunteer-based) crowdsourcing (e.g., Crowdcrafting). Although explicit crowdsourcing methods such as MTurk have been applied for years on several renowned platforms such as Yahoo!Answers and Quora, several problems remain unsolved, for example the recruitment of annotators, annotator quality, and the design of the recruitment platforms. Inspired by the CrowdTruth project (http://crowdtruth.org/), we propose an internal-external adjusted framework to increase ground-truth quality in the context of semantic annotation tasks. Explicit crowdsourcing tackles the main problems found in manual annotation; however, issues such as expense and interest-oriented bias remain unsolved. This led to our second design, the implicit crowdsourcing game. GWAP designs for annotation are not as common as the explicit approach, since it is difficult to make an annotation game \"interesting\" and collect the required data in limited time. However, we expect the implicit approach to become a trend: it collects data from players of greater diversity, better reflects language users' instinct, and, more importantly, comes at low cost.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The design contributed by this paper should be viewed as a pilot design, and we hope it attracts relevant experts for further development. Following the introduction, Section 2 begins with a review of the resources we relied on, the English SENSEVAL and the Chinese Wordnet, followed by a sense-labelled annotation for test data and our analysis of annotation problems in Section 3. We propose a crowdsourcing-based experiment design in Section 4 and a GWAP design in Section 5. Section 6 concludes the paper.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "SENSEVAL [1] is an international organization devoted to sense data distribution and the evaluation of Word Sense Disambiguation systems. We use (SENSEVAL-1) sample words as our pre-selected sample. Verbs that meet the following criteria were translated into Chinese as our examples: (1) there is no homonymy; (2) the number of polysemous senses is between 5 and 10; and (3) the major syntactic role of the word is verb. Another resource used in this work is the Chinese Wordnet (CWN) [2], which has been developed mainly on the English WordNet framework: synonymous lemmata are clustered into synsets, which are interconnected by various lexical semantic relations, such as antonymy, paranymy, hypernymy-hyponymy, and meronymy-holonymy. CWN is used as the sense inventory in this work. Note that, in contrast with the English WordNet, CWN has a higher granularity in its word meaning representation: latent meaning extensions are represented as 'meaning facets', while active meaning differences are represented as 'senses' [17].",
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 12, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 478, |
| "end": 481, |
| "text": "[2]", |
| "ref_id": null |
| }, |
| { |
| "start": 1046, |
| "end": 1050, |
| "text": "[17]", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Resources", |
| "sec_num": "2." |
| }, |
| { |
| "text": "However, this fine-grained sense distinction is not considered for the sake of simplicity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Resources", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Before the annotation work, the data collection pipeline was as follows: words were selected based on Kilgarriff's lexical sample task [3]; lemmas and sense numbers were confirmed in the Chinese Wordnet (CWN); and data were collected and preprocessed. Five verbs were chosen for the lexical sample task: bother (\u7169, fan), calculate (\u7b97, suan), float (\u6d6e, fu), invade (\u4fb5, qin), and seize (\u6293, zhua). We translated the verbs into Chinese, removed two-word forms such as \u627f\u8afe for promise or \u6d88\u8017 for consume, and kept only the 'single character' form with only one lemma and no more than ten senses in CWN (see Table 1 ).",
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 125, |
| "text": "[3]", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 572, |
| "end": 579, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Collection and Process", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "[ Table 1 . Lexical sample translation, data collection and annotation assignment]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 2, |
| "end": 9, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Collection and Process", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Five linguistic graduate students were recruited for the annotation work. Each was assigned data collection for one verb and annotation for two verbs (Table 1.) Thus every verb was annotated by two annotators, and agreement was reached after each individual annotation. Data were mainly extracted from the Sinica Corpus [4] and COPENS (\u958b\u653e\u8a9e\u6599\u5eab) [5]. If no suitable concordance was found in these two corpora, we searched online as an alternative resource. The seed word needs to stand alone as one character with one meaning, one sense. When the disagreement between annotators is too great, all team members discuss and vote for the final sense decision. Figure 1 shows the annotation scheme:",
| "cite_spans": [ |
| { |
| "start": 313, |
| "end": 316, |
| "text": "[4]", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 337, |
| "end": 340, |
| "text": "[5]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 663, |
| "end": 671, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Annotation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "[ Figure 1 . The annotation scheme ]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 2, |
| "end": 10, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Annotation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Three problems were found in the annotation process: low quantity, low efficiency, and disagreement. Manual annotation is time-consuming and relatively inefficient. And since a word may possess more than one sense and carry features from different senses in limited contexts, it often causes disagreement among annotators. Selecting the most suitable sense of the target word is a general but complicated issue. For human annotation, we tackled the problem by conducting cross-annotation and discussions and voting for the most reasonable answer. But again, the time cost is high. To solve these problems, we propose two possible solutions: explicit and implicit crowdsourcing designs. By outsourcing the annotation work to the public and rating annotators in advance for their credibility, the quantity may greatly increase and discussion time may be reduced, since the answer with the higher score becomes the agreed answer.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Problems", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Sense annotation for Chinese WSD depends largely on manual work, which has suffered from problems of low quantity and low efficiency. Previous studies have tried to provide solutions; however, Chinese WSD remains unsolved. This paper aims to provide solutions designed around two subtasks of the CS system.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Crowdsourcing on Chinese Word Sense Tagging System", |
| "sec_num": "4." |
| }, |
| { |
| "text": "In terms of the nature of collaboration, a CS system can be divided into two subcategories: explicit and implicit ones (Doan et al., 2011) [6]. As to the retention of contributors, the encouragement and retention scheme (E&R scheme) provides well-structured solutions: systems can automatically provide instant user gratification and immediately display how contributions make a difference.",
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 148, |
| "text": "[6]", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Crowdsourcing on Chinese Word Sense Tagging System", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Providing ownership is another way to make users feel they own a part of the system. To solve the previously mentioned problems, this paper provides an infrastructure for a CS system for Chinese sense annotation based on the ideas of Bontcheva et al. (2014) [7].",
| "cite_spans": [ |
| { |
| "start": 255, |
| "end": 258, |
| "text": "[7]", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Crowdsourcing on Chinese Word Sense Tagging System", |
| "sec_num": "4." |
| }, |
| { |
| "text": "There are four main steps: first, data preprocessing; second, the creation of the user interface (Figure 2 demonstrates an ideal platform for a WSD crowdsourcing system (Bontcheva, Kalina et al., 2014) [7]); third, creating and uploading gold units for quality control; and last, mapping the judgments back to documents and aggregating them into the central database.",
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 174, |
| "text": "(Bontcheva,", |
| "ref_id": null |
| }, |
| { |
| "start": 195, |
| "end": 198, |
| "text": "[7]", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 93, |
| "end": 102, |
| "text": "(Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Crowdsourcing on Chinese Word Sense Tagging System", |
| "sec_num": "4." |
| }, |
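| {
| "text": "The four steps above can be sketched as a minimal pipeline. This is an illustrative sketch, not the authors' implementation: the function names, the batch size of ten, and the integer ids are hypothetical, and the user interface and gold-unit checking are reduced to a judgment-collecting callback.
```python
# Minimal sketch of the four-step CS pipeline described above
# (names and representations are hypothetical placeholders).
def run_pipeline(raw_sentences, gold_units, collect_judgments):
    # 1. data preprocessing: split into micro-task batches of ten
    batches = [raw_sentences[i:i + 10] for i in range(0, len(raw_sentences), 10)]
    judgments = []
    for batch in batches:
        # 2./3. present the batch through the user interface with at
        # least one gold unit mixed in for quality control
        tasks = batch + gold_units[:1]
        judgments.extend(collect_judgments(tasks))
    # 4. map judgments back to sentences and aggregate centrally
    aggregated = {}
    for sentence_id, sense_id in judgments:
        aggregated.setdefault(sentence_id, []).append(sense_id)
    return aggregated
```
",
| "cite_spans": [],
| "ref_spans": [],
| "eq_spans": [],
| "section": "Crowdsourcing on Chinese Word Sense Tagging System",
| "sec_num": "4."
| },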
| { |
| "text": "[ Figure 2 . Ideal Interface for WSD Crowdsourcing System ] [7]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 2, |
| "end": 10, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Crowdsourcing on Chinese Word Sense Tagging System", |
| "sec_num": "4." |
| }, |
| { |
| "text": "The design of the crowdsourcing system in this paper is separated into two parts, internal and external. Internally, we focus on the four CS-system creation steps mentioned above. Externally, the main targets are the recruitment and retention of contributors and individual evaluations. Based on the consultation that CrowdFlower suggests for annotation accuracy (Hong and Baker, 2011) [8], this paper improves on the infrastructure ideas of (Bontcheva, Kalina et al., 2014) [7] and provides a revised framework.",
| "cite_spans": [ |
| { |
| "start": 385, |
| "end": 388, |
| "text": "[8]", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 468, |
| "end": 471, |
| "text": "[7]", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Design", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Data preparation: All pre-processed data are divided into micro-tasks of ten sentences per set to make the annotation task easier. Notably, the number of senses for contributors to select from is recommended to be between 4 and 7, including an additional 'none of the above' option.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Internal Framework", |
| "sec_num": "4.4.1" |
| }, |
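| {
| "text": "The micro-task option rule above (4 to 7 choices per item, always ending with a 'none of the above' option) can be sketched as follows. This is an illustrative sketch only; the sense ids and the sentinel encoding are hypothetical, not part of the paper's design.
```python
# Sketch of the option-list rule described above: at most seven
# choices per item, the last being a none-of-the-above option
# (encoded here as the hypothetical sentinel -1).
NONE_OF_THE_ABOVE = -1

def build_option_list(sense_ids, max_options=7):
    # keep at most max_options - 1 real senses, then append the
    # none-of-the-above sentinel so the total stays within the cap
    options = list(sense_ids)[:max_options - 1]
    return options + [NONE_OF_THE_ABOVE]
```
",
| "cite_spans": [],
| "ref_spans": [],
| "eq_spans": [],
| "section": "Internal Framework",
| "sec_num": "4.4.1"
| },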
| { |
| "text": "User interface: For better performance, instead of multiple-choice questions, users are given example sentences for each lexical item and are then asked to categorize a list of displayed sentences all at once (Hong and Baker, 2011) [8]. The primary advantage is that contributors notice the differences in sense among sentences. Similar to the Sinica Corpus, sentences are aligned horizontally with the target word highlighted in the page center.",
| "cite_spans": [ |
| { |
| "start": 225, |
| "end": 228, |
| "text": "[8]", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Internal Framework", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "Gold unit: To control quality and avoid random or identical answers, we set up model questions and insert at least one per annotation page. A gold criterion of CrowdFlower [9] is that model questions shall be at least 20% of the total questions. Aggregation: As in previous studies, this paper takes the majority vote as the final result. However, for senses with equal scores, we recount the score of each sense based on the reliability scores of individual contributors.",
| "cite_spans": [ |
| { |
| "start": 188, |
| "end": 191, |
| "text": "[9]", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Internal Framework", |
| "sec_num": "4.4.1" |
| }, |
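| {
| "text": "The aggregation rule above (majority vote, with ties recounted using contributor reliability) can be sketched as follows. This is an illustrative reading of the description, not the authors' code; contributor and sense ids are hypothetical integers.
```python
# Illustrative sketch of the aggregation rule: majority vote, with
# ties broken by re-weighting votes with each contributor's
# reliability score.
def aggregate(votes, reliability):
    # votes: list of (contributor_id, sense_id) pairs
    counts = {}
    for _, sense in votes:
        counts[sense] = counts.get(sense, 0) + 1
    top = max(counts.values())
    tied = [s for s, c in counts.items() if c == top]
    if len(tied) == 1:
        return tied[0]
    # tie: recount each tied sense weighted by contributor reliability
    weighted = dict.fromkeys(tied, 0.0)
    for contributor, sense in votes:
        if sense in weighted:
            weighted[sense] += reliability[contributor]
    return max(weighted, key=weighted.get)
```
",
| "cite_spans": [],
| "ref_spans": [],
| "eq_spans": [],
| "section": "Internal Framework",
| "sec_num": "4.4.1"
| },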
| { |
| "text": "Recruitment and training: We provide payments to contributors; however, the payment will be withdrawn if cheating is discovered. The basic fee for a qualified annotation is TWD 5 per set (10 sentences). Contributors with good quality will receive bonuses. ",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "External Framework", |
| "sec_num": "4.4.2" |
| }, |
| { |
| "text": "Despite the advantages of GWAP, current games share some deficiencies: they are text-centric, can be played randomly, and offer no control over data-gathering time. Text-centric designs are the simplest but risk boredom; Wordrobe (Venhuizen, N., Basile, V., Evang, K., & Bos, J., 2013) [13] is a classic text-centric game (Figure 4 ). Later games developed to be more \"game-centric\", hoping to create a game-like environment by transforming the senses from texts to images, such as The Knowledge Towers (Vannella et al., 2014 [14] ) and Puzzle Racer (Jurgens, D., & Navigli, R., 2014 [15] ).",
| "cite_spans": [ |
| { |
| "start": 278, |
| "end": 282, |
| "text": "[13]", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 497, |
| "end": 524, |
| "text": "(Vannella et al., 2014 [14]", |
| "ref_id": null |
| }, |
| { |
| "start": 578, |
| "end": 582, |
| "text": "[15]", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 316, |
| "end": 325, |
| "text": "(Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "General Issue of GWAP", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "[ Figure 4 . text-centric example -Wordrobe ] [13] The interface of The Knowledge Towers is much more game-like compared to Wordrobe, and is equipped with an important game element: the personal high score.",
| "cite_spans": [ |
| { |
| "start": 46, |
| "end": 50, |
| "text": "[13]", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 2, |
| "end": 10, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "General Issue of GWAP", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "[ Figure 5 . character selection ]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 2, |
| "end": 10, |
| "text": "Figure 5", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "General Issue of GWAP", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "[ Figure 6 . Image selecting task]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 2, |
| "end": 10, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "General Issue of GWAP", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The player needs to gather the images that describe the concept of the tower. The images of the senses used in the game come from an online source, BabelNet. However, since we do not have a corresponding source in Chinese, it is rather difficult for a Chinese WSD game developer to replace senses with images to cut the amount of text. How to avoid random play is another issue. This paper uses \"repeating questions\" and a \"player try-out\" to weight players' validity. Details are provided in a later section.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Issue of GWAP", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "As a pioneer study designing a game-centric GWAP for Chinese WSD, we propose a game, \"Super Chario\", named and designed after the long-lasting game \"Super Mario\" [16] plus \"Chinese\". The reason for choosing this game is to avoid players having to learn too many unfamiliar rules, making it more approachable to laymen. Since it is not yet possible to build a WSD game based on sense images, as elaborated in Section 5.3, the game focuses on a text-based design with a challenging, entertaining, and game-like interface.",
| "cite_spans": [ |
| { |
| "start": 166, |
| "end": 170, |
| "text": "[16]", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "The goal for the players is to raise an Olympia contestant, but the goal of the game is to retrieve at least 1,000 annotations per player. From average WSD annotation experience, one may produce 100 or more annotations per hour. Thus we hope players will play at least 10 hours, one hour per day, reaching 1,000 sentences within two weeks so as to control the speed of data gathering. This shall be achieved by giving a \"sign-in prize\" and a \"1,000-reaching prize (level 50)\" if they complete the challenge in 15 days.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "The design of \"Super Chario\" follows the game elements proposed by Von Ahn et al. in 2008 [10]: timed response, score keeping, player skill levels, and high score lists. The game-centric design and the control of data-collecting time are achieved through a game-like interface, multiple tasks, an \"everyday sign-in prize,\" and a \"1,000-reaching prize (level 50)\".",
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 95, |
| "text": "[10]", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We could also buy ads on YouTube or other platforms to prompt potential players to answer one or two questions and slowly accumulate annotations. But how do we solve the random-play problem? The game borrows the weighting concept from explicit crowdsourcing. Upon signing up for the game, the player is asked to take a short try-out, described below. Another possible approach is to repeat each question three times; the reason for repeating three times is to guard against answers changing as players gain knowledge.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "In order to test the weighting parameter of each player, we designed a simple try-out game:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "\"Saving Princes.\" After the try-out, we assign different titles to players, ranging from King (Queen), Prince (Princess), and Duke (Duchess) down to Warrior, for both genders. The game rules are as follows:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "[ Figure 10 . try-out game interface ]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 2, |
| "end": 11, |
| "text": "Figure 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "The player inputs their name and age. The goal of the player is to save the real princes from the dark woods. The hint to which princes are real is this: find the sentences, shown beside each prince, that fit the definition of the required sense of a particular word. For the example game attached to the paper, the task is to find the sense \"anxious (\u7126\u616e\u4e0d\u5e73\u975c)\" of \"fan (\u7169)\". The player only needs to select the princes with the matching sentences; thus the annotation numbers in Table 2 , and their meaning, are not relevant to the players. Sample sentences are:",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 484, |
| "end": 491, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "[ Table 2 . examples of sample sentences ]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 2, |
| "end": 9, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Gold Standard Answer / Data: [4] \u6c34\u88e1\u5f88\u6dbc\u590f\u5929\u4e5f\u4e0d\u7528\u5403\u51b0\u76f4\u63a5\u559d\u6cb3\u6c34\u4e5f\u4e0d\u7528\u7169\u8457\u7238\u7238\u8aaa; [1] \u5fc3\u60c5\u7169 \u6628\u5929\u4e00\u500b\u4eba\u5728\u5bb6\u60f3\u4e86\u5f88\u591a; [5] \u5a5a\u59fb\u5f88\u7169\u592b\u59bb\u6e9d\u901a\u6709\u969c\u7919\u5a46\u5ab3\u76f8\u8655\u4e0d\u4f86; [4] \u5c0d\u6c92\u932f\u62115\u5e74\u524d\u6709\u8a02\u904e\u7522\u54c1 \u9023\u7e8c\u6253\u7d66\u6211\u8d85\u7169\u7684; [4] \u65e2\u7136\u4f60\u89ba\u5f97\u7169\u90a3\u6211\u5c31\u6536\u56de\u6240\u6709\u52aa\u529b\u4e0d\u518d\u5c0d\u4f60\u597d. Since this is a try-out, we test only 15 sentences; however, we value both the precision and the recall of players' credibility, and thus use the F-score as the crucial criterion. If the F-score is over 70, the player is titled King/Queen; over 50, Prince/Princess; over 30, Duke/Duchess; and below 30, Warrior. The results show that 2 males and 8",
| "cite_spans": [],
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
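| {
| "text": "The try-out scoring rule above can be sketched as follows. The thresholds (70, 50, 30) follow the paper; the numeric rank encoding of the titles is hypothetical, and the sketch is an illustrative reading rather than the authors' implementation.
```python
# Sketch of the try-out scoring rule: precision and recall over the
# 15 try-out sentences are combined into an F-score scaled to 0-100,
# and the score decides the title.  Rank encoding (hypothetical):
# 3 = King/Queen, 2 = Prince/Princess, 1 = Duke/Duchess, 0 = Warrior.
def f_score(precision, recall):
    # harmonic mean of precision and recall, scaled to 0-100
    if precision + recall == 0:
        return 0.0
    return 100 * 2 * precision * recall / (precision + recall)

def title_rank(f):
    if f > 70:
        return 3
    if f > 50:
        return 2
    if f > 30:
        return 1
    return 0
```
",
| "cite_spans": [],
| "ref_spans": [],
| "eq_spans": [],
| "section": "Game for Chinese WSD",
| "sec_num": "5.4"
| },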
| { |
| "text": "females, aged 20 to 35, played the try-out game. No player received the King/Queen title, 2 received the Prince/Princess title, 5 players were titled Duke/Duchess, and the others were titled Warrior (Table 3 .)",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 215, |
| "end": 223, |
| "text": "(Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "[ Table 3 . try-out game player result ]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 2, |
| "end": 9, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Game for Chinese WSD", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "The evaluation of Super Chario may be determined by three aspects: game efficiency, player enjoyability (Von Ahn, L., & Dabbish, L., 2008), and popularity. We slightly adjust game efficiency and player enjoyability for the purpose of evaluation, with the aid of the popularity measure proposed in this paper. Game efficiency consists of \"throughput\" and \"learning curves.\" Throughput is defined as the number of annotations per hour, and the learning curve reflects whether a player's skill strengthens over time. A good game, in other words, has high throughput with a learning curve sloping upward. In Super Chario, we expect the player to finish 3-4 levels per hour, or 80-100 annotations. Player enjoyability is calculated as the total amount of time played per player. The assumption aligns with human intuition: we spend more time on something if we are drawn to it.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of GWAP", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "Popularity is hard to measure, but we might find hints in the number of registrations per day, the shape of the user growth line since the game launched, and the ratings of the game. Both implicit and explicit types of crowdsourcing tasks have their distinct advantages and disadvantages, but \"correctness\" is considered the major issue shared by both approaches, as measured against the \"gold-standard answers\" annotated by trained linguistic experts. To measure effectiveness, we suggest examining the annotation performance of implicit and explicit tasks with generally agreed evaluation measures of test accuracy: precision, recall, and F-score.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of GWAP", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "Problems witnessed in most annotation processes concern annotation quantity, efficiency, and agreement. Current studies using manual annotation provide only a small amount of results, with concerns over time consumption and efficiency. Furthermore, disagreement between annotators on the most suitable sense of the target words is complicated and often unnoticed. While linguistic experts focus much more on syntactic structure and semantic content during annotation, laypersons lean on world knowledge in context. This paper argues that metalanguage and world knowledge are main influences on annotation results, which should be taken into serious consideration during annotation.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6." |
| }, |
| { |
| "text": "Thus, explicit crowdsourcing and GWAP for Chinese WSD not only address the quantity and efficiency problems, but also increase annotator diversity and draw on native speaker instinct, and so might better reflect the natural intuitions of Chinese native speakers.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6." |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "English SENSEVAL resources in the public domain", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kilgarriff", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kilgarriff, A., \"English SENSEVAL resources in the public domain,\" 1999. Available at: http://www.senseval.org/", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Lexicographical policy and procedure in the Hector project", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kilgarriff", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kilgarriff, A., \"Lexicographical policy and procedure in the Hector project,\" 1999. Available at: http://www.senseval.org/", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The Senseval-3 English lexical sample task", |
| "authors": [ |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [ |
| "A" |
| ], |
| "last": "Chklovski", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kilgarriff", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihalcea, R., Chklovski, T. A., & Kilgarriff, A., \"The Senseval-3 English lexical sample task,\" Association for Computational Linguistics, 2004.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "\u4e2d\u7814\u9662\u5e73\u8861\u8a9e\u6599\u5eab", |
| "authors": [], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "\"\u4e2d\u7814\u9662\u5e73\u8861\u8a9e\u6599\u5eab\", asbc.iis.sinica.edu.tw, [Online]. Available: http:// asbc.iis.sinica.edu.tw/. [Accessed: 21-Jul-2016].", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Crowdsourcing systems on the worldwide web", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Doan", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Ramakrishnan", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Halevy", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Communications of the ACM", |
| "volume": "54", |
| "issue": "4", |
| "pages": "86--96", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Doan, A., Ramakrishnan, R., & Halevy, A. Y., \"Crowdsourcing systems on the world- wide web,\" Communications of the ACM, 54(4), pp. 86-96, 2011.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The GATE Crowdsourcing Plugin: Crowdsourcing Annotated Corpora Made Easy", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Bontcheva", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Roberts", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Derczynski", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "P" |
| ], |
| "last": "Rout", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "97--100", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bontcheva, K., Roberts, I., Derczynski, L., & Rout, D. P., \"The GATE Crowdsourcing Plugin: Crowdsourcing Annotated Corpora Made Easy,\" In EACL, pp. 97-100, April 2014.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "How good is the crowd at real WSD?", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Hong", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "F" |
| ], |
| "last": "Baker", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 5th linguistic annotation workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "30--37", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hong, J., & Baker, C. F., \"How good is the crowd at real WSD?\" In Proceedings of the 5th linguistic annotation workshop, pp. 30-37, Association for Computational Linguistics, June 2011.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Make your data useful | CrowdFlower", |
| "authors": [], |
| "year": 2016, |
| "venue": "CrowdFlower", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "\"Make your data useful | CrowdFlower\", CrowdFlower, [Online]. Available: https:// www.crowdflower.com/. [Accessed: 21-Jul-2016].", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Designing games with a purpose", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Von Ahn", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Dabbish", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Communications of the ACM", |
| "volume": "51", |
| "issue": "8", |
| "pages": "58--67", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Von Ahn, L., & Dabbish, L., \"Designing games with a purpose,\" Communications of the ACM, 51(8), pp. 58-67, 2008.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Phrase detectives: A web-based collaborative annotation game", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Chamberlain", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| }, |
| { |
| "first": "U", |
| "middle": [], |
| "last": "Kruschwitz", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the International Conference on Semantic Systems (I-Semantics' 08)", |
| "volume": "", |
| "issue": "", |
| "pages": "42--49", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chamberlain, J., Poesio, M., & Kruschwitz, U. \"Phrase detectives: A web-based collaborative annotation game,\" In Proceedings of the International Conference on Semantic Systems (I-Semantics' 08), pp. 42-49, September 2008.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Multiscale visual analysis of lexical networks", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Artignan", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Hascoet", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Lafourcade", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "13th International Conference on Information Visualisation", |
| "volume": "", |
| "issue": "", |
| "pages": "685--690", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Artignan, G., Hascoet, M., & Lafourcade, M., \"Multiscale visual analysis of lexical networks,\" In 13th International Conference on Information Visualisation, Barcelona, Spain, pp. 685-690, 2009.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Gamification for word sense labeling", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Venhuizen", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Basile", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Evang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Bos", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proc. 10th International Conference on Computational Semantics (IWCS-2013)", |
| "volume": "", |
| "issue": "", |
| "pages": "397--403", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Venhuizen, N., Basile, V., Evang, K., & Bos, J., \"Gamification for word sense labeling,\" In Proc. 10th International Conference on Computational Semantics (IWCS-2013), pp. 397-403, 2013.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Validating and Extending Semantic Knowledge Bases using Video Games with a Purpose", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Vannella", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Jurgens", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Scarfini", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Toscani", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACL (1)", |
| "volume": "", |
| "issue": "", |
| "pages": "1294--1304", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vannella, D., Jurgens, D., Scarfini, D., Toscani, D., & Navigli, R., \"Validating and Extending Semantic Knowledge Bases using Video Games with a Purpose,\" In ACL (1), pp. 1294-1304, 2014.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "It's All Fun and Games until Someone Annotates: Video Games with a Purpose for Linguistic Annotation", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Jurgens", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "449--464", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jurgens, D., & Navigli, R., \"It's All Fun and Games until Someone Annotates: Video Games with a Purpose for Linguistic Annotation,\" Transactions of the Association for Computational Linguistics, 2, pp. 449-464, 2014.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Super Mario Bros. X -Home\", supermariobrosx.org", |
| "authors": [], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "\"Super Mario Bros. X -Home\", supermariobrosx.org, [Online]. Available: http:// www.supermariobrosx.org/. [Accessed: 21-Jul-2016].", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Sense Structure in Cube: Lexical Semantic Representation in Chinese Wordnet", |
| "authors": [ |
| { |
| "first": "Shu-Kai", |
| "middle": [], |
| "last": "Hsieh", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "International Journal of Computer Processing Of Languages", |
| "volume": "23", |
| "issue": "3", |
| "pages": "243--253", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hsieh, Shu-Kai, \"Sense Structure in Cube: Lexical Semantic Representation in Chinese Wordnet,\" International Journal of Computer Processing Of Languages, 23(3), pp. 243-253, 2011.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "The subtasks that system users can perform for Chinese WSD are 'Evaluating', in which contributors assign senses to words in context, and 'GWAP', in which contributors annotate word senses by playing games in system A, with the game results contributed to system B. As an open platform for linguistic annotation, the CS system usually recruits contributors without being able to preview their profiles. This leads to five primary issues: how to recruit and retain contributors, what contributors can do, how to organize the contributions, how to evaluate them (Doan et al., 2011) [4], and the infrastructure of the system (Bontcheva et al., 2014) [7]. Crowdsource workers can be recruited in several ways: by providing payment; through volunteering; by requirement; or by asking users to pay for the usage of system A's service, which then contributes to system B (crowdsourcing), as with Captcha.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Instructions will be provided in detail, with explicit examples, simple terms, and no jargon. Pre-test: Contributors are expected to have diverse habits of Chinese usage. Giving pre-tests on sentence understanding and meaning sensitivity before they log in to the CS system helps us control quality and assign reliability levels to contributors. The reliability level would affect the sense score marked by the annotator when the annotation outcome yielded two senses with the same score and needed to be recounted. Crowdsourcing Micro-task: For each micro-task, contributors are required to classify sets of sentences into 4 to 7 sense categories within a single page. Once the task is finished and the results are not detected as malicious contributions, contributors receive their rewards. Conversely, if malicious behavior is detected, the CS system automatically undoes and removes all of the contributor's work and refuses to pay for any of his or her contributions. [Figure 3. Revised CS User Interface for Chinese WSD Annotation] Implicit Crowdsourcing (GWAP) 5.1 What is GWAP: GWAP, short for Game With A Purpose, is a sub-task of crowdsourcing that is implicitly collaborative in nature; it aims to solve the quantity and cost issues of WSD, as does the explicit crowdsourcing proposed in Section 4. The definition of GWAP is: \"people, as a side effect of playing, perform tasks computers are unable to perform\" (Von Ahn & Dabbish, 2008) [10]. In other words, the game developer channels players into working under the disguise of entertainment. The ESP Game (Google Image Labeler) was the first major success in combining a game with a computation task, successfully labeling 50,000,000 images with related words. GWAP was further developed in the NLP field for anaphora analysis (Chamberlain et al., 2008) [11], term relations (Artignan et al., 2009) [12], and semantic annotation for word sense disambiguation, as in Wordrobe (Venhuizen et al., 2013) [13], The Knowledge Towers (Vannella et al., 2014) [14], and Puzzle Racer (Jurgens & Navigli, 2014) [15]. The key to a successful game is that people are willing to spend enough time playing because they feel 'enjoyment' and 'entertainment.' Disguising a puzzle as a game requires a well-structured design that elicits appropriate output, with enticing winning conditions and plain dos and don'ts (Von Ahn & Dabbish, 2008) [10]. Aiming to make GWAP a generalizable approach, Luis von Ahn and Laura Dabbish proposed three templates for solving diverse computation tasks: output-agreement games, inversion-problem games, and input-agreement games. This paper takes the output-agreement game type as its design base, sharing the same initial steps and goals but with more complex winning conditions and rules. The detailed design is elaborated in Section 5.4, after a brief explanation of why GWAP is proposed in Section 5.2 and of general issues and solutions in Section 5.3, and the discussion closes with evaluation in Section 5.5. 5.2 Why GWAP: Why propose GWAP if explicit crowdsourcing (Section 4) can solve the quantity problem? There are four major reasons: a larger quantity of annotations with engaging and long-lasting participation; annotator diversity, since the game is played by laypersons (Jurgens & Navigli, 2014) [15]; a better reflection of native-speaker intuition; and lower cost, since the game rewards the player with entertainment rather than payment (Venhuizen et al., 2013) [13].", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "text": "The tasks need to be completed within the time limit to create excitement and focus. Score keeping and player skill levels are meant to give the player a sense of progress, and high-score lists create an incentive for showing off. The current architecture is specified below: A. Initial step: After sign-up and a pre-tryout for the game, the player may choose to play alone or with other nearby players. Selecting multiple players brings team challenges to accomplish, which create extra bonuses. B. Winning conditions: The goal of the game is to raise an Olympiad contestant through the annotations that the player selects. As originally designed, there are 100 levels, each containing at least 20 annotation tasks to accomplish. Upon reaching level 50 (1,000 annotations), the contestant the player has trained may write a letter of challenge to battle other players and compete for the title of best Chinese speaker of all time. The challenges are based on the annotation data used for machine learning. One badge is put on the avatar's clothing every time the player wins a battle. C. Tasks to be accomplished: 1. Individual tasks: The task is to gain as much funding and knowledge as one can for attending the Olympiad. Funding buys better gear: better food provides more energy, and better weapons deliver stronger power. An individual is given three lives; if all are used, the player does not die (we do not wish to receive duplicate annotations) but must buy a new life. Basic tasks include hitting gold words in the sentence for sense disambiguation, shooting down knowledge thieves, and grabbing the knowledge flag (Figure 7, source: Super Mario). The response time is 900 seconds per level, to limit thinking time, but players may also buy time. The major way to earn funding is to touch gold whenever it appears; funding is also gained by expelling knowledge thieves, either by stepping on them or by shooting them with laser guns. [Figure 7. Individual] [Figure 8. Team challenge] [Figure 9. Knowledge Monster] 2. Team challenges: Players need to drag sentences to the possible sense to create a match. This approach is meant to encourage players to discuss, as human annotators do when they encounter disagreements (Figure 8, source: Super Mario). Aside from the annotation tasks, the team may team up to beat the knowledge monster and earn extra funds (Figure 9, source: Super Mario). 3. Hidden tasks: Hidden tasks are placed in pre-selected tubes for players to earn extra funds, such as removing the sentence with a different sense, or entering a sentence the player thinks carries the sense described above; the latter may help us enlarge the corpus but must later be examined by a human annotator. 4. Olympiad battle (personal machine learning): The battle is for players who have annotated more than 1,000 sentences for personal machine learning. As players enter the Olympiad battle, their annotated results are examined for both accuracy and recall, and the input questions come from gold-standard answers previously assigned by trained experts.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "num": null, |
| "html": null, |
| "text": "The task was to decide and annotate the verb sense according to CWN's gloss definitions. The first round was done by individual annotators without discussion with others. If there was more than one possible sense, the annotator should choose one sense and provide explanations for the following discussion. In the second round, the two-annotator discussion step, all sentences and tags are checked, and every disagreement and ambiguity is discussed. The two annotators needed to agree on only one sense per sentence. If not, the discussion moves on to a group discussion in which all team members vote. The sense which gets the most votes is the final decision, but before the final decision an explanation of the disagreement should be provided by the annotators to the other members. There are three types of disagreements. First, mistakes from misreading. Second, different interpretations of contexts. For instance, '\u6d6e' in '\u8b1b\u5230\u2f00\u4e00\u534a\u7a81\u7136C\u5973\u83ab\u540d\u5176\u5999\u6d6e\u8d77\u4f86,' where '\u6d6e' can be explained as '\u56e0\u6bd4\u91cd\u2f29\u5c0f\u65bc\u6240\u5728\u6c23\u9ad4\u2f7d\u800c\u505c\u7559\u5728\u8a72\u6c23\u9ad4\u4e2d' or '\u5728\u7279\u5b9a\u5c0d\u8c61\u4e2d\u986f\u73fe' from different perspectives. In this situation, each annotator should argue for their decision, and the two should agree on one. The third type occurs if the contextual interpretation", |
| "content": "<table><tr><td>Seed word</td><td>Bother</td><td>Calculate</td><td>Float</td><td>Invade</td><td>Seize</td></tr><tr><td>First translations</td><td>\u7169, \u64fe</td><td>\u8a08\u7b97, \u7b97\u8a08</td><td>\u6f02\u6d6e</td><td>\u5165\u4fb5, \u4fb5\u5165</td><td>\u6293, \u6355</td></tr><tr><td>Final translation</td><td>\u7169 (fan)</td><td>\u7b97 (suan)</td><td>\u6d6e (fu)</td><td>\u4fb5 (qin)</td><td>\u6293 (zua)</td></tr><tr><td>Number of senses in CWN</td><td>7</td><td>10</td><td>4</td><td>6</td><td>9</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "num": null, |
| "html": null, |
| "text": "A previous study of WSD using a crowdsourcing approach (Hong and Baker, 2011) [8] aggregated the inputs from contributors with a majority vote. Another factor that greatly affects the results is contributor quality, which makes evaluation necessary. The target of contributor evaluation is to prevent malicious cheating; for this problem, four solutions were introduced by Doan in 2011. To manage contributors, the system owner can block malicious contributors by limiting the level of contributions for individuals. Bad-intention contributions can also be detected using both manual techniques (direct monitoring) and automatic techniques (random simple question answering). Another solution is threats or punishment, such as banning the account and publicizing the contributor's profile. More technically, an undo system similar to Wikipedia's edit page can be created.", |
| "content": "<table/>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |