| { |
| "paper_id": "C12-1037", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:21:57.254873Z" |
| }, |
| "title": "Extraction of Russian Sentiment Lexicon for Product Meta-Domain", |
| "authors": [ |
| { |
| "first": "Ilia", |
| "middle": [], |
| "last": "Chetviorkin", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Lomonosov Moscow State University", |
| "location": { |
| "addrLine": "Leninskiye Gory 1, Building 52 (2) Research Computing Center", |
| "settlement": "Moscow" |
| } |
| }, |
| "email": "ilia.chetviorkin@gmail.com" |
| }, |
| { |
| "first": "Natalia", |
| "middle": [], |
| "last": "Loukachevitch", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper we consider a new approach for domain-specific sentiment lexicon extraction in Russian. We propose a set of statistical features and algorithm combination that can discriminate sentiment words in a specific domain. The extraction model is trained in the movie domain and then utilized to other domains. We evaluate the quality of obtained sentiment vocabularies intrinsically. Finally we combine the sentiment lexicons from five domains to obtain one general lexicon for the product meta-domain. We demonstrate the robustness of the extracted lexicon in the cross-domain sentiment classification in Russian.", |
| "pdf_parse": { |
| "paper_id": "C12-1037", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper we consider a new approach for domain-specific sentiment lexicon extraction in Russian. We propose a set of statistical features and algorithm combination that can discriminate sentiment words in a specific domain. The extraction model is trained in the movie domain and then utilized to other domains. We evaluate the quality of obtained sentiment vocabularies intrinsically. Finally we combine the sentiment lexicons from five domains to obtain one general lexicon for the product meta-domain. We demonstrate the robustness of the extracted lexicon in the cross-domain sentiment classification in Russian.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In this paper we present a new approach to extracting a domain-specific sentiment lexicon in Russian. We propose using a set of statistical and linguistic features that can identify sentiment words, and combining these features with machine learning algorithms. The extraction model is built for the movie domain and then applied to other domains. We evaluate the quality of the resulting sentiment word lists by manual annotation. Finally, we combine the individual lexicons into a general sentiment lexicon, treating it as a sentiment lexicon for the broad domain of products. We demonstrate the usefulness of the resulting general lexicon in the task of transferring a sentiment analysis model from one domain to another for user reviews in Russian. KEYWORDS: Sentiment Analysis, Sentiment Lexicon, Domain Adaptation. KEYWORDS IN RUSSIAN: Анализ Тональности, Оценочные слова, Настройка на Предметную Область. Considerable effort has recently been devoted to the task of opinion mining in various domains. Automated approaches to sentiment analysis can be useful for government bodies and politicians, for companies, and for ordinary users. One of the most important tasks underlying opinion mining in texts written in different languages is the creation of sentiment lexicons.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Many researchers create dictionaries of general-purpose sentiment words for their languages. At the same time, it is known that rather different sets of sentiment expressions may be used in different domains. Finally, domains may resemble one another in the sentiment vocabulary they use. For example, sentiment words such as негодяй (scoundrel) or зло (evil) are equally inapplicable across all product-evaluation domains.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper we explore a new idea for building a Russian sentiment lexicon for the broad domain of products. It is important to stress that at present no publicly available Russian sentiment lexicon exists. Our method is based on training a Russian sentiment lexicon extraction algorithm in one domain and then transferring the trained model to other domains. We show that the sentiment lexicon extraction model can be transferred to other domains, provided that all the data required by the system are available. We apply our model to several domains and then combine the domain-specific sentiment lexicons into a single sentiment lexicon, treating it as a sentiment lexicon for the broad domain of products.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Sentiment word extraction in a given domain relies on several text collections: a collection of product reviews with user ratings, a collection of product descriptions, and a contrast collection (for example, a news collection). Such collections can be compiled automatically for different domains. In addition, we hypothesized that certain parts of an opinion corpus (for example, about movies) have a higher concentration of sentiment words: sentences ending in «!» or «…»; short sentences of no more than 7 words; sentences containing the word «фильм» (movie) with no other nouns. We call this corpus the small corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For each word in the review collection we compute a set of statistical and linguistic features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "To train the algorithms we need a labeled set of words. For this purpose we manually annotated all words with frequency above three in the movie domain (18,362 words). We assigned a word to the sentiment category if we could imagine it in any sentiment-bearing context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We treated the task as binary classification: dividing all words into sentiment-bearing and neutral. The following algorithms were used: Logistic Regression, LogitBoost, and Random Forest. All algorithm parameters were left at their default values.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Using these algorithms, we obtained word lists ranked by the probability of a word being sentiment-bearing. The quality of these lists was evaluated with the Precision@n measure. To compare system performance across domains we used n = 1000.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We noticed that the extracted sentiment word lists differ substantially depending on the algorithm. We therefore averaged the probability values across the lists. As a result, the quality of automatic sentiment word extraction in the movie domain reached Precision@1000 = 81.5%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "To use the system in a new domain, one must gather the same set of collections as for the movie domain. We applied the sentiment word extraction model to domains such as books, games, digital cameras, and mobile phones.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "To compile a generalized sentiment lexicon for the product domain, we applied a formula that rewards a sentiment word for appearing near the top of as many of the domain-specific sentiment lexicons as possible. The quality of the resulting list was P@1000 = 91.4%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "To verify the usefulness of the resulting generalized sentiment lexicon for the product meta-domain, we tested it in the task of transferring a sentiment analysis system from one domain to another.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For testing, we took 1000 positive and 1000 negative reviews in each of four domains. We trained a sentiment classifier in one domain on three different feature sets: all words, the sentiment word list extracted for that domain, and the generalized sentiment word list. We then applied the trained classifier to another domain. In total, 9 domain pairs were considered. On average, the classifier trained with the generalized cross-domain list transferred better to a new domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
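The cross-domain test described above can be sketched as follows. This is a toy illustration, not the authors' code: the lexicon entries, reviews, and the simple centroid scorer are invented stand-ins for the real lexicons, corpora, and learner.

```python
# Toy sketch: train a sentiment classifier in one domain using only words
# from a given lexicon as features, then apply it to another domain.
from collections import Counter

def featurize(text, lexicon):
    """Bag-of-words restricted to the lexicon."""
    return Counter(w for w in text.lower().split() if w in lexicon)

def train_centroid(reviews, lexicon):
    """Accumulate positive/negative feature counts (a stand-in for a real learner)."""
    centroids = {"pos": Counter(), "neg": Counter()}
    for text, label in reviews:
        centroids[label].update(featurize(text, lexicon))
    return centroids

def classify(text, centroids, lexicon):
    feats = featurize(text, lexicon)
    def score(centroid):
        return sum(feats[w] * centroid[w] for w in feats)
    return "pos" if score(centroids["pos"]) >= score(centroids["neg"]) else "neg"

# Source domain: movies; target domain: books (toy data).
general_lexicon = {"great", "boring", "wonderful", "awful"}
movie_reviews = [("great wonderful film", "pos"), ("boring awful plot", "neg")]
model = train_centroid(movie_reviews, general_lexicon)
print(classify("a wonderful novel", model, general_lexicon))  # pos
```

Because the features are restricted to a domain-independent lexicon, the classifier ignores domain-specific vocabulary, which is the intuition behind the better transfer reported above.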
| { |
| "text": "\u0422\u0430\u043a\u0438\u043c \u043e\u0431\u0440\u0430\u0437\u043e\u043c, \u0432 \u043d\u0430\u0448\u0435\u0439 \u0440\u0430\u0431\u043e\u0442\u0435 \u043c\u044b \u0441\u043e\u0437\u0434\u0430\u043b\u0438 \u0440\u0443\u0441\u0441\u043a\u043e\u044f\u0437\u044b\u0447\u043d\u044b\u0439 \u0441\u043f\u0438\u0441\u043e\u043a \u043e\u0446\u0435\u043d\u043e\u0447\u043d\u044b\u0445 \u0441\u043b\u043e\u0432 \u0434\u043b\u044f \u0448\u0438\u0440\u043e\u043a\u043e\u0439 \u043e\u0431\u043b\u0430\u0441\u0442\u0438 \u0442\u043e\u0432\u0430\u0440\u043e\u0432 \u0438 \u043f\u043e\u043a\u0430\u0437\u0430\u043b\u0438 \u0435\u0433\u043e \u043f\u043e\u043b\u0435\u0437\u043d\u043e\u0441\u0442\u044c \u0432 \u0437\u0430\u0434\u0430\u0447\u0430\u0445, \u0441\u0432\u044f\u0437\u0430\u043d\u043d\u044b\u0445 \u0441 \u043d\u0430\u0441\u0442\u0440\u043e\u0439\u043a\u043e\u0439 \u0441\u0438\u0441\u0442\u0435\u043c \u0430\u043d\u0430\u043b\u0438\u0437\u0430 \u0442\u043e\u043d\u0430\u043b\u044c\u043d\u043e\u0441\u0442\u0438 \u043d\u0430 \u043d\u043e\u0432\u0443\u044e \u043f\u0440\u0435\u0434\u043c\u0435\u0442\u043d\u0443\u044e \u043e\u0431\u043b\u0430\u0441\u0442\u044c. \u041c\u044b \u043f\u043b\u0430\u043d\u0438\u0440\u0443\u0435\u043c \u043e\u043f\u0443\u0431\u043b\u0438\u043a\u043e\u0432\u0430\u0442\u044c \u043f\u043e\u043b\u0443\u0447\u0435\u043d\u043d\u044b\u0439 \u0441\u043f\u0438\u0441\u043e\u043a \u043e\u0446\u0435\u043d\u043e\u0447\u043d\u044b\u0445 \u0441\u043b\u043e\u0432, \u0438 \u044d\u0442\u043e \u0431\u0443\u0434\u0435\u0442 \u043f\u0435\u0440\u0432\u044b\u0439 \u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u043e \u0434\u043e\u0441\u0442\u0443\u043f\u043d\u044b\u0439 \u0441\u043f\u0438\u0441\u043e\u043a \u043e\u0446\u0435\u043d\u043e\u0447\u043d\u043e\u0439 \u043b\u0435\u043a\u0441\u0438\u043a\u0438 \u0434\u043b\u044f \u0440\u0443\u0441\u0441\u043a\u043e\u0433\u043e \u044f\u0437\u044b\u043a\u0430.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Over the last few years a lot of efforts were made to solve sentiment analysis tasks in different domains. Automated approaches to sentiment analysis can be useful for state bodies and politicians, companies, and ordinary users. Most of these efforts concern English, where a lot of resources and tools for natural language processing and especially for sentiment analysis exist.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "One of the important tasks, considered as a basis for sentiment analysis of documents written in a specific language, is a creation of its sentiment lexicon (Abdul-Mageed et al., 2011; Peres-Rosas et al., 2012) .", |
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 184, |
| "text": "(Abdul-Mageed et al., 2011;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 185, |
| "end": 210, |
| "text": "Peres-Rosas et al., 2012)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Usually authors try to gather general sentiment lexicons for their languages. However a lot of researchers stress the differences between sentiment lexicons in specific domains. For example, \"must-see\" is a strongly opinionated word in the movie domain, but neutral in the digital camera domain (Blitzer et al., 2007) . For these reasons, supervised learning algorithms trained in one domain and applied to other domains demonstrate considerable decrease in the performance (Ponomareva & Thelwall, 2012; Read & Carroll, 2009; Taboada et al., 2011) .", |
| "cite_spans": [ |
| { |
| "start": 295, |
| "end": 317, |
| "text": "(Blitzer et al., 2007)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 474, |
| "end": 503, |
| "text": "(Ponomareva & Thelwall, 2012;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 504, |
| "end": 525, |
| "text": "Read & Carroll, 2009;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 526, |
| "end": 547, |
| "text": "Taboada et al., 2011)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To overcome this issue various adaptation methods are proposed, like ensembles of classifiers (Aue & Gamon, 2005) or graph-based approaches (Wu et al., 2009) . Nevertheless such approaches usually do not work well for domains whose lexicons differ significantly and recent studies are focused on bridging the gap between domainspecific words (Pan et al, 2010) . Indeed, sentiment lexicons adapted to a particular domain or topic have been shown to improve task performance in a number of applications, including opinion retrieval (Jijkoun et al., 2010) , and expression-level sentiment classification (Choi & Cardie, 2009) . In addition sentiment word extraction from a text collection enables to find slang and non-vocabulary words, which can be strong sentiment predictors.", |
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 113, |
| "text": "(Aue & Gamon, 2005)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 140, |
| "end": 157, |
| "text": "(Wu et al., 2009)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 342, |
| "end": 359, |
| "text": "(Pan et al, 2010)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 530, |
| "end": 552, |
| "text": "(Jijkoun et al., 2010)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 601, |
| "end": 622, |
| "text": "(Choi & Cardie, 2009)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Stressing the differences in sentiment lexicons between domains, one should understand that domains can form clusters of similar domains. So a lot of sentiment words relevant to various product domains are not relevant to the political domain or the general news domain and vice versa. For example, such words as evil or villain are not applicable to all product domains. Therefore we suppose that gathering a specialized sentiment lexicon for the product meta-domain can be useful for researchers and practitioners.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the current study we focus on the novel idea of construction of Russian sentiment lexicon for the product meta-domain. At this moment we should also emphasize that no publicly available Russian sentiment lexicon exists. Our method is based on training of the supervised algorithm for sentiment lexicon extraction in one domain and further transfer of the model to other domains. We show that in comparison with supervised sentiment classifiers, our sentiment lexicon extractor can be transferred to other domains if all necessary data are available. The trained sentiment lexicon extraction model is applied to an extensive number of domains and then extracted lexicons are summed up to the single list of sentiment words. So we obtain the generalized sentiment lexicon for the group of domains. We opt to focus on recognizing sentiment words without any polarity scores. It is pointed in the research papers that the two-stage approach is often beneficial, in which on the first stage we determine main sentiment bearers in a text and on the second stage classify them according to the polarity (Pang and Lee, 2008) . Thus such sentiment lexicons can be very useful for more accurate processing of user opinions.", |
| "cite_spans": [ |
| { |
| "start": 1099, |
| "end": 1119, |
| "text": "(Pang and Lee, 2008)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We evaluate the extracted general lexicon intrinsically, by manually labelling of word lists, and extrinsically, by transferring of sentiment classifiers based on our general lexicon to domains without any labelled data. The results demonstrate the effectiveness of our constructed general sentiment lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The reminder of this article is organized as follows. In Section 2 we observe state-of-theart methods for the sentiment lexicon generation, Section 3 describes the data collections and features involved in the model, in Section 4 we utilize our approach for four other domains and combine sentiment word vocabularies from all of them in Section 5. Finally, in Section 6 we conduct the experiments on the cross-domain sentiment classification involving extracted sentiment words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The related works can be divided into two categories: the creation of a sentiment lexicon for a specific language, and the creation of a sentiment lexicon for a specific domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "There are four main methods that are exploited by researchers to develop the sentiment lexicons for their languages: use of translated English sentiment resources, use of language-specific wordnets aligned to Princeton WordNet, use of corpora-based techniques similar to the techniques proposed for English sentiment lexicon extraction, use of electronic dictionaries of specific languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Creation of sentiment lexicons for specific languages", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In (Mihalcea et al., 2007) two methods for translating sentiment lexicons to Romanian are proposed. The first method uses bilingual dictionaries to translate an English sentiment lexicon gathered using OpinionFinder (Wiebe & Riloff, 2005) and obtain 4,983 Romanian sentiment words. The evaluation of randomly chosen units shows the percentage of the sentiment words in the list is around 50%; besides, the low coverage of existing Romanian sentiment expressions is revealed. The second method is based on parallel corpora. The corpus on the source language is annotated with sentiment information, and the information is then projected to the target language. The problems arise due to mistranslations, e.g. because irony is not recognized.", |
| "cite_spans": [ |
| { |
| "start": 3, |
| "end": 26, |
| "text": "(Mihalcea et al., 2007)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 216, |
| "end": 238, |
| "text": "(Wiebe & Riloff, 2005)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Creation of sentiment lexicons for specific languages", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Researchers in (Banea et al., 2008) propose to use a monolingual dictionary to acquire a sentiment lexicon from 60 manually selected seeds, equally sampled from verbs, nouns, adjectives and adverbs. To filter erroneous entries the LSA similarity measure is used.", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 35, |
| "text": "(Banea et al., 2008)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Creation of sentiment lexicons for specific languages", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In (Perez-Rosas et al., 2012) a method to derive Spanish lexicons by using manually or automatically annotated data available in English is presented. The multilingual senselevel aligned WordNet structure is used to generate a highly accurate (90%) polarity lexicon comprising 1,347 entries, and one with accuracy (74%) encompassing 2,496 words. (Clematide & Klenner, 2010) begin their work with German polarity lexicon from 8000 polarity words obtained from GermaNet, a WordNet-like lexical database. Revealing rather low coverage of German novels by polarity-bearing adjectives from this list, they expand the set of 2899 German sentiment adjectives extracting coordinated adjectives pairs similar to (Hatzivassiloglou & McKeown, 1997) .", |
| "cite_spans": [ |
| { |
| "start": 346, |
| "end": 373, |
| "text": "(Clematide & Klenner, 2010)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 703, |
| "end": 737, |
| "text": "(Hatzivassiloglou & McKeown, 1997)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Creation of sentiment lexicons for specific languages", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "To enhance the quality of dictionary-based methods for the general sentiment vocabulary generation in other languages, (Steinberger et al., 2011) create two source sentiment vocabularies: English (2400 entries) and Spanish (1737 entries). Both lists are translated by Google translator to the target language. Only overlapping entries from each translation are taken into further consideration. The set of target languages comprises six languages including Russian. The extracted Russian list of sentiment words contained 966 entries with accuracy of 94.9%.", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 145, |
| "text": "(Steinberger et al., 2011)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Creation of sentiment lexicons for specific languages", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In comparison with these approaches we create a Russian lexicon for a very broad domain -meta-domain of products and services, for which we do not use any dictionaries -only users' reviews, and in this paper we show usefulness of this general lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Creation of sentiment lexicons for specific languages", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In many studies domain-specific sentiment lexicons are created using various types of propagation from a seed set of words, usually a general sentiment lexicon (Kanayama & Nasukawa, 2007; Lau et al., 2011; Qiu et al., 2011) . In such approaches an important problem is to determine an appropriate seed lexicon for propagation, which can heavily influence the quality of the results. Besides, the propagation often lead to unclear for a human sentiment lists. So, for example, in (Lau et al., 2011) only 100 first obtained sentiment words were evaluated by experts, precision@100 was around 80%, what means that the intrinsic quality of the extracted 4000 lexicon (as announced in the paper) can be quite low.", |
| "cite_spans": [ |
| { |
| "start": 160, |
| "end": 187, |
| "text": "(Kanayama & Nasukawa, 2007;", |
| "ref_id": null |
| }, |
| { |
| "start": 188, |
| "end": 205, |
| "text": "Lau et al., 2011;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 206, |
| "end": 223, |
| "text": "Qiu et al., 2011)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 479, |
| "end": 497, |
| "text": "(Lau et al., 2011)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Development of sentiment lexicons for specific domains", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Another approaches apply statistical measures based on domain-specific corpora to extract domain-specific sentiment words: \u03c72 (Jijkoun et el., 2010) , divergence from randomness (DFR), which measures the divergence between a term's probability distribution in a set of relevant and opinionated documents and its probability distribution in a set of relevant documents (He et al., 2009) etc.", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 148, |
| "text": "(Jijkoun et el., 2010)", |
| "ref_id": null |
| }, |
| { |
| "start": 368, |
| "end": 385, |
| "text": "(He et al., 2009)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Development of sentiment lexicons for specific domains", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The sentiment lexicon extraction method proposed in this paper exploits a set of statistical and linguistic measures, which can characterize domain-specific sentiment words from different sides. We combine these features into a single model using machine learning methods. Then we train it on one domain and show that such a model can be effectively transferred to other domains for extraction of their sentiment lexicons.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Development of sentiment lexicons for specific domains", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In the current study a new supervised method for domain-specific sentiment lexicon extraction is presented. We train our model in one domain and then apply it to several others. Finally, we combine the extracted word lists to construct a general lexicon of sentiment words typical for products and services.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction of sentiment lexicon in a specific domain", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our approach is based on several text collections, which can be automatically formed for many domains, such as: a collection of product reviews with authors' evaluation scores, a text collection of product descriptions and a contrast corpus (for example, a general news collection). For each word in the review collection we calculate a set of linguistic and statistical features using the aforementioned collections and then apply machine learning algorithms for term classification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction of sentiment lexicon in a specific domain", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our method does not require any seed words, and is rather language-independent, however, lemmatization (or stemming) and part-of speech tagging are desirable. Working with Russian language, we use a dictionary-based morphological processor, including unknown word processing. Below in the text we will speak only about lemmatized words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction of sentiment lexicon in a specific domain", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We collected 28, 773 movie reviews of various genres from the online recommendation service www.imhonet.ru. For each review, user's score on a ten-point scale was extracted. We called this collection the review collection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data preparation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Nice and light comedy. There is something to laugh -exactly over the humour, rather than over the stupidity... Allows you to relax and gives rest to your head.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example of the movie review:", |
| "sec_num": null |
| }, |
| { |
| "text": "We also required a contrast collection of texts for our experiments. In this collection the concentration of opinions should be as little as possible. For this purpose, we collected 17, 680 movie descriptions. This collection was named the description collection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example of the movie review:", |
| "sec_num": null |
| }, |
| { |
| "text": "One more contrast corpus was a collection of two million news documents. We had calculated a document frequency of each word in this collection and used only this frequency list further. This list was named the news corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example of the movie review:", |
| "sec_num": null |
| }, |
| { |
| "text": "We suggested that it was possible to extract some fragments of reviews from the review collection that had higher concentration of sentiment words. These fragments may include:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Collections with higher concentration of opinions", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\uf0b7 Sentences ending with a \"!\"; \uf0b7 Sentences ending with a \"\u2026\"; \uf0b7 Short sentences, no more than seven word length;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Collections with higher concentration of opinions", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\uf0b7 Sentences containing the word \u00abmovie\u00bb without any other nouns.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Collections with higher concentration of opinions", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We called this collection the small collection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Collections with higher concentration of opinions", |
| "sec_num": "3.2" |
| }, |
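A minimal sketch of these sentence filters; the set of known nouns here is a toy stand-in for the POS tagger the authors would use, and the token handling is simplified:

```python
import re

def opinion_rich(sentence, nouns, topic_word="movie"):
    """Return True if a sentence matches one of the paper's heuristics:
    ends with '!' or '...', is at most seven words long, or contains the
    topic word and no other noun. `nouns` stands in for a POS tagger."""
    s = sentence.strip()
    if s.endswith("!") or s.endswith("..."):
        return True
    words = [w.lower() for w in re.findall(r"[\w']+", s)]
    if len(words) <= 7:
        return True
    other_nouns = [w for w in words if w in nouns and w != topic_word]
    return topic_word in words and not other_nouns

nouns = {"movie", "plot", "actor", "humour", "head"}
print(opinion_rich("What a disappointment!", nouns))  # True: exclamation
```

Sentences passing any of the filters form the small collection described above.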
| { |
| "text": "Our aim is to create a high quality list of sentiment words based on the combination of various discriminative features. We propose the following set of features for each word: We will consider some of them in more detail.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical features", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Frequency of capitalized words. The meaning of this feature is the frequency (in the review corpus) of each word starting with the capital letter and not located at the beginning of the sentence. With this feature we are trying to identify potential proper names, which are always neutral.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical features", |
| "sec_num": "3.3" |
| }, |
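A possible implementation sketch of this feature; tokenization is simplified and the counts are not normalized, whereas the real system works on lemmatized Russian text:

```python
import re
from collections import Counter

def capitalized_frequency(sentences):
    """Count capitalized tokens that do not open their sentence --
    a proxy for proper names, which are assumed neutral."""
    counts = Counter()
    for sent in sentences:
        tokens = re.findall(r"[A-Za-z']+", sent)
        for tok in tokens[1:]:          # skip the sentence-initial token
            if tok[0].isupper():
                counts[tok.lower()] += 1
    return counts

sents = ["Great acting by Depp .", "Depp saves this movie ."]
print(capitalized_frequency(sents))  # Counter({'depp': 1})
```

"Great" is not counted because its capitalization is explained by sentence position, while mid-sentence "Depp" is.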
| { |
| "text": "Weirdness. To calculate this feature two collections are required: one with high concentration of sentiment words and the other -contrast one. The main idea of this feature is that sentiment words will be \u00abstrange\u00bb in the contexts of the contrast collection. This feature is calculated as follows (Ahmad et al., 1999): where Ps(w) -probability of the word in a special corpus, Pg(w) -probability of the word in a general corpus. Here and further we consider maximum likelihood estimation of the probabilities. Instead of the collection frequency one can use the document frequency for the probability calculation.", |
| "cite_spans": [ |
| { |
| "start": 297, |
| "end": 318, |
| "text": "(Ahmad et al., 1999):", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical features", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Weirdness was calculated using the following collection pairs: opinion-news, opiniondescription, description-news with document frequency and small-description, opiniondescription with collection frequency. TFIDF. We use TFIDF variant described in (Callan et al., 1992) , based on BM25 function. We calculate TFIDF using the collection pairs: small-news, small-description, opinion-news, opinion-description, description-news.", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 269, |
| "text": "(Callan et al., 1992)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical features", |
| "sec_num": "3.3" |
| }, |
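The weirdness ratio can be sketched as follows, with invented counts; the small floor that avoids division by zero for words unseen in the contrast corpus is our assumption, not stated in the paper:

```python
from collections import Counter

def weirdness(word, special_counts, general_counts):
    """Weirdness(w) = Ps(w) / Pg(w): ratio of the word's maximum-likelihood
    probability in the opinionated corpus to its probability in the
    contrast corpus (Ahmad et al., 1999)."""
    ps = special_counts[word] / sum(special_counts.values())
    pg = general_counts.get(word, 0) / sum(general_counts.values())
    return ps / max(pg, 1e-9)   # floor avoids division by zero (assumption)

special = Counter({"gorgeous": 5, "movie": 10, "the": 85})
general = Counter({"gorgeous": 1, "government": 40, "the": 959})
print(weirdness("gorgeous", special, general))  # 50.0: "gorgeous" is rare in news
```

Evaluative words such as "gorgeous" score far higher than function words like "the", whose distribution is similar in both corpora.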
| { |
| "text": "As we mentioned above we had collected user's numerical score (on a ten point scale) for each review. Let C = {1\u202610} to be the set of rating categories in the review collection. First, we want to give some definitions, which we will use further. The probability of a rating category c given a word w:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "\uf0e5 \uf0ce \uf03d C c i i c w f c w f w c P ) , ( ) , ( ) | ( ii.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "The probability of a word w given a rating category c:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "\uf0e5 \uf0ce \uf03d c w i i c w f c w f c w P ) , ( ) , ( ) | ( Definition 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "An expected category for a given word:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": ") | ( ) | ( w c P c w c E i C c i i \uf0d7 \uf03d \uf0e5 \uf0ce", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "ii.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "An expected category in the review collection:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": ") ( ) ( i C c i c P c c E i \uf0d7 \uf03d \uf0e5 \uf0ce", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Using our definitions we suggest the following features:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Deviation from the average score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": ") ( ) | ( ) ( c E w c E w Dev \uf02d \uf03d", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "This feature can discriminate words appearing in a wide range of rating categories.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Word score variance. One more useful predictor is word score variance. If a word has small variance then it might be used in reviews with similar scores and has high probability to be a sentiment word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "2 2 ) | ( ) | ( ) ( w c E w c E w Var \uf02d \uf03d", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Scaled likelihood. To get some intuition about how likely a word is to appear in each sentiment class we define a scaled log-likelihood:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": ") ( ) | ( log ) ( w P c w P w Lhc \uf03d", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Scalability is required to be comparable between words. We have also added some features aggregating Lhc values like maximum and average.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rating-based features", |
| "sec_num": "3.4" |
| }, |
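The rating-based features above can be illustrated with a toy computation; the occurrence counts are invented (the real collection has 28,773 scored reviews):

```python
# f[w][c] = number of occurrences of word w in reviews with score c (toy data).
import math

f = {
    "masterpiece": {9: 6, 10: 14},
    "the":         {1: 10, 5: 10, 10: 10},
}
C = range(1, 11)

def p_c_given_w(w, c):
    """P(c|w) = f(w,c) / sum over c_i in C of f(w,c_i)."""
    total = sum(f[w].get(ci, 0) for ci in C)
    return f[w].get(c, 0) / total

def expected_category(w):
    """E(c|w): expected rating of reviews containing w."""
    return sum(ci * p_c_given_w(w, ci) for ci in C)

def variance(w):
    """Var(w) = E(c^2|w) - E(c|w)^2."""
    e2 = sum(ci * ci * p_c_given_w(w, ci) for ci in C)
    return e2 - expected_category(w) ** 2

def lhc(w, c):
    """Scaled log-likelihood log(P(w|c) / P(w))."""
    column = sum(fw.get(c, 0) for fw in f.values())
    p_w_given_c = f[w].get(c, 0) / column
    p_w = sum(f[w].values()) / sum(sum(fw.values()) for fw in f.values())
    return math.log(p_w_given_c / p_w) if p_w_given_c > 0 else float("-inf")

# "masterpiece" concentrates in high ratings: high E(c|w), low variance.
print(round(expected_category("masterpiece"), 2), round(variance("the"), 2))
```

The evaluative word has a much smaller score variance than the function word, matching the intuition behind the Var feature.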
| { |
| "text": "Some linguistic features were also added to our system because they can play crucial role in improving the sentiment lexicon extraction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Features", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "\uf0b7 Four binary features indicating the word part of speech (noun, verb, adjective and adverb)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Features", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "\uf0b7 Two binary features reflecting POS ambiguity (i.e. word can have various parts of speech depending on a context) and the feature indicating if this word is recognized by the POS tagger.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Features", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "\uf0b7 Predefined list of prefixes of a word (for example, Russian prefixes \"ne\", \"bes\", \"bez\" etc. similar to English \"un\", \"in\", \"im\" etc.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Features", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "The last feature is a strong predictor for words starting with negation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Features", |
| "sec_num": "3.5" |
| }, |
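A sketch of these morphological features assembled into a feature vector; the tag names and the transliterated prefix tuple are illustrative assumptions, and a real system would query the morphological processor for the tag set:

```python
# Russian negation-like prefixes transliterated, per the paper's example.
NEGATION_PREFIXES = ("ne", "bes", "bez")

def morph_features(word, pos_tags):
    """pos_tags: the set of possible POS tags for the word (empty if the
    tagger does not recognize it)."""
    return {
        "is_noun":             "noun" in pos_tags,
        "is_verb":             "verb" in pos_tags,
        "is_adjective":        "adjective" in pos_tags,
        "is_adverb":           "adverb" in pos_tags,
        "pos_ambiguous":       len(pos_tags) > 1,
        "known_to_tagger":     bool(pos_tags),
        "has_negation_prefix": word.lower().startswith(NEGATION_PREFIXES),
    }

# 'neplohoj' ('not bad', transliterated) starts with the negation prefix 'ne'.
print(morph_features("neplohoj", {"adjective"})["has_negation_prefix"])  # True
```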
| { |
| "text": "To train supervised machine learning algorithms we needed a set of labeled sentiment words. For our experiments we manually labeled words with the frequency greater than three in the movie review collection (18362 words). We marked up a word as a sentiment one in case we could imagine it in any opinion context in the movie domain. All words were tagged by two assessors. If there was a disagreement about the sentiment of a specific word, the collective judgment after discussion was used as a final ground truth. As a result of our assessment procedure we had obtained the list of 4079 sentiment words in the movie domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithms and evaluation", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "We solved the two class classification problem: to separate all words into sentiment and neutral categories. For this purpose Weka 1 data mining tool was used. We considered the following algorithms: Logistic Regression, LogitBoost and Random Forest. All parameters in the algorithms were set to their default values. For each experiment 10 fold cross-validation was used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithms and evaluation", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "Using this algorithms we obtained word lists, ordered by the predicted probability of their opinion orientation. To measure the quality of these lists the Precision@n metric was used. This metric was very convenient for measuring the quality of list combinations and it could be used with different thresholds. To compare quality of the algorithms in different domains we chose n = 1000. This level was not too large for the manual labeling and demonstrated the quality in an appropriate way.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithms and evaluation", |
| "sec_num": "3.6" |
| }, |
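The Precision@n computation just described can be sketched in a few lines (a minimal illustration with a hypothetical ranked list and gold set, not the authors' evaluation code):

```python
def precision_at_n(ranked_words, gold_sentiment, n=1000):
    """Fraction of the top-n ranked words that are true sentiment words."""
    top = ranked_words[:n]
    return sum(1 for w in top if w in gold_sentiment) / len(top)

# Toy example with n = 4: two of the top four ranked words are sentiment words.
ranked = ["great", "movie", "awful", "screen", "boring"]
gold = {"great", "awful", "boring"}
print(precision_at_n(ranked, gold, n=4))  # 0.5
```

Because the metric depends only on the top of the list, it can be recomputed cheaply at several thresholds n from a single ranking.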
| { |
| "text": "The results of classification are in Table 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 37, |
| "end": 44, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Algorithms and evaluation", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "75.7% 75.3% 72.4% 81.5%", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Logistic Regression LogitBoost Random Forest Average", |
| "sec_num": null |
| }, |
| { |
| "text": "We noticed that the lists of sentiment words extracted by the algorithms differ significantly. So we decided to average word probability values in these three lists. The result of this summation can be found in the last column of the Table 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 234, |
| "end": 241, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "TABLE 1 -Precision@1000 of word classification", |
| "sec_num": null |
| }, |
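The probability-averaging step can be sketched as follows (hypothetical probability values standing in for the outputs of Logistic Regression, LogitBoost and Random Forest):

```python
# Predicted sentiment probabilities for the same words from three classifiers
# (toy values; the paper's classifiers were trained in Weka).
probs = {
    "great":  [0.95, 0.90, 0.85],
    "screen": [0.10, 0.20, 0.15],
    "boring": [0.80, 0.85, 0.90],
}

def averaged_ranking(prob_lists):
    """Average each word's probabilities and return words sorted by the mean."""
    avg = {w: sum(p) / len(p) for w, p in prob_lists.items()}
    return sorted(avg, key=avg.get, reverse=True)

print(averaged_ranking(probs))  # ['great', 'boring', 'screen']
```

Averaging rewards words that all three classifiers agree on, which is why the combined list in Table 1 outperforms each individual algorithm.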
| { |
| "text": "As the baseline for our experiments we used the lists ordered by frequency in the review collection and deviation from the average score. Precision@1000 in these lists was 26.9% and 35.5% accordingly. Thus our algorithms gave significant improvements over the baselines. All the other features can be found in Table 2 . ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 310, |
| "end": 317, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "TABLE 1 -Precision@1000 of word classification", |
| "sec_num": null |
| }, |
| { |
| "text": "In the previous section we described the construction of the sentiment lexicon extraction model for the movie domain. The next step of the current research is utilizing this model in four other domains and combining obtained results to form a general sentiment lexicon for the product meta-domain. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model adaptation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We collected 2 data in the four domains: books, computer games, mobile phones and digital cameras. The structure of the datasets is the same as for movie domain. Data collection characteristics for each domain can be found in Table 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 226, |
| "end": 233, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Additional datasets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In further experiments we use the same news corpus as for movie domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Additional datasets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For all words in a particular field (excluding low frequent ones) we computed feature vectors (see Sections 3.3-3.5) and constructed a domain word-feature matrix. We applied our classification model, which was trained in the movie domain, to these wordfeature matrixes and manually evaluated the first thousand of the most probable sentiment words in each domain. The results of the evaluation are in Table 4 . Despite the drop in some other domains the quality of sentiment word extraction continues to be much higher than the quality level of single features (Table 2) . So we can conclude that the sentiment lexicon extraction model is robust enough to be transferred to other domains.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 401, |
| "end": 408, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 561, |
| "end": 570, |
| "text": "(Table 2)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model utilization and evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To construct the general sentiment lexicon for products and services we combine sentiment word lists from five domains. We want to boost words that occur in many different domains and have high weights in each of them. We propose the following function for the word weight in the resulting list:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Developing the Russian lexicon for product meta-domain", |
| "sec_num": "5" |
| }, |
| { |
| "text": "where D -is the domain set with five domains, d is the sentiment word list for a particular domain and d is the total number of words in this list. Functions probd(w) and posd(w) are the sentiment probability and position of the word in the list d .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Developing the Russian lexicon for product meta-domain", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The Precision@1000 of the obtained sentiment word list is 91.4%. The inter-rater agreement between the two Russian annotators is measured at 0.84 (\u03ba = 0.63).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Developing the Russian lexicon for product meta-domain", |
| "sec_num": "5" |
| }, |
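As a back-of-envelope check of the reported agreement figures, Cohen's kappa relates observed agreement p_o and chance agreement p_e via kappa = (p_o - p_e) / (1 - p_e); with p_o = 0.84, a kappa of 0.63 implies chance agreement around 0.57 (the value 0.568 below is our assumption for illustration, not a figure from the paper):

```python
def cohen_kappa(p_observed, p_chance):
    """Cohen's kappa from observed and chance agreement rates."""
    return (p_observed - p_chance) / (1.0 - p_chance)

# With the reported observed agreement of 0.84, kappa = 0.63 corresponds
# to chance agreement of roughly 0.568 (a purely illustrative consistency check).
print(round(cohen_kappa(0.84, 0.568), 2))  # 0.63
```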
| { |
| "text": "\uf0e5 \uf0ce \uf0ce \uf0f7 \uf0f7 \uf0f8 \uf0f6 \uf0e7 \uf0e7 \uf0e8 \uf0e6 \uf02d \uf0d7 \uf0d7 \uf03d D d d D d d d w pos D w prob w R ) ( 1 1 )) ( max( ) (", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Developing the Russian lexicon for product meta-domain", |
| "sec_num": "5" |
| }, |
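A literal reading of the combination formula can be sketched as follows (toy two-domain lists; the zero-based position convention, under which the top-ranked word gets the full positional score, is our assumption):

```python
def combined_weight(word, domain_lists):
    """R(w): max per-domain probability times the averaged positional score.

    domain_lists maps each domain to a list of (word, prob) pairs ordered by
    decreasing sentiment probability. A word absent from a domain's list
    contributes nothing to the sum for that domain.
    """
    max_prob, pos_sum = 0.0, 0.0
    for ranked in domain_lists.values():
        for pos, (w, prob) in enumerate(ranked):
            if w == word:
                max_prob = max(max_prob, prob)
                pos_sum += 1.0 - pos / len(ranked)
                break
    return max_prob * pos_sum / len(domain_lists)

# Two toy domains; "great" sits at the top of both lists.
domains = {
    "movies": [("great", 0.9), ("boring", 0.8), ("screen", 0.1)],
    "books":  [("great", 0.85), ("plot", 0.2)],
}
print(round(combined_weight("great", domains), 3))  # 0.9
```

The positional term boosts words that appear high in many domain lists, which is exactly the stated goal of the combination.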
| { |
| "text": "As a baseline for our method of construction of the general sentiment lexicon for product meta-domain, we take the combined weirdness list (review -descr) as rather simple, but high quality one. We construct it from weirdness lists in the same manner as described in the beginning of the section. The Precision@n plots of the extracted lexicon and weirdness list combination are depicted on Figure 1 . This meta-domain list of sentiment words consists of words really used in users' reviews and its creation does not require any dictionary resources. We plan to make it available for further research in sentiment analysis of Russian texts.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 391, |
| "end": 399, |
| "text": "Figure 1", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Developing the Russian lexicon for product meta-domain", |
| "sec_num": "5" |
| }, |
| { |
| "text": "6 Lexicon evaluation on the cross-domain sentiment classification task", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Developing the Russian lexicon for product meta-domain", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To evaluate usefulness of our meta-domain sentiment list we test it in the cross-domain sentiment classification task as described for example in (Blitzer et al., 2007; Bollegala et al., 2011; Pan et al., 2010) . In these studies the dataset consisting of Amazon product reviews for four different product types (books (B), DVDs (D), electronics (E) and kitchen appliances (K)) is used. There are 1000 positive and 1000 negative reviews selected randomly and labeled for each domain. Domain-adaptation algorithms are trained on the one domain (source domain) and tested on the other domain (target domain).", |
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 168, |
| "text": "(Blitzer et al., 2007;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 169, |
| "end": 192, |
| "text": "Bollegala et al., 2011;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 193, |
| "end": 210, |
| "text": "Pan et al., 2010)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We do not compare our approach with these approaches because we do not make any efforts to adapt a classifier to a new domain. We use the similar setup to show the generalization abilities of the sentiment word lists. In these experiments we try to demonstrate the influence of our meta-domain list on the sentiment classification quality in a new domain without any labeled data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "So we randomly take 1000 positive and 1000 negative labeled Russian reviews from four domains: movies (M), books (B), mobile phones (P) and digital cameras (C). The reviews with user's score 9-10 are considered as positive and reviews with authors' score 1-4 are considered as negative.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Taking pairs of the domains, we train a sentiment classifier in one domain (source domain) and then transfer the classifier to the other domain (target domain). We treat a review text as a bag-of-words and use the following features for classification: In this task we utilize the LIBLINEAR realization of the support vector machine (SVM) classification algorithm with the default parameter values.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Additionally we include TFIDF weights for each feature, as it is pointed to give higher quality of the classification in comparison with the binary weights and we also take into account the polarity influencers, which can revert or magnify the polarity of the following words. The specific details can be found in (Chetviorkin & Loukachevitch, 2011) .", |
| "cite_spans": [ |
| { |
| "start": 314, |
| "end": 349, |
| "text": "(Chetviorkin & Loukachevitch, 2011)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "6.1" |
| }, |
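One plausible reading of the lexicon-based feature sets is a bag of words restricted to a given sentiment lexicon; a toy sketch (hypothetical review and lexicon, with TF-IDF weighting and polarity influencers omitted for brevity):

```python
def lexicon_features(review_tokens, lexicon):
    """Bag-of-words counts keeping only tokens found in the sentiment lexicon."""
    counts = {}
    for tok in review_tokens:
        if tok in lexicon:
            counts[tok] = counts.get(tok, 0) + 1
    return counts

lexicon = {"great", "boring", "awful"}
review = "the plot was boring but the cast was great great".split()
print(lexicon_features(review, lexicon))  # {'boring': 1, 'great': 2}
```

In the actual setup these counts would then be TF-IDF weighted and fed to the LIBLINEAR SVM.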
| { |
| "text": "We performed experiments with the proposed feature sets on the 9 domain pairs: B\u2192C, M\u2192C, P\u2192C, B\u2192P, M\u2192P, C\u2192P, M\u2192B, P\u2192B, C\u2192B where the letter before an arrow corresponds with the source domain and the letter after an arrow corresponds with the target domain. We do not consider cross-domain sentiment classification with the movie domain as a target one, because we manually labeled and trained the sentiment word extraction model in it, and the results of the classification can be unclear.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "For domain specific and general sentiment lexicons we explored different word quantity thresholds: {1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000} and report the results with each of them (see Figure 2 and 3).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 207, |
| "end": 215, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We denote by ) , , ( Thus we can define the main measure in the current experiment:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metrics", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": ") , , ( ) , , ( ) , , ( FL T S A L T S A L T S \uf02d \uf03d \uf044", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metrics", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "This is the difference between the accuracy obtained with the lexicon L and baseline lexicon FL, during the transfer from source domain S to target domain T. We also use the averaged variant of this measure:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metrics", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "In our case 9 \uf03d D .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metrics", |
| "sec_num": "6.2" |
| }, |
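The per-pair and averaged measures defined above can be sketched directly (the accuracy values below are hypothetical, not the paper's results):

```python
def delta(acc_with_L, acc_with_FL):
    """Per-pair improvement: accuracy with lexicon L minus the FL baseline."""
    return acc_with_L - acc_with_FL

def averaged_delta(pairs):
    """Mean of delta over all (source, target) domain pairs."""
    return sum(delta(a_l, a_fl) for a_l, a_fl in pairs) / len(pairs)

# Hypothetical (A_L, A_FL) accuracies for three of the nine domain pairs.
pairs = [(0.80, 0.78), (0.75, 0.74), (0.82, 0.79)]
print(round(averaged_delta(pairs), 3))  # 0.02
```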
| { |
| "text": "We report all results in this section using first 4000 words in the general lexicon and domain specific lexicons. This is the maximum amount of words with rather reliable intrinsic precision values ~70% in the general lexicon (see Section 5). We also provide the results of cross-domain sentiment classification quality with the other threshold values in general and domain specific lexicons on the Figure 2 and 3.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 399, |
| "end": 407, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Main results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "On all tasks the general sentiment lexicon performs on bar or better than the other feature sets. In Table 5 and Table 6 , we summarize the comparison results of crossdomain classification using different feature sets. \uf044 B->C M->C P->C B->P M->P C->P M->B P->B The results demonstrate the effectiveness of the general meta-domain sentiment lexicon.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 101, |
| "end": 120, |
| "text": "Table 5 and Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Main results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "In the Table 6 one can see that for some domain pairs our lexicons show significantly better results than the baseline. The average difference over all domain pairs between FL (baseline) and GL is 1.76%.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 7, |
| "end": 14, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A B->C M->C P->C B->P M->P C->P M->B P->B", |
| "sec_num": null |
| }, |
| { |
| "text": "In some domain pairs the difference is very small or even negative. We connect this issue with the similarity of the domain lexicons in general (Ponomareva & Thelwall, 2012) and sentiment lexicons in particular. Sometimes sentiment words from one domain can be utilized in the other one, but not vice versa.", |
| "cite_spans": [ |
| { |
| "start": 144, |
| "end": 173, |
| "text": "(Ponomareva & Thelwall, 2012)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A B->C M->C P->C B->P M->P C->P M->B P->B", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\uf0e5 \uf0ce \uf044 \uf03d \uf044 D T S L T S D L ) , ( ) , ,", |
| "eq_num": "( 1 )" |
| } |
| ], |
| "section": "A B->C M->C P->C B->P M->P C->P M->B P->B", |
| "sec_num": null |
| }, |
| { |
| "text": "We suppose that such a general lexicon for the product meta-domain can serve as a good source of sentiment seed words to generate domain-specific vocabularies in a lot of specific domains. FIGURE 2 -The dependence of the classification quality on the threshold in the general lexicon FIGURE 3 -The dependence of the classification quality on the threshold in the domain specific lexicons", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we described a method for sentiment lexicon extraction for any domain on the basis of several domain-specific text collections. We utilized our algorithm in different domains and showed that it had good generalization abilities. We combined sentiment lexicons from various domains and constructed the general meta-domain sentiment lexicon for products and services. This lexicon was evaluated intrinsically, with P@1000 = 91.4% and extrinsically in the cross-domain classification task. The sentiment classification algorithm based on the meta-domain sentiment lexicon outperformed all baselines and proved usefulness of the constructed resource. Besides, this meta-lexicon can be a useful source of sentiment seeds for sentiment lexicon extraction in new domains of products and services.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and perspectives", |
| "sec_num": null |
| }, |
| { |
| "text": "We extracted such a general lexicon for Russian language, for which sentiment analysis resources practically do not exist. We plan to make our general lexicon for the product meta-domain publicly available.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and perspectives", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.cs.waikato.ac.nz/ml/weka/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Review data collections in the book and digital camera domains are obtained from Russian Seminar of Information Retrieval Methods (www.romip.ru)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work is partially supported by RFBR grant N11-07-00588-\u0430.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Subjectivity and Sentiment Analysis of Modern Standard Arabic", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Abdul-Mageed", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Diab", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Korayem", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49 th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "587--591", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abdul-Mageed M., Diab M., Korayem M. (2011). Subjectivity and Sentiment Analysis of Modern Standard Arabic. In Proceedings of the 49 th Annual Meeting of the Association for Computational Linguistics, number 3, pp. 587-591.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "University of Surrey participation in Trec8: Weirdness indexing for logical documents extrapolation and retrieval In", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Ahmad", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Gillam", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Tostevin", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "the Proceedings of Eigth Text Retrieval Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ahmad K., Gillam L., Tostevin L. (1999). University of Surrey participation in Trec8: Weirdness indexing for logical documents extrapolation and retrieval In the Proceedings of Eigth Text Retrieval Conference (Trec-8).", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Customizing sentiment classifiers to new domains: A case study", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Aue", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Gamon", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "International Conference on Recent Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aue A. and Gamon M. (2005). Customizing sentiment classifiers to new domains: A case study. In International Conference on Recent Advances in Natural Language Processing, Borovets, BG.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Multilingual subjectivity analysis using machine translation", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Banea", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Hassan", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Banea C., Mihalcea R., Wiebe J. and Hassan S. (2008). Multilingual subjectivity analysis using machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Blitzer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL 2007", |
| "volume": "", |
| "issue": "", |
| "pages": "440--447", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Blitzer J., Dredze M., Pereira F. (2007) Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of ACL 2007, pp. 440-447.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Bollegala", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Weir", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carroll", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "132--141", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bollegala D., Weir D. and Carroll J. (2011) Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, Oregon. pp. 132-141.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The INQUERY Retrieval System", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "P" |
| ], |
| "last": "Callan", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "B" |
| ], |
| "last": "Croft", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "M" |
| ], |
| "last": "Harding", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of 3rd International Conference on Database and Expert Systems Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "78--93", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Callan J.P., Croft W.B., Harding S.M. (1992). The INQUERY Retrieval System. In Proceedings of 3rd International Conference on Database and Expert Systems Applications / A.M. Tjoa and I. Ramos (eds.). -Springer Verlag, New York, pp.78-93.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Three-way movie review classification", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Chetviorkin", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Loukachevitch", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the International Conference on Computational Linguistics Dialog", |
| "volume": "", |
| "issue": "", |
| "pages": "177--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chetviorkin I. and Loukachevitch N. (2011). Three-way movie review classification. In Proceedings of the International Conference on Computational Linguistics Dialog, pp 177-186.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Adapting a polarity lexicon using integer linear programming for domain-specific sentiment classification", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "590--598", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Choi Y. and Cardie C. (2009). Adapting a polarity lexicon using integer linear programming for domain-specific sentiment classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 590-598.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Evaluation and extension of a polarity lexicon for German", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Clematide", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Klenner", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "WASSA-workshop held in conjunction with ECAI-2010", |
| "volume": "", |
| "issue": "", |
| "pages": "7--13", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Clematide S., Klenner S. (2010) Evaluation and extension of a polarity lexicon for German. In WASSA-workshop held in conjunction with ECAI-2010, pp 7-13.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Predicting the semantic orientation of adjectives", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Hatzivassiloglou", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "R" |
| ], |
| "last": "Mckeown", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of ACL-97", |
| "volume": "", |
| "issue": "", |
| "pages": "174--181", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hatzivassiloglou V. and McKeown K. R. (1997). Predicting the semantic orientation of adjectives. In Proceedings of ACL-97, pp. 174-181, Madrid, ES.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "An effective statistical approach to blog post opinion retrieval", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Macdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Ounis", |
| "middle": [ |
| "I" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 17th ACM CIKM", |
| "volume": "", |
| "issue": "", |
| "pages": "1063--1072", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "He B., Macdonald C., He J., and Ounis I. (2009). An effective statistical approach to blog post opinion retrieval. In Proceedings of the 17th ACM CIKM, pp. 1063-1072.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Generating focused topic-specific sentiment lexicons", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Jijkoun", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "De Rijke", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Weerkamp", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of ACL '10", |
| "volume": "", |
| "issue": "", |
| "pages": "585--594", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jijkoun V., de Rijke M. and Weerkamp W. (2010). Generating focused topic-specific sentiment lexicons. In Proceedings of ACL '10, pp. 585-594.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Fully automatic lexicon expansion for domainoriented sentiment analysis", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Kanayama", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Nasukawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "EMNLP '06", |
| "volume": "", |
| "issue": "", |
| "pages": "355--363", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kanayama H. and Nasukawa T. (2006). Fully automatic lexicon expansion for domain- oriented sentiment analysis. In EMNLP '06, pp. 355-363, Morristown, NJ, USA.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Pseudo Labeling for Scalable Semisupervised Learning of Domain-specific Sentiment Lexicons", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Lau", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Lai", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Bruza", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Wong", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "20th ACM Conference on Information and Knowledge Management", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lau R., Lai C., Bruza P. and Wong K. (2011). Pseudo Labeling for Scalable Semi- supervised Learning of Domain-specific Sentiment Lexicons. In 20th ACM Conference on Information and Knowledge Management.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Learning multilingual subjective language via cross-lingual projections", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Banea", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45 th Annual Meeting of the Association of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "976--983", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihalcea R., Banea C. and Wiebe J. (2007). Learning multilingual subjective language via cross-lingual projections. In Proceedings of the 45 th Annual Meeting of the Association of Computational Linguistics, pp. 976-983, Prague, Czech Republic.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Cross-Domain Sentiment Classification via Spectral Feature Alignment", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "J" |
| ], |
| "last": "Pan", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Ni", |
| "suffix": "" |
| }, |
| { |
| "first": "J-T", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [ |
| "Q" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the World Wide Web Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "751--760", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pan S. J., Ni X., Sun J-T, Yang Q. and Chen Z. (2010). Cross-Domain Sentiment Classification via Spectral Feature Alignment. In Proceedings of the World Wide Web Conference. pp. 751-760, New York, USA.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Opinion mining and sentiment analysis. Foundations and Trends\u00ae in Information Retrieval", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pang B., Lee L. (2008). Opinion mining and sentiment analysis. Foundations and Trends\u00ae in Information Retrieval. Now Publishers.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Learning Sentiment Lexicons in Spanish", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Perez-Rosas", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Banea", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Perez-Rosas V., Banea C. and Mihalcea R. (2012). Learning Sentiment Lexicons in Spanish. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12).", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Bibliographies or blenders: Which resource is best for cross-domain sentiment analysis?", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Ponomareva", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Thelwall", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 13th Conference on Intelligent Text Processing and Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ponomareva N. and Thelwall M. (2012). Bibliographies or blenders: Which resource is best for cross-domain sentiment analysis? In Proceedings of the 13th Conference on Intelligent Text Processing and Computational Linguistics.",
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Opinion word expansion and target extraction through double propagation", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Qiu", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Bu", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qiu G., Liu B., Bu J. and Chen C. (2011). Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1).", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Weakly Supervised techniques for domain independent sentiment classification", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Read", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carroll", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the first International CIKM Workshop on Topic-Sentiment Analysis for Mass Opinion Measurement", |
| "volume": "", |
| "issue": "", |
| "pages": "45--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Read J., Carroll J. (2009). Weakly Supervised techniques for domain independent sentiment classification. In Proceedings of the first International CIKM Workshop on Topic-Sentiment Analysis for Mass Opinion Measurement, pp. 45-52.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Creating Sentiment Dictionaries via Triangulation", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Steinberger", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Lenkova", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ebrahim", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ehrmann", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Hurriyetogly", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Kabadjov", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Steinberger", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Tanev", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Zavarella", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vazquez", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis", |
| "volume": "", |
| "issue": "", |
| "pages": "28--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steinberger J., Lenkova P., Ebrahim M., Ehrmann M., Hurriyetogly A., Kabadjov M., Steinberger R., Tanev H., Zavarella V. and Vazquez S. (2011). Creating Sentiment Dictionaries via Triangulation. In Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis, ACL-HLT 2011, pp. 28-36.",
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Lexicon-based methods for Sentiment Analysis", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Taboada", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Brooke", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Tofiloski", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Voll", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Stede", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Computational linguistics", |
| "volume": "", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taboada M., Brooke J., Tofiloski M., Voll K. and Stede M. (2011). Lexicon-based methods for Sentiment Analysis. Computational linguistics, 37(2).", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Creating subjective and objective sentence classifiers from unannotated texts", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of CICLing 2005", |
| "volume": "", |
| "issue": "", |
| "pages": "486--497", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wiebe J. and Riloff E. (2005). Creating subjective and objective sentence classifiers from unannotated texts. In Proceedings of CICLing 2005. pp. 486-497.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Graph ranking for sentiment transfer", |
| "authors": [ |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of ACL-IJCNLP 2009", |
| "volume": "", |
| "issue": "", |
| "pages": "317--320", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wu Q., Tan S. and Cheng X. (2009). Graph ranking for sentiment transfer. In Proceedings of ACL-IJCNLP 2009, pp. 317-320.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "Deviation from the average score; word score variance; sentiment category likelihood for each (word, category) pair",
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "Precision@n depending on #wordsThe first ten most probable sentiment words are: bespodobniy (matchless), kleviy (cool), obaldenniy (astounding), neponiatniy (incomprehensible), neprivichniy (unusual), srednenkiy (mediocre), posredstvenniy (moderate), neploho (not bad), otlichneishiy (splendiferous), nenuzhniy(unnecessary). This sentiment lexicon is clean enough to be used in various sentiment analysis tasks.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "text": "All frequent words of the source domain (Full List); sentiment words from the generated sentiment lexicon of the source domain (Source Domain Lexicon); words from the meta-domain sentiment lexicon, excluding the sentiment vocabulary of the target domain during the extraction (General Lexicon).",
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "text": "Let A denote the accuracy obtained during the transfer from source domain S to target domain T of the sentiment classifier trained using the lexicon L. The main point of comparison in the current research is the accuracy obtained by the baseline lexicon, i.e. all frequent words from the source domain.",
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF0": { |
| "html": null, |
| "num": null, |
| "text": "Let us look at some examples of sentiment words with high probability values in the",
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"3\">sum list: Trogatel'nyi (affective), otstoi (trash), fignia (crap), otvratitel'no</td></tr><tr><td colspan=\"3\">(disgustingly), posredstvenniy (satisfactory), predskazuemyi (predictable), ljubimyj</td></tr><tr><td>(love) etc.</td><td/><td/></tr><tr><td>Feature</td><td>Collection</td><td>Precision @1000</td></tr><tr><td>TFIDF</td><td>small -news</td><td>38.5%</td></tr><tr><td>TFIDF</td><td>small -descr</td><td>36.4%</td></tr><tr><td>TFIDF</td><td>review -news</td><td>30.5%</td></tr><tr><td>TFIDF</td><td>review -descr</td><td>39.8%</td></tr><tr><td>Weirdness</td><td>review -news (doc. count)</td><td>31.7%</td></tr><tr><td>Weirdness</td><td>review -descr (doc. count)</td><td>48.1%</td></tr><tr><td>Weirdness</td><td>small -descr (frequency)</td><td>49.1%</td></tr><tr><td>Weirdness</td><td>review -descr (frequency)</td><td>46.6%</td></tr><tr><td>Dev</td><td>review</td><td>35.5%</td></tr><tr><td>Var</td><td>review</td><td>21.5%</td></tr><tr><td>Lhc</td><td>review</td><td>33.0%</td></tr><tr><td>Frequency</td><td>review</td><td>26.9%</td></tr><tr><td>Frequency</td><td>small</td><td>31.9%</td></tr><tr><td>Document Frequency</td><td>review</td><td>27.8%</td></tr><tr><td colspan=\"3\">TABLE 2 -Precision@1000 for different features</td></tr></table>" |
| }, |
| "TABREF2": { |
| "html": null, |
| "num": null, |
| "text": "The characteristics of the data collections", |
| "type_str": "table", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |