{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:55.318485Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "The papers of these proceedings have been presented at the 17th edition of KONVENS (Konferenz zur Verarbeitung nat\u00fcrlicher Sprache/Conference on Natural Language Processing). KONVENS is a conference series on computational linguistics established in 1992 that was held biennially until 2018 and has been held annually since. KONVENS is organized under the auspices of the German Society for Computational Linguistics and Language Technology, the Special Interest Group on Computational Linguistics of the German Linguistic Society and the Austrian Society for Artificial Intelligence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The 17th KONVENS took place from September 6 to September 9, 2021 at Heinrich Heine University D\u00fcsseldorf. Due to the COVID-19 pandemic situation, KONVENS was held as a hybrid event in order to allow both speakers and regular participants to attend the conference either on-site or online. The special theme of this year's meeting was Deep Linguistic Modeling. The KONVENS main conference was accompanied by two workshops, three shared task meetings, and a 'PhD Day'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Many thanks to all who submitted their work to KONVENS and to our board of reviewers for supporting us greatly with evaluating the submissions. Moreover we would like to thank Heinrich Heine University D\u00fcsseldorf for providing the conference rooms and all people from the CL department in D\u00fcsseldorf who made the conference possible. Our special thanks go to Tobias Koch from the 'Multimediazentrum' for his generous technical support. Humans learn to understand speech from weak and noisy supervision: they extract structure and meaning from speech by simply being exposed to utterances situated and grounded in their daily sensory experience. Emulating this remarkable skill has been the goal of numerous studies; however, researchers have often used severely simplified settings where either the language input or the extralinguistic sensory input, or both, are small-scale and symbolically represented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Recently, deep neural network models have been successfully used for visually grounded language understanding, where representations of images are mapped to those of their written or spoken descriptions. Despite their high performance, these architectures come at a cost: we know little about the type of linguistic knowledge these models capture from the input signal in order to perform their target task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "I present a series of studies on modelling visually grounded language learning and analyzing the emergent linguistic representations in these models. Using variations of recurrent neural networks to model the temporal nature of spoken language, we examine how form and meaning-based linguistic knowledge emerges from the input signal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Deep generative models with latent variables have become a major focus n of NLP research over the past several years. These models have been used both for generating text and as a way of learning latent representations of text for downstream tasks. While much previous work uses continuous latent variables, discrete variables are attractive because they are more interpretable and typically more space efficient. In this talk we consider learning discrete latent variable models with Quantized Variational Autoencoders, and show how these can be ported to two NLP tasks, namely opinion summarization and paraphrase generation for questions. For the first task, we provide a clustering interpretation of the quantized space and a novel extraction algorithm to discover popular opinions among hundreds of reviews, while for the second task we show that a principled information bottleneck leads to an encoding space that separately represents meaning and surface from, thereby allowing us to generate syntactically varied paraphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mirella Lapata: Summarization and Paraphrasing in Quantized Transformer Spaces",
"sec_num": null
},
{
"text": "The quantitative turn at the beginning of the 21st century has drastically changed the field of comparative linguistics. Had individual genious and expert insights dominated historical linguistics in the past, we now find many studies by interdisciplinary teams who use complex computational techniques to investigate the history of invididual language families based on large amounts of data. Had the identification of linguistic universals in hand-crafted language samples dominated linguistic typology for a long time, scholars now use large cross-linguistic databases to investigate dependencies among linguistic and nonlinguistic variables with the help of complex statistical models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Johann-Mattis List: Chances and Challenges for Computational Comparative Linguistics in the 21st Century",
"sec_num": null
},
{
"text": "However, despite a period of more than two decades in which quantitative approaches have been increasingly used in comparative linguistics, gaining constantly more popularity even among predominantly qualitatively oriented linguists, we still find many problems, which have only sporadically been addressed. In the talk, I will present three of these so far unsolved problems, which I find particularly important for the future of the field of comparative linguistics. These are: (1) the problem of modeling and comparing sound change patterns across the languages of the world; (2) the problem of identifying cross-linguistic patterns of semantic change, and (3) the problem of estimating the borrowability of linguistic traits across languages and times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Johann-Mattis List: Chances and Challenges for Computational Comparative Linguistics in the 21st Century",
"sec_num": null
},
{
"text": "While none of these problems has been solved so far, I will argue that substantial progress on their solution can be made by improving the integration of cross-linguistic data and by developing dedicated problem-solving strategies in computational linguistics which take the specifics of cross-linguistic data and language evolution into account. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Johann-Mattis List: Chances and Challenges for Computational Comparative Linguistics in the 21st Century",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {},
"ref_entries": {
"TABREF0": {
"text": "Mattis List, Max Planck Institute for the Science of Human History Jena Satellite Events 1st Workshop on Computational Linguistics for Political Text Analysis Organizers: Ines Rehbein, Goran Glava\u0161, Simone Ponzetto, Gabriella Lapesa Workshop on Multimodal and Multilingual Hate Speech Detection Organizers: \u00d6zge Ala\u00e7am, Seid Muhie Yimam Shared Task on Scene Segmentation (STSS) Organizers: Albin Zehe, Leonard Konle, Lea D\u00fcmpelmann, Evelyn Glus, Svenja Guhr, Andreas Hotho, Fotis Jannidis, Lucas Kaufmann, Markus Krug, Frank Puppe, Nils Reiter, Annekea Schreiber GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments",
"content": "<table><tr><td>People</td></tr><tr><td>Local Organization:</td></tr><tr><td>Tatiana Bladier, Rafael Ehren, Kilian Evang, Laura Kallmeyer, Rainer Osswald, Esther Seyffarth</td></tr><tr><td>Program Chairs:</td></tr><tr><td>Laura Kallmeyer, Rainer Osswald, Jakub Waszczuk, Torsten Zesch</td></tr><tr><td>Program Committee:</td></tr><tr><td>Adrien Barbaresi, Felix Bildhauer, Marcel Bollmann, Ernst Buchberger, Stephan Busemann, Miriam</td></tr><tr><td>Butt, Mark Cieliebak, Berthold Crysmann, Stefanie Dipper, Sarah Ebling, Kilian Evang, Stefan</td></tr><tr><td>Evert, Diego Frassinelli, Abhijeet Gupta, Anke Holler, Laura Kallmeyer, Manfred Klenner, Ro-</td></tr><tr><td>Kilian Evang Laura Kallmeyer Rainer Osswald Jakub Waszczuk man Klinger, Valia Kordoni, Invited Speakers:</td></tr><tr><td>Torsten Zesch Afra Alishahi, Tilburg University</td></tr><tr><td>Milica Ga\u0161i\u0107, Heinrich Heine University D\u00fcsseldorf</td></tr><tr><td>Mirella Lapta, University of Edinburgh</td></tr><tr><td>Johann-</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF1": {
"text": "Long PapersThe Impact of Word Embeddings on Neural Dependency Parsing . . . . . . . . . . . . . . . . . . 1 Benedikt Adelmann, Wolfgang Menzel and Heike Zinsmeister Benchmarking down-scaled (not so large) pre-trained language models . . . . . . . . . . . . . . . 14 Matthias A\u00dfenmacher, Patrick Schulze and Christian Heumann",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
}
}
}
}