{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:41:36.958376Z"
},
"title": "CogniVal in Action: An Interface for Customizable Cognitive Word Embedding Evaluation",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We demonstrate the functionalities of the new user interface for CogniVal. CogniVal is a framework for the cognitive evaluation of English word embeddings, which evaluates the quality of the embeddings based on their performance in predicting human lexical representations from cognitive language processing signals from various sources. In this paper, we present an easy-to-use command line interface for CogniVal with multiple improvements over the original work, including the possibility to evaluate custom embeddings against custom cognitive data sources.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We demonstrate the functionalities of the new user interface for CogniVal. CogniVal is a framework for the cognitive evaluation of English word embeddings, which evaluates the quality of the embeddings based on their performance in predicting human lexical representations from cognitive language processing signals from various sources. In this paper, we present an easy-to-use command line interface for CogniVal with multiple improvements over the original work, including the possibility to evaluate custom embeddings against custom cognitive data sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The system presented in this work is based on the CogniVal framework presented by Hollenstein et al. (2019). We present the first encompassing framework for cognitive word embedding evaluation. We improve and extend the original features of CogniVal and provide a simple command line interface for scalable and customized experiments. CogniVal is openly available at https://github.com/DS3Lab/cognival-cli and can be easily installed with pip. Language models and word representations are the cornerstones of state-of-the-art NLP models. Evaluating and comparing the quality of different word representations is a well-known, largely open challenge. While word representations and language models have proven very useful for NLP applications, their interpretability is inherently challenging. Interpretability is key for many NLP applications to be able to understand the algorithms' decisions. For a truly intrinsic evaluation of word embeddings, more research about the cognitive plausibility of current language models is required (Rogers et al., 2018).",
"cite_spans": [
{
"start": 82,
"end": 107,
"text": "Hollenstein et al. (2019)",
"ref_id": "BIBREF9"
},
{
"start": 1036,
"end": 1057,
"text": "(Rogers et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Background",
"sec_num": "1"
},
{
"text": "Moreover, one of the challenges for computational linguistics is to build cognitively plausible models of language processing, i.e., models that integrate multiple aspects of human language processing at the syntactic and semantic level (Keller, 2010).",
"cite_spans": [
{
"start": 237,
"end": 251,
"text": "(Keller, 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Background",
"sec_num": "1."
},
{
"text": "Currently, word embeddings are evaluated with extrinsic or intrinsic methods. Extrinsic evaluation is the process of assessing the quality of the embeddings based on their performance on downstream NLP tasks, e.g., sentiment analysis. However, while embeddings can be trained and fine-tuned for specific tasks, this does not mean that they accurately reflect the meaning of words. On the other hand, intrinsic evaluation methods, such as word similarity and word analogy tasks, merely test single linguistic aspects. These tasks are based on conscious human judgements, which can be biased by subjective factors (Nissim et al., 2019). It has been noted that current intrinsic evaluation methods do not capture the cognitive plausibility of word embeddings and language models (Manning et al., 2020). Both intrinsic and extrinsic evaluation types often lack statistical significance testing and do not provide a global quality score. CogniVal addresses these issues and proposes an evaluation method based on cognitive lexical semantics.",
"cite_spans": [
{
"start": 611,
"end": 632,
"text": "(Nissim et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 780,
"end": 802,
"text": "(Manning et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Background",
"sec_num": "1."
},
{
"text": "Cognitive lexical semantics proposes that words are defined by how they are organized in the brain (Miller and Fellbaum, 1992). Recordings of brain activity play a central role in furthering our understanding of how human language works. Huth et al. (2016) showed in a neuroscientific study how words are represented in semantic maps across the brain. Moreover, language representations tuned on brain activity show improved performance on NLP tasks, and word representations trained on brain activity are more generalizable to unseen words (Fyshe et al., 2014). Hence, it seems natural to evaluate language models against human language processing data, as we have proposed with CogniVal (Hollenstein et al., 2019).",
"cite_spans": [
{
"start": 99,
"end": 126,
"text": "(Miller and Fellbaum, 1992)",
"ref_id": "BIBREF14"
},
{
"start": 540,
"end": 560,
"text": "(Fyshe et al., 2014)",
"ref_id": "BIBREF7"
},
{
"start": 689,
"end": 715,
"text": "(Hollenstein et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Background",
"sec_num": "1."
},
{
"text": "To accurately encode the semantics of words, we believe that embeddings, and full language models, should reflect this mental lexical representation. This allows us to evaluate word embeddings by quantifying their cognitive plausibility. An extensive approach was needed to demonstrate the utility of human cognitive data for language model evaluation and its correlation with downstream task performance (Gladkova and Drozd, 2016). Evaluating word embeddings with cognitive language processing data has been proposed previously. For instance, Abnar et al. (2018) and Rodrigues et al. (2018) evaluated different embeddings by predicting the neuronal activity of nouns. S\u00f8gaard (2016) showed preliminary results in evaluating embeddings against continuous text stimuli in eye-tracking and functional magnetic resonance imaging (fMRI) data. Moreover, Beinborn et al. (2019) recently presented an extensive set of language-brain encoding experiments. Electroencephalography (EEG) data has been used for similar purposes: Schwartz and Mitchell (2019) and Ettinger et al. (2016) show that components of event-related potentials can successfully be predicted with neural network models and word embeddings. However, these approaches mostly focus on one modality of brain activity data from small individual cognitive datasets. The availability of only a few, small data sources has been one reason why this type of evaluation has not gained popularity until now (Bakarov, 2018).",
"cite_spans": [
{
"start": 417,
"end": 443,
"text": "(Gladkova and Drozd, 2016)",
"ref_id": "BIBREF8"
},
{
"start": 557,
"end": 576,
"text": "Abnar et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 581,
"end": 604,
"text": "Rodrigues et al. (2018)",
"ref_id": "BIBREF16"
},
{
"start": 862,
"end": 884,
"text": "Beinborn et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 1031,
"end": 1059,
"text": "Schwartz and Mitchell (2019)",
"ref_id": "BIBREF19"
},
{
"start": 1064,
"end": 1086,
"text": "Ettinger et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 1465,
"end": 1480,
"text": "(Bakarov, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Background",
"sec_num": "1."
},
{
"text": "Hence, for CogniVal we collected a wide range of cognitive data sources, ranging from eye-tracking to EEG and fMRI, to ensure coverage of a large vocabulary and of different features of the cognitive processes during language comprehension. In this paper, we provide a user interface for CogniVal, which evaluates English word embeddings against the lexical representations of words in the human brain, recorded during passive language understanding. The CogniVal command line interface (CLI) is the first tool to unify a diverse range of cognitive data sources covering multiple recording modalities of cognitive processing signals and to provide a generic user interface, making large-scale cognitive word embedding evaluation accessible to NLP practitioners. It offers pre-processed cognitive data sources, readily provided for evaluation in a user-friendly interaction, and supports and complements other intrinsic and extrinsic evaluation methods for word embeddings. The CogniVal CLI is a unified framework, allowing the evaluation of a large set of existing pre-trained embeddings as well as of custom word representations and language models on a large range of cognitive sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Background",
"sec_num": "1."
},
{
"text": "The CogniVal CLI is implemented in Python (version 3.7.4) and provides an interactive shell using python-nubia 1 . For the purpose of cognitive embedding evaluation, Hollenstein et al. (2019) collected and prepared 15 cognitive data sources and evaluated 6 pre-trained embedding types, including GloVe, word2vec, WordNet2Vec, FastText, ELMo and BERT. The command line interface provides these preprocessed data types. For details about the format of the cognitive data sources, please refer to Hollenstein et al. (2019). The evaluation process is automated in the CogniVal CLI and works as depicted in Figure 1. First, the user defines the general evaluation configuration, including path specifications and training parameters (command: config). If required, the user can then import custom word representations as well as custom cognitive data sources, using the import function. Second, the user specifies the embedding/cognitive-data combinations to be evaluated, as well as the hyper-parameter ranges for the neural regression models. Moreover, if requested, CogniVal generates random vectors of the same dimension as the embeddings to be evaluated, so that the embeddings can also be evaluated against this random baseline. As an improvement over Hollenstein et al. (2019), the CogniVal CLI automatically generates 10 sets of different random embeddings and averages over the results for a fairer comparison against a more robust baseline. The tuning (implemented through a grid search) and training of all models (n embeddings x m cognitive data sources) is fully automated within the command run.",
"cite_spans": [
{
"start": 490,
"end": 515,
"text": "Hollenstein et al. (2019)",
"ref_id": "BIBREF9"
},
{
"start": 1248,
"end": 1273,
"text": "Hollenstein et al. (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 604,
"end": 612,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "Thereafter, the user can either use the saved results as they are (i.e., mean squared errors for each word in the vocabulary), or run the significance testing (command: significance), which consists of a Wilcoxon signed-rank test for each hypothesis (i.e., for each embedding/cognitive-data evaluation combination), applying the Bonferroni correction for the multiple hypotheses problem, as described by Dror et al. (2018). Finally, the automated report generation also includes significance testing by default and compares the results to the baseline of random embeddings before aggregating them. The dynamic HTML or PDF reports include all detailed results for the individual combinations, as well as results aggregated over the modalities (see Figure 2). Table 1 presents the most important commands provided in the CogniVal CLI. Additionally, please refer to the GitHub repository for a full tutorial 2 .",
"cite_spans": [
{
"start": 413,
"end": 431,
"text": "Dror et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 758,
"end": 767,
"text": "Figure 2)",
"ref_id": "FIGREF2"
},
{
"start": 770,
"end": 777,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "The target audience for the CogniVal CLI are NLP and machine learning practitioners and researchers who develop word embeddings and need an evaluation benchmark. In this CogniVal demonstration paper, we describe the following two possible use case scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Use Cases",
"sec_num": "3"
},
{
"text": "One of the most relevant features of the CogniVal CLI is the ability to upload custom word representations from any language model. Any type of word embedding can be imported into the system as text or binary files, and can then be evaluated against the available cognitive data sources and compared to the other embeddings included in CogniVal by default. Moreover, the automated versioning and reporting supports the development process of new embeddings by readily generating plots over time that show whether the performance of the embeddings in development is improving or deteriorating across multiple runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scenario 1: Custom Word Embeddings & Cognitive Data Sources",
"sec_num": null
},
{
"text": "In addition, custom cognitive data sources can also be imported into the CogniVal interface. This feature allows the user to add more cognitive language processing data as more of these datasets become available (Alday, 2019). Through these features CogniVal becomes a generic framework for cognitive word embedding evaluation and drastically increases the number of possible applications in any language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scenario 1: Custom Word Embeddings & Cognitive Data Sources",
"sec_num": null
},
{
"text": "A second use case scenario for the CogniVal CLI is to use the cognitive word embedding evaluation as a complementary evaluation and a benchmark for embedding selection. If the user wants to compare the results achieved by their embeddings on CogniVal to other intrinsic or extrinsic results achieved with the same embeddings, the CLI allows uploading external results into the result directory and running the significance testing and aggregation report on all available results. Hence, CogniVal can easily be extended to include external results and can be leveraged as a tool for embedding selection. Furthermore, CogniVal can be used during the development of language models. If one is comparing a certain checkpoint of a language model to extrinsic results on a downstream task, the cognitive evaluation can help to ensure that the word representations are not overfitting on the downstream task and that they still maintain cognitive plausibility. To this end, the automatic report generated in the CogniVal CLI also includes plots to show changes over various runs, which can be very useful during the development of new language models or the fine-tuning of existing pre-trained language models (see Figure 2b). Transformer-based language models such as BERT are widely used in state-of-the-art NLP, but their inner workings are still largely unknown (Rogers et al., 2020). We extract word-level BERT embeddings (Devlin et al., 2019) for all words for which cognitive data is available. Using the bert-as-service package 3 , we extract the hidden states of all 12 layers of the BERT base uncased model with 768 dimensions. Subsequently, using the new CogniVal functionality to load custom embeddings, we easily import the BERT states of each layer into the CogniVal interface. We set the configuration to run all experiments against the 10-fold random baseline, with the following parameters for training: 3-fold cross-validation, a hidden layer of 200 dimensions, 20% validation split, batch size of 128, and ReLU activation functions.",
"cite_spans": [
{
"start": 1420,
"end": 1441,
"text": "(Rogers et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 1482,
"end": 1503,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 1200,
"end": 1209,
"text": "Figure 2b",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Scenario 2: Complementary Benchmark & Evaluation Over Time",
"sec_num": null
},
{
"text": "The results of this example application are presented in Figure 3. In addition, Figure 2a shows the numerical summary of the results of BERT embeddings predicting eye-tracking features as it is presented in the automatically generated report, including the results of the random baselines and the results of the significance testing. While all hypotheses tested on the BERT layers proved to be statistically significant against the random baseline (4 EEG hypotheses - one for each dataset, and 42 eye-tracking hypotheses - one for each feature), there are visible differences in performance between the layers. Surprisingly, for both EEG and eye-tracking, layer 3 performs best. The results also show that the last layer performs very closely to the average of all layers (dashed line). This finding reflects the original performance of the layers of the BERT base model on downstream NLP tasks (Devlin et al., 2019). It is also in line with , who find that the lower layers perform best at predicting neural activation for short context ranges. Lin et al. (2019) show that the lower layers have the most linear word order information, which is likewise reflected in our results. This application scenario shows how CogniVal can be used to explore and support findings concerning the interpretability of language models.",
"cite_spans": [
{
"start": 894,
"end": 915,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 1046,
"end": 1063,
"text": "Lin et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 57,
"end": 65,
"text": "Figure 3",
"ref_id": "FIGREF4"
},
{
"start": 81,
"end": 90,
"text": "Figure 2a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Example Application: Comparison of BERT Layers",
"sec_num": "4"
},
{
"text": "In this demonstration paper, we presented the new command line interface for CogniVal. The CogniVal CLI builds upon the work by Hollenstein et al. (2019) and extends it with various new features, especially the ability to evaluate custom embeddings against custom cognitive data sources. We described the functionalities of the tool as well as various use cases and an application scenario. The CogniVal CLI aims at improving the accessibility and usability of cognitive embedding evaluation for NLP practitioners. CogniVal is under active development and will be extended to additionally support the evaluation of sentence embeddings and further languages.",
"cite_spans": [
{
"start": 128,
"end": 153,
"text": "Hollenstein et al. (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "5"
},
{
"text": "https://github.com/facebookincubator/python-nubia 2 https://github.com/DS3Lab/cognival-cli/blob/master/cognival_tutorial.pdf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/hanxiao/bert-as-service",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Antonio de la Torre and Leonard von Kleist for their contributions to the command line interface.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Experiential, distributional and dependency-based word embeddings have complementary roles in decoding brain activity",
"authors": [
{
"first": "Samira",
"middle": [],
"last": "Abnar",
"suffix": ""
},
{
"first": "Rasyan",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Mijnheer",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)",
"volume": "",
"issue": "",
"pages": "57--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samira Abnar, Rasyan Ahmed, Max Mijnheer, and Willem Zuidema. 2018. Experiential, distributional and dependency-based word embeddings have complementary roles in decoding brain activity. In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018), pages 57-66.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "M/EEG analysis of naturalistic stories: A review from speech to language processing. Language",
"authors": [
{
"first": "Phillip",
"middle": [
"M"
],
"last": "Alday",
"suffix": ""
}
],
"year": 2019,
"venue": "Cognition and Neuroscience",
"volume": "34",
"issue": "4",
"pages": "457--473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phillip M Alday. 2019. M/EEG analysis of naturalistic stories: A review from speech to language processing. Language, Cognition and Neuroscience, 34(4):457-473.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Can eye movement data be used as ground truth for word embeddings evaluation?",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Bakarov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Bakarov. 2018. Can eye movement data be used as ground truth for word embeddings evaluation? In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Robust evaluation of language-brain encoding experiments",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Beinborn",
"suffix": ""
},
{
"first": "Samira",
"middle": [],
"last": "Abnar",
"suffix": ""
},
{
"first": "Rochelle",
"middle": [],
"last": "Choenni",
"suffix": ""
}
],
"year": 2019,
"venue": "International Journal of Computational Linguistics and Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Beinborn, Samira Abnar, and Rochelle Choenni. 2019. Robust evaluation of language-brain encoding exper- iments. International Journal of Computational Linguistics and Applications.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The hitchhiker's guide to testing statistical significance in natural language processing",
"authors": [
{
"first": "Rotem",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "Gili",
"middle": [],
"last": "Baumer",
"suffix": ""
},
{
"first": "Segev",
"middle": [],
"last": "Shlomov",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1383--1392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Modeling N400 amplitude using vector space models of word representation",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
},
{
"first": "Naomi",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 2016,
"venue": "CogSci",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger, Naomi Feldman, Philip Resnik, and Colin Phillips. 2016. Modeling N400 amplitude using vector space models of word representation. In CogSci.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Interpretable semantic vectors from a joint model of brain-and text-based meaning",
"authors": [
{
"first": "Alona",
"middle": [],
"last": "Fyshe",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"P"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "489--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alona Fyshe, Partha P Talukdar, Brian Murphy, and Tom M Mitchell. 2014. Interpretable semantic vectors from a joint model of brain-and text-based meaning. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 489-499.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Intrinsic evaluations of word embeddings: What can we do better?",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Gladkova",
"suffix": ""
},
{
"first": "Aleksandr",
"middle": [],
"last": "Drozd",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "36--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Gladkova and Aleksandr Drozd. 2016. Intrinsic evaluations of word embeddings: What can we do better? In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 36-42.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "CogniVal: A framework for cognitive word embedding evaluation",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "De La Torre",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Langer",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Antonio de la Torre, Nicolas Langer, and Ce Zhang. 2019. CogniVal: A framework for cognitive word embedding evaluation. In Proceedings of the 23rd Conference on Computational Natural Language Learning.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Natural speech reveals the semantic maps that tile human cerebral cortex",
"authors": [
{
"first": "Alexander",
"middle": [
"G"
],
"last": "Huth",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"A"
],
"last": "de Heer",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [
"E"
],
"last": "Theunissen",
"suffix": ""
},
{
"first": "Jack",
"middle": [
"L"
],
"last": "Gallant",
"suffix": ""
}
],
"year": 2016,
"venue": "Nature",
"volume": "532",
"issue": "7600",
"pages": "453--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander G Huth, Wendy A de Heer, Thomas L Griffiths, Fr\u00e9d\u00e9ric E Theunissen, and Jack L Gallant. 2016. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600):453-458.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Cognitively plausible models of human language processing",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics: Short Papers",
"volume": "",
"issue": "",
"pages": "60--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Keller. 2010. Cognitively plausible models of human language processing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics: Short Papers, pages 60-67.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Open Sesame: Getting inside BERT's linguistic knowledge",
"authors": [
{
"first": "Yongjie",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chern Tan",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "241--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open Sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241-253.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Emergent linguistic structure in artificial neural networks trained by self-supervision",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Kevin Clark, John Hewitt, Urvashi Khandelwal, and Omer Levy. 2020. Emergent lin- guistic structure in artificial neural networks trained by self-supervision. Proceedings of the National Academy of Sciences.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "WordNet and the organization of lexical memory",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1992,
"venue": "Intelligent tutoring systems for foreign language learning",
"volume": "",
"issue": "",
"pages": "89--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller and Christiane Fellbaum. 1992. WordNet and the organization of lexical memory. In Intelligent tutoring systems for foreign language learning, pages 89-102. Springer.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Fair is better than sensational: Man is to doctor as woman is to doctor",
"authors": [
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
},
{
"first": "Rik",
"middle": [],
"last": "van Noord",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "van der Goot",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.09866"
]
},
"num": null,
"urls": [],
"raw_text": "Malvina Nissim, Rik van Noord, and Rob van der Goot. 2019. Fair is better than sensational: Man is to doctor as woman is to doctor. arXiv preprint arXiv:1905.09866.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Predicting brain activation with WordNet embeddings",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [
"Ant\u00f3nio"
],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "Ruben",
"middle": [],
"last": "Branco",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Chakaveh",
"middle": [],
"last": "Saedi",
"suffix": ""
},
{
"first": "Ant\u00f3nio",
"middle": [],
"last": "Branco",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eighth Workshop on Cognitive Aspects of Computational Language Learning and Processing",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jo\u00e3o Ant\u00f3nio Rodrigues, Ruben Branco, Jo\u00e3o Silva, Chakaveh Saedi, and Ant\u00f3nio Branco. 2018. Predicting brain activation with WordNet embeddings. In Proceedings of the Eighth Workshop on Cognitive Aspects of Computational Language Learning and Processing, pages 1-5.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "What's in your embedding, and how it predicts task performance",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Shashwath Hosur",
"middle": [],
"last": "Ananthakrishna",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2690--2703",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Shashwath Hosur Ananthakrishna, and Anna Rumshisky. 2018. What's in your embedding, and how it predicts task performance. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2690-2703.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A primer in BERTology: What we know about how BERT works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.12327"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. arXiv preprint arXiv:2002.12327.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Understanding language-elicited EEG data by predicting it from a finetuned language model",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "43--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Schwartz and Tom Mitchell. 2019. Understanding language-elicited EEG data by predicting it from a fine-tuned language model. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 43-57.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Inducing brain-relevant bias in natural language processing models",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Mariya",
"middle": [],
"last": "Toneva",
"suffix": ""
},
{
"first": "Leila",
"middle": [],
"last": "Wehbe",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "14100--14110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Schwartz, Mariya Toneva, and Leila Wehbe. 2019. Inducing brain-relevant bias in natural language processing models. In Advances in Neural Information Processing Systems, pages 14100-14110.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Evaluating word embeddings with fMRI and eye-tracking",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "116--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard. 2016. Evaluating word embeddings with fMRI and eye-tracking. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 116-121.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)",
"authors": [
{
"first": "Mariya",
"middle": [],
"last": "Toneva",
"suffix": ""
},
{
"first": "Leila",
"middle": [],
"last": "Wehbe",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "14928--14938",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mariya Toneva and Leila Wehbe. 2019. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). In Advances in Neural Information Processing Systems, pages 14928-14938.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "User interaction with the CogniVal interface.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "(a) Automatically generated result table.(b) Plot over time: When adding more eye-tracking features in each run, the aggregated results become more precise. Each colored line represents a different BERT layer.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Snippets from an automatically generated result report in the CogniVal CLI.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "As an exemplary use case scenario for the CogniVal CLI, we analyze the performance of different layers of BERT pre-trained contextual word representations on all available cognitive data sources of eye-",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "CogniVal results on evaluating the 12 layers of a BERT model against all available EEG and eye-tracking data sources.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF2": {
"content": "<table/>",
"num": null,
"text": "Main steps and example commands for using the CogniVal CLI for cognitive word embedding evaluation.",
"type_str": "table",
"html": null
}
}
}
}