{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:10:40.503301Z"
},
"title": "Employing distributional semantics to organize task-focused vocabulary learning",
"authors": [
{
"first": "Haemanth",
"middle": [
"Santhi"
],
"last": "Ponnusamy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of T\u00fcbingen",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of T\u00fcbingen",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "How can a learner systematically prepare for reading a book they are interested in? In this paper, we explore how computational linguistic methods such as distributional semantics, morphological clustering, and exercise generation can be combined with graph-based learner models to answer this question both conceptually and in practice. Based on highly structured learner models and concepts from network analysis, the learner is guided to efficiently explore the targeted lexical space. They practice using multi-gap learning activities generated from the book. In sum, the approach combines computational linguistic methods with concepts from network analysis and tutoring systems to support learners in pursuing their individual reading task goals.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "How can a learner systematically prepare for reading a book they are interested in? In this paper, we explore how computational linguistic methods such as distributional semantics, morphological clustering, and exercise generation can be combined with graph-based learner models to answer this question both conceptually and in practice. Based on highly structured learner models and concepts from network analysis, the learner is guided to efficiently explore the targeted lexical space. They practice using multi-gap learning activities generated from the book. In sum, the approach combines computational linguistic methods with concepts from network analysis and tutoring systems to support learners in pursuing their individual reading task goals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Learning vocabulary is a major component of foreign language learning. In the school context, initially, vocabulary learning is typically organized around the words introduced by the textbook. In addition to the incrementally growing vocabulary lists, some textbooks also provide thematically organized word banks. When other texts are read, the publisher or the teacher often provides annotations for new vocabulary items that appear in the text. A range of tools has been developed to support vocabulary learning, from digital versions of file cards to digital text editions offering annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While such applications serve the needs of the formal learning setting in the initial foreign language learning phase, where the texts that are read are primarily chosen to systematically introduce the language, later the selection of texts to be read can in principle follow the individual interests of the student or adult learner, which boosts the motivation to engage with the book. Linking language learning to a functional goal that someone actually wants to achieve using language is in line with the idea of Task-Based Language Teaching (TBLT), a prominent strand in language teaching (Ellis, 2009).",
"cite_spans": [
{
"start": 585,
"end": 598,
"text": "(Ellis, 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Naturally, not all authentic texts are accessible to every learner, but linguistically-aware search engines, such as FLAIR (Chinkina and Meurers, 2016), make it possible to identify authentic texts that are at the right reading level and are rich in the language constructions next on the curriculum. Where the unknown vocabulary that the reader encounters in such a setting exceeds the roughly 2% of unknown words that can be present in a text without substantial loss of comprehension (Schmitt et al., 2011), many digital reading environments provide the option to look up a word in a dictionary. Yet, frequently looking up words in such a context is cumbersome and distracts the reader from the world of the book they are trying to engage with. Relatedly, one of the key criteria of TBLT is that learners should rely on their own resources to complete a task (Ellis, 2009). This naturally can require pre-task activities preparing the learner to be able to successfully tackle the task (Willis and Willis, 2013). But how can a learner systematically prepare for reading a text or book they are interested in reading?",
"cite_spans": [
{
"start": 123,
"end": 151,
"text": "(Chinkina and Meurers, 2016)",
"ref_id": "BIBREF7"
},
{
"start": 487,
"end": 509,
"text": "(Schmitt et al., 2011)",
"ref_id": "BIBREF24"
},
{
"start": 863,
"end": 876,
"text": "(Ellis, 2009)",
"ref_id": null
},
{
"start": 995,
"end": 1020,
"text": "(Willis and Willis, 2013)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we explore how computational linguistic methods such as distributional semantics, morphological clustering, and exercise generation can be combined with graph-based learner models to answer this question both conceptually and in practice. On the practical side, we developed an application that supports vocabulary learning as a pre-task activity for reading a self-selected book. The conceptual goal is to automatically organize the lexical-semantic space of any given English book in the form of a graph that makes it possible to sequence the vocabulary learning in a way that efficiently explores the space, and to visualize this graph for the users as an open learner model (Bull and Kay, 2010) showing their growing mastery of the book's lexical space. Lexical learning is fostered and monitored through automatically generated multi-gap activities (Zesch and Melamud, 2014) that support learning and revision of words in the contexts in which they occur in the book.",
"cite_spans": [
{
"start": 688,
"end": 708,
"text": "(Bull and Kay, 2010)",
"ref_id": "BIBREF6"
},
{
"start": 864,
"end": 889,
"text": "(Zesch and Melamud, 2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In section 2 we discuss how a book or other text chosen by the learner is turned into a graph encoding the lexical space that the learner needs to engage with to read the book, and how words that are morphologically related as word families (Bauer and Nation, 1993) are automatically identified and compactly represented in the graph (2.1.1). In section 3 we then turn to the use of the graph representation of the lexical semantic space of the book to determine the reader's learning path and to represent their growing lexical knowledge as spreading activation in the graph. In section 4, the conceptual ideas are realized in an application. We discuss how the new-learner cold-start problem is avoided using a very quick word recognition task we implemented, before discussing the content selection and activity generation for practice and testing activities. Section 6 then provides a conceptual evaluation of the approach and compares it with related work, before concluding in section 7.",
"cite_spans": [
{
"start": 242,
"end": 266,
"text": "(Bauer and Nation, 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Constructing a structured domain model for the lexical space of a book",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Going beyond the benefits of interactivity and adaptivity of individualized digital learning tools, supporting learner autonomy is known to be important for boosting motivation and self-regulation skills (Godwin-Jones, 2019). This includes the choice of reading material a learner wants to engage with, since the texts prepared by a teacher or publisher cannot reflect the interests of individual students or the topics and genres they want to explore in the foreign language. The freedom of choosing a text that the learner wants to engage with also identifies a clear functional goal for learning vocabulary -learning new words to enable the learner to read a text of interest -so that the interest in the content coincides with the interest in further developing the language skills. In that sense, learning vocabulary becomes a pre-task activity in the spirit of task-based language learning. Organizing vocabulary learning in this way also helps turn the otherwise open-ended challenge of learning the lexical space of a new language into the clearly delineated task of mastering a sub-space. This functionally guided approach contrasts with the approach of other vocabulary learning tools selecting random infrequent lexical items from the language to be learned, which, given their rare and often highly specialized nature, are likely to only be useful for impressing friends when playing foreign language scrabble.",
"cite_spans": [
{
"start": 204,
"end": 224,
"text": "(Godwin-Jones, 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To make text-driven vocabulary learning work, we need to map the text selected by the learner into a structured domain to support systematic and efficient learning of the lexical space as used in the book. We distinguish the process of structuring the vocabulary used in the book, independent of the learner's background, from the representation of the individual learner's knowledge. The former is tackled in this section and can be regarded as our domain model, while the latter is a learner model that essentially is an overlay over the domain model, and will be discussed in section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since vocabulary learning is about establishing form-meaning connections, in principle the basic unit best suited for this would be word senses. At the same time, fully automatic word sense disambiguation is complex, error-prone, and often domain-specific -and in the context of a given book, a given word will often occur with the same meaning. We, therefore, limit ourselves to only disambiguating homographs in terms of their part-of-speech (POS), following Wilks and Stevenson (1998). Throughout our approach, we use <word, POS> pairs as basic units. To POS-annotate the book selected by the user, we use the spaCy NLP tools (http://spacy.io). Given our focus on learning the characteristic vocabulary of the book, we eliminate stop words as well as word-POS pairs appearing fewer than five times in the given book.",
"cite_spans": [
{
"start": 459,
"end": 485,
"text": "Wilks and Stevenson (1998)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
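The target selection just described can be sketched in a few lines. This is a minimal illustration operating on already-POS-tagged tokens: the toy tagged list and the small stop-word set stand in for spaCy's actual output and stop-word list.

```python
from collections import Counter

# Hypothetical stop-word list; the paper's pipeline would use spaCy's.
STOP_WORDS = {"the", "a", "an", "of", "and", "to"}

def learning_targets(tagged_tokens, min_freq=5):
    """Select <word, POS> pairs occurring at least `min_freq` times in the
    book, excluding stop words, as the lexical learning targets."""
    counts = Counter(
        (word.lower(), pos)
        for word, pos in tagged_tokens
        if word.lower() not in STOP_WORDS
    )
    return {pair for pair, n in counts.items() if n >= min_freq}

# Toy input: "dream" as a noun occurs 5 times, as a verb only twice.
tokens = [("dream", "NOUN")] * 5 + [("the", "DET")] * 10 + [("dream", "VERB")] * 2
print(learning_targets(tokens))  # {('dream', 'NOUN')}
```

Note that the noun and verb readings of "dream" are kept apart, which is exactly the homograph disambiguation by POS described above.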
{
"text": "To structure the lexical space in terms of meaning, there are two related options. Words can be semantically related, e.g., tiger, elephant, and crocodile all have the property of being wild animals; from the perspective of a WordNet, they are hyponyms of wild animal. On the other hand, words can also be thematically related, such as blackboard, teacher, and chalk all belonging to a school theme. Gholami and Khezrlou (2014) highlights the benefits of the semantic approach over the thematic approach from the perspective of a tutor. As we are building a system that acts as a tutor tracking and fostering the learner's vocabulary knowledge, we decided to focus on semantic relatedness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic and thematic relations",
"sec_num": "2.1"
},
{
"text": "Complementing the lexical semantic relationships, words are also related to each other through derivational and inflectional morphology. Many of these morphological processes are semantically transparent. Bauer and Nation (1993) proposed the idea of grouping words into so-called word families, stating that \"once the base word or even a derived word is known, the recognition of other members of the family requires little or no extra effort\". The creation of word families is based on criteria involving the frequency, regularity, productivity, and predictability of all the English affixes. Bauer and Nation (1993) arranged the inflectional affixes and common derivational affixes into graded levels, as exemplified on the left-hand side of Figure 1, which shows the affix levels of Bauer and Nation (1993) alongside an expanded family node in our graph.",
"cite_spans": [
{
"start": 205,
"end": 228,
"text": "Bauer and Nation (1993)",
"ref_id": "BIBREF0"
},
{
"start": 589,
"end": 612,
"text": "Bauer and Nation (1993)",
"ref_id": "BIBREF0"
},
{
"start": 754,
"end": 778,
"text": "(Bauer and Nation, 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 743,
"end": 751,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word families",
"sec_num": "2.1.1"
},
{
"text": "We adopt the idea of word families to compactly represent morphologically related words. The graph on the right side of Figure 1 exemplifies the word family that becomes visible when selecting the lemma dream in our graph representation (where word families normally are shown in collapsed form and represented by their underlying lemma). We currently put words up to level three, which generally will be transparently related, into one family -though in the future one could make this a parameter, which could also depend on the level of the learner.",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 128,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word families",
"sec_num": "2.1.1"
},
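The family-grouping idea can be sketched as follows. The affix list here is a small hypothetical stand-in for the graded affix levels of Bauer and Nation (1993), not the paper's actual level-1-to-3 criteria.

```python
# Illustrative affix set; Bauer and Nation (1993) define the real graded levels.
AFFIXES = ["ness", "ing", "ed", "er", "ly", "s"]

def base_form(word):
    """Strip one known affix if the remaining stem is long enough to be a
    plausible base; otherwise return the word unchanged."""
    for affix in sorted(AFFIXES, key=len, reverse=True):
        if word.endswith(affix) and len(word) - len(affix) >= 3:
            return word[: -len(affix)]
    return word

def group_families(words):
    """Group words sharing a base form into one family (one graph node)."""
    families = {}
    for w in words:
        families.setdefault(base_form(w), set()).add(w)
    return families

fams = group_families(["dream", "dreams", "dreamed", "dreamer", "scowl", "scowling"])
# fams maps 'dream' -> {dream, dreams, dreamed, dreamer}, 'scowl' -> {scowl, scowling}
```

In the application each family is then collapsed into a single node represented by its underlying lemma, expandable on selection as in Figure 1.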
{
"text": "To structure the lexical space of the user-selected book in terms of a semantically related word graph, we start with a distributional semantic vector representation of each word, which we obtain from the pre-trained model of GloVe (Pennington et al., 2014), based on the co-occurrence statistics of words from a large Common Crawl data-set (http://commoncrawl.org). Such word embeddings capture the distributional semantic properties of words (Goldberg and Levy, 2014). On this basis, the relationship score between two families is computed as the maximum pair-wise cosine similarity over all their members. Let F 1 be a family with m members and F 2 be a family with n members. The relationship score between the two families F 1 and F 2 is the maximum cosine similarity over all m \u00d7 n pairs.",
"cite_spans": [
{
"start": 232,
"end": 257,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 447,
"end": 472,
"text": "(Goldberg and Levy, 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating a lexical graph of word families and their semantic relations",
"sec_num": "2.2"
},
{
"text": "w_{12} = \\max_{i \\in F_1, j \\in F_2} \\frac{V_i \\cdot V_j}{\\|V_i\\| \\|V_j\\|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating a lexical graph of word families and their semantic relations",
"sec_num": "2.2"
},
{
"text": "where w_12 is the maximum cosine similarity between members of the families F_1 and F_2. The result is a network of word families, where families with members closer in the semantic vector space are connected with higher weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating a lexical graph of word families and their semantic relations",
"sec_num": "2.2"
},
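The family relationship score follows directly from the definition above; the toy 2-d vectors here stand in for the 300-d GloVe embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def family_score(vecs_f1, vecs_f2):
    """w_12: the maximum pair-wise cosine similarity over all m x n
    member pairs of two families F_1 and F_2."""
    return max(cosine(u, v) for u in vecs_f1 for v in vecs_f2)

# Toy 2-d "embeddings" standing in for GloVe vectors.
F1 = [(1.0, 0.0), (0.9, 0.1)]
F2 = [(0.0, 1.0), (1.0, 0.1)]
w12 = family_score(F1, F2)  # close to 1: the families share near-parallel members
```

Taking the maximum (rather than, say, the mean) means one closely related member pair is enough to link two families strongly.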
{
"text": "Following D'Angelo and West (1997), the number of edges in the graph can be computed as e = n\u00d7(n\u22121)/2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating a lexical graph of word families and their semantic relations",
"sec_num": "2.2"
},
{
"text": ", where e is the total number of edges and n is the number of nodes (families) in the graph. The number of edges in the graph thus grows quadratically as the number of nodes increases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "When inspecting graphs derived for sample texts, we observe that the majority of the connections are weak. To obtain a graph of semantic relationships that meaningfully structures the vocabulary used in a book, we focus on the stronger relationships and eliminate edges with weights less than 0.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "We also observed that the node families of very frequently occurring verbs tend to be very densely connected; this impact of frequency on distributional semantic measures has been discussed in the literature (Patel et al., 1998; Weeds et al., 2004). In order to control for this kind of over-sensitivity of distributional semantic measures to highly frequent words, we restrict the node degree to a maximum threshold. Based on experiments with sample data, only the five edges with the highest weights are retained for each node.",
"cite_spans": [
{
"start": 212,
"end": 232,
"text": "(Patel et al., 1998;",
"ref_id": "BIBREF22"
},
{
"start": 233,
"end": 252,
"text": "Weeds et al., 2004)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
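The two pruning steps described above, the 0.3 weight threshold and the five-edge degree cap, can be sketched together. The dict-of-dicts graph representation is an assumption made here for illustration.

```python
def prune_edges(edges, min_weight=0.3, max_degree=5):
    """edges: dict mapping node -> {neighbor: weight}.
    Drop edges below min_weight, then keep only the max_degree
    highest-weighted edges for each node."""
    pruned = {}
    for node, nbrs in edges.items():
        strong = {n: w for n, w in nbrs.items() if w >= min_weight}
        top = sorted(strong.items(), key=lambda kv: kv[1], reverse=True)[:max_degree]
        pruned[node] = dict(top)
    return pruned

# A hub node with seven edges: one is below 0.3, and only the top five survive.
graph = {"be": {"a": 0.9, "b": 0.8, "c": 0.7, "d": 0.6, "e": 0.5, "f": 0.4, "g": 0.2}}
print(prune_edges(graph)["be"])  # {'a': 0.9, 'b': 0.8, 'c': 0.7, 'd': 0.6, 'e': 0.5}
```

Capping per-node degree rather than raising the global threshold keeps sparse regions of the graph connected while taming the dense hubs formed by frequent verbs.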
{
"text": "As a result of the method described in this section, we obtain a lexical graph for the user-provided text that structures and compactly represents the lexical space of the text in a graph-based domain model. This is the lexical space that the user wants to explore and master enough to be able to read the book. In terms of computational linguistic methods, on the one hand, distributional semantics creates the overall structure of a meaning-connected lexical space, on the other hand, word families organize and collapse forms that are related by morphological processes in the linguistic system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "To test the graph construction, we chose three books as a sample to study the characteristics of the vocabulary space created by our application. Table 1 shows the size of the text and the graph created for each book. Selecting the <word, POS> pairs occurring at least five times in the entire text, we find that 15-25% of the words from the text qualify as lexical learning targets. These targets are grouped into families as discussed in 2.1.1, with each family being represented as a node in the graph. The resulting set of graph nodes representing word families is 20-30% smaller than the initial set of learning targets. The families then are linked as explained in section 2.2. The average number of links a family has with other families is around 2.5.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 157,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Example generation of graphs for books",
"sec_num": "2.3"
},
{
"text": "Some example word family clusters formed for these books at a threshold of similarity scores greater than 0.7 are shown in Figure 3 . Only the root nodes of each family are shown. The examples illustrate that the semantically close families form meaningfully interpretable clusters.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example generation of graphs for books",
"sec_num": "2.3"
},
{
"text": "With a structured domain model established for the vocabulary space to be explored by the user, we want to make use of it to efficiently guide the learner to cover the space and track learning in a learner model. The learner model is an overlay on the domain model that helps us track the learner's vocabulary knowledge in terms of a mastery score associated with each word family. On the basis of the learner model, we then can propose the next set of words to be practiced in a way that reduces the number of interactions required to cover the vocabulary space. It also serves as an open learner model (Bull and Kay, 2010) by allowing the user to view and explore the lexical space of the book as a graph, with each node being colored according to the current mastery score. In this section, we discuss how this is achieved.",
"cite_spans": [
{
"start": 604,
"end": 624,
"text": "(Bull and Kay, 2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representing the lexical knowledge of a learner: an open learner model",
"sec_num": "3"
},
{
"text": "Identifying the nodes that are more central than others is one of the vital tasks in network analysis (Freeman, 1978; Bonacich, 1987; Borgatti, 2005; Borgatti and Everett, 2006). Freeman (1978) formulated three major centrality measures for a node in a network: (a) degree centrality: a measure of the strength of ties of each node in the network, (b) closeness centrality: a measure of the closeness of a node to all other nodes in the network, and (c) betweenness centrality: a measure of the number of shortest paths between other node pairs in the network that pass through a node. The degree centrality measure is a greedy approach looking only at the immediate neighbours to decide the central node, whereas the closeness centrality measure accounts for the bigger picture of the entire network. So closeness centrality seems best suited for our goal of efficient coverage of the network, in our case: the graph representing the vocabulary of the given book. While the basic closeness centrality notion is only defined for fully connected networks, Wasserman et al. (1994) successfully extend it to apply to any graph. Based on this metric, we choose the top 20 (word family) nodes for a learning session and choose a word from each of those 20 families. 1 Selecting the next words to be learned based on closeness centrality brings up the problem that neighbors that are tightly bound to the central node are likely to have a similar closeness centrality score. So when selecting the words to be practiced only based on closeness centrality, we would risk practicing closely related lexical items rather than systematically introducing the learner to the broader lexical space. In order to avoid this issue, we exclude the immediate neighbours of a word that was already selected in that learning session. This supports a more distributed selection of words.",
"cite_spans": [
{
"start": 102,
"end": 117,
"text": "(Freeman, 1978;",
"ref_id": "BIBREF11"
},
{
"start": 118,
"end": 133,
"text": "Bonacich, 1987;",
"ref_id": "BIBREF2"
},
{
"start": 134,
"end": 149,
"text": "Borgatti, 2005;",
"ref_id": "BIBREF3"
},
{
"start": 150,
"end": 177,
"text": "Borgatti and Everett, 2006)",
"ref_id": "BIBREF4"
},
{
"start": 1156,
"end": 1179,
"text": "Wasserman et al. (1994)",
"ref_id": "BIBREF28"
},
{
"start": 1362,
"end": 1363,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1000,
"end": 1008,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Central node selection for efficient exploration of the vocabulary graph",
"sec_num": "3.1"
},
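The session selection can be sketched as below. Two assumptions: the closeness variant shown is the common Wasserman/Faust-style correction for possibly disconnected graphs (the paper does not spell out its exact formula), and the neighbour-exclusion rule follows the description above.

```python
from collections import deque

def closeness(graph, node):
    """Wasserman/Faust-style closeness for possibly disconnected graphs:
    (reachable / (n-1)) * (reachable / total distance to reachable nodes)."""
    dist = {node: 0}
    q = deque([node])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    reachable = len(dist) - 1
    if reachable == 0:
        return 0.0
    return (reachable / (len(graph) - 1)) * (reachable / sum(dist.values()))

def select_session(graph, k):
    """Pick up to k family nodes by closeness, skipping immediate
    neighbours of already-selected nodes so the session spreads
    across the lexical space rather than clustering."""
    ranked = sorted(graph, key=lambda v: closeness(graph, v), reverse=True)
    chosen, excluded = [], set()
    for v in ranked:
        if v in excluded:
            continue
        chosen.append(v)
        excluded.update(graph[v])
        if len(chosen) == k:
            break
    return chosen
```

On a small star-shaped graph the hub is selected first and all its satellites are then excluded, illustrating why consecutive picks land in different regions of the graph.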
{
"text": "Each node in the graph is associated with a mastery score ranging from 0 to 1, with 1 indicating that the learner masters the word. We initialize the mastery score of each node with 0.5 and interpret this as a middle ground where the model is uncertain about the learner's knowledge of that word. The mastery score is updated based on the learner's responses in the learning activities. To address the bottleneck that the system is tied to such a thin stream of evidence about the learner's lexical knowledge, we make use of the fact that the learner model is based on a network of semantically related word families. We use this to spread some activation from a word for which the learner has shown mastery to semantically closely related words, to indicate that these words are more likely to also be known.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mastery scores and updating them in the graph to capture learning",
"sec_num": "3.2"
},
{
"text": "Let r be the learner response for a learning activity involving a word from the family F_i. Then its mastery score m_i is updated using \u2206m_i = m_i * \u03b1 * r. The update to the mastery score of its immediate neighbours is weighted by the similarity score between the families: \u2206m_j = m_j * (\u03b2 * r * w_ij), where m_j is the mastery score of F_j, a neighbouring family of F_i attached with an edge weight of w_ij. \u03b1 and \u03b2 are tune-able parameters for the magnitude of an update. 2 r \u2208 {\u22121, +1} indicates the polarity of the learner's response: +1 for the learner responding correctly and \u22121 for an incorrect response. Figure 4 provides a close-up view of the graph with enlarged nodes highlighting the nodes selected for learning activities. The figure also illustrates the color representation of the mastery level and the spreading activation to neighboring nodes. Initially, all nodes are grey, corresponding to a mastery level of 0.5. The closer the level gets to 1, the greener the node appears, and the closer to 0, the redder. A node the user has practiced with mixed success can end up at a 0.5 level again, which then is shown in yellow to distinguish nodes that have already been practiced from the untouched grey ones. In addition to serving as an open learner model allowing learners to inspect the current state of their knowledge in relation to the lexical demands of the book, the mastery level also plays a role in selecting the next words to be practiced, with words over 0.8 no longer being selected. In the future, we plan to add a component that takes into account memory decay and the so-called spacing effect (Sense et al., 2016) to optimize when a word is selected again.",
"cite_spans": [
{
"start": 1612,
"end": 1632,
"text": "(Sense et al., 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 623,
"end": 631,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Mastery scores and updating them in the graph to capture learning",
"sec_num": "3.2"
},
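The update rule and the spreading activation to neighbours can be sketched directly from the formulas above. The alpha/beta values and the clamping of scores to [0, 1] are illustrative assumptions, not the paper's actual settings.

```python
def update_mastery(mastery, graph, family, r, alpha=0.1, beta=0.05):
    """Update the practiced family's score by m_i * alpha * r and spread
    activation m_j * beta * r * w_ij to its immediate neighbours.
    r = +1 for a correct response, -1 for an incorrect one.
    alpha, beta, and the [0, 1] clamp are illustrative choices."""
    mastery[family] += mastery[family] * alpha * r
    for nbr, w in graph[family].items():
        mastery[nbr] += mastery[nbr] * beta * r * w
    # keep all scores inside [0, 1]
    for f in mastery:
        mastery[f] = min(1.0, max(0.0, mastery[f]))
    return mastery

mastery = {"dream": 0.5, "sleep": 0.5}
graph = {"dream": {"sleep": 0.8}, "sleep": {"dream": 0.8}}
update_mastery(mastery, graph, "dream", +1)
print(mastery)  # {'dream': 0.55, 'sleep': 0.52}
```

A correct answer on "dream" thus nudges the tightly connected "sleep" upward as well, which is how the Yes/No warm-start test and every testing session feed more evidence into the model than the single practiced item.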
{
"text": "4 Putting it all together in an application 4.1 A warm start for the learner model Given that we are targeting learners beyond the beginner stage, it is important to determine their vocabulary knowledge to avoid a cold start of the learner model. Starting from a blank slate would require many interactions with the system until the learner model reflects the learner's lexical competence -a time during which the system cannot optimally select the words to be learned next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mastery scores and updating them in the graph to capture learning",
"sec_num": "3.2"
},
{
"text": "To avoid this cold start problem, we implemented a short web-based vocabulary Yes/No test, a format long used for vocabulary estimation (Sims, 1929; Tilley, 1936; Goulden et al., 1990) . The participants see a checklist of words and select whether they know the word or not. While there is a rich literature on the test and adjustments have been proposed to counter its weaknesses as a competence diagnostic (Meara and Buxton, 1987; Beeckmans et al., 2001; Huibregtse et al., 2002; Mochida and Harrington, 2006) , it readily enables learners to quickly initialize their learner model. The words included in the test are selected from the graph using the same central node selection approach introduced in section 3.1, and the mechanism spreading activation to related nodes discussed in section 3.2 allows the system to make additional use of the information from the test.",
"cite_spans": [
{
"start": 136,
"end": 148,
"text": "(Sims, 1929;",
"ref_id": "BIBREF26"
},
{
"start": 149,
"end": 162,
"text": "Tilley, 1936;",
"ref_id": "BIBREF27"
},
{
"start": 163,
"end": 184,
"text": "Goulden et al., 1990)",
"ref_id": "BIBREF15"
},
{
"start": 408,
"end": 432,
"text": "(Meara and Buxton, 1987;",
"ref_id": "BIBREF18"
},
{
"start": 433,
"end": 456,
"text": "Beeckmans et al., 2001;",
"ref_id": "BIBREF1"
},
{
"start": 457,
"end": 481,
"text": "Huibregtse et al., 2002;",
"ref_id": "BIBREF16"
},
{
"start": 482,
"end": 511,
"text": "Mochida and Harrington, 2006)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mastery scores and updating them in the graph to capture learning",
"sec_num": "3.2"
},
{
"text": "Content creation for vocabulary learning activities typically is a task requiring human effort, so that adapting the material to individual learners' language competence and interests is beyond the reach of traditional methods. To overcome this limitation, we generate activities based on the target text for which the user wants to acquire the vocabulary. While there are multiple activity types one could consider, we implemented multi-gap activities (Zesch and Melamud, 2014), given that they make it possible for the learner to engage with a word using several sentence contexts drawn from the targeted book. Given the frequency threshold used in constructing the domain model graph, there are at least five sentences for each word in the book. We rank sentences to determine which sentences work best out of context. Fortunately, this issue has been addressed in lexicography, where authentic sentences are used in dictionaries to illustrate word usage. Kilgarriff et al. (2008) developed GDEX, a method to identify sentences that are well-suited to illustrate word meaning within a single sentence context. GDEX considers factors such as sentence length, use of rare words and anaphora, target word occurrence in main clauses, sentence completeness, and target word collocations towards the end of sentences. Sentence length and rare word usage are the most highly weighted features. We adapted GDEX for our purpose of ranking sentences for vocabulary activities and customized the rare word feature to reflect the individual learner's vocabulary knowledge as recorded in the learner model. Learning and testing in the system are conducted in sessions. Each session covers the top 20 central nodes from the learner model that are below the mastery score threshold. Multi-gap activities consisting of three to four sentences in which the target word chosen from the central node word family occurs are used for both learning and testing. The sentences are initially shown with the occurrences replaced by a blank. For each activity, four lexical options are provided: the target word and three distractors chosen from the book, as discussed below. Figure 5 shows an activity targeting the word family scowl in a learning session for the book \"A Game of Thrones: A Song of Ice and Fire\", after the correct word was selected by the learner. In the learning mode, learners are provided with learning aids such as dictionary lookup, translations, and word usage examples from within and outside the targeted book. The mastery scores in the learner model are not updated during learning mode. In the testing mode, no such support is provided and the score for the target family and its neighbors is updated based on the user responses.",
"cite_spans": [
{
"start": 452,
"end": 477,
"text": "(Zesch and Melamud, 2014)",
"ref_id": "BIBREF32"
},
{
"start": 958,
"end": 982,
"text": "Kilgarriff et al. (2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 2157,
"end": 2165,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Activity generation: practicing and testing in the target context",
"sec_num": "4.2"
},
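The GDEX-style ranking described above can be made concrete with a small sketch. This is an illustrative Python sketch, not the authors' implementation: the feature set mirrors the factors named in the text (sentence length, rare words, anaphora, target word position), but the weights, the whitespace tokenization, and the `known_words` personalization drawn from the learner model are hypothetical.

```python
# Illustrative GDEX-style scorer in the spirit of Kilgarriff et al. (2008).
# All weights and the `known_words` personalization are assumptions.

def gdex_score(sentence, target, known_words, ideal_len=(10, 25)):
    tokens = sentence.lower().split()
    score = 1.0
    # Penalize sentences outside a comfortable length band.
    if not ideal_len[0] <= len(tokens) <= ideal_len[1]:
        score -= 0.3
    # Penalize rare words; here personalized as words the learner
    # does not yet know according to the learner model.
    score -= 0.05 * sum(1 for t in tokens
                        if t.isalpha() and t not in known_words)
    # Penalize sentence-initial anaphora that hurt standalone readability.
    if any(t in {"he", "she", "it", "they", "this", "that"}
           for t in tokens[:3]):
        score -= 0.2
    # Reward the target word occurring late in the sentence.
    if target in tokens and tokens.index(target) > len(tokens) // 2:
        score += 0.1
    return score

sentences = [
    "He scowled at it.",
    "The guard scowled when the gate swung open at dawn.",
]
known = {"the", "guard", "scowled", "when", "gate",
         "swung", "open", "at", "dawn"}
ranked = sorted(sentences,
                key=lambda s: gdex_score(s, "scowled", known),
                reverse=True)
```

The short anaphoric sentence is penalized for length, an unknown word, and a pronoun opener, so the fuller, self-contained sentence ranks first.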
{
"text": "Distractor generation is a critical part of multigap learning activities. We are interested in distractors that require some cognitive effort to discriminate, actively engaging the learner with the choices in the different sentence contexts. Thus in the step of choosing distractors, morpho-syntactically appropriate forms are used to focus the choice on the meaning rather than grammatical surface cues. To identify challenging distractors, we select the appropriate forms from neighboring graph nodes. We empirically established that edge weights between 0.5 to 0.8 seems to be a suitably challenging distractor. This avoids synonyms that are too closely related to be distinguishable from the target, but also semantically unrelated words that are too easy to rule out. The edge weight for nodes that are not immediate neighbors is computed as the product of the edge weights connecting the nodes. Often the best distractors turn out to be two hops away. We are considering combining such a distractor generation based on the domain model with other strategies discussed by Zesch and Melamud (2014) .",
"cite_spans": [
{
"start": 1077,
"end": 1101,
"text": "Zesch and Melamud (2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Activity generation: practicing and testing in the target context",
"sec_num": "4.2"
},
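The band-based selection can be sketched as follows. This is a hypothetical illustration of the idea, not the paper's code: the adjacency-dict graph format, the toy weights, and the two-hop search are assumptions; only the 0.5–0.8 band and the multiplicative multi-hop weights come from the text.

```python
# Hypothetical sketch of distractor selection from the domain model graph:
# candidates whose (possibly multi-hop, multiplied) edge weight to the
# target falls in an empirically chosen band are kept.

def path_weight(graph, src, dst, max_hops=2):
    """Best similarity to dst within max_hops; multi-hop weights multiply."""
    best = 0.0
    frontier = [(src, 1.0)]
    for _ in range(max_hops):
        nxt = []
        for node, w in frontier:
            for nb, ew in graph.get(node, {}).items():
                cw = w * ew
                if nb == dst:
                    best = max(best, cw)
                nxt.append((nb, cw))
        frontier = nxt
    return best

def pick_distractors(graph, target, k=3, lo=0.5, hi=0.8):
    cands = [(n, path_weight(graph, target, n)) for n in graph if n != target]
    in_band = [(n, w) for n, w in cands if lo <= w <= hi]
    # Prefer the most challenging (highest-weight) candidates in the band.
    return [n for n, _ in sorted(in_band, key=lambda x: -x[1])[:k]]

graph = {
    "scowl":   {"frown": 0.9, "glare": 0.7, "sword": 0.1},
    "frown":   {"scowl": 0.9, "grimace": 0.8},
    "glare":   {"scowl": 0.7},
    "grimace": {"frown": 0.8},
    "sword":   {"scowl": 0.1},
}
```

On this toy graph, "frown" (0.9, a near-synonym) and "sword" (0.1, unrelated) are filtered out, while "glare" (direct, 0.7) and "grimace" (two hops, 0.9 × 0.8 = 0.72) survive, matching the observation that good distractors are often two hops away.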
{
"text": "To empirically evaluate the approach, we first want to establish that it works as described. There are a number of components and parameters involved in generating the graph, selecting the nodes to be practiced, and updating the learner model. So we ran experiments with simulated users who performed the activities at different levels of accuracy. In addition to a a cold-start setting, where the system initially knows nothing about the learner, we also performed warm-start simulations for users at different proficiency levels. As a second step, we envisage conducting studies with language learners in authentic learning contexts. Testing educational tools in real-life contexts is crucial for establishing that an approach is effective in the complex authentic education contexts with the rich set of cognitive, motivational, and social variables at stake there. While in Meurers et al. (2019) we illustrate the feasibility of conducting such randomized controlled field studies, this clearly is an endeavor of its own, beyond the scope of this paper.",
"cite_spans": [
{
"start": 878,
"end": 899,
"text": "Meurers et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "In the first set of experiments, we cold start the system with the learner model set to the default .5 chance level for every word family, and we simulated learners with performance levels of 60%, 70%, 80%, 90%, and 100%. As baseline approach for comparison, we include a traditional flashcard setup tackling words in a linear fashion, one by one, where each word is independent of the other words. Throughout, we assume that mastery is achieved when a word has reached .8 or more. Table 2 shows the number of learning sessions, each consisting of 20 words, that the user would need to complete to fully master all learning targets in the given books in our and the baseline setup. We see that under the 100% accuracy condition, where the learner successfully completes each activity they work on, the baseline approach requires Learning Setup # of sessions given accuracy rate of targets 100% 90% 80% 70% 60% a) 1.7k our 85 100 120 195 1280 baseline 170 235 340 650 3150 b) 1.2k our 65 75 90 140 735 baseline 120 165 245 475 2490 c) 3.7k our 165 190 240 395 2850 baseline 370 505 745 1390 7230 Table 2 : Number of interactions required to master the vocabulary for simulated learners at given accuracy rate exactly twice the number of learning targets when compared to our graph based approach spreading the activation to semantically related words and taking semantically transparent word families into account. The difference becomes even more pronounced when the accuracy for completing the activities is set to more realistic levels between 60 and 90%. Note the steep increase in the number of sessions needed by learners performing exercises with only 60% accuracy. This showcases that the ability to interpret lexical material in context, based on an understanding of the domain of the book from which the exercises are drawn, is important for determining which book one can successfully prepare for. 
Overall, while the simulation experiments clearly are based on a very simple model of learning, the observations reported should carry over to more sophisticated learning models in which initial learning gains are higher than later ones and also modeling forgetting of what has been learned.",
"cite_spans": [],
"ref_spans": [
{
"start": 482,
"end": 489,
"text": "Table 2",
"ref_id": null
},
{
"start": 913,
"end": 1133,
"text": "1.7k our 85 100 120 195 1280 baseline 170 235 340 650 3150 b) 1.2k our 65 75 90 140 735 baseline 120 165 245 475 2490 c) 3.7k our 165 190 240 395 2850 baseline 370 505 745 1390 7230 Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
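To make the cold-start comparison concrete, here is a toy re-creation of the simulation idea. It is not the authors' simulation code: the update sizes (`boost`, `spread`), the ring graph standing in for the semantic network, and the word counts are made up. It only reproduces the qualitative effect that spreading activation to neighbors reduces the number of sessions relative to the independent flashcard baseline.

```python
import random

# Simulated learners answer with a fixed accuracy; in the graph condition
# a correct answer also spreads a smaller boost to neighboring families,
# while the flashcard baseline updates only the practiced word.

def simulate(n_words, accuracy, neighbors=None, boost=0.15, spread=0.05,
             threshold=0.8, session_size=20, seed=0):
    rng = random.Random(seed)
    scores = {w: 0.5 for w in range(n_words)}  # cold start at chance level
    sessions = 0
    while any(s < threshold for s in scores.values()):
        sessions += 1
        # Each session practices the least-mastered words.
        below = sorted(scores, key=scores.get)[:session_size]
        for w in below:
            if rng.random() < accuracy:
                scores[w] = min(1.0, scores[w] + boost)
                for nb in (neighbors or {}).get(w, []):
                    scores[nb] = min(1.0, scores[nb] + spread)
            else:
                scores[w] = max(0.0, scores[w] - boost)
    return sessions

# Ring graph as a stand-in for the semantic domain model.
ring = {w: [(w - 1) % 200, (w + 1) % 200] for w in range(200)}
graph_sessions = simulate(200, 0.9, neighbors=ring)
flash_sessions = simulate(200, 0.9)
```

Even on this crude model, the graph condition needs noticeably fewer sessions because every correct answer also moves neighboring families toward the mastery threshold.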
{
"text": "In the second set of experiments, we assume an accuracy of 90% and instead consider the effect of proficiency differences as indicated by the learner's CEFR level. Instead of simulating the web-based book-specific vocabulary test we implemented as discussed in section 4.1, we base are simulation experiments on Meara and Milton's estimation of the knowledge of the most frequent 5000 lemmatized English words for learners at different CEFR levels as reported in Milton and Alexiou (2009) . Simplifying their estimates for the number of known words distributed over the frequency bands to the upper bound given for the number of words learned in the first four proficiency levels (A1: 1500, A2: 2500, B1: 3250, B2: 3750), we started the simulation by setting the mastery score of those nodes to 0.75 for which the head word of the family occurs frequently enough (as determined by reference to SUBTLEX-US; Brysbaert and New, 2009) to be included in the set for the given proficiency level. The learner model thus encodes that the learner is likely to know the word, but the positive bias alone is not sufficient to cross the 0.8 level indicating mastery so that additional evidence is required to mark them as known. For example, an A2 learner only needs 47 sessions to master the learning targets for book b) assuming an accuracy of 90% in completing the exercises, whereas in the cold start condition we saw in Table 2 , one would need 75 sessions. The number of sessions estimated for this warm-start condition seems realistic for using the approach in practice, especially considering that one naturally does not need to learn all of the words to be able to read a book (Schmitt et al., 2011) .",
"cite_spans": [
{
"start": 463,
"end": 488,
"text": "Milton and Alexiou (2009)",
"ref_id": "BIBREF20"
},
{
"start": 906,
"end": 930,
"text": "Brysbaert and New, 2009)",
"ref_id": "BIBREF5"
},
{
"start": 1674,
"end": 1696,
"text": "(Schmitt et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 1413,
"end": 1420,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
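The warm-start initialization described above can be sketched as follows. This is a hedged illustration: the function name, the toy frequency ranks, and the word list are hypothetical stand-ins (the paper uses SUBTLEX-US ranks and Milton and Alexiou's band sizes); only the 0.75 prior, the 0.8 mastery threshold, and the CEFR band sizes come from the text.

```python
# Upper-bound vocabulary sizes per CEFR level, as cited from
# Milton and Alexiou (2009) in the text above.
CEFR_KNOWN_WORDS = {"A1": 1500, "A2": 2500, "B1": 3250, "B2": 3750}

def warm_start(learner_model, freq_rank, level, prior=0.75):
    """Prime word families whose head word falls inside the CEFR band.

    The 0.75 prior stays below the 0.8 mastery threshold, so the system
    still requires some positive evidence before marking a word as known.
    """
    cutoff = CEFR_KNOWN_WORDS[level]
    primed = {}
    for family, score in learner_model.items():
        rank = freq_rank.get(family)
        if rank is not None and rank <= cutoff:
            primed[family] = max(score, prior)
        else:
            primed[family] = score
    return primed

# Toy learner model (cold-start default 0.5) and made-up frequency ranks.
model = {"scowl": 0.5, "the": 0.5, "obfuscate": 0.5}
ranks = {"the": 1, "scowl": 2400, "obfuscate": 9000}
primed = warm_start(model, ranks, "A2")
```

For an A2 learner, "the" and "scowl" (ranks within the 2500-word band) are primed to 0.75, while "obfuscate" keeps the cold-start default.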
{
"text": "While the experimental evaluation provides some insights into the practical viability of the approach, given the conceptual nature of our proposal, we here also contextualize and compare the approach with related work to discuss where it conceptually advances the state of the art. Our approach can be characterized by the following aspects: First, the user can select what they want to learn the vocabulary for; they pick the text of the book they want to be able to read, i.e., the functional task goal. Second, the system automatically creates a domain model graph representing the lexical semantic space to be learned. Third, a learner model is created as an overlay of the domain model graph and records the mastery of the concepts by the learner, with updates to the learner model spreading activation through the graph to indirectly activate related concepts as a way to avoid explicit interaction for every word. Fourth, it determines in which order the words can be learned in such a way that the lexical space is efficiently explored, prioritizing the words that are central nodes. Fifth, the system compactly represents word families to allow the visualization and open learner model to be concise and usable with minimal number of interactions. Sixth, the system supports learning of the words using multi-gap activities using sentences drawn from the actual book to be read.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "Putting this approach into context of the related work on vocabulary learning, there is a large number of applications designed to support vocabulary learning -though, as we will see, the above characteristics clearly seem to set our approach apart from what is offered in this domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "Foreign language textbooks systematically provide a list of vocabulary items per chapter and there are many specialized or general file card applications for memorizing these sentences including Phase-6.de, Quizlet.com, or Ankiweb.net. Other tools offer more language-related functionality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "Lextutor (https://lextutor.ca) is a website offering a collection of tools to learn vocabulary using lexical resources such as frequency-based vocabulary lists and corpus data. List learn supports learners in choosing words from frequency-based word lists and working with corpus concordances. Grouplex lets the learner select from a 2k crowd-sourced word list and practice them in fill-in-the-blank activities, with hints based on dictionary definition and POS tags. Flash employs cards showing words on one side and lexical support on the other. Apart from word meaning and usage, MorphoLex supports learning regular inflectional and derivational affixes based on the word family levels of Bauer and Nation (1993) . Other lextutor tools target reading texts with support from concordances and dictionaries. Resource assisted reading lets the user choose a pre-processed book, but Hyper text allows the learner to upload their text. While Lextutor offers a variety of tools and corpus resources, none of them offer personalized learning, performance tracking, or structured vocabulary spaces.",
"cite_spans": [
{
"start": 692,
"end": 715,
"text": "Bauer and Nation (1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "Memrise.com is a commercial flashcard-based vocabulary learning application focused on beginners, with learning units grouped by theme with little freedom for the learner to choose contents of interest. Duolingo.com is a strictly guided application supporting the users to learn a foreign language using various learning activities offering some gamification elements but no personalized vocabulary learning for texts or domains of personal interest. Vocabulary.com is a gamified free vocabulary list learning application that lets learners choose from collections and the literature to practice the words in multiple-choice questions activities to choose the correct meaning phrase for the given word usage. The literature only is a source of vocabulary though, it is not used as testing context or learning goal, and the vocabulary domain is not semantically structured or to construct a structured learner model. Cabuu.app supports learning of vocabulary lists scanned from books by associating each item with gestures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "Overall, while there is a rich landscape of applications supporting vocabulary learning, the six characteristics of the method presented in this paper set our approach apart -especially the use of distributional semantic methods to create a graph representation for any book or text the user wants to read, to efficiently organize and individually support and track the learning in this lexical space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "In this paper, we discussed the methodological basis and realization of a tool allowing the learner to systematically learn the lexical material needed to be able to read a book they are interested in. Automatically structuring the lexical space and sequencing the learning is achieved through distributional semantic methods, the automatic identification of word families, and concepts from network analysis. The graph-based domain model that is automatically derived from the given book serves as the foundation of a learner model supporting the selection of an efficient learning path through the lexical space to be acquired. Multi-gap activities are automatically generated from the targeted book and used for practice and testing activities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In addition to self-guided learning for people interested in reading specific books, which may be particularly useful in the context of so-called intensive reading programs, the approach is particularly well-suited for the English for Specific Purposes context, where both the language and the particular content domain are of direct importance. Given this kind of integration of language and content learning, a similar affinity exists to so-called Content and Language Integrated Learning (Coyle et al., 2010) .",
"cite_spans": [
{
"start": 491,
"end": 511,
"text": "(Coyle et al., 2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Currently, the word is randomly chosen from the words in the word family. One could consider selecting forms of particular relevance (e.g., irregular ones) or taking language use characteristics into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We set both \u03b1 and \u03b2 to 0.3, which requires the least connected nodes to receive a minimum of two positive responses for them to count as mastered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Himanshu Bansal for his contribution to the initial stages of this project, and we are grateful to the reviewers for the helpful suggestions and pointers they provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word families",
"authors": [
{
"first": "Laurie",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Nation",
"suffix": ""
}
],
"year": 1993,
"venue": "International Journal of Lexicography",
"volume": "6",
"issue": "4",
"pages": "253--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurie Bauer and Paul Nation. 1993. Word families. International Journal of Lexicography, 6(4):253- 279.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Examining the yes/no vocabulary test: Some methodological issues in theory and practice",
"authors": [
{
"first": "Renaud",
"middle": [],
"last": "Beeckmans",
"suffix": ""
},
{
"first": "June",
"middle": [],
"last": "Eyckmans",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Janssens",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Dufranne",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Van De Velde",
"suffix": ""
}
],
"year": 2001,
"venue": "Language Testing",
"volume": "18",
"issue": "3",
"pages": "235--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Renaud Beeckmans, June Eyckmans, Vera Janssens, Michel Dufranne, and Hans Van de Velde. 2001. Ex- amining the yes/no vocabulary test: Some method- ological issues in theory and practice. Language Testing, 18(3):235-274.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Power and centrality: A family of measures",
"authors": [
{
"first": "Phillip",
"middle": [],
"last": "Bonacich",
"suffix": ""
}
],
"year": 1987,
"venue": "American Journal of Sociology",
"volume": "92",
"issue": "5",
"pages": "1170--1182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phillip Bonacich. 1987. Power and centrality: A fam- ily of measures. American Journal of Sociology, 92(5):1170-1182.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Centrality and network flow",
"authors": [
{
"first": "P",
"middle": [],
"last": "Stephen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Borgatti",
"suffix": ""
}
],
"year": 2005,
"venue": "Social Networks",
"volume": "27",
"issue": "1",
"pages": "55--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen P. Borgatti. 2005. Centrality and network flow. Social Networks, 27(1):55-71.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A graph-theoretic perspective on centrality",
"authors": [
{
"first": "P",
"middle": [],
"last": "Stephen",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"G"
],
"last": "Borgatti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Everett",
"suffix": ""
}
],
"year": 2006,
"venue": "Social Networks",
"volume": "28",
"issue": "4",
"pages": "466--484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen P. Borgatti and Martin G. Everett. 2006. A graph-theoretic perspective on centrality. Social Net- works, 28(4):466-484.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Moving beyond Ku\u010dera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "New",
"suffix": ""
}
],
"year": 2009,
"venue": "Behavior Research Methods",
"volume": "41",
"issue": "4",
"pages": "977--990",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Brysbaert and Boris New. 2009. Moving be- yond Ku\u010dera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods, 41(4):977-990.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Open learner models",
"authors": [
{
"first": "Susan",
"middle": [],
"last": "Bull",
"suffix": ""
},
{
"first": "Judy",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in intelligent tutoring systems",
"volume": "",
"issue": "",
"pages": "301--322",
"other_ids": {
"DOI": [
"10.1007/978-3-642-14363-2_15"
]
},
"num": null,
"urls": [],
"raw_text": "Susan Bull and Judy Kay. 2010. Open learner models. In R. Nkambou, J. Bourdeau, and R. Mizoguchi, ed- itors, Advances in intelligent tutoring systems, pages 301-322. Springer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Linguistically-aware information retrieval: Providing input enrichment for second language learners",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Chinkina",
"suffix": ""
},
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications (BEA)",
"volume": "",
"issue": "",
"pages": "188--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Chinkina and Detmar Meurers. 2016. Linguistically-aware information retrieval: Pro- viding input enrichment for second language learners. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications (BEA), pages 188-198, San Diego, CA. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Content and language integrated learning",
"authors": [
{
"first": "Do",
"middle": [],
"last": "Coyle",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Hood",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Marsh",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Do Coyle, Philip Hood, and David Marsh. 2010. Con- tent and language integrated learning. Ernst Klett Sprachen.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mathematical thinking. Problem Solving and Proofs",
"authors": [
{
"first": "P",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"B"
],
"last": "West",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John P. D'Angelo and Douglas B. West. 1997. Mathe- matical thinking. Problem Solving and Proofs.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Task-based language teaching: Sorting out the misunderstandings",
"authors": [],
"year": 2009,
"venue": "International Journal of Applied Linguistics",
"volume": "19",
"issue": "3",
"pages": "221--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rod Ellis. 2009. Task-based language teaching: Sort- ing out the misunderstandings. International Jour- nal of Applied Linguistics, 19(3):221-246.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Centrality in social networks conceptual clarification",
"authors": [
{
"first": "C",
"middle": [],
"last": "Linton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Freeman",
"suffix": ""
}
],
"year": 1978,
"venue": "Social Networks",
"volume": "1",
"issue": "3",
"pages": "215--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linton C. Freeman. 1978. Centrality in social networks conceptual clarification. Social Networks, 1(3):215- 239.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semantic and thematic list learning of second language vocabulary",
"authors": [
{
"first": "Javad",
"middle": [],
"last": "Gholami",
"suffix": ""
},
{
"first": "Sima",
"middle": [],
"last": "Khezrlou",
"suffix": ""
}
],
"year": 2014,
"venue": "CATESOL Journal",
"volume": "25",
"issue": "1",
"pages": "151--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javad Gholami and Sima Khezrlou. 2014. Semantic and thematic list learning of second language vocab- ulary. CATESOL Journal, 25(1):151-162.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Riding the digital wilds: Learner autonomy and informal language learning",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Godwin-Jones",
"suffix": ""
}
],
"year": 2019,
"venue": "Language Learning & Technology",
"volume": "23",
"issue": "1",
"pages": "8--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Godwin-Jones. 2019. Riding the digital wilds: Learner autonomy and informal language learning. Language Learning & Technology, 23(1):8-25.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "word2vec explained: Deriving Mikolov et al.'s negativesampling word-embedding method",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1402.3722"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Omer Levy. 2014. word2vec explained: Deriving Mikolov et al.'s negative- sampling word-embedding method. arXiv preprint arXiv:1402.3722.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "How large can a receptive vocabulary be? Applied Linguistics",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Goulden",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Nation",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Read",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "11",
"issue": "",
"pages": "341--363",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Goulden, Paul Nation, and John Read. 1990. How large can a receptive vocabulary be? Applied Linguistics, 11(4):341-363.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Scores on a yes-no vocabulary test: Correction for guessing and response style",
"authors": [
{
"first": "Ineke",
"middle": [],
"last": "Huibregtse",
"suffix": ""
},
{
"first": "Wilfried",
"middle": [],
"last": "Admiraal",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Meara",
"suffix": ""
}
],
"year": 2002,
"venue": "Language Testing",
"volume": "19",
"issue": "3",
"pages": "227--245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ineke Huibregtse, Wilfried Admiraal, and Paul Meara. 2002. Scores on a yes-no vocabulary test: Correc- tion for guessing and response style. Language Test- ing, 19(3):227-245.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Gdex: Automatically finding good dictionary examples in a corpus",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "Milos",
"middle": [],
"last": "Hus\u00e1k",
"suffix": ""
},
{
"first": "Katy",
"middle": [],
"last": "Mcadam",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Rundell",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Rychl\u1ef3",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the XIII EURALEX international congress",
"volume": "",
"issue": "",
"pages": "425--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Kilgarriff, Milos Hus\u00e1k, Katy McAdam, Michael Rundell, and Pavel Rychl\u1ef3. 2008. Gdex: Automatically finding good dictionary examples in a corpus. In Proceedings of the XIII EURALEX inter- national congress, pages 425-432. Universitat Pom- peu Fabra Barcelona, Spain.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An alternative to multiple choice vocabulary tests",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Meara",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Buxton",
"suffix": ""
}
],
"year": 1987,
"venue": "Language Testing",
"volume": "4",
"issue": "2",
"pages": "142--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Meara and Barbara Buxton. 1987. An alternative to multiple choice vocabulary tests. Language Test- ing, 4(2):142-154.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Scaling up intervention studies to investigate real-life foreign language learning in school",
"authors": [
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
},
{
"first": "Kordula",
"middle": [
"De"
],
"last": "Kuthy",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Nuxoll",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Rudzewitz",
"suffix": ""
},
{
"first": "Ramon",
"middle": [],
"last": "Ziai",
"suffix": ""
}
],
"year": 2019,
"venue": "Annual Review of Applied Linguistics",
"volume": "39",
"issue": "",
"pages": "161--188",
"other_ids": {
"DOI": [
"10.1017/S0267190519000126"
]
},
"num": null,
"urls": [],
"raw_text": "Detmar Meurers, Kordula De Kuthy, Florian Nuxoll, Bj\u00f6rn Rudzewitz, and Ramon Ziai. 2019. Scaling up intervention studies to investigate real-life foreign language learning in school. Annual Review of Ap- plied Linguistics, 39:161-188.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Vocabulary size and the Common European Framework of Reference for languages",
"authors": [
{
"first": "James",
"middle": [],
"last": "Milton",
"suffix": ""
},
{
"first": "Thoma\u00ef",
"middle": [],
"last": "Alexiou",
"suffix": ""
}
],
"year": 2009,
"venue": "Vocabulary studies in first and second language acquisition",
"volume": "",
"issue": "",
"pages": "194--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Milton and Thoma\u00ef Alexiou. 2009. Vocabulary size and the Common European Framework of Ref- erence for languages. In Vocabulary studies in first and second language acquisition, pages 194-211. Springer.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The yes/no test as a measure of receptive vocabulary knowledge",
"authors": [
{
"first": "Kira",
"middle": [],
"last": "Mochida",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Harrington",
"suffix": ""
}
],
"year": 2006,
"venue": "Language Testing",
"volume": "23",
"issue": "1",
"pages": "73--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kira Mochida and Michael Harrington. 2006. The yes/no test as a measure of receptive vocabulary knowledge. Language Testing, 23(1):73-98.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Extracting semantic representations from large text corpora",
"authors": [
{
"first": "Malti",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "John",
"middle": [
"A"
],
"last": "Bullinaria",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"P"
],
"last": "Levy",
"suffix": ""
}
],
"year": 1997,
"venue": "4th Neural Computation and Psychology Workshop",
"volume": "",
"issue": "",
"pages": "199--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malti Patel, John A Bullinaria, and Joseph P Levy. 1998. Extracting semantic representations from large text corpora. In 4th Neural Computation and Psychology Workshop, London, 9-11 April 1997, pages 199-212. Springer.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The percentage of words known in a text and reading comprehension",
"authors": [
{
"first": "Norbert",
"middle": [],
"last": "Schmitt",
"suffix": ""
},
{
"first": "Xiangying",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Grabe",
"suffix": ""
}
],
"year": 2011,
"venue": "The Modern Language Journal",
"volume": "95",
"issue": "1",
"pages": "26--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Norbert Schmitt, Xiangying Jiang, and William Grabe. 2011. The percentage of words known in a text and reading comprehension. The Modern Language Journal, 95(1):26-43.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "An individual's rate of forgetting is stable over time but differs across materials",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Sense",
"suffix": ""
},
{
"first": "Friederike",
"middle": [],
"last": "Behrens",
"suffix": ""
},
{
"first": "Rob",
"middle": [
"R"
],
"last": "Meijer",
"suffix": ""
},
{
"first": "Hedderik",
"middle": [],
"last": "Van Rijn",
"suffix": ""
}
],
"year": 2016,
"venue": "Topics in Cognitive Science",
"volume": "8",
"issue": "1",
"pages": "305--321",
"other_ids": {
"DOI": [
"10.1111/tops.12183"
]
},
"num": null,
"urls": [],
"raw_text": "Florian Sense, Friederike Behrens, Rob R. Meijer, and Hedderik van Rijn. 2016. An individual's rate of forgetting is stable over time but differs across mate- rials. Topics in Cognitive Science, 8(1):305-321.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The reliability and validity of four types of vocabulary tests",
"authors": [
{
"first": "Verner",
"middle": [
"Martin"
],
"last": "Sims",
"suffix": ""
}
],
"year": 1929,
"venue": "The Journal of Educational Research",
"volume": "20",
"issue": "2",
"pages": "91--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Verner Martin Sims. 1929. The reliability and validity of four types of vocabulary tests. The Journal of Educational Research, 20(2):91-96.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A technique for determining the relative difficulty of word meanings among elementary school children",
"authors": [
{
"first": "Harvey",
"middle": [
"C"
],
"last": "Tilley",
"suffix": ""
}
],
"year": 1936,
"venue": "The Journal of Experimental Education",
"volume": "5",
"issue": "1",
"pages": "61--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harvey C. Tilley. 1936. A technique for determining the relative difficulty of word meanings among ele- mentary school children. The Journal of Experimen- tal Education, 5(1):61-64.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Social network analysis: Methods and applications",
"authors": [
{
"first": "Stanley",
"middle": [],
"last": "Wasserman",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Faust",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley Wasserman, Katherine Faust, et al. 1994. So- cial network analysis: Methods and applications. Cambridge University Press.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Characterising measures of lexical distributional similarity",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "McCarthy",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics (COL-ING)",
"volume": "",
"issue": "",
"pages": "1015--1021",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. In Proceedings of the 20th International Conference on Computational Linguistics (COL- ING), pages 1015-1021.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The grammar of sense: Using part-of-speech tags as a first step in semantic disambiguation",
"authors": [
{
"first": "Yorick",
"middle": [],
"last": "Wilks",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 1998,
"venue": "Natural Language Engineering",
"volume": "4",
"issue": "2",
"pages": "135--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yorick Wilks and Mark Stevenson. 1998. The grammar of sense: Using part-of-speech tags as a first step in semantic disambiguation. Natural Language Engi- neering, 4(2):135-143.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Doing task-based teaching",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Willis",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Willis",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Willis and David Willis. 2013. Doing task-based teaching. Oxford University Press.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Automatic generation of challenging distractors using contextsensitive inference rules",
"authors": [
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications (BEA)",
"volume": "",
"issue": "",
"pages": "143--148",
"other_ids": {
"DOI": [
"10.3115/v1/W14-1817"
]
},
"num": null,
"urls": [],
"raw_text": "Torsten Zesch and Oren Melamud. 2014. Automatic generation of challenging distractors using context- sensitive inference rules. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Build- ing Educational Applications (BEA), pages 143- 148, Baltimore, Maryland. Association for Compu- tational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Figure 1: A word family example (Bauer and Nation, 1993) and an expanded family node in our graph",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Formula for computing relationship between families and an example illustrating the result",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "A close-up view of a learner model showing nodes selected for practice",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "An example activity",
"type_str": "figure"
},
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"text": "Twenty Thousand Leagues Under the Sea by Jules Verne, (b) Harry Potter and the Sorcerer's Stone by J. K. Rowling and (c) A Game of Thrones: A Song of Ice and Fire by George R. R. Martin.",
"content": "<table><tr><td>Unique words</td><td>Learning targets</td><td>Graph nodes</td><td>Graph edges</td></tr><tr><td>a) 10k</td><td>1.7k</td><td>1.3k</td><td>3.3k</td></tr><tr><td>b) 6.5k</td><td>1.2k</td><td>1k</td><td>2.4k</td></tr><tr><td>c) 14k</td><td>3.7k</td><td>2.5k</td><td>5.6k</td></tr></table>"
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"text": "Example graphs derived for three books",
"content": "<table/>"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"text": "sums up the results of the second set of experiments.",
"content": "<table><tr><td rowspan=\"2\">Learning targets</td><td rowspan=\"2\">Setup</td><td colspan=\"4\"># of sessions at given proficiency</td></tr><tr><td>A1</td><td>A2</td><td>B1</td><td>B2</td></tr><tr><td rowspan=\"2\">a) 1.7k</td><td>our</td><td>80</td><td>75</td><td>69</td><td>68</td></tr><tr><td>baseline</td><td>175</td><td>147</td><td>139</td><td>123</td></tr><tr><td rowspan=\"2\">b) 1.2k</td><td>our</td><td>54</td><td>47</td><td>43</td><td>42</td></tr><tr><td>baseline</td><td>92</td><td>77</td><td>72</td><td>70</td></tr><tr><td rowspan=\"2\">c) 3.7k</td><td>our</td><td>169</td><td>157</td><td>149</td><td>142</td></tr><tr><td>baseline</td><td>421</td><td>380</td><td>376</td><td>357</td></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Number of interactions required to master the</td></tr><tr><td>vocabulary for simulated learners with 90% accuracy</td></tr><tr><td>when starting out at the specified proficiency level</td></tr></table>"
}
}
}
}