| { |
| "paper_id": "Y14-1012", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:44:53.013004Z" |
| }, |
| "title": "Zero-Shot Learning of Language Models for Describing Human Actions Based on Semantic Compositionality of Actions", |
| "authors": [ |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Asoh", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Institute of Advanced Industrial Science and Technology", |
| "location": { |
| "postCode": "305-8568", |
| "settlement": "Tsukuba, Ibaraki", |
| "country": "Japan" |
| } |
| }, |
| "email": "h.asoh@aist.go.jp" |
| }, |
| { |
| "first": "Ichiro", |
| "middle": [], |
| "last": "Kobayashi", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Ochanomizu University", |
| "location": { |
| "addrLine": "Bunkyo-ku", |
| "postCode": "112-8610", |
| "settlement": "Tokyo", |
| "country": "Japan" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We propose a novel framework for zero-shot learning of topic-dependent language models, which enables the learning of language models corresponding to specific topics for which no language data is available. To realize zero-shot learning, we exploit the semantic compositionality of the target topics. Complex topics are normally composed of several elementary semantic components. We found that the language model that corresponds to a particular topic can be approximated with a linear combination of language models corresponding to elementary components of the target topics. On the basis of these findings, we propose simple methods of zero-shot learning. To confirm the effectiveness of the proposed framework, we apply the methods to the problem of generating natural language descriptions of short Kinect videos of simple human actions.", |
| "pdf_parse": { |
| "paper_id": "Y14-1012", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We propose a novel framework for zero-shot learning of topic-dependent language models, which enables the learning of language models corresponding to specific topics for which no language data is available. To realize zero-shot learning, we exploit the semantic compositionality of the target topics. Complex topics are normally composed of several elementary semantic components. We found that the language model that corresponds to a particular topic can be approximated with a linear combination of language models corresponding to elementary components of the target topics. On the basis of these findings, we propose simple methods of zero-shot learning. To confirm the effectiveness of the proposed framework, we apply the methods to the problem of generating natural language descriptions of short Kinect videos of simple human actions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Constructing topic-dependent language models is useful for many applications such as text mining, speech recognition, statistical machine translation, natural language interfaces, and textual description of images or video contents. In most methods of topic-dependent language model construction, one general model is first constructed from a large amount of language data, and then the general model is modified with a small amount of language data regarding the target topic. The technique of taking the weighted sum of language models is often used for the modification (Bacchiani and Roark, 2003; Jelinek and Mercer, 1980). However, collecting language data for all target topics is demanding and difficult. In particular, when each target topic becomes narrower and the number of target topics increases, it becomes impractical to collect language data for all topics.", |
| "cite_spans": [ |
| { |
| "start": 573, |
| "end": 600, |
| "text": "(Bacchiani and Roark, 2003;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 601, |
| "end": 626, |
| "text": "Jelinek and Mercer, 1980)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we propose a novel framework for zero-shot learning of topic-dependent language models, which enables the learning of language models corresponding to specific topics, without observing language data regarding those topics, on the basis of the semantic compositionality of the target topics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the following, we consider rather fine-grained topics such as human activities. Such detailed topics are normally composed of several elementary semantic components. For example, a human action \"raising left leg in the forward direction\" is considered as a topic. The action includes components such as \"up (raise)\", \"left\", \"leg\", and \"in the forward direction\". Another action \"raising left hand in the side direction\" shares the common elements \"up\" and \"left\" with the previous action. In this way, actions are related to each other through common components. Hence, the language models generated from natural language sentences describing those actions are also expected to be related to each other. We will show that using this kind of compositionality, we can generate language models corresponding to actions for which we do not have natural language data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To demonstrate the effectiveness of the proposed methods, we apply the methods to the problem of generating natural language descriptions of short Kinect videos.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In summary, the original contributions of this work are as follows: 1) the problem of zero-shot learning of topic-dependent language models is newly formulated, 2) novel simple methods for zero-shot learning are proposed, and 3) the effectiveness of the methods is confirmed with real data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The remainder of the paper is organized as follows: the problem is formalized and solutions are proposed in Section 2; Section 3 discusses related work; Section 4 presents the application to the video description problem, including the experimental setup and results; and Section 5 presents the conclusion and discusses future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section we formalize the problem of zero-shot learning of topic-dependent language models, and propose methods to solve the problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Zero-Shot Learning of Language Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "As described above, we are interested in the problem of learning multiple topic-dependent language models M_i (i = 1, ..., N), each of which corresponds to a complex fine-grained topic such as a human action x_i. When we have language data S_i, i.e. a set of sentences describing the topic x_i, for all topics, we can simply calculate M_i from S_i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The problem we treat in this paper is estimating the language models M_i corresponding to topics x_i for which we do not have language data S_i. Such estimation becomes possible on the basis of the semantic compositionality of the topics. We assume that each topic is composed of several semantic components, which we denote y_j (j = 1, ..., K).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "For example, in the experiments described in Section 4, we use N = 20 human actions such as \"raising left leg in the forward direction\" and \"raising both hands in the side direction\". Each action is composed by combining some of the K = 9 components: \"up\", \"down\", \"front\" (front direction), \"side\" (side direction), \"hand\", \"leg\", \"right\", \"left\", and \"both\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The relation between topics and components can be described by a matrix A = (a_ij): a_ij = 1 if the ith topic includes the jth component, and a_ij = 0 otherwise. In the following, we assume that a_ij is known for all topics. We also assume that the number of topics N is larger than the number of components K.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "As for the language model, we consider the n-gram model. An n-gram language model is normally defined by the conditional probabilities p(w_i | w_{i-1}, ..., w_{i-n+1}) for a word sequence (w_{i-n+1}, ..., w_{i-1}, w_i).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Here we use the joint probabilities p(w_i, w_{i-1}, ..., w_{i-n+1}) instead of the conditional probabilities, because the joint probabilities are suited to the linear decomposition described below. Since the conditional probabilities can be calculated from the joint probabilities, this does not reduce the generality or usefulness of the framework.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We denote a vector composed of the joint probability values calculated from language data S_i as \u03c8_i, and assume that the probability vector \u03c8_i for the ith topic can be approximately decomposed as the weighted sum of the probability vectors \u03c6_j corresponding to the components included in the topic, as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u03c8_i = (\u2211_j a_ij \u03c6_j) / (\u2211_j a_ij) + \u03b5_i,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where \u03b5_i is a noise vector.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Because we consider N topics and K components, the relation can be written with matrices as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03a8 = \u00c3\u03a6 + E,", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where \u03a8 is an N \u00d7 W matrix whose ith row is \u03c8_i, \u03a6 is a K \u00d7 W matrix whose jth row is \u03c6_j, and \u00c3 is an N \u00d7 K matrix whose (i, j) element is a_ij / \u2211_j a_ij. W is the dimension of the probability vector of the language model, i.e. the number of ordered word pairs that appear in the language data. E is a matrix composed of noise terms. We use this linear relation for zero-shot learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formalization", |
| "sec_num": "2.1" |
| }, |
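The row-normalized mixing matrix Ã in the relation above can be built directly from the binary topic-component matrix A. A minimal NumPy sketch of this construction (the sizes and random data are illustrative stand-ins, not the paper's actual matrices):

```python
import numpy as np

# Toy sizes: N topics, K components, W word pairs (illustrative only).
N, K, W = 6, 3, 10
rng = np.random.default_rng(0)

# Binary topic-component matrix A: a_ij = 1 iff topic i includes component j.
A = (rng.random((N, K)) < 0.5).astype(float)
A[A.sum(axis=1) == 0, 0] = 1.0  # ensure every topic has at least one component

# Row-normalize to obtain A_tilde; each row sums to one.
A_tilde = A / A.sum(axis=1, keepdims=True)

# Component language models Phi: each row is a probability vector over W pairs.
Phi = rng.random((K, W))
Phi /= Phi.sum(axis=1, keepdims=True)

# Topic-level models as mixtures (the noise-free case of the linear relation).
Psi = A_tilde @ Phi
```

Because each row of Ã sums to one and each row of Φ is a distribution, each row of the mixture Ψ is again a valid probability vector.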
| { |
| "text": "Let us assume we have language data S_i for only N\u2032 (N\u2032 < N) topics. The set of topics for which we have language data is denoted by T. From the partial language data, we can compute the N\u2032 \u00d7 W probability matrix \u03a8\u2032 in the same way as the matrix \u03a8. Each row of \u03a8\u2032 is the probability vector corresponding to a topic in T. If we can estimate \u03a6 for the K components from the partial data, then we can recover the whole \u03a8 using the relation of Equation 1. This means that we can estimate the language models \u03c8_i for topics for which we have no language data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods of Zero-Shot Learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We assume that each of the K components y_j is included at least once in the N\u2032 topics. A naive method of computing \u03a6 is then to compute the language model \u03c6_j from the language data of all topics that include the jth component.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods of Zero-Shot Learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We merge the sentences regarding the topics that include the jth component, and from the merged data we compute the probability vector \u03c6_j for that component. We designate this method \"Method 1\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods of Zero-Shot Learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Another method of estimating \u03a6 is to use least-squares estimation from \u03a8\u2032:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods of Zero-Shot Learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u03a6\u0302 = arg min_\u03a6 ||\u03a8\u2032 \u2212 \u00c3\u2032\u03a6||^2,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods of Zero-Shot Learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where \u00c3\u2032 is an N\u2032 \u00d7 K matrix made by extracting the N\u2032 rows corresponding to \u03a8\u2032 from \u00c3. This optimization problem can be solved in closed form as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods of Zero-Shot Learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u03a6\u0302 = \u00c3\u2032+ \u03a8\u2032,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods of Zero-Shot Learning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where \u00c3\u2032+ is the generalized inverse of the matrix \u00c3\u2032. From \u03a6\u0302 we can then estimate the language models for topics without language data. We designate this method \"Method 2\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods of Zero-Shot Learning", |
| "sec_num": "2.2" |
| }, |
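Method 2 amounts to a single pseudoinverse multiplication, followed by a matrix product to recover the held-out topic models. A hedged NumPy sketch on synthetic data (all sizes, names, and the rank check are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, W = 8, 3, 12  # toy sizes (illustrative)

# Ground-truth component models (rows are distributions) and mixing matrix.
Phi_true = rng.dirichlet(np.ones(W), size=K)
A = (rng.random((N, K)) < 0.6).astype(float)
A[A.sum(axis=1) == 0, 0] = 1.0
A_tilde = A / A.sum(axis=1, keepdims=True)
Psi = A_tilde @ Phi_true  # topic-level probability vectors

# Zero-shot setting: language data observed for only the first N' topics.
n_obs = N - 2
Psi_obs, A_obs = Psi[:n_obs], A_tilde[:n_obs]

# Method 2: least-squares estimate via the generalized (Moore-Penrose) inverse.
Phi_hat = np.linalg.pinv(A_obs) @ Psi_obs

# Recover language models for the held-out topics via the linear relation.
Psi_hat = A_tilde[n_obs:] @ Phi_hat
rmse = np.sqrt(np.mean((Psi_hat - Psi[n_obs:]) ** 2))

# In this noise-free toy, recovery is exact whenever A_obs has full column rank.
if np.linalg.matrix_rank(A_obs) == K:
    assert rmse < 1e-8
```

As the paper notes for Method 2, nothing constrains the rows of `Phi_hat` to be probability vectors; with noisy data they can contain small negative entries.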
| { |
| "text": "Zero-shot learning has recently become a popular research topic in machine learning, in particular in the domain of large-scale visual object recognition and image tagging. Because the number of classes is large, it is difficult to collect true labels for these problems, so zero-shot learning is useful in this domain. Palatucci et al. (2009) proposed a method of zero-shot learning and applied it to decoding fMRI data from subjects thinking about certain words, based on the semantic representation of the target classes. They also gave a theoretical analysis of the zero-shot learning framework. Lampert et al. (2009) proposed a method of visual object classification where training and test classes are disjoint; they also exploited semantic attributes of the target classes. Farhadi et al. (2009) proposed a rather similar idea. More recently, Cheng et al. (2013) applied the idea of zero-shot learning to a human activity recognition task, mapping sequences of images to category labels. Socher et al. (2013) proposed a method for zero-shot learning of object recognition using deep neural networks, and Frome et al. (2014) improved the model with a larger-scale dataset.", |
| "cite_spans": [ |
| { |
| "start": 320, |
| "end": 343, |
| "text": "Palatucci et al. (2009)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 594, |
| "end": 615, |
| "text": "Lampert et al. (2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 771, |
| "end": 792, |
| "text": "Farhadi et al. (2009)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 989, |
| "end": 1009, |
| "text": "Socher et al. (2013)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1101, |
| "end": 1120, |
| "text": "Frome et al. (2014)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "All of the previous studies treat zero-shot learning of class labels, on the basis of similarities between inputs and between the semantic attributes of the classes. Our work extends the idea of zero-shot learning to language models, which have a more complex structure than class labels, by exploiting the semantic compositionality of complex topics. In other words, our work goes beyond the word level and treats sentence-level structure. As far as we know, this is the first work that applies the idea of zero-shot learning to topic-dependent language model learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The idea of linearly decomposing language models is strongly related to latent topic extraction in text mining. In latent semantic analysis (LSA), the word frequency vector (unigram probability vector) of a document is linearly decomposed into a weighted sum of latent topic vectors (Deerwester et al., 1990). In topic extraction, the aim of the data analysis is to extract the latent topics. In contrast, in this work, the aim of zero-shot learning is to construct language models for topics for which no language data is available.", |
| "cite_spans": [ |
| { |
| "start": 287, |
| "end": 312, |
| "text": "(Deerwester et al., 1990)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this paper, we assume that the latent topics (= components) are known, and we decompose the language models on the basis of the known combination of components (the information in matrix A). However, we can also consider another problem setting in which matrix A is unknown. In that setting, the problem is mathematically equivalent to LSA, and singular value decomposition of the language model matrix \u03a8 can be used to estimate the latent components and the language models for the components simultaneously. Various matrix factorization algorithms such as non-negative matrix factorization (Lee and Seung, 1999; Xu et al., 2003), and other probabilistic topic extraction methods such as probabilistic latent semantic analysis (Hofmann, 1999) and latent Dirichlet allocation (Blei et al., 2003), may also be applicable.", |
| "cite_spans": [ |
| { |
| "start": 585, |
| "end": 606, |
| "text": "(Lee and Seung, 1999;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 607, |
| "end": 623, |
| "text": "Xu et al., 2003)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 721, |
| "end": 736, |
| "text": "(Hofmann, 1999)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 769, |
| "end": 788, |
| "text": "(Blei et al., 2003)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Zero-shot learning of language models is also interesting from the viewpoint of modeling the natural language acquisition process of humans. Humans are believed to acquire language capability from a rather small amount of observed language data. To cope with this poverty of the stimulus, certain kinds of zero-shot learning may be exploited. As an example, Sugita and Tani (2005) propose a model of language acquisition with recurrent neural networks; the robot they constructed can generate sentences describing actions that it has not yet experienced, on the basis of the semantic compositionality of the actions.", |
| "cite_spans": [ |
| { |
| "start": 377, |
| "end": 399, |
| "text": "Sugita and Tani (2005)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To demonstrate the effectiveness of the proposed methods, we applied them to the problem of generating natural language descriptions of short Kinect videos. Obtaining huge amounts of video data has recently become easier, but full utilization of the data has not yet been achieved. For example, to grasp the content of videos recorded by surveillance cameras, or videos of recorded meetings, we need to watch through the entire videos, which is considerably time-consuming. If the content of a video can be recognized and described with natural language sentences, it becomes easier to mine the video data and to realize various applications such as scene retrieval through natural language queries.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Video Content Description System", |
| "sec_num": "4" |
| }, |
| { |
| "text": "On the basis of such needs, research on learning the relation between natural language and multimedia information has recently become popular in the areas of both natural language processing and multimedia information processing. Many studies have been conducted to generate sentences that explain human behaviors in a video (Barbu et al., 2012; Ding et al., 2012a; Ding et al., 2012b; Kobayashi et al., 2010; Kojima et al., 2002; Tan et al., 2011). As representative studies, Yu and Siskind (2013) propose a method that learns representations of word meanings from short video clips paired with sentences. Regneri et al. (2013) consider the problem of grounding sentences describing actions in visual information extracted from videos. Nakamura (2008, 2009) propose incremental learning of the association between motion symbols and natural language. Ushiku et al. (2011, 2012) propose a method to create a caption for a still picture by learning n-gram models for describing pictures from pairs of still pictures and their explanation sentences.", |
| "cite_spans": [ |
| { |
| "start": 331, |
| "end": 351, |
| "text": "(Barbu et al., 2012;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 352, |
| "end": 371, |
| "text": "Ding et al., 2012a;", |
| "ref_id": null |
| }, |
| { |
| "start": 372, |
| "end": 391, |
| "text": "Ding et al., 2012b;", |
| "ref_id": null |
| }, |
| { |
| "start": 392, |
| "end": 415, |
| "text": "Kobayashi et al., 2010;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 416, |
| "end": 436, |
| "text": "Kojima et al., 2002;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 437, |
| "end": 454, |
| "text": "Tan et al., 2011)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 614, |
| "end": 635, |
| "text": "Regneri et al. (2013)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 744, |
| "end": 765, |
| "text": "Nakamura (2008, 2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 855, |
| "end": 874, |
| "text": "Ushiku et al. (2011", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 875, |
| "end": 897, |
| "text": "Ushiku et al. ( , 2012", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Video Content Description System", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Among these works, Kobayashi et al. (2013) constructed a system for generating natural language descriptions of short Kinect videos of several kinds of human actions. From pairs of video data of an action taken by the Kinect and Japanese sentences describing the action, the system learns models of the observed human actions and language models of the sentences. Using the two models and the correspondence between them, the system can recognize a learned action in a new video and output Japanese sentences describing the action.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Video Content Description System", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In that work, they assumed that they could collect natural language sentences describing all target actions and construct language models corresponding to all actions from the data. However, when the number of target actions increases, it becomes impractical to prepare natural language descriptions for all actions. Here, we apply our zero-shot learning methods to learn the language models of actions for which we do not have language data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Video Content Description System", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We use N = 20 human actions as the target topics. We take short (less than 5 sec.) Kinect videos of the actions, and collect several Japanese sentences that describe the actions. Figure 1 shows an example of an action (\"raising both hands through the side direction\"). For each action, around 15 sentences describing the action are collected. Table 1 shows some sentences describing the action of raising the left hand in the front direction. The collected sentences are segmented into words, and bi-gram joint probabilities p(w_i, w_{i-1}) are computed from the data for each action. The number of word pairs that appear in the data is 360.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 179, |
| "end": 187, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 343, |
| "end": 350, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
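The bi-gram joint probabilities used here are, in effect, normalized pair counts over the segmented sentences. A small illustrative sketch (the helper name and the toy English tokens are stand-ins for the segmented Japanese data):

```python
from collections import Counter

def bigram_joint_probs(sentences):
    """Joint bi-gram probabilities p(w_i, w_{i-1}) from tokenized sentences."""
    counts = Counter()
    for tokens in sentences:
        counts.update(zip(tokens, tokens[1:]))
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

# Toy stand-ins for the segmented action descriptions (illustrative only).
sentences = [
    ["raise", "left", "hand"],
    ["raise", "left", "hand", "upward"],
]
probs = bigram_joint_probs(sentences)
# ("raise", "left") occurs twice among the 5 bi-grams -> probability 0.4
```

Collecting the resulting values over a fixed ordering of the observed word pairs gives one probability vector per action, i.e. one row of the matrix Ψ.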
| { |
| "text": "We set the number of components K = 9: i.e., \"up\", \"down\", \"front\" (front direction), \"side\" (side direction), \"hand\", \"leg\", \"right\", \"left\", and \"both\" (only for hands). The combinatorial relationship between the actions and the elements is illustrated in Figure 2 . \"L\", \"R\", and \"B\" in the figure denotes \"left\", \"right\", and \"both\" respectively. The figure shows that each human action includes four components in this experiment. For example, Action 3 (ACT 3) is composed of the components \"up\", \"front\", \"hand\", and \"left\", and Action 18 (ACT 18) is composed of \"down\", \"side\", \"leg\", and \"right\". ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 258, |
| "end": 266, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To evaluate the effectiveness of the proposed zero-shot learning methods, the sentences describing one of the 20 human actions are omitted from the training data. We then estimate \u03a6 for the components using the (N \u2212 1) \u00d7 W matrix \u03a8\u2032 and the (N \u2212 1) \u00d7 K matrix \u00c3\u2032. From the estimated \u03a6\u0302 we can recover the language model of the sentences omitted from the training data. Table 1 shows the root mean squared error (RMSE) of the estimated probability values. The column \"Action\" denotes the target action for which the language data is omitted and the probability vector is estimated with the zero-shot learning methods. The column \"Training\" means that the language model is estimated using all the sentences in the training data; this is a baseline. Another baseline, \"Uniform\", means that the estimated probability vector is the uniform distribution, that is, all probability values are equal to 1 / (# of word pairs). The minimum RMSE value for each action is shown in bold face.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 353, |
| "end": 360, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Result of Experiment", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Compared with the mean value of the non-zero joint probability values 0.0146, it can be said that the RMSE values obtained from our two methods are small enough. The result demonstrates that Method 2 performs better than other methods for allmost all removed topics. However, in Method 2, the estimated values of \u03c6 j and \u03c8 i do not become probabilities, that is, some values may become below zero and the sum of the values slightly differ from one. Hence, it becomes a bit difficult to interpret the values. Although this is not so serious problem in practice, this can be considered as a kind of tradeoff between the accuracy and the interpretability. We also evaluate the RMSE values when we omit language data for more than one actions from the training data. The results strongly depend on the data which are omitted. For example, when we omit language data regarding actions 1, 2, 7, and 8, then the RMSE value of the estimated language model for Action 1 is degraded to 0.00469. However when we omit language data regarding actions 1, 3, 5, and 13, then the RMSE keeps low value 0.00223. This difference comes from the components included in the remaining actions. The Action 1 is composed of \"raise\", \"front\", \"right\", \"hand\". When we omitted actions 1, 2, 7, and 8, no actions including components \"right\" and \"hand\" is remained in the training data. Hence this causes rather serious effect to the accuracy of the zero-shot estimation. However, when we omitted actions 1, 3, 5, and 13, all component pairs are still included in the training data. Hence this does not cause serious damage to the estimated language model. Through the analysis of various cases, we confirmed that if the choice of omitted data is balanced to keep all semantic components remained in the training data, then the performance of zero-shot learning is not degraded so much even though language data regarding several actions are omitted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Result of Experiment", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Finally, we evaluate the text generation capability of the estimated language models. Here we use the language models estimated by Method 2. We generate Japanese sentences with high likelihood in the same way as in the work of Kobayashi et al. (2013), i.e. with the Viterbi algorithm using the language model of each action. Table 3 contrasts the top two most probable texts generated with the bi-gram model computed from the collected language data of the action and with the bi-gram model estimated by zero-shot learning using the language data of the other 19 actions. We demonstrate the results for 6 of the 20 actions. From the table, we can see that almost the same sentences are generated with the bi-gram probability vector estimated by our zero-shot learning method.", |
| "cite_spans": [ |
| { |
| "start": 230, |
| "end": 253, |
| "text": "Kobayashi et al. (2013)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 329, |
| "end": 336, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Result of Experiment", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Although the actions used in the experiment are rather simple, we confirmed the possibility of zeroshot learning of effective language models. Those results show that zero-shot learning is a promising way to cope with the problem of the poverty of language data in natural language processing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Result of Experiment", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We have proposed methods of zero-shot learning of fine-grained topic-dependent language models. Using the methods, we can learn topic-dependent language models corresponding to topics for which we do not have language data on the basis of the compositionality of the topics. We confirmed the effectiveness of the proposed methods with the task of describing short Kinect videos of human actions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Much work remains to be done in the future. Because our experiment was conducted with a smallscale dataset, the methods should be evaluated more elaborately with larger scale datasets. The proposed zero-shot learning may be useful not only for describing videos but also for other various applications such as speech recognition, machine translation, text mining, and video retrieval. Application of the methods to such problems is an interesting topic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In this paper, we assumed that the matrix A which denotes the relationship between actions and components is known. However, as is mentioned in the related work section, the problem setting for unknown A is also interesting. This problem is related to find the optimal elementary components to describe target topics. This is a kind of dictionary learning problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Finally, modeling more complex relation between multiple language models using more sophisticated probabilistic models may be an interesting research direction for natural language processing. As an example, Eisenstein et al. (2011) proposed a new way of representing multiple language models. Introducing their method of sparse additive decomposition of language models into our framework is also an interesting issue.", |
| "cite_spans": [ |
| { |
| "start": 208, |
| "end": 232, |
| "text": "Eisenstein et al. (2011)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "5" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Acknowledgement: This work is partly supported by JSPS KAKENHI 26280096, MEXT KAKENHI 25120011, and foundation for the Fusion of Science and Technology.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Unsupervised Language Model Adaptation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Bacchiani", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Roark", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "224--227", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Bacchiani and B. Roark. 2003. Unsupervised Lan- guage Model Adaptation. 2003 IEEE International Conference on Acoustics, Speech and Signal Process- ing, Vol.1:224-227.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Video In Sentences Out", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Barbu", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Bridge", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Burchill", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Coroian", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Dickinson", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Fidler", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Michaux", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Mussman", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Narayanaswamy", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Salvi", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Schmidt", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Shangguan", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Siskind", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Waggoner", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1204.2742" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Barbu, A. Bridge, Z. Burchill, D. Coroian, S. Dickinson, S. Fidler, A. Michaux, S. Mussman, S. Narayanaswamy, D. Salvi, L. Schmidt, J. Shangguan, J. Siskind, J. Waggoner, S. Wang, J. Wei, Y. Yin, and Z. Zhang. 2012. Video In Sentences Out. arXiv:1204.2742.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Latent Dirichlet Allocation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Blei", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Jordan", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "4-5", |
| "pages": "993--1022", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Blei, A. Y. Ng, and M. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3 (4-5): 993-1022.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Towards Zero-Shot Learning for Human Activity Recognition Using Semantic Attribute Sequence Model. Proceedinsg of UbiComp'13", |
| "authors": [ |
| { |
| "first": "H.-T", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Griss", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "You", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H.-T. Chang, M. Griss, P. Davis, J. Li, and D. You. 2013. Towards Zero-Shot Learning for Human Ac- tivity Recognition Using Semantic Attribute Sequence Model. Proceedinsg of UbiComp'13.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Describing Objects by Their Attributes", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Farhadi", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Endres", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Holem", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Forsyth", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Farhadi, I. Endres, D. Holem, and D. Forsyth. 2009. Describing Objects by Their Attributes. Proceedings of CVPR 2009.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "DeViSE: A Deep Visual-Semantic Embedding Model, Proceedings of NIPS", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "F" |
| ], |
| "last": "Frome", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Shlens", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Ranzato", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. F. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. A. Ranzato, and T. Mikolov. 2014. DeViSE: A Deep Visual-Semantic Embedding Model, Proceed- ings of NIPS 2014.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Indexing by Latent Semantic Analysis", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Deerwester", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Dumais", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "W" |
| ], |
| "last": "Furnas", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [ |
| "K" |
| ], |
| "last": "Landauer", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Harshman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Journal of the American Society for Information Science", |
| "volume": "41", |
| "issue": "6", |
| "pages": "391--407", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Deerwester, S. Dumais, G. W. Furnas, T. K. Landauer and R. Harshman. 1990. Indexing by Latent Semantic Analysis. Journal of the American Society for Infor- mation Science, 41 (6): 391-407.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Generating Natural Language Summaries for Multimedia", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Metze", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Rawat", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Schulam", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Burger", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 7th International Natural Language Generation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "128--130", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Ding, F. Metze, S. Rawat, P. F. Schulam, and S. Burger. 2012. Generating Natural Language Sum- maries for Multimedia. Proceedings of the 7th In- ternational Natural Language Generation Conference, 128-130.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Beyond Audio and Video Retrieval: Towards Multimedia Summarization", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Metze", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Rawat", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Schulam", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Burger", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Younessian", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Bao", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "G" |
| ], |
| "last": "Christel", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Hauptmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceeding of the 2nd ACM International Conference on Multimedia Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Ding, F. Metze, S. Rawat, P. F. Schulam, S. Burger, E. Younessian, L. Bao, M. G. Christel, and A. Haupt- mann. 2012. Beyond Audio and Video Retrieval: To- wards Multimedia Summarization. Proceeding of the 2nd ACM International Conference on Multimedia Re- trieval, Article No.2.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Sparse Additive Generative Models of Text", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Eisenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Ahmed", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [ |
| "P" |
| ], |
| "last": "Xing", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 28th International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Eisenstein, A. Ahmed, and E. P. Xing. 2011. Sparse Additive Generative Models of Text. Proceedings of the 28th International Conference on Machine Learn- ing.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Probabilistic Latent Semantic Analysis", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Hofmann", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "289--296", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Hofmann. 1999. Probabilistic Latent Semantic Anal- ysis. Proceedings of the 15th Conference on Uncer- tainty in Artificial Intelligence, 289-296.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Interpolated Estimation of Markov Source Parameters from Sparse Data", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "Proceedings of the Workshop on Pattern Recognition in Practice", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Jelinek and R. L. Mercer. 1980. Interpolated Estima- tion of Markov Source Parameters from Sparse Data. Proceedings of the Workshop on Pattern Recognition in Practice.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A Study on Verbalization of Human Behaviors in a Room", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Kobayashi", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Noumi", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Hiyama", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 IEEE International Conference on Fuzzy Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "I. Kobayashi, M. Noumi, and A. Hiyama. 2010. A Study on Verbalization of Human Behaviors in a Room. Pro- ceedings of the 2010 IEEE International Conference on Fuzzy Systems.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A Probabilistic Approach to Text Generation of Human Motions Extracted from Kinect Videos", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Kobayashi", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Kobayash", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Asoh", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Guadarrama", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the World Congress on Engineering and Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Kobayashi, I. Kobayash, H. Asoh, and S. Guadarrama. 2013. A Probabilistic Approach to Text Generation of Human Motions Extracted from Kinect Videos. Pro- ceedings of the World Congress on Engineering and Computer Science 2013.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Natural Language Description of Human Activities from Video Images Based on Concept Hierarchy of Actions", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kojima", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Tamura", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Fukunaga", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "International Journal of Computer Vision", |
| "volume": "50", |
| "issue": "2", |
| "pages": "171--184", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Kojima, T. Tamura, and K. Fukunaga. 2002. Nat- ural Language Description of Human Activities from Video Images Based on Concept Hierarchy of Actions. International Journal of Computer Vision, 50 (2):171- 184.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Learning to Detect Unseen Object Classes by Between-Class Attribute Transfer", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "H" |
| ], |
| "last": "Lambert", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Nickisch", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Harmeling", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. H. Lambert, H. Nickisch, and S. Harmeling. 2009. Learning to Detect Unseen Object Classes by Between-Class Attribute Transfer. Proceedings of CVPR 2009.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Learning the Parts of Objects with Nonnegative Matrix Factorization", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "S" |
| ], |
| "last": "Seung", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Nature", |
| "volume": "401", |
| "issue": "", |
| "pages": "788--791", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. D. Lee and H. S. Seung. 1999. Learning the Parts of Objects with Nonnegative Matrix Factorization. Na- ture, 401, 788-791.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Zero-shot Learning with Semantic Output Codes", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Palatucci", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Pomerleau", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [ |
| "M" |
| ], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Palatucci, D. Pomerleau, G. Hinton, and T. M. Mitchell. 2009. Zero-shot Learning with Semantic Output Codes. Proceedings of NIPS 2009.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Grounding Action Descriptions in Videos", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Regneri", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Rohrbach", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Wetzel", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Thater", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Schiele", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Regneri, M. Rohrbach, D. Wetzel, S. Thater, B. Schiele, and M. Pinkal. 2013. Grounding Action De- scriptions in Videos. Proceedings of the 51st Annual Meeting of the Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Translating Video Content to Natural Language Descriptions", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Rohrbach", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Qiu", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Thater", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Schiele", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of ICCV 2013", |
| "volume": "", |
| "issue": "", |
| "pages": "433--440", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Rohrbach, W. Qiu, I. Titov, S. Thater, M. Pinkal, and B. Schiele. 2013. Translating Video Content to Natural Language Descriptions. Proceedings of ICCV 2013, 433-440.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Zero-shot learning through cross-modal transfer", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ganjoo", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Socher, M. Ganjoo, C. D, Manning, and A. Y. Ng. 2013. Zero-shot learning through cross-modal trans- fer. Proceedings of NIPS 2013.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Learning Semantic Combinatoriality from the Interaction between Linguistic and Behavioral Processes", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Sugita", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Tani", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Adaptive Behaviour", |
| "volume": "3", |
| "issue": "1", |
| "pages": "33--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Sugita and J. Tani. 2005. Learning Semantic Com- binatoriality from the Interaction between Linguistic and Behavioral Processes. Adaptive Behaviour, 3 (1): 33-52.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Integrating Whole Body Action Primitives and Natural Language for Humanoid Robots", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Takano", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Nakamura", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of 2008 IEEE-RAS International Conference on Humanoid Robots", |
| "volume": "", |
| "issue": "", |
| "pages": "708--713", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W. Takano and Y. Nakamura. 2008. Integrating Whole Body Action Primitives and Natural Language for Hu- manoid Robots. Proceedings of 2008 IEEE-RAS Inter- national Conference on Humanoid Robots, 708-713.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Incremental Learning of Integrated Semiotics based on Linguistic and Behavioral Symbols", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Takano", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Nakamura", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "1780--1785", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W. Takano and Y. Nakamura. 2009. Incremental Learn- ing of Integrated Semiotics based on Linguistic and Behavioral Symbols. Proceedings of 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1780-1785.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Towards Textually Describing Complex Video Contents with Audio-Visual Concept Classifiers", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "C" |
| ], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Y.-G", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "C.-W", |
| "middle": [], |
| "last": "Ngo", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 19th ACM international conference on Multimedia", |
| "volume": "", |
| "issue": "", |
| "pages": "655--658", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. C. Tan, Y.-G. Jiang and C.-W. Ngo. 2011. To- wards Textually Describing Complex Video Contents with Audio-Visual Concept Classifiers. Proceedings of the 19th ACM international conference on Multime- dia, 655-658.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "A Understanding Images with Natural Sentences", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Ushiku", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Harada", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Kuniyoshi", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 19th Annual ACM International Conference on Multimedia", |
| "volume": "", |
| "issue": "", |
| "pages": "679--682", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Ushiku, T. Harada, and Y. Kuniyoshi. 2011. A Under- standing Images with Natural Sentences. Proceedings of the 19th Annual ACM International Conference on Multimedia, 679-682.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Efficient Image Annotation for Automatic Sentence Generation", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Ushiku", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Harada", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Kuniyoshi", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 20th Annual ACM International Conference on Multimedia", |
| "volume": "", |
| "issue": "", |
| "pages": "549--558", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Ushiku, T. Harada, and Y. Kuniyoshi. 2012. Effi- cient Image Annotation for Automatic Sentence Gen- eration. Proceedings of the 20th Annual ACM Interna- tional Conference on Multimedia, 549-558.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Document Clustering based on Non-negative Matrix Factorization. Proceedings of 26th Annual International ACM SIGIR Conference", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Gong", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "267--273", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W. Xu, X. Liu, and Y. Gong. 2003. Document Clustering based on Non-negative Matrix Factorization. Proceed- ings of 26th Annual International ACM SIGIR Confer- ence, 267-273.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Grounded Language Learning from Video Described with Sentences", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "M" |
| ], |
| "last": "Siskind", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Yu, and J. M. Siskind. 2013. Grounded Language Learning from Video Described with Sentences. Pro- ceedings of the 51st Annual Meeting of the Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "An example of human action (action 11)", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Combinatorial relationship between human actions and components", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "html": null, |
| "text": "Examples of collected sentenses 1 hidari te wo ageru.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF1": { |
| "html": null, |
| "text": "Root mean squared error of the estimated values", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td colspan=\"3\">Action Method 1 Method 2 Training Uniform</td></tr><tr><td>1</td><td>0.00353</td><td>0.00280 0.00387 0.00944</td></tr><tr><td>2</td><td>0.00320</td><td>0.00257 0.00354 0.00907</td></tr><tr><td>3</td><td>0.00338</td><td>0.00287 0.00365 0.00928</td></tr><tr><td>4</td><td>0.00358</td><td>0.00309 0.00389 0.00876</td></tr><tr><td>5</td><td>0.00275</td><td>0.00220 0.00336 0.00885</td></tr><tr><td>6</td><td>0.00322</td><td>0.00217 0.00387 0.00883</td></tr><tr><td>7</td><td>0.00373</td><td>0.00314 0.00404 0.00899</td></tr><tr><td>8</td><td>0.00318</td><td>0.00268 0.00348 0.00865</td></tr><tr><td>9</td><td>0.00353</td><td>0.00302 0.00381 0.00906</td></tr><tr><td>10</td><td>0.00335</td><td>0.00295 0.00365 0.00875</td></tr><tr><td>11</td><td>0.00344</td><td>0.00211 0.00411 0.00863</td></tr><tr><td>12</td><td>0.00330</td><td>0.00231 0.00394 0.00782</td></tr><tr><td>13</td><td>0.00380</td><td>0.00339 0.00419 0.00955</td></tr><tr><td>14</td><td>0.00311</td><td>0.00294 0.00350 0.00897</td></tr><tr><td>15</td><td>0.00339</td><td>0.00301 0.00378 0.00934</td></tr><tr><td>16</td><td>0.00315</td><td>0.00280 0.00359 0.00892</td></tr><tr><td>17</td><td>0.00346</td><td>0.00308 0.00385 0.00891</td></tr><tr><td>18</td><td>0.00297</td><td>0.00301 0.00330 0.00859</td></tr><tr><td>19</td><td>0.00361</td><td>0.00312 0.00398 0.00919</td></tr><tr><td>20</td><td>0.00351</td><td>0.00314 0.00389 0.00848</td></tr><tr><td>Mean</td><td>0.00356</td><td>0.00282 0.00377 0.00890</td></tr></table>" |
| }, |
| "TABREF2": { |
| "html": null, |
| "text": "Comparisons of the top two most probable sentences", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>action</td><td>With language data</td><td>Without language data</td></tr><tr><td>1</td><td>migi te wo ageru.</td><td>migi te wo ageru.</td></tr><tr><td/><td>(raise right hand.)</td><td>(raise right hand.)</td></tr><tr><td/><td>migi te wo ue ni ageru.</td><td>migi te wo ue ni ageru.</td></tr><tr><td/><td>(move right hand upward.)</td><td>(move right hand upward.)</td></tr><tr><td>2</td><td>migi te wo sageru.</td><td>migi te wo sageru.</td></tr><tr><td/><td>(lower right hand.)</td><td>(lower right hand.)</td></tr><tr><td/><td>migi te wo shita ni sageru.</td><td>migi te wo uekara sageru.</td></tr><tr><td/><td>(move right hand downward.)</td><td>(lower right hand from upper position.)</td></tr><tr><td>3</td><td>hidari te wo ageru.</td><td>hidari te wo ue ni ageru.</td></tr><tr><td/><td>(raise left hand.)</td><td>(move left hand upward.)</td></tr><tr><td/><td>hidari te wo ue ni ageru.</td><td>hidari te wo ageru.</td></tr><tr><td/><td>(move left hand upward.)</td><td>(raise left hand.)</td></tr><tr><td>5</td><td>ryou te wo ageru.</td><td>ryou te wo ue ni ageru.</td></tr><tr><td/><td>(raise both hands.)</td><td>(move both hands upward.)</td></tr><tr><td/><td>ryou te wo mae kara ageru.</td><td>ryou te wo ageru.</td></tr><tr><td/><td>(raise both hands in the forward direction.)</td><td>(raise both hands.)</td></tr><tr><td>18</td><td>migi ashi wo orosu.</td><td>migi ashi wo sageru.</td></tr><tr><td/><td>(lower right leg.)</td><td>(lower right leg.)</td></tr><tr><td/><td>migi ashi wo yoko kara orosu.</td><td>migi ashi wo yoko ni sageru.</td></tr><tr><td/><td>(lower right leg from the side direction.)</td><td>(lower right leg in the side direction.)</td></tr><tr><td>20</td><td>hidari ashi wo orosu.</td><td>hidari ashi wo orosu.</td></tr><tr><td/><td>(lower left leg.)</td><td>(lower left leg.)</td></tr><tr><td/><td>hidari ashi wo yoko ni orosu.</td><td>hidari ashi wo yoko ni orosu.</td></tr><tr><td/><td>(lower left leg in the side direction.)</td><td>(lower left leg in the side direction.)</td></tr></table>" |
| } |
| } |
| } |
| } |