aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1707.04677 | 2737272473 | This paper aims at task-oriented action prediction, i.e., predicting a sequence of actions towards accomplishing a specific task under a certain scene, which is a new problem in computer vision research. The main challenges lie in how to model task-specific knowledge and integrate it in the learning procedure. In this work, we propose to train a recurrent long-short term memory (LSTM) network for handling this problem, i.e., taking a scene image (including pre-located objects) and the specified task as input and recurrently predicting action sequences. However, training such a network usually requires large amounts of annotated samples for covering the semantic space (e.g., diverse action decomposition and ordering). To alleviate this issue, we introduce a temporal And-Or graph (AOG) for task description, which hierarchically represents a task into atomic actions. With this AOG representation, we can produce many valid samples (i.e., action sequences according with common sense) by training another auxiliary LSTM network with a small set of annotated samples. And these generated samples (i.e., task-oriented action sequences) effectively facilitate training the model for task-oriented action prediction. In the experiments, we create a new dataset containing diverse daily tasks and extensively evaluate the effectiveness of our approach. | Recurrent sequence prediction. Recently, recurrent neural networks have been widely used in various sequence prediction tasks, including natural language generation @cite_15 , machine translation @cite_12 , and image captioning @cite_1 . These works adopt a similar encoder-decoder architecture for sequence prediction. @cite_12 mapped free-form source-language sentences into the target language by utilizing an encoder-decoder recurrent network. 
@cite_1 applied a similar pipeline to image captioning, utilizing a CNN as the encoder to extract image features and an LSTM network as the decoder to generate the descriptive sentence. | {
"cite_N": [
"@cite_15",
"@cite_1",
"@cite_12"
],
"mid": [
"",
"1895577753",
"2950635152"
],
"abstract": [
"",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases."
]
} |
1707.04402 | 2963485523 | Much of the success of single agent deep reinforcement learning (DRL) in recent years can be attributed to the use of experience replay memories (ERM), which allow Deep Q-Networks (DQNs) to be trained efficiently through sampling stored state transitions. However, care is required when using ERMs for multi-agent deep reinforcement learning (MA-DRL), as stored transitions can become outdated when agents update their policies in parallel . In this work we apply leniency to MA-DRL. Lenient agents map state-action pairs to decaying temperature values that control the amount of leniency applied towards negative policy updates that are sampled from the ERM. This introduces optimism in the value-function update, and has been shown to facilitate cooperation in tabular fully-cooperative multi-agent reinforcement learning problems. We evaluate our Lenient-DQN (LDQN) empirically against the related Hysteretic-DQN (HDQN) algorithm as well as a modified version we call scheduled -HDQN, that uses average reward learning near terminal states. Evaluations take place in extended variations of the Coordinated Multi-Agent Object Transportation Problem (CMOTP) . We find that LDQN agents are more likely to converge to the optimal policy in a stochastic reward CMOTP compared to standard and scheduled-HDQN agents. | Lenient learners present an alternative to the hysteretic approach, and have empirically been shown to converge towards superior policies in stochastic games with a small state space @cite_30 . Similar to the hysteretic approach, lenient agents initially adopt an optimistic disposition, before gradually transforming into average reward learners @cite_30 . 
Lenient methods have received criticism in the past for the time they require to converge @cite_30 , the difficulty involved in selecting the correct hyperparameters, the additional overhead required for storing the temperature values, and the fact that they were originally only proposed for matrix games @cite_36 . However, given their success in tabular settings, we investigate here whether leniency can be applied successfully to MA-DRL. | {
"cite_N": [
"@cite_30",
"@cite_36"
],
"mid": [
"2466211196",
"2096145798"
],
"abstract": [
"We introduce the Lenient Multiagent Reinforcement Learning 2 (LMRL2) algorithm for independent-learner stochastic cooperative games. LMRL2 is designed to overcome a pathology called relative overgeneralization, and to do so while still performing well in games with stochastic transitions, stochastic rewards, and miscoordination. We discuss the existing literature, then compare LMRL2 against other algorithms drawn from the literature which can be used for games of this kind: traditional (\"Distributed\") Q-learning, Hysteretic Q-learning, WoLF-PHC, SOoN, and (for repeated games only) FMQ. The results show that LMRL2 is very effective in both of our measures (complete and correct policies), and is found in the top rank more often than any other technique. LMRL2 is also easy to tune: though it has many available parameters, almost all of them stay at default settings. Generally the algorithm is optimally tuned with a single parameter, if any. We then examine and discuss a number of side-issues and options for LMRL2.",
"In the framework of fully cooperative multi-agent systems, independent (non-communicative) agents that learn by reinforcement must overcome several difficulties to manage to coordinate. This paper identifies several challenges responsible for the non-coordination of independent agents: Pareto-selection, non-stationarity, stochasticity, alter-exploration and shadowed equilibria. A selection of multi-agent domains is classified according to those challenges: matrix games, Boutilier's coordination game, predators pursuit domains and a special multi-state game. Moreover, the performance of a range of algorithms for independent reinforcement learners is evaluated empirically. Those algorithms are Q-learning variants: decentralized Q-learning, distributed Q-learning, hysteretic Q-learning, recursive frequency maximum Q-value and win-or-learn fast policy hill climbing. An overview of the learning algorithms' strengths and weaknesses against each challenge concludes the paper and can serve as a basis for choosing the appropriate algorithm for a new domain. Furthermore, the distilled challenges may assist in the design of new learning algorithms that overcome these problems and achieve higher performance in multi-agent applications."
]
} |
1707.04653 | 2736116184 | Emoji have grown to become one of the most important forms of communication on the web. With its widespread use, measuring the similarity of emoji has become an important problem for contemporary text processing since it lies at the heart of sentiment analysis, search, and interface design tasks. This paper presents a comprehensive analysis of the semantic similarity of emoji through embedding models that are learned over machine-readable emoji meanings in the EmojiNet knowledge base. Using emoji descriptions, emoji sense labels and emoji sense definitions, and with different training corpora obtained from Twitter and Google News, we develop and test multiple embedding models to measure emoji similarity. To evaluate our work, we create a new dataset called EmoSim508, which assigns human-annotated semantic similarity scores to a set of 508 carefully selected emoji pairs. After validation with EmoSim508, we present a real-world use-case of our emoji embedding models using a sentiment analysis task and show that our models outperform the previous best-performing emoji embedding model on this task. The EmoSim508 dataset and our emoji embedding models are publicly released with this paper and can be downloaded from http://emojinet.knoesis.org . | While emoji were introduced in the late 1990s, their use and popularity were limited until the Unicode Consortium started to standardize emoji symbols in 2009 @cite_15 . Major mobile phone manufacturers such as Apple, Google, Microsoft, and Samsung then began supporting emoji in their device operating systems between 2011 and 2013, which boosted emoji adoption around the world @cite_27 . Early research on emoji was focused on understanding the role of emoji in computer-mediated communication. Kelly et al. studied how people in close relationships use emoji in their communications and reported that they use emoji as a way of making their conversations playful @cite_34 . 
Pavalanathan et al. studied how Twitter users adopt emoji and reported that Twitter users prefer emoji over emoticons @cite_10 . Researchers have also studied how emoji usage and interpretation differ across mobile and computer platforms @cite_35 @cite_9 @cite_23 , geographies @cite_30 , and languages @cite_19 , while many others have used emoji as features in their learning algorithms for problems such as emoji-based search @cite_22 , sentiment analysis @cite_29 , emotion analysis @cite_13 , and Twitter profile classification @cite_25 @cite_5 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_13",
"@cite_22",
"@cite_9",
"@cite_29",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_5",
"@cite_15",
"@cite_34",
"@cite_10",
"@cite_25"
],
"mid": [
"2513139648",
"2574189006",
"1971222444",
"1971709890",
"2510851569",
"2122522916",
"2526960150",
"",
"2510632587",
"2475401701",
"2535739320",
"1192513330",
"2553192308",
"2952769242"
],
"abstract": [
"Emojis are a quickly spreading and rather unknown communication phenomenon which occasionally receives attention in the mainstream press, but lacks the scientific exploration it deserves. This paper is a first attempt at investigating the global distribution of emojis. We perform our analysis of the spatial distribution of emojis on a dataset of ∼17 million (and growing) geo-encoded tweets containing emojis by running a cluster analysis over countries represented as emoji distributions and performing correlation analysis of emoji distributions and World Development Indicators. We show that emoji usage tends to draw quite a realistic picture of the living conditions in various parts of our world.",
"Emoji are commonly used in modern text communication. However, as graphics with nuanced details, emoji may be open to interpretation. Emoji also render differently on different viewing platforms (e.g., Apple’s iPhone vs. Google’s Nexus phone), potentially leading to communication errors. We explore whether emoji renderings or differences across platforms give rise to diverse interpretations of emoji. Through an online survey, we solicit people’s interpretations of a sample of the most popular emoji characters, each rendered for multiple platforms. Both in terms of sentiment and semantics, we analyze the variance in interpretation of the emoji, quantifying which emoji are most (and least) likely to be misinterpreted. In cases in which participants rated the same emoji rendering, they disagreed on whether the sentiment was positive, neutral, or negative 25% of the time. When considering renderings across platforms, these disagreements only increase. Overall, we find significant potential for miscommunication, both for individual emoji renderings and for different emoji renderings across platforms.",
"User generated content on Twitter (produced at an enormous rate of 340 million tweets per day) provides a rich source for gleaning people's emotions, which is necessary for deeper understanding of people's behaviors and actions. Extant studies on emotion identification lack comprehensive coverage of \"emotional situations\" because they use relatively small training datasets. To overcome this bottleneck, we have automatically created a large emotion-labeled dataset (of about 2.5 million tweets) by harnessing emotion-related hash tags available in the tweets. We have applied two different machine learning algorithms for emotion identification, to study the effectiveness of various feature combinations as well as the effect of the size of the training data on the emotion identification task. Our experiments demonstrate that a combination of unigrams, bigrams, sentiment/emotion-bearing words, and parts-of-speech information is most effective for gleaning emotions. The highest accuracy (65.57%) is achieved with a training data containing about 2 million tweets.",
"This technical demo presents Emoji2Video, a query-by-emoji interface for exploring video collections. Ideogram-based video search and representation presents an opportunity for an intuitive, visual interface and concise non-textual summary of video contents, in a form factor that is ideal for small screens. The demo allows users to build search strings comprised of ideograms which are used to query a large dataset of YouTube videos. The system returns a list of the top-ranking videos for the user query along with an emoji summary of the video contents so that users may make an informed decision whether to view a video or refine their search terms. The ranking of the videos is done in a zero-shot, multi-modal manner that employs an embedding space to exploit semantic relationships between user-selected ideograms and the video's visual and textual content.",
"Emoji provide a way to express nonverbal conversational cues in computer-mediated communication. However, people need to share the same understanding of what each emoji symbolises, otherwise communication can breakdown. We surveyed 436 people about their use of emoji and ran an interactive study using a two-dimensional emotion space to investigate (1) the variation in people's interpretation of emoji and (2) their interpretation of corresponding Android and iOS emoji. Our results show variations between people's ratings within and across platforms. We outline our solution to reduce misunderstandings that arise from different interpretations of emoji.",
"There is a new generation of emoticons, called emojis, that is increasingly being used in mobile communications and social media. In the past two years, over ten billion emojis were used on Twitter. Emojis are Unicode graphic symbols, used as a shorthand to express concepts and ideas. In contrast to the small number of well-known emoticons that carry clear emotional contents, there are hundreds of emojis. But what are their emotional contents? We provide the first emoji sentiment lexicon, called the Emoji Sentiment Ranking, and draw a sentiment map of the 751 most frequently used emojis. The sentiment of the emojis is computed from the sentiment of the tweets in which they occur. We engaged 83 human annotators to label over 1.6 million tweets in 13 European languages by the sentiment polarity (negative, neutral, or positive). About 4% of the annotated tweets contain emojis. The sentiment analysis of the emojis allows us to draw several interesting conclusions. It turns out that most of the emojis are positive, especially the most popular ones. The sentiment distribution of the tweets with and without emojis is significantly different. The inter-annotator agreement on the tweets with emojis is higher. Emojis tend to occur at the end of the tweets, and their sentiment polarity increases with the distance. We observe no significant differences in the emoji rankings between the 13 languages and the Emoji Sentiment Ranking. Consequently, we propose our Emoji Sentiment Ranking as a European language-independent resource for automated sentiment analysis. Finally, the paper provides a formalization of sentiment and a novel visualization in the form of a sentiment bar.",
"Choosing the right emoji to visually complement or condense the meaning of a message has become part of our daily life. Emojis are pictures, which are naturally combined with plain text, thus creating a new form of language. These pictures are the same independently of where we live, but they can be interpreted and used in different ways. In this paper we compare the meaning and the usage of emojis across different languages. Our results suggest that the overall semantics of the subset of the emojis we studied is preserved across all the languages we analysed. However, some emojis are interpreted in a different way from language to language, and this could be related to socio-geographical differences.",
"",
"Emojis are an extremely common occurrence in mobile communications, but their meaning is open to interpretation. We investigate motivations for their usage in mobile messaging in the US. This study asked 228 participants for the last time that they used one or more emojis in a conversational message, and collected that message, along with a description of the emojis' intended meaning and function. We discuss functional distinctions between: adding additional emotional or situational meaning, adjusting tone, making a message more engaging to the recipient, conversation management, and relationship maintenance. We discuss lexical placement within messages, as well as social practices. We show that the social and linguistic function of emojis are complex and varied, and that supporting emojis can facilitate important conversational functions.",
"Gang affiliates have joined the masses who use social media to share thoughts and actions publicly. Interestingly, they use this public medium to express recent illegal actions, to intimidate others, and to share outrageous images and statements. Agencies able to unearth these profiles may thus be able to anticipate, stop, or hasten the investigation of gang-related crimes. This paper investigates the use of word embeddings to help identify gang members on Twitter. Building on our previous work, we generate word embeddings that translate what Twitter users post in their profile descriptions, tweets, profile images, and linked YouTube content to a real vector format amenable for machine learning classification. Our experimental results show that pre-trained word embeddings can boost the accuracy of supervised learning algorithms trained over gang members social media posts.",
"Emoji are a contemporary and extremely popular way to enhance electronic communication. Without rigid semantics attached to them, emoji symbols take on different meanings based on the context of a message. Thus, like the word sense disambiguation task in natural language processing, machines also need to disambiguate the meaning or ‘sense’ of an emoji. In a first step toward achieving this goal, this paper presents EmojiNet, the first machine readable sense inventory for emoji. EmojiNet is a resource enabling systems to link emoji with their context-specific meaning. It is automatically constructed by integrating multiple emoji resources with BabelNet, which is the most comprehensive multilingual sense inventory available to date. The paper discusses its construction, evaluates the automatic resource creation process, and presents a use case where EmojiNet disambiguates emoji usage in tweets. EmojiNet is available online for use at http://emojinet.knoesis.org.",
"Emoji are two-dimensional pictographs that were originally designed to convey emotion between participants in text-based conversation. This paper draws on interview data to identify ways in which emoji have been appropriated in pursuit of relationally meaningful behaviours in contemporary messaging applications. We suggest that the presence of appropriable tools like emoji might influence the selection of a communication channel for particular types of mediated conversation.",
"Many non-standard elements of ‘netspeak’ writing can be viewed as efforts to replicate the linguistic role played by nonverbal modalities in speech, conveying contextual information such as affect and interpersonal stance. Recently, a new non-standard communicative tool has emerged in online writing: emojis. These unicode characters contain a standardized set of pictographs, some of which are visually similar to well-known emoticons. Do emojis play the same linguistic role as emoticons and other ASCII-based writing innovations? If so, might the introduction of emojis eventually displace the earlier, user-created forms of contextual expression? Using a matching approach to causal statistical inference, we show that as social media users adopt emojis, they dramatically reduce their use of emoticons, suggesting that these linguistic resources compete for the same communicative function. Furthermore, we demonstrate that the adoption of emojis leads to a corresponding increase in the use of standard spellings, suggesting that all forms of non-standard writing are losing out in a competition with emojis. Finally, we identify specific textual features that make some emoticons especially likely to be replaced by emojis.",
"Most street gang members use Twitter to intimidate others, to present outrageous images and statements to the world, and to share recent illegal activities. Their tweets may thus be useful to law enforcement agencies to discover clues about recent crimes or to anticipate ones that may occur. Finding these posts, however, requires a method to discover gang member Twitter profiles. This is a challenging task since gang members represent a very small population of the 320 million Twitter users. This paper studies the problem of automatically finding gang members on Twitter. It outlines a process to curate one of the largest sets of verifiable gang member profiles that have ever been studied. A review of these profiles establishes differences in the language, images, YouTube links, and emojis gang members use compared to the rest of the Twitter population. Features from this review are used to train a series of supervised classifiers. Our classifier achieves a promising F1 score with a low false positive rate."
]
} |
1707.04629 | 2739168675 | Simultaneously achieving low trajectory errors and compliant control without explicit models of the task was effectively addressed with Compliant Movement Primitives (CMP). For a single-robot task, this means that it is accurately following its trajectory, but also exhibits compliant behavior in case of perturbations. In this paper we extend this kind of behavior without explicit models to bimanual tasks. In the presence of an external perturbation on any of the robots, they will both move in synchrony in order to maintain their relative posture, and thus not exert force on the object they are carrying. Thus, they will act compliantly in their absolute task, but remain stiff in their relative task. To achieve compliant absolute behavior and stiff relative behavior, we combine joint-space CMPs with the well known symmetric control approach. To reduce the necessary feedback reaction of symmetric control, we further augment it with copying of a virtual force vector at the end-effector, calculated through the measured external joint torques. Real-world results on two Kuka LWR-4 robots in a bimanual setting confirm the applicability of the approach. | typically relies on explicit dynamics of the robot and the task @cite_17 . However, besides the above-mentioned CMPs, which are at the core of this paper, similar approaches that rely on task-specific models have emerged. For example, in @cite_2 tactile sensors were used to determine the force of contact with the environment on the iCub robot. This information was used to calculate the joint torques from the measured arm pose, and to use them in a feed-forward manner in control. Similarly, joint torques along the kinematic trajectory were encoded as DMPs and used as the feed-forward signal to increase the accuracy in the next execution of the in-contact task in @cite_5 . | {
"cite_N": [
"@cite_5",
"@cite_2",
"@cite_17"
],
"mid": [
"2197436471",
"2203369542",
""
],
"abstract": [
"This paper demonstrates a method for simultaneous transfer of positional and force requirements for in-contact tasks from a human instructor to a robotic arm through kinesthetic teaching. This is achieved by a specific use of the sensory configuration, where a force torque sensor is mounted between the tool and the flange of a robotic arm endowed with integrated torque sensors at each joint. The human demonstration is modeled using Dynamic Movement Primitives. Following human demonstration, the robot arm is provided with the capacity to perform sequential in-contact tasks, for example writing on a notepad a previously demonstrated sequence of characters. During the reenactment of the task, the system is not only able to imitate and generalize from demonstrated trajectories, but also from their associated force profiles. In fact, the implemented framework is extended to successfully recover from perturbations of the trajectory during reenactment and to cope with dynamic environments.",
"Whole-body control in unknown environments is challenging: Unforeseen contacts with obstacles can lead to poor tracking performance and potential physical damages of the robot. Hence, a whole-body control approach for future humanoid robots in (partially) unknown environments needs to take contact sensing into account, e.g., by means of artificial skin. However, translating contacts from skin measurements into physically well-understood quantities can be problematic as the exact position and strength of the contact needs to be converted into torques. In this paper, we suggest an alternative approach that directly learns the mapping from both skin and the joint state to torques. We propose to learn such an inverse dynamics models with contacts using a mixture-of-contacts approach that exploits the linear superimposition of contact forces. The learned model can, making use of uncalibrated tactile sensors, accurately predict the torques needed to compensate for the contact. As a result, tracking of trajectories with obstacles and tactile contact can be executed more accurately. We demonstrate on the humanoid robot iCub that our approach improve the tracking error in presence of dynamic contacts.",
""
]
} |
1707.04629 | 2739168675 | Simultaneously achieving low trajectory errors and compliant control without explicit models of the task was effectively addressed with Compliant Movement Primitives (CMP). For a single-robot task, this means that it is accurately following its trajectory, but also exhibits compliant behavior in case of perturbations. In this paper we extend this kind of behavior without explicit models to bimanual tasks. In the presence of an external perturbation on any of the robots, they will both move in synchrony in order to maintain their relative posture, and thus not exert force on the object they are carrying. Thus, they will act compliantly in their absolute task, but remain stiff in their relative task. To achieve compliant absolute behavior and stiff relative behavior, we combine joint-space CMPs with the well known symmetric control approach. To reduce the necessary feedback reaction of symmetric control, we further augment it with copying of a virtual force vector at the end-effector, calculated through the measured external joint torques. Real-world results on two Kuka LWR-4 robots in a bimanual setting confirm the applicability of the approach. | of robots can be either asymmetric or symmetric. While the former controls each robot independently, the latter considers both robots as a single system. An example of an asymmetric control scheme using motion primitives is described in @cite_16 . There, the robots are coupled through feed-forward signals learned in a few iterations from force feedback. However, the robots are stiff. A system for bimanual robot operation based on dynamical systems was presented in @cite_6 . In that paper, the motion of the robotic arms is adapted for coordinated bimanual receiving/intercepting of an object. The system also relies on a virtual object to generate the necessary motions. | {
"cite_N": [
"@cite_16",
"@cite_6"
],
"mid": [
"2078763164",
"2464089775"
],
"abstract": [
"The framework of dynamic movement primitives (DMPs) contains many favorable properties for the execution of robotic trajectories, such as indirect dependence on time, response to perturbations, and the ability to easily modulate the given trajectories, but the framework in its original form remains constrained to the kinematic aspect of the movement. In this paper, we bridge the gap to dynamic behavior by extending the framework with force torque feedback. We propose and evaluate a modulation approach that allows interaction with objects and the environment. Through the proposed coupling of originally independent robotic trajectories, the approach also enables the execution of bimanual and tightly coupled cooperative tasks. We apply an iterative learning control algorithm to learn a coupling term, which is applied to the original trajectory in a feed-forward fashion and, thus, modifies the trajectory in accordance to the desired positions or external forces. A stability analysis and results of simulated and real-world experiments using two KUKA LWR arms for bimanual tasks and interaction with the environment are presented. By expanding on the framework of DMPs, we keep all the favorable properties, which is demonstrated with temporal modulation and in a two-agent obstacle avoidance task.",
"Coordinated control strategies for multi-robot systems are necessary for tasks that cannot be executed by a single robot. This encompasses tasks where the workspace of the robot is too small or where the load is too heavy for one robot to handle. Using multiple robots makes the task feasible by extending the workspace and or increase the payload of the overall robotic system. In this paper, we consider two instances of such task: a co-worker scenario in which a human hands over a large object to a robot; intercepting a large flying object. The problem is made difficult as the pick-up intercept motions must take place while the object is in motion and because the object's motion is not deterministic. The challenge is then to adapt the motion of the robotic arms in coordination with one another and with the object. Determining the pick-up intercept point is done by taking into account the workspace of the multi-arm system and is continuously recomputed to adapt to change in the object's trajectory. We propose a dynamical systems (DS) based control law to generate autonomous and synchronized motions for a multi-arm robot system in the task of reaching for a moving object. We show theoretically that the resulting DS coordinates the motion of the robots with each other and with the object, while the system remains stable. We validate our approach on a dual-arm robotic system and demonstrate that it can re-synchronize and adapt the motion of each arm in synchrony in a fraction of seconds, even when the motion of the object is fast and not accurately predictable."
]
} |
1707.04512 | 2735480778 | Achieving security against adversaries with unlimited computational power is of great interest in a communication scenario. Since polar codes are capacity achieving codes with low encoding-decoding complexity and they can approach perfect secrecy rates for binary-input degraded wiretap channels in symmetric settings, they are investigated extensively in the literature recently. In this paper, a polar coding scheme to achieve secrecy capacity in non-symmetric binary input channels is proposed. The proposed scheme satisfies security and reliability conditions. The wiretap channel is assumed to be stochastically degraded with respect to the legitimate channel and message distribution is uniform. The information set is sent over channels that are good for Bob and bad for Eve. Random bits are sent over channels that are good for both Bob and Eve. A frozen vector is chosen randomly and is sent over channels bad for both. We prove that there exists a frozen vector for which the coding scheme satisfies reliability and security conditions and approaches the secrecy capacity. We further empirically show that in the proposed scheme for non-symmetric binary-input discrete memoryless channels, the equivocation rate achieves its upper bound in the whole capacity-equivocation region. | The works by Hof and Shamai @cite_16 and Mahdavifar and Vardy @cite_24 assume binary-input channels in the symmetric setting. In this work, we assume non-symmetric binary-input channels. The work in @cite_24 considers only achieving the secrecy capacity, whereas we prove that the proposed scheme for the non-symmetric setting achieves the whole capacity-equivocation region. In @cite_24 there is no assumption on the distribution of the message for proving the security condition, which is a fair assumption on the message M; we, however, consider the uniform distribution, since it is a necessary condition for approaching the secrecy capacity. In @cite_9 the non-binary setting is investigated.
However, no experimental results are presented. In this work, we also present simulation results for the equivocation at Eve (using a randomly chosen frozen vector) over BECs, which measures the secrecy. | {
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_16"
],
"mid": [
"2211831180",
"2581074253",
"2016057463"
],
"abstract": [
"Suppose that Alice wishes to send messages to Bob through a communication channel C1, but her transmissions also reach an eavesdropper Eve through another channel C2. This is the wiretap channel model introduced by Wyner in 1975. The goal is to design a coding scheme that makes it possible for Alice to communicate both reliably and securely. Reliability is measured in terms of Bob's probability of error in recovering the message, while security is measured in terms of the mutual information between the message and Eve's observations. Wyner showed that the situation is characterized by a single constant Cs, called the secrecy capacity, which has the following meaning: for all e >; 0, there exist coding schemes of rate R ≥ Cs-e that asymptotically achieve the reliability and security objectives. However, his proof of this result is based upon a random-coding argument. To date, despite consider able research effort, the only case where we know how to construct coding schemes that achieve secrecy capacity is when Eve's channel C2 is an erasure channel, or a combinatorial variation thereof. Polar codes were recently invented by Arikan; they approach the capacity of symmetric binary-input discrete memoryless channels with low encoding and decoding complexity. In this paper, we use polar codes to construct a coding scheme that achieves the secrecy capacity for a wide range of wiretap channels. Our construction works for any instantiation of the wiretap channel model, as long as both C1 and C2 are symmetric and binary-input, and C2 is degraded with respect to C1. Moreover, we show how to modify our construction in order to provide strong security, in the sense defined by Maurer, while still operating at a rate that approaches the secrecy capacity. In this case, we cannot guarantee that the reliability condition will also be satisfied unless the main channel C1 is noiseless, although we believe it can be always satisfied in practice.",
"Achieving information-theoretic security using explicit coding scheme in which unlimited computational power for eavesdropper is assumed, is one of the main topics is security consideration. It is shown that polar codes are capacity achieving codes and have a low complexity in encoding and decoding. It has been proven that polar codes reach to secrecy capacity in the binary-input wiretap channels in symmetric settings for which the wiretapper's channel is degraded with respect to the main channel. The first task of this paper is to propose a coding scheme to achieve secrecy capacity in asymmetric nonbinary-input channels while keeping reliability and security conditions satisfied. Our assumption is that the wiretap channel is stochastically degraded with respect to the main channel and message distribution is unspecified. The main idea is to send information set over good channels for Bob and bad channels for Eve and send random symbols for channels that are good for both. In this scheme the frozen vector is defined over all possible choices using polar codes ensemble concept. We proved that there exists a frozen vector for which the coding scheme satisfies reliability and security conditions. It is further shown that uniform distribution of the message is the necessary condition for achieving secrecy capacity.",
"A polar coding scheme is suggested for the binary-input memoryless symmetric and degraded wire-tap channel. The provided scheme achieves the entire rate-equivocation region for the considered model."
]
} |
1707.04108 | 2735665712 | We study in this work the importance of depth in convolutional models for text classification, either when character or word inputs are considered. We show on 5 standard text classification and sentiment analysis tasks that deep models indeed give better performances than shallow networks when the text input is represented as a sequence of characters. However, a simple shallow-and-wide network outperforms deep models such as DenseNet with word inputs. Our shallow word model further establishes new state-of-the-art performances on two datasets: Yelp Binary (95.9%) and Yelp Full (64.9%). | Convolutional neural networks with end-to-end training were first used in NLP in @cite_19 @cite_35 . The authors introduced a new max-pooling operation, which is shown to be effective for text, as an alternative to the conventional max-pooling of the original LeNet architecture @cite_47 . Moreover, they proposed to transfer task-specific information by co-training multiple deep models on many tasks. Inspired by this seminal work, @cite_15 proposed a simpler architecture with slight modifications of @cite_19 , consisting of fine-tuned or fixed pretrained word2vec embeddings @cite_33 and their combination as multi-channel. The author showed that this simple model can already achieve state-of-the-art performances on many small datasets. @cite_0 proposed a dynamic k-max pooling to handle variable-length input sentences. This dynamic k-max pooling is a generalisation of the max pooling operator where k can be dynamically set as a part of the network. | {
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_15",
"@cite_0",
"@cite_19",
"@cite_47"
],
"mid": [
"2158899491",
"2950133940",
"",
"2120615054",
"2117130368",
"2310919327"
],
"abstract": [
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"",
"The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25 error reduction in the last task with respect to the strongest baseline.",
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.",
""
]
} |
1707.04108 | 2735665712 | We study in this work the importance of depth in convolutional models for text classification, either when character or word inputs are considered. We show on 5 standard text classification and sentiment analysis tasks that deep models indeed give better performances than shallow networks when the text input is represented as a sequence of characters. However, a simple shallow-and-wide network outperforms deep models such as DenseNet with word inputs. Our shallow word model further establishes new state-of-the-art performances on two datasets: Yelp Binary (95.9%) and Yelp Full (64.9%). | Besides convolutional networks, @cite_9 introduced a character-aware neural language model by combining a CNN on character embeddings with a highway LSTM on subsequent layers. @cite_32 also explored a multiplicative LSTM (mLSTM) on character embeddings and found that a basic logistic regression learned on this representation can achieve state-of-the-art results on the Sentiment Tree Bank dataset @cite_36 with only a few hundred labeled examples. | {
"cite_N": [
"@cite_36",
"@cite_9",
"@cite_32"
],
"mid": [
"2251939518",
"1938755728",
"2606347107"
],
"abstract": [
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.",
"We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60 fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.",
"We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment."
]
} |
1707.04281 | 2735320691 | Dimensionality reduction is a common method for analyzing and visualizing high-dimensional data across domains. Dimensionality-reduction algorithms involve complex optimizations and the reduced dimensions computed by these algorithms generally lack clear relation to the initial data dimensions. Therefore, interpreting and reasoning about dimensionality reductions can be difficult. In this work, we introduce two interaction techniques, and , for reasoning dynamically about scatter plots of dimensionally reduced data. We also contribute two related visualization techniques, and to facilitate and enrich the effective use of the proposed interactions, which we integrate in a new tool called . To evaluate our techniques, we first analyze their time and accuracy performance across varying sample and dimension sizes. We then conduct a user study in which twelve data scientists use so as to assess the usefulness of the techniques in performing exploratory data analysis tasks. Results suggest that our visual interactions are intuitive and effective for exploring dimensionality reductions and generating hypotheses about the underlying data. | Direct manipulation has a long history in human-computer interaction @cite_8 @cite_24 @cite_36 and visualization research (e.g. @cite_46 ). Direct manipulation techniques aim to improve user engagement by minimizing the distance between the interaction source and the target object @cite_32 . | {
"cite_N": [
"@cite_8",
"@cite_36",
"@cite_32",
"@cite_24",
"@cite_46"
],
"mid": [
"2132233302",
"2078404830",
"2115647291",
"2034967522",
"2099305423"
],
"abstract": [
"The Sketchpad system makes it possible for a man and a computer to converse rapidly through the medium of line drawings. Heretofore, most interaction between man and computers has been slowed down by the need to reduce all communication to written statements that can be typed; in the past, we have been writing letters to rather than conferring with our computers. For many types of communication, such as describing the shape of a mechanical part or the connections of an electrical circuit, typed statements can prove cumbersome. The Sketchpad system, by eliminating typed statements (except for legends) in favor of line drawings, opens up a new area of man-machine communication.",
"The programming language aspects of a graphic simulation laboratory named ThingLab are presented. The design and implementation of ThingLab are extensions to Smalltalk. In ThingLab, constraints are used to specify the relations that must hold among the parts of the simulation. The system is object-oriented and employs inheritance and part-whole hierarchies to describe the structure of a simulation. An interactive, graphic user interface is provided that allows the user to view and edit a simulation.",
"Direct manipulation has been lauded as a good form of interface design, and some interfaces that have this property have been well received by users. In this article we seek a cognitive account of both the advantages and disadvantages of direct manipulation interfaces. We identify two underlying phenomena that give rise to the feeling of directness. One deals with the information processing distance between the user's intentions and the facilities provided by the machine. Reduction of this distance makes the interface feel direct by reducing the effort required of the user to accomplish goals. The second phenomenon concerns the relation between the input and output vocabularies of the interface language. In particular, direct manipulation requires that the system provide representations of objects that behave as if they are the objects themselves. This provides the feeling of directness of manipulation.",
"The Learning Research Group at Xerox Palo Alto Research Center is concerned with all aspects of the communication and manipulation of knowledge. We design, build, and use dynamic media which can be used by human beings of all ages. Several years ago, we crystallized our dreams into a design idea for a personal dynamic medium the size of a notebook (the Dynabook) which could be owned by everyone and could have the power to handle virtually all of its owner's information-related needs. Towards this goal we have designed and built a communications system: the Smalltalk language, implemented on small computers we refer to as \"interim Dynabooks.\" We are exploring the use of this system as a programming and problem solving tool; as an interactive memory for the storage and manipulation of data; as a text editor; and as a medium for expression through drawing, painting, animating pictures, and composing and generating music. (Figure 1 is a view of this interim Dynabook.)",
""
]
} |
1707.04281 | 2735320691 | Dimensionality reduction is a common method for analyzing and visualizing high-dimensional data across domains. Dimensionality-reduction algorithms involve complex optimizations and the reduced dimensions computed by these algorithms generally lack clear relation to the initial data dimensions. Therefore, interpreting and reasoning about dimensionality reductions can be difficult. In this work, we introduce two interaction techniques, and , for reasoning dynamically about scatter plots of dimensionally reduced data. We also contribute two related visualization techniques, and to facilitate and enrich the effective use of the proposed interactions, which we integrate in a new tool called . To evaluate our techniques, we first analyze their time and accuracy performance across varying sample and dimension sizes. We then conduct a user study in which twelve data scientists use so as to assess the usefulness of the techniques in performing exploratory data analysis tasks. Results suggest that our visual interactions are intuitive and effective for exploring dimensionality reductions and generating hypotheses about the underlying data. | Our work is closely related to earlier approaches using direct manipulation to modify data in DR visualizations @cite_18 @cite_43 @cite_19 @cite_2 @cite_47 . Like our and unconstrained techniques, iPCA enables interactive forward and backward projections for PCA-based DRs. However, iPCA recomputes full PCA for each forward and backward projection, and these can suffer from jitter and scalability issues, as noted in @cite_18 . Using out-of-sample extrapolation, avoids re-running dimensionality-reduction algorithms. From the visualization point of view, this is not just a computational convenience, but also has perceptual and cognitive advantages, such as preserving the constancy of scatter-plot representations. 
For example, re-running (training) a dimensionality reduction algorithm with a new data sample added can significantly alter a two-dimensional scatter plot of the dimensionally reduced data, even though all the original inter-data point similarities may remain unchanged. In contrast to iPCA, we also enable users to interactively define constraints on feature values and perform constrained . We refer readers to a recent survey @cite_1 for a detailed discussion of prior research on visual interaction with dimensionality reduction. | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_19",
"@cite_43",
"@cite_2",
"@cite_47"
],
"mid": [
"2025394193",
"2517066960",
"1990086701",
"",
"2079647943",
"2041420722"
],
"abstract": [
"Principle Component Analysis (PCA) is a widely used mathematical technique in many fields for factor and trend analysis, dimension reduction, etc. However, it is often considered to be a \"black box\" operation whose results are difficult to interpret and sometimes counter-intuitive to the user. In order to assist the user in better understanding and utilizing PCA, we have developed a system that visualizes the results of principal component analysis using multiple coordinated views and a rich set of user interactions. Our design philosophy is to support analysis of multivariate datasets through extensive interaction with the PCA output. To demonstrate the usefulness of our system, we performed a comparative user study with a known commercial system, SAS INSIGHT's Interactive Data Exploration. Participants in our study solved a number of high-level analysis tasks with each interface and rated the systems on ease of learning and usefulness. Based on the participants' accuracy, speed, and qualitative feedback, we observe that our system helps users to better understand relationships between the data and the calculated eigenspace, which allows the participants to more accurately analyze the data. User feedback suggests that the interactivity and transparency of our system are the key strengths of our approach.",
"Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally, interactively, and on-the-fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR. Summarizing the results in a “human in the loop” process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities.",
"Visual-interactive cluster analysis provides valuable tools for effectively analyzing large and complex data sets. Due to desirable properties and an inherent predisposition for visualization, the Kohonen Feature Map (or self-organizing map, or SOM) algorithm is among the most popular and widely used visual clustering techniques. However, the unsupervised nature of the algorithm may be disadvantageous in certain applications. Depending on initialization and data characteristics, cluster maps (cluster layouts) may emerge that do not comply with user preferences, expectations, or the application context. Considering SOM-based analysis of trajectory data, we propose a comprehensive visual-interactive monitoring and control framework extending the basic SOM algorithm. The framework implements the general Visual Analytics idea to effectively combine automatic data analysis with human expert supervision. It provides simple, yet effective facilities for visually monitoring and interactively controlling the trajectory clustering process at arbitrary levels of detail. The approach allows the user to leverage existing domain knowledge and user preferences, arriving at improved cluster maps. We apply the framework on a trajectory clustering problem, demonstrating its potential in combining both unsupervised (machine) and supervised (human expert) processing, in producing appropriate cluster results.",
"",
"The increasing availability of motion sensors and video cameras in living spaces has made possible the analysis of motion patterns and collective behavior in a number of situations. The visualization of this movement data, however, remains a challenge. Although maintaining the actual layout of the data space is often desirable, direct visualization of movement traces becomes cluttered and confusing as the spatial distribution of traces may be disparate and uneven. We present proximity-based visualization as a novel approach to the visualization of movement traces in an abstract space rather than the given spatial layout. This abstract space is obtained by considering proximity data, which is computed as the distance between entities and some number of important locations. These important locations can range from a single fixed point, to a moving point, several points, or even the proximities between the entities themselves. This creates a continuum of proximity spaces, ranging from the fixed absolute reference frame to completely relative reference frames. By combining these abstracted views with the concrete spatial views, we provide a way to mentally map the abstract spaces back to the real space. We demonstrate the effectiveness of this approach, and its applicability to visual analytics problems such as hazard prevention, migration patterns, and behavioral studies.",
"Interactive visualization systems for exploring and manipulating high-dimensional feature spaces have experienced a substantial progress in the last few years. State-of-art methods rely on solid mathematical and computational foundations that enable sophisticated and flexible interactive tools. Current methods are even capable of modifying data attributes during interaction, highlighting regions of potential interest in the feature space, and building visualizations that bring out the relevance of attributes. However, those methodologies rely on complex and non-intuitive interfaces that hamper the free handling of the feature spaces. Moreover, visualizing how neighborhood structures are affected during the space manipulation is also an issue for existing methods. This paper presents a novel visualization-assisted methodology for interacting and transforming data attributes embedded in feature spaces. The proposed approach relies on a combination of multidimensional projections and local transformations to provide an interactive mechanism for modifying attributes. Besides enabling a simple and intuitive visual layout, our approach allows the user to easily observe the changes in neighborhood structures during interaction. The usefulness of our methodology is shown in an application geared to image retrieval."
]
} |
1707.04281 | 2735320691 | Dimensionality reduction is a common method for analyzing and visualizing high-dimensional data across domains. Dimensionality-reduction algorithms involve complex optimizations and the reduced dimensions computed by these algorithms generally lack clear relation to the initial data dimensions. Therefore, interpreting and reasoning about dimensionality reductions can be difficult. In this work, we introduce two interaction techniques, and , for reasoning dynamically about scatter plots of dimensionally reduced data. We also contribute two related visualization techniques, and to facilitate and enrich the effective use of the proposed interactions, which we integrate in a new tool called . To evaluate our techniques, we first analyze their time and accuracy performance across varying sample and dimension sizes. We then conduct a user study in which twelve data scientists use so as to assess the usefulness of the techniques in performing exploratory data analysis tasks. Results suggest that our visual interactions are intuitive and effective for exploring dimensionality reductions and generating hypotheses about the underlying data. | Prior work introduces various visualizations in planar scatter plots of DRs, in order to improve the user experience by communicating projection errors @cite_33 @cite_6 @cite_45 @cite_40 , change in dimensionality projection positions @cite_18 , data properties and clustering results @cite_6 @cite_35 , and contributions of original data dimensions in reduced dimensions @cite_12 . Low-dimensional projections are generally lossy representations of the original data relations: therefore, it is useful to convey both overall and per-point dimensionality-reduction errors to users when desired. Researchers visualized errors in DR scatter plots using Voronoi diagrams @cite_45 @cite_40 and corrected (undistorted) the errors by adjusting the projection layout with respect to the examined point @cite_33 @cite_6 . | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_33",
"@cite_6",
"@cite_40",
"@cite_45",
"@cite_12"
],
"mid": [
"2763925030",
"2025394193",
"",
"1945230793",
"2163825541",
"2088323702",
"2104146802"
],
"abstract": [
"While clustering is one of the most popular methods for data mining, analysts lack adequate tools for quick, iterative clustering analysis, which is essential for hypothesis generation and data reasoning. We introduce Clustrophile, an interactive tool for iteratively computing discrete and continuous data clusters, rapidly exploring different choices of clustering parameters, and reasoning about clustering instances in relation to data dimensions. Clustrophile combines three basic visualizations -- a table of raw datasets, a scatter plot of planar projections, and a matrix diagram (heatmap) of discrete clusterings -- through interaction and intermediate visual encoding. Clustrophile also contributes two spatial interaction techniques, @math and @math , and a visualization method, @math , for reasoning about two-dimensional projections obtained through dimensionality reductions.",
"Principle Component Analysis (PCA) is a widely used mathematical technique in many fields for factor and trend analysis, dimension reduction, etc. However, it is often considered to be a \"black box\" operation whose results are difficult to interpret and sometimes counter-intuitive to the user. In order to assist the user in better understanding and utilizing PCA, we have developed a system that visualizes the results of principal component analysis using multiple coordinated views and a rich set of user interactions. Our design philosophy is to support analysis of multivariate datasets through extensive interaction with the PCA output. To demonstrate the usefulness of our system, we performed a comparative user study with a known commercial system, SAS INSIGHT's Interactive Data Exploration. Participants in our study solved a number of high-level analysis tasks with each interface and rated the systems on ease of learning and usefulness. Based on the participants' accuracy, speed, and qualitative feedback, we observe that our system helps users to better understand relationships between the data and the calculated eigenspace, which allows the participants to more accurately analyze the data. User feedback suggests that the interactivity and transparency of our system are the key strengths of our approach.",
"",
"We introduce a set of integrated interaction techniques to interpret and interrogate dimensionality-reduced data. Projection techniques generally aim to make a high-dimensional information space visible in form of a planar layout. However, the meaning of the resulting data projections can be hard to grasp. It is seldom clear why elements are placed far apart or close together and the inevitable approximation errors of any projection technique are not exposed to the viewer. Previous research on dimensionality reduction focuses on the efficient generation of data projections, interactive customisation of the model, and comparison of different projection techniques. There has been only little research on how the visualization resulting from data projection is interacted with. We contribute the concept of probing as an integrated approach to interpreting the meaning and quality of visualizations and propose a set of interactive methods to examine dimensionality-reduced data as well as the projection itself. The methods let viewers see approximation errors, question the positioning of elements, compare them to each other, and visualize the influence of data dimensions on the projection space. We created a web-based system implementing these methods, and report on findings from an evaluation with data analysts using the prototype to examine multidimensional datasets.",
"Multidimensional scaling is a must-have tool for visual data miners, projecting multidimensional data onto a two-dimensional plane. However, what we see is not necessarily what we think about. In many cases, end-users do not take care of scaling the projection space with respect to the multidimensional space. Anyway, when using non-linear mappings, scaling is not even possible. Yet, without scaling geometrical structures which might appear do not make more sense than considering a random map. Without scaling, we shall not make inference from the display back to the multidimensional space. No clusters, no trends, no outliers, there is nothing to infer without first quantifying the mapping quality. Several methods to qualify mappings have been devised. Here, we propose CheckViz, a new method belonging to the framework of Verity Visualization. We define a two-dimensional perceptually uniform colour coding which allows visualizing tears and false neighbourhoods, the two elementary and complementary types of geometrical mapping distortions, straight onto the map at the location where they occur. As examples shall demonstrate, this visualization method is essential to help users make sense out of the mappings and to prevent them from over interpretations. It could be applied to check other mappings as well.",
"The visualization of continuous multi-dimensional data based on their projection to a 2-dimensional space is a way to detect visually interesting patterns, as far as the projection provides a faithful image of the original data. In order to evaluate this faithfulness, we propose to visualize any measure associated to a projected datum or to a pair of projected data, by coloring the corresponding Voronoi cell in the projection space. We also define specific measures and show how they allow estimating visually whether some part of the projection is or is not a reliable image of the original manifolds. It also helps to figure out what the original topology of the data is, telling where the high-dimensional manifolds have been torn or glued during the projection. We experiment these techniques with the principal component analysis and the curvilinear component analysis applied to artificial and real databases.",
"SUMMARY Any matrix of rank two can be displayed as a biplot which consists of a vector for each row and a vector for each column, chosen so that any element of the matrix is exactly the inner product of the vectors corresponding to its row and to its column. If a matrix is of higher rank, one may display it approximately by a biplot of a matrix of rank two which approximates the original matrix. The biplot provides a useful tool of data analysis and allows the visual appraisal of the structure of large data matrices. It is especially revealing in principal component analysis, where the biplot can show inter-unit distances and indicate clustering of units as well as display variances and correlations of the variables. Any matrix may be represented by a vector for each row and another vector for each column, so chosen that the elements of the matrix are the inner products of the vectors representing the corresponding rows and columns. This is conceptually helpful in understanding properties of matrices. When the matrix is of rank 2 or 3, or can be closely approximated by a matrix of such rank, the vectors may be plotted or modelled and the matrix representation inspected physically. This is of obvious practical interest for the analysis of large matrices. Any n x m matrix Y of rank r can be factorized as"
]
} |
1707.04281 | 2735320691 | Dimensionality reduction is a common method for analyzing and visualizing high-dimensional data across domains. Dimensionality-reduction algorithms involve complex optimizations and the reduced dimensions computed by these algorithms generally lack clear relation to the initial data dimensions. Therefore, interpreting and reasoning about dimensionality reductions can be difficult. In this work, we introduce two interaction techniques, and , for reasoning dynamically about scatter plots of dimensionally reduced data. We also contribute two related visualization techniques, and to facilitate and enrich the effective use of the proposed interactions, which we integrate in a new tool called . To evaluate our techniques, we first analyze their time and accuracy performance across varying sample and dimension sizes. We then conduct a user study in which twelve data scientists use so as to assess the usefulness of the techniques in performing exploratory data analysis tasks. Results suggest that our visual interactions are intuitive and effective for exploring dimensionality reductions and generating hypotheses about the underlying data. | Biplot was introduced @cite_12 to visualize the magnitude and sign of a data attribute's contribution to the first two or three principal components as line vectors in PCA. reduce to biplots when PCA is used for dimensionality reduction. Our construction algorithm is general and reflects the underlying out-of-sample extension method used. On the other hand, biplots are based on singular-value decomposition and always use PCA forward projection, regardless of the actual DR used. Additionally, differ from biplots in being interactive visual objects beyond static vectors and are decorated to communicate distributional characteristics of the underlying data point. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2104146802"
],
"abstract": [
"SUMMARY Any matrix of rank two can be displayed as a biplot which consists of a vector for each row and a vector for each column, chosen so that any element of the matrix is exactly the inner product of the vectors corresponding to its row and to its column. If a matrix is of higher rank, one may display it approximately by a biplot of a matrix of rank two which approximates the original matrix. The biplot provides a useful tool of data analysis and allows the visual appraisal of the structure of large data matrices. It is especially revealing in principal component analysis, where the biplot can show inter-unit distances and indicate clustering of units as well as display variances and correlations of the variables. Any matrix may be represented by a vector for each row and another vector for each column, so chosen that the elements of the matrix are the inner products of the vectors representing the corresponding rows and columns. This is conceptually helpful in understanding properties of matrices. When the matrix is of rank 2 or 3, or can be closely approximated by a matrix of such rank, the vectors may be plotted or modelled and the matrix representation inspected physically. This is of obvious practical interest for the analysis of large matrices. Any n x m matrix Y of rank r can be factorized as"
]
} |
1707.04281 | 2735320691 | Dimensionality reduction is a common method for analyzing and visualizing high-dimensional data across domains. Dimensionality-reduction algorithms involve complex optimizations and the reduced dimensions computed by these algorithms generally lack clear relation to the initial data dimensions. Therefore, interpreting and reasoning about dimensionality reductions can be difficult. In this work, we introduce two interaction techniques, and , for reasoning dynamically about scatter plots of dimensionally reduced data. We also contribute two related visualization techniques, and to facilitate and enrich the effective use of the proposed interactions, which we integrate in a new tool called . To evaluate our techniques, we first analyze their time and accuracy performance across varying sample and dimension sizes. We then conduct a user study in which twelve data scientists use so as to assess the usefulness of the techniques in performing exploratory data analysis tasks. Results suggest that our visual interactions are intuitive and effective for exploring dimensionality reductions and generating hypotheses about the underlying data. | Stahnke et al. @cite_6 use a grayscale map to visualize how a single attribute value changes between data points in DR scatter plots. We use , a grayscale map, to visualize the feasible regions in the constrained interaction.
"cite_N": [
"@cite_6"
],
"mid": [
"1945230793"
],
"abstract": [
"We introduce a set of integrated interaction techniques to interpret and interrogate dimensionality-reduced data. Projection techniques generally aim to make a high-dimensional information space visible in form of a planar layout. However, the meaning of the resulting data projections can be hard to grasp. It is seldom clear why elements are placed far apart or close together and the inevitable approximation errors of any projection technique are not exposed to the viewer. Previous research on dimensionality reduction focuses on the efficient generation of data projections, interactive customisation of the model, and comparison of different projection techniques. There has been only little research on how the visualization resulting from data projection is interacted with. We contribute the concept of probing as an integrated approach to interpreting the meaning and quality of visualizations and propose a set of interactive methods to examine dimensionality-reduced data as well as the projection itself. The methods let viewers see approximation errors, question the positioning of elements, compare them to each other, and visualize the influence of data dimensions on the projection space. We created a web-based system implementing these methods, and report on findings from an evaluation with data analysts using the prototype to examine multidimensional datasets."
]
} |
1707.04281 | 2735320691 | Dimensionality reduction is a common method for analyzing and visualizing high-dimensional data across domains. Dimensionality-reduction algorithms involve complex optimizations and the reduced dimensions computed by these algorithms generally lack clear relation to the initial data dimensions. Therefore, interpreting and reasoning about dimensionality reductions can be difficult. In this work, we introduce two interaction techniques, and , for reasoning dynamically about scatter plots of dimensionally reduced data. We also contribute two related visualization techniques, and to facilitate and enrich the effective use of the proposed interactions, which we integrate in a new tool called . To evaluate our techniques, we first analyze their time and accuracy performance across varying sample and dimension sizes. We then conduct a user study in which twelve data scientists use so as to assess the usefulness of the techniques in performing exploratory data analysis tasks. Results suggest that our visual interactions are intuitive and effective for exploring dimensionality reductions and generating hypotheses about the underlying data. | We compute forward projections using out-of-sample extension (or extrapolation) @cite_16 . Out-of-sample extension is the projection of a new data point into an existing DR (e.g. learned manifold model) using only the properties of the already computed DR. It is conceptually equivalent to testing a trained machine-learning model with data that was not part of the training set. For linear DR methods, out-of-sample extension is often performed by applying the learned linear transformation to the new data point. For autoencoders, the trained network defines the transformation from the high-dimensional to low-dimensional data representation @cite_41 . Back or backward projection maps a low-dimensional data point back into the original high-dimensional data space. 
For linear DRs, back projection is typically done by applying the inverse of the learned linear DR mapping. For nonlinear DRs, earlier research proposed DR-specific backward-projection techniques. For example, iLAMP @cite_26 introduces a back-projection method for LAMP @cite_23 using local neighborhoods and demonstrates its viability over synthetic datasets @cite_26 . Researchers also investigated general backward projection methods using radial basis functions @cite_22 @cite_27 , treating backward projection as an interpolation problem. | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_41",
"@cite_27",
"@cite_23",
"@cite_16"
],
"mid": [
"1963992598",
"2964040595",
"2153934661",
"",
"2133934472",
"2137570937"
],
"abstract": [
"Ever improving computing power and technological advances are greatly augmenting data collection and scientific observation. This has directly contributed to increased data complexity and dimensionality, motivating research of exploration techniques for multidimensional data. Consequently, a recent influx of work dedicated to techniques and tools that aid in understanding multidimensional datasets can be observed in many research fields, including biology, engineering, physics and scientific computing. While the effectiveness of existing techniques to analyze the structure and relationships of multidimensional data varies greatly, few techniques provide flexible mechanisms to simultaneously visualize and actively explore high-dimensional spaces. In this paper, we present an inverse linear affine multidimensional projection, coined iLAMP, that enables a novel interactive exploration technique for multidimensional data. iLAMP operates in reverse to traditional projection methods by mapping low-dimensional information into a high-dimensional space. This allows users to extrapolate instances of a multidimensional dataset while exploring a projection of the data to the planar domain. We present experimental results that validate iLAMP, measuring the quality and coherence of the extrapolated data; as well as demonstrate the utility of iLAMP to hypothesize the unexplored regions of a high-dimensional space.",
"Abstract Nonlinear dimensionality reduction embeddings computed from datasets do not provide a mechanism to compute the inverse map. In this paper, we address the problem of computing a stable inverse map to such a general bi-Lipschitz map. Our approach relies on radial basis functions (RBFs) to interpolate the inverse map everywhere on the low-dimensional image of the forward map. We demonstrate that the scale-free cubic RBF kernel performs better than the Gaussian kernel: it does not suffer from ill-conditioning, and does not require the choice of a scale. The proposed construction is shown to be similar to the Nystrom extension of the eigenvectors of the symmetric normalized graph Laplacian matrix. Based on this observation, we provide a new interpretation of the Nystrom extension with suggestions for improvement.",
"Several unsupervised learning algorithms based on an eigendecomposition provide either an embedding or a clustering only for given training points, with no straightforward extension for out-of-sample examples short of recomputing eigenvectors. This paper provides a unified framework for extending Local Linear Embedding (LLE), Isomap, Laplacian Eigenmaps, Multi-Dimensional Scaling (for dimensionality reduction) as well as for Spectral Clustering. This framework is based on seeing these algorithms as learning eigenfunctions of a data-dependent kernel. Numerical experiments show that the generalizations performed have a level of error comparable to the variability of the embedding algorithms due to the choice of training data.",
"",
"A standard approach for visualizing multivariate networks is to use one or more multidimensional views (for example, scatterplots) for selecting nodes by various metrics, possibly coordinated with a node-link view of the network. In this paper, we present three novel approaches for achieving a tighter integration of these views through hybrid techniques for multidimensional visualization, graph selection and layout. First, we present the FlowVizMenu, a radial menu containing a scatterplot that can be popped up transiently and manipulated with rapid, fluid gestures to select and modify the axes of its scatterplot. Second, the FlowVizMenu can be used to steer an attribute-driven layout of the network, causing certain nodes of a node-link diagram to move toward their corresponding positions in a scatterplot while others can be positioned manually or by force-directed layout. Third, we describe a novel hybrid approach that combines a scatterplot matrix (SPLOM) and parallel coordinates called the Parallel Scatterplot Matrix (P-SPLOM), which can be used to visualize and select features within the network. We also describe a novel arrangement of scatterplots called the Scatterplot Staircase (SPLOS) that requires less space than a traditional scatterplot matrix. Initial user feedback is reported.",
"In recent years, a variety of nonlinear dimensionality reduction techniques have been proposed that aim to address the limitations of traditional techniques such as PCA and classical scaling. The paper presents a review and systematic comparison of these techniques. The performances of the nonlinear techniques are investigated on artificial and natural tasks. The results of the experiments reveal that nonlinear techniques perform well on selected artificial tasks, but that this strong performance does not necessarily extend to real-world tasks. The paper explains these results by identifying weaknesses of current nonlinear techniques, and suggests how the performance of nonlinear dimensionality reduction techniques may be improved."
]
} |
1707.04281 | 2735320691 | Dimensionality reduction is a common method for analyzing and visualizing high-dimensional data across domains. Dimensionality-reduction algorithms involve complex optimizations and the reduced dimensions computed by these algorithms generally lack clear relation to the initial data dimensions. Therefore, interpreting and reasoning about dimensionality reductions can be difficult. In this work, we introduce two interaction techniques, and , for reasoning dynamically about scatter plots of dimensionally reduced data. We also contribute two related visualization techniques, and to facilitate and enrich the effective use of the proposed interactions, which we integrate in a new tool called . To evaluate our techniques, we first analyze their time and accuracy performance across varying sample and dimension sizes. We then conduct a user study in which twelve data scientists use so as to assess the usefulness of the techniques in performing exploratory data analysis tasks. Results suggest that our visual interactions are intuitive and effective for exploring dimensionality reductions and generating hypotheses about the underlying data. | Autoencoders @cite_34 , neural-network-based DR models, are a promising approach to computing backward projections. An autoencoder model with multiple hidden layers can learn a nonlinear dimensionality reduction function (encoding) as well as the corresponding backward projection (decoding) as part of the DR process. Inverting DRs is, however, an ill-posed problem. In addition to augmenting what-if analysis, the ability to define constraints over a back projection can also ease the computational burden. Praxis also enables users to interactively set equality and boundary constraints over back projections through an intuitive interface. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2100495367"
],
"abstract": [
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."
]
} |
1707.04281 | 2735320691 | Dimensionality reduction is a common method for analyzing and visualizing high-dimensional data across domains. Dimensionality-reduction algorithms involve complex optimizations and the reduced dimensions computed by these algorithms generally lack clear relation to the initial data dimensions. Therefore, interpreting and reasoning about dimensionality reductions can be difficult. In this work, we introduce two interaction techniques, and , for reasoning dynamically about scatter plots of dimensionally reduced data. We also contribute two related visualization techniques, and to facilitate and enrich the effective use of the proposed interactions, which we integrate in a new tool called . To evaluate our techniques, we first analyze their time and accuracy performance across varying sample and dimension sizes. We then conduct a user study in which twelve data scientists use so as to assess the usefulness of the techniques in performing exploratory data analysis tasks. Results suggest that our visual interactions are intuitive and effective for exploring dimensionality reductions and generating hypotheses about the underlying data. | We presented initial versions of , , and earlier as part of Clustrophile, an exploratory visual clustering analysis tool @cite_35 . We give here a focused discussion of our revised interaction and visualization techniques, introduce Praxis, a new visualization tool that implements our techniques for exploratory data analysis using DR, and provide a thorough computational and user-performance evaluation. The current work also introduces , a new visualization technique to facilitate interactions. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2763925030"
],
"abstract": [
"While clustering is one of the most popular methods for data mining, analysts lack adequate tools for quick, iterative clustering analysis, which is essential for hypothesis generation and data reasoning. We introduce Clustrophile, an interactive tool for iteratively computing discrete and continuous data clusters, rapidly exploring different choices of clustering parameters, and reasoning about clustering instances in relation to data dimensions. Clustrophile combines three basic visualizations -- a table of raw datasets, a scatter plot of planar projections, and a matrix diagram (heatmap) of discrete clusterings -- through interaction and intermediate visual encoding. Clustrophile also contributes two spatial interaction techniques, @math and @math , and a visualization method, @math , for reasoning about two-dimensional projections obtained through dimensionality reductions."
]
} |
1707.03903 | 2734844314 | We present a new approach to extraction of hypernyms based on projection learning and word embeddings. In contrast to classification-based approaches, projection-based methods require no candidate hyponym-hypernym pairs. While it is natural to use both positive and negative training examples in supervised relation extraction, the impact of negative examples on hypernym prediction was not studied so far. In this paper, we show that explicit negative examples used for regularization of the model significantly improve performance compared to the state-of-the-art approach of (2014) on three datasets from different languages. | for hypernymy extraction rely on sentences where both hyponym and hypernym co-occur in characteristic contexts, e.g., such as and ''. proposed to use hand-crafted lexical-syntactic patterns to extract hypernyms from such contexts. introduced a method for learning patterns automatically based on a set of seed hyponym-hypernym pairs. Further examples of path-based approaches include @cite_18 and @cite_27 . The inherent limitation of the path-based methods leading to sparsity issues is that hyponym and hypernym have to co-occur in the same sentence. | {
"cite_N": [
"@cite_27",
"@cite_18"
],
"mid": [
"2164370343",
"2100428642"
],
"abstract": [
"Definition extraction is the task of automatically identifying definitional sentences within texts. The task has proven useful in many research areas including ontology learning, relation extraction and question answering. However, current approaches -- mostly focused on lexicosyntactic patterns -- suffer from both low recall and precision, as definitional sentences occur in highly variable syntactic structures. In this paper, we propose Word-Class Lattices (WCLs), a generalization of word lattices that we use to model textual definitions. Lattices are learned from a dataset of definitions from Wikipedia. Our method is applied to the task of definition and hypernym extraction and compares favorably to other pattern generalization methods proposed in the literature.",
"We compare two different types of extraction patterns for automatically deriving semantic information from text: lexical patterns, built from words and word class information, and dependency patterns with syntactic information obtained from a full parser. We are particularly interested in whether the richer linguistic information provided by a parser allows for a better performance of subsequent information extraction work. We evaluate automatic extraction of hypernym information from text and conclude that the application of dependency patterns does not lead to substantially higher precision and recall scores than using lexical patterns."
]
} |
1707.03903 | 2734844314 | We present a new approach to extraction of hypernyms based on projection learning and word embeddings. In contrast to classification-based approaches, projection-based methods require no candidate hyponym-hypernym pairs. While it is natural to use both positive and negative training examples in supervised relation extraction, the impact of negative examples on hypernym prediction was not studied so far. In this paper, we show that explicit negative examples used for regularization of the model significantly improve performance compared to the state-of-the-art approach of (2014) on three datasets from different languages. | Methods based on distributional vectors, such as those generated using the toolkit @cite_32 , aim to overcome this sparsity issue as they require no hyponym-hypernym co-occurrence in a sentence. Such methods take representations of individual words as input to predict relations between them. Two branches of methods relying on distributional representations have emerged so far.
"cite_N": [
"@cite_32"
],
"mid": [
"2950133940"
],
"abstract": [
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible."
]
} |
1707.03903 | 2734844314 | We present a new approach to extraction of hypernyms based on projection learning and word embeddings. In contrast to classification-based approaches, projection-based methods require no candidate hyponym-hypernym pairs. While it is natural to use both positive and negative training examples in supervised relation extraction, the impact of negative examples on hypernym prediction was not studied so far. In this paper, we show that explicit negative examples used for regularization of the model significantly improve performance compared to the state-of-the-art approach of (2014) on three datasets from different languages. | An inherent limitation of classification-based approaches is that they require a list of candidate word pairs. While these are given in evaluation datasets such as BLESS @cite_33 , a corpus-wide classification of relations would need to classify all possible word pairs, which is computationally expensive for large vocabularies. Besides, discovered a tendency of such approaches toward lexical memorization, hampering generalization.
"cite_N": [
"@cite_33"
],
"mid": [
"2135964261"
],
"abstract": [
"We introduce BLESS, a data set specifically designed for the evaluation of distributional semantic models. BLESS contains a set of tuples instantiating different, explicitly typed semantic relations, plus a number of controlled random tuples. It is thus possible to assess the ability of a model to detect truly related word pairs, as well as to perform in-depth analyses of the types of semantic relations that a model favors. We discuss the motivations for BLESS, describe its construction and structure, and present examples of its usage in the evaluation of distributional semantic models."
]
} |
1707.03986 | 2735055431 | Given a large number of unlabeled face images, face grouping aims at clustering the images into individual identities present in the data. This task remains a challenging problem despite the remarkable capability of deep learning approaches in learning face representation. In particular, grouping results can still be egregious given profile faces and a large number of uninteresting faces and noisy detections. Often, a user needs to correct the erroneous grouping manually. In this study, we formulate a novel face grouping framework that learns clustering strategy from ground-truth simulated behavior. This is achieved through imitation learning (a.k.a apprenticeship learning or learning by watching) via inverse reinforcement learning (IRL). In contrast to existing clustering approaches that group instances by similarity, our framework makes sequential decision to dynamically decide when to merge two face instances groups driven by short- and long-term rewards. Extensive experiments on three benchmark datasets show that our framework outperforms unsupervised and supervised baselines. | Traditional face clustering methods @cite_0 @cite_49 @cite_39 @cite_14 are usually purely data-driven and unsupervised. They mainly focus on finding a good distance metric between faces or effective subspaces for face representation. For instance, Zhu et al. @cite_14 propose a rank-order distance that measures the similarity between two faces using their neighboring information. Fitzgibbon and Zisserman @cite_45 further develop a joint manifold distance (JMD) that measures the distance between two subspaces, each of which is invariant to a desired group of transformations. Zhang et al. @cite_17 propose agglomerative clustering on a directed graph to better capture global manifold structures of face data. There exist techniques that employ user interactions @cite_20 , extra information on the web @cite_5 and prior knowledge of family photo albums @cite_2 .
Deep representations have recently been found effective for face clustering @cite_33 , and large-scale face clustering has been attempted @cite_22 . Beyond image-based clustering, most existing video-based approaches employ pairwise constraints derived from face tracklets @cite_19 @cite_24 @cite_48 @cite_27 or other auxiliary information @cite_35 @cite_42 @cite_1 to facilitate face clustering in video. The state-of-the-art method by Zhang et al. @cite_27 adapts the DeepID2+ model @cite_6 to a target domain with joint face representation adaptation and clustering. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_42",
"@cite_2",
"@cite_5",
"@cite_20",
"@cite_48",
"@cite_39",
"@cite_49",
"@cite_17",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_14",
"@cite_33",
"@cite_1",
"@cite_0",
"@cite_24",
"@cite_45"
],
"mid": [
"2161011194",
"2953081179",
"2293791596",
"2017659294",
"2107558380",
"2169827072",
"98220398",
"",
"2164617909",
"2952808567",
"2952304308",
"",
"2519769969",
"2155162820",
"2096733369",
"1995032263",
"",
"1969014310",
"2101169619"
],
"abstract": [
"Content-based people clustering is a crucial step for people indexing within video documents. In this paper, we investigate the use of both face and clothing features. A method of extracting a keyface for each video sequence is proposed. An algorithm based on the average of the N-minimum pair distances between local invariant features is used in order to resolve the problem of face matching. An original method for clothing matching is proposed based on 3D histogram of the dominant color. A 3-levels hierarchical bottom-up clustering that combines local invariant features, skin color, 3D histogram and clothing texture is also described. Experiments and results show the efficiency of the proposed clustering system.",
"In this work, we attempt to address the following problem: Given a large number of unlabeled face images, cluster them into the individual identities present in this data. We consider this a relevant problem in different application scenarios ranging from social media to law enforcement. In large-scale scenarios the number of faces in the collection can be of the order of hundreds of million, while the number of clusters can range from a few thousand to millions--leading to difficulties in terms of both run-time complexity and evaluating clustering and per-cluster quality. An efficient and effective Rank-Order clustering algorithm is developed to achieve the desired scalability, and better clustering accuracy than other well-known algorithms such as k-means and spectral clustering. We cluster up to 123 million face images into over 10 million clusters, and analyze the results in terms of both external cluster quality measures (known face labels) and internal cluster quality measures (unknown face labels) and run-time. Our algorithm achieves an F-measure of 0.87 on a benchmark unconstrained face dataset (LFW, consisting of 13K faces), and 0.27 on the largest dataset considered (13K images in LFW, plus 123M distractor images). Additionally, we present preliminary work on video frame clustering (achieving 0.71 F-measure when clustering all frames in the benchmark YouTube Faces dataset). A per-cluster quality measure is developed which can be used to rank individual clusters and to automatically identify a subset of good quality clusters for manual exploration.",
"In this paper, we investigate the problem of face clustering in real-world videos. In many cases, the distribution of the face data is unbalanced. In movies or TV series videos, the leading casts appear quite often and the others appear much less. However, many clustering algorithms cannot well handle such severe unbalance between the data distribution, resulting in that the large class is split apart, and the small class is merged into the large ones and thus missing. On the other hand, the data distribution proportion information may be known beforehand. For example, we can obtain such information by counting the spoken lines of the characters in the script text. Hence, we propose to make use of the proportion prior to regularize the clustering. A Hidden Conditional Random Field (HCRF) model is presented to incorporate the proportion prior. In experiments on a public data set from real-world videos, we observe improvements on clustering performance against state-of-the-art methods.",
"Digital photo management is becoming indispensable for the explosively growing family photo albums due to the rapid popularization of digital cameras and mobile phone cameras. An effective photo management system could accurately and efficiently group all faces of the same person into a small number of clusters. In this paper, we present a novel photo grouping method based on spectral theory. The key idea is to utilize prior information of family photo albums to improve the performance. First, an individual can only appear once in one photo, which works as the similarity constraint in our graph construction. Second, an individual cannot show more times than the number of photos in each album. That is, the size of a cluster for an individual is at most the number of photos in an album. We consider this constraint as a Minimum Cost Flow (MCF) linear network optimization problem and therefore propose a constrained K-Means for data clustering after graph embedding. Two metrics, i.e., accuracy (AC) and normalized mutual information metric (NMI), are used to evaluate the clustering performance. Extensive experimental results demonstrate the effectiveness of the proposed method.",
"We show quite good face clustering is possible for a dataset of inaccurately and ambiguously labelled face images. Our dataset is 44,773 face images, obtained by applying a face finder to approximately half a million captioned news images. This dataset is more realistic than usual face recognition datasets, because it contains faces captured \"in the wild\" in a variety of configurations with respect to the camera, taking a variety of expressions, and under illumination of widely varying color. Each face image is associated with a set of names, automatically extracted from the associated caption. Many, but not all such sets contain the correct name. We cluster face images in appropriate discriminant coordinates. We use a clustering procedure to break ambiguities in labelling and identify incorrectly labelled faces. A merging procedure then identifies variants of names that refer to the same individual. The resulting representation can be used to label faces in news images or to organize news pictures by individuals present. An alternative view of our procedure is as a process that cleans up noisy supervised data. We demonstrate how to use entropy measures to evaluate such procedures.",
"Face annotation technology is important for a photo management system. In this paper, we propose a novel interactive face annotation framework combining unsupervised and interactive learning. There are two main contributions in our framework. In the unsupervised stage, a partial clustering algorithm is proposed to find the most evident clusters instead of grouping all instances into clusters, which leads to a good initial labeling for later user interaction. In the interactive stage, an efficient labeling procedure based on minimization of both global system uncertainty and estimated number of user operations is proposed to reduce user interaction as much as possible. Experimental results show that the proposed annotation framework can significantly reduce the face annotation workload and is superior to existing solutions in the literature.",
"In this paper, we study the problem of face clustering in videos. Specifically, given automatically extracted faces from videos and two kinds of prior knowledge (the face track that each face belongs to, and the pairs of faces that appear in the same frame), the task is to partition the faces into a given number of disjoint groups, such that each group is associated with one subject. To deal with this problem, we propose a new method called weighted block-sparse low rank representation (WBSLRR) which considers the available prior knowledge while learning a low rank data representation, and also develop a simple but effective approach to obtain the clustering result of faces. Moreover, after using several acceleration techniques, our proposed method is suitable for solving large-scale problems. The experimental results on two benchmark datasets demonstrate the effectiveness of our approach.",
"",
"In this paper, we first develop a direct Bayesian based support vector machine by combining the Bayesian analysis with the SVM. Unlike traditional SVM-based face recognition method that needs to train a large number of SVMs, the direct Bayesian SVM needs only one SVM trained to classify the face difference between intra-personal variation and extra-personal variation. However, the added simplicity means that the method has to separate two complex subspaces by one hyper-plane thus affects the recognition accuracy. In order to improve the recognition performance we develop three more Bayesian based SVMs, including the one-versus-all method, the hierarchical agglomerative clustering based method, and the adaptive clustering method. We show the improvement of the new algorithms over traditional subspace methods through experiments on two face databases, the FERET database and the XM2VTS database.",
"This paper proposes a simple but effective graph-based agglomerative algorithm, for clustering high-dimensional data. We explore the different roles of two fundamental concepts in graph theory, indegree and outdegree, in the context of clustering. The average indegree reflects the density near a sample, and the average outdegree characterizes the local geometry around a sample. Based on such insights, we define the affinity measure of clusters via the product of average indegree and average outdegree. The product-based affinity makes our algorithm robust to noise. The algorithm has three main advantages: good performance, easy implementation, and high computational efficiency. We test the algorithm on two fundamental computer vision problems: image clustering and object matching. Extensive experiments demonstrate that it outperforms the state-of-the-arts in both applications.",
"This paper designs a high-performance deep convolutional network (DeepID2+) for face recognition. It is learned with the identification-verification supervisory signal. By increasing the dimension of hidden representations and adding supervision to early convolutional layers, DeepID2+ achieves new state-of-the-art on LFW and YouTube Faces benchmarks. Through empirical studies, we have discovered three properties of its deep neural activations critical for the high performance: sparsity, selectiveness and robustness. (1) It is observed that neural activations are moderately sparse. Moderate sparsity maximizes the discriminative power of the deep net as well as the distance between images. It is surprising that DeepID2+ still can achieve high recognition accuracy even after the neural responses are binarized. (2) Its neurons in higher layers are highly selective to identities and identity-related attributes. We can identify different subsets of neurons which are either constantly excited or inhibited when different identities or attributes are present. Although DeepID2+ is not taught to distinguish attributes during training, it has implicitly learned such high-level concepts. (3) It is much more robust to occlusions, although occlusion patterns are not included in the training set.",
"",
"Clustering faces in movies or videos is extremely challenging since characters’ appearance can vary drastically under different scenes. In addition, the various cinematic styles make it difficult to learn a universal face representation for all videos. Unlike previous methods that assume fixed handcrafted features for face clustering, in this work, we formulate a joint face representation adaptation and clustering approach in a deep learning framework. The proposed method allows face representation to gradually adapt from an external source domain to a target video domain. The adaptation of deep representation is achieved without any strong supervision but through iteratively discovered weak pairwise identity constraints derived from potentially noisy face clustering result. Experiments on three benchmark video datasets demonstrate that our approach generates character clusters with high purity compared to existing video face clustering methods, which are either based on deep face representation (without adaptation) or carefully engineered features.",
"We present a novel clustering algorithm for tagging a face dataset (e. g., a personal photo album). The core of the algorithm is a new dissimilarity, called Rank-Order distance, which measures the dissimilarity between two faces using their neighboring information in the dataset. The Rank-Order distance is motivated by an observation that faces of the same person usually share their top neighbors. Specifically, for each face, we generate a ranking order list by sorting all other faces in the dataset by absolute distance (e. g., L1 or L2 distance between extracted face recognition features). Then, the Rank-Order distance of two faces is calculated using their ranking orders. Using the new distance, a Rank-Order distance based clustering algorithm is designed to iteratively group all faces into a small number of clusters for effective tagging. The proposed algorithm outperforms competitive clustering algorithms in term of both precision recall and efficiency.",
"Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.",
"Face clustering is an important but challenging task since facial images always have huge variation due to change in facial expressions, head poses and partial occlusions, etc. Moreover, face clustering is actually an unsupervised problem which makes it more difficult to reach an accurate result. Fortunately, there are some cues that can be used to improve clustering performance. In this paper, two types of cues are employed. The first one is pairwise constraints: must-link and cannot-link constraints, which can be extracted from the temporal and spatial knowledge of data. The other is that each face is associated with a series of attributes (i.e, gender) which can contribute discrimination among faces. To take advantage of the above cues, we propose a new algorithm, Multi-cue Augmented Face Clustering (McAFC), which effectively incorporates the cues via graph-guided sparse subspace clustering technique. Specially, facial images from the same individual are encouraged to be connected while faces from different persons are restrained to be connected. Experiments on three face datasets from real-world videos show the improvements of our algorithm over the state-of-the-art methods.",
"",
"In this paper, we focus on face clustering in videos. Given the detected faces from real-world videos, we partition all faces into K disjoint clusters. Different from clustering on a collection of facial images, the faces from videos are organized as face tracks and the frame index of each face is also provided. As a result, many pair wise constraints between faces can be easily obtained from the temporal and spatial knowledge of the face tracks. These constraints can be effectively incorporated into a generative clustering model based on the Hidden Markov Random Fields (HMRFs). Within the HMRF model, the pair wise constraints are augmented by label-level and constraint-level local smoothness to guide the clustering process. The parameters for both the unary and the pair wise potential functions are learned by the simulated field algorithm, and the weights of constraints can be easily adjusted. We further introduce an efficient clustering framework specially for face clustering in videos, considering that faces in adjacent frames of the same face track are very similar. The framework is applicable to other clustering algorithms to significantly reduce the computational cost. Experiments on two face data sets from real-world videos demonstrate the significantly improved performance of our algorithm over state-of-the art algorithms.",
"We wish to match sets of images to sets of images where both sets are undergoing various distortions such as viewpoint and lighting changes. To this end we have developed a joint manifold distance (JMD) which measures the distance between two subspaces, where each subspace is invariant to a desired group of transformations, for example affine warping of the image plane. The JMD may be seen as generalizing invariant distance metrics such as tangent distance in two important ways. First, formally representing priors on the image distribution avoids certain difficulties, which in previous work have required ad-hoc correction. The second contribution is the observation that previous distances have been computed using what amounted to \"home-grown\" nonlinear optimizers, and that more reliable results can be obtained by using generic optimizers which have been developed in the numerical analysis community, and which automatically set the parameters which home-grown methods must set by art. The JMD is used in this work to cluster faces in video. Sets of faces detected in contiguous frames define the subspaces, and distance between the subspaces is computed using JMD. In this way the principal cast of a movie can be 'discovered' as the principal clusters. We demonstrate the method on a feature-length movie."
]
} |
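The rank-order distance described in the passage above rests on the observation that faces of the same person usually share their top neighbors. A minimal sketch of that idea follows; the function name and the exact normalization are illustrative assumptions, not the reference implementation from Zhu et al.

```python
import numpy as np

def rank_order_distance(orders, a, b):
    """Sketch of a rank-order distance: for a's closest neighbors, sum their
    ranks in b's list (and vice versa), then normalize by how early each face
    appears in the other's list. `orders[x]` lists all faces ranked by plain
    (e.g., L2) distance to face x."""
    rank = lambda x, y: orders[x].index(y)            # O_x(y): rank of y in x's list
    def d(x, y):                                      # asymmetric term D(x, y)
        return sum(rank(y, orders[x][i]) for i in range(rank(x, y) + 1))
    return (d(a, b) + d(b, a)) / max(1, min(rank(a, b), rank(b, a)))

# Toy 1-D "faces": two identities, three images each.
feats = np.array([0.0, 0.1, 0.25, 5.0, 5.1, 5.25])
orders = [list(np.argsort(np.abs(feats - f), kind="stable")) for f in feats]
d_same = rank_order_distance(orders, 0, 1)   # same identity: shared top neighbors
d_diff = rank_order_distance(orders, 0, 3)   # different identities
```

Same-identity pairs score far lower than cross-identity pairs because their neighbor rankings overlap heavily near the top.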
1707.03986 | 2735055431 | Given a large number of unlabeled face images, face grouping aims at clustering the images into individual identities present in the data. This task remains a challenging problem despite the remarkable capability of deep learning approaches in learning face representation. In particular, grouping results can still be egregious given profile faces and a large number of uninteresting faces and noisy detections. Often, a user needs to correct the erroneous grouping manually. In this study, we formulate a novel face grouping framework that learns clustering strategy from ground-truth simulated behavior. This is achieved through imitation learning (a.k.a. apprenticeship learning or learning by watching) via inverse reinforcement learning (IRL). In contrast to existing clustering approaches that group instances by similarity, our framework makes sequential decisions to dynamically decide when to merge two face instance groups driven by short- and long-term rewards. Extensive experiments on three benchmark datasets show that our framework outperforms unsupervised and supervised baselines. | In this study, we focus on image-based face grouping without temporal information. Our method differs significantly from existing methods @cite_27 that cluster instances by deep representation alone. Instead, our method learns from experts to make sequential decisions on grouping, considering both short- and long-term rewards. It is thus capable of coping with uninteresting faces and noisy detections effectively. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2519769969"
],
"abstract": [
"Clustering faces in movies or videos is extremely challenging since characters’ appearance can vary drastically under different scenes. In addition, the various cinematic styles make it difficult to learn a universal face representation for all videos. Unlike previous methods that assume fixed handcrafted features for face clustering, in this work, we formulate a joint face representation adaptation and clustering approach in a deep learning framework. The proposed method allows face representation to gradually adapt from an external source domain to a target video domain. The adaptation of deep representation is achieved without any strong supervision but through iteratively discovered weak pairwise identity constraints derived from potentially noisy face clustering result. Experiments on three benchmark video datasets demonstrate that our approach generates character clusters with high purity compared to existing video face clustering methods, which are either based on deep face representation (without adaptation) or carefully engineered features."
]
} |
1707.03986 | 2735055431 | Given a large number of unlabeled face images, face grouping aims at clustering the images into individual identities present in the data. This task remains a challenging problem despite the remarkable capability of deep learning approaches in learning face representation. In particular, grouping results can still be egregious given profile faces and a large number of uninteresting faces and noisy detections. Often, a user needs to correct the erroneous grouping manually. In this study, we formulate a novel face grouping framework that learns clustering strategy from ground-truth simulated behavior. This is achieved through imitation learning (a.k.a. apprenticeship learning or learning by watching) via inverse reinforcement learning (IRL). In contrast to existing clustering approaches that group instances by similarity, our framework makes sequential decisions to dynamically decide when to merge two face instance groups driven by short- and long-term rewards. Extensive experiments on three benchmark datasets show that our framework outperforms unsupervised and supervised baselines. | There exist some pioneering studies that explored clustering with RL. Likas @cite_41 models the decision process of assigning a sample from a data stream to a prototype, i.e., cluster centers produced by on-line K-means. Barbakh and Fyfe @cite_8 employ RL to select a better initialization for K-means. Our work differs from the aforementioned studies: (1) @cite_8 @cite_41 are unsupervised, i.e., their loss is related to the distance from data to a cluster prototype. In contrast, our framework guides an agent with a teacher's behavior. (2) We consider decisions that extend more flexibly to merging arbitrary instances or groups. We also investigate a novel reward function and new mechanisms to deal with noise. | {
"cite_N": [
"@cite_41",
"@cite_8"
],
"mid": [
"2133378540",
"1544101230"
],
"abstract": [
"A general technique is proposed for embedding online clustering algorithms based on competitive learning in a reinforcement learning framework. The basic idea is that the clustering system can be viewed as a reinforcement learning system that learns through reinforcements to follow the clustering strategy we wish to implement. In this sense, the reinforcement guided competitive learning (RGCL) algorithm is proposed that constitutes a reinforcement-based adaptation of learning vector quantization (LVQ) with enhanced clustering capabilities. In addition, we suggest extensions of RGCL and LVQ that are characterized by the property of sustained exploration and significantly improve the performance of those algorithms, as indicated by experimental tests on well-known data sets.",
"We show how a previously derived method of using reinforcement learning for supervised clustering of a data set can lead to a sub-optimal solution if the cluster prototypes are initialised to poor positions. We then develop three novel reward functions which show great promise in overcoming poor initialization. We illustrate the results on several data sets. We then use the clustering methods with an underlying latent space which enables us to create topology preserving mappings. We illustrate this method on both real and artificial data sets."
]
} |
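How assigning a stream sample to a prototype can be treated as an RL decision (in the spirit of Likas' reinforcement-guided competitive learning, per the passage above) can be sketched roughly as follows. The soft action selection and the reward used here are simplifications for illustration, not the paper's exact formulation.

```python
import numpy as np

def rl_online_clustering(stream, prototypes, lr=0.1, temp=1.0, seed=0):
    """Toy sketch: picking a prototype for each incoming sample is a
    stochastic action, and an unsupervised reward based on the resulting
    distance reinforces the winning prototype's update."""
    rng = np.random.default_rng(seed)
    prototypes = np.array(prototypes, dtype=float)
    for x in stream:
        d = np.linalg.norm(prototypes - x, axis=1)   # distance to each prototype
        p = np.exp(-d / temp)
        p /= p.sum()                                 # soft action selection
        a = rng.choice(len(prototypes), p=p)         # action: assign x to prototype a
        reward = np.exp(-d[a])                       # closer assignment -> higher reward
        prototypes[a] += lr * reward * (x - prototypes[a])
    return prototypes

# Two well-separated 1-D clusters around 0 and 10.
protos = rl_online_clustering([0.0, 10.0] * 30, [[1.0], [9.0]])
```

With the reward scaling each competitive-learning update, prototypes drift toward the clusters they are (probabilistically) assigned to.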
1707.03986 | 2735055431 | Given a large number of unlabeled face images, face grouping aims at clustering the images into individual identities present in the data. This task remains a challenging problem despite the remarkable capability of deep learning approaches in learning face representation. In particular, grouping results can still be egregious given profile faces and a large number of uninteresting faces and noisy detections. Often, a user needs to correct the erroneous grouping manually. In this study, we formulate a novel face grouping framework that learns clustering strategy from ground-truth simulated behavior. This is achieved through imitation learning (a.k.a. apprenticeship learning or learning by watching) via inverse reinforcement learning (IRL). In contrast to existing clustering approaches that group instances by similarity, our framework makes sequential decisions to dynamically decide when to merge two face instance groups driven by short- and long-term rewards. Extensive experiments on three benchmark datasets show that our framework outperforms unsupervised and supervised baselines. | Ng and Russell @cite_34 introduced the concept of inverse reinforcement learning (IRL), which is also known as apprenticeship learning @cite_46 . The goal of IRL is to find a reward function to explain the observed behavior of an expert who acts according to an unknown policy. Inverse reinforcement learning is useful when a reward function is multivariate, i.e., it consists of several reward terms whose relative weights are unknown a priori. Imitation learning was shown effective when the supervision of a dynamic process is obtainable, e.g., in robotic navigation @cite_46 , activity understanding and forecasting @cite_21 , and visual tracking @cite_25 . | {
"cite_N": [
"@cite_46",
"@cite_34",
"@cite_25",
"@cite_21"
],
"mid": [
"1999874108",
"2061562262",
"2225887246",
""
],
"abstract": [
"We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.",
"Objective—To evaluate the pharmacokinetics of a novel commercial formulation of ivermectin after administration to goats. Animals—6 healthy adult goats. Procedure—Ivermectin (200 μg kg) was initially administered IV to each goat, and plasma samples were obtained for 36 days. After a washout period of 3 weeks, each goat received a novel commercial formulation of ivermectin (200 μg kg) by SC injection. Plasma samples were then obtained for 42 days. Drug concentrations were quantified by use of high-performance liquid chromatography with fluorescence detection. Results—Pharmacokinetics of ivermectin after IV administration were best described by a 2-compartment open model; values for main compartmental variables included volume of distribution at a steady state (9.94 L kg), clearance (1.54 L kg d), and area under the plasma concentration-time curve (AUC; 143 [ng•d] mL). Values for the noncompartmental variables included mean residence time (7.37 days), AUC (153 [ng•d] mL), and clearance (1.43 L kg d). After ...",
"Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios, such as robot navigation and autonomous driving. In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections on a new video frame with previously tracked objects. In this work, we formulate the online MOT problem as decision making in Markov Decision Processes (MDPs), where the lifetime of an object is modeled with a MDP. Learning a similarity function for data association is equivalent to learning a policy for the MDP, and the policy learning is approached in a reinforcement learning fashion which benefits from both advantages of offline-learning and online-learning for data association. Moreover, our framework can naturally handle the birth death and appearance disappearance of targets by treating them as state transitions in the MDP while leveraging existing online single object tracking methods. We conduct experiments on the MOT Benchmark [24] to verify the effectiveness of our method.",
""
]
} |
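In the linear-reward setting described above, where the reward is a weighted combination of known features, apprenticeship learning via IRL revolves around matching discounted feature expectations between expert and learner. A minimal sketch of that core quantity (names and the toy trajectories are illustrative assumptions):

```python
import numpy as np

def feature_expectations(trajectories, phi, gamma=0.9):
    """Empirical discounted feature expectations mu = E[sum_t gamma^t phi(s_t)],
    the quantity that apprenticeship learning via IRL (Abbeel & Ng) matches
    between the expert's demonstrations and the learner's policy."""
    mu = np.zeros_like(phi(trajectories[0][0]), dtype=float)
    for traj in trajectories:
        for t, s in enumerate(traj):
            mu += gamma ** t * phi(s)
    return mu / len(trajectories)

# With a linear reward r(s) = w . phi(s), the gap mu_E - mu_pi gives a
# direction for updating the reward weights w; the full algorithm iterates
# this matching step.
phi = lambda s: np.array([float(s), 1.0])
mu_expert = feature_expectations([[0, 1], [0, 1]], phi)   # expert demonstrations
mu_policy = feature_expectations([[1, 1]], phi)           # current learner policy
w = mu_expert - mu_policy
```

When the learner's feature expectations approach the expert's, its performance under the expert's unknown reward is close to the expert's, which is the guarantee the original algorithm exploits.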
1707.03718 | 2735039185 | Pixel-wise semantic segmentation for visual scene understanding not only needs to be accurate, but also efficient in order to find any use in real-time applications. Existing algorithms, even though accurate, do not focus on utilizing the parameters of neural networks efficiently. As a result they are huge in terms of parameters and number of operations, and hence slow too. In this paper, we propose a novel deep neural network architecture which allows it to learn without any significant increase in the number of parameters. Our network uses only 11.5 million parameters and 21.2 GFLOPs for processing an image of resolution 3x640x360. It gives state-of-the-art performance on CamVid and comparable results on the Cityscapes dataset. We also compare our network's processing time on an NVIDIA GPU and an embedded system device with existing state-of-the-art architectures for different image resolutions. | In @cite_18 a pre-trained VGG was used as the encoder. Pooling indices after every max-pooling step were saved and then later used for upsampling in the decoder. Later on, researchers came up with the idea of deep deconvolution networks @cite_32 @cite_12 and fully convolutional networks (FCN) combined with a skip architecture @cite_24 , which eliminated the need to save pooling indices. Networks designed for classification and categorization mostly use a fully connected layer as their classifier; in FCNs these are replaced with convolutional layers. Standard pre-trained encoders such as AlexNet @cite_14 , VGG @cite_9 , and GoogLeNet @cite_15 have been used for segmentation. In order to get precise segmentation boundaries, researchers have also tried to cascade their deep convolutional neural network (DCNN) with post-processing steps, like the use of a Conditional Random Field (CRF) @cite_8 @cite_4 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_32",
"@cite_24",
"@cite_15",
"@cite_12"
],
"mid": [
"360623563",
"",
"",
"1923697677",
"1686810756",
"2952637581",
"2952632681",
"2950179405",
"2963881378"
],
"abstract": [
"We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.",
"",
"",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained with no external data through ensemble with the fully convolutional network.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. 
We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/."
]
} |
1707.03811 | 2734403440 | We show the problem of counting homomorphisms from the fundamental group of a homology @math -sphere @math to a finite, non-abelian simple group @math is #P-complete, in the case that @math is fixed and @math is the computational input. Similarly, deciding if there is a non-trivial homomorphism is NP-complete. In both reductions, we can guarantee that every non-trivial homomorphism is a surjection. As a corollary, for any fixed integer @math , it is NP-complete to decide whether @math admits a connected @math -sheeted covering. Our construction is inspired by universality results in topological quantum computation. Given a classical reversible circuit @math , we construct @math so that evaluations of @math with certain initialization and finalization conditions correspond to homomorphisms @math . An intermediate state of @math likewise corresponds to a homomorphism @math , where @math is a pointed Heegaard surface of @math of genus @math . We analyze the action on these homomorphisms by the pointed mapping class group @math and its Torelli subgroup @math . By results of Dunfield-Thurston, the action of @math is as large as possible when @math is sufficiently large; we can pass to the Torelli group using the congruence subgroup property of @math . Our results can be interpreted as a sharp classical universality property of an associated combinatorial @math -dimensional TQFT. | We can also place th:main in the context of other counting problems involving finite groups. We summarize what is known in f:knownunknowns . Given a finite group @math , the most general analogous counting problem is the number of solutions to a system of equations that may allow constant elements of @math as well as variables. Nordh and Jonsson @cite_26 showed that this problem is @math -complete if and only if @math is non-abelian, while Goldman and Russell @cite_34 showed that the existence problem is @math -complete. 
If @math is abelian, then any finite system of equations can be solved by the Smith normal form algorithm. These authors also considered the complexity of a single equation. In this case, the existence problem has unknown complexity if @math is solvable but not nilpotent, while the counting problem has unknown complexity if @math is solvable but not abelian. | {
"cite_N": [
"@cite_26",
"@cite_34"
],
"mid": [
"1563123524",
"1999377233"
],
"abstract": [
"We study the computational complexity of counting the number of solutions to systems of equations over a fixed finite semigroup. We show that if the semigroup is a group, the problem is tractable if the group is Abelian and #P-complete otherwise. If the semigroup is a monoid (that is not a group) the problem is #P-complete. In the case of semigroups where all elements have divisors we show that the problem is tractable if the semigroup is a direct product of an Abelian group and a rectangular band, and #P-complete otherwise. The class of semigroups where all elements have divisors contains most of the interesting semigroups e.g. regular semigroups. These results are proved by the use of powerful techniques from universal algebra.",
"We study the computational complexity of solving systems of equations over a finite group. An equation over a group G is an expression of the form w1.w2....wk = 1G, where each wi is either a variable, an inverted variable, or a group constant and 1G is the identity element of G . A solution to such an equation is an assignment of the variables (to values in G) which realizes the equality. A system of equations is a collection of such equations; a solution is then an assignment which simultaneously realizes each equation. We show that the problem of determining if a (single) equation has a solution is NP-complete for all nonsolvable groups G. For nilpotent groups, this same problem is shown to be in P. The analogous problem for systems of such equations is shown to be NP-complete if G is non-Abelian, and in P otherwise. Finally, we observe some connections between these problems and the theory of nonuniform automata."
]
} |
1707.03811 | 2734403440 | We show the problem of counting homomorphisms from the fundamental group of a homology @math -sphere @math to a finite, non-abelian simple group @math is #P-complete, in the case that @math is fixed and @math is the computational input. Similarly, deciding if there is a non-trivial homomorphism is NP-complete. In both reductions, we can guarantee that every non-trivial homomorphism is a surjection. As a corollary, for any fixed integer @math , it is NP-complete to decide whether @math admits a connected @math -sheeted covering. Our construction is inspired by universality results in topological quantum computation. Given a classical reversible circuit @math , we construct @math so that evaluations of @math with certain initialization and finalization conditions correspond to homomorphisms @math . An intermediate state of @math likewise corresponds to a homomorphism @math , where @math is a pointed Heegaard surface of @math of genus @math . We analyze the action on these homomorphisms by the pointed mapping class group @math and its Torelli subgroup @math . By results of Dunfield-Thurston, the action of @math is as large as possible when @math is sufficiently large; we can pass to the Torelli group using the congruence subgroup property of @math . Our results can be interpreted as a sharp classical universality property of an associated combinatorial @math -dimensional TQFT. | Our approach to th:main (and that of Krovi and Russell for their results) is inspired by quantum computation and topological quantum field theory. Every unitary modular tensor category (UMTC) @math yields a unitary 3-dimensional topological quantum field theory @cite_37 @cite_31 @cite_42 . The topological quantum field theory assigns a vector space @math , or , to every oriented, closed surface. 
It also assigns a state space @math to every oriented, closed surface with @math boundary circles, where @math is an object in @math interpreted as the "color" of each boundary circle. Each state space @math has a projective action of the mapping class group @math . (In fact the unpointed mapping class group @math acts, but we will keep the basepoint for convenience.) These mapping class group actions then extend to invariants of 3-manifolds and links in 3-manifolds. | {
"cite_N": [
"@cite_37",
"@cite_42",
"@cite_31"
],
"mid": [
"2088353214",
"",
"2080743555"
],
"abstract": [
"The generalization of Jones polynomial of links to the case of graphs in R^3 is presented. It is constructed as the functor from the category of graphs to the category of representations of the quantum groups.",
"",
"The aim of this paper is to construct new topological invariants of compact oriented 3-manifolds and of framed links in such manifolds. Our invariant of (a link in) a closed oriented 3-manifold is a sequence of complex numbers parametrized by complex roots of 1. For a framed link in S^3 the terms of the sequence are equal to the values of the (suitably parametrized) Jones polynomial of the link in the corresponding roots of 1. In the case of manifolds with boundary our invariant is a (sequence of) finite dimensional complex linear operators. This produces from each root of unity q a 3-dimensional topological quantum field theory"
]
} |
1707.03811 | 2734403440 | We show the problem of counting homomorphisms from the fundamental group of a homology @math -sphere @math to a finite, non-abelian simple group @math is #P-complete, in the case that @math is fixed and @math is the computational input. Similarly, deciding if there is a non-trivial homomorphism is NP-complete. In both reductions, we can guarantee that every non-trivial homomorphism is a surjection. As a corollary, for any fixed integer @math , it is NP-complete to decide whether @math admits a connected @math -sheeted covering. Our construction is inspired by universality results in topological quantum computation. Given a classical reversible circuit @math , we construct @math so that evaluations of @math with certain initialization and finalization conditions correspond to homomorphisms @math . An intermediate state of @math likewise corresponds to a homomorphism @math , where @math is a pointed Heegaard surface of @math of genus @math . We analyze the action on these homomorphisms by the pointed mapping class group @math and its Torelli subgroup @math . By results of Dunfield-Thurston, the action of @math is as large as possible when @math is sufficiently large; we can pass to the Torelli group using the congruence subgroup property of @math . Our results can be interpreted as a sharp classical universality property of an associated combinatorial @math -dimensional TQFT. | Finally, the UMTC @math is universal for quantum computation if the image of the mapping class group action on suitable choices of @math is large enough to simulate quantum circuits on @math qubits, with @math . If the action is only large enough to simulate classical circuits on @math bits, then it is still classically universal. These universality results are important for the fault-tolerance problem in quantum computation @cite_22 @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_22"
],
"mid": [
"1972018751",
"2164171842"
],
"abstract": [
"For a 3-manifold with triangulated boundary, the Turaev–Viro topological invariant can be interpreted as a quantum error-correcting code. The code has local stabilizers, identified by Levin and Wen, on a qudit lattice. Kitaev’s toric code arises as a special case. The toric code corresponds to an abelian anyon model, and therefore requires out-of-code operations to obtain universal quantum computation. In contrast, for many categories, such as the Fibonacci category, the Turaev–Viro code realizes a non-abelian anyon model. A universal set of fault-tolerant operations can be implemented by deforming the code with local gates, in order to implement anyon braiding. We identify the anyons in the code space, and present schemes for initialization, computation and measurement. This provides a family of constructions for fault-tolerant quantum computation that are closely related to topological quantum computation, but for which the fault tolerance is implemented in software rather than coming from a physical medium.",
"We show that the topological modular functor from Witten-Chern-Simons theory is universal for quantum computation in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor's state space. A computational model based on Chern-Simons theory at a fifth root of unity is defined and shown to be polynomially equivalent to the quantum circuit model. The chief technical advance: the density of the irreducible sectors of the Jones representation has topological implications which will be considered elsewhere."
]
} |
1707.03811 | 2734403440 | We show the problem of counting homomorphisms from the fundamental group of a homology @math -sphere @math to a finite, non-abelian simple group @math is #P-complete, in the case that @math is fixed and @math is the computational input. Similarly, deciding if there is a non-trivial homomorphism is NP-complete. In both reductions, we can guarantee that every non-trivial homomorphism is a surjection. As a corollary, for any fixed integer @math , it is NP-complete to decide whether @math admits a connected @math -sheeted covering. Our construction is inspired by universality results in topological quantum computation. Given a classical reversible circuit @math , we construct @math so that evaluations of @math with certain initialization and finalization conditions correspond to homomorphisms @math . An intermediate state of @math likewise corresponds to a homomorphism @math , where @math is a pointed Heegaard surface of @math of genus @math . We analyze the action on these homomorphisms by the pointed mapping class group @math and its Torelli subgroup @math . By results of Dunfield-Thurston, the action of @math is as large as possible when @math is sufficiently large; we can pass to the Torelli group using the congruence subgroup property of @math . Our results can be interpreted as a sharp classical universality property of an associated combinatorial @math -dimensional TQFT. | One early, important UMTC is the (truncated) category @math of quantum representations of @math at a principal root of unity. This category yields the Jones polynomial for a link @math (taking @math , the first irreducible object) and the Jones-Witten-Reshetikhin-Turaev invariant of a closed 3-manifold. In separate papers, Freedman, Larsen, and Wang showed that @math and @math are both quantumly universal representations of @math and @math @cite_22 @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_22"
],
"mid": [
"2128124629",
"2164171842"
],
"abstract": [
"Introduction 1. The two-eigenvalue problem 2. Hecke algebra representations of braid groups 3. Duality of Jones-Wenzl representations 4. Closed images of Jones-Wenzl sectors 5. Distribution of evaluations of Jones polynomials 6. Fibonacci representations",
"We show that the topological modular functor from Witten-Chern-Simons theory is universal for quantum computation in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor's state space. A computational model based on Chern-Simons theory at a fifth root of unity is defined and shown to be polynomially equivalent to the quantum circuit model. The chief technical advance: the density of the irreducible sectors of the Jones representation has topological implications which will be considered elsewhere."
]
} |
1707.03811 | 2734403440 | We show the problem of counting homomorphisms from the fundamental group of a homology @math -sphere @math to a finite, non-abelian simple group @math is #P-complete, in the case that @math is fixed and @math is the computational input. Similarly, deciding if there is a non-trivial homomorphism is NP-complete. In both reductions, we can guarantee that every non-trivial homomorphism is a surjection. As a corollary, for any fixed integer @math , it is NP-complete to decide whether @math admits a connected @math -sheeted covering. Our construction is inspired by universality results in topological quantum computation. Given a classical reversible circuit @math , we construct @math so that evaluations of @math with certain initialization and finalization conditions correspond to homomorphisms @math . An intermediate state of @math likewise corresponds to a homomorphism @math , where @math is a pointed Heegaard surface of @math of genus @math . We analyze the action on these homomorphisms by the pointed mapping class group @math and its Torelli subgroup @math . By results of Dunfield-Thurston, the action of @math is as large as possible when @math is sufficiently large; we can pass to the Torelli group using the congruence subgroup property of @math . Our results can be interpreted as a sharp classical universality property of an associated combinatorial @math -dimensional TQFT. | Universality also implies that any approximation of these invariants that could be useful for computational topology is @math -hard @cite_24 @cite_9 . Note that exact evaluation of the Jones polynomial was earlier shown to be @math -hard without quantum computation methods @cite_0 . | {
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_0"
],
"mid": [
"2962824725",
"2032980094",
"2039763834"
],
"abstract": [
"Freedman, Kitaev, and Wang (2002), and later Aharonov, Jones, and Landau (2009), established a quantum algorithm to \"additively\" approximate the Jones polynomial V(L;t) at any principal root of unity t. The strength of this additive approximation depends exponentially on the bridge number of the link presentation. Freedman, Larsen, and Wang (2002) established that the approximation is universal for quantum computation at a non-lattice, principal root of unity. We show that any value-distinguishing approximation of the Jones polynomial at these non-lattice roots of unity is #P-hard. Given the power to decide whether |V(L;t)| >= b for fixed constants 0 < b < 1, T(G; x; y) is #P-hard to approximate within a factor of c even for planar graphs G. Along the way, we clarify and generalize both Aaronson's theorem and the Solovay-Kitaev theorem.",
"Recently, Frumkin (9) pointed out that none of the well-known algorithms that transform an integer matrix into Smith (16) or Hermite (12) normal form is known to be polynomially bounded in its running time. In fact, Blankinship (3) noticed--as an empirical fact--that intermediate numbers may become quite large during standard calculations of these canonical forms. Here we present new algorithms in which both the number of algebraic operations and the number of (binary) digits of all intermediate numbers are bounded by polynomials in the length of the input data (assumed to be encoded in binary). These algorithms also find the multiplier-matrices K, U' and K' such that AK and U'AK' are the Hermite and Smith normal forms of the given matrix A. This provides the first proof that multipliers with small enough entries exist. 1. Introduction. Every nonsingular integer matrix can be transformed into a lower triangular integer matrix using elementary column operations. This was shown by Hermite ((12), Theorem 1 below). Smith ((16), Theorem 3 below) proved that any integer matrix can be diagonalized using elementary row and column operations. The Smith and Hermite normal forms play an important role in the study of rational matrices (calculating their characteristic equations), polynomial matrices (determining the latent roots), algebraic group theory (Newman (15)), system theory (Heymann and Thorpe (13)) and integer programming (Garfinkel and Nemhauser (10)). Algorithms that compute Smith and Hermite normal forms of an integer matrix are given (among others) by Barnette and Pace (1), Bodewig (5), Bradley (7), Frumkin (9) and Hu (14). The methods of Hu, Bodewig and Bradley are based on the explicit calculation of the greatest common divisor (GCD) and a set of multipliers whereas other algorithms ((1)) perform GCD calculations implicitly. As Frumkin (9) pointed out, none of these algorithms is known to be polynomial. 
In transforming an integer matrix into Smith or Hermite normal form using known techniques, the number of digits of intermediate numbers does not appear to be bounded by a polynomial in the length of the input data as was pointed out by Blankinship (3), (4) and Frumkin (9).",
"We show that determining the Jones polynomial of an alternating link is #P-hard. This is a special case of a wide range of results on the general intractability of the evaluation of the Tutte polynomial T(M;x,y) of a matroid M except for a few listed special points and curves of the (x,y)-plane"
]
} |
1707.03811 | 2734403440 | We show the problem of counting homomorphisms from the fundamental group of a homology @math -sphere @math to a finite, non-abelian simple group @math is #P-complete, in the case that @math is fixed and @math is the computational input. Similarly, deciding if there is a non-trivial homomorphism is NP-complete. In both reductions, we can guarantee that every non-trivial homomorphism is a surjection. As a corollary, for any fixed integer @math , it is NP-complete to decide whether @math admits a connected @math -sheeted covering. Our construction is inspired by universality results in topological quantum computation. Given a classical reversible circuit @math , we construct @math so that evaluations of @math with certain initialization and finalization conditions correspond to homomorphisms @math . An intermediate state of @math likewise corresponds to a homomorphism @math , where @math is a pointed Heegaard surface of @math of genus @math . We analyze the action on these homomorphisms by the pointed mapping class group @math and its Torelli subgroup @math . By results of Dunfield-Thurston, the action of @math is as large as possible when @math is sufficiently large; we can pass to the Torelli group using the congruence subgroup property of @math . Our results can be interpreted as a sharp classical universality property of an associated combinatorial @math -dimensional TQFT. | Mochon's result is evidence, but not proof, that @math is @math -complete for every fixed, non-solvable @math and every suitable conjugacy class @math that satisfies his theorem. His result implies that if we constrain the associated braid group action with arbitrary initialization and finalization conditions, then counting the number of solutions to the constraints is parsimoniously @math -complete. 
However, if we use a braid to describe a link, for instance with a plat presentation @cite_24 , then the description yields specific initialization and finalization conditions that must be handled algorithmically to obtain hardness results. Recall that in our proof of th:main , the state in @math is initialized and finalized using the handlebodies @math and @math . If we could choose any initialization and finalization conditions whatsoever, then it would be much easier to establish (weakly parsimonious) @math -hardness; it would take little more work than to cite th:dt . | {
"cite_N": [
"@cite_24"
],
"mid": [
"2962824725"
],
"abstract": [
"Freedman, Kitaev, and Wang (2002), and later Aharonov, Jones, and Landau (2009), established a quantum algorithm to \"additively\" approximate the Jones polynomial V(L;t) at any principal root of unity t. The strength of this additive approximation depends exponentially on the bridge number of the link presentation. Freedman, Larsen, and Wang (2002) established that the approximation is universal for quantum computation at a non-lattice, principal root of unity. We show that any value-distinguishing approximation of the Jones polynomial at these non-lattice roots of unity is #P-hard. Given the power to decide whether |V(L;t)| >= b for fixed constants 0 < b < 1, T(G; x; y) is #P-hard to approximate within a factor of c even for planar graphs G. Along the way, we clarify and generalize both Aaronson's theorem and the Solovay-Kitaev theorem."
]
} |
1707.03527 | 2735810016 | Selective bulk analyses, such as statistical learning on temporal spatial data, are fundamental to a wide range of contemporary data analysis. However, with the increasingly larger data-sets, such as weather data and marketing transactions, the data organization access becomes more challenging in selective bulk data processing with the use of current big data processing frameworks such as Spark or keyvalue stores. In this paper, we propose a method to optimize selective bulk analysis in big data processing and referred to as Oseba. Oseba maintains a super index for the data organization in memory to support fast lookup through targeting the data involved with each selective analysis program. Oseba is able to save memory as well as computation in comparison to the default data processing frameworks. | Many frameworks and systems have been proposed to manage and process big data @cite_12 @cite_15 @cite_8 @cite_6 @cite_10 . The Hadoop Distributed File System (HDFS) is an open-source community response to the Google File System (GFS), designed specifically for MapReduce-style workloads @cite_12 . Dryad @cite_0 and Spark are two other frameworks that support big data processing with interfaces similar to MapReduce. Spark allows data to be processed repeatedly and interactively in distributed memory, and Sparkler @cite_9 extends Spark to support distributed stochastic gradient descent. However, these systems usually apply operations to all data items, which is inconvenient for selective bulk data analysis since only a subset of the data is involved. Pregel @cite_7 @cite_4 supports iterative graph applications, and HaLoop is an iterative MapReduce runtime. Moreover, there are systems that support fine-grained data processing, such as key-value stores @cite_13 , databases, and Piccolo @cite_2 ; they provide interfaces for fine-grained updates to individual data items or cells. 
However, these frameworks and systems incur extra cost for maintaining reliability, as discussed in the Spark work, while bulk data analysis is more about coarse-grained data processing. | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_12"
],
"mid": [
"",
"1602474204",
"2170616854",
"",
"2149941839",
"",
"2100830825",
"200298483",
"",
"2558192069",
"2173213060"
],
"abstract": [
"",
"The Bulk Synchronous Parallel (BSP) model, which divides a graph algorithm into multiple supersteps, has become extremely popular in distributed graph processing systems. However, the large number of network messages exchanged in each superstep of the graph algorithm creates a long period of idle time, which we refer to as a communication delay. Furthermore, the BSP's global synchronization barrier does not allow computation in the next superstep to be scheduled during this communication delay. This communication delay makes up a large percentage of the overall processing time of a superstep. While most recent research has focused on reducing the number of network messages, communication delay is still a determining factor for overall performance. In this paper, we add a runtime communication and computation scheduler into current graph BSP implementations. This scheduler will move some computation from the next superstep to the communication phase in the current superstep to mitigate the communication delay. Finally, we prototyped our system, Zebra, on Apache Hama, which is an open source clone of the classic Google Pregel. By running a set of graph algorithms on an in-house cluster, our evaluation shows that our system could completely eliminate the communication delay in the best case and can achieve an average 2X speedup over Hama.",
"Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs - in some cases billions of vertices, trillions of edges - poses challenges to their efficient processing. In this paper we present a computational model suitable for this task. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. This vertex-centric approach is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind an abstract API. The result is a framework for processing large graphs that is expressive and easy to program.",
"",
"Low-rank matrix factorization has recently been applied with great success on matrix completion problems for applications like recommendation systems, link predictions for social networks, and click prediction for web search. However, as this approach is applied to increasingly larger datasets, such as those encountered in web-scale recommender systems like Netflix and Pandora, the data management aspects quickly become challenging and form a road-block. In this paper, we introduce a system called Sparkler to solve such large instances of low rank matrix factorizations. Sparkler extends Spark, an existing platform for running parallel iterative algorithms on datasets that fit in the aggregate main memory of a cluster. Sparkler supports distributed stochastic gradient descent as an approach to solving the factorization problem -- an iterative technique that has been shown to perform very well in practice. We identify the shortfalls of Spark in solving large matrix factorization problems, especially when running on the cloud, and solve this by introducing a novel abstraction called \"Carousel Maps\" (CMs). CMs are well suited to storing large matrices in the aggregate memory of a cluster and can efficiently support the operations performed on them during distributed stochastic gradient descent. We describe the design, implementation, and the use of CMs in Sparkler programs. Through a variety of experiments, we demonstrate that Sparkler is faster than Spark by 4x to 21x, with bigger advantages for larger problems. Equally importantly, we show that this can be done without imposing any changes to the ease of programming. We argue that Sparkler provides a convenient and efficient extension to Spark for solving matrix factorization problems on very large datasets.",
"",
"Dryad is a general-purpose distributed execution engine for coarse-grain data-parallel applications. A Dryad application combines computational \"vertices\" with communication \"channels\" to form a dataflow graph. Dryad runs the application by executing the vertices of this graph on a set of available computers, communicating as appropriate through files, TCP pipes, and shared-memory FIFOs. The vertices provided by the application developer are quite simple and are usually written as sequential programs with no thread creation or locking. Concurrency arises from Dryad scheduling vertices to run simultaneously on multiple computers, or on multiple CPU cores within a computer. The application can discover the size and placement of data at run time, and modify the graph as the computation progresses to make efficient use of the available resources. Dryad is designed to scale from powerful multi-core single computers, through small clusters of computers, to data centers with thousands of computers. The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.",
"Piccolo is a new data-centric programming model for writing parallel in-memory applications in data centers. Unlike existing data-flow models, Piccolo allows computation running on different machines to share distributed, mutable state via a key-value table interface. Piccolo enables efficient application implementations. In particular, applications can specify locality policies to exploit the locality of shared state access and Piccolo's run-time automatically resolves write-write conflicts using user-defined accumulation functions. Using Piccolo, we have implemented applications for several problem domains, including the PageRank algorithm, k-means clustering and a distributed crawler. Experiments using 100 Amazon EC2 instances and a 12 machine cluster show Piccolo to be faster than existing data flow models for many problems, while providing similar fault-tolerance guarantees and a convenient programming interface.",
"",
"In this paper, we study the problem of sub-dataset analysis over distributed file systems, e.g., the Hadoop file system. Our experiments show that the sub-datasets distribution over HDFS blocks, which is hidden by HDFS, can often cause corresponding analyses to suffer from a seriously imbalanced or inefficient parallel execution. Specifically, the content clustering of sub-datasets results in some computational nodes carrying out much more workload than others; furthermore, it leads to inefficient sampling of sub-datasets, as analysis programs will often read large amounts of irrelevant data. We conduct a comprehensive analysis on how imbalanced computing patterns and inefficient sampling occur. We then propose a storage distribution aware method to optimize sub-dataset analysis over distributed storage systems referred to as DataNet. First, we propose an efficient algorithm to obtain the meta-data of sub-dataset distributions. Second, we design an elastic storage structure called ElasticMap based on the HashMap and BloomFilter techniques to store the meta-data. Third, we employ distribution-aware algorithms for sub-dataset applications to achieve balanced and efficient parallel execution. Our proposed method can benefit different sub-dataset analyses with various computational requirements. Experiments are conducted on PRObE's Marmot 128-node cluster testbed and the results show the performance benefits of DataNet.",
"MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day."
]
} |
1707.03602 | 2736167582 | The Semantic Web began to emerge as its standards and technologies developed rapidly in recent years. The continuing development of Semantic Web technologies has facilitated publishing explicit semantics with data on the Web in the RDF data model. This study proposes a semantic search framework to support efficient keyword-based semantic search on RDF data utilizing near neighbor explorations. The framework augments the search results with the resources in close proximity by utilizing the entity type semantics. Along with the search results, the system generates a relevance confidence score measuring the inferred semantic relatedness of returned entities based on the degree of similarity. Furthermore, evaluations assessing the effectiveness of the framework and the accuracy of the results are presented. | DBpedia @cite_27 , Freebase @cite_11 and many other large data sources provide a formal query endpoint for precise searching on RDF data. There have been many approaches, e.g. @cite_31 , adopting semantic searches based on user-provided queries in a formal query language such as SPARQL. Formal query language-based systems have potential user adoption issues, as they can be difficult to use even for technical users.
"cite_N": [
"@cite_27",
"@cite_31",
"@cite_11"
],
"mid": [
"102708294",
"1812636409",
"2094728533"
],
"abstract": [
"DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human- and machine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.",
"RDF and RDF Schema are two W3C standards aimed at enriching the Web with machine-processable semantic data.We have developed Sesame, an architecture for efficient storage and expressive querying of large quantities of metadata in RDF and RDF Schema. Sesame's design and implementation are independent from any specific storage device. Thus, Sesame can be deployed on top of a variety of storage devices, such as relational databases, triple stores, or object-oriented databases, without having to change the query engine or other functional modules. Sesame offers support for concurrency control, independent export of RDF and RDFS information and a query engine for RQL, a query language for RDF that offers native support for RDF Schema semantics. We present an overview of Sesame as a generic architecture, as well as its implementation and our first experiences with this implementation.",
"Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications."
]
} |
1707.03602 | 2736167582 | The Semantic Web began to emerge as its standards and technologies developed rapidly in recent years. The continuing development of Semantic Web technologies has facilitated publishing explicit semantics with data on the Web in the RDF data model. This study proposes a semantic search framework to support efficient keyword-based semantic search on RDF data utilizing near neighbor explorations. The framework augments the search results with the resources in close proximity by utilizing the entity type semantics. Along with the search results, the system generates a relevance confidence score measuring the inferred semantic relatedness of returned entities based on the degree of similarity. Furthermore, evaluations assessing the effectiveness of the framework and the accuracy of the results are presented. | A large number of these approaches, which translate natural language questions into structured queries, assume that certain patterns or templates exist in the query keywords. They typically generate SPARQL queries by using a parser that extracts queries from natural language questions @cite_33 , a mechanism that derives the queries from an ontology or knowledge base @cite_9 , or a supervised machine learning mechanism trained on natural language questions @cite_0 .
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_33"
],
"mid": [
"1540862445",
"1722846244",
"2268155668"
],
"abstract": [
"An advantage of Semantic Web standards like RDF and OWL is their flexibility in modifying the structure of a knowledge base. To turn this flexibility into a practical advantage, it is of high importance to have tools and methods, which offer similar flexibility in exploring information in a knowledge base. This is closely related to the ability to easily formulate queries over those knowledge bases. We explain benefits and drawbacks of existing techniques in achieving this goal and then present the QTL algorithm, which fills a gap in research and practice. It uses supervised machine learning and allows users to ask queries without knowing the schema of the underlying knowledge base beforehand and without expertise in the SPARQL query language. We then present the AutoSPARQL user interface, which implements an active learning approach on top of QTL. Finally, we evaluate the approach based on a benchmark data set for question answering over Linked Data.",
"Accessing structured data such as that encoded in ontologies and knowledge bases can be done using either syntactically complex formal query languages like SPARQL or complicated form interfaces that require expensive customisation to each particular application domain. This paper presents the QuestIO system - a natural language interface for accessing structured information, that is domain independent and easy to use without training. It aims to bring the simplicity of Google's search interface to conceptual retrieval by automatically converting short conceptual queries into formal ones, which can then be executed against any semantic repository. QuestIO was developed specifically to be robust with regard to language ambiguities, incomplete or syntactically ill-formed queries, by harnessing the structure of ontologies, fuzzy string matching, and ontology-motivated similarity metrics.",
"Our purpose is to hide the complexity of formulating a query expressed in a graph query language such as SPARQL. We propose a mechanism allowing queries to be expressed in a very simple pivot language, mainly composed of keywords and relations between keywords. Our system associates the keywords with the corresponding elements of the ontology (classes, relations, instances). Then it selects pre-written query patterns, and instantiates them with regard to the keywords of the initial query. Several possible queries are generated, ranked and then shown to the user. These queries are presented by means of natural language sentences. The user then selects the query he or she is interested in and the SPARQL query is built."
]
} |
1707.03816 | 2734970223 | Visual tracking is challenging as target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion. In this paper, we propose to exploit the rich hierarchical features of deep convolutional neural networks to improve the accuracy and robustness of visual tracking. Deep neural networks trained on object recognition datasets consist of multiple convolutional layers. These layers encode target appearance with different levels of abstraction. For example, the outputs of the last convolutional layers encode the semantic information of targets and such representations are invariant to significant appearance variations. However, their spatial resolutions are too coarse to precisely localize the target. In contrast, features from earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchical features of convolutional layers as a nonlinear counterpart of an image pyramid representation and explicitly exploit these multiple levels of abstraction to represent target objects. Specifically, we learn adaptive correlation filters on the outputs from each convolutional layer to encode the target appearance. We infer the maximum response of each layer to locate targets in a coarse-to-fine manner. To further handle the issues with scale estimation and target re-detection from tracking failures caused by heavy occlusion or moving out of the view, we conservatively learn another correlation filter that maintains a long-term memory of target appearance as a discriminative classifier. Extensive experimental results on large-scale benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art tracking methods. | We discuss tracking methods closely related to this work in this section. A comprehensive review on visual tracking can be found in @cite_20 @cite_36 @cite_77 @cite_86 . | {
"cite_N": [
"@cite_36",
"@cite_77",
"@cite_86",
"@cite_20"
],
"mid": [
"",
"2126302311",
"2158827467",
"1995903777"
],
"abstract": [
"",
"There is a large variety of trackers, which have been proposed in the literature during the last two decades with some mixed success. Object tracking in realistic scenarios is a difficult problem, therefore, it remains a most active area of research in computer vision. A good tracker should perform well in a large number of videos involving illumination changes, occlusion, clutter, camera motion, low contrast, specularities, and at least six more aspects. However, the performance of proposed trackers have been evaluated typically on less than ten videos, or on the special purpose datasets. In this paper, we aim to evaluate trackers systematically and experimentally on 315 video fragments covering above aspects. We selected a set of nineteen trackers to include a wide variety of algorithms often cited in literature, supplemented with trackers appearing in 2010 and 2011 for which the code was publicly available. We demonstrate that trackers can be evaluated objectively by survival curves, Kaplan Meier statistics, and Grubs testing. We find that in the evaluation practice the F-score is as effective as the object tracking accuracy (OTA) score. The analysis under a large variety of circumstances provides objective insight into the strengths and weaknesses of trackers.",
"This paper addresses the problem of single-target tracker performance evaluation. We consider the performance measures, the dataset and the evaluation system to be the most important components of tracker evaluation and propose requirements for each of them. The requirements are the basis of a new evaluation methodology that aims at a simple and easily interpretable tracker comparison. The ranking-based methodology addresses tracker equivalence in terms of statistical significance and practical differences. A fully-annotated dataset with per-frame annotations with several visual attributes is introduced. The diversity of its visual properties is maximized in a novel way by clustering a large number of videos according to their visual attributes. This makes it the most sophistically constructed and annotated dataset to date. A multi-platform evaluation system allowing easy integration of third-party trackers is presented as well. The proposed evaluation methodology was tested on the VOT2014 challenge on the new dataset and 38 trackers, making it the largest benchmark to date. Most of the tested trackers are indeed state-of-the-art since they outperform the standard baselines, resulting in a highly-challenging benchmark. An exhaustive analysis of the dataset from the perspective of tracking difficulty is carried out. To facilitate tracker comparison a new performance visualization technique is proposed.",
"The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects."
]
} |
1707.03816 | 2734970223 | Visual tracking is challenging as target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion. In this paper, we propose to exploit the rich hierarchical features of deep convolutional neural networks to improve the accuracy and robustness of visual tracking. Deep neural networks trained on object recognition datasets consist of multiple convolutional layers. These layers encode target appearance with different levels of abstraction. For example, the outputs of the last convolutional layers encode the semantic information of targets and such representations are invariant to significant appearance variations. However, their spatial resolutions are too coarse to precisely localize the target. In contrast, features from earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchical features of convolutional layers as a nonlinear counterpart of an image pyramid representation and explicitly exploit these multiple levels of abstraction to represent target objects. Specifically, we learn adaptive correlation filters on the outputs from each convolutional layer to encode the target appearance. We infer the maximum response of each layer to locate targets in a coarse-to-fine manner. To further handle the issues with scale estimation and target re-detection from tracking failures caused by heavy occlusion or moving out of the view, we conservatively learn another correlation filter that maintains a long-term memory of target appearance as a discriminative classifier. Extensive experimental results on large-scale benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art tracking methods. | Tracking by Deep Neural Networks. The recent years have witnessed significant advances in deep neural networks on a wide range of computer vision problems. 
However, considerably less attention has been paid to applying deep networks to visual tracking. One potential reason is that the training data is very limited, as the target state (i.e., position and scale) is only available in the first frame. Several methods address this issue by learning a generic representation offline from auxiliary training images. @cite_49 learn a specific feature extractor with CNNs from an offline training set (about 20000 image pairs) for human tracking. Wang and Yeung @cite_12 pre-train a multi-layer autoencoder network on part of the 80M Tiny Images dataset @cite_70 in an unsupervised fashion. Using a video repository @cite_83 , @cite_7 learn video features by imposing temporal constraints. To alleviate the issues with offline training, the DeepTrack @cite_51 and CNT @cite_72 methods incrementally learn target-specific CNNs without pre-training. Note that existing tracking methods based on deep networks @cite_51 @cite_7 @cite_72 use two or fewer convolutional layers to represent target objects, and do not fully exploit rich hierarchical features. | {
"cite_N": [
"@cite_7",
"@cite_70",
"@cite_72",
"@cite_83",
"@cite_49",
"@cite_51",
"@cite_12"
],
"mid": [
"2066757459",
"2145607950",
"2280226538",
"",
"2168117308",
"2069332137",
"2118097920"
],
"abstract": [
"In this paper, we propose an approach to learn hierarchical features for visual object tracking. First, we offline learn features robust to diverse motion patterns from auxiliary video sequences. The hierarchical features are learned via a two-layer convolutional neural network. Embedding the temporal slowness constraint in the stacked architecture makes the learned features robust to complicated motion transformations, which is important for visual object tracking. Then, given a target video sequence, we propose a domain adaptation module to online adapt the pre-learned features according to the specific target object. The adaptation is conducted in both layers of the deep feature learning module so as to include appearance information of the specific target object. As a result, the learned hierarchical features can be robust to both complicated motion transformations and appearance changes of target objects. We integrate our feature learning algorithm into three tracking methods. Experimental results demonstrate that significant improvement can be achieved using our learned hierarchical features, especially on video sequences with complicated motion transformations.",
"With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, we explore this world with the aid of a large dataset of 79,302,017 images collected from the Internet. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the Wordnet lexical database. Hence the image database gives a comprehensive coverage of all object categories and scenes. The semantic information from Wordnet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors.",
"Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However, the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper, we present that, even without offline training with a large amount of auxiliary data, simple two-layer convolutional networks can be powerful enough to learn robust representations for visual tracking. In the first frame, we extract a set of normalized patches from the target region as fixed filters, which integrate a series of adaptive contextual filters surrounding the target to define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps together form a global representation, via which the inner geometric layout of the target is also preserved. A simple soft shrinkage method that suppresses noisy values below an adaptive threshold is employed to de-noise the global representation. Our convolutional networks have a lightweight structure and perform favorably against several state-of-the-art methods on the recent tracking benchmark data set with 50 challenging videos.",
"",
"In this paper, we treat tracking as a learning problem of estimating the location and the scale of an object given its previous location, scale, as well as current and previous image frames. Given a set of examples, we train convolutional neural networks (CNNs) to perform the above estimation task. Different from other learning methods, the CNNs learn both spatial and temporal features jointly from image pairs of two adjacent frames. We introduce multiple pathways in the CNN to better fuse local and global information. A creative shift-variant CNN architecture is designed so as to alleviate the drift problem when the distracting objects are similar to the target in a cluttered environment. Furthermore, we employ CNNs to estimate the scale through the accurate localization of some key points. These techniques are object-independent so that the proposed method can be applied to track other types of object. The capability of the tracker of handling complex situations is demonstrated in many testing sequences.",
"Defining hand-crafted feature representations needs expert knowledge, requires timeconsuming manual adjustments, and besides, it is arguably one of the limiting factors of object tracking. In this paper, we propose a novel solution to automatically relearn the most useful feature representations during the tracking process in order to accurately adapt appearance changes, pose and scale variations while preventing from drift and tracking failures. We employ a candidate pool of multiple Convolutional Neural Networks (CNNs) as a data-driven model of different instances of the target object. Individually, each CNN maintains a specific set of kernels that favourably discriminate object patches from their surrounding background using all available low-level cues. These kernels are updated in an online manner at each frame after being trained with just one instance at the initialization of the corresponding CNN. Given a frame, the most promising CNNs in the pool are selected to evaluate the hypotheses for the target object. The hypothesis with the highest score is assigned as the current detection window and the selected models are retrained using a warm-start back-propagation which optimizes a structural loss function. In addition to the model-free tracker, we introduce a class-specific version of the proposed method that is tailored for tracking of a particular object class such as human faces. Our experiments on a large selection of videos from the recent benchmarks demonstrate that our method outperforms the existing state-of-the-art algorithms and rarely loses the track of the target object.",
"In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked de-noising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU)."
]
} |
1707.03816 | 2734970223 | Visual tracking is challenging as target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion. In this paper, we propose to exploit the rich hierarchical features of deep convolutional neural networks to improve the accuracy and robustness of visual tracking. Deep neural networks trained on object recognition datasets consist of multiple convolutional layers. These layers encode target appearance with different levels of abstraction. For example, the outputs of the last convolutional layers encode the semantic information of targets and such representations are invariant to significant appearance variations. However, their spatial resolutions are too coarse to precisely localize the target. In contrast, features from earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchical features of convolutional layers as a nonlinear counterpart of an image pyramid representation and explicitly exploit these multiple levels of abstraction to represent target objects. Specifically, we learn adaptive correlation filters on the outputs from each convolutional layer to encode the target appearance. We infer the maximum response of each layer to locate targets in a coarse-to-fine manner. To further handle the issues with scale estimation and target re-detection from tracking failures caused by heavy occlusion or moving out of the view, we conservatively learn another correlation filter that maintains a long-term memory of target appearance as a discriminative classifier. Extensive experimental results on large-scale benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art tracking methods. | Tracking by Region Proposals. 
Region proposal methods @cite_16 @cite_32 @cite_2 @cite_27 provide candidate regions (in bounding boxes) for object detection and recognition. By generating a relatively small number of candidate regions (compared to exhaustive sliding windows), region proposal methods enable the use of CNNs for classification @cite_40 . Several recent methods exploit region proposal algorithms for visual tracking. @cite_65 compute the objectness scores @cite_32 to select highly confident sampling proposals as tracking outputs. @cite_0 apply region proposals to refine the estimated position and scale changes of target objects. @cite_76 improve the Struck @cite_73 tracker using region proposals. Similar to @cite_0 @cite_76 , we use region proposals to generate candidate bounding boxes. The main difference is that we learn a correlation filter with long-term memory of target appearance to compute the confidence score of every proposal. In addition, we tailor the EdgeBox @cite_32 method to generate two types of proposals for scale estimation and target re-detection, respectively. | {
"cite_N": [
"@cite_73",
"@cite_65",
"@cite_32",
"@cite_0",
"@cite_40",
"@cite_27",
"@cite_2",
"@cite_76",
"@cite_16"
],
"mid": [
"2098941887",
"2194002514",
"7746136",
"2317058546",
"2102605133",
"2147347568",
"1577168949",
"",
"2010181071"
],
"abstract": [
"Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (accurate estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we are able to avoid the need for an intermediate classification step. Our method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, we introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased performance.",
"Tracking-by-detection approaches are some of the most successful object trackers in recent years. Their success is largely determined by the detector model they learn initially and then update over time. However, under challenging conditions where an object can undergo transformations, e.g., severe rotation, these methods are found to be lacking. In this paper, we address this problem by formulating it as a proposal selection task and making two contributions. The first one is introducing novel proposals estimated from the geometric transformations undergone by the object, and building a rich candidate set for predicting the object location. The second one is devising a novel selection strategy using multiple cues, i.e., detection score and edgeness score computed from state-of-the-art object edges and motion boundaries. We extensively evaluate our approach on the visual object tracking 2014 challenge and online tracking benchmark datasets, and show the best performance.",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"Among increasingly complicated trackers in visual tracking area, recently proposed correlation filter based trackers have achieved appealing performance despite their great simplicity and superior speed. However, the filter input is a bounding box of fixed size, so they are not born with the adaptability to target’s scale and aspect ratio changes. Although scale-adaptive variants have been proposed, they are not flexible enough due to pre-defined scale sampling manners. Moreover, to the best of our knowledge, no correlation filter variant has been proposed to handle aspect ratio variation. To tackle this problem, this paper integrates the class-agnostic detection proposal method, which is widely adopted in object detection area, into a correlation filter tracker, and presents KCFDP tracker. The correlation filter part of KCFDP is based on KCF[2] with some modifications. We extend the HOG feature in KCF to a combination of HOG, intensity, and color naming by simply concatenating the three features, resulting in 42 feature channels. The model updating scheme in KCF, which is simple linear interpolation, is substituted with a more robust scheme presented in [1]. EdgeBoxes[4] is adopted to generate flexible detection proposals and enable the scale and aspect ratio adaptability of our tracker. It traverses the whole image in a sliding window manner, and scores every sampled bounding box according to the number of contours that are wholly enclosed. To accelerate EdgeBoxes and produce less unnecessary proposals, we set the minimum proposal area and aspect ratio range dynamically in sliding window sampling according to the current target size. In the tracking pipeline, KCF is firstly performed to estimate the preliminary target location l_d. Within a patch z_d extracted from current frame, KCF locates the target center according to the location of the maximum element in f: f(z_d) = k^{xz_d} · α. (1)",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"In this paper we evaluate the quality of the activation layers of a convolutional neural network (CNN) for the generation of object proposals. We generate hypotheses in a sliding-window fashion over different activation layers and show that the final convolutional layers can find the object of interest with high recall but poor localization due to the coarseness of the feature maps. Instead, the first layers of the network can better localize the object of interest but with a reduced recall. Based on this observation we design a method for proposing object locations that is based on CNN features and that combines the best of both worlds. We build an inverse cascade that, going from the final to the initial convolutional layers of the CNN, selects the most promising object locations and refines their boxes in a coarse-to-fine manner. The method is efficient, because i) it uses the same features extracted for detection, ii) it aggregates features using integral images, and iii) it avoids a dense evaluation of the proposals due to the inverse coarse-to-fine cascade. The method is also accurate, it outperforms most of the previously proposed object proposals approaches and when plugged into a CNN-based detector produces state-of-the-art detection performance.",
"",
"",
"Training a generic objectness measure to produce a small set of candidate object windows, has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows into a small fixed size. Based on this observation and computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding 96.2% object detection rate (DR) with 1,000 proposals. Increasing the numbers of proposals and color spaces for computing BING features, our performance can be further improved to 99.5% DR."
]
} |
1707.03736 | 2735456298 | Users posting online expect to remain anonymous unless they have logged in, which is often needed for them to be able to discuss freely on various topics. Preserving the anonymity of a text's writer can be also important in some other contexts, e.g., in the case of witness protection or anonymity programs. However, each person has his/her own style of writing, which can be analyzed using stylometry, and as a result, the true identity of the author of a piece of text can be revealed even if s/he has tried to hide it. Thus, it could be helpful to design automatic tools that can help a person obfuscate his/her identity when writing text. In particular, here we propose an approach that changes the text, so that it is pushed towards average values for some general stylometric characteristics, thus making the use of these characteristics less discriminative. The approach consists of three main steps: first, we calculate the values for some popular stylometric metrics that can indicate authorship; then we apply various transformations to the text, so that these metrics are adjusted towards the average level, while preserving the semantics and the soundness of the text; and finally, we add random noise. This approach turned out to be very efficient, and yielded the best performance on the Author Obfuscation task at the PAN-2016 competition. | Research in author obfuscation has explored manual, computer-aided, and automated obfuscation @cite_20 . For manual obfuscation, people have tried to mask their own writing style as somebody else's, which was shown to work well @cite_2 @cite_14 @cite_12 . Computer-aided obfuscation uses tools that identify and suggest parts of text and text features that should be obfuscated, but then the obfuscation is to be done manually @cite_9 @cite_5 @cite_16 . | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_16",
"@cite_2",
"@cite_5",
"@cite_20",
"@cite_12"
],
"mid": [
"2119804197",
"2134414769",
"160636586",
"2950536206",
"1427965619",
"",
"2103231373"
],
"abstract": [
"The use of stylometry, authorship recognition through purely linguistic means, has contributed to literary, historical, and criminal investigation breakthroughs. Existing stylometry research assumes that authors have not attempted to disguise their linguistic writing style. We challenge this basic assumption of existing stylometry methodologies and present a new area of research: adversarial stylometry. Adversaries have a devastating effect on the robustness of existing classification methods. Our work presents a framework for creating adversarial passages including obfuscation, where a subject attempts to hide her identity, and imitation, where a subject attempts to frame another subject by imitating his writing style, and translation where original passages are obfuscated with machine translation services. This research demonstrates that manual circumvention methods work very well while automated translation methods are not effective. The obfuscation method reduces the techniques' effectiveness to the level of random guessing and the imitation attempts succeed up to 67% of the time depending on the stylometry technique used. These results are more significant given the fact that experimental subjects were unfamiliar with stylometry, were not professional writers, and spent little time on the attacks. This article also contributes to the field by using human subjects to empirically validate the claim of high accuracy for four current techniques (without adversaries). We have also compiled and released two corpora of adversarial stylometry texts to promote research in this field with a total of 57 unique authors. We argue that this field is important to a multidisciplinary approach to privacy, security, and anonymity.",
"This paper explores techniques for reducing the effectiveness of standard authorship attribution techniques so that an author A can preserve anonymity for a particular document D. We discuss feature selection and adjustment and show how this information can be fed back to the author to create a new document D' for which the calculated attribution moves away from A. Since it can be labor intensive to adjust the document in this fashion, we attempt to quantify the amount of effort required to produce the anonymized document and introduce two levels of anonymization: shallow and deep. In our test set, we show that shallow anonymization can be achieved by making 14 changes per 1000 words to reduce the likelihood of identifying A as the author by an average of more than 83%. For deep anonymization, we adapt the unmasking work of Koppel and Schler to provide feedback that allows the author to choose the level of anonymization.",
"This paper presents Anonymouth, a novel framework for anonymizing writing style. Without accounting for style, anonymous authors risk identification. This framework is necessary to provide a tool for testing the consistency of anonymized writing style and a mechanism for adaptive attacks against stylometry techniques. Our framework defines the steps necessary to anonymize documents and implements them. A key contribution of this work is this framework, including novel methods for identifying which features of documents need to change and how they must be changed to accomplish document anonymization. In our experiment, 80% of the user study participants were able to anonymize their documents in terms of a fixed corpus and limited feature set used. However, modifying pre-written documents was found to be difficult and the anonymization did not hold up to more extensive feature sets. It is important to note that Anonymouth is only the first step toward a tool to achieve stylometric anonymity with respect to state-of-the-art authorship attribution techniques. The topic needs further exploration in order to accomplish significant anonymity.",
"Massive amounts of contributed content -- including traditional literature, blogs, music, videos, reviews and tweets -- are available on the Internet today, with authors numbering in many millions. Textual information, such as product or service reviews, is an important and increasingly popular type of content that is being used as a foundation of many trendy community-based reviewing sites, such as TripAdvisor and Yelp. Some recent results have shown that, due partly to their specialized topical nature, sets of reviews authored by the same person are readily linkable based on simple stylometric features. In practice, this means that individuals who author more than a few reviews under different accounts (whether within one site or across multiple sites) can be linked, which represents a significant loss of privacy. In this paper, we start by showing that the problem is actually worse than previously believed. We then explore ways to mitigate authorship linkability in community-based reviewing. We first attempt to harness the global power of crowdsourcing by engaging random strangers into the process of re-writing reviews. As our empirical results (obtained from Amazon Mechanical Turk) clearly demonstrate, crowdsourcing yields impressively sensible reviews that reflect sufficiently different stylometric characteristics such that prior stylometric linkability techniques become largely ineffective. We also consider using machine translation to automatically re-write reviews. Contrary to what was previously believed, our results show that translation decreases authorship linkability as the number of intermediate languages grows. Finally, we explore the combination of crowdsourcing and machine translation and report on the results.",
"Anonymous authoring includes writing reviews, comments and blogs, using pseudonyms with the general assumption that using these pseudonyms will protect the real identity of authors and allows them to freely express their views. It has been shown, however, that writing style may be used to trace authors across multiple Websites. This is a serious threat to privacy and may even result in revealing the authors's identities. In obfuscating authors' writing style, an authored document is modified to hide the writing characteristics of the author. In this paper we first show that existing obfuscation systems are insecure and propose a general approach for constructing obfuscation algorithms, and then instantiate the framework to give an algorithm that semi-automatically modifies an author's document. We provide a secure obfuscation scheme that is able to hide an author's document securely among other authors' documents in a corpus. As part of our obfuscation algorithm we present a new algorithm for identifying an author's unique words that would be of independent interest. We present a security model and use it to analyze our scheme and also the previous schemes. We implement our scheme and give its performances through experiments. We show that our algorithm can be used to obfuscate documents securely and effectively.",
"",
"The use of statistical AI techniques in authorship recognition (or stylometry) has contributed to literary and historical breakthroughs. These successes have led to the use of these techniques in criminal investigations and prosecutions. However, few have studied adversarial attacks and their devastating effect on the robustness of existing classification methods. This paper presents a framework for adversarial attacks including obfuscation attacks, where a subject attempts to hide their identity, and imitation attacks, where a subject attempts to frame another subject by imitating their writing style. The major contribution of this research is that it demonstrates that both attacks work very well. The obfuscation attack reduces the effectiveness of the techniques to the level of random guessing and the imitation attack succeeds with 68-91% probability depending on the stylometric technique used. These results are made more significant by the fact that the experimental subjects were unfamiliar with stylometric techniques, without specialized knowledge in linguistics, and spent little time on the attacks. This paper also provides another significant contribution to the field in using human subjects to empirically validate the claim of high accuracy for current techniques (without attacks) by reproducing results for three representative stylometric methods."
]
} |
1707.03736 | 2735456298 | Users posting online expect to remain anonymous unless they have logged in, which is often needed for them to be able to discuss freely on various topics. Preserving the anonymity of a text's writer can be also important in some other contexts, e.g., in the case of witness protection or anonymity programs. However, each person has his/her own style of writing, which can be analyzed using stylometry, and as a result, the true identity of the author of a piece of text can be revealed even if s/he has tried to hide it. Thus, it could be helpful to design automatic tools that can help a person obfuscate his/her identity when writing text. In particular, here we propose an approach that changes the text, so that it is pushed towards average values for some general stylometric characteristics, thus making the use of these characteristics less discriminative. The approach consists of three main steps: first, we calculate the values for some popular stylometric metrics that can indicate authorship; then we apply various transformations to the text, so that these metrics are adjusted towards the average level, while preserving the semantics and the soundness of the text; and finally, we add random noise. This approach turned out to be very efficient, and yielded the best performance on the Author Obfuscation task at the PAN-2016 competition. | @cite_9 explored author masking by detecting the most commonly-used words by the target author and then trying to change them. They also mention the application of machine translation as a possible approach to author obfuscation. Other authors also used machine translation for author obfuscation @cite_14 @cite_6 , e.g., by translating passages of text from English to one or more other languages and then back to English. @cite_14 investigated three different approaches to adversarial stylometry: obfuscation (masking author style), imitation (trying to copy another author's style), and machine translation.
They further summarized the most common features people used to obfuscate their own writing style. @cite_10 developed a complex system for author obfuscation which consists of three main modules: canonization (unifying case, normalizing white spaces, spelling correction, etc.), event set determination (extraction of events significant for author detection, such as words, parts of speech bi- or tri-grams, etc.), and statistical inference (measures that determine the results and confidence in the final report). The authors used this same approach @cite_21 to detect deliberate style obfuscation. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_10"
],
"mid": [
"2119804197",
"2134414769",
"2136000463",
"",
"143218990"
],
"abstract": [
"The use of stylometry, authorship recognition through purely linguistic means, has contributed to literary, historical, and criminal investigation breakthroughs. Existing stylometry research assumes that authors have not attempted to disguise their linguistic writing style. We challenge this basic assumption of existing stylometry methodologies and present a new area of research: adversarial stylometry. Adversaries have a devastating effect on the robustness of existing classification methods. Our work presents a framework for creating adversarial passages including obfuscation, where a subject attempts to hide her identity, and imitation, where a subject attempts to frame another subject by imitating his writing style, and translation where original passages are obfuscated with machine translation services. This research demonstrates that manual circumvention methods work very well while automated translation methods are not effective. The obfuscation method reduces the techniques' effectiveness to the level of random guessing and the imitation attempts succeed up to 67% of the time depending on the stylometry technique used. These results are more significant given the fact that experimental subjects were unfamiliar with stylometry, were not professional writers, and spent little time on the attacks. This article also contributes to the field by using human subjects to empirically validate the claim of high accuracy for four current techniques (without adversaries). We have also compiled and released two corpora of adversarial stylometry texts to promote research in this field with a total of 57 unique authors. We argue that this field is important to a multidisciplinary approach to privacy, security, and anonymity.",
"This paper explores techniques for reducing the effectiveness of standard authorship attribution techniques so that an author A can preserve anonymity for a particular document D. We discuss feature selection and adjustment and show how this information can be fed back to the author to create a new document D' for which the calculated attribution moves away from A. Since it can be labor intensive to adjust the document in this fashion, we attempt to quantify the amount of effort required to produce the anonymized document and introduce two levels of anonymization: shallow and deep. In our test set, we show that shallow anonymization can be achieved by making 14 changes per 1000 words to reduce the likelihood of identifying A as the author by an average of more than 83%. For deep anonymization, we adapt the unmasking work of Koppel and Schler to provide feedback that allows the author to choose the level of anonymization.",
"Whistleblowers and activists need the ability to communicate without disclosing their identity, as of course do kidnappers and terrorists. Recent advances in the technology of stylometry (the study of authorial style) or \"authorship attribution\" have made it possible to identify the author with high reliability in a non-confrontational setting. In a confrontational setting, where the author is deliberately masking their identity (i.e. attempting to deceive), the results are much less promising. In this paper, we show that although the specific author may not be identifiable, the intent to deceive and to hide his identity can be. We show this by a reanalysis of the Brennan and Greenstadt (2009) deception corpus and discuss some of the implications of this surprising finding.",
"",
"Authorship attribution is an important and emerging security tool. However, just as criminals may wear gloves to hide their fingerprints, so too may criminal authors mask their writing styles to escape detection. Most authorship studies have focused on cooperative and or unaware authors who do not take such precautions. This paper analyzes the methods implemented in the Java Graphical Authorship Attribution Program (JGAAP) against essays in the Brennan-Greenstadt obfuscation corpus that were written in deliberate attempts to mask style. The results demonstrate that many of the more robust and accurate methods implemented in JGAAP are effective in the presence of active deception."
]
} |
1707.03804 | 2736291100 | Models that can execute natural language instructions for situated robotic tasks such as assembly and navigation have several useful applications in homes, offices, and remote scenarios. We study the semantics of spatially-referred configuration and arrangement instructions, based on the challenging Bisk-2016 blank-labeled block dataset. This task involves finding a source block and moving it to the target position (mentioned via a reference block and offset), where the blocks have no names or colors and are just referred to via spatial location features. We present novel models for the subtasks of source block classification and target position regression, based on joint-loss language and spatial-world representation learning, as well as CNN-based and dual attention models to compute the alignment between the world blocks and the instruction phrases. For target position prediction, we compare two inference approaches: annealed sampling via policy gradient versus expectation inference via supervised regression. Our models achieve the new state-of-the-art on this task, with an improvement of 47% on source block accuracy and 22% on target position distance. | Related to sampling-based loss and policy gradient optimization, adopt policy gradient based reinforcement learning for executing instructions on system troubleshooting and game tutorials. There are also recent policy gradient approaches for the tasks of machine translation and image captioning using metric-based rewards @cite_8 @cite_19 . Since the losses of these models are non-differentiable, a policy gradient approach (introduced in ) is used for optimization. Most recently, extended the pattern-labeled version of the dataset to a new sequential motion planning task based on raw visual simulation input (for intermediate movement steps) fed into a reinforcement learning model.
On the other hand, we focus on a different setup, i.e., the original source+target direct-prediction task and dataset; and we address its more challenging blank-labeled-blocks version, hence only relying on spatial location-based semantics. | {
"cite_N": [
"@cite_19",
"@cite_8"
],
"mid": [
"2950178297",
"2176263492"
],
"abstract": [
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster."
]
} |
1707.03821 | 2735085797 | We introduce a methodology for efficient monitoring of processes running on hosts in a corporate network. The methodology is based on collecting streams of system calls produced by all or selected processes on the hosts, and sending them over the network to a monitoring server, where machine learning algorithms are used to identify changes in process behavior due to malicious activity, hardware failures, or software errors. The methodology uses a sequence of system call count vectors as the data format which can handle large and varying volumes of data. Unlike previous approaches, the methodology introduced in this paper is suitable for distributed collection and processing of data in large corporate networks. We evaluate the methodology both in a laboratory setting on a real-life setup and provide statistics characterizing performance and accuracy of the methodology. | @cite_10 provides an early comparison of machine learning methods for modeling process behavior. @cite_3 introduces the model of execution graph, and behavior similarity measure based on the execution graph. @cite_15 combines multiple models into an ensemble to improve anomaly detection. @cite_19 applies continuous time Bayesian network (CTBN) to system call processes to account for time-dependent features and address high variability of system call streams over time. @cite_4 applies a deep LSTM-based architecture to sequences of individual system calls, treating system calls as a language model. | {
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_19",
"@cite_15",
"@cite_10"
],
"mid": [
"2551087083",
"2137569638",
"2134269391",
"1984350393",
""
],
"abstract": [
"In computer security, designing a robust intrusion detection system is one of the most fundamental and important problems. In this paper, we propose a system-call language-modeling approach for designing anomaly-based host intrusion detection systems. To remedy the issue of high false-alarm rates commonly arising in conventional methods, we employ a novel ensemble method that blends multiple thresholding classifiers into a single one, making it possible to accumulate 'highly normal' sequences. The proposed system-call language model has various advantages leveraged by the fact that it can learn the semantic meaning and interactions of each system call that existing methods cannot effectively consider. Through diverse experiments on public benchmark datasets, we demonstrate the validity and effectiveness of the proposed method. Moreover, we show that our model possesses high portability, which is one of the key aspects of realizing successful intrusion detection systems.",
"Many host-based anomaly detection systems monitor a process by observing the system calls it makes, and comparing these calls to a model of behavior for the program that the process should be executing. In this paper we introduce a new model of system call behavior, called an execution graph. The execution graph is the first such model that both requires no static analysis of the program source or binary, and conforms to the control flow graph of the program. When used as the model in an anomaly detection system monitoring system calls, it offers two strong properties: (i) it accepts only system call sequences that are consistent with the control flow graph of the program; (ii) it is maximal given a set of training data, meaning that any extensions to the execution graph could permit some intrusions to go undetected. In this paper, we formalize and prove these claims. We additionally evaluate the performance of our anomaly detection technique.",
"Intrusion detection systems (IDSs) fall into two high-level categories: network-based systems (NIDS) that monitor network behaviors, and host-based systems (HIDS) that monitor system calls. In this work, we present a general technique for both systems. We use anomaly detection, which identifies patterns not conforming to a historic norm. In both types of systems, the rates of change vary dramatically over time (due to burstiness) and over components (due to service difference). To efficiently model such systems, we use continuous time Bayesian networks (CTBNs) and avoid specifying a fixed update interval common to discrete-time models. We build generative models from the normal training data, and abnormal behaviors are flagged based on their likelihood under this norm. For NIDS, we construct a hierarchical CTBN model for the network packet traces and use Rao-Blackwellized particle filtering to learn the parameters. We illustrate the power of our method through experiments on detecting real worms and identifying hosts on two publicly available network traces, the MAWI dataset and the LBNL dataset. For HIDS, we develop a novel learning method to deal with the finite resolution of system log file time stamps, without losing the benefits of our continuous time model. We demonstrate the method by detecting intrusions in the DARPA 1998 BSM dataset.",
"Intrusion detection systems (IDSs) are used to detect traces of malicious activities targeted against the network and its resources. Anomaly-based IDSs build models of the expected behavior of applications by analyzing events that are generated during the applications' normal operation. Once these models have been established, subsequent events are analyzed to identify deviations, on the assumption that anomalies represent evidence of an attack. Host-based anomaly detection systems often rely on system call sequences to characterize the normal behavior of applications. Recently, it has been shown how these systems can be evaded by launching attacks that execute legitimate system call sequences. The evasion is possible because existing techniques do not take into account all available features of system calls. In particular, system call arguments are not considered. We propose two primary improvements upon existing host-based anomaly detectors. First, we apply multiple detection models to system call arguments. Multiple models allow the arguments of each system call invocation to be evaluated from several different perspectives. Second, we introduce a sophisticated method of combining the anomaly scores from each model into an overall aggregate score. The combined anomaly score determines whether an event is part of an attack. Individual anomaly scores are often contradicting and, therefore, a simple weighted sum cannot deliver reliable results. To address this problem, we propose a technique that uses Bayesian networks to perform system call classification. We show that the analysis of system call arguments and the use of Bayesian classification improves detection accuracy and resilience against evasion attempts. In addition, the paper describes a tool based on our approach and provides a quantitative evaluation of its performance in terms of both detection effectiveness and overhead. A comparison with four related approaches is also presented.",
""
]
} |
1707.03336 | 2734643729 | We propose and evaluate a new technique for learning hybrid automata automatically by observing the runtime behavior of a dynamical system. Working from a sequence of continuous state values and predicates about the environment, CHARDA recovers the distinct dynamic modes, learns a model for each mode from a given set of templates, and postulates causal guard conditions which trigger transitions between modes. Our main contribution is the use of information-theoretic measures (1) as a cost function for data segmentation and model selection to penalize over-fitting and (2) to determine the likely causes of each transition. CHARDA is easily extended with different classes of model templates, fitting methods, or predicates. In our experiments on a complex videogame character, CHARDA successfully discovers a reasonable over-approximation of the character's true behaviors. Our results also compare favorably against recent work in automatically learning probabilistic timed automata in an aircraft domain: CHARDA exactly learns the modes of these simpler automata. | Despite the general undecidability of many HA properties, it is possible to constrain models or carefully choose semantics to obtain different analysis characteristics: discretizing time or variable values evades undecidability by approximating the true dynamics @cite_10 ; keeping these continuous but constraining the allowed flow and guard conditions admits geometric analysis @cite_19 ; and one can always merge states together to yield an over-approximation, producing smaller and simpler models. There are also composable variations of hybrid automata that admit compositional analysis @cite_3 as well as a logical axiomatization @cite_9 , not to mention the body of tools and research that already exist for synthesizing control policies, ensuring safety, characterizing reachable areas, et cetera. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_10",
"@cite_3"
],
"mid": [
"",
"1977444293",
"2122179032",
"2103128566"
],
"abstract": [
"",
"Hybrid systems are models for complex physical systems and are defined as dynamical systems with interacting discrete transitions and continuous evolutions along differential equations. With the goal of developing a theoretical and practical foundation for deductive verification of hybrid systems, we introduce a dynamic logic for hybrid programs, which is a program notation for hybrid systems. As a verification technique that is suitable for automation, we introduce a free variable proof calculus with a novel combination of real-valued free variables and Skolemisation for lifting quantifier elimination for real arithmetic to dynamic logic. The calculus is compositional, i.e., it reduces properties of hybrid programs to properties of their parts. Our main result proves that this calculus axiomatises the transition behaviour of hybrid systems completely relative to differential equations. In a case study with cooperating traffic agents of the European Train Control System, we further show that our calculus is well-suited for verifying realistic hybrid systems with parametric system dynamics.",
"A hybrid dynamical system is a mathematical model suitable for describing an extensive spectrum of multi-modal, time-series behaviors, ranging from bouncing balls to air traffic controllers. This paper describes multi-modal symbolic regression (MMSR): a learning algorithm to construct non-linear symbolic representations of discrete dynamical systems with continuous mappings from unlabeled, time-series data. MMSR consists of two subalgorithms--clustered symbolic regression, a method to simultaneously identify distinct behaviors while formulating their mathematical expressions, and transition modeling, an algorithm to infer symbolic inequalities that describe binary classification boundaries. These subalgorithms are combined to infer hybrid dynamical systems as a collection of apt, mathematical expressions. MMSR is evaluated on a collection of four synthetic data sets and outperforms other multi-modal machine learning approaches in both accuracy and interpretability, even in the presence of noise. Furthermore, the versatility of MMSR is demonstrated by identifying and inferring classical expressions of transistor modes from recorded measurements.",
"This paper describes the modeling language CHARON for modular design of interacting hybrid systems. The language allows specification of architectural as well as behavioral hierarchy and discrete as well as continuous activities. The modular structure of the language is not merely syntactic, but is exploited by analysis tools and is supported by a formal semantics with an accompanying compositional theory of refinement. We illustrate the benefits of CHARON in the design of embedded control software using examples from automated highways concerning vehicle coordination."
]
} |
1707.03336 | 2734643729 | We propose and evaluate a new technique for learning hybrid automata automatically by observing the runtime behavior of a dynamical system. Working from a sequence of continuous state values and predicates about the environment, CHARDA recovers the distinct dynamic modes, learns a model for each mode from a given set of templates, and postulates causal guard conditions which trigger transitions between modes. Our main contribution is the use of information-theoretic measures (1) as a cost function for data segmentation and model selection to penalize over-fitting and (2) to determine the likely causes of each transition. CHARDA is easily extended with different classes of model templates, fitting methods, or predicates. In our experiments on a complex videogame character, CHARDA successfully discovers a reasonable over-approximation of the character's true behaviors. Our results also compare favorably against recent work in automatically learning probabilistic timed automata in an aircraft domain: CHARDA exactly learns the modes of these simpler automata. | Given the desirable properties of this class of model, and the ready availability of tools for dealing with them, many researchers have explored automatically recovering these high-level models from real-world system behaviors. CHARDA shares motivations with HyBUTLA @cite_11 , which also aimed to learn a complete automaton from observational data. HyBUTLA seems able to learn only acyclic hybrid automata, since it works by constructing a prefix acceptor tree of the modes for each observation episode and then merges compatible modes from the bottom up. Moreover, HyBUTLA assumes that the segmentation is given in advance and that all transitions happen due to individual discrete events, presumably from a relatively small set. 
The overall structure of both algorithms---split the observations into a number of intervals in which mode functions are fit, then merge redundant modes---is similar, but CHARDA learns a larger class of automata and does not require data to be pre-split into episodes or segments. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2220583168"
],
"abstract": [
"Innovative methods have been developed for diagnosis, activity monitoring, and state estimation that achieve high accuracy through the use of stochastic models involving hybrid discrete and continuous behaviors. A key bottleneck is the automated acquisition of these hybrid models, and recent methods have focused predominantly on Jump Markov processes and piecewise autoregressive models. In this paper, we present a novel algorithm capable of performing unsupervised learning of guarded Probabilistic Hybrid Automata (PHA) models, which extends prior work by allowing stochastic discrete mode transitions in a hybrid system to have a functional dependence on its continuous state. Our experiments indicate that guarded PHA models can yield significant performance improvements when used by hybrid state estimators, particularly when diagnosing the true discrete mode of the system, without any noticeable impact on their real-time performance."
]
} |
1707.03394 | 2735653932 | The Constrained Application Protocol (CoAP) is an HTTP-like protocol for RESTful applications intended to run on constrained devices, typically part of the Internet of Things. CoAP observe is an extension to the CoAP specification that allows CoAP clients to observe a resource through a simple publish/subscribe mechanism. In this paper we leverage Information-Centric Networking (ICN), transparently deployed within the domain of a network provider, to provide enhanced CoAP services. We present the design and the implementation of CoAP observe over ICN and we discuss how ICN can provide benefits to both network providers and CoAP applications, even though the latter are not aware of the existence of ICN. In particular, the use of ICN results in smaller state management and simpler implementation at CoAP endpoints, and less communication overhead in the network. | Recent efforts @cite_19 @cite_21 have addressed proxy-based CoAP observe in Wireless Sensor Networks (WSNs). Alessandro et al. @cite_19 include the WebSocket protocol in the design of a CoAP proxy for HTTP-based web applications. The work in @cite_23 considers dynamic aggregation scheduling of multiple observe requests at CoAP proxies. These efforts are complementary to our work, which utilizes ICN to further enhance the efficiency gains of CoAP observe.
"cite_N": [
"@cite_19",
"@cite_21",
"@cite_23"
],
"mid": [
"2038260667",
"",
"2345093194"
],
"abstract": [
"In this paper, we present the design of a Constrained Application Protocol (CoAP) proxy able to interconnect Web applications based on Hypertext Transfer Protocol (HTTP) and WebSocket with CoAP based Wireless Sensor Networks. Sensor networks are commonly used to monitor and control physical objects or environments. Smart Cities represent applications of such a nature. Wireless Sensor Networks gather data from their surroundings and send them to a remote application. This data flow may be short or long lived. The traditional HTTP long-polling used by Web applications may not be adequate in long-term communications. To overcome this problem, we include the WebSocket protocol in the design of the CoAP proxy. We evaluate the performance of the CoAP proxy in terms of latency and memory consumption. The tests consider long and short-lived communications. In both cases, we evaluate the performance obtained by the CoAP proxy according to the use of WebSocket and HTTP long-polling.",
"",
"Wireless sensor networks (WSNs) are starting to have a high impact on our societies and, for next generation WSNs to become more integrated with the Internet, researchers recently proposed to embed IPv6 into such very constrained networks. Also, constraint application protocol (CoAP) and Observe have been proposed for RESTful services to be provided. CoAP Observe supports the use of caches proxies and, for this reason, an observation request may resort to multiple client server registration steps in order to get notifications. Here, we propose to plan the multiple registration steps, at proxies, of multiple observation requests in order to make proper aggregation scheduling of notifications for transmission. This leads to less energy consumption and to an effective use of bandwidth, avoiding energy depletion of nodes, and increasing the network lifetime. Besides, mathematically formalizing the problem, a heuristic approach is developed and a discussion on how to incorporate algorithm’s decision into the network is done. The proposed framework can be applied to multiple application domains (e.g., monitoring, machine to machine)."
]
} |
1707.03124 | 2768812640 | Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data using GAN to improve recognition accuracy. With a carefully-designed pipeline, we show that the answer is affirmative. First, a large-scale image set is generated using the generator of GAN, without manual annotation. Then, these images are fed to a deep convolutional neural network (DCNN) followed by a bidirectional recurrent neural network (BRNN) with long short-term memory (LSTM), which performs the feature learning and sequence labelling. Finally, the pre-trained model is fine-tuned on real images. Our experimental results on a few data sets demonstrate the effectiveness of using GAN images: an improvement of 7.5% over a strong baseline with moderate-sized real data being available. We show that the proposed framework achieves competitive recognition accuracy on challenging test datasets. We also leverage the depthwise separable convolution to construct a lightweight convolutional RNN, which is about half the size and 2x faster on CPU. Combining this framework and the proposed pipeline, we make progress in performing accurate recognition on mobile and embedded devices. | Existing methods on license plate recognition (LPR) can be divided into two categories: segmentation-based @cite_25 @cite_19 @cite_10 and segmentation-free @cite_7 . Segmentation-based methods first segment the license plate into individual characters, and then recognize each segmented character using a classifier. Segmentation algorithms mainly consist of projection-based @cite_25 @cite_10 and connected component-based @cite_9 @cite_43 .
After the segmentation, template-matching-based @cite_39 @cite_8 and learning-based @cite_16 @cite_22 @cite_43 algorithms can be used to tackle this character-level classification task. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_39",
"@cite_19",
"@cite_43",
"@cite_16",
"@cite_10",
"@cite_25"
],
"mid": [
"",
"2279655419",
"2008701414",
"2171786422",
"",
"2309015593",
"2106073265",
"2120820227",
"2123725958",
"2030326191"
],
"abstract": [
"",
"In this work, we tackle the problem of car license plate detection and recognition in natural scene images. Inspired by the success of deep neural networks (DNNs) in various vision applications, here we leverage DNNs to learn high-level features in a cascade framework, which lead to improved performance on both detection and recognition. Firstly, we train a @math -class convolutional neural network (CNN) to detect all characters in an image, which results in a high recall, compared with conventional approaches such as training a binary text/non-text classifier. False positives are then eliminated by the second plate/non-plate CNN classifier. Bounding box refinement is then carried out based on the edge information of the license plates, in order to improve the intersection-over-union (IoU) ratio. The proposed cascade framework extracts license plates effectively with both high recall and precision. Last, we propose to recognize the license characters as a sequence labelling problem. A recurrent neural network (RNN) with long short-term memory (LSTM) is trained to recognize the sequential features extracted from the whole license plate via CNNs. The main advantage of this approach is that it is segmentation free. By exploring context information and avoiding errors caused by segmentation, the RNN method performs better than a baseline method of combining segmentation and deep CNN classification; and achieves state-of-the-art recognition accuracy.",
"Registration plate recognition is widely used in detecting speedy cars, traffic law enforcement and electronic toll collection. The problems associated with registration plate recognition are, plate images have different quality, illumination, view angle, distance, complex background and fonts. To address these problems, image processing tools are used for extracting only the region of interest. Then different luminance and background are removed by changing colored image to grayscale and then in to binary matrix form. Character recognition algorithm is applied on the binary images. All components of image are compared with pre-defined standard template and characters are recognized based on best match. Each matched character is displayed and stored in a log file. This algorithm is tested with vehicle images of different backgrounds and illumination. The camera focus, viewing plane and the distance from the vehicle were varied. The results of experiments are fascinating with good accuracy rate.",
"In this paper, a new algorithm for vehicle license plate identification is proposed, on the basis of a novel adaptive image segmentation technique (sliding concentric windows) and connected component analysis in conjunction with a character recognition neural network. The algorithm was tested with 1334 natural-scene gray-level vehicle images of different backgrounds and ambient illumination. The camera focused in the plate, while the angle of view and the distance from the vehicle varied according to the experimental setup. The license plates properly segmented were 1287 over 1334 input images (96.5%). The optical character recognition system is a two-layer probabilistic neural network (PNN) with topology 108-180-36, whose performance for entire plate recognition reached 89.1%. The PNN is trained to identify alphanumeric characters from car license plates based on data obtained from algorithmic image processing. Combining the above two rates, the overall rate of success for the license-plate-recognition algorithm is 86.0%. A review in the related literature presented in this paper reveals that better performance (90% up to 95%) has been reported, when limitations in distance, angle of view, illumination conditions are set, and background complexity is low",
"",
"This paper presents a vehicle license plate recognition method based on character-specific extremal regions (ERs) and hybrid discriminative restricted Boltzmann machines (HDRBMs). First, coarse license plate detection (LPD) is performed by top-hat transformation, vertical edge detection, morphological operations, and various validations. Then, character-specific ERs are extracted as character regions in license plate candidates. Followed by suitable selection of ERs, the segmentation of characters and coarse-to-fine LPD are achieved simultaneously. Finally, an offline trained pattern classifier of HDRBM is applied to recognize the characters. The proposed method is robust to illumination changes and weather conditions during 24 h or one day. Experimental results on thorough data sets are reported to demonstrate the effectiveness of the proposed approach in complex traffic environments.",
"Despite the success of license plate recognition (LPR) methods in the past decades, few of them can process multi-style license plates (LPs), especially LPs from different nations, effectively. In this paper, we propose a new method for multi-style LP recognition by representing the styles with quantitative parameters, i.e., plate rotation angle, plate line number, character type and format. In the recognition procedure these four parameters are managed by relevant algorithms, i.e., plate rotation, plate line segmentation, character recognition and format matching algorithm, respectively. To recognize special style LPs, users can configure the method by defining corresponding parameter values, which will be processed by the relevant algorithms. In addition, the probabilities of the occurrence of every LP style are calculated based on the previous LPR results, which will result in a faster and more precise recognition. Various LP images were used to test the proposed method and the results proved its effectiveness.",
"An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed on the basis of a novel shadow removal technique and character recognition algorithms. This paper has two major contributions. One contribution is a new binary method, i.e., the shadow removal method, which is based on the improved Bernsen algorithm combined with the Gaussian filter. Our second contribution is a character recognition algorithm known as support vector machine (SVM) integration. In SVM integration, character features are extracted from the elastic mesh, and the entire address character string is taken as the object of study, as opposed to a single character. This paper also presents improved techniques for image tilt correction and image gray enhancement. Our algorithm is robust to the variance of illumination, view angle, position, size, and color of the license plates when working in a complex environment. The algorithm was tested with 9026 images, such as natural-scene vehicle images using different backgrounds and ambient illumination particularly for low-resolution images. The license plates were properly located and segmented as 97.16% and 98.34%, respectively. The optical character recognition system is the SVM integration with different character features, whose performance for numerals, Kana, and address recognition reached 99.5%, 98.6%, and 97.8%, respectively. Combining the preceding tests, the overall performance of success for the license plate achieves 93.54% when the system is used for LPR in various complex conditions.",
"License plate localization (LPL) and character segmentation (CS) play key roles in the license plate (LP) recognition system. In this paper, we dedicate ourselves to these two issues. In LPL, histogram equalization is employed to solve the low-contrast and dynamic-range problems; the texture properties, e.g., aspect ratio, and color similarity are used to locate the LP; and the Hough transform is adopted to correct the rotation problem. In CS, the hybrid binarization technique is proposed to effectively segment the characters in the dirt LP. The feedback self-learning procedure is also employed to adjust the parameters in the system. As documented in the experiments, good localization and segmentation results are achieved with the proposed algorithms.",
"This work proposes a novel adaptive approach for character segmentation and feature vector extraction from seriously degraded images. An algorithm based on the histogram automatically detects fragments and merges these fragments before segmenting the fragmented characters. A morphological thickening algorithm automatically locates reference lines for separating the overlapped characters. A morphological thinning algorithm and the segmentation cost calculation automatically determine the baseline for segmenting the connected characters. Basically, our approach can detect fragmented, overlapped, or connected character and adaptively apply for one of three algorithms without manual fine-tuning. Seriously degraded images as license plate images taken from real world are used in the experiments to evaluate the robustness, the flexibility and the effectiveness of our approach. The system approach output data as feature vectors keep useful information more accurately to be used as input data in an automatic pattern recognition system."
]
} |
1707.03124 | 2768812640 | Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data using GAN to improve recognition accuracy. With a carefully-designed pipeline, we show that the answer is affirmative. First, a large-scale image set is generated using the generator of GAN, without manual annotation. Then, these images are fed to a deep convolutional neural network (DCNN) followed by a bidirectional recurrent neural network (BRNN) with long short-term memory (LSTM), which performs the feature learning and sequence labelling. Finally, the pre-trained model is fine-tuned on real images. Our experimental results on a few data sets demonstrate the effectiveness of using GAN images: an improvement of 7.5% over a strong baseline with moderate-sized real data being available. We show that the proposed framework achieves competitive recognition accuracy on challenging test datasets. We also leverage the depthwise separable convolution to construct a lightweight convolutional RNN, which is about half the size and 2x faster on CPU. Combining this framework and the proposed pipeline, we make progress in performing accurate recognition on mobile and embedded devices. | Learning-based algorithms including support vector machines @cite_16 , hidden Markov models (HMM) @cite_22 and neural networks @cite_43 @cite_38 are more robust than template-matching-based methods since they extract discriminative features. However, the segmentation process loses information about inner rules in license plates, and the segmentation performance has a significant influence on the recognition performance.
Li and Shen @cite_7 proposed a cascade framework using deep convolutional neural networks and LSTMs for license plate recognition without segmentation, where a sliding window is applied to extract the feature sequence. Our method is also a segmentation-free approach, based on the framework proposed by @cite_3 , where a deep CNN extracts features directly without a sliding window and a bidirectional LSTM network is used for sequence labeling. | {
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_7",
"@cite_3",
"@cite_43",
"@cite_16"
],
"mid": [
"2028587221",
"",
"2279655419",
"2194187530",
"2106073265",
"2120820227"
],
"abstract": [
"Chinese character font recognition (CCFR) has received increasing attention as the intelligent applications based on optical character recognition becomes popular. However, traditional CCFR systems do not handle noisy data effectively. By analyzing in detail the basic strokes of Chinese characters, we propose that font recognition on a single Chinese character is a sequence classification problem, which can be effectively solved by recurrent neural networks. For robust CCFR, we integrate a principal component convolution layer with the 2-D long short-term memory (2DLSTM) and develop principal component 2DLSTM (PC-2DLSTM) algorithm. PC-2DLSTM considers two aspects: 1) the principal component layer convolution operation helps remove the noise and get a rational and complete font information and 2) simultaneously, 2DLSTM deals with the long-range contextual processing along scan directions that can contribute to capture the contrast between character trajectory and background. Experiments using the frequently used CCFR dataset suggest the effectiveness of PC-2DLSTM compared with other state-of-the-art font recognition methods.",
"",
"In this work, we tackle the problem of car license plate detection and recognition in natural scene images. Inspired by the success of deep neural networks (DNNs) in various vision applications, here we leverage DNNs to learn high-level features in a cascade framework, which lead to improved performance on both detection and recognition. Firstly, we train a @math -class convolutional neural network (CNN) to detect all characters in an image, which results in a high recall, compared with conventional approaches such as training a binary text non-text classifier. False positives are then eliminated by the second plate non-plate CNN classifier. Bounding box refinement is then carried out based on the edge information of the license plates, in order to improve the intersection-over-union (IoU) ratio. The proposed cascade framework extracts license plates effectively with both high recall and precision. Last, we propose to recognize the license characters as a sequence labelling problem. A recurrent neural network (RNN) with long short-term memory (LSTM) is trained to recognize the sequential features extracted from the whole license plate via CNNs. The main advantage of this approach is that it is segmentation free. By exploring context information and avoiding errors caused by segmentation, the RNN method performs better than a baseline method of combining segmentation and deep CNN classification; and achieves state-of-the-art recognition accuracy.",
"Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.",
"Despite the success of license plate recognition (LPR) methods in the past decades, few of them can process multi-style license plates (LPs), especially LPs from different nations, effectively. In this paper, we propose a new method for multi-style LP recognition by representing the styles with quantitative parameters, i.e., plate rotation angle, plate line number, character type and format. In the recognition procedure these four parameters are managed by relevant algorithms, i.e., plate rotation, plate line segmentation, character recognition and format matching algorithm, respectively. To recognize special style LPs, users can configure the method by defining corresponding parameter values, which will be processed by the relevant algorithms. In addition, the probabilities of the occurrence of every LP style are calculated based on the previous LPR results, which will result in a faster and more precise recognition. Various LP images were used to test the proposed method and the results proved its effectiveness.",
"An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed on the basis of a novel shadow removal technique and character recognition algorithms. This paper has two major contributions. One contribution is a new binary method, i.e., the shadow removal method, which is based on the improved Bernsen algorithm combined with the Gaussian filter. Our second contribution is a character recognition algorithm known as support vector machine (SVM) integration. In SVM integration, character features are extracted from the elastic mesh, and the entire address character string is taken as the object of study, as opposed to a single character. This paper also presents improved techniques for image tilt correction and image gray enhancement. Our algorithm is robust to the variance of illumination, view angle, position, size, and color of the license plates when working in a complex environment. The algorithm was tested with 9026 images, such as natural-scene vehicle images using different backgrounds and ambient illumination particularly for low-resolution images. The license plates were properly located and segmented as 97.16 and 98.34 , respectively. The optical character recognition system is the SVM integration with different character features, whose performance for numerals, Kana, and address recognition reached 99.5 , 98.6 , and 97.8 , respectively. Combining the preceding tests, the overall performance of success for the license plate achieves 93.54 when the system is used for LPR in various complex conditions."
]
} |
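The segmentation-free pipeline described in this row labels the whole plate as a character sequence instead of segmenting it first; decoding such per-frame class scores is typically done with CTC-style best-path decoding. A minimal sketch in plain Python, assuming class index 0 is the CTC blank (the scores and label indices below are purely illustrative):

```python
def ctc_greedy_decode(frame_scores, blank=0):
    """Best-path CTC decoding: take the argmax class per frame,
    collapse consecutive repeats, then drop blank symbols."""
    path = [max(range(len(scores)), key=scores.__getitem__)
            for scores in frame_scores]
    decoded, prev = [], None
    for label in path:
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded

# Per-frame scores over {blank, 'A' (1), 'B' (2)} from a recurrent labeller.
scores = [
    [0.1, 0.8, 0.1],    # 'A'
    [0.1, 0.7, 0.2],    # 'A' again -> collapsed as a repeat
    [0.9, 0.05, 0.05],  # blank separates genuinely repeated characters
    [0.2, 0.7, 0.1],    # 'A' -> second 'A' in the output
    [0.1, 0.2, 0.7],    # 'B'
]
print(ctc_greedy_decode(scores))  # [1, 1, 2] i.e. "AAB"
```

The blank class is what lets the decoder distinguish a repeated character ("AA") from one character spread over several frames.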
1707.03124 | 2768812640 | Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data using GAN to improve recognition accuracy. With a carefully-designed pipeline, we show that the answer is affirmative. First, a large-scale image set is generated using the generator of GAN, without manual annotation. Then, these images are fed to a deep convolutional neural network (DCNN) followed by a bidirectional recurrent neural network (BRNN) with long short-term memory (LSTM), which performs the feature learning and sequence labelling. Finally, the pre-trained model is fine-tuned on real images. Our experimental results on a few data sets demonstrate the effectiveness of using GAN images: an improvement of 7.5 over a strong baseline with moderate-sized real data being available. We show that the proposed framework achieves competitive recognition accuracy on challenging test datasets. We also leverage the depthwise separate convolution to construct a lightweight convolutional RNN, which is about half size and 2x faster on CPU. Combining this framework and the proposed pipeline, we make progress in performing accurate recognition on mobile and embedded devices. | Generative adversarial networks @cite_20 train a generative model and a discriminative model simultaneously via an adversarial process. Deep convolutional generative adversarial networks (DCGANs) @cite_32 provide a stable architecture for training GANs. Conditional GANs @cite_0 generate images with specific class labels by conditioning both the generator and the discriminator.
Beyond class labels, GANs can also be conditioned on text descriptions @cite_27 and images @cite_35 , enabling text-to-image or image-to-image translation. | {
"cite_N": [
"@cite_35",
"@cite_32",
"@cite_0",
"@cite_27",
"@cite_20"
],
"mid": [
"2552465644",
"2173520492",
"2125389028",
"2949999304",
""
],
"abstract": [
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
""
]
} |
1707.03124 | 2768812640 | Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data using GAN to improve recognition accuracy. With a carefully-designed pipeline, we show that the answer is affirmative. First, a large-scale image set is generated using the generator of GAN, without manual annotation. Then, these images are fed to a deep convolutional neural network (DCNN) followed by a bidirectional recurrent neural network (BRNN) with long short-term memory (LSTM), which performs the feature learning and sequence labelling. Finally, the pre-trained model is fine-tuned on real images. Our experimental results on a few data sets demonstrate the effectiveness of using GAN images: an improvement of 7.5 over a strong baseline with moderate-sized real data being available. We show that the proposed framework achieves competitive recognition accuracy on challenging test datasets. We also leverage the depthwise separate convolution to construct a lightweight convolutional RNN, which is about half size and 2x faster on CPU. Combining this framework and the proposed pipeline, we make progress in performing accurate recognition on mobile and embedded devices. | @cite_34 proposed CycleGAN, which learns a mapping between two domains without paired images and upon which our model builds. In order to train with unpaired images, CycleGAN introduces a cycle consistency loss to fulfill the idea that "if we translate from one domain to another and back again, we must arrive where we started". | {
"cite_N": [
"@cite_34"
],
"mid": [
"2962793481"
],
"abstract": [
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach."
]
} |
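The cycle consistency idea described in this row can be written as an L1 reconstruction penalty in both translation directions. A minimal NumPy sketch, where `G` and `F` stand in for the two generators (illustrative callables, not real networks) and `lam` is an assumed loss weight:

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """L_cyc = lam * (E||F(G(x)) - x||_1 + E||G(F(y)) - y||_1)."""
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> Y -> back to X
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> X -> back to Y
    return lam * (forward + backward)

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0])
# Perfectly inverse "generators" give zero cycle loss.
G = lambda a: a + 1.0
F = lambda a: a - 1.0
print(cycle_consistency_loss(G, F, x, y))  # 0.0
```

The loss is zero exactly when `F` inverts `G` (and vice versa) on the data, which is the "arrive where we started" constraint in numeric form.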
1707.03124 | 2768812640 | Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data using GAN to improve recognition accuracy. With a carefully-designed pipeline, we show that the answer is affirmative. First, a large-scale image set is generated using the generator of GAN, without manual annotation. Then, these images are fed to a deep convolutional neural network (DCNN) followed by a bidirectional recurrent neural network (BRNN) with long short-term memory (LSTM), which performs the feature learning and sequence labelling. Finally, the pre-trained model is fine-tuned on real images. Our experimental results on a few data sets demonstrate the effectiveness of using GAN images: an improvement of 7.5 over a strong baseline with moderate-sized real data being available. We show that the proposed framework achieves competitive recognition accuracy on challenging test datasets. We also leverage the depthwise separate convolution to construct a lightweight convolutional RNN, which is about half size and 2x faster on CPU. Combining this framework and the proposed pipeline, we make progress in performing accurate recognition on mobile and embedded devices. | Wasserstein GAN (WGAN) @cite_33 introduced a training algorithm with techniques that improve the stability of learning and prevent mode collapse. Beyond that, GANs have also achieved impressive results in image inpainting @cite_36 , representation learning @cite_40 and 3D object generation @cite_24 . | {
"cite_N": [
"@cite_36",
"@cite_40",
"@cite_33",
"@cite_24"
],
"mid": [
"2342877626",
"2432004435",
"",
"2949551726"
],
"abstract": [
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.",
"",
"We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods."
]
} |
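Among the stabilization techniques this row refers to, WGAN trains a critic on a Wasserstein-style objective and clips its weights to enforce a Lipschitz constraint. A minimal NumPy sketch; the one-layer linear `critic` and the clipping bound `c=0.01` are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def critic_loss(critic, real, fake):
    """The WGAN critic maximizes E[f(real)] - E[f(fake)];
    we return the negation so it can be minimized."""
    return -(np.mean(critic(real)) - np.mean(critic(fake)))

def clip_weights(w, c=0.01):
    """Weight clipping keeps the critic (roughly) Lipschitz."""
    return np.clip(w, -c, c)

w = clip_weights(np.array([0.5, -0.3, 0.004]))  # -> [0.01, -0.01, 0.004]
critic = lambda x: x @ w                        # a one-layer linear critic
real = np.array([[1.0, 1.0, 1.0]])
fake = np.array([[0.0, 0.0, 0.0]])
print(critic_loss(critic, real, fake))
```

In a full training loop, `clip_weights` would be applied to every critic parameter after each optimizer step, alternating with generator updates.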
1707.03124 | 2768812640 | Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data using GAN to improve recognition accuracy. With a carefully-designed pipeline, we show that the answer is affirmative. First, a large-scale image set is generated using the generator of GAN, without manual annotation. Then, these images are fed to a deep convolutional neural network (DCNN) followed by a bidirectional recurrent neural network (BRNN) with long short-term memory (LSTM), which performs the feature learning and sequence labelling. Finally, the pre-trained model is fine-tuned on real images. Our experimental results on a few data sets demonstrate the effectiveness of using GAN images: an improvement of 7.5 over a strong baseline with moderate-sized real data being available. We show that the proposed framework achieves competitive recognition accuracy on challenging test datasets. We also leverage the depthwise separate convolution to construct a lightweight convolutional RNN, which is about half size and 2x faster on CPU. Combining this framework and the proposed pipeline, we make progress in performing accurate recognition on mobile and embedded devices. | However, to date, few results have been reported that demonstrate the effectiveness of GAN-generated images in supervised learning. We propose to use CycleGAN @cite_34 and techniques from WGAN to generate labeled images, and show that these generated images indeed help improve recognition performance. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2962793481"
],
"abstract": [
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach."
]
} |
1707.03124 | 2768812640 | Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data using GAN to improve recognition accuracy. With a carefully-designed pipeline, we show that the answer is affirmative. First, a large-scale image set is generated using the generator of GAN, without manual annotation. Then, these images are fed to a deep convolutional neural network (DCNN) followed by a bidirectional recurrent neural network (BRNN) with long short-term memory (LSTM), which performs the feature learning and sequence labelling. Finally, the pre-trained model is fine-tuned on real images. Our experimental results on a few data sets demonstrate the effectiveness of using GAN images: an improvement of 7.5 over a strong baseline with moderate-sized real data being available. We show that the proposed framework achieves competitive recognition accuracy on challenging test datasets. We also leverage the depthwise separate convolution to construct a lightweight convolutional RNN, which is about half size and 2x faster on CPU. Combining this framework and the proposed pipeline, we make progress in performing accurate recognition on mobile and embedded devices. | Synthetic data have been used to achieve strong performance in text localisation @cite_11 and scene text recognition @cite_23 , without manual annotation. Additional synthetic training data @cite_41 yield improvements in person detection @cite_18 , font recognition @cite_1 and semantic segmentation @cite_12 .
However, knowledge-based approaches, which hard-code knowledge about what real images look like, are fragile, as the generated examples often look like toys to the discriminative model when compared with real images. @cite_4 used unlabeled samples generated by a vanilla DCGAN for semi-supervised learning, which slightly improved person re-identification performance. In this work, we combine the knowledge-based and learning-based approaches to generate labeled license plates from the generator of GANs for supervised training. For comparison, we also perform semi-supervised learning using unlabeled GAN images. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_41",
"@cite_1",
"@cite_23",
"@cite_12",
"@cite_11"
],
"mid": [
"2040898095",
"",
"2462401346",
"2077532029",
"1491389626",
"2431874326",
"2952302849"
],
"abstract": [
"Person detection in complex real-world scenes is a challenging problem. State-of-the-art methods typically use supervised learning relying on significant amounts of training data to achieve good detection results. However, labeling training data is tedious, expensive, and error-prone. This paper presents a novel method to improve detection performance by supplementing real-world data with synthetically generated training data. We consider the case of detecting people in crowded scenes within an AdaBoost-framework employing Haar and Histogram-of-Oriented-Gradients (HOG) features. Our evaluations on real-world video sequences of crowded scenes with significant occlusions show that the combination of real and synthetic training data significantly improves overall detection results.",
"",
"Class imbalance problems, where the number of samples in each class is unequal, is prevalent in numerous real world machine learning applications. Traditional methods which are biased toward the majority class are ineffective due to the relative severity of misclassifying rare events. This paper proposes a novel evolutionary cluster-based oversampling ensemble framework, which combines a novel cluster-based synthetic data generation method with an evolutionary algorithm (EA) to create an ensemble. The proposed synthetic data generation method is based on contemporary ideas of identifying oversampling regions using clusters. The novel use of EA serves a twofold purpose of optimizing the parameters of the data generation method while generating diverse examples leveraging on the characteristics of EAs, reducing overall computational cost. The proposed method is evaluated on a set of 40 imbalance datasets obtained from the University of California, Irvine, database, and outperforms current state-of-the-art ensemble algorithms tackling class imbalance problems.",
"As font is one of the core design concepts, automatic font identification and similar font suggestion from an image or photo has been on the wish list of many designers. We study the Visual Font Recognition (VFR) problem [4] LFE, and advance the state-of-the-art remarkably by developing the DeepFont system. First of all, we build up the first available large-scale VFR dataset, named AdobeVFR, consisting of both labeled synthetic data and partially labeled real-world data. Next, to combat the domain mismatch between available training and testing data, we introduce a Convolutional Neural Network (CNN) decomposition approach, using a domain adaptation technique based on a Stacked Convolutional Auto-Encoder (SCAE) that exploits a large corpus of unlabeled real-world text images combined with synthetic data preprocessed in a specific way. Moreover, we study a novel learning-based model compression approach, in order to reduce the DeepFont model size without sacrificing its performance. The DeepFont system achieves an accuracy of higher than 80 (top-5) on our collected dataset, and also produces a good font similarity measure for font selection and suggestion. We also achieve around 6 times compression of the model without any visible loss of recognition accuracy.",
"In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine -- synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one \"reading\" words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.",
"Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. Recent revolutionary results of deep convolutional neural networks (DCNNs) foreshadow the advent of reliable classifiers to perform such visual tasks. However, DCNNs require learning of many parameters from raw images, thus, having a sufficient amount of diverse images with class annotations is needed. These annotations are obtained via cumbersome, human labour which is particularly challenging for semantic segmentation since pixel-level annotations are required. In this paper, we propose to use a virtual world to automatically generate realistic synthetic images with pixel-level annotations. Then, we address the question of how useful such data can be for semantic segmentation – in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments with DCNNs that show how the inclusion of SYNTHIA in the training stage significantly improves performance on the semantic segmentation task.",
"In this paper we introduce a new method for text detection in natural images. The method comprises two contributions: First, a fast and scalable engine to generate synthetic images of text in clutter. This engine overlays synthetic text to existing background images in a natural way, accounting for the local 3D scene geometry. Second, we use the synthetic images to train a Fully-Convolutional Regression Network (FCRN) which efficiently performs text detection and bounding-box regression at all locations and multiple scales in an image. We discuss the relation of FCRN to the recently-introduced YOLO detector, as well as other end-to-end object detection systems based on deep learning. The resulting detection network significantly outperforms current methods for text detection in natural images, achieving an F-measure of 84.2% on the standard ICDAR 2013 benchmark. Furthermore, it can process 15 images per second on a GPU."
]
} |
1707.02812 | 2735135478 | Adversarial samples are strategically modified samples, crafted with the purpose of fooling a classifier at hand. An attacker introduces specially crafted adversarial samples to a deployed classifier, which misclassifies them. However, the samples are perceived to be drawn from entirely different classes, and thus it becomes hard to detect the adversarial samples. Most prior works have focused on synthesizing adversarial samples in the image domain. In this paper, we propose a new method of crafting adversarial text samples by modifying the original samples. Modifications are made by deleting or replacing the important or salient words in the text or by introducing new words into the text sample. Our algorithm works best for datasets that have sub-categories within each class of examples. While crafting adversarial samples, one of the key constraints is to generate meaningful sentences that can pass off as legitimate from a language (English) viewpoint. Experimental results on the IMDB movie review dataset for sentiment analysis and the Twitter dataset for gender detection show the efficiency of our proposed method. | @cite_7 shows that the smoothness assumption of the kernels is not correct, which makes the input-output mapping of deep neural networks discontinuous. Adversarial samples are a result of this discontinuity and lie in pockets of the manifold of the DNNs. Thus, adversarial samples form hard negative examples even though they lie within the distribution of the inputs provided to a DNN. A simple optimization problem that maximizes the network's prediction error is sufficient to create adversarial samples for images. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1673923490"
],
"abstract": [
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input."
]
} |
1707.02812 | 2735135478 | Adversarial samples are strategically modified samples, crafted with the purpose of fooling a classifier at hand. An attacker introduces specially crafted adversarial samples to a deployed classifier, which misclassifies them. However, the samples are perceived to be drawn from entirely different classes, and thus it becomes hard to detect the adversarial samples. Most prior works have focused on synthesizing adversarial samples in the image domain. In this paper, we propose a new method of crafting adversarial text samples by modifying the original samples. Modifications are made by deleting or replacing the important or salient words in the text or by introducing new words into the text sample. Our algorithm works best for datasets that have sub-categories within each class of examples. While crafting adversarial samples, one of the key constraints is to generate meaningful sentences that can pass off as legitimate from a language (English) viewpoint. Experimental results on the IMDB movie review dataset for sentiment analysis and the Twitter dataset for gender detection show the efficiency of our proposed method. | According to @cite_3 , an attacker can successfully generate adversarial samples. The attacker can train his own model using similar input data and create adversarial samples using the FGSM method. These adversarial samples can also confuse the deployed classifier with high probability. Thus the attacker can succeed with almost no information about the deployed model. The term 'gradient masking' was introduced and its application shown on real-world images such as traffic signs, where the algorithm performs well. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2951807304"
],
"abstract": [
"Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder."
]
} |
1707.02647 | 2734687012 | Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped Internet of Things (IoT) devices permeate into every aspect of modern life, the ability to execute CNN inference, a computationally intensive application, on resource constrained devices has become increasingly important. In this context, we present Cappuccino, a framework for synthesis of efficient inference software targeting mobile System-on-Chips (SoCs). We propose techniques for efficient parallelization of CNN inference targeting mobile SoCs, and explore the underlying tradeoffs. Experiments with different CNNs on three mobile devices demonstrate the effectiveness of our approach. | Table compares the performance of software synthesized by Cappuccino with the state-of-the-art work @cite_7 . The proposed solution under exact arithmetic improves the execution time by 1.38X. In addition, when the synthesized software is both parallel and imprecise, it shows up to 11.47X speedup compared to CNNDroid @cite_7 . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2525951180"
],
"abstract": [
"Many mobile applications running on smartphones and wearable devices would potentially benefit from the accuracy and scalability of deep CNN-based machine learning algorithms. However, performance and energy consumption limitations make the execution of such computationally intensive algorithms on mobile devices prohibitive. We present a GPU-accelerated library, dubbed CNNdroid [1], for execution of trained deep CNNs on Android-based mobile devices. Empirical evaluations show that CNNdroid achieves up to 60X speedup and 130X energy saving on current mobile devices. The CNNdroid open source library is available for download at https://github.com/ENCP/CNNdroid"
]
} |
1707.02892 | 2735383278 | Multi-task learning leverages potential correlations among related tasks to extract common features and yield performance gains. However, most previous works only consider simple or weak interactions, thereby failing to model complex correlations among three or more tasks. In this paper, we propose a multi-task learning architecture with four types of recurrent neural layers to fuse information across multiple related tasks. The architecture is structurally flexible and considers various interactions among tasks, which can be regarded as a generalized case of many previous works. Extensive experiments on five benchmark datasets for text classification show that our model can significantly improve performances of related tasks with additional information from others. | @cite_6 belongs to and utilizes shared lookup tables for common features, followed by task-specific neural layers for several traditional NLP tasks such as part-of-speech tagging and semantic parsing. They use a fixed-size window to deal with variable-length texts, a problem that can be better handled by recurrent neural networks. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2117130368"
],
"abstract": [
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance."
]
} |
1707.02657 | 2735427346 | The enormous amount of texts published daily by Internet users has fostered the development of methods to analyze this content in several natural language processing areas, such as sentiment analysis. The main goal of this task is to classify the polarity of a message. Even though many approaches have been proposed for sentiment analysis, some of the most successful ones rely on the availability of large annotated corpus, which is an expensive and time-consuming process. In recent years, distant supervision has been used to obtain larger datasets. So, inspired by these techniques, in this paper we extend such approaches to incorporate popular graphic symbols used in electronic messages, the emojis, in order to create a large sentiment corpus for Portuguese. Trained on almost one million tweets, several models were tested in both same domain and cross-domain corpora. Our methods obtained very competitive results in five annotated corpora from mixed domains (Twitter and product reviews), which proves the domain-independent property of such approach. In addition, our results suggest that the combination of emoticons and emojis is able to properly capture the sentiment of a message. | Currently, methods devised to perform sentiment analysis and, more specifically, polarity classification range from machine learning to lexical-based approaches. While machine learning methods have proved useful in scenarios where a large amount of training data is available along with top quality NLP resources (such as taggers, parsers and others), they usually have low performance in opposite scenarios. Since most non-English languages face resource limitations, for example Portuguese, lexical-based approaches have become very popular. Some works following this line are @cite_24 @cite_30 @cite_1 . | {
"cite_N": [
"@cite_24",
"@cite_1",
"@cite_30"
],
"mid": [
"2408607610",
"2030841861",
"2182387343"
],
"abstract": [
"Opinion Lexicons are linguistic resources annotated with semantic orientation of terms (positive and negative) and are important for opinion mining tasks. In the literature we see a variety of proposals for the construction of opinion lexicons using different linguistic resources and techniques. In this work, we propose and evaluate the integration of such linguistic resources to create a single lexicon for the Portuguese language.",
"This paper presents some results on lexicon-based classification of sentiment polarity in web reviews of products written in Brazilian Portuguese. They represent a first step towards a robust opinion miner for reviews of technology products. The evaluation shows the performance of 3 different sentiment lexicons combined with simple strategies. The risk of considering the rating provided by the writers for the purpose of evaluating the algorithms is also discussed. The results show that the best combination is the version of the algorithm that also handles negation and intensification and uses the sentiment lexicon SentiLex. The average F-measure reached 0.73.",
"This work presents an evaluation of the Brazilian Portuguese LIWC dictionary for Sentiment Analysis. This evaluation is conducted by comparison against two other sentiment resources for Portuguese language: Opinion Lexicon and SentiLex. We conducted an intrinsic and an extrinsic evaluations and show how LIWC dictionary could be used in sentiment analysis projects."
]
} |
1707.02657 | 2735427346 | The enormous amount of texts published daily by Internet users has fostered the development of methods to analyze this content in several natural language processing areas, such as sentiment analysis. The main goal of this task is to classify the polarity of a message. Even though many approaches have been proposed for sentiment analysis, some of the most successful ones rely on the availability of large annotated corpus, which is an expensive and time-consuming process. In recent years, distant supervision has been used to obtain larger datasets. So, inspired by these techniques, in this paper we extend such approaches to incorporate popular graphic symbols used in electronic messages, the emojis, in order to create a large sentiment corpus for Portuguese. Trained on almost one million tweets, several models were tested in both same domain and cross-domain corpora. Our methods obtained very competitive results in five annotated corpora from mixed domains (Twitter and product reviews), which proves the domain-independent property of such approach. In addition, our results suggest that the combination of emoticons and emojis is able to properly capture the sentiment of a message. | Machine learning approaches rely on document representations, normally vectorial ones with features like @math -grams @cite_35 , a simple example is the bag-of-words model. Once a representation has been chosen, several classification methods are available, such as Support Vector Machines (SVM), Naive Bayes (NB), Maximum Entropy (MaxEnt), Conditional Random Fields (CRF), and ensembles of classifiers @cite_21 . | {
"cite_N": [
"@cite_35",
"@cite_21"
],
"mid": [
"2097726431",
"2752201871"
],
"abstract": [
"An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.",
"This paper discusses the fourth year of the ”Sentiment Analysis in Twitter Task”. SemEval-2016 Task 4 comprises five subtasks, three of which represent a significant departure from previous editions. The first two subtasks are reruns from prior years and ask to predict the overall sentiment, and the sentiment towards a topic in a tweet. The three new subtasks focus on two variants of the basic “sentiment classification in Twitter” task. The first variant adopts a five-point scale, which confers an ordinal character to the classification task. The second variant focuses on the correct estimation of the prevalence of each class of interest, a task which has been called quantification in the supervised learning literature. The task continues to be very popular, attracting a total of 43 teams."
]
} |
1707.02657 | 2735427346 | The enormous amount of texts published daily by Internet users has fostered the development of methods to analyze this content in several natural language processing areas, such as sentiment analysis. The main goal of this task is to classify the polarity of a message. Even though many approaches have been proposed for sentiment analysis, some of the most successful ones rely on the availability of large annotated corpus, which is an expensive and time-consuming process. In recent years, distant supervision has been used to obtain larger datasets. So, inspired by these techniques, in this paper we extend such approaches to incorporate popular graphic symbols used in electronic messages, the emojis, in order to create a large sentiment corpus for Portuguese. Trained on almost one million tweets, several models were tested in both same domain and cross-domain corpora. Our methods obtained very competitive results in five annotated corpora from mixed domains (Twitter and product reviews), which proves the domain-independent property of such approach. In addition, our results suggest that the combination of emoticons and emojis is able to properly capture the sentiment of a message. | Paragraph vectors @cite_5 (also known as ) can be understood as a generalization of for larger blocks of text, such as paragraphs or documents. This technique has obtained state-of-the-art results on sentiment analysis for two datasets of movie reviews @cite_5 . The main goal of these dense representations is to predict the words in those blocks. Two models were proposed by Le and Mikolov @cite_5 , in which one of them accounts for the word order. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2949547296"
],
"abstract": [
"Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks."
]
} |
1707.02789 | 2735963460 | As parallelism becomes critically important in the semiconductor technology, high-performance computing, and cloud applications, parallel network systems will increasingly follow suit. Today, parallelism is an essential architectural feature of 40 100 400 Gigabit Ethernet standards, whereby high speed Ethernet systems are equipped with multiple parallel network interfaces. This creates new network topology abstractions and new technology requirements: instead of a single high capacity network link, multiple Ethernet end-points and interfaces need to be considered together with multiple links in form of discrete parallel paths. This new paradigm is enabling implementations of various new features to improve overall system performance. In this paper, we analyze the performance of parallel network systems with network coding. In particular, by using random LNC (RLNC), - a code without the need for decoding, we can make use of the fact that we have codes that are both distributed (removing the need for coordination or optimization of resources) and composable (without the need to exchange code information), leading to a fully stateless operation. We propose a novel theoretical modeling framework, including derivation of the upper and lower bounds as well as an expected value of the differential delay of parallel paths, and the resulting queue size at the receiver. The results show a great promise of network system parallelism in combination with RLNC: with a proper set of design parameters, the differential delay and the buffer size at the Ethernet receiver can be reduced significantly, while the cross-layer design and routing can be greatly simplified. | Previous work on linear network coding focused in general on improving network throughput and reliability. 
However, a significant body of work in the last decade (e.g., @cite_24 @cite_32 @cite_3 @cite_23 @cite_11 ) addressed end-to-end delay improvement with network coding in delay-constrained networks, in both broadcast and unicast scenarios. In @cite_24 , for instance, the delay performance of network coding was studied and compared to scheduling methods. Lucani et al., in @cite_7 , tailored coding and feedback to reduce the expected delay. Paper @cite_4 studied the problem of minimizing the mean completion delay for instantly decodable network coding. In @cite_28 @cite_11 , the authors showed that network coding can outperform optimal routing in the single-unicast setting. More recent work, like @cite_25 , presented a streaming code that uses forward error correction to reduce in-order delivery delay over multiple parallel wireless networks. However, none of these works addresses delay in parallel network systems. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_32",
"@cite_3",
"@cite_24",
"@cite_23",
"@cite_25",
"@cite_11"
],
"mid": [
"1965556172",
"2116673244",
"2534339435",
"",
"",
"2116661213",
"",
"2518569826",
""
],
"abstract": [
"In this paper, we consider the problem of minimizing the mean completion delay in wireless broadcast for instantly decodable network coding. We first formulate the problem as a stochastic shortest path (SSP) problem. Although finding the packet selection policy using SSP is intractable, we use this formulation to draw the theoretical properties of efficient selection algorithms. Based on these properties, we propose a simple online selection algorithm that efficiently minimizes the mean completion delay of a frame of broadcast packets, compared to the random and greedy selection algorithms with a similar computational complexity. Simulation results show that our proposed algorithm indeed outperforms these random and greedy selection algorithms.",
"In networks with large latency, feedback about received packets may lag considerably the transmission of the original packets, limiting the feedback's usefulness. Moreover, time duplex constraints may entail that receiving feedback may be costly. In this work, we consider tailoring feedback and coding jointly in such settings to reduce the expected delay for successful in order reception of packets. We find that, in certain applications, judicious choices provide results that are close to those that would be obtained with a full-duplex system. We study two cases of data transmission: one-to-all broadcast and all-to-all broadcast. We also analyze important practical considerations weighing the trade off between performance and complexity in applications that rely on random linear network coding. Finally, we study the problem of transmission of information under the large latency and time duplexing constraints in the presence of random packet arrivals. In particular, we analyze the problem of using a batch by batch approach and an online network coding approach with Poisson arrivals. We present numerical results to illustrate the performance under a variety of scenarios and show the benefits of the proposed schemes as compared to typical ARQ and scheduling schemes.",
"This paper considers network communications under a hard timeliness constraint , where a source node streams perishable information to a destination node over a directed acyclic graph subject to a hard delay constraint. Transmission along any edge incurs unit delay, and it is required that every information bit generated at the source at the beginning of time @math to be received and recovered by the destination at the end of time @math , where @math is the maximum allowed end-to-end delay. We study the corresponding delay-constrained unicast capacity problem. This paper presents the first example showing that network coding (NC) can achieve strictly higher delay-constrained throughput than routing even for the single unicast setting and the NC gain can be arbitrarily close to 2 in some instances. This is in sharp contrast to the delay-unconstrained ( @math ) single-unicast case where the classic min-cut max-flow theorem implies that coding cannot improve throughput over routing. Motivated by the above findings, a series of investigation on the delay-constrained capacity problem is also made, including: 1) an equivalent multiple-unicast representation based on a time-expanded graph approach; 2) a new delay-constrained capacity upper bound and its connections to the existing routing-based results [ 2011]; 3) an example showing that the penalty of using random linear NC can be unbounded; and 4) a counter example of the tree-packing Edmonds’ theorem in the new delay-constrained setting. Built upon the time-expanded graph approach, we also discuss how our results can be readily extended to cyclic networks. Overall, our results suggest that delay-constrained communication is fundamentally different from the well-understood delay-unconstrained one and call for investigation participation.",
"",
"",
"This paper analyzes the gains in delay performance resulting from network coding. We consider a model of file transmission to multiple receivers from a single base station. Using this model, we show that gains in delay performance from network coding with or without channel side information can be substantial compared to conventional scheduling methods for downlink transmission.",
"",
"The capability of mobile devices to use multiple interfaces to support a single session is becoming more prevalent. Prime examples include the desire to implement WiFi offloading and the introduction of 5G. Furthermore, an increasing fraction of Internet traffic is becoming delay sensitive. These two trends drive the need to investigate methods that enable communication over multiple parallel heterogeneous networks, while also ensuring that delay constraints are met. This paper approaches these challenges using a multi-path streaming code that uses forward error correction to reduce the in-order delivery delay of packets in networks with poor link quality and transient connectivity. A simple analysis is developed that provides a good approximation of the in-order delivery delay. Furthermore, numerical results help show that the delay penalty of communicating over multiple paths is insignificant when considering the potential throughput gains obtained through the fusion of multiple networks.",
""
]
} |
1707.02789 | 2735963460 | As parallelism becomes critically important in the semiconductor technology, high-performance computing, and cloud applications, parallel network systems will increasingly follow suit. Today, parallelism is an essential architectural feature of 40 100 400 Gigabit Ethernet standards, whereby high speed Ethernet systems are equipped with multiple parallel network interfaces. This creates new network topology abstractions and new technology requirements: instead of a single high capacity network link, multiple Ethernet end-points and interfaces need to be considered together with multiple links in form of discrete parallel paths. This new paradigm is enabling implementations of various new features to improve overall system performance. In this paper, we analyze the performance of parallel network systems with network coding. In particular, by using random LNC (RLNC), - a code without the need for decoding, we can make use of the fact that we have codes that are both distributed (removing the need for coordination or optimization of resources) and composable (without the need to exchange code information), leading to a fully stateless operation. We propose a novel theoretical modeling framework, including derivation of the upper and lower bounds as well as an expected value of the differential delay of parallel paths, and the resulting queue size at the receiver. The results show a great promise of network system parallelism in combination with RLNC: with a proper set of design parameters, the differential delay and the buffer size at the Ethernet receiver can be reduced significantly, while the cross-layer design and routing can be greatly simplified. | Network-coded multipath routing has been applied for erasure correction @cite_8 , where the combined information from multiple paths is transferred on a few additional (parallel) paths. The additional information was used to recover the missing information during decoding. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2094226915"
],
"abstract": [
"Dispersity Routing is a multi-path routing rule that the author proposed for the ARPA-net more than 35 years ago. Since that time, dispersity routing and other multi-path routing techniques, have been proposed for many networks, for different reasons. However, dispersity routing isn't used in any major networks. We will present some of the reasons why dispersity routing was considered and show how changes in networks have made dispersity routing less useful and more difficult to implement. Dispersity routing is currently being proposed for MANET's and delay tolerant networks. Based on our experience with wired networks we will recommend how multi-path routing techniques should, and should not, be used in wireless networks."
]
} |
1707.02789 | 2735963460 | As parallelism becomes critically important in the semiconductor technology, high-performance computing, and cloud applications, parallel network systems will increasingly follow suit. Today, parallelism is an essential architectural feature of 40/100/400 Gigabit Ethernet standards, whereby high speed Ethernet systems are equipped with multiple parallel network interfaces. This creates new network topology abstractions and new technology requirements: instead of a single high capacity network link, multiple Ethernet end-points and interfaces need to be considered together with multiple links in the form of discrete parallel paths. This new paradigm is enabling implementations of various new features to improve overall system performance. In this paper, we analyze the performance of parallel network systems with network coding. In particular, by using random LNC (RLNC) - a code without the need for decoding - we can make use of the fact that we have codes that are both distributed (removing the need for coordination or optimization of resources) and composable (without the need to exchange code information), leading to a fully stateless operation. We propose a novel theoretical modeling framework, including derivation of the upper and lower bounds as well as an expected value of the differential delay of parallel paths, and the resulting queue size at the receiver. The results show a great promise of network system parallelism in combination with RLNC: with a proper set of design parameters, the differential delay and the buffer size at the Ethernet receiver can be reduced significantly, while the cross-layer design and routing can be greatly simplified. | In optical networks, our previous work @cite_31 proposed for the first time a scheme for high-speed Ethernet using multipath routing. Paper @cite_2 focused on enabling parallel transmission by linear network coding without consideration of data link layer technology. 
In @cite_29 we presented a preliminary theoretical model that uses 2-parallel transmission and RLNC to achieve fault tolerance and better spectral efficiency in the optical layer. Finally, in @cite_14 , we showed that utilizing RLNC significantly improves reliability and security in parallel optical transmission systems. | {
"cite_N": [
"@cite_14",
"@cite_31",
"@cite_29",
"@cite_2"
],
"mid": [
"2169892591",
"2952845537",
"1577543956",
"2111488722"
],
"abstract": [
"Recently, physical layer security in the optical layer has gained significant traction. Security threats in optical networks generally impact the reliability of optical transmission. Linear Network Coding (LNC) can protect from both security threats in the form of eavesdropping and faulty transmission due to jamming. LNC can mix original data to become incomprehensible for an attacker and also extend original data by coding redundancy, thus protecting data from errors injected via jamming attacks. In this paper, we study the effectiveness of LNC to balance reliable transmission and security in optical networks. To this end, we combine the coding process with data flow parallelization of the source and propose and compare optimal and randomized path selection methods for parallel transmission. The study shows that a combination of data parallelization, LNC and randomization of path selection increases security and reliability of the transmission. We analyze the so-called catastrophic security threat of the network and show that in case of conventional transmission scheme and in absence of LNC, an attacker could eavesdrop or disrupt the whole secret data by accessing only one edge in a network.",
"Parallel transmission, as defined in high-speed Ethernet standards, enables to use less expensive optoelectronics and offers backwards compatibility with legacy Optical Transport Network (OTN) infrastructure. However, optimal parallel transmission does not scale to large networks, as it requires computationally expensive multipath routing algorithms to minimize differential delay, and thus the required buffer size, optimize traffic splitting ratio, and ensure frame synchronization. In this paper, we propose a novel framework for high-speed Ethernet, which we refer to as network coded parallel transmission, capable of effective buffer management and frame synchronization without the need for complex multipath algorithms in the OTN layer. We show that using network coding can reduce the delay caused by packet reordering at the receiver, thus requiring a smaller overall buffer size, while improving the network throughput. We design the framework in full compliance with high-speed Ethernet standards specified in IEEE802.3ba and present solutions for network encoding, data structure of coded parallel transmission, buffer management and decoding at the receiver side. The proposed network coded parallel transmission framework is simple to implement and represents a potential major breakthrough in the system design of future high-speed Ethernet.",
"As optical networks evolve towards more dynamicity and an ever more efficient and elastic spectrum utilization, a more integrated, fault tolerant and system efficient design is becoming critical. To increase efficiency of spectral resource in bit rate per Hz (bit/s/Hz), high-level modulation formats are used, challenged by the accompanying optical impairments and the resulting limitation of optical reach. Previous work has addressed the issue of optical reach and transmission fault tolerance in the physical layer by deploying various FEC schemes and by a careful design of optical transceivers and links. This paper uses a different approach, applicable to link and networking layers. We propose a novel theoretical framework, whereby a randomized linear network coding (LNC) is applied to the main optical path, and in parallel, an auxiliary optical path is used at much lower transmission speeds, i.e., in addition to the main path. With the reception of the auxiliary path, as we analytically show, the system is highly tolerant to bit errors and packet loss caused by optical impairments in the main path, thereby alleviating the constraints on optical transmission quality and indirectly achieving better optical reach and spectral efficiency. The results are shown for a case study of high-speed Ethernet end-system transmitted over optical OFDM networks, which due to the inherent system-level parallelism in both networks, present one of the most interesting candidate technologies for the proposed method to yield best performance.",
"Parallel transmission is a known technique of transmitting flows over multiple paths from a source towards the same destination. In high-speed Ethernet standards, for instance, large bandwidth flows are inverse-multiplexed into multiple lower-speed flows and transmitted in parallel. However, when flows traverse different paths, attention needs to be paid to the resulting differential delay, which requires computationally expensive path optimizations and large buffering at the receiver. In this paper, we show analytically that linear network coding can significantly reduce the buffering in high-speed Ethernet systems at a price of en-/decoding overhead, while relaxing the requirements on path optimality. We implement the proposed decoding buffer model according to the IEEE 802.3ba standard, and show that linear network coding reduces the buffer size up to 40% as compared to systems without coding. With linear network coding, input interfaces of the destination node can deploy relatively smaller buffers, which is critical for wider practical deployment of high-speed Ethernet systems at 100 Gbps and beyond."
]
} |
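The queueing benefit described in the rows above comes from the order-independence of random linear network coding: the receiver can decode as soon as it holds any k linearly independent combinations, no matter which parallel path delivered them or in what order. The sketch below is an illustration only, not the authors' framework: it works over GF(2) (a plain XOR code) for brevity, whereas practical RLNC systems typically operate over GF(2^8), and the function names are hypothetical.

```python
import random

def rlnc_encode(packets, n_coded, seed=None):
    """Encode k equal-length source packets into n_coded random linear
    combinations over GF(2), i.e. XORs of random subsets of the sources."""
    rng = random.Random(seed)
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):            # avoid the useless all-zero combination
            coeffs[rng.randrange(k)] = 1
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Incremental Gaussian elimination over GF(2). Returns the k source
    packets once k independent combinations arrive (in any order),
    or None if the received set has rank < k."""
    rows = []                          # (coeffs, payload, pivot column)
    for coeffs, payload in coded:
        coeffs = list(coeffs)
        payload = bytearray(payload)
        for pc, pp, pivot in rows:     # forward-eliminate known pivots
            if coeffs[pivot]:
                coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                payload = bytearray(a ^ b for a, b in zip(payload, pp))
        pivot = next((i for i, c in enumerate(coeffs) if c), None)
        if pivot is not None:          # innovative packet: keep it
            rows.append((coeffs, payload, pivot))
        if len(rows) == k:
            break
    if len(rows) < k:
        return None
    rows.sort(key=lambda r: r[2])
    for i in range(k - 1, 0, -1):      # back-substitution
        ci, pi_, pv = rows[i]
        for j in range(i):
            cj, pj, pvj = rows[j]
            if cj[pv]:
                rows[j] = ([a ^ b for a, b in zip(cj, ci)],
                           bytearray(x ^ y for x, y in zip(pj, pi_)),
                           pvj)
    return [bytes(p) for _, p, _ in rows]
```

Because decoding depends only on rank, not on arrival order, the receiver needs no per-path reordering buffer for differential delay, which is the effect the modeling framework above quantifies.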
1707.02327 | 2735173242 | Open source is experiencing a renaissance period, due to the appearance of modern platforms and workflows for developing and maintaining public code. As a result, developers are creating open source software at speeds never seen before. Consequently, these projects are also facing unprecedented mortality rates. To better understand the reasons for the failure of modern open source projects, this paper describes the results of a survey with the maintainers of 104 popular GitHub systems that have been deprecated. We provide a set of nine reasons for the failure of these open source projects. We also show that some maintenance practices---specifically the adoption of contributing guidelines and continuous integration---have an important association with a project failure or success. Finally, we discuss and reveal the principal strategies developers have tried to overcome the failure of the studied projects. | @cite_1 analyze 406 projects from FreshMeat (a deprecated open source repository). For each project, they compute a set of measures along four main dimensions: community of developers, community of users, modularity and documentation, and software evolution. They report that most projects (57%) failed, while only a few (15%) succeeded, i.e., continued improving their popularity and number of users and developers. However, they do not investigate the reasons for the project failures. @cite_5 discuss the attributes and characteristics of inactive projects on SourceForge. They report that more than 10,000 projects are inactive (as of November 2012). They also compare the maintainability of inactive projects with other project categories (active and dormant), using the maintainability index (MI) @cite_2 . They conclude that the majority of inactive systems are abandoned with a similar or increased maintainability, in comparison to their initial status. However, there are serious concerns about using MI as a maintainability predictor @cite_13 . | {
"cite_N": [
"@cite_13",
"@cite_5",
"@cite_1",
"@cite_2"
],
"mid": [
"2008155546",
"1894804819",
"2131265794",
"2114728368"
],
"abstract": [
"We performed an empirical study of the relation between technical quality of software products and the issue resolution performance of their maintainers. In particular, we tested the hypothesis that ratings for source code maintainability, as employed by the Software Improvement Group (SIG) quality model, are correlated with ratings for issue resolution speed. We tested the hypothesis for issues of type defect and of type enhancement. This study revealed that all but one of the metrics of the SIG quality model show a significant positive correlation with the resolution speed of defects, enhancements, or both.",
"Open Source Software (OSS) proponents suggest that when developers lose interest in their project, their last duty is to “hand it off to a competent successor.” However, the mechanisms of such a hand-off are not clear, or widely known among OSS developers. As a result, many OSS projects, after a certain long period of evolution, stop evolving, in fact becoming “inactive” or “abandoned” projects. This paper presents an analysis of the population of projects contained within one of the largest OSS repositories available (SourceForge.net), in order to describe how projects abandoned by their developers can be identified, and to discuss the attributes and characteristics of these inactive projects. In particular, the paper attempts to differentiate projects that experienced maintainability issues from those that are inactive for other reasons, in order to be able to correlate common characteristics to the “failure” of these projects.",
"Most empirical studies about Open Source (OS) projects or products are vertical and usually deal with the flagship, successful projects. There is a substantial lack of horizontal studies to shed light on the whole population of projects, including failures. This paper presents a horizontal study aimed at characterizing OS projects. We analyze a sample of around 400 projects from a popular OS project repository. Each project is characterized by a number of attributes. We analyze these attributes statically and over time. The main results show that few projects are capable of attracting a meaningful community of developers. The majority of projects is made by few (in many cases one) person with a very slow pace of evolution.",
"It is noted that the factors of software that determine or influence maintainability can be organized into a hierarchical structure of measurable attributes. For each of these attributes the authors show a metric definition consistent with the published definitions of the software characteristic being measured. The result is a tree structure of maintainability metrics which can be used for purposes of evaluating the relative maintainability of the software system. The authors define metrics for measuring the maintainability of a target software system and discuss how those metrics can be combined into a single index of maintainability."
]
} |
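The maintainability index (MI) that @cite_5 uses to compare inactive projects combines Halstead volume, cyclomatic complexity, and lines of code into a single score. The sketch below uses the commonly cited three-metric coefficients; tools differ in the exact variant (some add a comment-percentage term or rescale to 0-100), so treat this as an assumption-laden illustration rather than the exact metric of @cite_2.

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Classic three-metric maintainability index; higher means more
    maintainable. Inputs must be positive (log of volume and LOC)."""
    return (171.0
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))
```

A small, simple module scores far higher than a large, complex one, which is why MI trends are used as a coarse proxy for maintainability decline, and also why @cite_13 cautions against reading too much into the absolute values.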
1707.02327 | 2735173242 | Open source is experiencing a renaissance period, due to the appearance of modern platforms and workflows for developing and maintaining public code. As a result, developers are creating open source software at speeds never seen before. Consequently, these projects are also facing unprecedented mortality rates. To better understand the reasons for the failure of modern open source projects, this paper describes the results of a survey with the maintainers of 104 popular GitHub systems that have been deprecated. We provide a set of nine reasons for the failure of these open source projects. We also show that some maintenance practices---specifically the adoption of contributing guidelines and continuous integration---have an important association with a project failure or success. Finally, we discuss and reveal the principal strategies developers have tried to overcome the failure of the studied projects. | @cite_20 investigate the role, scope and influence of codes of conduct in open source projects. They report that seven codes are used by most projects, usually aiming to provide a safe and inclusive community, as well as dealing with diversity issues. After surveying the literature on empirical studies aiming to validate Lehman's Laws, Fernandez- @cite_37 report that most works conclude that the first law (Continuing Change) applies to mature open source projects. However, in this work we found completed projects, according to their developers. These projects deal with stable requirements and environments and therefore do not need constant updates or modifications. | {
"cite_N": [
"@cite_37",
"@cite_20"
],
"mid": [
"1561601182",
"2600079720"
],
"abstract": [
"This chapter surveys a sample of empirical studies of Open Source Software (OSS) evolution. According to these, the classical findings in proprietary software evolution, such as Lehman’s laws of software evolution, might need to be revised, at least in part, to account for the OSS observations. The book chapter summarises what appears to be the empirical status of each of Lehman’s laws with respect to OSS and highlights the threats to validity that frequently emerge in this type of research.",
"Open source projects rely on collaboration of members from all around the world using web technologies like GitHub and Gerrit. This mixture of people with a wide range of backgrounds including minorities like women, ethnic minorities, and people with disabilities may increase the risk of offensive and destroying behaviours in the community, potentially leading affected project members to leave towards a more welcoming and friendly environment. To counter these effects, open source projects increasingly are turning to codes of conduct, in an attempt to promote their expectations and standards of ethical behaviour. In this first of its kind empirical study of codes of conduct in open source software projects, we investigated the role, scope and influence of codes of conduct through a mixture of quantitative and qualitative analysis, supported by interviews with practitioners. We found that the top codes of conduct are adopted by hundreds to thousands of projects, while all of them share 5 common dimensions."
]
} |
1707.02327 | 2735173242 | Open source is experiencing a renaissance period, due to the appearance of modern platforms and workflows for developing and maintaining public code. As a result, developers are creating open source software at speeds never seen before. Consequently, these projects are also facing unprecedented mortality rates. To better understand the reasons for the failure of modern open source projects, this paper describes the results of a survey with the maintainers of 104 popular GitHub systems that have been deprecated. We provide a set of nine reasons for the failure of these open source projects. We also show that some maintenance practices---specifically the adoption of contributing guidelines and continuous integration---have an important association with a project failure or success. Finally, we discuss and reveal the principal strategies developers have tried to overcome the failure of the studied projects. | Ye and Kishida @cite_3 describe a study to understand what motivates developers to engage in open source development. Using the GIMP project (GNU Image Manipulation Program) as a case study, they argue that learning is the major driving force that motivates people to get involved in open source projects. However, we do not know whether this finding applies to the new generation of open source systems, developed using platforms such as GitHub. Eghbal @cite_11 reports on the risks and challenges of maintaining modern open source projects. She argues that open source plays a key role in the digital infrastructure that sustains our society today. But unlike physical infrastructure, like bridges and roads, open source still lacks a reliable and sustainable source of funding. A study of 133 popular GitHub projects concluded that nearly two-thirds depend on one or two developers to survive @cite_17 . | {
"cite_N": [
"@cite_17",
"@cite_3",
"@cite_11"
],
"mid": [
"2344103814",
"2120244029",
"2753171801"
],
"abstract": [
"Truck Factor (TF) is a metric proposed by the agile community as a tool to identify concentration of knowledge in software development environments. It states the minimal number of developers that have to be hit by a truck (or quit) before a project is incapacitated. In other words, TF helps to measure how prepared a project is to deal with developer turnover. Despite its clear relevance, few studies explore this metric. Altogether there is no consensus about how to calculate it, and no supporting evidence backing estimates for systems in the wild. To mitigate both issues, we propose a novel (and automated) approach for estimating TF-values, which we execute against a corpus of 133 popular projects in GitHub. We later survey developers as a means to assess the reliability of our results. Among others, we find that the majority of our target systems (65%) have TF ≤ 2. Surveying developers from 67 target systems provides confidence towards our estimates; in 84% of the valid answers we collect, developers agree or partially agree that the TF's authors are the main authors of their systems; in 53% we receive a positive or partially positive answer regarding our estimated truck factors.",
"An Open Source Software (OSS) project is unlikely to be successful unless there is an accompanied community that provides the platform for developers and users to collaborate. Members of such communities are volunteers whose motivation to participate and contribute is of essential importance to the success of OSS projects. In this paper, we aim to create an understanding of what motivates people to participate in OSS communities. We theorize that learning is one of the motivational forces. Our theory is grounded in the learning theory of Legitimate Peripheral Participation, and is supported by analyzing the social structure of OSS communities and the co-evolution between OSS systems and communities. We also discuss practical implications of our theory for creating and maintaining sustainable OSS communities as well as for software engineering research and education.",
""
]
} |
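The truck-factor estimate discussed in @cite_17 can be approximated greedily: repeatedly remove the author who is a main author of the most files until more than half of the files have no main author left; the number of removals is the TF. This is a simplified sketch of the idea only: the paper's actual algorithm derives main authorship from degree-of-authorship weights, and the 50% abandonment threshold used here is an assumption.

```python
def truck_factor(file_authors, threshold=0.5):
    """Greedy truck-factor estimate.

    file_authors: mapping of file -> set of its main authors.
    Returns how many top authors must leave before more than
    `threshold` of the files are orphaned (no main author remains).
    """
    files = {f: set(a) for f, a in file_authors.items()}  # defensive copy
    total = len(files)
    removed = 0
    abandoned = sum(1 for a in files.values() if not a)
    while abandoned / total <= threshold:
        # count how many still-covered files each remaining author owns
        counts = {}
        for authors in files.values():
            for a in authors:
                counts[a] = counts.get(a, 0) + 1
        if not counts:
            break
        top = max(counts, key=counts.get)   # author covering the most files
        for f in files:
            files[f].discard(top)
        removed += 1
        abandoned = sum(1 for a in files.values() if not a)
    return removed
```

For example, a repository whose files all list a single main author has TF = 1, which matches the finding above that most popular projects depend on one or two developers.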
1707.02327 | 2735173242 | Open source is experiencing a renaissance period, due to the appearance of modern platforms and workflows for developing and maintaining public code. As a result, developers are creating open source software at speeds never seen before. Consequently, these projects are also facing unprecedented mortality rates. To better understand the reasons for the failure of modern open source projects, this paper describes the results of a survey with the maintainers of 104 popular GitHub systems that have been deprecated. We provide a set of nine reasons for the failure of these open source projects. We also show that some maintenance practices---specifically the adoption of contributing guidelines and continuous integration---have an important association with a project failure or success. Finally, we discuss and reveal the principal strategies developers have tried to overcome the failure of the studied projects. | Recent research on open source has focused on the organization of successful open source projects @cite_23 , on how to attract and retain newcomers @cite_18 @cite_6 @cite_34 @cite_36 @cite_35 , and on specific features provided by GitHub, such as pull requests @cite_33 @cite_0 @cite_4 , forks @cite_22 , and stars @cite_7 @cite_15 . | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_22",
"@cite_7",
"@cite_36",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_15",
"@cite_34"
],
"mid": [
"",
"1989862714",
"",
"2139092060",
"2503836346",
"2440056063",
"",
"",
"",
"2107294940",
"",
""
],
"abstract": [
"",
"Motivation: To survive and succeed, FLOSS projects need contributors able to accomplish critical project tasks. However, such tasks require extensive project experience of long term contributors (LTCs). Aim: We measure, understand, and predict how the newcomers’ involvement and environment in the issue tracking system (ITS) affect their odds of becoming an LTC. Method: ITS data of Mozilla and Gnome, literature, interviews, and online documents were used to design measures of involvement and environment. A logistic regression model was used to explain and predict contributor’s odds of becoming an LTC. We also reproduced the results on new data provided by Mozilla. Results: We constructed nine measures of involvement and environment based on events recorded in an ITS. Macro-climate is the overall project environment while micro-climate is person-specific and varies among the participants. Newcomers who are able to get at least one issue reported in the first month to be fixed, doubled their odds of becoming an LTC. The macro-climate with high project popularity and the micro-climate with low attention from peers reduced the odds. The precision of LTC prediction was 38 times higher than for a random predictor. We were able to reproduce the results with new Mozilla data without losing the significance or predictive power of the previously published model. We encountered unexpected changes in some attributes and suggest ways to make analysis of ITS data more reproducible. Conclusions: The findings suggest the importance of initial behaviors and experiences of new participants and outline empirically-based approaches to help the communities with the recruitment of contributors for long-term participation and to help the participants contribute more effectively. To facilitate the reproduction of the study and of the proposed measures in other contexts, we provide the data we retrieved and the scripts we wrote at https://www.passion-lab.org/projects/developerfluency.html.",
"",
"The advent of distributed version control systems has led to the development of a new paradigm for distributed software development; instead of pushing changes to a central repository, developers pull them from other repositories and merge them locally. Various code hosting sites, notably Github, have tapped on the opportunity to facilitate pull-based development by offering workflow support tools, such as code reviewing systems and integrated issue trackers. In this work, we explore how pull-based software development works, first on the GHTorrent corpus and then on a carefully selected sample of 291 projects. We find that the pull request model offers fast turnaround, increased opportunities for community engagement and decreased time to incorporate contributions. We show that a relatively small number of factors affect both the decision to merge a pull request and the time to process it. We also examine the reasons for pull request rejection and find that technical ones are only a small minority.",
"Forking is the creation of a new software repository by copying another repository. Though forking is controversial in traditional open source software (OSS) community, it is encouraged and is a built-in feature in GitHub. Developers freely fork repositories, use codes as their own and make changes. A deep understanding of repository forking can provide important insights for OSS community and GitHub. In this paper, we explore why and how developers fork what from whom in GitHub. We collect a dataset containing 236,344 developers and 1,841,324 forks. We make surveys, and analyze programming languages and owners of forked repositories. Our main observations are: (1) Developers fork repositories to submit pull requests, fix bugs, add new features and keep copies etc. Developers find repositories to fork from various sources: search engines, external sites (e.g., Twitter, Reddit), social relationships, etc. More than 42% of developers that we have surveyed agree that an automated recommendation tool is useful to help them pick repositories to fork, while more than 44.4% of developers do not value a recommendation tool. Developers care about repository owners when they fork repositories. (2) A repository written in a developer's preferred programming language is more likely to be forked. (3) Developers mostly fork repositories from creators. In comparison with unattractive repository owners, attractive repository owners have higher percentage of organizations, more followers and earlier registration in GitHub. Our results show that forking is mainly used for making contributions of original repositories, and it is beneficial for OSS community. Moreover, our results show the value of recommendation and provide important insights for GitHub to recommend repositories.",
"Software popularity is a valuable information to modern open source developers, who constantly want to know if their systems are attracting new users, if new releases are gaining acceptance, or if they are meeting user's expectations. In this paper, we describe a study on the popularity of software systems hosted at GitHub, which is the world's largest collection of open source software. GitHub provides an explicit way for users to manifest their satisfaction with a hosted repository: the stargazers button. In our study, we reveal the main factors that impact the number of stars of GitHub projects, including programming language and application domain. We also study the impact of new features on project popularity. Finally, we identify four main patterns of popularity growth, which are derived after clustering the time series representing the number of stars of 2,279 popular GitHub repositories. We hope our results provide valuable insights to developers and maintainers, which could help them on building and evolving systems in a competitive software market.",
"",
"",
"",
"According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine data from two major open source projects, the Apache web server and the Mozilla browser. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution intervals for these OSS projects. We develop several hypotheses by comparing the Apache project with several commercial projects. We then test and refine several of these hypotheses, based on an analysis of Mozilla data. We conclude with thoughts about the prospects for high-performance commercial open source process hybrids.",
"",
""
]
} |
1707.02406 | 2736142826 | In this paper, a layer-wise mixture model (LMM) is developed to support hierarchical visual recognition, where a Bayesian approach is used to automatically adapt the visual hierarchy to the progressive improvements of the deep network over time. Our LMM algorithm can provide an end-to-end approach for jointly learning: 1) the deep network for achieving more discriminative deep representations for object classes and their inter-class visual similarities; 2) the tree classifier for recognizing large numbers of object classes hierarchically; and 3) the visual hierarchy adaptation for achieving more accurate assignment and organization of large numbers of object classes. By learning the tree classifier, the deep network and the visual hierarchy adaptation jointly in an end-to-end manner, our LMM algorithm can achieve higher accuracy rates on hierarchical visual recognition. Our experiments are carried out on ImageNet1K and ImageNet10K image sets, which have demonstrated that our LMM algorithm can achieve very competitive results on the accuracy rates as compared with the baseline methods. | Deep learning @cite_8 @cite_40 @cite_31 @cite_34 @cite_18 has demonstrated its outstanding abilities in learning more discriminative features and boosting the accuracy rates for large-scale visual recognition significantly. By learning more representative features and a @math -way flat softmax classifier in an end-to-end fashion, most existing deep learning schemes have made one hidden assumption: the tasks for recognizing all the object classes are independent and share similar learning complexities. 
However, such an assumption may not hold in many real-world applications, e.g., strong inter-class visual similarities are typical in the domain of large-scale visual recognition, especially when some object classes are fine-grained (visually-similar) @cite_61 @cite_36 @cite_22 @cite_19 , but the @math -way flat softmax classifier completely ignores the inter-task correlations. Ignoring the inter-task correlations completely may push the deep learning process away from the global optimum, because the gradients of the joint objective function are not uniform for all the object classes, especially when they have different inter-class visual similarities and learning complexities; as a result, the deep learning process may be distracted by particular object classes that are typically hard to discriminate. | {
"cite_N": [
"@cite_61",
"@cite_18",
"@cite_22",
"@cite_8",
"@cite_36",
"@cite_19",
"@cite_40",
"@cite_31",
"@cite_34"
],
"mid": [
"",
"",
"2152923755",
"2163605009",
"",
"2091759811",
"1686810756",
"2097117768",
"2194775991"
],
"abstract": [
"",
"",
"Current work in object categorization discriminates among objects that typically possess gross differences which are readily apparent. However, many applications require making much finer distinctions. We address an insect categorization problem that is so challenging that even trained human experts cannot readily categorize images of insects considered in this paper. The state of the art that uses visual dictionaries, when applied to this problem, yields mediocre results (16.1% error). Three possible explanations for this are (a) the dictionaries are unsupervised, (b) the dictionaries lose the detailed information contained in each keypoint, and (c) these methods rely on hand-engineered decisions about dictionary size. This paper presents a novel, dictionary-free methodology. A random forest of trees is first trained to predict the class of an image based on individual keypoint descriptors. A unique aspect of these trees is that they do not make decisions but instead merely record evidence-i.e., the number of descriptors from training examples of each category that reached each leaf of the tree. We provide a mathematical model showing that voting evidence is better than voting decisions. To categorize a new image, descriptors for all detected keypoints are “dropped” through the trees, and the evidence at each leaf is summed to obtain an overall evidence vector. This is then sent to a second-level classifier to make the categorization decision. We achieve excellent performance (6.4% error) on the 9-class STONEFLY9 data set. Also, our method achieves an average AUC of 0.921 on the PASCAL06 VOC, which places it fifth out of 21 methods reported in the literature and demonstrates that the method also works well for generic object categorization.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.",
"",
"Fine-grained recognition concerns categorization at sub-ordinate levels, where the distinction between object classes is highly local. Compared to basic level recognition, fine-grained categorization can be more challenging as there are in general less data and fewer discriminative features. This necessitates the use of stronger prior for feature selection. In this work, we include humans in the loop to help computers select discriminative features. We introduce a novel online game called \"Bubbles\" that reveals discriminative features humans use. The player's goal is to identify the category of a heavily blurred image. During the game, the player can choose to reveal full details of circular regions (\"bubbles\"), with a certain penalty. With proper setup the game generates discriminative bubbles with assured quality. We next propose the \"Bubble Bank\" algorithm that uses the human selected bubbles to improve machine recognition performance. Experiments demonstrate that our approach yields large improvements over the previous state of the art on challenging benchmarks.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."
]
} |
1707.02406 | 2736142826 | In this paper, a layer-wise mixture model (LMM) is developed to support hierarchical visual recognition, where a Bayesian approach is used to automatically adapt the visual hierarchy to the progressive improvements of the deep network over time. Our LMM algorithm can provide an end-to-end approach for jointly learning: 1) the deep network for achieving more discriminative deep representations for object classes and their inter-class visual similarities; 2) the tree classifier for recognizing large numbers of object classes hierarchically; and 3) the visual hierarchy adaptation for achieving more accurate assignment and organization of large numbers of object classes. By learning the tree classifier, the deep network and the visual hierarchy adaptation jointly in an end-to-end manner, our LMM algorithm can achieve higher accuracy rates on hierarchical visual recognition. Our experiments are carried out on the ImageNet1K and ImageNet10K image sets, which have demonstrated that our LMM algorithm can achieve very competitive accuracy rates as compared with the baseline methods. | One intuitive way to exploit the inter-task relationships (inter-class visual similarities) is to integrate a tree structure to organize large numbers of object classes hierarchically, e.g., the tasks of training the classifiers for the fine-grained (visually-similar) object classes under the same parent node (in the same group) may have stronger inter-task relationships and share similar learning complexities. Such tree structures can be categorized into two types: (a) concept ontology @cite_6 @cite_21 @cite_11 @cite_2 @cite_5 ; and (b) label tree or visual hierarchy @cite_17 @cite_48 @cite_43 @cite_7 @cite_4 @cite_35 @cite_1 @cite_46 @cite_38 @cite_60 .
It is worth noting that the feature space is the common space for classifier training and visual recognition @cite_24 , e.g., both classifier training and visual recognition are performed in the feature space rather than in the semantic label space. Thus it could be more attractive to organize large numbers of object classes hierarchically in the feature space according to their inter-class visual correlations. | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_4",
"@cite_7",
"@cite_60",
"@cite_48",
"@cite_21",
"@cite_1",
"@cite_17",
"@cite_6",
"@cite_24",
"@cite_43",
"@cite_2",
"@cite_5",
"@cite_46",
"@cite_11"
],
"mid": [
"2157065343",
"1987083125",
"2148780922",
"2155144632",
"2586505867",
"2128017662",
"2089150756",
"1967732418",
"1851597118",
"2108598243",
"2142747154",
"2116339064",
"2098020658",
"1857221572",
"2134665698",
"2106097867"
],
"abstract": [
"We present a novel approach to efficiently learn a label tree for large scale classification with many classes. The key contribution of the approach is a technique to simultaneously determine the structure of the tree and learn the classifiers for each node in the tree. This approach also allows fine grained control over the efficiency vs accuracy trade-off in designing a label tree, leading to more balanced trees. Experiments are performed on large scale image classification with 10184 classes and 9 million images. We demonstrate significant improvements in test accuracy and efficiency with less training time and more balanced trees compared to the previous state of the art by",
"In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. For a given parent node on the visual tree, it contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly for enhancing their discrimination power. The inter-level relationship constraint, e.g., a plant image must first be assigned to a parent node (high-level non-leaf node) correctly if it can further be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.",
"As more images and categories become available, organizing them becomes crucial. We present a novel statistical method for organizing a collection of images into a tree-shaped hierarchy. The method employs a non-parametric Bayesian model and is completely unsupervised. Each image is associated with a path through a tree. Similar images share initial segments of their paths and therefore have a smaller distance from each other. Each internal node in the hierarchy represents information that is common to images whose paths pass through that node, thus providing a compact image representation. Our experiments show that a disorganized collection of images will be organized into an intuitive taxonomy. Furthermore, we find that the taxonomy allows good image categorization and, in this respect, is superior to the popular LDA model.",
"Multi-class classification becomes challenging at test time when the number of classes is very large and testing against every possible class can become computationally infeasible. This problem can be alleviated by imposing (or learning) a structure over the set of classes. We propose an algorithm for learning a tree-structure of classifiers which, by optimizing the overall tree loss, provides superior accuracy to existing tree labeling methods. We also propose a method that learns to embed labels in a low dimensional space that is faster than non-embedding approaches and has superior accuracy to existing embedding approaches. Finally we combine the two ideas resulting in the label embedding tree that outperforms alternative methods including One-vs-Rest while being orders of magnitude faster.",
"In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). To achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition, multiple sets of deep features are first extracted from the different layers of deep convolutional neural networks (deep CNNs). A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, and it can provide a good environment for identifying the inter-related learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can achieve the global optimum easily and obtain more discriminative node classifiers for distinguishing the visually-similar atomic object classes (in the same group) effectively. Our HD-MTL algorithm can control the inter-level error propagation effectively by using an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on both the accuracy rates and the computational efficiency for large-scale visual recognition.",
"A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CDs. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.",
"As increasingly powerful techniques emerge for machine tagging multimedia content, it becomes ever more important to standardize the underlying vocabularies. Doing so provides interoperability and lets the multimedia community focus ongoing research on a well-defined set of semantics. This paper describes a collaborative effort of multimedia researchers, library scientists, and end users to develop a large standardized taxonomy for describing broadcast news video. The large-scale concept ontology for multimedia (LSCOM) is the first of its kind designed to simultaneously optimize utility to facilitate end-user access, cover a large semantic space, make automated extraction feasible, and increase observability in diverse broadcast news video data sets",
"Large-scale recognition problems with thousands of classes pose a particular challenge because applying the classifier requires more computation as the number of classes grows. The label tree model integrates classification with the traversal of the tree so that complexity grows logarithmically. In this paper, we show how the parameters of the label tree can be found using maximum likelihood estimation. This new probabilistic learning technique produces a label tree with significantly improved recognition accuracy.",
"Class hierarchies are commonly used to reduce the complexity of the classification problem. This is crucial when dealing with a large number of categories. In this work, we evaluate class hierarchies currently constructed for visual recognition. We show that top-down as well as bottom-up approaches, which are commonly used to automatically construct hierarchies, incorporate assumptions about the separability of classes. Those assumptions do not hold for visual recognition of a large number of object categories. We therefore propose a modification which is appropriate for most top-down approaches. It allows to construct class hierarchies that postpone decisions in the presence of uncertainty and thus provide higher recognition accuracy. We also compare our method to a one-against-all approach and show how to control the speed-for-accuracy trade-off with our method. For the experimental evaluation, we use the Caltech-256 visual object classes dataset and compare to state-of-the-art methods.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"In this paper, a novel data-driven algorithm is developed for achieving quantitative characterization of the semantic gaps directly in the visual feature space, where the visual feature space is the common space for concept classifier training and automatic concept detection. By supporting quantitative characterization of the semantic gaps, more effective inference models can automatically be selected for concept classifier training by: (1) identifying the image concepts with small semantic gaps (i.e., the isolated image concepts with high inner-concept visual consistency) and training their one-against-all SVM concept classifiers independently; (2) determining the image concepts with large semantic gaps (i.e., the visually-related image concepts with low inner-concept visual consistency) and training their inter-related SVM concept classifiers jointly; and (3) using more image instances to achieve more reliable training of the concept classifiers for the image concepts with large semantic gaps. Our experimental results on NUS-WIDE and ImageNet image sets have obtained very promising results.",
"The computational complexity of current visual categorization algorithms scales linearly at best with the number of categories. The goal of classifying simultaneously Ncat = 10^4 - 10^5 visual categories requires sub-linear classification costs. We explore algorithms for automatically building classification trees which have, in principle, log(Ncat) complexity. We find that a greedy algorithm that recursively splits the set of categories into the two minimally confused subsets achieves 5-20 fold speedups at a small cost in classification performance. Our approach is independent of the specific classification algorithm used. A welcome by-product of our algorithm is a very reasonable taxonomy of the Caltech-256 dataset.",
"In this paper, we have developed a new scheme for achieving multilevel annotations of large-scale images automatically. To achieve more sufficient representation of various visual properties of the images, both the global visual features and the local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users on selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypotheses assessment. Our experiments on large-scale image collections have also obtained very positive results.",
"The recently proposed ImageNet dataset consists of several million images, each annotated with a single object category. However, these annotations may be imperfect, in the sense that many images contain multiple objects belonging to the label vocabulary. In other words, we have a multi-label problem but the annotations include only a single label (and not necessarily the most prominent). Such a setting motivates the use of a robust evaluation measure, which allows for a limited number of labels to be predicted and, as long as one of the predicted labels is correct, the overall prediction should be considered correct. This is indeed the type of evaluation measure used to assess algorithm performance in a recent competition on ImageNet data. Optimizing such types of performance measures presents several hurdles even with existing structured output learning methods. Indeed, many of the current state-of-the-art methods optimize the prediction of only a single output label, ignoring this 'structure' altogether. In this paper, we show how to directly optimize continuous surrogates of such performance measures using structured output learning techniques with latent variables. We use the output of existing binary classifiers as input features in a new learning stage which optimizes the structured loss corresponding to the robust performance measure. We present empirical evidence that this allows us to 'boost' the performance of existing binary classifiers which are the state-of-the-art for the task of object classification in ImageNet.",
"Objects in the world can be arranged into a hierarchy based on their semantic meaning (e.g. organism - animal - feline - cat). What about defining a hierarchy based on the visual appearance of objects? This paper investigates ways to automatically discover a hierarchical structure for the visual world from a collection of unlabeled images. Previous approaches for unsupervised object and scene discovery focused on partitioning the visual data into a set of non-overlapping classes of equal granularity. In this work, we propose to group visual objects using a multi-layer hierarchy tree that is based on common visual elements. This is achieved by adapting to the visual domain the generative hierarchical latent Dirichlet allocation (hLDA) model previously used for unsupervised discovery of topic hierarchies in text. Images are modeled using quantized local image regions as analogues to words in text. Employing the multiple segmentation framework of [22], we show that meaningful object hierarchies, together with object segmentations, can be automatically learned from unlabeled and unsegmented image collections without supervision. We demonstrate improved object classification and localization performance using hLDA over the previous non-hierarchical method on the MSRC dataset [33].",
"In this paper we propose to use lexical semantic networks to extend the state-of-the-art object recognition techniques. We use the semantics of image labels to integrate prior knowledge about inter-class relationships into the visual appearance learning. We show how to build and train a semantic hierarchy of discriminative classifiers and how to use it to perform object detection. We evaluate how our approach influences the classification accuracy and speed on the Pascal VOC challenge 2006 dataset, a set of challenging real-world images. We also demonstrate additional features that become available to object recognition due to the extension with semantic inference tools- we can classify high-level categories, such as animals, and we can train part detectors, for example a window detector, by pure inference in the semantic network."
]
} |
1707.02406 | 2736142826 | In this paper, a layer-wise mixture model (LMM) is developed to support hierarchical visual recognition, where a Bayesian approach is used to automatically adapt the visual hierarchy to the progressive improvements of the deep network over time. Our LMM algorithm can provide an end-to-end approach for jointly learning: 1) the deep network for achieving more discriminative deep representations for object classes and their inter-class visual similarities; 2) the tree classifier for recognizing large numbers of object classes hierarchically; and 3) the visual hierarchy adaptation for achieving more accurate assignment and organization of large numbers of object classes. By learning the tree classifier, the deep network and the visual hierarchy adaptation jointly in an end-to-end manner, our LMM algorithm can achieve higher accuracy rates on hierarchical visual recognition. Our experiments are carried out on the ImageNet1K and ImageNet10K image sets, which have demonstrated that our LMM algorithm can achieve very competitive accuracy rates as compared with the baseline methods. | By integrating a tree structure to organize large numbers of object classes hierarchically and supervise the hierarchical process for tree classifier training, the hierarchical visual recognition approach @cite_14 @cite_65 @cite_37 @cite_39 @cite_41 @cite_5 @cite_26 @cite_2 @cite_38 can provide many advantages, but it may seriously suffer from the problem of inter-level error propagation: mistakes at the parent nodes propagate to their child nodes down to the leaf nodes @cite_5 @cite_38 . In addition, most existing approaches for hierarchical visual recognition focus on leveraging hand-crafted features for tree classifier training, so it is very attractive to investigate how deep features can be leveraged to improve hierarchical visual recognition @cite_26 @cite_60 . | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_14",
"@cite_26",
"@cite_60",
"@cite_41",
"@cite_65",
"@cite_39",
"@cite_2",
"@cite_5"
],
"mid": [
"1987083125",
"2162657744",
"2025862220",
"2220384803",
"2586505867",
"1970719635",
"2008835805",
"2112993448",
"2098020658",
"1857221572"
],
"abstract": [
"In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. For a given parent node on the visual tree, it contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly for enhancing their discrimination power. The inter-level relationship constraint, e.g., a plant image must first be assigned to a parent node (high-level non-leaf node) correctly if it can further be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.",
"We consider multiclass classification problems where the set of labels are organized hierarchically as a category tree. We associate each node in the tree with a classifier and classify the examples recursively from the root to the leaves. We propose a hierarchical Support Vector Machine (SVM) that encourages the classifier at each node to be different from the classifiers at its ancestors. More specifically, we introduce regularizations that force the normal vector of the classifying hyperplane at each node to be orthogonal to those at its ancestors as much as possible. We establish conditions under which training such a hierarchical SVM is a convex optimization problem, and develop an efficient dual-averaging method for solving it.",
"In the real visual world, the number of categories a classifier needs to discriminate is on the order of hundreds or thousands. For example, the SUN dataset [24] contains 899 scene categories and ImageNet [6] has 15,589 synsets. Designing a multiclass classifier that is both accurate and fast at test time is an extremely important problem in both machine learning and computer vision communities. To achieve a good trade-off between accuracy and speed, we adopt the relaxed hierarchy structure from [15], where a set of binary classifiers are organized in a tree or DAG (directed acyclic graph) structure. At each node, classes are colored into positive and negative groups which are separated by a binary classifier while a subset of confusing classes is ignored. We color the classes and learn the induced binary classifier simultaneously using a unified and principled max-margin optimization. We provide an analysis on generalization error to justify our design. Our method has been tested on both Caltech-256 (object recognition) [9] and the SUN dataset (scene classification) [24], and shows significant improvement over existing methods.",
"We present Deep Neural Decision Forests - a novel approach that unifies classification trees with the representation learning functionality known from deep convolutional networks, by training them in an end-to-end manner. To combine these two worlds, we introduce a stochastic and differentiable decision tree model, which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network. Our model differs from conventional deep networks because a decision forest provides the final predictions and it differs from conventional decision forests since we propose a principled, joint and global optimization of split and leaf node parameters. We show experimental results on benchmark machine learning datasets like MNIST and ImageNet and find on-par or superior results when compared to state-of-the-art deep models. Most remarkably, we obtain Top5-Errors of only 7.84%/6.38% on ImageNet validation data when integrating our forests in a single-crop, single/seven model GoogLeNet architecture, respectively. Thus, even without any form of training data set augmentation we are improving on the 6.67% error obtained by the best GoogLeNet architecture (7 models, 144 crops).",
"In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). To achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition, multiple sets of deep features are first extracted from the different layers of deep convolutional neural networks (deep CNNs). A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, and it can provide a good environment for identifying the inter-related learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can achieve the global optimum easily and obtain more discriminative node classifiers for distinguishing the visually-similar atomic object classes (in the same group) effectively. Our HD-MTL algorithm can control the inter-level error propagation effectively by using an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on both the accuracy rates and the computational efficiency for large-scale visual recognition.",
"Hierarchical classification is critical to knowledge management and exploration, as is gene function prediction and document categorization. In hierarchical classification, an input is classified according to a structured hierarchy. In such a situation, the central issue is how to effectively utilize the interclass relationship to improve the generalization performance of flat classification ignoring such dependency. In this article, we propose a novel large margin method through constraints characterizing a multipath hierarchy, where class membership can be nonexclusive. The proposed method permits a treatment of various losses for hierarchical classification. For implementation, we focus on the symmetric difference loss and two large margin classifiers: support vector machines and ψ-learning. Finally, theoretical and numerical analyses are conducted, in addition to an application to gene function prediction. They suggest that the proposed method achieves the desired objective and outperforms strong comp...",
"We present an algorithmic framework for supervised classification learning where the set of labels is organized in a predefined hierarchical structure. This structure is encoded by a rooted tree which induces a metric over the label set. Our approach combines ideas from large margin kernel methods and Bayesian analysis. Following the large margin principle, we associate a prototype with each label in the tree and formulate the learning task as an optimization problem with varying margin constraints. In the spirit of Bayesian methods, we impose similarity requirements between the prototypes corresponding to adjacent labels in the hierarchy. We describe new online and batch algorithms for solving the constrained optimization problem. We derive a worst case loss-bound for the online algorithm and provide generalization analysis for its batch counterpart. We demonstrate the merits of our approach with a series of experiments on synthetic, text and speech data.",
"Many methods have been proposed to solve the image classification problem for a large number of categories. Among them, methods based on tree-based representations achieve good trade-off between accuracy and test time efficiency. While focusing on learning a tree-shaped hierarchy and the corresponding set of classifiers, most of them [11, 2, 14] use a greedy prediction algorithm for test time efficiency. We argue that the dramatic decrease in accuracy at high efficiency is caused by the specific design choice of the learning and greedy prediction algorithms. In this work, we propose a classifier which achieves a better trade-off between efficiency and accuracy with a given tree-shaped hierarchy. First, we convert the classification problem as finding the best path in the hierarchy, and a novel branch-and-bound-like algorithm is introduced to efficiently search for the best path. Second, we jointly train the classifiers using a novel Structured SVM (SSVM) formulation with additional bound constraints. As a result, our method achieves a significant 4.65%, 5.43%, and 4.07% (relative 24.82%, 41.64%, and 109.79%) improvement in accuracy at high efficiency compared to state-of-the-art greedy \"tree-based\" methods [14] on Caltech-256 [15], SUN [32] and ImageNet 1K [9] dataset, respectively. Finally, we show that our branch-and-bound-like algorithm naturally ranks the paths in the hierarchy (Fig. 8) so that users can further process them.",
"In this paper, we have developed a new scheme for achieving multilevel annotations of large-scale images automatically. To achieve more sufficient representation of various visual properties of the images, both the global visual features and the local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users on selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypotheses assessment. Our experiments on large-scale image collections have also obtained very positive results.",
"The recently proposed ImageNet dataset consists of several million images, each annotated with a single object category. However, these annotations may be imperfect, in the sense that many images contain multiple objects belonging to the label vocabulary. In other words, we have a multi-label problem but the annotations include only a single label (and not necessarily the most prominent). Such a setting motivates the use of a robust evaluation measure, which allows for a limited number of labels to be predicted and, as long as one of the predicted labels is correct, the overall prediction should be considered correct. This is indeed the type of evaluation measure used to assess algorithm performance in a recent competition on ImageNet data. Optimizing such types of performance measures presents several hurdles even with existing structured output learning methods. Indeed, many of the current state-of-the-art methods optimize the prediction of only a single output label, ignoring this 'structure' altogether. In this paper, we show how to directly optimize continuous surrogates of such performance measures using structured output learning techniques with latent variables. We use the output of existing binary classifiers as input features in a new learning stage which optimizes the structured loss corresponding to the robust performance measure. We present empirical evidence that this allows us to 'boost' the performance of existing binary classifiers which are the state-of-the-art for the task of object classification in ImageNet."
]
} |
1707.02467 | 2734448685 | Presented on September 21, 2017 at 11:00 a.m. in the Klaus Advanced Computing Building, room 1116W. | Finally, our small-world network has been studied by @cite_19 in the special case @math . However, the results of @cite_19 are about bootstrap percolation, rather than being about the mixing time of the lazy random walk, which is our concern here. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2903855202"
],
"abstract": [
"In this paper a random graph model G_{Z_N^2, p_d} is introduced, which is a combination of fixed torus grid edges in (Z/NZ)^2 and some additional random ones. The random edges are called long, and the probability of having a long edge between vertices u, v ∈ (Z/NZ)^2 with graph distance d on the torus grid is p_d = c/(Nd), where c is some constant. We show that, whp, the diameter D(G_{Z_N^2, p_d}) = Θ(log N). Moreover, we consider a modified non-monotonous bootstrap percolation on G_{Z_N^2, p_d}. We prove the presence of phase transitions in mean-field approximation and provide fairly sharp bounds on the error of the critical parameters."
]
} |
1707.02410 | 2734755249 | Modeling the complex interactions between users and items as well as amongst items themselves is at the core of designing successful recommender systems. One classical setting is predicting users' personalized sequential behavior (or 'next-item' recommendation), where the challenges mainly lie in modeling 'third-order' interactions between a user, her previously visited item(s), and the next item to consume. Existing methods typically decompose these higher-order interactions into a combination of pairwise relationships, by way of which user preferences (user-item interactions) and sequential patterns (item-item interactions) are captured by separate components. In this paper, we propose a unified method, TransRec, to model such third-order relationships for large-scale sequential prediction. Methodologically, we embed items into a 'transition space' where users are modeled as translation vectors operating on item sequences. Empirically, this approach outperforms the state-of-the-art on a wide spectrum of real-world datasets. Data and code are available at https: sites.google.com a eng.ucsd.edu ruining-he . | General recommendation. Traditional approaches to recommendation ignore sequential signals in the system. Such systems focus on modeling user preferences, and typically rely on Collaborative Filtering (CF) techniques, especially Matrix Factorization (MF) @cite_0 . For implicit feedback data (like purchases, clicks, and thumbs-up), point-wise and pairwise methods based on MF have been proposed. Point-wise methods (e.g., @cite_4 @cite_26 @cite_8 ) assume all non-observed feedback to be negative and factorize the user-item feedback matrix. In contrast, pairwise methods (e.g., @cite_19 @cite_17 @cite_5 ) make a weaker assumption that users simply prefer observed feedback over unobserved feedback and optimize the pairwise rankings of (positive, non-positive) pairs. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_8",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_17"
],
"mid": [
"",
"2101409192",
"1987431925",
"1690919088",
"2140310134",
"1976999215",
"2089349245"
],
"abstract": [
"",
"A common task of recommender systems is to improve customer experience through personalized recommendations based on prior implicit feedback. These systems passively track different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to model user preferences. Unlike the much more extensively researched explicit feedback, we do not have any direct input from the users regarding their preferences. In particular, we lack substantial evidence on which products consumer dislike. In this work we identify unique properties of implicit feedback datasets. We propose treating the data as indication of positive and negative preference associated with vastly varying confidence levels. This leads to a factor model which is especially tailored for implicit feedback recommenders. We also suggest a scalable optimization procedure, which scales linearly with the data size. The algorithm is used successfully within a recommender system for television shows. It compares favorably with well tuned implementations of other known methods. In addition, we offer a novel way to give explanations to recommendations given by this factor model.",
"This paper focuses on developing effective and efficient algorithms for top-N recommender systems. A novel Sparse Linear Method (SLIM) is proposed, which generates top-N recommendations by aggregating from user purchase rating profiles. A sparse aggregation coefficient matrix W is learned from SLIM by solving an 1-norm and 2-norm regularized optimization problem. W is demonstrated to produce high quality recommendations and its sparsity allows SLIM to generate recommendations very fast. A comprehensive set of experiments is conducted by comparing the SLIM method and other state-of-the-art top-N recommendation methods. The experiments show that SLIM achieves significant improvements both in run time performance and recommendation quality over the best existing methods.",
"The explosive growth of e-commerce and online environments has made the issue of information search and selection increasingly serious; users are overloaded by options to consider and they may not have the time or knowledge to personally evaluate these options. Recommender systems have proven to be a valuable way for online users to cope with the information overload and have become one of the most powerful and popular tools in electronic commerce. Correspondingly, various techniques for recommendation generation have been proposed. During the last decade, many of them have also been successfully deployed in commercial environments. Recommender Systems Handbook, an edited volume, is a multi-disciplinary effort that involves world-wide experts from diverse fields, such as artificial intelligence, human computer interaction, information technology, data mining, statistics, adaptive user interfaces, decision support systems, marketing, and consumer behavior. Theoreticians and practitioners from these fields continually seek techniques for more efficient, cost-effective and accurate recommender systems. This handbook aims to impose a degree of order on this diversity, by presenting a coherent and unified repository of recommender systems major concepts, theories, methodologies, trends, challenges and applications. Extensive artificial applications, a variety of real-world applications, and detailed case studies are included. Recommender Systems Handbook illustrates how this technology can support the user in decision-making, planning and purchasing processes. It works for well known corporations such as Amazon, Google, Microsoft and AT&T. This handbook is suitable for researchers and advanced-level students in computer science as a reference.",
"Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.",
"Pairwise algorithms are popular for learning recommender systems from implicit feedback. For each user, or more generally context, they try to discriminate between a small set of selected items and the large set of remaining (irrelevant) items. Learning is typically based on stochastic gradient descent (SGD) with uniformly drawn pairs. In this work, we show that convergence of such SGD learning algorithms slows down considerably if the item popularity has a tailed distribution. We propose a non-uniform item sampler to overcome this problem. The proposed sampler is context-dependent and oversamples informative pairs to speed up convergence. An efficient implementation with constant amortized runtime costs is developed. Furthermore, it is shown how the proposed learning algorithm can be applied to a large class of recommender models. The properties of the new learning algorithm are studied empirically on two real-world recommender system problems. The experiments indicate that the proposed adaptive sampler improves the state-of-the art learning algorithm largely in convergence without negative effects on prediction quality or iteration runtime.",
"Tagging plays an important role in many recent websites. Recommender systems can help to suggest a user the tags he might want to use for tagging a specific item. Factorization models based on the Tucker Decomposition (TD) model have been shown to provide high quality tag recommendations outperforming other approaches like PageRank, FolkRank, collaborative filtering, etc. The problem with TD models is the cubic core tensor resulting in a cubic runtime in the factorization dimension for prediction and learning. In this paper, we present the factorization model PITF (Pairwise Interaction Tensor Factorization) which is a special case of the TD model with linear runtime both for learning and prediction. PITF explicitly models the pairwise interactions between users, items and tags. The model is learned with an adaption of the Bayesian personalized ranking (BPR) criterion which originally has been introduced for item recommendation. Empirically, we show on real world datasets that this model outperforms TD largely in runtime and even can achieve better prediction quality. Besides our lab experiments, PITF has also won the ECML PKDD Discovery Challenge 2009 for graph-based tag recommendation."
]
} |
1707.02410 | 2734755249 | Modeling the complex interactions between users and items as well as amongst items themselves is at the core of designing successful recommender systems. One classical setting is predicting users' personalized sequential behavior (or 'next-item' recommendation), where the challenges mainly lie in modeling 'third-order' interactions between a user, her previously visited item(s), and the next item to consume. Existing methods typically decompose these higher-order interactions into a combination of pairwise relationships, by way of which user preferences (user-item interactions) and sequential patterns (item-item interactions) are captured by separate components. In this paper, we propose a unified method, TransRec, to model such third-order relationships for large-scale sequential prediction. Methodologically, we embed items into a 'transition space' where users are modeled as translation vectors operating on item sequences. Empirically, this approach outperforms the state-of-the-art on a wide spectrum of real-world datasets. Data and code are available at https: sites.google.com a eng.ucsd.edu ruining-he . | Modeling temporal dynamics. Several works extend general recommendation models to make use of timestamps associated with feedback. For example, early similarity-based CF (e.g., @cite_14 ) uses time weighting schemes that assign decaying weights to previously-rated items when computing similarities. More recent efforts are mostly based on MF, where the goal is to model and understand the historical evolution of users and items, e.g., Koren achieved state-of-the-art rating prediction results on Netflix data, largely by exploiting temporal signals @cite_23 @cite_18 . The sequential prediction task we are tackling is related to the above, except that instead of directly using those timestamps, it focuses on learning the sequential relationships between user actions (i.e., it focuses on the order of actions rather than the specific time). | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_23"
],
"mid": [
"2054141820",
"2057991616",
"2057763140"
],
"abstract": [
"As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.",
"Collaborative filtering is regarded as one of the most promising recommendation algorithms. The item-based approaches for collaborative filtering identify the similarity between two items by comparing users' ratings on them. In these approaches, ratings produced at different times are weighted equally. That is to say, changes in user purchase interest are not taken into consideration. For example, an item that was rated recently by a user should have a bigger impact on the prediction of future user behaviour than an item that was rated a long time ago. In this paper, we present a novel algorithm to compute the time weights for different items in a manner that will assign a decreasing weight to old data. More specifically, the users' purchase habits vary. Even the same user has quite different attitudes towards different items. Our proposed algorithm uses clustering to discriminate between different kinds of items. To each item cluster, we trace each user's purchase interest change and introduce a personalized decay factor according to the user own purchase behaviour. Empirical studies have shown that our new algorithm substantially improves the precision of item-based collaborative filtering without introducing higher order computational complexity.",
"Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics is essential for designing recommender systems or general customer preference models. However, this raises unique challenges. Within the ecosystem intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance decay approaches cannot work, as they lose too many signals when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long-term patterns. We show how to model the time changing behavior throughout the life span of the data. Such a model allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie-rating dataset underlying the Netflix Prize contest. Results are encouraging and better than those previously reported on this dataset. In particular, methods described in this paper play a significant role in the solution that won the Netflix contest."
]
} |
1707.02410 | 2734755249 | Modeling the complex interactions between users and items as well as amongst items themselves is at the core of designing successful recommender systems. One classical setting is predicting users' personalized sequential behavior (or 'next-item' recommendation), where the challenges mainly lie in modeling 'third-order' interactions between a user, her previously visited item(s), and the next item to consume. Existing methods typically decompose these higher-order interactions into a combination of pairwise relationships, by way of which user preferences (user-item interactions) and sequential patterns (item-item interactions) are captured by separate components. In this paper, we propose a unified method, TransRec, to model such third-order relationships for large-scale sequential prediction. Methodologically, we embed items into a 'transition space' where users are modeled as translation vectors operating on item sequences. Empirically, this approach outperforms the state-of-the-art on a wide spectrum of real-world datasets. Data and code are available at https: sites.google.com a eng.ucsd.edu ruining-he . | Sequential recommendation. Scalable sequential models usually rely on Markov Chains (MC) to capture sequential patterns (e.g., @cite_16 @cite_2 @cite_1 ). Rendle proposed to factorize the third-order 'cube' that represents the transitions amongst items made by users. The resulting model, Factorized Personalized Markov Chains (FPMC), can be seen as a combination of MF and MC and achieves good performance for next-basket recommendation. | {
"cite_N": [
"@cite_16",
"@cite_1",
"@cite_2"
],
"mid": [
"2171279286",
"2205235818",
"1985854669"
],
"abstract": [
"Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means for each user an own transition matrix is learned - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model both learned with and without factorization.",
"The rapidly growing of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods.",
"Next basket recommendation is a crucial task in market basket analysis. Given a user's purchase history, usually a sequence of transaction data, one attempts to build a recommender that can predict the next few items that the user most probably would like. Ideally, a good recommender should be able to explore the sequential behavior (i.e., buying one item leads to buying another next), as well as account for users' general taste (i.e., what items a user is typically interested in) for recommendation. Moreover, these two factors may interact with each other to influence users' next purchase. To tackle the above problems, in this paper, we introduce a novel recommendation approach, namely hierarchical representation model (HRM). HRM can well capture both sequential behavior and users' general taste by involving transaction and user representations in prediction. Meanwhile, the flexibility of applying different aggregation operations, especially nonlinear operations, on representations allows us to model complicated interactions among different factors. Theoretically, we show that our model subsumes several existing methods when choosing proper aggregation operations. Empirically, we demonstrate that our model can consistently outperform the state-of-the-art baselines under different evaluation metrics on real-world transaction data."
]
} |
1707.02410 | 2734755249 | Modeling the complex interactions between users and items as well as amongst items themselves is at the core of designing successful recommender systems. One classical setting is predicting users' personalized sequential behavior (or 'next-item' recommendation), where the challenges mainly lie in modeling 'third-order' interactions between a user, her previously visited item(s), and the next item to consume. Existing methods typically decompose these higher-order interactions into a combination of pairwise relationships, by way of which user preferences (user-item interactions) and sequential patterns (item-item interactions) are captured by separate components. In this paper, we propose a unified method, TransRec, to model such third-order relationships for large-scale sequential prediction. Methodologically, we embed items into a 'transition space' where users are modeled as translation vectors operating on item sequences. Empirically, this approach outperforms the state-of-the-art on a wide spectrum of real-world datasets. Data and code are available at https: sites.google.com a eng.ucsd.edu ruining-he . | There are also works that have adopted metric embeddings for the recommendation task, leading to better generalization ability. For example, Chen introduced Logistic Metric Embeddings (LME) for music playlist generation @cite_27 , where the Markov transitions among different songs are encoded by the distances among them. Recently, Feng further extended LME to model personalized sequential behavior and used pairwise ranking for predicting next points-of-interest @cite_1 . On the other hand, Wang recently introduced the Hierarchical Representation Model (HRM), which extends FPMC by applying aggregation operations (like max/average pooling) to model more complex interactions. We will give more details of these works in Section . | {
"cite_N": [
"@cite_27",
"@cite_1"
],
"mid": [
"1989318262",
"2205235818"
],
"abstract": [
"Digital storage of personal music collections and cloud-based music services (e.g. Pandora, Spotify) have fundamentally changed how music is consumed. In particular, automatically generated playlists have become an important mode of accessing large music collections. The key goal of automated playlist generation is to provide the user with a coherent listening experience. In this paper, we present Latent Markov Embedding (LME), a machine learning algorithm for generating such playlists. In analogy to matrix factorization methods for collaborative filtering, the algorithm does not require songs to be described by features a priori, but it learns a representation from example playlists. We formulate this problem as a regularized maximum-likelihood embedding of Markov chains in Euclidian space, and show how the resulting optimization problem can be solved efficiently. An empirical evaluation shows that the LME is substantially more accurate than adaptations of smoothed n-gram models commonly used in natural language processing.",
"The rapidly growing of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods."
]
} |
1707.02410 | 2734755249 | Modeling the complex interactions between users and items as well as amongst items themselves is at the core of designing successful recommender systems. One classical setting is predicting users' personalized sequential behavior (or 'next-item' recommendation), where the challenges mainly lie in modeling 'third-order' interactions between a user, her previously visited item(s), and the next item to consume. Existing methods typically decompose these higher-order interactions into a combination of pairwise relationships, by way of which user preferences (user-item interactions) and sequential patterns (item-item interactions) are captured by separate components. In this paper, we propose a unified method, TransRec, to model such third-order relationships for large-scale sequential prediction. Methodologically, we embed items into a 'transition space' where users are modeled as translation vectors operating on item sequences. Empirically, this approach outperforms the state-of-the-art on a wide spectrum of real-world datasets. Data and code are available at https: sites.google.com a eng.ucsd.edu ruining-he . | Knowledge bases. Although different from recommendation, there has been a large body of work in knowledge bases that focuses on modeling multiple, complex relationships between various entities. Recently, partially motivated by the findings made by word2vec @cite_15 , translation-based methods (e.g., @cite_3 @cite_25 @cite_11 ) have achieved state-of-the-art accuracy and scalability, in contrast to those achieved by traditional embedding methods relying on tensor decomposition or collective matrix factorization (e.g., @cite_7 @cite_12 @cite_22 ). Our work is inspired by those findings, and we tackle the challenges of modeling large-scale, personalized, and complicated sequential data. This is the first work that explores this direction to the best of our knowledge. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_3",
"@cite_15",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"2117420919",
"205829674",
"2127795553",
"2950133940",
"2184957013",
"2099752825",
"2283196293"
],
"abstract": [
"Relational learning is concerned with predicting unknown values of a relation, given a database of entities and observed relations among entities. An example of relational learning is movie rating prediction, where entities could include users, movies, genres, and actors. Relations encode users' ratings of movies, movies' genres, and actors' roles in movies. A common prediction technique given one pairwise relation, for example a #users x #movies ratings matrix, is low-rank matrix factorization. In domains with multiple relations, represented as multiple matrices, we may improve predictive accuracy by exploiting information from one relation while predicting another. To this end, we propose a collective matrix factorization model: we simultaneously factor several matrices, sharing parameters among factors when an entity participates in multiple relations. Each relation can have a different value type and error distribution; so, we allow nonlinear relationships between the parameters and outputs, using Bregman divergences to measure error. We extend standard alternating projection algorithms to our model, and derive an efficient Newton update for the projection. Furthermore, we propose stochastic optimization methods to deal with large, sparse matrices. Our model generalizes several existing matrix factorization methods, and therefore yields new large-scale optimization algorithms for these problems. Our model can handle any pairwise relational schema and a wide variety of error models. We demonstrate its efficiency, as well as the benefit of sharing parameters among relations.",
"Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.",
"We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https://github.com/mrlyk423/relation_extraction.",
"Vast amounts of structured information have been published in the Semantic Web's Linked Open Data (LOD) cloud and their size is still growing rapidly. Yet, access to this information via reasoning and querying is sometimes difficult, due to LOD's size, partial data inconsistencies and inherent noisiness. Machine Learning offers an alternative approach to exploiting LOD's data with the advantages that Machine Learning algorithms are typically robust to both noise and data inconsistencies and are able to efficiently utilize non-deterministic dependencies in the data. From a Machine Learning point of view, LOD is challenging due to its relational nature and its scale. Here, we present an efficient approach to relational learning on LOD data, based on the factorization of a sparse tensor that scales to data consisting of millions of entities, hundreds of relations and billions of known facts. Furthermore, we show how ontological knowledge can be incorporated in the factorization to improve learning results and how computation can be distributed across multiple nodes. We demonstrate that our approach is able to factorize the YAGO 2 core ontology and globally predict statements for this large knowledge base using a single dual-core desktop computer. Furthermore, we show experimentally that our approach achieves good results in several relational learning tasks that are relevant to Linked Data. Once a factorization has been computed, our model is able to predict efficiently, and without any additional training, the likelihood of any of the 4.3 ⋅ 10^14 possible triples in the YAGO 2 core ontology.",
"We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up."
]
} |
1707.02286 | 2726187156 | The reinforcement learning paradigm allows, in principle, for complex behaviours to be learned directly from simple reward signals. In practice, however, it is common to carefully hand-design the reward function to encourage a particular solution, or to derive it from demonstration data. In this paper we explore how a rich environment can help to promote the learning of complex behavior. Specifically, we train agents in diverse environmental contexts, and find that this encourages the emergence of robust behaviours that perform well across a suite of tasks. We demonstrate this principle for locomotion -- behaviours that are known for their sensitivity to the choice of reward. We train several simulated bodies on a diverse set of challenging terrains and obstacles, using a simple reward function based on forward progress. Using a novel scalable variant of policy gradient reinforcement learning, our agents learn to run, jump, crouch and turn as required by the environment without explicit reward-based guidance. A visual depiction of highlights of the learned behavior can be viewed following this https URL . | Basic locomotion behaviors learned end-to-end via RL have been demonstrated, for instance, by @cite_12 @cite_7 @cite_10 @cite_0 or guided policy search @cite_16 . Locomotion in the context of higher-level tasks has been considered in @cite_4 . Terrain-adaptive locomotion with RL has been demonstrated by @cite_6 , but they still impose considerable structure on their solution. Impressive results were recently achieved with learned locomotion controllers for a 3D humanoid body @cite_25 , but these rely on a domain-specific structure and human motion capture data to bootstrap the movement skills for navigating flat terrains. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_0",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_12"
],
"mid": [
"2534060593",
"2173248099",
"",
"",
"2121103318",
"1191599655",
"",
"2949608212"
],
"abstract": [
"We study a novel architecture and training procedure for locomotion tasks. A high-frequency, low-level \"spinal\" network with access to proprioceptive sensors learns sensorimotor primitives by training on simple tasks. This pre-trained module is fixed and connected to a low-frequency, high-level \"cortical\" network, with access to all sensors, which drives behavior by modulating the inputs to the spinal network. Where a monolithic end-to-end architecture fails completely, learning with a pre-trained spinal module succeeds at multiple high-level tasks, and enables the effective exploration required to learn from sparse rewards. We test our proposed architecture on three simulated bodies: a 16-dimensional swimming snake, a 20-dimensional quadruped, and a 54-dimensional humanoid. Our results are illustrated in the accompanying video at this https URL",
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"",
"",
"We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learning a global model, which can be difficult when the dynamics are complex and discontinuous. We show that this hybrid approach requires many fewer samples than model-free methods, and can handle complex, nonsmooth dynamics that can pose a challenge for model-based techniques. We present experiments showing that our method can be used to learn complex neural network policies that successfully execute simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation.",
"Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.",
"",
"We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters."
]
} |
1707.02194 | 2726824826 | A learning-based framework for representation of domain-specific images is proposed where joint compression and denoising can be done using a VQ-based multi-layer network. While it learns to compress the images from a training set, the compression performance is very well generalized on images from a test set. Moreover, when fed with noisy versions of the test set, since it has priors from clean images, the network also efficiently denoises the test images during the reconstruction. The proposed framework is a regularized version of the Residual Quantization (RQ) where at each stage, the quantization error from the previous stage is further quantized. Instead of codebook learning from the k-means which over-trains for high-dimensional vectors, we show that only generating the codewords from a random, but properly regularized distribution suffices to compress the images globally and without the need to resort to patch-based division of images. The experiments are done on the set of facial images and the method is compared with the JPEG-2000 codec for compression and BM3D for denoising, showing promising results. | Depending on @math and @math , the problem of Eq. can be treated in many different ways. See @cite_11 and @cite_9 for detailed reviews and discussions. For example, under the famous sparsity constraint @math or its relaxed version @math , the K-SVD algorithm @cite_7 solves it for local minima in an iterative way. | {
"cite_N": [
"@cite_9",
"@cite_7",
"@cite_11"
],
"mid": [
"2008732654",
"2160547390",
"2163398148"
],
"abstract": [
"In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection - that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.",
"In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data",
"Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be done using one of two ways: i) building a sparsifying dictionary based on a mathematical model of the data, or ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures."
]
} |
1707.02402 | 2736161405 | We present a simple dynamic batching approach applicable to a large class of dynamic architectures that consistently yields speedups of over 10x. We provide performance bounds when the architecture is not known a priori and a stronger bound in the special case where the architecture is a predetermined balanced tree. We evaluate our approach on 's recent visual question answering (VQA) result of his CLEVR dataset by Inferring and Executing Programs (IEP). We also evaluate on sparsely gated mixture of experts layers and achieve speedups of up to 1000x over the naive implementation. | Previous notable dynamic graph results include neural module networks @cite_4 , which form the basis of the execution engine of the CLEVR @cite_0 IEP result. The difference is that the latter's architecture is built on generic, minimally-engineered neural network blocks that are more likely to generalize to a wider class of problems than the original neural module networks approach, which uses a heavily-engineered question parser and custom per-module architectures. Whereas improvement upon neural module networks constitutes improvement upon a single architecture, improvement on the CLEVR architecture is generalizable to a wide class of models under a minimal set of assumptions (see [sec:discussion] Discussion ). | {
"cite_N": [
"@cite_0",
"@cite_4"
],
"mid": [
"2561715562",
"2416885651"
],
"abstract": [
"When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover short-comings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.",
"Visual question answering is fundamentally compositional in nature---a question like \"where is the dog?\" shares substructure with questions like \"what color is the dog?\" and \"where is the cat?\" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural \"modules\" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes."
]
} |
1707.02554 | 2734810377 | Time-Spatial data plays a crucial role for different fields such as traffic management. These data can be collected via devices such as surveillance sensors or tracking systems. However, how to efficiently analyze and visualize these data to capture essential embedded pattern information is becoming a big challenge today. Classic visualization approaches focus on revealing 2D and 3D spatial information and modeling statistical test. Those methods would easily fail when data become massive. Recent attempts concern on how to simply cluster data and perform prediction with time-oriented information. However, those approaches could still be further enhanced as they also have limitations for handling massive clusters and labels. In this paper, we propose a visualization methodology for mobility data using artificial neural net techniques. This method aggregates three main parts that are Back-end Data Model, Neural Net Algorithm including clustering method Self-Organizing Map (SOM) and prediction approach Recurrent Neural Net (RNN) for extracting the features and lastly a solid front-end that displays the results to users with an interactive system. SOM is able to cluster the visiting patterns and detect the abnormal pattern. RNN can perform the prediction for time series analysis using its dynamic architecture. Furthermore, an interactive system will enable user to interpret the result with graphics, animation and 3D model for a close-loop feedback. This method can be particularly applied in two tasks that Commercial-based Promotion and abnormal traffic patterns detection. | Previous work is mainly concerned with two parts: clustering and aggregation, as well as feature learning @cite_9 @cite_1 . | {
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2140251882",
"2153940791"
],
"abstract": [
"The increasing availability of GPS-enabled devices is changing the way people interact with the Web, and brings us a large amount of GPS trajectories representing people's location histories. In this paper, based on multiple users' GPS trajectories, we aim to mine interesting locations and classical travel sequences in a given geospatial region. Here, interesting locations mean the culturally important places, such as Tiananmen Square in Beijing, and frequented public areas, like shopping malls and restaurants, etc. Such information can help users understand surrounding locations, and would enable travel recommendation. In this work, we first model multiple individuals' location histories with a tree-based hierarchical graph (TBHG). Second, based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based inference model, which regards an individual's access on a location as a directed link from the user to that location. This model infers the interest of a location by taking into account the following three factors. 1) The interest of a location depends on not only the number of users visiting this location but also these users' travel experiences. 2) Users' travel experiences and location interests have a mutual reinforcement relationship. 3) The interest of a location and the travel experience of a user are relative values and are region-related. Third, we mine the classical travel sequences among locations considering the interests of these locations and users' travel experiences. We evaluated our system using a large GPS dataset collected by 107 users over a period of one year in the real world. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, when considering the users' travel experiences and location interests, we achieved a better performance beyond baselines, such as rank-by-count and rank-by-interest, etc.",
"Analysis of movement is currently a hot research topic in visual analytics. A wide variety of methods and tools for analysis of movement data has been developed in recent years. They allow analysts to look at the data from different perspectives and fulfil diverse analytical tasks. Visual displays and interactive techniques are often combined with computational processing, which, in particular, enables analysis of a larger number of data than would be possible with purely visual methods. Visual analytics leverages methods and tools developed in other areas related to data analytics. particularly statistics, machine learning and geographic information science. We present an illustrated structured survey of the state of the art in visual analytics concerning the analysis of movement data. Besides reviewing the existing works, we demonstrate, using examples. how different visual analytics techniques can support our understanding of various aspects of movement."
]
} |
1707.02483 | 2734693922 | The state-of-the-art named entity recognition (NER) systems are supervised machine learning models that require large amounts of manually annotated data to achieve high accuracy. However, annotating NER data by human is expensive and time-consuming, and can be quite difficult for a new language. In this paper, we present two weakly supervised approaches for cross-lingual NER with no human annotation in a target language. The first approach is to create automatically labeled NER data for a target language via annotation projection on comparable corpora, where we develop a heuristic scheme that effectively selects good-quality projection-labeled data from noisy data. The second approach is to project distributed representations of words (word embeddings) from a target language to a source language, so that the source-language NER system can be applied to the target language without re-training. We also design two co-decoding schemes that effectively combine the outputs of the two projection-based approaches. We evaluate the performance of the proposed approaches on both in-house and open NER data for several target languages. The results show that the combined systems outperform three other weakly supervised approaches on the CoNLL data. | The traditional annotation projection approaches @cite_25 @cite_24 @cite_3 project NER tags across language pairs using parallel corpora or translations. A variant of annotation projection projects expectations of tags and uses them as constraints to train a model based on generalized expectation criteria. Annotation projection has also been applied to several other cross-lingual NLP tasks, including word sense disambiguation @cite_10 , part-of-speech (POS) tagging @cite_25 and dependency parsing @cite_14 . | {
"cite_N": [
"@cite_14",
"@cite_3",
"@cite_24",
"@cite_10",
"@cite_25"
],
"mid": [
"",
"2251551266",
"2135890475",
"2108997961",
"2016630033"
],
"abstract": [
"",
"As developers of a highly multilingual named entity recognition (NER) system, we face an evaluation resource bottleneck problem: we need evaluation data in many languages, the annotation should not be too time-consuming, and the evaluation results across languages should be comparable. We solve the problem by automatically annotating the English version of a multi-parallel corpus and by projecting the annotations into all the other language versions. For the translation of English entities, we use a phrase-based statistical machine translation system as well as a lookup of known names from a multilingual name database. For the projection, we incrementally apply different methods: perfect string matching, perfect consonant signature matching and edit distance similarity. The resulting annotated parallel corpus will be made available for reuse.",
"While significant effort has been put into annotating linguistic resources for several languages, there are still many left that have only small amounts of such resources. This paper investigates a method of propagating information (specifically mention detection information) into such low resource languages from richer ones. Experiments run on three language pairs (Arabic-English, Chinese-English, and Spanish-English) show that one can achieve relatively decent performance by propagating information from a language with richer resources such as English into a foreign language alone (no resources or models in the foreign language). Furthermore, while examining the performance using various degrees of linguistic information in a statistical framework, results show that propagated features from English help improve the source-language system performance even when used in conjunction with all feature types built from the source language. The experiments also show that using propagated features in conjunction with lexically-derived features only (as can be obtained directly from a mention annotated corpus) yields similar performance to using feature types derived from many linguistic resources.",
"We present an unsupervised method for word sense disambiguation that exploits translation correspondences in parallel corpora. The technique takes advantage of the fact that cross-language lexicalizations of the same concept tend to be consistent, preserving some core element of its semantics, and yet also variable, reflecting differing translator preferences and the influence of context. Working with parallel corpora introduces an extra complication for evaluation, since it is difficult to find a corpus that is both sense tagged and parallel with another language; therefore we use pseudo-translations, created by machine translation systems, in order to make possible the evaluation of the approach against a standard test set. The results demonstrate that word-level translation correspondences are a valuable source of information for sense disambiguation.",
"This paper describes a system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity taggers and morphological analyzers for an arbitrary foreign language. Case studies include French, Chinese, Czech and Spanish. Existing text analysis tools for English are applied to bilingual text corpora and their output projected onto the second language via statistically derived word alignments. Simple direct annotation projection is quite noisy, however, even with optimal alignments. Thus this paper presents noise-robust tagger, bracketer and lemmatizer training procedures capable of accurate system bootstrapping from noisy and incomplete initial projections. Performance of the induced stand-alone part-of-speech tagger applied to French achieves 96% core part-of-speech (POS) tag accuracy, and the corresponding induced noun-phrase bracketer exceeds 91% F-measure. The induced morphological analyzer achieves over 99% lemmatization accuracy on the complete French verbal system. This achievement is particularly noteworthy in that it required absolutely no hand-annotated training data in the given language, and virtually no language-specific knowledge or resources beyond raw text. Performance also significantly exceeds that obtained by direct annotation projection."
]
} |
1707.02483 | 2734693922 | The state-of-the-art named entity recognition (NER) systems are supervised machine learning models that require large amounts of manually annotated data to achieve high accuracy. However, annotating NER data by human is expensive and time-consuming, and can be quite difficult for a new language. In this paper, we present two weakly supervised approaches for cross-lingual NER with no human annotation in a target language. The first approach is to create automatically labeled NER data for a target language via annotation projection on comparable corpora, where we develop a heuristic scheme that effectively selects good-quality projection-labeled data from noisy data. The second approach is to project distributed representations of words (word embeddings) from a target language to a source language, so that the source-language NER system can be applied to the target language without re-training. We also design two co-decoding schemes that effectively combine the outputs of the two projection-based approaches. We evaluate the performance of the proposed approaches on both in-house and open NER data for several target languages. The results show that the combined systems outperform three other weakly supervised approaches on the CoNLL data. | Wikipedia has been exploited to generate weakly labeled multilingual NER training data. The basic idea is to first categorize Wikipedia pages into entity types, either based on manually constructed rules that utilize the category information of Wikipedia @cite_28 or Freebase attributes @cite_22 , or via a classifier trained with manually labeled Wikipedia pages @cite_5 . Heuristic rules are then developed in these works to automatically label the Wikipedia text with NER tags. built high-accuracy, high-coverage multilingual Wikipedia entity type mappings using weakly labeled data and applied those mappings as decoding constrains or dictionary features to improve multilingual NER systems. | {
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_22"
],
"mid": [
"2099253769",
"2120844411",
"1532912801"
],
"abstract": [
"In this paper, we describe a system by which the multilingual characteristics of Wikipedia can be utilized to annotate a large corpus of text with Named Entity Recognition (NER) tags requiring minimal human intervention and no linguistic expertise. This process, though of value in languages for which resources exist, is particularly useful for less commonly taught languages. We show how the Wikipedia format can be used to identify possible named entities and discuss in detail the process by which we use the Category structure inherent to Wikipedia to determine the named entity type of a proposed entity. We further describe the methods by which English language data can be used to bootstrap the NER process in other languages. We demonstrate the system by using the generated corpus as training sets for a variant of BBN's Identifinder in French, Ukrainian, Spanish, Polish, Russian, and Portuguese, achieving overall F-scores as high as 84.7% on independent, human-annotated corpora, comparable to a system trained on up to 40,000 words of human-annotated newswire.",
"We automatically create enormous, free and multilingual silver-standard training annotations for named entity recognition (ner) by exploiting the text and structure of Wikipedia. Most ner systems rely on statistical models of annotated data to identify and classify names of people, locations and organisations in text. This dependence on expensive annotation is the knowledge bottleneck our work overcomes. We first classify each Wikipedia article into named entity (ne) types, training and evaluating on 7200 manually-labelled Wikipedia articles across nine languages. Our cross-lingual approach achieves up to 95% accuracy. We transform the links between articles into ne annotations by projecting the target article's classifications onto the anchor text. This approach yields reasonable annotations, but does not immediately compete with existing gold-standard data. By inferring additional links and heuristically tweaking the Wikipedia corpora, we better align our automatic annotations to gold standards. We annotate millions of words in nine languages, evaluating English, German, Spanish, Dutch and Russian Wikipedia-trained models against conll shared task data and other gold-standard corpora. Our approach outperforms other approaches to automatic ne annotation (Richman and Schone, 2008 [61], , 2008 [46]); competes with gold-standard training when tested on an evaluation corpus from a different source; and performs 10% better than newswire-trained models on manually-annotated Wikipedia text.",
"The increasing diversity of languages used on the web introduces a new level of complexity to Information Retrieval (IR) systems. We can no longer assume that textual content is written in one language or even the same language family. In this paper, we demonstrate how to build massive multilingual annotators with minimal human expertise and intervention. We describe a system that builds Named Entity Recognition (NER) annotators for 40 major languages using Wikipedia and Freebase. Our approach does not require NER human annotated datasets or language specific resources like treebanks, parallel corpora, and orthographic rules. The novelty of approach lies therein - using only language agnostic techniques, while achieving competitive performance. Our method learns distributed word representations (word embeddings) which encode semantic and syntactic features of words in each language. Then, we automatically generate datasets from Wikipedia link structure and Freebase attributes. Finally, we apply two preprocessing stages (oversampling and exact surface form matching) which do not require any linguistic expertise. Our evaluation is two fold: First, we demonstrate the system performance on human annotated datasets. Second, for languages where no gold-standard benchmarks are available, we propose a new method, distant evaluation, based on statistical machine translation."
]
} |
1707.02483 | 2734693922 | The state-of-the-art named entity recognition (NER) systems are supervised machine learning models that require large amounts of manually annotated data to achieve high accuracy. However, annotating NER data by human is expensive and time-consuming, and can be quite difficult for a new language. In this paper, we present two weakly supervised approaches for cross-lingual NER with no human annotation in a target language. The first approach is to create automatically labeled NER data for a target language via annotation projection on comparable corpora, where we develop a heuristic scheme that effectively selects good-quality projection-labeled data from noisy data. The second approach is to project distributed representations of words (word embeddings) from a target language to a source language, so that the source-language NER system can be applied to the target language without re-training. We also design two co-decoding schemes that effectively combine the outputs of the two projection-based approaches. We evaluate the performance of the proposed approaches on both in-house and open NER data for several target languages. The results show that the combined systems outperform three other weakly supervised approaches on the CoNLL data. | Different ways of obtaining cross-lingual embeddings have been proposed in the literature. One approach builds monolingual representations separately and then brings them to the same space typically using a seed dictionary @cite_11 @cite_21 . Another line of work builds inter-lingual representations simultaneously, often by generating mixed language corpora using the supervision at hand (aligned sentences, documents, etc.) @cite_16 @cite_18 . We opt for the first solution in this paper because of its flexibility: we can map all languages to English rather than requiring separate embeddings for each language pair. 
Additionally we are able to easily add a new language without any constraints on the type of data needed. Note that although we do not specifically create inter-lingual representations, by training mappings to the common language, English, we are able to map words in different languages to a common space. Similar approaches for cross-lingual model transfer have been applied to other NLP tasks such as document classification @cite_0 , dependency parsing @cite_19 and POS tagging @cite_17 . | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_17",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_11"
],
"mid": [
"2952037945",
"342285082",
"",
"",
"2250741688",
"2252212383",
"2126725946"
],
"abstract": [
"We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large monolingual datasets and does not require word-aligned parallel training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw-text sentence-aligned data. This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient cross-lingual feature learning. We show that bilingual embeddings learned using the proposed model outperform state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on WMT11 data.",
"The distributional hypothesis of Harris (1954), according to which the meaning of words is evidenced by the contexts they occur in, has motivated several effective techniques for obtaining vector space semantic representations of words using unannotated text corpora. This paper argues that lexico-semantic content should additionally be invariant across languages and proposes a simple technique based on canonical correlation analysis (CCA) for incorporating multilingual evidence into vectors generated monolingually. We evaluate the resulting word representations on standard lexical semantic evaluation tasks and show that our method produces substantially better semantic representations than monolingual techniques.",
"",
"",
"This paper investigates the problem of cross-lingual dependency parsing, aiming at inducing dependency parsers for low-resource languages while using only training data from a resource-rich language (e.g. English). Existing approaches typically don’t include lexical features, which are not transferable across languages. In this paper, we bridge the lexical feature gap by using distributed feature representations and their composition. We provide two algorithms for inducing cross-lingual distributed representations of words, which map vocabularies from two different languages into a common vector space. Consequently, both lexical features and non-lexical features can be used in our model for cross-lingual transfer. Furthermore, our framework is able to incorporate additional useful features such as cross-lingual word clusters. Our combined contributions achieve an average relative error reduction of 10.9 in labeled attachment score as compared with the delexicalized parser, trained on English universal treebank and transferred to three other languages. It also significantly outperforms (2013) augmented with projected cluster features on identical data.",
"We propose a simple yet effective approach to learning bilingual word embeddings (BWEs) from non-parallel document-aligned data (based on the omnipresent skip-gram model), and its application to bilingual lexicon induction (BLI). We demonstrate the utility of the induced BWEs in the BLI task by reporting on benchmarking BLI datasets for three language pairs: (1) We show that our BWE-based BLI models significantly outperform the MuPTM-based and context-counting models in this setting, and obtain the best reported BLI results for all three tested language pairs; (2) We also show that our BWE-based BLI models outperform other BLI models based on recently proposed BWEs that require parallel data for bilingual training.",
"Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90 precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs."
]
} |
1707.01830 | 2733056802 | Neural machine translation models rely on the beam search algorithm for decoding. In practice, we found that the quality of hypotheses in the search space is negatively affected owing to the fixed beam size. To mitigate this problem, we store all hypotheses in a single priority queue and use a universal score function for hypothesis selection. The proposed algorithm is more flexible as the discarded hypotheses can be revisited in a later step. We further design a penalty function to punish the hypotheses that tend to produce a final translation that is much longer or shorter than expected. Despite its simplicity, we show that the proposed decoding algorithm is able to select hypotheses with better qualities and improve the translation performance. | To improve the quality of the score function in beam search, propose to run beam search in the forward pass of training, then apply a new objective function to ensure the gold output does not fall outside the beam. An alternative approach is to correct the scores with reinforcement learning @cite_7 . This work focuses on fixing the limited search space of beam search rather than the score function. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2580192806"
],
"abstract": [
"We introduce a simple, general strategy to manipulate the behavior of a neural decoder that enables it to generate outputs that have specific properties of interest (e.g., sequences of a pre-specified length). The model can be thought of as a simple version of the actor-critic model that uses an interpolation of the actor (the MLE-based token generation policy) and the critic (a value function that estimates the future values of the desired property) for decision making. We demonstrate that the approach is able to incorporate a variety of properties that cannot be handled by standard neural sequence decoders, such as sequence length and backward probability (probability of sources given targets), in addition to yielding consistent improvements in abstractive summarization and machine translation when the property to be optimized is BLEU or ROUGE scores."
]
} |
1707.01786 | 2733236492 | The Recurrent Neural Networks and their variants have shown promising performances in sequence modeling tasks such as Natural Language Processing. These models, however, turn out to be impractical and difficult to train when exposed to very high-dimensional inputs due to the large input-to-hidden weight matrix. This may have prevented RNNs' large-scale application in tasks that involve very high input dimensions such as video modeling; current approaches reduce the input dimensions using various feature extractors. To address this challenge, we propose a new, more general and efficient approach by factorizing the input-to-hidden weight matrix using Tensor-Train decomposition which is trained simultaneously with the weights themselves. We test our model on classification tasks using multiple real-world video datasets and achieve competitive performances with state-of-the-art models, even though our model architecture is orders of magnitude less complex. We believe that the proposed approach provides a novel and fundamental building block for modeling high-dimensional sequential data with RNN architectures and opens up many possibilities to transfer the expressive and advanced architectures from other domains such as NLP to modeling high-dimensional sequential data. | The Tensor-Train was first introduced by @cite_20 as a tensor factorization model with the advantage of being capable of scaling to an arbitrary number of dimensions. @cite_16 showed that one could reshape a fully connected layer into a high-dimensional tensor and then factorize this tensor using Tensor-Train. This was applied to compress very large weight matrices in deep Neural Networks where the entire model was trained end-to-end. 
In these experiments they compressed fully connected layers on top of convolution layers, and also proved that a Tensor-Train Layer can directly consume pixels of image data such as CIFAR-10, achieving the best result among all known non-convolutional models. Then in @cite_11 it was shown that even the convolutional layers themselves can be compressed with Tensor-Train Layers. Actually, in an earlier work by @cite_29 a similar approach had also been introduced, but their CP factorization is calculated in a pre-processing step and is only fine-tuned with error backpropagation as a post-processing step. | {
"cite_N": [
"@cite_29",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"2131524184",
"2952689122",
"",
"2559813832"
],
"abstract": [
"We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning. Given a layer, we use non-linear least squares to compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a sum of a small number of rank-one tensors. At the second step, this decomposition is used to replace the original convolutional layer with a sequence of four convolutional layers with small kernels. After such replacement, the entire network is fine-tuned on the training data using standard backpropagation process. We evaluate this approach on two CNNs and show that it is competitive with previous approaches, leading to higher obtained CPU speedups at the cost of lower accuracy drops for the smaller of the two networks. Thus, for the 36-class character classification CNN, our approach obtains a 8.5x CPU speedup of the whole network with only minor accuracy drop (1% from 91% to 90%). For the standard ImageNet architecture (AlexNet), the approach speeds up the second convolution layer by a factor of 4x at the cost of @math increase of the overall top-5 classification error.",
"Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks we report the compression factor of the dense weight matrix of a fully-connected layer up to 200000 times leading to the compression factor of the whole network up to 7 times.",
"",
"Convolutional neural networks excel in image recognition tasks, but this comes at the cost of high computational and memory complexity. To tackle this problem, [1] developed a tensor factorization framework to compress fully-connected layers. In this paper, we focus on compressing convolutional layers. We show that while the direct application of the tensor framework [1] to the 4-dimensional kernel of convolution does compress the layer, we can do better. We reshape the convolutional kernel into a tensor of higher order and factorize it. We combine the proposed approach with the previous work to compress both convolutional and fully-connected layers of a network and achieve 80x network compression rate with 1.1% accuracy drop on the CIFAR-10 dataset."
]
} |
1707.01786 | 2733236492 | The Recurrent Neural Networks and their variants have shown promising performances in sequence modeling tasks such as Natural Language Processing. These models, however, turn out to be impractical and difficult to train when exposed to very high-dimensional inputs due to the large input-to-hidden weight matrix. This may have prevented RNNs' large-scale application in tasks that involve very high input dimensions such as video modeling; current approaches reduce the input dimensions using various feature extractors. To address this challenge, we propose a new, more general and efficient approach by factorizing the input-to-hidden weight matrix using Tensor-Train decomposition which is trained simultaneously with the weights themselves. We test our model on classification tasks using multiple real-world video datasets and achieve competitive performances with state-of-the-art models, even though our model architecture is orders of magnitude less complex. We believe that the proposed approach provides a novel and fundamental building block for modeling high-dimensional sequential data with RNN architectures and opens up many possibilities to transfer the expressive and advanced architectures from other domains such as NLP to modeling high-dimensional sequential data. | @cite_18 performed two sequence classification tasks using multiple RNN architectures of relatively low dimensionality: The first task was to classify spoken words where the input sequence had a dimension of 13 channels. In the second task, RNNs were trained to classify handwriting based on the time-stamped 4D spatial features. RNNs have also been applied to classify the sentiment of a sentence such as in the IMDB reviews dataset @cite_17. In this case, the word embeddings form the input to RNN models and they may have a dimension of a few hundred.
The sequence classification model can be seen as a special case of the encoder-decoder framework @cite_10 in the sense that a classifier decodes the learned representation for the entire sequence into a probability distribution over all classes. | {
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_17"
],
"mid": [
"2952276042",
"2949888546",
"2113459411"
],
"abstract": [
"Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"Unsupervised vector-based approaches to semantics can model rich lexical meanings, but they largely fail to capture sentiment information that is central to many word meanings and important for a wide range of NLP tasks. We present a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term--document information as well as rich sentiment content. The proposed model can leverage both continuous and multi-dimensional sentiment information as well as non-sentiment annotations. We instantiate the model to utilize the document-level sentiment polarity annotations present in many online documents (e.g. star ratings). We evaluate the model using small, widely used sentiment and subjectivity corpora and find it out-performs several previously introduced methods for sentiment classification. We also introduce a large dataset of movie reviews to serve as a more robust benchmark for work in this area."
]
} |
1707.01825 | 2729142045 | In the context of data-mining competitions (e.g., Kaggle, KDDCup, ILSVRC Challenge), we show how access to an oracle that reports a contestant's log-loss score on the test set can be exploited to deduce the ground-truth of some of the test examples. By applying this technique iteratively to batches of @math examples (for small @math ), all of the test labels can eventually be inferred. In this paper, (1) We demonstrate this attack on the first stage of a recent Kaggle competition (Intel & MobileODT Cancer Screening) and use it to achieve a log-loss of @math (and thus attain a rank of #4 out of 848 contestants), without ever training a classifier to solve the actual task. (2) We prove an upper bound on the batch size @math as a function of the floating-point resolution of the probability estimates that the contestant submits for the labels. (3) We derive, and demonstrate in simulation, a more flexible attack that can be used even when the oracle reports the accuracy on an unknown (but fixed) subset of the test set's labels. These results underline the importance of evaluating contestants based only on test data that the oracle does not examine. | Both intentional hacking @cite_3 @cite_9 @cite_2 and inadvertent overfitting @cite_8 @cite_4 to test data in adaptive data analyses -- including but not limited to data-mining competitions -- have generated recent research interest in the privacy-preserving machine learning and computational complexity theory communities. Blum and Hardt @cite_9 recently described a ``boosting'' attack with which a contestant can estimate the test labels such that, with probability @math , their accuracy w.r.t. the ground truth is better than chance. They also proposed a ``Ladder'' mechanism that can be used to rank contestants' performance and that is robust to such attacks.
In addition, in our own prior work @cite_3 , we showed how an oracle that reports the AUC can be used to infer the ground-truth of a few of the test labels with complete certainty. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_2"
],
"mid": [
"2952550721",
"2950541101",
"2951865872",
"2963578006",
"2205852953"
],
"abstract": [
"We show that, under a standard hardness assumption, there is no computationally efficient algorithm that given @math samples from an unknown distribution can give valid answers to @math adaptively chosen statistical queries. A statistical query asks for the expectation of a predicate over the underlying distribution, and an answer to a statistical query is valid if it is \"close\" to the correct expectation over the distribution. Our result stands in stark contrast to the well known fact that exponentially many statistical queries can be answered validly and efficiently if the queries are chosen non-adaptively (no query may depend on the answers to previous queries). Moreover, a recent work by shows how to accurately answer exponentially many adaptively chosen statistical queries via a computationally inefficient algorithm; and how to answer a quadratic number of adaptive queries via a computationally efficient algorithm. The latter result implies that our result is tight up to a linear factor in @math. Conceptually, our result demonstrates that achieving statistical validity alone can be a source of computational intractability in adaptive settings. For example, in the modern large collaborative research environment, data analysts typically choose a particular approach based on previous findings. False discovery occurs if a research finding is supported by the data but not by the underlying distribution. While the study of preventing false discovery in Statistics is decades old, to the best of our knowledge our result is the first to demonstrate a computational barrier. In particular, our result suggests that the perceived difficulty of preventing false discovery in today's collaborative research environment may be inherent.",
"A great deal of effort has been devoted to reducing the risk of spurious scientific discoveries, from the use of sophisticated validation techniques, to deep statistical methods for controlling the false discovery rate in multiple hypothesis testing. However, there is a fundamental disconnect between the theoretical results and the practice of data analysis: the theory of statistical inference assumes a fixed collection of hypotheses to be tested, or learning algorithms to be applied, selected non-adaptively before the data are gathered, whereas in practice data is shared and reused with hypotheses and new analyses being generated on the basis of data exploration and the outcomes of previous analyses. In this work we initiate a principled study of how to guarantee the validity of statistical inference in adaptive data analysis. As an instance of this problem, we propose and investigate the question of estimating the expectations of @math adaptively chosen functions on an unknown distribution given @math random samples. We show that, surprisingly, there is a way to estimate an exponential in @math number of expectations accurately even if the functions are chosen adaptively. This gives an exponential improvement over standard empirical estimators that are limited to a linear number of estimates. Our result follows from a general technique that counter-intuitively involves actively perturbing and coordinating the estimates, using techniques developed for privacy preservation. We give additional applications of this technique to our question.",
"The organizer of a machine learning competition faces the problem of maintaining an accurate leaderboard that faithfully represents the quality of the best submission of each competing team. What makes this estimation problem particularly challenging is its sequential and adaptive nature. As participants are allowed to repeatedly evaluate their submissions on the leaderboard, they may begin to overfit to the holdout data that supports the leaderboard. Few theoretical results give actionable advice on how to design a reliable leaderboard. Existing approaches therefore often resort to poorly understood heuristics such as limiting the bit precision of answers and the rate of re-submission. In this work, we introduce a notion of \"leaderboard accuracy\" tailored to the format of a competition. We introduce a natural algorithm called \"the Ladder\" and demonstrate that it simultaneously supports strong theoretical guarantees in a fully adaptive model of estimation, withstands practical adversarial attacks, and achieves high utility on real submission files from an actual competition hosted by Kaggle. Notably, we are able to sidestep a powerful recent hardness result for adaptive risk estimation that rules out algorithms such as ours under a seemingly very similar notion of accuracy. On a practical note, we provide a completely parameter-free variant of our algorithm that can be deployed in a real competition with no tuning required whatsoever.",
"In machine learning contests such as the ImageNet Large Scale Visual Recognition Challenge ( 2015) and the KDD Cup, contestants can submit candidate solutions and receive from an oracle (typically the organizers of the competition) the accuracy of their guesses compared to the ground-truth labels. One of the most commonly used accuracy metrics for binary classification tasks is the Area Under the Receiver Operating Characteristics Curve (AUC). In this paper we provide proofs-of-concept of how knowledge of the AUC of a set of guesses can be used, in two different kinds of attacks, to improve the accuracy of those guesses. On the other hand, we also demonstrate the intractability of one kind of AUC exploit by proving that the number of possible binary labelings of n examples for which a candidate solution obtains an AUC score of c grows exponentially in n, for every c ∈ (0, 1).",
"The leaderboard in machine learning competitions is a tool to show the performance of various participants and to compare them. However, the leaderboard quickly becomes no longer accurate, due to hacking or overfitting. This article gives two pieces of advice to prevent easy hacking or overfitting. By following this advice, we reach the conclusion that something like the Ladder leaderboard introduced in [blum2015ladder] is inevitable. With this understanding, we naturally simplify Ladder by eliminating its redundant computation and explain how to choose the parameter and interpret it. We also prove that the sample complexity is cubic in the desired precision of the leaderboard."
]
} |
1707.02026 | 2726264694 | Grammatical error correction (GEC) systems strive to correct both global errors in word order and usage, and local errors in spelling and inflection. Further developing upon recent work on neural machine translation, we propose a new hybrid neural model with nested attention layers for GEC. Experiments show that the new model can effectively correct errors of both types by incorporating word and character-level information,and that the model significantly outperforms previous neural models for GEC as measured on the standard CoNLL-14 benchmark dataset. Further analysis also shows that the superiority of the proposed model can be largely attributed to the use of the nested attention mechanism, which has proven particularly effective in correcting local errors that involve small edits in orthography. | A variety of classifier-based and MT-based techniques have been applied to grammatical error correction. The CoNLL-14 shared task overview paper of provides a comparative evaluation of approaches. Two notable advances after the shared task have been in the areas of combining classifiers and phrase-based MT @cite_0 and adapting phrase-based MT to the GEC task @cite_4 . The latter work has reported the highest performance to date on the task of 49.5 in F @math score on the CoNLL-14 test set. This method integrates discriminative training toward the task-specific evaluation function, a rich set of features, and multiple large language models. Neural approaches to the task are less explored. We believe that the advances from are complementary to the ones we propose for neural MT, and could be integrated with neural models to achieve even higher performance. | {
"cite_N": [
"@cite_0",
"@cite_4"
],
"mid": [
"2410156476",
"2400573211"
],
"abstract": [
"Phrase-based statistical machine translation (SMT) systems have previously been used for the task of grammatical error correction (GEC) to achieve state-of-the-art accuracy. The superiority of SMT systems comes from their ability to learn text transformations from erroneous to corrected text, without explicitly modeling error types. However, phrase-based SMT systems suffer from limitations of discrete word representation, linear mapping, and lack of global context. In this paper, we address these limitations by using two different yet complementary neural network models, namely a neural network global lexicon model and a neural network joint model. These neural networks can generalize better by using continuous space representation of words and learn non-linear mappings. Moreover, they can leverage contextual information from the source sentence more effectively. By adding these two components, we achieve statistically significant improvement in accuracy for grammatical error correction over a state-of-the-art GEC system.",
"In this work, we study parameter tuning towards the M^2 metric, the standard metric for automatic grammar error correction (GEC) tasks. After implementing M^2 as a scorer in the Moses tuning framework, we investigate interactions of dense and sparse features, different optimizers, and tuning strategies for the CoNLL-2014 shared task. We notice erratic behavior when optimizing sparse feature weights with M^2 and offer partial solutions. We find that a bare-bones phrase-based SMT setup with task-specific parameter-tuning outperforms all previously published results for the CoNLL-2014 test set by a large margin (46.37% M^2 over previously 41.75%, by an SMT system with neural features) while being trained on the same, publicly available data. Our newly introduced dense and sparse features widen that gap, and we improve the state-of-the-art to 49.49% M^2."
]
} |
1707.02026 | 2726264694 | Grammatical error correction (GEC) systems strive to correct both global errors in word order and usage, and local errors in spelling and inflection. Further developing upon recent work on neural machine translation, we propose a new hybrid neural model with nested attention layers for GEC. Experiments show that the new model can effectively correct errors of both types by incorporating word and character-level information,and that the model significantly outperforms previous neural models for GEC as measured on the standard CoNLL-14 benchmark dataset. Further analysis also shows that the superiority of the proposed model can be largely attributed to the use of the nested attention mechanism, which has proven particularly effective in correcting local errors that involve small edits in orthography. | Two prior works explored sequence to sequence neural models for GEC @cite_15 @cite_13 , while integrated neural features in a phrase-based system for the task. Neural models were also applied to the related sub-task of grammatical error identification @cite_21 . demonstrated the promise of neural MT for GEC but did not adapt the basic sequence-to-sequence with attention to its unique challenges, falling back to traditional word-alignment models to address vocabulary coverage with a post-processing heuristic. built a character-level sequence to sequence model, which achieves open vocabulary and character-level modeling, but has difficulty with global word-level decisions. | {
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_13"
],
"mid": [
"2321916036",
"2338133831",
""
],
"abstract": [
"Natural language correction has the potential to help language learners improve their writing skills. While approaches with separate classifiers for different error types have high precision, they do not flexibly handle errors such as redundancy or non-idiomatic phrasing. On the other hand, word and phrase-based machine translation methods are not designed to cope with orthographic errors, and have recently been outpaced by neural models. Motivated by these issues, we present a neural network-based approach to language correction. The core component of our method is an encoder-decoder recurrent neural network with an attention mechanism. By operating at the character level, the network avoids the problem of out-of-vocabulary words. We illustrate the flexibility of our approach on a dataset of noisy, user-generated text collected from an English learner forum. When combined with a language model, our method achieves a state-of-the-art @math -score on the CoNLL 2014 Shared Task. We further demonstrate that training the network on additional data with synthesized errors can improve performance.",
"We demonstrate that an attention-based encoder-decoder model can be used for sentence-level grammatical error identification for the Automated Evaluation of Scientific Writing (AESW) Shared Task 2016. The attention-based encoder-decoder models can be used for the generation of corrections, in addition to error identification, which is of interest for certain end-user applications. We show that a character-based encoder-decoder model is particularly effective, outperforming other results on the AESW Shared Task on its own, and showing gains over a word-based counterpart. Our final model--a combination of three character-based encoder-decoder models, one word-based encoder-decoder model, and a sentence-level CNN--is the highest performing system on the AESW 2016 binary prediction Shared Task.",
""
]
} |
1707.02026 | 2726264694 | Grammatical error correction (GEC) systems strive to correct both global errors in word order and usage, and local errors in spelling and inflection. Further developing upon recent work on neural machine translation, we propose a new hybrid neural model with nested attention layers for GEC. Experiments show that the new model can effectively correct errors of both types by incorporating word and character-level information,and that the model significantly outperforms previous neural models for GEC as measured on the standard CoNLL-14 benchmark dataset. Further analysis also shows that the superiority of the proposed model can be largely attributed to the use of the nested attention mechanism, which has proven particularly effective in correcting local errors that involve small edits in orthography. | The primary focus of our work is integration of character and word-level reasoning in neural models for GEC, to capture global fluency errors and local errors in spelling and closely related morphological variants, while obtaining open vocabulary coverage. This is achieved with the help of character and word-level encoders and decoders with two nested levels of attention. Our model is inspired by advances in sub-word level modeling in neural machine translation. We build mostly on the hybrid model of to expand its capability to correct rare words by fine-grained character-level attention. We directly compare our model to the one of on the grammar correction task. Alternative methods for MT include modeling of word pieces to achieve open vocabulary @cite_16 , and more recently, fully character-level modeling @cite_5 . None of these models integrate two nested levels of attention although an empirical evaluation of these approaches for GEC would also be interesting. | {
"cite_N": [
"@cite_5",
"@cite_16"
],
"mid": [
"2531207078",
"1816313093"
],
"abstract": [
"Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.",
"Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.1 and 1.3 BLEU, respectively."
]
} |
1707.01922 | 2963898943 | Domain adaptation is an important tool to transfer knowledge about a task (e.g. classification) learned in a source domain to a second, or target domain. Current approaches assume that task-relevant target-domain data is available during training. We demonstrate how to perform domain adaptation when no such task-relevant target-domain data is available. To tackle this issue, we propose zero-shot deep domain adaptation (ZDDA), which uses privileged information from task-irrelevant dual-domain pairs. ZDDA learns a source-domain representation which is not only tailored for the task of interest but also close to the target-domain representation. Therefore, the source-domain task of interest solution (e.g. a classifier for classification tasks) which is jointly trained with the source-domain representation can be applicable to both the source and target representations. Using the MNIST, Fashion-MNIST, NIST, EMNIST, and SUN RGB-D datasets, we show that ZDDA can perform domain adaptation in classification tasks without access to task-relevant target-domain training data. We also extend ZDDA to perform sensor fusion in the SUN RGB-D scene classification task by simulating task-relevant target-domain representations with task-relevant source-domain data. To the best of our knowledge, ZDDA is the first domain adaptation and sensor fusion method which requires no task-relevant target-domain data. The underlying principle is not particular to computer vision data, but should be extensible to other domains. | Domain adaptation (DA) has been extensively studied in computer vision and applied to various applications such as image classification @cite_41 @cite_32 @cite_46 @cite_16 @cite_23 @cite_27 @cite_47 @cite_25 @cite_30 @cite_20 @cite_43 @cite_29 @cite_2 @cite_18 @cite_1 , semantic segmentation @cite_21 @cite_34 , and image captioning @cite_7 . 
With the advance of deep neural networks in recent years, the state-of-the-art methods successfully perform DA with (fully or partially) labeled @cite_7 @cite_46 @cite_23 @cite_27 @cite_30 or unlabeled @cite_41 @cite_32 @cite_46 @cite_16 @cite_47 @cite_25 @cite_30 @cite_20 @cite_43 @cite_29 @cite_2 @cite_21 @cite_18 @cite_1 T-R target-domain data. Although different strategies such as the domain adversarial loss @cite_20 and the domain confusion loss @cite_30 are proposed to improve the performance in the DA tasks, most of the existing methods need the T-R target-domain training data, which can be unavailable in reality. In contrast, we propose ZDDA to learn from the T-I dual-domain pairs without using the T-R target-domain training data. One part of ZDDA includes simulating the target-domain representation using the source-domain data, and similar concepts have been mentioned in @cite_48 @cite_31 . However, both of @cite_48 @cite_31 require access to the T-R dual-domain training pairs, but ZDDA needs no T-R target-domain data. | {
"cite_N": [
"@cite_30",
"@cite_41",
"@cite_29",
"@cite_43",
"@cite_2",
"@cite_20",
"@cite_18",
"@cite_31",
"@cite_48",
"@cite_21",
"@cite_23",
"@cite_46",
"@cite_7",
"@cite_32",
"@cite_27",
"@cite_34",
"@cite_16",
"@cite_25",
"@cite_1",
"@cite_47"
],
"mid": [
"2214409633",
"2311414730",
"2739708763",
"2627183927",
"2605370493",
"2593768305",
"2964285681",
"2463402750",
"753847829",
"2593800221",
"2557626841",
"2750946167",
"2611862713",
"1731081199",
"2963118547",
"2963998559",
"2478454054",
"2964288524",
"2616287544",
"2964228922"
],
"abstract": [
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.",
"Recently proposed domain adaptation methods retrain the network parameters and overcome the domain shift issue to a large extent. However, this requires access to all (labeled) source data, a large amount of (unlabeled) target data, and plenty of computational resources. In this work, we propose a lightweight alternative, that allows adapting to the target domain based on a limited number of target samples in a matter of minutes. To this end, we first analyze the output of each convolutional layer from a domain adaptation perspective. Surprisingly, we find that already at the very first layer, domain shift effects pop up. We then propose a new domain adaptation method, where first layer convolutional filters that are badly affected by the domain shift are reconstructed based on less affected ones.",
"In this paper, we propose a new approach called Deep LogCORAL for unsupervised visual domain adaptation. Our work builds on the recently proposed Deep CORAL method, which proposed to train a convolutional neural network and simultaneously minimize the Euclidean distance of covariance matrices between the source and target domains. We propose to use the Riemannian distance, approximated by Log-Euclidean distance, to replace the naive Euclidean distance in Deep CORAL. We also consider first-order information, and minimize the distance of mean vectors between two domains. We build an end-to-end model, in which we minimize both the classification loss, and the domain difference based on the first and second order information between two domains. Our experiments on the benchmark Office dataset demonstrate the improvements of our newly proposed Deep LogCORAL approach over the Deep CORAL method, as well as further improvement when optimizing both orders of information.",
"In recent years, deep neural networks have emerged as a dominant machine learning tool for a wide variety of application domains. However, training a deep neural network requires a large amount of labeled data, which is an expensive process in terms of time, labor and human expertise. Domain adaptation or transfer learning algorithms address this challenge by leveraging labeled data in a different, but related source domain, to develop a model for the target domain. Further, the explosive growth of digital data has posed a fundamental challenge concerning its storage and retrieval. Due to its storage and retrieval efficiency, recent years have witnessed a wide application of hashing in a variety of computer vision applications. In this paper, we first introduce a new dataset, Office-Home, to evaluate domain adaptation algorithms. The dataset contains images of a variety of everyday objects from multiple domains. We then propose a novel deep learning framework that can exploit labeled source data and unlabeled target data to learn informative hash codes, to accurately classify unseen target data. To the best of our knowledge, this is the first research effort to exploit the feature learning capabilities of deep neural networks to learn representative hash codes to address the domain adaptation problem. Our extensive empirical studies on multiple transfer tasks corroborate the usefulness of the framework in learning efficient hash codes which outperform existing competitive baselines for unsupervised domain adaptation.",
"Recently, DNN model compression based on network architecture design, e.g., SqueezeNet, attracted a lot of attention. No accuracy drop on image classification is observed on these extremely compact networks, compared to well-known models. An emerging question, however, is whether these model compression techniques hurt DNNs' learning ability other than classifying images on a single dataset. Our preliminary experiment shows that these compression methods could degrade domain adaptation (DA) ability, though the classification performance is preserved. Therefore, we propose a new compact network architecture and unsupervised DA method in this paper. The DNN is built on a new basic module Conv-M which provides more diverse feature extractors without significantly increasing parameters. The unified framework of our DA method will simultaneously learn invariance across domains, reduce divergence of feature representations, and adapt label prediction. Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet. Experiments show that our DNN obtains GoogLeNet-level accuracy both on classification and DA, and our DA method slightly outperforms previous competitive ones. Put all together, our DA strategy based on our DNN achieves state-of-the-art on sixteen of total eighteen DA tasks on popular Office-31 and Office-Caltech datasets.",
"Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.",
"In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore the changes of class prior distributions, i.e., class weight bias across domains. This remains an open problem but ubiquitous for domain adaptation, which can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD for exploiting the class prior probability on source and target domains, whose challenge lies in the fact that the class label in target domain is unavailable. To account for it, our proposed weighted MMD model is defined by introducing an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested by alternating between assigning the pseudo-labels, estimating auxiliary weights and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation.",
"We present a modality hallucination architecture for training an RGB object detection model which incorporates depth side information at training time. Our convolutional hallucination network learns a new and complementary RGB image representation which is taught to mimic convolutional mid-level features from a depth network. At test time images are processed jointly through the RGB and hallucination networks to produce improved detection performance. Thus, our method transfers information commonly extracted from depth training data to a network which can extract that information from the RGB counterpart. We present results on the standard NYUDv2 dataset and report improvement on the RGB detection task.",
"In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers.",
"Appearance changes due to weather and seasonal conditions represent a strong impediment to the robust implementation of machine learning systems in outdoor robotics. While supervised learning optimises a model for the training domain, it will deliver degraded performance in application domains that underlie distributional shifts caused by these changes. Traditionally, this problem has been addressed via the collection of labelled data in multiple domains or by imposing priors on the type of shift between both domains. We frame the problem in the context of unsupervised domain adaptation and develop a framework for applying adversarial techniques to adapt popular, state-of-the-art network architectures with the additional objective to align features across domains. Moreover, as adversarial training is notoriously unstable, we first perform an extensive ablation study, adapting many techniques known to stabilise generative adversarial networks, and evaluate on a surrogate classification task with the same appearance change. The distilled insights are applied to the problem of free-space segmentation for motion planning in autonomous driving.",
"In this paper, we propose an approach to the domain adaptation, dubbed Second- or Higher-order Transfer of Knowledge (So-HoT), based on the mixture of alignments of second- or higher-order scatter statistics between the source and target domains. The human ability to learn from few labeled samples is a recurring motivation in the literature for domain adaptation. Towards this end, we investigate the supervised target scenario for which few labeled target training samples per category exist. Specifically, we utilize two CNN streams: the source and target networks fused at the classifier level. Features from the fully connected layers fc7 of each network are used to compute second- or even higher-order scatter tensors, one per network stream per class. As the source and target distributions are somewhat different despite being related, we align the scatters of the two network streams of the same class (within-class scatters) to a desired degree with our bespoke loss while maintaining good separation of the between-class scatters. We train the entire network in end-to-end fashion. We provide evaluations on the standard Office benchmark (visual domains) and RGB-D combined with Caltech256 (depth-to-rgb transfer). We attain state-of-the-art results.",
"While fine-grained object recognition is an important problem in computer vision, current models are unlikely to accurately classify objects in the wild. These fully supervised models need additional annotated images to classify objects in every new scenario, a task that is infeasible. However, sources such as e-commerce websites and field guides provide annotated images for many classes. In this work, we study fine-grained domain adaptation as a step towards overcoming the dataset shift between easily acquired annotated images and the real world. Adaptation has not been studied in the fine-grained setting where annotations such as attributes could be used to increase performance. Our work uses an attribute based multi-task adaptation loss to increase accuracy from a baseline of 4.1% to 19.1% in the semi-supervised adaptation case. Prior domain adaptation works have been benchmarked on small datasets such as [46] with a total of 795 images for some domains, or simplistic datasets such as [41] consisting of digits. We perform experiments on a subset of a new challenging fine-grained dataset consisting of 1,095,021 images of 2,657 car categories drawn from e-commerce websites and Google Street View.",
"Impressive image captioning results are achieved in domains with plenty of training image and sentence pairs (e.g., MSCOCO). However, transferring to a target domain with significant domain shifts but no paired training data (referred to as cross-domain image captioning) remains largely unexplored. We propose a novel adversarial training procedure to leverage unpaired data in the target domain. Two critic networks are introduced to guide the captioner, namely domain critic and multi-modal critic. The domain critic assesses whether the generated sentences are indistinguishable from sentences in the target domain. The multi-modal critic assesses whether an image and its generated sentence are a valid pair. During training, the critics and captioner act as adversaries -- captioner aims to generate indistinguishable sentences, whereas critics aim at distinguishing them. The assessment improves the captioner through policy gradient updates. During inference, we further propose a novel critic-based planning method to select high-quality sentences without additional supervision (e.g., tags). To evaluate, we use MSCOCO as the source domain and four other datasets (CUB-200-2011, Oxford-102, TGIF, and Flickr30k) as the target domains. Our method consistently performs well on all datasets. In particular, on CUB-200-2011, we achieve 21.8 CIDEr-D improvement after adaptation. Utilizing critics during inference further gives another 4.5 boost.",
"We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.",
"This work provides a unified framework for addressing the problem of visual supervised domain adaptation and generalization with deep models. The main idea is to exploit the Siamese architecture to learn an embedding subspace that is discriminative, and where mapped visual domains are semantically aligned and yet maximally separated. The supervised setting becomes attractive especially when only few target data samples need to be labeled. In this scenario, alignment and separation of semantic probability distributions is difficult because of the lack of data. We found that by reverting to point-wise surrogates of distribution distances and similarities provides an effective solution. In addition, the approach has a high “speed” of adaptation, which requires an extremely low number of labeled target training samples, even one per category can be effective. The approach is extended to domain generalization. For both applications the experiments show very promising results.",
"During the last half decade, convolutional neural networks (CNNs) have triumphed over semantic segmentation, which is a core task of various emerging industrial applications such as autonomous driving and medical imaging. However, to train CNNs requires a huge amount of data, which is difficult to collect and laborious to annotate. Recent advances in computer graphics make it possible to train CNN models on photo-realistic synthetic data with computer-generated annotations. Despite this, the domain mismatch between the real images and the synthetic data significantly decreases the models’ performance. Hence we propose a curriculum-style learning approach to minimize the domain gap in semantic segmentation. The curriculum domain adaptation solves easy tasks first in order to infer some necessary properties about the target domain; in particular, the first task is to learn global label distributions over images and local distributions over landmark superpixels. These are easy to estimate because images of urban traffic scenes have strong idiosyncrasies (e.g., the size and spatial relations of buildings, streets, cars, etc.). We then train the segmentation network in such a way that the network predictions in the target domain follow those inferred properties. In experiments, our method significantly outperforms the baselines as well as the only known existing approach to the same problem.",
"In this paper, we propose a novel unsupervised domain adaptation algorithm based on deep learning for visual object recognition. Specifically, we design a new model called Deep Reconstruction-Classification Network (DRCN), which jointly learns a shared encoding representation for two tasks: (i) supervised classification of labeled source data, and (ii) unsupervised reconstruction of unlabeled target data. In this way, the learnt representation not only preserves discriminability, but also encodes useful information from the target domain. Our new DRCN model can be optimized by using backpropagation similarly as the standard neural networks.",
"Deep neural networks are able to learn powerful representations from large quantities of labeled input data, however they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this paper, we address the case when the target domain is unlabeled, requiring unsupervised adaptation. CORAL [18] is a simple unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation. Here, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL). Experiments on standard benchmark datasets show state-of-the-art performance. Our code is available at: https: github.com VisionLearningGroup CORAL.",
"This paper presents a novel unsupervised domain adaptation method for cross-domain visual recognition. We propose a unified framework that reduces the shift between domains both statistically and geometrically, referred to as Joint Geometrical and Statistical Alignment (JGSA). Specifically, we learn two coupled projections that project the source domain and target domain data into low-dimensional subspaces where the geometrical shift and distribution shift are reduced simultaneously. The objective function can be solved efficiently in a closed form. Extensive experiments have verified that the proposed method significantly outperforms several state-of-the-art domain adaptation methods on a synthetic dataset and three different real world cross-domain visual recognition tasks.",
"Despite rapid advances in face recognition, there remains a clear gap between the performance of still image-based face recognition and video-based face recognition, due to the vast difference in visual quality between the domains and the difficulty of curating diverse large-scale video datasets. This paper addresses both of those challenges, through an image to video feature-level domain adaptation approach, to learn discriminative video frame representations. The framework utilizes large-scale unlabeled video data to reduce the gap between different domains while transferring discriminative knowledge from large-scale labeled still images. Given a face recognition network that is pretrained in the image domain, the adaptation is achieved by (i) distilling knowledge from the network to a video adaptation network through feature matching, (ii) performing feature restoration through synthetic data augmentation and (iii) learning a domain-invariant feature through a domain adversarial discriminator. We further improve performance through a discriminator-guided feature fusion that boosts high-quality frames while eliminating those degraded by video domain-specific factors. Experiments on the YouTube Faces and IJB-A datasets demonstrate that each module contributes to our feature-level domain adaptation framework and substantially improves video face recognition performance to achieve state-of-the-art accuracy. We demonstrate qualitatively that the network learns to suppress diverse artifacts in videos such as pose, illumination or occlusion without being explicitly trained for them."
]
} |
1707.01890 | 2730550971 | Natural Language Processing (NLP) systems often make use of machine learning techniques that are unfamiliar to end-users who are interested in analyzing clinical records. Although NLP has been widely used in extracting information from clinical text, current systems generally do not support model revision based on feedback from domain experts. We present a prototype tool that allows end users to visualize and review the outputs of an NLP system that extracts binary variables from clinical text. Our tool combines multiple visualizations to help the users understand these results and make any necessary corrections, thus forming a feedback loop and helping improve the accuracy of the NLP models. We have tested our prototype in a formative think-aloud user study with clinicians and researchers involved in colonoscopy research. Results from semi-structured interviews and a System Usability Scale (SUS) analysis show that the users are able to quickly start refining NLP models, despite having very little or no experience with machine learning. Observations from these sessions suggest revisions to the interface to better support review workflow and interpretation of results. | There have been many efforts to develop user-centric tools for machine learning and NLP, making it easier for end users to build models. D'Avolio et al. @cite_2 have described a prototype that combines several existing tools, such as Knowtator @cite_9 for creating text annotations and cTAKES @cite_14 for deriving NLP features, within a common user interface that can be used to configure the machine learning algorithms and export their results.
Our present work complements this effort, focusing instead on facilitating expert review of NLP results and provision of feedback regarding the accuracy and completeness of details extracted from NLP data. | {
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_2"
],
"mid": [
"1981492645",
"2146089916",
"2126344780"
],
"abstract": [
"A general-purpose text annotation tool called Knowtator is introduced. Knowtator facilitates the manual creation of annotated corpora that can be used for evaluating or training a variety of natural language processing systems. Building on the strengths of the widely used Protege knowledge representation system, Knowtator has been developed as a Protege plug-in that leverages Protege's knowledge representation capabilities to specify annotation schemas. Knowtator's unique advantage over other annotation tools is the ease with which complex annotation schemas (e.g. schemas which have constrained relationships between annotation types) can be defined and incorporated into use. Knowtator is available under the Mozilla Public License 1.1 at http: bionlp.sourceforge.net Knowtator.",
"We aim to build and evaluate an open-source natural language processing system for information extraction from electronic medical record clinical free-text. We describe and evaluate our system, the clinical Text Analysis and Knowledge Extraction System (cTAKES), released open-source at . The cTAKES builds on existing open-source technologies—the Unstructured Information Management Architecture framework and OpenNLP natural language processing toolkit. Its components, specifically trained for the clinical domain, create rich linguistic and semantic annotations. Performance of individual components: sentence boundary detector accuracy=0.949; tokenizer accuracy=0.949; part-of-speech tagger accuracy=0.936; shallow parser F-score=0.924; named entity recognizer and system-level evaluation F-score=0.715 for exact and 0.824 for overlapping spans, and accuracy for concept mapping, negation, and status attributes for exact and overlapping spans of 0.957, 0.943, 0.859, and 0.580, 0.939, and 0.839, respectively. Overall performance is discussed against five applications. The cTAKES annotations are the foundation for methods and modules for higher-level semantic processing of clinical free-text.",
"Objective Despite at least 40 years of promising empirical performance, very few clinical natural language processing (NLP) or information extraction systems currently contribute to medical science or care. The authors address this gap by reducing the need for custom software and rules development with a graphical user interface-driven, highly generalizable approach to concept-level retrieval. @PARASPLIT Materials and methods A ‘learn by example’ approach combines features derived from open-source NLP pipelines with open-source machine learning classifiers to automatically and iteratively evaluate top-performing configurations. The Fourth i2b2 VA Shared Task Challenge's concept extraction task provided the data sets and metrics used to evaluate performance. @PARASPLIT Results Top F-measure scores for each of the tasks were medical problems (0.83), treatments (0.82), and tests (0.83). Recall lagged precision in all experiments. Precision was near or above 0.90 in all tasks. @PARASPLIT Discussion With no customization for the tasks and less than 5 min of end-user time to configure and launch each experiment, the average F-measure was 0.83, one point behind the mean F-measure of the 22 entrants in the competition. Strong precision scores indicate the potential of applying the approach for more specific clinical information extraction tasks. There was not one best configuration, supporting an iterative approach to model creation. @PARASPLIT Conclusion Acceptable levels of performance can be achieved using fully automated and generalizable approaches to concept-level information extraction. The described implementation and related documentation is available for download."
]
} |
1707.01890 | 2730550971 | Natural Language Processing (NLP) systems often make use of machine learning techniques that are unfamiliar to end-users who are interested in analyzing clinical records. Although NLP has been widely used in extracting information from clinical text, current systems generally do not support model revision based on feedback from domain experts. We present a prototype tool that allows end users to visualize and review the outputs of an NLP system that extracts binary variables from clinical text. Our tool combines multiple visualizations to help the users understand these results and make any necessary corrections, thus forming a feedback loop and helping improve the accuracy of the NLP models. We have tested our prototype in a formative think-aloud user study with clinicians and researchers involved in colonoscopy research. Results from semi-structured interviews and a System Usability Scale (SUS) analysis show that the users are able to quickly start refining NLP models, despite having very little or no experience with machine learning. Observations from these sessions suggest revisions to the interface to better support review workflow and interpretation of results. | Other efforts have taken this idea even further to build interactive machine learning systems that learn iteratively from their end-users. Sometimes referred to as methods, these techniques involve a learning system whose output is used by the end-user to further inform the system about the learning task. This forms a closed loop that can be used to build continuously improving models of prediction. Some examples include applications in interactive document clustering @cite_21 , document retrieval @cite_13 , image segmentation @cite_17 , bug triaging @cite_10 and even music composition @cite_8 . These successes suggest that it may be promising to use feedback to improve machine learning models in the clinical domain. | {
"cite_N": [
"@cite_13",
"@cite_8",
"@cite_21",
"@cite_10",
"@cite_17"
],
"mid": [
"1992492534",
"1999043044",
"2000122588",
"2157018954",
"2003238113"
],
"abstract": [
"Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora.",
"Model evaluation plays a special role in interactive machine learning (IML) systems in which users rely on their assessment of a model's performance in order to determine how to improve it. A better understanding of what model criteria are important to users can therefore inform the design of user interfaces for model evaluation as well as the choice and design of learning algorithms. We present work studying the evaluation practices of end users interactively building supervised learning systems for real-world gesture analysis problems. We examine users' model evaluation criteria, which span conventionally relevant criteria such as accuracy and cost, as well as novel criteria such as unexpectedness. We observed that users employed evaluation techniques---including cross-validation and direct, real-time evaluation---not only to make relevant judgments of algorithms' performance and interactively improve the trained models, but also to learn to provide more effective training data. Furthermore, we observed that evaluation taught users about what types of models were easy or possible to build, and users sometimes used this information to modify the learning problem definition or their plans for using the trained models in practice. We discuss the implications of these findings with regard to the role of generalization accuracy in IML, the design of new algorithms and interfaces, and the scope of potential benefits of incorporating human interaction in the design of supervised learning systems.",
"Extracting useful knowledge from large network datasets has become a fundamental challenge in many domains, from scientific literature to social networks and the web. We introduce Apolo, a system that uses a mixed-initiative approach - combining visualization, rich user interaction and machine learning - to guide the user to incrementally and interactively explore large network data and make sense of it. Apolo engages the user in bottom-up sensemaking to gradually build up an understanding over time by starting small, rather than starting big and drilling down. Apolo also helps users find relevant information by specifying exemplars, and then using a machine learning method called Belief Propagation to infer which other nodes may be of interest. We evaluated Apolo with twelve participants in a between-subjects study, with the task being to find relevant new papers to update an existing survey paper. Using expert judges, participants using Apolo found significantly more relevant papers. Subjective feedback of Apolo was also very positive.",
"Network alarm triage refers to grouping and prioritizing a stream of low-level device health information to help operators find and fix problems. Today, this process tends to be largely manual because existing tools cannot easily evolve with the network. We present CueT, a system that uses interactive machine learning to learn from the triaging decisions of operators. It then uses that learning in novel visualizations to help them quickly and accurately triage alarms. Unlike prior interactive machine learning systems, CueT handles a highly dynamic environment where the groups of interest are not known a-priori and evolve constantly. A user study with real operators and data from a large network shows that CueT significantly improves the speed and accuracy of alarm triage compared to the network's current practice.",
"Perceptual user interfaces (PUIs) are an important part of ubiquitous computing. Creating such interfaces is difficult because of the image and signal processing knowledge required for creating classifiers. We propose an interactive machine-learning (IML) model that allows users to train, classify view and correct the classifications. The concept and implementation details of IML are discussed and contrasted with classical machine learning models. Evaluations of two algorithms are also presented. We also briefly describe Image Processing with Crayons (Crayons), which is a tool for creating new camera-based interfaces using a simple painting metaphor. The Crayons tool embodies our notions of interactive machine learning"
]
} |
1707.02010 | 2724323022 | We prove that three spaces of importance in topological combinatorics are homeomorphic to closed balls: the totally nonnegative Grassmannian, the compactification of the space of electrical networks, and the cyclically symmetric amplituhedron. | Lusztig [LusIntro, Section 4] used a flow similar to @math to show that @math is contractible. Our flow can be thought of as an affine (or loop group) analogue of his flow, and is closely related to the whirl matrices of @cite_29 . We also remark that Ayala, Kliemann, and San Martin @cite_6 used the language of control theory to give an alternative development in type @math of Lusztig's theory of total positivity. In that context, @math ( @math ) lies in the interior of the compression semigroup of @math , and @math is its attractor. | {
"cite_N": [
"@cite_29",
"@cite_6"
],
"mid": [
"2963963365",
"1988370440"
],
"abstract": [
"Abstract This is the first of a series of papers where we develop a theory of total positivity for loop groups. In this paper, we completely describe the totally nonnegative part of the polynomial loop group G L n ( R [ t , t − 1 ] ) , and for the formal loop group G L n ( R ( ( t ) ) ) we describe the totally nonnegative points which are not totally positive. Furthermore, we make the connection with networks on the cylinder. Our approach involves the introduction of distinguished generators, called whirls and curls, and we describe the commutation relations amongst them. These matrices play the same role as the poles and zeros of the Edrei–Thoma theorem classifying totally positive functions (corresponding to our case n = 1 ). We give a solution to the “factorization problem” using limits of ratios of minors. This is in a similar spirit to the Berenstein–Fomin–Zelevinsky Chamber Ansatz where ratios of minors are used. A birational symmetric group action arising in the commutation relation of curls appeared previously in Noumi–Yamada’s study of discrete Painleve dynamical systems and Berenstein–Kazhdan’s study of geometric crystals.",
"The objective of this article is to bring together two different mathematical subjects, namely totally positive matrices and control sets. It describes the control sets of the totally positive matrices and the sign-regular matrices in flag manifolds. In particular, the classical result by Gantmacher and Krein, follows from Theorem 9.7. One expects that with this description some theorems proved by combinatorial techniques have a geometric or dynamic interpretation."
]
} |
1707.02010 | 2724323022 | We prove that three spaces of importance in topological combinatorics are homeomorphic to closed balls: the totally nonnegative Grassmannian, the compactification of the space of electrical networks, and the cyclically symmetric amplituhedron. | Marsh and Rietsch defined and studied a superpotential on the Grassmannian in the context of mirror symmetry [marsh_rietsch, Section 6]. It follows from results of Rietsch @cite_2 (as explained in @cite_24 ) that @math is, rather surprisingly, also the unique totally nonnegative critical point of the @math specialization of the superpotential. However, the superpotential is not defined on the boundary of @math . The precise relationship between @math and the gradient flow of the superpotential remains mysterious. | {
"cite_N": [
"@cite_24",
"@cite_2"
],
"mid": [
"2803112475",
"2013265108"
],
"abstract": [
"We show that for each k and n, the cyclic shift map on the complex Grassmannian Gr(k,n) has exactly @math fixed points. There is a unique totally nonnegative fixed point, given by taking n equally spaced points on the trigonometric moment curve (if k is odd) or the symmetric moment curve (if k is even). We introduce a parameter q, and show that the fixed points of a q-deformation of the cyclic shift map are precisely the critical points of the mirror-symmetric superpotential @math on Gr(k,n). This follows from results of Rietsch about the quantum cohomology ring of Gr(k,n). We survey many other diverse contexts which feature moment curves and the cyclic shift map.",
"Abstract Let G be a simple simply connected complex algebraic group. We give a Lie-theoretic construction of a conjectural mirror family associated to a general flag variety G P , and show that it recovers the Peterson variety presentation for the T -equivariant quantum cohomology rings q H T ∗ ( G P ) ( q ) with quantum parameters inverted. For SL n B we relate our construction to the mirror family defined by Givental and its T -equivariant analogue due to Joe and Kim."
]
} |
1707.01340 | 2734157130 | Web video is often used as a source of data in various fields of study. While specialized subsets of web video, mainly earmarked for dedicated purposes, are often analyzed in detail, there is little information available about the properties of web video as a whole. In this paper we present insights gained from the analysis of the metadata associated with more than 120 million videos harvested from two popular web video platforms, vimeo and YouTube, in 2016 and compare their properties with the ones found in commonly used video collections. This comparison has revealed that existing collections do not (or no longer) properly reflect the properties of web video "in the wild". | Web video has gained importance in various areas within the sciences and humanities in recent years. In computer science, it is mostly used in the context of retrieval @cite_9 and machine learning @cite_21 @cite_7 . To facilitate research in these areas, multiple video collections have been compiled over the years. The majority of these datasets were created with a specific problem in mind and are therefore composed of a comparatively small number of videos with a narrow range of content. Such collections are usually used for the evaluation of concept recognition tasks @cite_20 or action recognition tasks @cite_8 . Other, more general collections range in size from a few tens @cite_16 @cite_0 to many thousands of hours of video content @cite_10 @cite_14 @cite_4 . The more popular of these include CC @cite_6 , MCG-WEBV @cite_3 and IACC @cite_10 . What these collections have in common is that they all source their videos from the web, but each has its own criteria for the inclusion of videos. This introduces biases which prevent these collections from being representative of the state of web video as a whole. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"2067766814",
"1544092585",
"1964659144",
"2167626157",
"2468571691",
"2229783764",
"2133845638",
"",
"",
"2004227763",
"2073546118",
"1995820507"
],
"abstract": [
"Near-duplicate video retrieval (NDVR) has recently attracted lots of research attention due to the exponential growth of online videos. It helps in many areas, such as copyright protection, video tagging, online video usage monitoring, etc. Most of existing approaches use only a single feature to represent a video for NDVR. However, a single feature is often insufficient to characterize the video content. Besides, while the accuracy is the main concern in previous literatures, the scalability of NDVR algorithms for large scale video datasets has been rarely addressed. In this paper, we present a novel approach - Multiple Feature Hashing (MFH) to tackle both the accuracy and the scalability issues of NDVR. MFH preserves the local structure information of each individual feature and also globally consider the local structures for all the features to learn a group of hash functions which map the video keyframes into the Hamming space and generate a series of binary codes to represent the video dataset. We evaluate our approach on a public video dataset and a large scale video dataset consisting of 132,647 videos, which was collected from YouTube by ourselves. The experiment results show that the proposed method outperforms the state-of-the-art techniques in both accuracy and efficiency.",
"We present the Yahoo Flickr Creative Commons 100 Million Dataset (YFCC100M), the largest public multimedia collection that has ever been released. The dataset contains a total of 100 million media objects, of which approximately 99.2 million are photos and 0.8 million are videos, all of which carry a Creative Commons license. Each media object in the dataset is represented by several pieces of metadata, e.g. Flickr identifier, owner name, camera, title, tags, geo, media source. The collection provides a comprehensive snapshot of how photos and videos were taken, described, and shared over the years, from the inception of Flickr in 2004 until early 2014. In this article we explain the rationale behind its creation, as well as the implications the dataset has for science, research, engineering, and development. We further present several new challenges in multimedia research that can now be expanded upon with our dataset.",
"With the pervasiveness of online social media and rapid growth of web data, a large amount of multi-media data is available online. However, how to organize them for facilitating users' experience and government supervision remains a problem yet to be seriously investigated. Topic detection and tracking, which has been a hot research topic for decades, could cluster web videos into different topics according to their semantic content. However, how to online discover topic and track them from web videos and images has not been fully discussed. In this paper, we formulate topic detection and tracking as an online tracking, detection and learning problem. First, by learning from historical data including labeled data and plenty of unlabeled data using semi-supervised multi-class multi-feature method, we obtain a topic tracker which could also discover novel topics from the new stream data. Second, when new data arrives, an online updating method is developed to make topic tracker adaptable to the evolution of the stream data. We conduct experiments on public dataset to evaluate the performance of the proposed method and the results demonstrate its effectiveness for topic detection and tracking.",
"This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects to the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably.",
"The huge amount of redundant multimedia data, like video, has become a problem in terms of both space and copyright. Usually, the methods for identifying near-duplicate videos are neither adequate nor scalable to find pairs of similar videos. Similarity self-join operation could be an alternative to solve this problem in which all similar pairs of elements from a video dataset are retrieved. Nonetheless, methods for similarity self-join have poor performance when applied to high-dimensional data. In this work, we propose a new approximate method to compute similarity self-join in sub-quadratic time in order to solve the near-duplicate video detection problem. Our strategy is based on clustering techniques to find out groups of videos which are similar to each other.",
"The prevailing of Web 2.0 techniques has led to the boom of web video content as well as its social network. To overcome the information overload problem, effective web video topic discovery and structuring techniques are highly demanded. To this end, existing works go to two respective directions: video topic discovery based on content or community detection in social network, with limited interplay between topics and network structures. In this paper, we construct the video social network based on web user interactions over videos. By comparing the topics and communities discovered on this network, we unveil the loose correspondence relationship between content and social network, and correspondingly propose a novel community-driven web video topic discovery model, which regularizes the topic model in relaxed community-level. Quantitatively analysis on real-world YouTube data shows that our model has achieved a significant improvement over the purely content-based or network-based baselines. Meanwhile, we propose a community-based topic structuralization framework, which decomposes a topic in social network space, and tracks the spreading trajectory of this topic among different communities on the time line. This structuralization can help users to catch the important facets of topics, such as \"Who is interested with this topic\" and \"How does it propagate among the communities\", which provide valuable insights in related applications such as web monitoring and market development.",
"With the exponential growth of social media, there exist huge numbers of near-duplicate web videos, ranging from simple formatting to complex mixture of different editing effects. In addition to the abundant video content, the social Web provides rich sets of context information associated with web videos, such as thumbnail image, time duration and so on. At the same time, the popularity of Web 2.0 demands for timely response to user queries. To balance the speed and accuracy aspects, in this paper, we combine the contextual information from time duration, number of views, and thumbnail images with the content analysis derived from color and local points to achieve real-time near-duplicate elimination. The results of 24 popular queries retrieved from YouTube show that the proposed approach integrating content and context can reach real-time novelty re-ranking of web videos with extremely high efficiency, where the majority of duplicates can be rapidly detected and removed from the top rankings. The speedup of the proposed approach can reach 164 times faster than the effective hierarchical method proposed in , with just a slight loss of performance.",
"",
"",
"A future with widespread access to large digital libraries of video is nearing reality. Anticipating this future, a great deal of research is focused on methods of browsing and retrieving digital video, developing algorithms for creating surrogates for video content, and creating interfaces that display result sets from multimedia queries. Research in these areas requires that each investigator acquire and digitize video for their studies since the multimedia information retrieval community does not yet have a standard collection of video to be used for research purposes. The primary goal of the Open Video Project is to create and maintain a shared digital video repository and test collection to meet these research needs.",
"This paper begins by considering a number of important design questions for a large-scale, widely available, multimedia test collection intended to support long-term scientific evaluation and comparison of content-based video analysis and exploitation systems. While the collection presented here is not quite web-scale, it is to our knowledge the largest video collection created to date. It is therefore of use in expanding the scale of any evaluation of multimedia collections and systems. Such exploitation systems would include the kinds of functionality already explored within the annual TREC Video Retrieval Evaluation (TRECVid) benchmarking activity such as search, semantic concept detection, and automatic summarization. We then report on our progress in creating such a multimedia collection from publicly available Internet Archive videos with Creative Commons licenses (IACC.1), which we hope will be a useful approximation of a web-scale collection and will support a next generation of benchmarking activities for content-based video operations. We also report on some possibilities for putting this collection to use in multimedia system evaluation. It is the intended that this collection be partitioned and used within the TRECVid 2010 evaluations, and in subsequent years to that.",
"The problem of describing images through natural language has gained importance in the computer vision community. Solutions to image description have either focused on a top-down approach of generating language through combinations of object detections and language models or bottom-up propagation of keyword tags from training images to test images through probabilistic or nearest neighbor techniques. In contrast, describing videos with natural language is a less studied problem. In this paper, we combine ideas from the bottom-up and top-down approaches to image description and propose a method for video description that captures the most relevant contents of a video in a natural language description. We propose a hybrid system consisting of a low level multimodal latent topic model for initial keyword annotation, a middle level of concept detectors and a high level module to produce final lingual descriptions. We compare the results of our system to human descriptions in both short and long forms on two datasets, and demonstrate that final system output has greater agreement with the human descriptions than any single level."
]
} |
1707.01340 | 2734157130 | Web video is often used as a source of data in various fields of study. While specialized subsets of web video, mainly earmarked for dedicated purposes, are often analyzed in detail, there is little information available about the properties of web video as a whole. In this paper we present insights gained from the analysis of the metadata associated with more than 120 million videos harvested from two popular web video platforms, vimeo and YouTube, in 2016 and compare their properties with the ones found in commonly used video collections. This comparison has revealed that existing collections do not (or no longer) properly reflect the properties of web video "in the wild". | Some of these collections are limited in their applicability as they contain content whose licenses do not explicitly allow free and unconstrained use @cite_11. Such licensing issues can lead to situations where collections which were purpose-built for certain applications and are used by several researchers cannot easily be shared with the research community at large without the corresponding legal paperwork @cite_12. Other collections, especially built to avoid such issues, suffer from other limitations. In @cite_10, the authors use the Internet Archive https://archive.org/details/movies as a source for video. While all these videos are guaranteed to be freely usable and re-distributable, they are not representative of modern web video content found in the wild because of a certain lack of diversity in their sources. Other collections, such as @cite_4, avoid this problem by collecting Creative Commons https://creativecommons.org licensed images and videos from flickr https://www.flickr.com, which increases the diversity. Due to the primary focus of flickr on images rather than videos, however, the videos found on this platform still differ from what can currently be regarded as general web video. | {
"cite_N": [
"@cite_4",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"1544092585",
"2073546118",
"2484556227",
"2163292664"
],
"abstract": [
"We present the Yahoo Flickr Creative Commons 100 Million Dataset (YFCC100M), the largest public multimedia collection that has ever been released. The dataset contains a total of 100 million media objects, of which approximately 99.2 million are photos and 0.8 million are videos, all of which carry a Creative Commons license. Each media object in the dataset is represented by several pieces of metadata, e.g. Flickr identifier, owner name, camera, title, tags, geo, media source. The collection provides a comprehensive snapshot of how photos and videos were taken, described, and shared over the years, from the inception of Flickr in 2004 until early 2014. In this article we explain the rationale behind its creation, as well as the implications the dataset has for science, research, engineering, and development. We further present several new challenges in multimedia research that can now be expanded upon with our dataset.",
"This paper begins by considering a number of important design questions for a large-scale, widely available, multimedia test collection intended to support long-term scientific evaluation and comparison of content-based video analysis and exploitation systems. While the collection presented here is not quite web-scale, it is to our knowledge the largest video collection created to date. It is therefore of use in expanding the scale of any evaluation of multimedia collections and systems. Such exploitation systems would include the kinds of functionality already explored within the annual TREC Video Retrieval Evaluation (TRECVid) benchmarking activity such as search, semantic concept detection, and automatic summarization. We then report on our progress in creating such a multimedia collection from publicly available Internet Archive videos with Creative Commons licenses (IACC.1), which we hope will be a useful approximation of a web-scale collection and will support a next generation of benchmarking activities for content-based video operations. We also report on some possibilities for putting this collection to use in multimedia system evaluation. It is the intended that this collection be partitioned and used within the TRECVid 2010 evaluations, and in subsequent years to that.",
"Interactive video retrieval tools developed over the past few years are emerging as powerful alternatives to automatic retrieval approaches by giving the user more control as well as more responsibilities. Current research tries to identify the best combinations of image, audio and text features that combined with innovative UI design maximize the tools performance. We present the last installment of the Video Browser Showdown 2015 which was held in conjunction with the International Conference on MultiMedia Modeling 2015 (MMM 2015) and has the stated aim of pushing for a better integration of the user into the search process. The setup of the competition including the used dataset and the presented tasks as well as the participating tools will be introduced . The performance of those tools will be thoroughly presented and analyzed. Interesting highlights will be marked and some predictions regarding the research focus within the field for the near future will be made.",
"This paper exploits the context of natural dynamic scenes for human action recognition in video. Human actions are frequently constrained by the purpose and the physical properties of scenes and demonstrate high correlation with particular scene classes. For example, eating often happens in a kitchen while running is more common outdoors. The contribution of this paper is three-fold: (a) we automatically discover relevant scene classes and their correlation with human actions, (b) we show how to learn selected scene classes from video without manual supervision and (c) we develop a joint framework for action and scene recognition and demonstrate improved recognition of both in natural video. We use movie scripts as a means of automatic supervision for training. For selected action classes we identify correlated scene classes in text and then retrieve video samples of actions and scenes for training using script-to-video alignment. Our visual models for scenes and actions are formulated within the bag-of-features framework and are combined in a joint scene-action SVM-based classifier. We report experimental results and validate the method on a new large dataset with twelve action classes and ten scene classes acquired from 69 movies."
]
} |
1707.01340 | 2734157130 | Web video is often used as a source of data in various fields of study. While specialized subsets of web video, mainly earmarked for dedicated purposes, are often analyzed in detail, there is little information available about the properties of web video as a whole. In this paper we present insights gained from the analysis of the metadata associated with more than 120 million videos harvested from two popular web video platforms, vimeo and YouTube, in 2016 and compare their properties with the ones found in commonly used video collections. This comparison has revealed that existing collections do not (or no longer) properly reflect the properties of web video "in the wild". | Mainly in the context of data mining, the metadata of web video has also been studied. Such works aimed at predicting certain properties of videos from specific genres @cite_5 or at detecting specific behaviors such as privacy invasion @cite_13. Web video metadata has also been used to study video popularity distributions across different video platforms as well as copyright infringement @cite_2. Again, such approaches are usually based on metadata which was collected especially for this purpose and thus does not provide a representative cross-section of web video in general. | {
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_2"
],
"mid": [
"2297970248",
"2057916874",
"2118519969"
],
"abstract": [
"Now a days, the Data Engineering becoming emerging trend to discover knowledge from web audio- visual data such as- YouTube videos, Yahoo Screen, Face Book videos etc. Different categories of web video are being shared on such social websites and are being used by the billions of users all over the world. The uploaded web videos will have different kind of metadata as attribute information of the video data. The metadata attributes defines the contents and features characteristics of the web videos conceptually. Hence, accomplishing web video mining by extracting features of web videos in terms of metadata is a challenging task. In this work, effective attempts are made to classify and predict the metadata features of web videos such as length of the web videos, number of comments of the web videos, ratings information and view counts of the web videos using data mining algorithms such as Decision tree J48 and navie Bayesian algorithms as a part of web video mining. The results of Decision tree J48 and navie Bayesian classification models are analyzed and compared as a step in the process of knowledge discovery from web videos.",
"YouTube is one of the most popular and largest video sharing websites (with social networking features) on the Internet. A significant percentage of videos uploaded on YouTube contains objectionable content and violates YouTube community guidelines. YouTube contains several copyright violated videos, commercial spam, hate and extremism promoting videos, vulgar and pornographic material and privacy invading content. This is primarily due to the low publication barrier and anonymity. We present an approach to identify privacy invading harassment and misdemeanor videos by mining the video metadata. We divide the problem into sub-problems: vulgar video detection, abuse and violence in public places and ragging video detection in school and colleges. We conduct a characterization study on a training dataset by downloading several videos using YouTube API and manually annotating the dataset. We define several discriminatory features for recognizing the target class objects. We employ a one class classifier approach to detect the objectionable video and frame the problem as a recognition problem. Our empirical analysis on test dataset reveals that linguistic features (presence of certain terms and people in the title and description of the main and related videos), popularity based, duration and category of videos can be used to predict the video type. We validate our hypothesis by conducting a series of experiments on evaluation dataset acquired from YouTube. Empirical results reveal that accuracy of proposed approach is more than 80 demonstrating the effectiveness of the approach.",
"User Generated Content (UGC) is re-shaping the way people watch video and TV, with millions of video producers and consumers. In particular, UGC sites are creating new viewing patterns and social interactions, empowering users to be more creative, and developing new business opportunities. To better understand the impact of UGC systems, we have analyzed YouTube, the world's largest UGC VoD system. Based on a large amount of data collected, we provide an in-depth study of YouTube and other similar UGC systems. In particular, we study the popularity life-cycle of videos, the intrinsic statistical properties of requests and their relationship with video age, and the level of content aliasing or of illegal content in the system. We also provide insights on the potential for more efficient UGC VoD systems (e.g. utilizing P2P techniques or making better use of caching). Finally, we discuss the opportunities to leverage the latent demand for niche videos that are not reached today due to information filtering effects or other system scarcity distortions. Overall, we believe that the results presented in this paper are crucial in understanding UGC systems and can provide valuable information to ISPs, site administrators, and content owners with major commercial and technical implications."
]
} |
1707.01400 | 2734075827 | Recently, several methods based on generative adversarial network (GAN) have been proposed for the task of aligning cross-domain images or learning a joint distribution of cross-domain images. One of the methods is to use conditional GAN for alignment. However, previous attempts of adopting conditional GAN do not perform as well as other methods. In this work we present an approach for improving the capability of the methods which are based on conditional GAN. We evaluate the proposed method on numerous tasks and the experimental results show that it is able to align the cross-domain images successfully in absence of paired samples. Furthermore, we also propose another model which conditions on multiple information such as domain information and label information. Conditioning on domain information and label information, we are able to conduct label propagation from the source domain to the target domain. A 2-step alternating training algorithm is proposed to learn this model. | The most relevant work to this paper is CoGAN @cite_17, which also tries to align cross-domain images. In @cite_17, the authors also tried to use conditional GAN for this task. However, their attempt failed in many tasks such as aligning digits and negative digits. Another task related to our work is image-to-image translation @cite_4 @cite_8. Both @cite_10 and @cite_4 adopted two GANs that form a cycle mapping, yielding a reconstruction loss. Dong et al. @cite_12 proposed to use conditional GAN for image-to-image translation. They first trained a conditional GAN to learn shared features and then trained an encoder to map the images to latent vectors. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2951939904",
"",
"2951021768",
"2579352881",
"2471149695"
],
"abstract": [
"While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. Source code for official implementation is publicly available this https URL",
"",
"Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to \"fall off\" the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on user's scribbles.",
"It's useful to automatically transform an image from its original form to some synthetic form (style, partial contents, etc.), while keeping the original structure or semantics. We define this requirement as the \"image-to-image translation\" problem, and propose a general approach to achieve it, based on deep convolutional and conditional generative adversarial networks (GANs), which has gained a phenomenal success to learn mapping images from noise input since 2014. In this work, we develop a two step (unsupervised) learning method to translate images between different domains by using unlabeled images without specifying any correspondence between them, so that to avoid the cost of acquiring labeled data. Compared with prior works, we demonstrated the capacity of generality in our model, by which variance of translations can be conduct by a single type of model. Such capability is desirable in applications like bidirectional translation",
"We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation."
]
} |
1707.01461 | 2774761636 | Augmenting a neural network with memory that can grow without growing the number of trained parameters is a recent powerful concept with many exciting applications. We propose a design of memory augmented neural networks (MANNs) called Labeled Memory Networks (LMNs) suited for tasks requiring online adaptation in classification models. LMNs organize the memory with classes as the primary key. The memory acts as a second boosted stage following a regular neural network thereby allowing the memory and the primary network to play complementary roles. Unlike existing MANNs that write to memory for every instance and use LRU based memory replacement, LMNs write only for instances with non-zero loss and use label-based memory replacement. We demonstrate significant accuracy gains on various tasks including word-modelling and few-shot learning. In this paper, we establish their potential in online adapting a batch trained neural network to domain-relevant labeled data at deployment time. We show that LMNs are better than other MANNs designed for meta-learning. We also found them to be more accurate and faster than state-of-the-art methods of retuning model parameters for adapting to domain-specific labeled data. | The earliest example of the use of memory in neural networks is attention. Attention as memory is slow for long histories, leading to the development of several more flexible memory-based architectures @cite_3. Neural Turing Machines (NTMs) @cite_15 were developed for end-to-end learning of algorithmic tasks. One such task where NTMs were shown to work was learning N-gram distributions from token sequences. Since this is related to online sequence prediction, our first model was based on NTMs. However, on our real datasets we found NTMs not to be very effective. The reason, perhaps, is the controller's difficulty in adaptively generating keys and values for memory operations. 
Dynamic-NTMs (DNTMs) @cite_33 alleviate this via fixed trainable keys and have been shown to aid QA tasks, but they did not work either, as we show in our experimental section. Like LMNs, DNTMs also propose discrete memory addressing, but their keys are trained from the input @math, unlike in LMNs, where the discrete key is the class label and requires no training. Another difference from LMNs is that the memory is very tightly integrated with the neural network and requires joint training. | {
"cite_N": [
"@cite_15",
"@cite_33",
"@cite_3"
],
"mid": [
"2167839676",
"2470713034",
""
],
"abstract": [
"We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-toend, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.",
"We extend neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and GRUcontroller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We have done extensive analysis of our model and different variations of NTM on bAbI task. We also provide further experimental results on sequential pMNIST, Stanford Natural Language Inference, associative recall and copy tasks.",
""
]
} |
1707.01461 | 2774761636 | Augmenting a neural network with memory that can grow without growing the number of trained parameters is a recent powerful concept with many exciting applications. We propose a design of memory augmented neural networks (MANNs) called Labeled Memory Networks (LMNs) suited for tasks requiring online adaptation in classification models. LMNs organize the memory with classes as the primary key. The memory acts as a second boosted stage following a regular neural network thereby allowing the memory and the primary network to play complementary roles. Unlike existing MANNs that write to memory for every instance and use LRU based memory replacement, LMNs write only for instances with non-zero loss and use label-based memory replacement. We demonstrate significant accuracy gains on various tasks including word-modelling and few-shot learning. In this paper, we establish their potential in online adapting a batch trained neural network to domain-relevant labeled data at deployment time. We show that LMNs are better than other MANNs designed for meta-learning. We also found them to be more accurate and faster than state-of-the-art methods of retuning model parameters for adapting to domain-specific labeled data. | Another way to adapt is by training or tuning parameters. The methods of @cite_21 and @cite_25 train only a subset of parameters that are local to each sequence. More recently, @cite_23 and @cite_28 propose meta-learners that "learn to learn" via the loss gradient. In general, however, such model-retraining techniques are resource-intensive. In our empirical evaluation we found these methods to be slower and less accurate than LMNs. | {
"cite_N": [
"@cite_28",
"@cite_21",
"@cite_25",
"@cite_23"
],
"mid": [
"2753160622",
"1801199632",
"1899458394",
"2951775809"
],
"abstract": [
"Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.",
"We investigate an extension of continuous online learning in recurrent neural network language models. The model keeps a separate vector representation of the current unit of text being processed and adaptively adjusts it after each prediction. The initial experiments give promising results, indicating that the method is able to increase language modelling accuracy, while also decreasing the parameters needed to store the model along with the computation required at each step.",
"We present a Bayesian approach to adapting parameters of a well-trained context-dependent, deep-neural-network, hidden Markov model (CD-DNN-HMM) to improve automatic speech recognition performance. Given an abundance of DNN parameters but with only a limited amount of data, the effectiveness of the adapted DNN model can often be compromised. We formulate maximum a posteriori (MAP) adaptation of parameters of a specially designed CD-DNN-HMM with an augmented linear hidden network connected to the output tied states, or senones, and compare it to feature space MAP linear regression previously proposed. Experimental evidences on the 20,000-word open vocabulary Wall Street Journal task demonstrate the feasibility of the proposed framework. In supervised adaptation, the proposed MAP adaptation approach provides more than 10% relative error reduction and consistently outperforms the conventional transformation based methods. Furthermore, we present an initial attempt to generate hierarchical priors to improve adaptation efficiency and effectiveness with limited adaptation data by exploiting similarities among senones.",
"We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies."
]
} |
1707.01461 | 2774761636 | Augmenting a neural network with memory that can grow without growing the number of trained parameters is a recent powerful concept with many exciting applications. We propose a design of memory augmented neural networks (MANNs) called Labeled Memory Networks (LMNs) suited for tasks requiring online adaptation in classification models. LMNs organize the memory with classes as the primary key. The memory acts as a second boosted stage following a regular neural network, thereby allowing the memory and the primary network to play complementary roles. Unlike existing MANNs that write to memory for every instance and use LRU-based memory replacement, LMNs write only for instances with non-zero loss and use label-based memory replacement. We demonstrate significant accuracy gains on various tasks including word-modelling and few-shot learning. In this paper, we establish their potential in online adapting a batch-trained neural network to domain-relevant labeled data at deployment time. We show that LMNs are better than other MANNs designed for meta-learning. We also found them to be more accurate and faster than state-of-the-art methods of retuning model parameters for adapting to domain-specific labeled data. | Online learning techniques such as @cite_5 for learning kernel coefficients are relevant if we view the memory vectors @math as acting as the support vectors and the memory scalars @math as the associated dual variables. Our setup is a little different in that we employ a mix of batch and online learning. Our proposed scheme of memory updates and merge was inspired by the gradient updates in PEGASOS, and in the case of exactly one cell per label reduces to a specific form of budgeted-PEGASOS @cite_14. | {
"cite_N": [
"@cite_5",
"@cite_14"
],
"mid": [
"2142623206",
"2129379475"
],
"abstract": [
"We describe and analyze a simple and effective iterative algorithm for solving the optimization problem cast by Support Vector Machines (SVM). Our method alternates between stochastic gradient descent steps and projection steps. We prove that the number of iterations required to obtain a solution of accuracy ε is O(1/ε). In contrast, previous analyses of stochastic gradient descent methods require Ω(1/ε²) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of SVM. For a linear kernel, the total run-time of our method is O(d/(λε)), where d is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach can seamlessly be adapted to employ non-linear kernels while working solely on the primal objective function. We demonstrate the efficiency and applicability of our approach by conducting experiments on large text classification problems, comparing our solver to existing state-of-the-art SVM solvers. For example, it takes less than 5 seconds for our solver to converge when solving a text classification problem from Reuters Corpus Volume 1 (RCV1) with 800,000 training examples.",
"When equipped with kernel functions, online learning algorithms are susceptible to the \"curse of kernelization\" that causes unbounded growth in the model size. To address this issue, we present a family of budgeted online learning algorithms for multi-class classification which have constant space and time complexity per update. Our approach is based on the multi-class version of the popular Pegasos algorithm. It keeps the number of support vectors bounded during learning through budget maintenance. By treating the budget maintenance as a source of the gradient error, we prove that the gap between the budgeted Pegasos and the optimal solution directly depends on the average model degradation due to budget maintenance. To minimize the model degradation, we study greedy multi-class budget maintenance methods based on removal, projection, and merging of support vectors. Empirical results show that the proposed budgeted online algorithms achieve accuracy comparable to non-budget multi-class kernelized Pegasos while being extremely computationally efficient."
]
} |
1707.01357 | 2724233922 | Content-invariance in mapping codes learned by GAEs is a useful feature for various relation learning tasks. In this paper we show that the content-invariance of mapping codes for images of 2D and 3D rotated objects can be substantially improved by extending the standard GAE loss (symmetric reconstruction error) with a regularization term that penalizes the symmetric cross-reconstruction error. This error term involves reconstruction of pairs with mapping codes obtained from other pairs exhibiting similar transformations. Although this would principally require knowledge of the transformations exhibited by training pairs, our experiments show that a bootstrapping approach can sidestep this issue, and that the regularization term can effectively be used in an unsupervised setting. | @cite_5 introduced models for separating person and pose of face images. Bi-linear models are two-factor models whose outputs are linear in either factor when the other is held constant, a property which also applies to the GAE. @cite_9 proposed another variant of a bi-linear model in order to learn objects and their optical flow. Due to its similar architecture, the Gated Boltzmann Machine (GBM) can be seen as a direct predecessor of the GAE. GBMs were applied to image pairs for learning transformations, for modeling facial expression, and for disentangling facial pose and expression. The GAE was introduced by @cite_4 as a derivative of the GBM, as standard learning criteria became applicable through the development of Denoising Autoencoders. | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_4"
],
"mid": [
"2170653751",
"1989940876",
"2120190345"
],
"abstract": [
"Perceptual systems routinely separate “content” from “style,” classifying familiar words spoken in an unfamiliar accent, identifying a font or handwriting style across letters, or recognizing a familiar face or object seen under unfamiliar viewing conditions. Yet a general and tractable computational model of this ability to untangle the underlying factors of perceptual observations remains elusive (Hofstadter, 1985). Existing factor models (Mardia, Kent, & Bibby, 1979; Hinton & Zemel, 1994; Ghahramani, 1995; Bell & Sejnowski, 1995; Hinton, Dayan, Frey, & Neal, 1995; Dayan, Hinton, Neal, & Zemel, 1995; Hinton & Ghahramani, 1997) are either insufficiently rich to capture the complex interactions of perceptually meaningful factors such as phoneme and speaker accent or letter and font, or do not allow efficient learning algorithms. We present a general framework for learning to solve two-factor tasks using bilinear models, which provide sufficiently expressive representations of factor interactions but can nonetheless be fit to data using efficient algorithms based on the singular value decomposition and expectation-maximization. We report promising results on three different tasks in three different perceptual domains: spoken vowel classification with a benchmark multi-speaker database, extrapolation of fonts to unseen letters, and translation of faces to novel illuminants.",
"Previous work on unsupervised learning has shown that it is possible to learn Gabor-like feature representations, similar to those employed in the primary visual cortex, from the statistics of natural images. However, such representations are still not readily suited for object recognition or other high-level visual tasks because they can change drastically as the image changes to due object motion, variations in viewpoint, lighting, and other factors. In this paper, we describe how bilinear image models can be used to learn independent representations of the invariances, and their transformations, in natural image sequences. These models provide the foundation for learning higher-order feature representations that could serve as models of higher stages of processing in the cortex, in addition to having practical merit for computer vision tasks.",
"We describe a probabilistic model for learning rich, distributed representations of image transformations. The basic model is defined as a gated conditional random field that is trained to predict transformations of its inputs using a factorial set of latent variables. Inference in the model consists in extracting the transformation, given a pair of images, and can be performed exactly and efficiently. We show that, when trained on natural videos, the model develops domain specific motion features, in the form of fields of locally transformed edge filters. When trained on affine, or more general, transformations of still images, the model develops codes for these transformations, and can subsequently perform recognition tasks that are invariant under these transformations. It can also fantasize new transformations on previously unseen images. We describe several variations of the basic model and provide experimental results that demonstrate its applicability to a variety of tasks."
]
} |