Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict)
aid: 1505.05612
mid: 1488163396
In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7% of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: this http URL
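The visual Turing Test described in the abstract reduces to simple bookkeeping: judges see a mix of human and model answers, guess the source, and assign a 0/1/2 quality score. A minimal pure-Python sketch of that tally (the data and field names are hypothetical, not from the paper's release):

```python
def summarize_judgements(judgements):
    """judgements: list of dicts with keys 'source' ('human' or 'model'),
    'judged_as' ('human' or 'model'), and 'score' (0, 1, or 2)."""
    model = [j for j in judgements if j["source"] == "model"]
    # A model answer "passes" the Turing Test when the judge labels it human.
    passed = sum(1 for j in model if j["judged_as"] == "human")
    pass_rate = passed / len(model)
    avg_score = sum(j["score"] for j in model) / len(model)
    return pass_rate, avg_score

# Toy judgements for illustration only:
judgements = [
    {"source": "model", "judged_as": "human", "score": 2},
    {"source": "model", "judged_as": "model", "score": 1},
    {"source": "human", "judged_as": "human", "score": 2},
    {"source": "model", "judged_as": "human", "score": 2},
]
rate, score = summarize_judgements(judgements)
# rate = 2/3 and score = 5/3 on this toy data
```

The paper's reported numbers (64.7% pass rate, average score 1.454 vs. 1.918 for humans) are exactly these two statistics computed over the real judging sessions.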
There has been recent effort on the visual question answering task @cite_28 @cite_17 @cite_9 @cite_20 . However, most of these works use a pre-defined and restricted set of questions, some of which are generated from templates. In addition, our FM-IQA dataset is much larger than theirs (e.g., @cite_28 and @cite_9 contain only 2,591 and 1,449 images, respectively).
{ "cite_N": [ "@cite_28", "@cite_9", "@cite_20", "@cite_17" ], "mid": [ "1983927101", "2951619830", "1895641373", "2058556535" ], "abstract": [ "Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects.", "We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framework. 
Our approach can handle human questions of high complexity about realistic scenes and replies with range of answer like counts, object classes, instances and lists of them. The system is directly trained from question-answer pairs. We establish a first benchmark for this task that can be seen as a modern attempt at a visual turing test.", "This article proposes a multimedia analysis framework to process video and text jointly for understanding events and answering user queries. The framework produces a parse graph that represents the compositional structures of spatial information (objects and scenes), temporal information (actions and events), and causal information (causalities between events and fluents) in the video and text. The knowledge representation of the framework is based on a spatial-temporal-causal AND-OR graph (S T C-AOG), which jointly models possible hierarchical compositions of objects, scenes, and events as well as their interactions and mutual contexts, and specifies the prior probabilistic distribution of the parse graphs. The authors present a probabilistic generative model for joint parsing that captures the relations between the input video text, their corresponding parse graphs, and the joint parse graph. Based on the probabilistic model, the authors propose a joint parsing system consisting of three modules: video parsing, text parsing, and joint inference. Video parsing and text parsing produce two parse graphs from the input video and text, respectively. The joint inference module produces a joint parse graph by performing matching, deduction, and revision on the video and text parse graphs. 
The proposed framework has the following objectives: to provide deep semantic parsing of video and text that goes beyond the traditional bag-of-words approaches; to perform parsing and reasoning across the spatial, temporal, and causal dimensions based on the joint S T C-AOG representation; and to show that deep joint parsing facilitates subsequent applications such as generating narrative text descriptions and answering queries in the forms of who, what, when, where, and why. The authors empirically evaluated the system based on comparison against ground-truth as well as accuracy of query answering and obtained satisfactory results.", "Visual information pervades our environment. Vision is used to decide everything from what we want to eat at a restaurant and which bus route to take to whether our clothes match and how long until the milk expires. Individually, the inability to interpret such visual information is a nuisance for blind people who often have effective, if inefficient, work-arounds to overcome them. Collectively, however, they can make blind people less independent. Specialized technology addresses some problems in this space, but automatic approaches cannot yet answer the vast majority of visual questions that blind people may have. VizWiz addresses this shortcoming by using the Internet connections and cameras on existing smartphones to connect blind people and their questions to remote paid workers' answers. VizWiz is designed to have low latency and low cost, making it both competitive with expensive automatic solutions and much more versatile." ] }
There are some concurrent and independent works on this topic: @cite_3 @cite_6 @cite_16 . @cite_3 propose a large-scale dataset, also based on MS COCO, together with some simple baseline methods. Compared to them, we propose a stronger model for this task and evaluate our method using human judges. Our dataset also contains two languages, which can be useful for other tasks such as machine translation. Because we use a different set of annotators and different annotation requirements, our dataset and that of @cite_3 can complement each other, leading to interesting topics such as dataset transfer for visual question answering.
{ "cite_N": [ "@cite_16", "@cite_6", "@cite_3" ], "mid": [ "2949218037", "2952246170", "2950761309" ], "abstract": [ "This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.", "We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus.", "We propose the task of free-form and open-ended Visual Question Answering (VQA). 
Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL)." ] }
Both @cite_6 and @cite_16 use a model containing a single LSTM and a CNN. They concatenate the question and the answer (for @cite_16 the answer is a single word; @cite_6 also prefer single-word answers) and then feed them to the LSTM. In contrast, we use two separate LSTMs for questions and answers, in consideration of their different properties (e.g., grammar), while allowing the word embeddings to be shared. For the dataset, @cite_6 adopt the dataset proposed in @cite_9 , which is much smaller than our FM-IQA dataset. @cite_16 utilize the annotations in MS COCO and synthesize a dataset with four pre-defined types of questions (i.e., object, number, color, and location); they also synthesize single-word answers. Their dataset can also be complementary to ours.
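The design choice above (separate recurrent encoders for questions and answers, one shared embedding table) can be illustrated with a toy recurrence in place of real LSTMs. This is a sketch of the parameter-sharing pattern only, with made-up dimensions and a tanh recurrence standing in for LSTM cells:

```python
import math
import random

random.seed(0)
VOCAB = ["what", "color", "is", "the", "cat", "black", "<EOS>"]
DIM = 4

# One shared word-embedding table, used by BOTH encoders (as in mQA).
embed = {w: [random.uniform(-1, 1) for _ in range(DIM)] for w in VOCAB}

def make_rnn_params():
    # Stand-in for an LSTM's recurrent weights; the question and answer
    # encoders each get their OWN copy (these are not shared).
    return [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(DIM)]

question_rnn = make_rnn_params()  # models question grammar
answer_rnn = make_rnn_params()    # models answer grammar

def encode(words, params):
    # Toy recurrence h <- tanh(W h + x): both encoders read the SAME
    # embeddings but transform them with DIFFERENT weights.
    h = [0.0] * DIM
    for w in words:
        x = embed[w]
        h = [math.tanh(sum(params[i][j] * h[j] for j in range(DIM)) + x[i])
             for i in range(DIM)]
    return h

q = encode(["what", "color", "is", "the", "cat"], question_rnn)
a = encode(["black", "<EOS>"], answer_rnn)
```

The contrast with @cite_6 and @cite_16 is that they would run a single set of recurrent weights over the concatenated question-plus-answer sequence instead of keeping `question_rnn` and `answer_rnn` distinct.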
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_6" ], "mid": [ "2951619830", "2949218037", "2952246170" ], "abstract": [ "We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framework. Our approach can handle human questions of high complexity about realistic scenes and replies with range of answer like counts, object classes, instances and lists of them. The system is directly trained from question-answer pairs. We establish a first benchmark for this task that can be seen as a modern attempt at a visual turing test.", "This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.", "We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). 
Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus." ] }
aid: 1505.05272
mid: 1505909737
We give under weak assumptions a complete combinatorial characterization of identifiability for linear mixtures of finite alphabet sources, with unknown mixing weights and unknown source signals, but known alphabet. This is based on a detailed treatment of the case of a single linear mixture. Notably, our identifiability analysis applies also to the case of unknown number of sources. We provide sufficient and necessary conditions for identifiability and give a simple sufficient criterion together with an explicit construction to determine the weights and the source signals for deterministic data by taking advantage of the hierarchical structure within the possible mixture values. We show that the probability of identifiability is related to the distribution of a hitting time and converges exponentially fast to one when the underlying sources come from a discrete Markov process. Finally, we explore our theoretical results in a simulation study. This paper extends and clarifies the scope of scenarios for which blind source separation becomes meaningful.
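The identifiability question in the abstract can be probed by brute force for small instances: with a finite alphabet, a single linear mixture is recoverable only if distinct source tuples always produce distinct mixture values. The sketch below checks that injectivity for given weights; this is only a necessary condition under simplifying assumptions (weights assumed given, noiseless data), whereas the paper treats unknown weights and gives full combinatorial criteria:

```python
from itertools import product

def mixture_map(weights, alphabet):
    """Enumerate every source tuple over the alphabet together with the
    single-mixture value sum_i w_i * a_i that the tuple produces."""
    return {srcs: sum(w * a for w, a in zip(weights, srcs))
            for srcs in product(alphabet, repeat=len(weights))}

def injective_mixture(weights, alphabet, tol=1e-9):
    """True iff distinct source tuples give distinct mixture values,
    a necessary condition for recovering the sources from one mixture."""
    values = sorted(mixture_map(weights, alphabet).values())
    return all(b - a > tol for a, b in zip(values, values[1:]))

# Binary alphabet {0, 1}: weights (0.25, 0.75) separate all four source
# tuples, while equal weights (0.5, 0.5) collide on (1, 0) and (0, 1).
ok = injective_mixture([0.25, 0.75], [0, 1])    # True
bad = injective_mixture([0.5, 0.5], [0, 1])     # False
```

The hierarchical structure the abstract mentions shows up here too: with weights like (0.25, 0.75) the sorted mixture values partition into levels from which the individual source symbols can be read off.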
There are further variations of the BSS problem. Some of them are associated with Independent Component Analysis (ICA) (see e.g., @cite_15 ), which is based on the stochastic independence of the different sources (assumed to be random). ICA can be a powerful tool for (over)determined models ( @math ) @cite_15 , and there are approaches for underdetermined multiple linear mixture models ( @math ) as well @cite_21 . However, ICA is not applicable to single linear mixtures ( @math ), as the error terms of the individual sources sum to a single error term, so that stochastic independence of the sources becomes irrelevant.
{ "cite_N": [ "@cite_15", "@cite_21" ], "mid": [ "2099741732", "2166159048" ], "abstract": [ "Abstract The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within a polynomial time. The concept of ICA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution.", "Empirical results were obtained for the blind source separation of more sources than mixtures using a previously proposed framework for learning overcomplete representations. This technique assumes a linear mixing model with additive noise and involves two steps: (1) learning an overcomplete representation for the observed data and (2) inferring sources given a sparse prior on the coefficients. We demonstrate that three speech signals can be separated with good fidelity given only two mixtures of the three signals. Similar results were obtained with mixtures of two speech signals and one music signal." ] }
Also conceptually related is blind deconvolution (see e.g., @cite_33 @cite_19 ); however, the convolution model makes the analysis and the identifiability question substantially different @cite_26 .
{ "cite_N": [ "@cite_19", "@cite_26", "@cite_33" ], "mid": [ "", "2139850196", "2102329202" ], "abstract": [ "", "An algorithm for the identification of finite-impulse-response (FIR) system parameters from output measurements, for systems excited by discrete-alphabet inputs, is described. The approach taken is algebraic. It does not rely directly on the statistical properties of the measurements, but rather it essentially solves the nonlinear equations appearing in the problem by converting them to equivalent linear equations, using the discrete-alphabet property of the input signal. The proposed algorithm was tested by computer simulations and some of these simulations are illustrated. >", "This article addresses the problem of estimating a multivariate linear system from its output when the input is an unobservable sequence of random vectors with finite-alphabet distribution. By explicitly utilizing the finite-alphabet property, an estimation method is proposed under the traditional inverse filtering paradigm as a generalization of a univariate method that has been studied previously. Identifiability of multivariate systems by the proposed method is proved mathematically under very mild conditions that can be satisfied even if the input is nonstationary and has both cross-channel and serial statistical dependencies. Statistical super-efficiency in estimating both parametric and nonparametric systems is also established for an alphabet-based cost function." ] }
For the non-blind scenario, i.e., when @math in the model is known, @cite_31 considers identifiability in a probabilistic framework.
{ "cite_N": [ "@cite_31" ], "mid": [ "2063947013" ], "abstract": [ "We consider the problem of estimating a deterministic finite alphabet vector @math from underdetermined measurements @math , where @math is a given (random) @math matrix. Two new convex optimization methods are introduced for the recovery of finite alphabet signals via @math -norm minimization. The first method is based on regularization. In the second approach, the problem is formulated as the recovery of sparse signals after a suitable sparse transform. The regularization-based method is less complex than the transform-based one. When the alphabet size @math equals 2 and @math grows proportionally, the conditions under which the signal will be recovered with high probability are the same for the two methods. When @math , the behavior of the transform-based method is established. Experimental results support this theoretical result and show that the transform method outperforms the regularization-based one." ] }
aid: 1505.05254
mid: 2964193225
Video surveillance cameras generate most of recorded video, and there is far more recorded video than operators can watch. Much progress has recently been made using summarization of recorded video, but such techniques do not have much impact on live video surveillance. We assume a camera hierarchy where a Master camera observes the decision-critical region, and one or more Slave cameras observe regions where past activity is important for making the current decision. We propose that when people appear in the live Master camera, the Slave cameras will display their past activities, and the operator could use past information for real-time decision making. The basic units of our method are action tubes, representing objects and their trajectories over time. Our object-based method has advantages over frame based methods, as it can handle multiple people, multiple activities for each person, and can address re-identification uncertainty.
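The abstract's basic unit, the action tube, and its Master/Slave query pattern can be sketched as a small data structure: when a person appears in the live Master camera, retrieve that person's earlier tubes from the Slave cameras. All class and field names here are illustrative, not from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class ActionTube:
    # An action tube: one object (person) and its trajectory over time.
    person_id: str
    camera: str
    start: float  # seconds
    end: float
    boxes: list = field(default_factory=list)  # per-frame bounding boxes

def past_activities(tubes, person_id, now, slave_cameras):
    """When `person_id` appears in the live Master camera at time `now`,
    gather that person's earlier tubes from the Slave cameras so the
    operator can review them while making the current decision."""
    return [t for t in tubes
            if t.person_id == person_id
            and t.camera in slave_cameras
            and t.end <= now]

tubes = [
    ActionTube("p1", "slave-1", 10.0, 25.0),
    ActionTube("p1", "slave-2", 40.0, 55.0),
    ActionTube("p2", "slave-1", 12.0, 30.0),
]
history = past_activities(tubes, "p1", now=60.0,
                          slave_cameras={"slave-1", "slave-2"})
# history holds both of p1's earlier tubes
```

Because tubes are keyed by object rather than by frame, the same query naturally extends to multiple people and, with a ranked list of candidate `person_id`s, to re-identification uncertainty.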
Much work has been done on understanding surveillance video. Popular approaches include the classification of activity as normal or anomalous @cite_11 @cite_4 , or using activity recognition to transcribe surveillance video into words @cite_6 @cite_7 . High-level activity understanding is a very promising research direction, but current performance leaves room for improvement. Since the need for human inspection of video will remain for some time, many methods create visual summaries for faster viewing.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_6", "@cite_11" ], "mid": [ "2021659075", "2110933980", "1601567445", "2130349088" ], "abstract": [ "Real-time unusual event detection in video stream has been a difficult challenge due to the lack of sufficient training information, volatility of the definitions for both normality and abnormality, time constraints, and statistical limitation of the fitness of any parametric models. We propose a fully unsupervised dynamic sparse coding approach for detecting unusual events in videos based on online sparse re-constructibility of query signals from an atomically learned event dictionary, which forms a sparse coding bases. Based on an intuition that usual events in a video are more likely to be reconstructible from an event dictionary, whereas unusual events are not, our algorithm employs a principled convex optimization formulation that allows both a sparse reconstruction code, and an online dictionary to be jointly inferred and updated. Our algorithm is completely un-supervised, making no prior assumptions of what unusual events may look like and the settings of the cameras. The fact that the bases dictionary is updated in an online fashion as the algorithm observes more data, avoids any issues with concept drift. Experimental results on hours of real world surveillance video and several Youtube videos show that the proposed algorithm could reliably locate the unusual events in the video sequence, outperforming the current state-of-the-art methods.", "Humans use rich natural language to describe and communicate visual perceptions. In order to provide natural language descriptions for visual content, this paper combines two important ingredients. First, we generate a rich semantic representation of the visual content including e.g. object and activity labels. To predict the semantic representation we learn a CRF to model the relationships between different components of the visual input. 
And second, we propose to formulate the generation of natural language as a machine translation problem using the semantic representation as source language and the generated sentences as target language. For this we exploit the power of a parallel corpus of videos and textual descriptions and adapt statistical machine translation to translate between our two languages. We evaluate our video descriptions on the TACoS dataset, which contains video snippets aligned with sentence descriptions. Using automatic evaluation and human judgments we show significant improvements over several baseline approaches, motivated by prior work. Our translation approach also shows improvements over related work on an image description task.", "We propose a method for describing human activities from video images based on concept hierarchies of actions. Major difficulty in transforming video images into textual descriptions is how to bridge a semantic gap between them, which is also known as inverse Hollywood problem. In general, the concepts of events or actions of human can be classified by semantic primitives. By associating these concepts with the semantic features extracted from video images, appropriate syntactic components such as verbs, objects, etc. are determined and then translated into natural language sentences. We also demonstrate the performance of the proposed method by several experiments.", "Compared to other anomalous video event detection approaches that analyze object trajectories only, we propose a context-aware method to detect anomalies. By tracking all moving objects in the video, three different levels of spatiotemporal contexts are considered, i.e., point anomaly of a video object, sequential anomaly of an object trajectory, and co-occurrence anomaly of multiple video objects. A hierarchical data mining approach is proposed. At each level, frequency-based analysis is performed to automatically discover regular rules of normal events. 
Events deviating from these rules are identified as anomalies. The proposed method is computationally efficient and can infer complex rules. Experiments on real traffic video validate that the detected video anomalies are hazardous or illegal according to traffic regulations." ] }
One approach to visual summarization is the generation of a storyboard by selecting key frames @cite_8 @cite_19 . Another approach is adaptive fast-forward @cite_5 , dropping frames at different rates depending on how interesting the video is. Video synopsis @cite_20 @cite_9 @cite_21 @cite_12 shifts activities in time so that as many activities as possible are presented simultaneously, thereby showing all activities of a video in a much shorter video. See Fig .
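The time-shifting idea behind video synopsis can be sketched as a greedy packing problem: give each activity (tube) the earliest start offset such that no more than a fixed number play at once. This is a simplified stand-in for the cited systems, which also penalize spatial collisions and use more careful optimization; durations are integer seconds for simplicity:

```python
def assign_offsets(durations, capacity):
    """Greedy video-synopsis sketch: shift each activity in time so that
    at most `capacity` activities play simultaneously, keeping the
    synopsis short."""
    active = []   # (start, end) intervals of already-placed activities
    offsets = []
    for d in durations:
        t = 0
        # Advance in 1-second steps until the concurrency limit is met.
        while sum(1 for s, e in active if s < t + d and t < e) >= capacity:
            t += 1
        active.append((t, t + d))
        offsets.append(t)
    return offsets

# Four 10-second activities, at most two on screen at a time: the
# synopsis lasts 20 seconds instead of the original 40.
offs = assign_offsets([10, 10, 10, 10], capacity=2)
# offs == [0, 0, 10, 10]
```

The returned offsets double as the index the synopsis papers describe: each shifted activity still points back to its original time in the source video.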
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_21", "@cite_19", "@cite_5", "@cite_12", "@cite_20" ], "mid": [ "2144139464", "2126802797", "2115060048", "2103908291", "2079643064", "2145037218", "2163527813" ], "abstract": [ "The authors propose a novel technique for video summarization based on singular value decomposition (SVD). For the input video sequence, we create a feature-frame matrix A, and perform the SVD on it. From this SVD, we are able, to not only derive the refined feature space to better cluster visually similar frames, but also define a metric to measure the amount of visual content contained in each frame cluster using its degree of visual changes. Then, in the refined feature space, we find the most static frame cluster, define it as the content unit, and use the context value computed from it as the threshold to cluster the rest of the frames. Based on this clustering result, either the optimal set of keyframes, or a summarized motion video with the user specified time length can be generated to support different user requirements for video browsing and content overview. Our approach ensures that the summarized video representation contains little redundancy, and gives equal attention to the same amount of contents.", "The world is covered with millions of Webcams, many transmit everything in their field of view over the Internet 24 hours a day. A Web search finds public webcams in airports, intersections, classrooms, parks, shops, ski resorts, and more. Even more private surveillance cameras cover many private and public facilities. Webcams are an endless resource, but most of the video broadcast will be of little interest due to lack of activity. We propose to generate a short video that will be a synopsis of an endless video streams, generated by webcams or surveillance cameras. We would like to address queries like \"I would like to watch in one minute the highlights of this camera broadcast during the past day\". 
The process includes two major phases: (i) An online conversion of the video stream into a database of objects and activities (rather than frames), (ii) A response phase, generating the video synopsis as a response to the user's query. To include maximum information in a short synopsis we simultaneously show activities that may have happened at different times. The synopsis video can also be used as an index into the original video stream.", "The amount of captured video is growing with the increased numbers of video cameras, especially the increase of millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval is time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing of such a video. It provides a short video representation, while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by simultaneously showing multiple activities, even when they originally occurred at different times. The synopsis video is also an index into the original video by pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of an endless video streams, as generated by Webcams and by surveillance cameras. It can address queries like \"show in one minute the synopsis of this camera broadcast during the past day''. This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames). (ii) A response phase, generating the video synopsis as a response to the user's query.", "Given the enormous growth in user-generated videos, it is becoming increasingly important to be able to navigate them efficiently. As these videos are generally of poor quality, summarization methods designed for well-produced videos do not generalize to them. 
To address this challenge, we propose to use web-images as a prior to facilitate summarization of user-generated videos. Our main intuition is that people tend to take pictures of objects to capture them in a maximally informative way. Such images could therefore be used as prior information to summarize videos containing a similar set of objects. In this work, we apply our novel insight to develop a summarization algorithm that uses the web-image based prior information in an unsupervised manner. Moreover, to automatically evaluate summarization algorithms on a large scale, we propose a framework that relies on multiple summaries obtained through crowdsourcing. We demonstrate the effectiveness of our evaluation framework by comparing its performance to that of multiple human evaluators. Finally, we present results for our framework tested on hundreds of user-generated videos.", "We derive a statistical graphical model of video scenes with multiple, possibly occluded objects that can be efficiently used for tasks related to video search, browsing and retrieval. The model is trained on query (target) clip selected by the user. Shot retrieval process is based on the likelihood of a video frame under generative model. Instead of using a combination of weighted Euclidean distances as a shot similarity measure, the likelihood model automatically separates and balances various causes of variability in video, including occlusion, appearance change and motion. Thus, we overcome tedious and complex user interventions required in previous studies. We use the model in the adaptive video forward application that adapts video playback speed to the likelihood of the data. The similarity measure of each candidate clip to the target clip defines the playback speed. Given a query, the video is played at a higher speed as long as video content has low likelihood, and when frames similar to the query clip start to come in, the video playback rate drops. 
Set of experiments on typical home videos demonstrate performance, easiness and utility of our application.", "Explosive growth of surveillance video data presents formidable challenges to its browsing, retrieval and storage. Video synopsis, an innovation proposed by Peleg and his colleagues, is aimed for fast browsing by shortening the video into a synopsis while keeping activities in video captured by a camera. However, the current techniques are offline methods requiring that all the video data be ready for the processing, and are expensive in time and space. In this paper, we propose an online and efficient solution, and its supporting algorithms to overcome the problems. The method adopts an online content-aware approach in a step-wise manner, hence applicable to endless video, with less computational cost. Moreover, we propose a novel tracking method, called sticky tracking, to achieve high-quality visualization. The system can achieve a faster-than-real-time speed with a multi-core CPU implementation. The advantages are demonstrated by extensive experiments with a wide variety of videos. The proposed solution and algorithms could be integrated with surveillance cameras, and impact the way that surveillance videos are recorded.", "The power of video over still images is the ability to represent dynamic activities. But video browsing and retrieval are inconvenient due to inherent spatio-temporal redundancies, where some time intervals may have no activity, or have activities that occur in a small image region. Video synopsis aims to provide a compact video representation, while preserving the essential activities of the original video. We present dynamic video synopsis, where most of the activity in the video is condensed by simultaneously showing several actions, even when they originally occurred at different times. For example, we can create a \"stroboscopic movie\", where multiple dynamic instances of a moving object are played simultaneously.
This is an extension of the still stroboscopic picture. Previous approaches for video abstraction addressed mostly the temporal redundancy by selecting representative key-frames or time intervals. In dynamic video synopsis the activity is shifted into a significantly shorter period, in which the activity is much denser. Video examples can be found online in http: www.vision.huji.ac.il synopsis" ] }
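The temporal-shift idea behind video synopsis, described in the record above, can be sketched in a few lines. This is a hypothetical toy version (not code from any of the cited papers): each activity tube is reduced to a bare duration, spatial collisions are ignored, and a greedy earliest-fit pass shifts every tube to the first time slot where fewer than `capacity` tubes would overlap it.

```python
def schedule_tubes(durations, capacity):
    """Greedy earliest-fit temporal shifting of activity tubes.

    Each tube is represented only by its duration; a tube is shifted to
    the earliest start time t such that fewer than `capacity` already
    scheduled tubes intersect [t, t + d). Spatial collisions, which real
    video synopsis must also resolve, are ignored in this toy version.
    """
    scheduled = []  # list of (start, end) intervals in synopsis time
    for d in sorted(durations, reverse=True):  # place longest tubes first
        t = 0
        while sum(1 for s, e in scheduled if s < t + d and e > t) >= capacity:
            # jump to the next moment an already scheduled tube ends
            t = min(e for s, e in scheduled if e > t)
        scheduled.append((t, t + d))
    return scheduled
```

For instance, three tubes of lengths 3, 2 and 1 with two tubes allowed on screen condense into a synopsis of length 3, instead of the 6 time units of sequential playback.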
1505.05254
2964193225
Video surveillance cameras generate most recorded video, and there is far more recorded video than operators can watch. Much progress has recently been made using summarization of recorded video, but such techniques do not have much impact on live video surveillance. We assume a camera hierarchy where a Master camera observes the decision-critical region, and one or more Slave cameras observe regions where past activity is important for making the current decision. We propose that when people appear in the live Master camera, the Slave cameras will display their past activities, and the operator could use past information for real-time decision making. The basic units of our method are action tubes, representing objects and their trajectories over time. Our object-based method has advantages over frame-based methods, as it can handle multiple people, multiple activities for each person, and can address re-identification uncertainty.
Representation of the video from non-overlapping cameras has received little attention; a notable exception is @cite_14 , which projects multiple video cameras onto a 3D model of the environment. But such a 3D model is not generally available. Another interesting work is @cite_15 , whose authors recognized the importance of using objects to highlight relationships between video streams from multiple cameras. Their work, however, concentrated on the extraction and indexing of objects rather than on visual representation.
{ "cite_N": [ "@cite_15", "@cite_14" ], "mid": [ "2159415137", "1953967342" ], "abstract": [ "An automatic object tracking and video summarization method for multicamera systems with a large number of non-overlapping field-of-view cameras is explained. In this system, video sequences are stored for each object as opposed to storing a sequence for each camera. Objectbased representation enables annotation of video segments, and extraction of content semantics for further analysis and summarization. Objects are tracked at each camera by background subtraction and mean-shift analysis. Then the correspondence of objects between different cameras is established by using a Bayesian Belief Network. This framework empowers the user to get a concise response to queries such as “which locations did an object visited on Monday and what did he do there?”", "Videos and 3D models have traditionally existed in separate worlds and as distinct representations. Although texture maps for 3D models have been traditionally derived from multiple still images, real-time mapping of live videos as textures on 3D models has not been attempted. This paper presents a system for rendering multiple live videos in real-time over a 3D model as a novel and demonstrative application of the power of commodity graphics hardware. The system, metaphorically called the Video Flashlight system, \"illuminates\" a static 3D model with live video textures from static and moving cameras in the same way as a flashlight (torch) illuminates an environment. The Video Flashlight system is also an augmented reality solution for security and monitoring systems that deploy numerous cameras to monitor a large scale campus or an urban site. Current video monitoring systems are highly limited in providing global awareness since they typically display numerous camera videos on a grid of 2D displays. 
In contrast, the Video Flashlight system exploits the real-time rendering capabilities of current graphics hardware and renders live videos from various parts of an environment co-registered with the model. The user gets a global view of the model and is also able to visualize the dynamic videos simultaneously in the context of the model. In particular, the location of pixels and objects seen in the videos are precisely overlaid on the model while the user navigates through the model. The paper presents an overview of the system, details of the real-time rendering and demonstrates the efficacy of the augmented reality application." ] }
1505.05254
2964193225
Video surveillance cameras generate most recorded video, and there is far more recorded video than operators can watch. Much progress has recently been made using summarization of recorded video, but such techniques do not have much impact on live video surveillance. We assume a camera hierarchy where a Master camera observes the decision-critical region, and one or more Slave cameras observe regions where past activity is important for making the current decision. We propose that when people appear in the live Master camera, the Slave cameras will display their past activities, and the operator could use past information for real-time decision making. The basic units of our method are action tubes, representing objects and their trajectories over time. Our object-based method has advantages over frame-based methods, as it can handle multiple people, multiple activities for each person, and can address re-identification uncertainty.
A somewhat related approach is Multi-Video Browsing and Summarization @cite_10 , which attempts to synchronize video streams by shifting frames in time so that visually similar frames are observed in all videos at the same time. This scheme measures similarity with a set of trained visual similarity descriptors among frames, in contrast to our work, which is object based.
{ "cite_N": [ "@cite_10" ], "mid": [ "2062569880" ], "abstract": [ "We propose a method for browsing multiple videos with a common theme, such as the result of a search query on a video sharing website, or videos of an event covered by multiple cameras. Given the collection of videos we first align each video with all others. This pairwise video alignment forms the basis of a novel browsing interface, termed the Browsing Companion. It is used to play a primary video and, in addition as thumbnails, other video clips that are temporally synchronized with it. The user can, at any time, click on one of the thumbnails to make it the primary. We also show that video alignment can be used for other applications such as automatic highlight detection and multivideo summarization." ] }
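The pairwise video alignment described in the record above can be illustrated with a toy, brute-force sketch (my own simplification, not the algorithm of @cite_10): each frame is reduced to a single visual descriptor, and we search for the integer time offset that minimizes the mean squared descriptor distance over the overlapping frames.

```python
def best_offset(a, b, max_shift):
    """Integer shift k of descriptor sequence b relative to a that
    minimizes the mean squared distance over the overlap
    (frame i of a is compared with frame i - k of b)."""
    best_k, best_d = 0, float("inf")
    for k in range(-max_shift, max_shift + 1):
        lo, hi = max(0, k), min(len(a), len(b) + k)
        if hi <= lo:
            continue  # no overlapping frames at this shift
        d = sum((a[i] - b[i - k]) ** 2 for i in range(lo, hi)) / (hi - lo)
        if d < best_d:
            best_k, best_d = k, d
    return best_k
```

With real videos the scalar descriptors would be replaced by the trained visual similarity descriptors the paper mentions, but the structure of the offset search is the same.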
1505.05481
275668722
A general method of coding over expansion is proposed, which allows one to reduce the highly non-trivial problems of coding over analog channels and compressing analog sources to a set of much simpler subproblems, coding over discrete channels and compressing discrete sources. More specifically, the focus of this paper is on the additive exponential noise (AEN) channel, and lossy compression of exponential sources. Taking advantage of the essential decomposable property of these channels (sources), the proposed expansion method allows for mapping of these problems to coding over parallel channels (respectively, sources), where each level is modeled as an independent coding problem over discrete alphabets. Any feasible solution to the resulting optimization problem after expansion corresponds to an achievable scheme of the original problem. Utilizing this mapping, even for the cases where the optimal solutions are difficult to characterize, it is shown that the expansion coding scheme still performs well with appropriate choices of parameters. More specifically, theoretical analysis and numerical results reveal that expansion coding achieves the capacity of the AEN channel in the high SNR regime. It is also shown that for lossy compression, the achievable rate-distortion pair of expansion coding approaches the Shannon limit in the low distortion region. Remarkably, by using capacity-achieving codes with low encoding and decoding complexity that are originally designed for discrete alphabets, for instance polar codes, the proposed expansion coding scheme allows for designing low-complexity analog channel and source codes.
Multilevel coding is a general coding method designed for analog noise channels with a flavor of expansion @cite_17 . In particular, a lattice partition chain @math is utilized to represent the channel input, and, together with a shaping technique, the reconstructed codeword is transmitted over the channel. It has been shown that optimal lattices achieving the Shannon limit exist. However, the encoding and decoding complexity for such codes is high, in general. In the sense of representing the channel input, our scheme coincides with multilevel coding by choosing @math , @math , for some @math , where coding at each level is over a @math -ary finite field (see Fig. ). The difference in the proposed method is that, besides representing the channel input in this way, we also "expand" the channel noise, so that the coding problem at each level is better suited to existing discrete coding schemes with moderate coding complexity. Moreover, by adapting the underlying codes to channel-dependent variables, such as carries, the Shannon limit is shown to be achievable by expansion coding with a moderate number of expanded levels.
{ "cite_N": [ "@cite_17" ], "mid": [ "2144099979" ], "abstract": [ "A simple sphere bound gives the best possible tradeoff between the volume per point of an infinite array L and its error probability on an additive white Gaussian noise (AWGN) channel. It is shown that the sphere bound can be approached by a large class of coset codes or multilevel coset codes with multistage decoding, including certain binary lattices. These codes have structure of the kind that has been found to be useful in practice. Capacity curves and design guidance for practical codes are given. Exponential error bounds for coset codes are developed, generalizing Poltyrev's (1994) bounds for lattices. These results are based on the channel coding theorems of information theory, rather than the Minkowski-Hlawka theorem of lattice theory." ] }
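The decomposable property that expansion coding relies on can be checked numerically. For Z ~ Exp(λ), the memoryless property gives a closed form for the bit of Z at binary level l (weight 2^l): it is Bernoulli with P(bit = 1) = 1/(1 + exp(λ·2^l)). The sketch below is my own illustration (not code from the paper) comparing this formula with a Monte Carlo estimate.

```python
import math
import random

def level_prob(lam, level):
    """Closed-form P(bit at the given binary level of Exp(lam) is 1):
    1 / (1 + exp(lam * 2**level))."""
    return 1.0 / (1.0 + math.exp(lam * 2 ** level))

def empirical_level_prob(lam, level, n=200_000, seed=0):
    """Monte Carlo estimate of the same probability from n samples."""
    rng = random.Random(seed)
    weight = 2 ** level
    ones = sum(int(rng.expovariate(lam) / weight) & 1 for _ in range(n))
    return ones / n
```

Each binary level can then be treated as an independent discrete (Bernoulli-noise) coding problem, which is exactly the mapping the expansion method exploits.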
1505.05481
275668722
A general method of coding over expansion is proposed, which allows one to reduce the highly non-trivial problems of coding over analog channels and compressing analog sources to a set of much simpler subproblems, coding over discrete channels and compressing discrete sources. More specifically, the focus of this paper is on the additive exponential noise (AEN) channel, and lossy compression of exponential sources. Taking advantage of the essential decomposable property of these channels (sources), the proposed expansion method allows for mapping of these problems to coding over parallel channels (respectively, sources), where each level is modeled as an independent coding problem over discrete alphabets. Any feasible solution to the resulting optimization problem after expansion corresponds to an achievable scheme of the original problem. Utilizing this mapping, even for the cases where the optimal solutions are difficult to characterize, it is shown that the expansion coding scheme still performs well with appropriate choices of parameters. More specifically, theoretical analysis and numerical results reveal that expansion coding achieves the capacity of the AEN channel in the high SNR regime. It is also shown that for lossy compression, the achievable rate-distortion pair of expansion coding approaches the Shannon limit in the low distortion region. Remarkably, by using capacity-achieving codes with low encoding and decoding complexity that are originally designed for discrete alphabets, for instance polar codes, the proposed expansion coding scheme allows for designing low-complexity analog channel and source codes.
There have been many attempts to utilize discrete codes for analog channels (beyond simple modulation methods). For example, after the introduction of polar codes, considerable attention has been directed towards utilizing their low-complexity property for analog channel coding. A very straightforward approach is to use the central limit theorem, which says that certain combinations of i.i.d. discrete random variables converge to a Gaussian distribution. As reported in @cite_19 and @cite_13 , the capacity of the AWGN channel can be achieved by coding over a large number of BSCs; however, the convergence rate is linear, which limits its application in practice. To this end, @cite_20 proposes a MAC-based scheme to improve the convergence rate to exponential, at the expense of a much larger field size. A newly published result in @cite_14 attempts to combine polar codes with multilevel coding; however, many aspects of this optimization of polar-coded modulation remain open. Along this direction of research, we also try to utilize capacity-achieving discrete codes to approximately achieve the capacity of analog channels.
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_13", "@cite_20" ], "mid": [ "1751486617", "", "2142252806", "2121108749" ], "abstract": [ "This paper presents an analysis of spinal codes, a class of rateless codes proposed recently. We prove that spinal codes achieve Shannon capacity for the binary symmetric channel (BSC) and the additive white Gaussian noise (AWGN) channel with an efficient polynomial-time encoder and decoder. They are the first rateless codes with proofs of these properties for BSC and AWGN. The key idea in the spinal code is the sequential application of a hash function over the message bits. The sequential structure of the code turns out to be crucial for efficient decoding. Moreover, counter to the wisdom of having an expander structure in good codes, we show that the spinal code, despite its sequential structure, achieves capacity. The pseudo-randomness provided by a hash function suffices for this purpose. Our proof introduces a variant of Gallager's result characterizing the error exponent of random codes for any memoryless channel. We present a novel application of these error-exponent results within the framework of an efficient sequential code. The application of a hash function over the message bits provides a methodical and effective way to de-randomize Shannon's random codebook construction.", "", "This paper investigates polar coding schemes achieving capacity for the AWGN channel. The approaches using a multiple access channel with a large number of binary-input users and a single-user channel with a large prime-cardinality input are compared with respect to complexity attributes. 
The problem of finding discrete approximations to the Gaussian input is then investigated, and it is shown that a quantile quantizer achieves a gap to capacity which decreases like 1/q (where q is the number of constellation points), improving on the 1/log(q) decay achieved with a binomial (central limit theorem) quantizer.", "In this paper, polar codes for the m-user multiple access channel (MAC) with binary inputs are constructed. It is shown that Arikan's polarization technique applied individually to each user transforms independent uses of an m-user binary input MAC into successive uses of extremal MACs. This transformation has a number of desirable properties: 1) the “uniform sum-rate” of the original MAC is preserved, 2) the extremal MACs have uniform rate regions that are not only polymatroids but matroids, and thus, 3) their uniform sum-rate can be reached by each user transmitting either uncoded or fixed bits; in this sense, they are easy to communicate over. A polar code can then be constructed with an encoding and decoding complexity of O(n log n) (where n is the block length), a block error probability of o(exp(-n^(1/2 - e))), and capable of achieving the uniform sum-rate of any binary input MAC with arbitrarily many users. Applications of this polar code construction to channels with a finite field input alphabet and to the additive white Gaussian noise channel are also discussed." ] }
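The central-limit-theorem route mentioned in the record above — approximating a Gaussian channel input by a sum of i.i.d. fair bits — can be visualized with a small self-contained check (my own sketch, not tied to any cited construction): the distance between the CDF of a standardized Binomial(n, 1/2), i.e. a sum of n fair bits, and the standard normal CDF shrinks as n grows.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def binomial_gaussian_gap(n):
    """Max gap between the CDF of a standardized sum of n fair bits
    (Binomial(n, 1/2)) and the standard normal CDF, measured at the
    continuity-corrected midpoints k + 0.5."""
    mu, sigma = n / 2.0, math.sqrt(n) / 2.0
    cdf, gap = 0.0, 0.0
    for k in range(n + 1):
        cdf += math.comb(n, k) / 2.0 ** n
        gap = max(gap, abs(cdf - normal_cdf((k + 0.5 - mu) / sigma)))
    return gap
```

The gap decays only polynomially in the number of bits, which is one way to see why the plain CLT approach needs so many component channels in practice.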
1505.04891
2950735187
Word embedding, which refers to low-dimensional dense vector representations of natural words, has demonstrated its power in many natural language processing tasks. However, it may suffer from the inaccurate and incomplete information contained in the free-text corpus used as training data. To tackle this challenge, there have been quite a few works that leverage knowledge graphs as an additional information source to improve the quality of word embedding. Although these works have achieved certain success, they have neglected some important facts about knowledge graphs: (i) many relationships in knowledge graphs are many-to-one, or even many-to-many, rather than simply one-to-one; (ii) most head entities and tail entities in knowledge graphs come from very different semantic spaces. To address these issues, in this paper, we propose a new algorithm named ProjectNet. ProjectNet models the relationships between head and tail entities after transforming them with different low-rank projection matrices. The low-rank projection can allow non-one-to-one relationships between entities, while different projection matrices for head and tail entities allow them to originate in different semantic spaces. The experimental results demonstrate that ProjectNet yields more accurate word embedding than previous works, and thus leads to clear improvements in various natural language processing tasks.
Word embeddings (a.k.a. distributed word representations) are usually trained with neural networks by maximizing the likelihood of a text corpus. Based on several pioneering efforts @cite_12 @cite_15 @cite_18 , research in this field has grown rapidly in recent years @cite_7 @cite_1 @cite_5 @cite_11 @cite_6 . Among them, @cite_1 @cite_5 has drawn quite a lot of attention from the community due to its simplicity and effectiveness. An interesting result is that the word embedding vectors it produces can reflect human knowledge via simple arithmetic operations, e.g., @math .
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_1", "@cite_6", "@cite_5", "@cite_15", "@cite_12", "@cite_11" ], "mid": [ "2158139315", "2164019165", "1614298861", "2949679234", "2950133940", "2117130368", "2132339004", "2250539671" ], "abstract": [ "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http: metaoptimize.com projects wordreprs", "Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are built with only local context and one representation per word. This is problematic because words are often polysemous and global context can also provide useful information for learning word meanings. We present a new neural network architecture which 1) learns word embeddings that better capture the semantics of words by incorporating both local and global document context, and 2) accounts for homonymy and polysemy by learning multiple embeddings per word. We introduce a new dataset with human judgments on pairs of words in sentential context, and evaluate our model on it, showing that our model outperforms competitive baselines and other neural language models.", "", "This paper presents a scalable method for integrating compositional morphological representations into a vector-based probabilistic language model. 
Our approach is evaluated in the context of log-bilinear language models, rendered suitably efficient for implementation inside a machine translation decoder by factoring the vocabulary. We perform both intrinsic and extrinsic evaluations, presenting results on a range of languages which demonstrate that our model learns morphological representations that both perform well on word similarity tasks and lead to substantial reductions in perplexity. When used for translation into morphologically rich languages with large vocabularies, our models obtain improvements of up to 1.2 BLEU points relative to a baseline system using back-off n-gram models.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. 
The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art performance.", "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge.
We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition." ] }
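The arithmetic regularity mentioned in the record above (the canonical example being vec("king") - vec("man") + vec("woman") ≈ vec("queen")) can be demonstrated with hand-built toy vectors. The embeddings below are fabricated for illustration only, not learned by any of the cited models.

```python
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) *
                  math.sqrt(sum(x * x for x in v)))

def analogy(emb, a, b, c):
    """Vocabulary word closest (by cosine) to vec(b) - vec(a) + vec(c),
    excluding the three query words themselves."""
    target = [emb[b][i] - emb[a][i] + emb[c][i] for i in range(len(emb[a]))]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

# Fabricated 2-d embeddings: axis 0 ~ "royalty", axis 1 ~ "gender".
toy = {
    "king":  [1.0,  1.0],
    "queen": [1.0, -1.0],
    "man":   [0.0,  1.0],
    "woman": [0.0, -1.0],
    "apple": [-1.0, 0.0],
}
```

Excluding the query words from the candidate set mirrors how analogy benchmarks are usually evaluated, since the nearest neighbor of the target vector is often one of the inputs themselves.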
1505.04560
2950037525
The availability of an overwhelmingly large amount of bibliographic information including citation and co-authorship data makes it imperative to have a systematic approach that will enable an author to organize her own personal academic network profitably. An effective method could be to have one's co-authorship network arranged into a set of "circles", which has been a recent practice for organizing relationships (e.g., friendship) in many online social networks. In this paper, we propose an unsupervised approach to automatically detect circles in an ego network such that each circle represents a densely knit community of researchers. Our model is an unsupervised method which combines a variety of node features and node similarity measures. The model is built from a rich co-authorship network data of more than 8 hundred thousand authors. In the first level of evaluation, our model achieves 13.33% improvement in terms of overlapping modularity compared to the best among four state-of-the-art community detection methods. Further, we conduct a task-based evaluation -- two basic frameworks for collaboration prediction are considered with the circle information (obtained from our model) included in the feature set. Experimental results show that including the circle information detected by our model improves the prediction performance by 9.87% and 15.25% on average in terms of AUC (Area under the ROC) and Prec@20 (Precision at Top 20) respectively compared to the case where the circle information is not present.
Research exploring ego structure in co-authorship networks. One of the most interesting yet curiously understudied aspects is the analysis of the structural properties of the ego-alter interactions in co-authorship networks. @cite_20 found that the productivity of an author is associated with degree centrality, confirming that scientific publishing is related to the extent of collaboration; Börner @cite_26 presented several network measures that investigated the changing impact of author-centric networks. Yan and Ding @cite_21 analyzed the Library and Information Science co-authorship network in relation to the impact of its researchers, finding important correlations. Other work extensively studied the relationship between scientific impact and co-authorship patterns, discovering significant correlations between network indicators (density and ego-betweenness) and performance indicators such as the g-index @cite_1 and citation counts @cite_2 . @cite_15 attempted to predict h-index evolution through ego networks, observing that the h-index increases when one coauthors articles with authors who already have a high h-index.
{ "cite_N": [ "@cite_26", "@cite_21", "@cite_1", "@cite_2", "@cite_15", "@cite_20" ], "mid": [ "1608828583", "1972830890", "2049086518", "", "2024296872", "" ], "abstract": [ "This article introduces a suite of approaches and measures to study the impact of co-authorship teams based on the number of publications and their citations on a local and global scale. In particular, we present a novel weighted graph representation that encodes coupled author-paper networks as a weighted co-authorship graph. This weighted graph representation is applied to a dataset that captures the emergence of a new field of science and comprises 614 articles published by 1036 unique authors between 1974 and 2004. To characterize the properties and evolution of this field, we first use four different measures of centrality to identify the impact of authors. A global statistical analysis is performed to characterize the distribution of paper production and paper citations and its correlation with the co-authorship team size. The size of co-authorship clusters over time is examined. Finally, a novel local, author-centered measure based on entropy is applied to determine the global evolution of the field and the identification of the contribution of a single author's impact across all of its co-authorship relations. A visualization of the growth of the weighted co-author network, and the results obtained from the statistical analysis indicate a drift toward a more cooperative, global collaboration process as the main drive in the production of scientific knowledge. © 2005 Wiley Periodicals, Inc. Complexity 10: 57–67, 2005", "Many studies on coauthorship networks focus on network topology and network statistical mechanics. This article takes a different approach by studying micro-level network properties with the aim of applying centrality measures to impact analysis. 
Using coauthorship data from 16 journals in the field of library and information science (LIS) with a time span of 20 years (1988–2007), we construct an evolving coauthorship network and calculate four centrality measures (closeness centrality, betweenness centrality, degree centrality, and PageRank) for authors in this network. We find that the four centrality measures are significantly correlated with citation counts. We also discuss the usability of centrality measures in author ranking and suggest that centrality measures can be useful indicators for impact analysis. © 2009 Wiley Periodicals, Inc.", "In this study, we propose and validate social networks based theoretical model for exploring scholars' collaboration (co-authorship) network properties associated with their citation-based research performance (i.e., g-index). Using structural holes theory, we focus on how a scholar's egocentric network properties of density, efficiency and constraint within the network associate with their scholarly performance. For our analysis, we use publication data of high impact factor journals in the field of ''Information Science & Library Science'' between 2000 and 2009, extracted from Scopus. The resulting database contained 4837 publications reflecting the contributions of 8069 authors. Results from our data analysis suggest that research performance of scholars' is significantly correlated with scholars' ego-network measures. In particular, scholars with more co-authors and those who exhibit higher levels of betweenness centrality (i.e., the extent to which a co-author is between another pair of co-authors) perform better in terms of research (i.e., higher g-index). 
Furthermore, scholars with efficient collaboration networks who maintain a strong co-authorship relationship with one primary co-author within a group of linked co-authors (i.e., co-authors that have joint publications) perform better than those researchers with many relationships to the same group of linked co-authors.", "", "The objective of this work was to test the relationship between characteristics of an author's network of coauthors to identify which enhance the h-index. We randomly selected a sample of 238 authors from the Web of Science, calculated their h-index as well as the h-index of all co-authors from their h-index articles, and calculated an adjacency matrix where the relation between co-authors is the number of articles they published together. Our model was highly predictive of the variability in the h-index (R 2 = 0.69). Most of the variance was explained by number of co-authors. Other significant variables were those associated with highly productive co-authors. Contrary to our hypothesis, network structure as measured by components was not predictive. This analysis suggests that the highest h-index will be achieved by working with many co-authors, at least some with high h-indexes themselves. Little improvement in h-index is to be gained by structuring a co-author network to maintain separate research communities.", "" ] }
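The ego-network measures discussed above (degree centrality, the ego's circle of co-authors and the ties among them) can be sketched in a few lines. The toy adjacency data and author names below are invented for illustration; real studies build the network from bibliographic databases:

```python
from collections import defaultdict

# Tiny co-authorship network as an adjacency map: author -> set of co-authors.
# Edges and author names are invented for illustration only.
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def degree_centrality(adj):
    """Degree normalized by the maximum possible degree (n - 1)."""
    n = len(adj)
    return {a: len(nbrs) / (n - 1) for a, nbrs in adj.items()}

def ego_circle(adj, ego):
    """The ego network: the ego, its co-authors (alters), and ties among them."""
    members = {ego} | adj[ego]
    return {u: adj[u] & members for u in members}

centrality = degree_centrality(adj)   # extent of collaboration per author
ego_C = ego_circle(adj, "C")          # author C's densely knit circle
```

Here author C, who bridges the two triangles, gets the highest degree centrality, the kind of ego-alter structure the correlational studies above quantify.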
1505.04803
2071711566
We present a video summarization approach for egocentric or "wearable" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video--such as the nearness to hands, gaze, and frequency of occurrence--and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.
Video summarization can also take the form of a single montage of still images. Existing methods take a background reference frame and project in foreground regions @cite_0 , or sequentially display automatically selected key-poses @cite_31 . An interactive approach @cite_9 takes user-selected frames and key points, and generates a storyboard that conveys the trajectory of an object. These approaches generally assume short clips with few objects, or a human-in-the-loop to guide the summarization process. In contrast, we aim to summarize a camera wearer's day containing hours of continuous video with hundreds of objects, with no human intervention.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_31" ], "mid": [ "1502713047", "2030371206", "1979185460" ], "abstract": [ "We present an approach for compact video summaries that allows fast and direct access to video data. The video is segmented into shots and, in appropriate video genres, into scenes, using previously proposed methods. A new concept that supports the hierarchical representation of video is presented, and is based on physical setting and camera locations. We use mosaics to represent and cluster shots, and detect appropriate mosaics to represent scenes. In contrast to approaches to video indexing which are based on key-frames, our efficient mosaic-based scene representation allows fast clustering of scenes into physical settings, as well as further comparison of physical settings across videos. This enables us to detect plots of different episodes in situation comedies and serves as a basis for indexing whole video sequences. In sports videos where settings are not as well defined, our approachallo ws classifying shots for characteristic event detection. We use a novel method for mosaic comparison and create a highly compact non-temporal representation of video. This representation allows accurate comparison of scenes across different videos and serves as a basis for indexing video libraries.", "We present a method for visualizing short video clips in a single static image, using the visual language of storyboards. These schematic storyboards are composed from multiple input frames and annotated using outlines, arrows, and text describing the motion in the scene. The principal advantage of this storyboard representation over standard representations of video -- generally either a static thumbnail image or a playback of the video clip in its entirety -- is that it requires only a moment to observe and comprehend but at the same time retains much of the detail of the source video. 
Our system renders a schematic storyboard layout based on a small amount of user interaction. We also demonstrate an interaction technique to scrub through time using the natural spatial dimensions of the storyboard. Potential applications include video editing, surveillance summarization, assembly instructions, composition of graphic novels, and illustration of camera technique for film studies.", "We propose a method for generating visual summaries of video. It reduces browsing time, minimizes screen-space utilization, while preserving the crux of the video content and the sensation of motion. The outputs are images or short clips, denoted as dynamic stills or clip trailers, respectively. The method selects informative poses out of extracted video objects. Optimal rotations and transparency supports visualization of an increased number of poses, leading to concise activity visualization. Our method addresses previously avoided scenarios, e.g., activities occurring in one place, or scenes with non-static background. We demonstrate and evaluate the method for various types of videos." ] }
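The egocentric-summary abstract in this record mentions an efficient dynamic program that selects storyboard frames while trading off importance, visual uniqueness, and temporal displacement. A much-simplified sketch of such a selection DP (the scalar "features" and importance scores are invented, and the paper's actual objective and cues are richer):

```python
# Simplified budgeted keyframe selection by dynamic programming: pick exactly
# k frames maximizing total importance while penalizing visually similar
# (here: numerically close "feature") consecutive picks. Illustrative only.
def select_keyframes(importance, feature, k, sim_penalty=1.0):
    n = len(importance)
    NEG = float("-inf")
    # dp[t][j]: best score using t frames, the last selected being frame j.
    dp = [[NEG] * n for _ in range(k + 1)]
    back = [[None] * n for _ in range(k + 1)]
    for j in range(n):
        dp[1][j] = importance[j]
    for t in range(2, k + 1):
        for j in range(n):
            for i in range(j):
                if dp[t - 1][i] == NEG:
                    continue
                # Redundancy penalty between consecutive selections.
                redund = sim_penalty * max(0.0, 1.0 - abs(feature[j] - feature[i]))
                score = dp[t - 1][i] + importance[j] - redund
                if score > dp[t][j]:
                    dp[t][j] = score
                    back[t][j] = i
    # Recover the best chain of k frames by backtracking.
    j = max(range(n), key=lambda j: dp[k][j])
    picks = [j]
    for t in range(k, 1, -1):
        j = back[t][j]
        picks.append(j)
    return picks[::-1]
```

The O(k·n²) table fill mirrors the general shape of budgeted selection DPs; the real method's terms (importance regressor output, visual uniqueness, temporal displacement) replace the toy penalty used here.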
1505.04803
2071711566
We present a video summarization approach for egocentric or "wearable" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video--such as the nearness to hands, gaze, and frequency of occurrence--and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.
Compact dynamic summaries simultaneously show several spatially non-overlapping actions from different times of the video @cite_32 @cite_11 . While that framework aims to focus on foreground objects, it assumes a static camera and is therefore inapplicable to egocentric video. A re-targeting approach aims to simultaneously preserve an original video's content while reducing artifacts @cite_27 , but unlike our approach, does not attempt to characterize the varying degrees of object importance. In a semi-automatic method @cite_20 , irrelevant video frames are removed by detecting the main object of interest given a few user-annotated training frames. In contrast, our approach discovers multiple important objects.
{ "cite_N": [ "@cite_27", "@cite_32", "@cite_20", "@cite_11" ], "mid": [ "2115273023", "", "2017691720", "2126802797" ], "abstract": [ "We propose a principled approach to summarization of visual data (images or video) based on optimization of a well-defined similarity measure. The problem we consider is re-targeting (or summarization) of image video data into smaller sizes. A good \"visual summary\" should satisfy two properties: (1) it should contain as much as possible visual information from the input data; (2) it should introduce as few as possible new visual artifacts that were not in the input data (i.e., preserve visual coherence). We propose a bi-directional similarity measure which quantitatively captures these two requirements: Two signals S and T are considered visually similar if all patches of S (at multiple scales) are contained in T, and vice versa. The problem of summarization re-targeting is posed as an optimization problem of this bi-directional similarity measure. We show summarization results for image and video data. We further show that the same approach can be used to address a variety of other problems, including automatic cropping, completion and synthesis of visual data, image collage, object removal, photo reshuffling and more.", "", "We present a novel framework for generating and ranking plausible object hypotheses in an image using bottom-up processes and mid-level cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge about properties of individual object classes, by solving a sequence of constrained parametric min-cut problems (CPMC) on a regular image grid. We then learn to rank the object hypotheses by training a continuous model to predict how plausible the segments are, given their mid-level region properties. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC09 segmentation dataset.
It achieves the same average best segmentation covering as the best performing technique to date [2], 0.61 when using just the top 7 ranked segments, instead of the full hierarchy in [2]. Our method achieves 0.78 average best covering using 154 segments. In a companion paper [18], we also show that the algorithm achieves state-of-the-art results when used in a segmentation-based recognition pipeline.", "The world is covered with millions of Webcams, many transmit everything in their field of view over the Internet 24 hours a day. A Web search finds public webcams in airports, intersections, classrooms, parks, shops, ski resorts, and more. Even more private surveillance cameras cover many private and public facilities. Webcams are an endless resource, but most of the video broadcast will be of little interest due to lack of activity. We propose to generate a short video that will be a synopsis of endless video streams, generated by webcams or surveillance cameras. We would like to address queries like \"I would like to watch in one minute the highlights of this camera broadcast during the past day\". The process includes two major phases: (i) An online conversion of the video stream into a database of objects and activities (rather than frames), (ii) A response phase, generating the video synopsis as a response to the user's query. To include maximum information in a short synopsis we simultaneously show activities that may have happened at different times. The synopsis video can also be used as an index into the original video stream." ] }
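The bidirectional similarity measure quoted above scores two signals by whether every patch of one appears in the other, and vice versa. A toy 1-D version of that patch-based idea (the window size and Euclidean distance are simplifications; the paper works with 2-D patches at multiple scales):

```python
import numpy as np

def patches(x, w):
    """All length-w sliding windows of a 1-D signal x."""
    return np.stack([x[i:i + w] for i in range(len(x) - w + 1)])

def bidirectional_dissimilarity(S, T, w=3):
    """Completeness + coherence, in the spirit of bidirectional similarity:
    every patch of S should appear in T (completeness) and every patch of T
    should appear in S (coherence). Toy 1-D version of the 2-D measure."""
    PS, PT = patches(S, w), patches(T, w)
    # Pairwise patch distances, shape (len(PS), len(PT)).
    d = np.linalg.norm(PS[:, None, :] - PT[None, :, :], axis=2)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```

A signal compared to itself scores 0 (every patch is contained in the other), while reordering the samples breaks patch containment and raises the score, which is why optimizing this measure discourages both information loss and new artifacts.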
1505.04803
2071711566
We present a video summarization approach for egocentric or "wearable" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video--such as the nearness to hands, gaze, and frequency of occurrence--and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.
Early saliency detectors rely on bottom-up image cues (e.g., @cite_3 @cite_4 ). More recent work tries to learn high-level saliency measures using various Gestalt cues, whether for static images @cite_5 @cite_43 @cite_28 @cite_15 or video @cite_46 . Whereas typically such metrics aim to prime a visual search process, we are interested in high-level saliency for the sake of isolating those things worth summarizing. Researchers have also explored ranking object importance in static images, learning what people mention first from human-annotated tags @cite_52 @cite_39 . In contrast, we learn the importance of objects in terms of their role in a long-term video's story. Relative to any of the above, we introduce novel saliency features amenable to the egocentric video setting.
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_46", "@cite_52", "@cite_3", "@cite_39", "@cite_43", "@cite_5", "@cite_15" ], "mid": [ "2130637418", "", "1989348325", "1946367136", "2128272608", "2163740729", "2128715914", "1640070940", "1555385401" ], "abstract": [ "The classical hypothesis, that bottom-up saliency is a center-surround process, is combined with a more recent hypothesis that all saliency decisions are optimal in a decision-theoretic sense. The combined hypothesis is denoted as discriminant center-surround saliency, and the corresponding optimal saliency architecture is derived. This architecture equates the saliency of each image location to the discriminant power of a set of features with respect to the classification problem that opposes stimuli at center and surround, at that location. It is shown that the resulting saliency detector makes accurate quantitative predictions for various aspects of the psychophysics of human saliency, including non-linear properties beyond the reach of previous saliency models. Furthermore, it is shown that discriminant center-surround saliency can be easily generalized to various stimulus modalities (such as color, orientation and motion), and provides optimal solutions for many other saliency problems of interest for computer vision. Optimal solutions, under this hypothesis, are derived for a number of the former (including static natural images, dense motion fields, and even dynamic textures), and applied to a number of the latter (the prediction of human eye fixations, motion-based saliency in the presence of ego-motion, and motion-based saliency in the presence of highly dynamic backgrounds). In result, discriminant saliency is shown to predict eye fixations better than previous models, and produces background subtraction algorithms that outperform the state-of-the-art in computer vision.", "", "We present an approach to discover and segment foreground object(s) in video. 
Given an unannotated video sequence, the method first identifies object-like regions in any frame according to both static and dynamic cues. We then compute a series of binary partitions among those candidate “key-segments” to discover hypothesis groups with persistent appearance and motion. Finally, using each ranked hypothesis in turn, we estimate a pixel-level object labeling across all frames, where (a) the foreground likelihood depends on both the hypothesis's appearance as well as a novel localization prior based on partial shape matching, and (b) the background likelihood depends on cues pulled from the key-segments' (possibly diverse) surroundings observed across the sequence. Compared to existing methods, our approach automatically focuses on the persistent foreground regions of interest while resisting oversegmentation. We apply our method to challenging benchmark videos, and show competitive or better results than the state-of-the-art.", "We observe that everyday images contain dozens of objects, and that humans, in describing these images, give different priority to these objects. We argue that a goal of visual recognition is, therefore, not only to detect and classify objects but also to associate with each a level of priority which we call importance'. We propose a definition of importance and show how this may be estimated reliably from data harvested from human observers. We conclude by showing that a first-order estimate of importance may be computed from a number of simple image region measurements and does not require access to image meaning.", "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. 
The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.", "We introduce a method for image retrieval that leverages the implicit information about object importance conveyed by the list of keyword tags a person supplies for an image. We propose an unsupervised learning procedure based on Kernel Canonical Correlation Analysis that discovers the relationship between how humans tag images (e.g., the order in which words are mentioned) and the relative importance of objects and their layout in the scene. Using this discovered connection, we show how to boost accuracy for novel queries, such that the search results may more closely match the user’s mental image of the scene being sought. We evaluate our approach on two datasets, and show clear improvements over both an approach relying on image features alone, as well as a baseline that uses words and image features, but ignores the implied importance cues.", "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. This includes an innovative cue measuring the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure [17], and the combined measure to perform better than any cue alone. Finally, we show how to sample windows from an image according to their objectness distribution and give an algorithm to employ them as location priors for modern class-specific object detectors. 
In experiments on PASCAL VOC 07 we show this greatly reduces the number of windows evaluated by class-specific object detectors.", "The foreground group in a scene may be ‘discovered’ and computed as a factorized approximation to the pairwise affinity of the elements in the scene. A pointwise approximation of the pairwise affinity information may in fact be interpreted as a ‘saliency’ index, and the foreground of the scene may be obtained by thresholding it. An algorithm called ‘affinity factorization’ is thus obtained which may be used for grouping.", "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on BSDS and PASCAL VOC 2008 demonstrate our ability to find most objects within a small bag of proposed regions." ] }
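Several of the saliency abstracts above build on bottom-up center-surround cues. A minimal sketch of such a cue, comparing a small "center" box mean against a larger "surround" box mean at each pixel (the window radii are arbitrary choices for illustration, not values from any cited paper):

```python
import numpy as np

def mean_filter(img, r):
    """(2r+1) x (2r+1) box mean per pixel, via an integral image."""
    p = np.pad(img, r, mode="edge").astype(float)
    ii = p.cumsum(0).cumsum(1)
    ii = np.pad(ii, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    k = 2 * r + 1
    h, w = img.shape
    s = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
         - ii[k:k + h, :w] + ii[:h, :w])
    return s / (k * k)

def center_surround_saliency(img, r_center=1, r_surround=4):
    """Classic bottom-up cue: |center mean - surround mean| per pixel,
    normalized to [0, 1]. A toy stand-in for the multiscale feature maps
    used by the cited attention models."""
    sal = np.abs(mean_filter(img, r_center) - mean_filter(img, r_surround))
    return sal / sal.max() if sal.max() > 0 else sal
```

On an image with a single bright pixel, the map peaks around that pixel and decays elsewhere, the "conspicuous location" behavior the attention-model abstracts describe.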
1505.04803
2071711566
We present a video summarization approach for egocentric or "wearable" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video--such as the nearness to hands, gaze, and frequency of occurrence--and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.
To our knowledge, we are the first to explore visual summarization of egocentric video by predicting important objects. Recent work @cite_40 builds on our approach and uses our importance predictions as a cue to generate story-driven egocentric video summarizations.
{ "cite_N": [ "@cite_40" ], "mid": [ "2120645068" ], "abstract": [ "We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video sub shots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between sub shots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subs hot summary. Whereas traditional methods optimize a summary's diversity or representative ness, ours explicitly accounts for how one sub-event \"leads to\" another-which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects." ] }
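The story-driven summarization abstract above defines a random-walk-based metric of influence between sub-shots. A generic sketch of damped random-walk scoring over a weighted link matrix (PageRank-style power iteration; the link weights below are invented, and the paper's actual influence definition differs):

```python
import numpy as np

def walk_scores(W, damping=0.85, iters=100):
    """Stationary scores of a damped random walk over link weights W[i, j].
    Assumes every row of W has at least one outgoing link. Sub-shots that
    many others link to (e.g., via shared objects) accumulate score."""
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    s = np.full(n, 1.0 / n)
    for _ in range(iters):
        s = (1 - damping) / n + damping * (s @ P)
    return s / s.sum()
```

On a symmetric link matrix the scores come out uniform, while a node with no incoming links receives only the teleport mass and ranks lowest, which is the intuition behind using such scores to find sub-shots that "lead to" later events.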
1505.04935
1931649315
Continued reliance on human operators for managing data centers is a major impediment to their ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using generated data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster with the goal of building and evaluating a predictive model for node failures. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing machine state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers, each trained on these features, to predict if machines will fail in a future 24-hour window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88%, with precision varying between 50% and 72%. We discuss the practicality of including our predictive model as the central component of a data-driven autonomic manager and operating it on-line with live data streams (rather than off-line on data logs). All of the scripts used for BigQuery and classification analyses are publicly available from the authors' website.
The publication of the Google trace data has triggered a flurry of activity within the community, including several studies with goals related to ours. Some provide general characterization and statistics about the workload and node state of the cluster @cite_22 @cite_26 @cite_25, identifying high levels of heterogeneity and dynamism in the system, especially when compared to grid workloads @cite_0. User profiles @cite_3 and task usage shapes @cite_4 have also been characterized for this cluster. Other studies have applied clustering techniques for workload characterization, either in terms of jobs and resources @cite_5 @cite_14 or placement constraints @cite_10, with the aim of synthesizing new traces. The trace has also been used to validate various workload management algorithms: @cite_27 evaluates consolidation strategies, @cite_8 @cite_1 validate over-committing (overbooking), @cite_23 takes heterogeneity into account when provisioning, and @cite_16 investigates checkpointing algorithms.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_14", "@cite_8", "@cite_1", "@cite_3", "@cite_0", "@cite_27", "@cite_23", "@cite_5", "@cite_16", "@cite_10", "@cite_25" ], "mid": [ "", "2182419557", "2129542763", "2143492785", "", "2002566704", "2034467200", "2136510202", "2052179907", "2072362295", "", "1966771895", "2028617807", "2060331550" ], "abstract": [ "", "The increase in scale and complexity of large compute clusters motivates a need for representative workload benchmarks to evaluate the performance impact of system changes, so as to assist in designing better scheduling algorithms and in carrying out management activities. To achieve this goal, it is necessary to construct workload characterizations from which realistic performance benchmarks can be created. In this paper, we focus on characterizing run-time task resource usage for CPU, memory and disk. The goal is to find an accurate characterization that can faithfully reproduce the performance of historical workload traces in terms of key performance metrics, such as task wait time and machine resource utilization. Through experiments using workload traces from Google production clusters, we find that simply using the mean of task usage can generate synthetic workload traces that accurately reproduce resource utilizations and task waiting time. This seemingly surprising result can be justified by the fact that resource usage for CPU, memory and disk are relatively stable over time for the majority of the tasks. Our work not only presents a simple technique for constructing realistic workload benchmarks, but also provides insights into understanding workload performance in production compute clusters.", "To better understand the challenges in developing effective cloud-based resource schedulers, we analyze the first publicly available trace data from a sizable multi-purpose cluster. 
The most notable workload characteristic is heterogeneity: in resource types (e.g., cores:RAM per machine) and their usage (e.g., duration and resources needed). Such heterogeneity reduces the effectiveness of traditional slot- and core-based scheduling. Furthermore, some tasks are constrained as to the kind of machine types they can use, increasing the complexity of resource assignment and complicating task migration. The workload is also highly dynamic, varying over time and most workload features, and is driven by many short jobs that demand quick scheduling decisions. While few simplifying assumptions apply, we find that many longer-running jobs have relatively stable resource utilizations, which can help adaptive resource schedulers.", "Designing cloud computing setups is a challenging task. It involves understanding the impact of a plethora of parameters ranging from cluster configuration, partitioning, networking characteristics, and the targeted applications' behavior. The design space, and the scale of the clusters, make it cumbersome and error-prone to test different cluster configurations using real setups. Thus, the community is increasingly relying on simulations and models of cloud setups to infer system behavior and the impact of design choices. The accuracy of the results from such approaches depends on the accuracy and realistic nature of the workload traces employed. Unfortunately, few cloud workload traces are available (in the public domain). In this paper, we present the key steps towards analyzing the traces that have been made public, e.g., from Google, and inferring lessons that can be used to design realistic cloud workloads as well as enable thorough quantitative studies of Hadoop design. 
Moreover, we leverage the lessons learned from the traces to undertake two case studies: (i) Evaluating Hadoop job schedulers, and (ii) Quantifying the impact of shared storage on Hadoop system performance.", "", "One of the key enablers of a cloud provider competitiveness is ability to over-commit shared infrastructure at ratios that are higher than those of other competitors, without compromising non-functional requirements, such as performance. A widely recognized impediment to achieving this goal is so called \"Virtual Machines sprawl\", a phenomenon referring to the situation when customers order Virtual Machines (VM) on the cloud, use them extensively and then leave them inactive for prolonged periods of time. Since a typical cloud provisioning system treats new VM provision requests according to the nominal virtual hardware specification, an often occurring situation is that the nominal resources of a cloud pool become exhausted fast while the physical hosts utilization remains low.We present a novel cloud resources scheduler called Pulsar that extends OpenStack Nova Filter Scheduler. The key design principle of Pulsar is adaptivity. It recognises that effective safely attainable over-commit ratio varies with time due to workloads' variability and dynamically adapts the effective over-commit ratio to these changes. We evaluate Pulsar via extensive simulations and demonstrate its performance on the actual OpenStack based testbed running popular workloads.", "In the era of cloud computing, users encounter the challenging task of effectively composing and running their applications on the cloud. In an attempt to understand user behavior in constructing applications and interacting with typical cloud infrastructures, we analyzed a large utilization dataset of Google cluster. In the present paper, we consider user behaviorin composing applications from the perspective of topology, maximum requested computational resources, and workload type. 
We model user dynamic behavior around the user's session view. Mass-Count disparity metrics are used to investigate the characteristics of underlying statistical models and to characterize users into distinct groups according to their composition and behavioral classes and patterns. The present study reveals interesting insight into the heterogeneous structure of the Google cloud workload.", "A new era of Cloud Computing has emerged, but the characteristics of Cloud load in data centers is not perfectly clear. Yet this characterization is critical for the design of novel Cloud job and resource management systems. In this paper, we comprehensively characterize the job task load and host load in a real-world production data center at Google Inc. We use a detailed trace of over 25 million tasks across over 12,500 hosts. We study the differences between a Google data center and other Grid HPC systems, from the perspective of both work load (w.r.t. jobs and tasks) and host load (w.r.t. machines). In particular, we study the job length, job submission frequency, and the resource utilization of jobs in the different systems, and also investigate valuable statistics of machine's maximum load, queue state and relative usage levels, with different job priorities and resource attributes. We find that the Google data center exhibits finer resource allocation with respect to CPU and memory than that of Grid HPC systems. Google jobs are always submitted with much higher frequency and they are much shorter than Grid jobs. As such, Google host load exhibits higher variance and noise.", "Cloud providers aim to provide computing services for a wide range of applications, such as web applications, emails, web searches, map reduce jobs. These applications are commonly scheduled to run on multi-purpose clusters that nowadays are becoming larger and more heterogeneous. 
A major challenge is to efficiently utilize the cluster's available resources, in particular to maximize the machines' utilization level while minimizing the applications' waiting time. We studied a publicly available trace from a large Google cluster (~12,000 machines) and observed that users generally request more resources than required for running their tasks, leading to low levels of utilization. In this paper, we propose a methodology for achieving an efficient utilization of the cluster's resources while providing the users with fast and reliable computing services. The methodology consists of three main modules: i) a prediction module that forecasts the maximum resource requirement of a task, ii) a scalable scheduling module that efficiently allocates tasks to machines, and iii) a monitoring module that tracks the levels of utilization of the machines and tasks. We present results that show that the impact of more accurate resource estimations for the scheduling of tasks can lead to an increase in the average utilization of the cluster, a reduction in the number of tasks being evicted, and a reduction in the tasks' waiting time.", "Data centers consume tremendous amounts of energy in terms of power distribution and cooling. Dynamic capacity provisioning is a promising approach for reducing energy consumption by dynamically adjusting the number of active machines to match resource demands. However, despite extensive studies of the problem, existing solutions have not fully considered the heterogeneity of both workload and machine hardware found in production environments. In particular, production data centers often comprise heterogeneous machines with different capacities and energy consumption characteristics. Meanwhile, the production cloud workloads typically consist of diverse applications with different priorities, performance and resource requirements. 
Failure to consider the heterogeneity of both machines and workloads will lead to both sub-optimal energy-savings and long scheduling delays, due to incompatibility between workload requirements and the resources offered by the provisioned machines. To address this limitation, we present Harmony, a Heterogeneity-Aware dynamic capacity provisioning scheme for cloud data centers. Specifically, we first use the K-means clustering algorithm to divide workload into distinct task classes with similar characteristics in terms of resource and performance requirements. Then we present a technique that dynamically adjusts the number of machines to minimize total energy consumption and scheduling delay. Simulations using traces from a Google's compute cluster demonstrate Harmony can reduce energy by 28 percent compared to heterogeneity-oblivious solutions.", "", "In this paper, we aim at optimizing fault-tolerance techniques based on a checkpointing/restart mechanism, in the context of cloud computing. Our contribution is three-fold. (1) We derive a fresh formula to compute the optimal number of checkpoints for cloud jobs with varied distributions of failure events. Our analysis is not only generic with no assumption on failure probability distribution, but also attractively simple to apply in practice. (2) We design an adaptive algorithm to optimize the impact of checkpointing regarding various costs like checkpointing/restart overhead. (3) We evaluate our optimized solution in a real cluster environment with hundreds of virtual machines and the Berkeley Lab Checkpoint/Restart tool. Task failure events are emulated via a production trace produced on a large-scale Google data center. Experiments confirm that our solution is fairly suitable for Google systems. 
Our optimized formula outperforms Young's formula by 3-10 percent, reducing wall-clock lengths by 50-100 seconds per job on average.", "Evaluating the performance of large compute clusters requires benchmarks with representative workloads. At Google, performance benchmarks are used to obtain performance metrics such as task scheduling delays and machine resource utilizations to assess changes in application codes, machine configurations, and scheduling algorithms. Existing approaches to workload characterization for high performance computing and grids focus on task resource requirements for CPU, memory, disk, I O, network, etc. Such resource requirements address how much resource is consumed by a task. However, in addition to resource requirements, Google workloads commonly include task placement constraints that determine which machine resources are consumed by tasks. Task placement constraints arise because of task dependencies such as those related to hardware architecture and kernel version. This paper develops methodologies for incorporating task placement constraints and machine properties into performance benchmarks of large compute clusters. Our studies of Google compute clusters show that constraints increase average task scheduling delays by a factor of 2 to 6, which often results in tens of minutes of additional task wait time. To understand why, we extend the concept of resource utilization to include constraints by introducing a new metric, the Utilization Multiplier (UM). UM is the ratio of the resource utilization seen by tasks with a constraint to the average utilization of the resource. UM provides a simple model of the performance impact of constraints in that task scheduling delays increase with UM. Last, we describe how to synthesize representative task constraints and machine properties, and how to incorporate this synthesis into existing performance benchmarks. 
Using synthetic task constraints and machine properties generated by our methodology, we accurately reproduce performance metrics for benchmarks of Google compute clusters with a discrepancy of only 13% in task scheduling delay and 5% in resource utilization.", "Cloud computing offers high scalability, flexibility and cost-effectiveness to meet emerging computing requirements. Understanding the characteristics of real workloads on a large production cloud cluster benefits not only cloud service providers but also researchers and daily users. This paper studies a large-scale Google cluster usage trace dataset and characterizes how the machines in the cluster are managed and the workloads submitted during a 29-day period behave. We focus on the frequency and pattern of machine maintenance events, job- and task-level workload behavior, and how the overall cluster resources are utilized." ] }
1505.04467
1706899115
We explore a variety of nearest neighbor baseline approaches for image captioning. These approaches find a set of nearest neighbor images in the training set from which a caption may be borrowed for the query image. We select a caption for the query image by finding the caption that best represents the "consensus" of the set of candidate captions gathered from the nearest neighbor images. When measured by automatic evaluation metrics on the MS COCO caption evaluation server, these approaches perform as well as many recent approaches that generate novel captions. However, human studies show that a method that generates novel captions is still preferred over the nearest neighbor approach.
Several early papers proposed producing image captions by copying captions from other images @cite_18 @cite_35 @cite_24 @cite_4. @cite_18 use nearest neighbors to define image and caption features, capturing information about objects, actions, and scenes, while @cite_35 use a combination of object, stuff, people and scene information. @cite_33 use GIST nearest neighbors to the query image. @cite_24 use Kernel Canonical Correlation Analysis to map images and captions to a common space where the nearest caption can be found. While not using explicit captions, @cite_4 explores the task of captioning images using surrounding text on webpages.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_4", "@cite_33", "@cite_24" ], "mid": [ "2109586012", "1897761818", "2154158499", "", "68733909" ], "abstract": [ "We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state of the art, but fairly noisy, estimates of image content to produce even more pleasing results. Finally we introduce a new objective performance measure for image captioning.", "Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned us-ingdata. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.", "Images are often used to convey many different concepts or illustrate many different stories. We propose an algorithm to mine multiple diverse, relevant, and interesting text snippets for images on the web. Our algorithm scales to all images on the web. 
For each image, all webpages that contain it are considered. The top-K text snippet selection problem is posed as combinatorial subset selection with the goal of choosing an optimal set of snippets that maximizes a combination of relevancy, interestingness, and diversity. The relevancy and interestingness are scored by machine learned models. Our algorithm is run at scale on the entire image index of a major search engine resulting in the construction of a database of images with their corresponding text snippets. We validate the quality of the database through a large-scale comparative study. We showcase the utility of the database through two web-scale applications: (a) augmentation of images on the web as webpages are browsed and (b) an image browsing experience (similar in spirit to web browsing) that is enabled by interconnecting semantically related images (which may not be visually related) through shared concepts in their corresponding text snippets.", "", "The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. 
We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated." ] }
1505.04467
1706899115
We explore a variety of nearest neighbor baseline approaches for image captioning. These approaches find a set of nearest neighbor images in the training set from which a caption may be borrowed for the query image. We select a caption for the query image by finding the caption that best represents the "consensus" of the set of candidate captions gathered from the nearest neighbor images. When measured by automatic evaluation metrics on the MS COCO caption evaluation server, these approaches perform as well as many recent approaches that generate novel captions. However, human studies show that a method that generates novel captions is still preferred over the nearest neighbor approach.
@cite_24 popularized the task of image and caption ranking. That is, given an image, rank a set of captions based on which are most relevant. They argued that this task was more correlated with human judgment than the task of novel caption generation measured using automatic metrics. Numerous papers have explored the task of caption ranking @cite_9 @cite_19 @cite_0 @cite_11 @cite_30 @cite_25. These approaches could also be used to rank the set of training captions and to select the one that is most relevant. As far as we are aware, how well such an approach would perform for generation on the MS COCO caption dataset is still an open question. In this paper, we only explore a simple nearest neighbor baseline approach.
{ "cite_N": [ "@cite_30", "@cite_9", "@cite_24", "@cite_19", "@cite_0", "@cite_25", "@cite_11" ], "mid": [ "1895989618", "2159243025", "68733909", "2149557440", "2123024445", "1811254738", "2953276893" ], "abstract": [ "In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. Critical to our approach is a recurrent neural network that attempts to dynamically build a visual representation of the scene as a caption is being generated or read. The representation automatically learns to remember long-term visual concepts. Our model is capable of both generating novel captions given an image, and reconstructing visual features given an image description. We evaluate our approach on several tasks. These include sentence generation, sentence retrieval and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human generated captions, our automatically generated captions are equal to or preferred by humans 21.0% of the time. Results are better than or comparable to state-of-the-art results on the image and sentence retrieval tasks for methods using similar visual features.", "In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model outperforms the state-of-the-art generative method. 
In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.", "The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.", "Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. 
However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.", "Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.", "In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. 
It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieve significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu/~junhua.mao/m-RNN.html.", "We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit." ] }
1505.04202
2204358493
In many resource allocation problems, a centralized controller needs to award some resource to a user selected from a collection of distributed users with the goal of maximizing the utility the user would receive from the resource. This can be modeled as the controller computing an extremum of the distributed users’ utilities. The overhead rate necessary to enable the controller to reproduce the users’ local state can be prohibitively high. An approach to reduce this overhead is interactive communication wherein rate savings are achieved by tolerating an increase in delay. In this paper, we consider the design of a simple achievable scheme based on successive refinements of scalar quantization at each user. The optimal quantization policy is computed via a dynamic program and we demonstrate that tolerating a small increase in delay can yield significant rate savings. We then consider two simpler quantization policies to investigate the scaling properties of the rate-delay tradeoffs. Using a combination of these simpler policies, the performance of the optimal policy can be closely approximated with lower computational costs.
In the present work, we consider a distributed model where all users communicate to a central node that wishes to compute a function of the users' sources. In particular, the users cannot overhear each other but can design their codes (with knowledge of the source distributions of other users) to communicate with the central node---called "cooperative design, separate encoding" @cite_17. Such models are referred to as the chief executive officer (CEO) problem @cite_15. We begin by reviewing fundamental limits and achievable schemes for non-interactive variants of CEO-type problems. We then review results for interactive variants that demonstrate significant rate savings may be possible. The non-interactive CEO problem has received considerably more attention than the interactive variant, and, for the cases where fundamental limits are known, quantization followed by entropy coding closely approximates these limits. This motivates our study of interactive quantization as a means to realize further rate savings at the expense of an incurred delay.
{ "cite_N": [ "@cite_15", "@cite_17" ], "mid": [ "2068143054", "2163382065" ], "abstract": [ "We consider a new problem in multiterminal source coding motivated by the following decentralized communication estimation task. A firm's Chief Executive Officer (CEO) is interested in the data sequence {X(t)}_{t=1}^∞ which cannot be observed directly, perhaps because it represents tactical decisions by a competing firm. The CEO deploys a team of L agents who observe independently corrupted versions of {X(t)}_{t=1}^∞. Because X(t) is only one among many pressing matters to which the CEO must attend, the combined data rate at which the agents may communicate information about their observations to the CEO is limited to, say, R bits per second. If the agents were permitted to confer and pool their data, then in the limit as L → ∞ they usually would be able to smooth out their independent observation noises entirely. Then they could use their R bits per second to provide the CEO with a representation of {X(t)} with fidelity D(R), where D(·) is the distortion-rate function of {X(t)}. In particular, with such data pooling D can be made arbitrarily small if R exceeds the entropy rate H of {X(t)}. Suppose, however, that the agents are not permitted to convene, Agent i having to send data based solely on his own noisy observations {Y_i(t)}. We show that then there does not exist a finite value of R for which even infinitely many agents can make D arbitrarily small. Furthermore, in this isolated-agents case we determine the asymptotic behavior of the minimal error frequency in the limit as L and then R tend to infinity.", "In a decentralized hypothesis testing network, several peripheral nodes observe an environment and communicate their observations to a central node for the final decision. The presence of capacity constraints introduces theoretical and practical problems.
The following problem is addressed: given that the peripheral encoders that satisfy these constraints are scalar quantizers, how should they be designed in order that the central test to be performed on their output indices is most powerful? The scheme is called cooperative design-separate encoding since the quantizers process separate observations but have a common goal; they seek to maximize a system-wide performance measure. The Bhattacharyya distance of the joint index space as such a criterion is suggested, and a design algorithm to optimize arbitrarily many quantizers cyclically is proposed. A simplified version of the algorithm, namely an independent design-separate encoding scheme, where the correlation is either absent or neglected for the sake of simplicity, is outlined. Performances are compared through worked examples." ] }
1505.04202
2204358493
In many resource allocation problems, a centralized controller needs to award some resource to a user selected from a collection of distributed users with the goal of maximizing the utility the user would receive from the resource. This can be modeled as the controller computing an extremum of the distributed users’ utilities. The overhead rate necessary to enable the controller to reproduce the users’ local state can be prohibitively high. An approach to reduce this overhead is interactive communication wherein rate savings are achieved by tolerating an increase in delay. In this paper, we consider the design of a simple achievable scheme based on successive refinements of scalar quantization at each user. The optimal quantization policy is computed via a dynamic program and we demonstrate that tolerating a small increase in delay can yield significant rate savings. We then consider two simpler quantization policies to investigate the scaling properties of the rate-delay tradeoffs. Using a combination of these simpler policies, the performance of the optimal policy can be closely approximated with lower computational costs.
In information theory, the interest is usually in characterizing inner and outer bounds for the rate region. The generic CEO problem, wherein the CEO wants to reproduce the source from the received signals, was introduced in @cite_15 . The rate region for the problem of source reproduction under a distortion constraint remains unknown except in a few special cases. Vempaty and Varshney considered the CEO problem for non-regular source distributions (e.g., truncated Gaussian) with quadratic distortion and studied the asymptotic behavior of the distortion function @cite_31 .
{ "cite_N": [ "@cite_15", "@cite_31" ], "mid": [ "2068143054", "2043312033" ], "abstract": [ "We consider a new problem in multiterminal source coding motivated by the following decentralized communication estimation task. A firm's Chief Executive Officer (CEO) is interested in the data sequence {X(t)}_{t=1}^∞ which cannot be observed directly, perhaps because it represents tactical decisions by a competing firm. The CEO deploys a team of L agents who observe independently corrupted versions of {X(t)}_{t=1}^∞. Because X(t) is only one among many pressing matters to which the CEO must attend, the combined data rate at which the agents may communicate information about their observations to the CEO is limited to, say, R bits per second. If the agents were permitted to confer and pool their data, then in the limit as L → ∞ they usually would be able to smooth out their independent observation noises entirely. Then they could use their R bits per second to provide the CEO with a representation of {X(t)} with fidelity D(R), where D(·) is the distortion-rate function of {X(t)}. In particular, with such data pooling D can be made arbitrarily small if R exceeds the entropy rate H of {X(t)}. Suppose, however, that the agents are not permitted to convene, Agent i having to send data based solely on his own noisy observations {Y_i(t)}. We show that then there does not exist a finite value of R for which even infinitely many agents can make D arbitrarily small. Furthermore, in this isolated-agents case we determine the asymptotic behavior of the minimal error frequency in the limit as L and then R tend to infinity.", "We consider the CEO problem for non-regular source distributions (such as uniform or truncated Gaussian). A group of agents observe independently corrupted versions of data and transmit coded versions over rate-limited links to a CEO.
The CEO then estimates the underlying data based on the received coded observations. Agents are not allowed to convene before transmitting their observations. This formulation is motivated by the practical problem of a firm’s CEO estimating (non-regular) beliefs about a sequence of events, before acting on them. Agents’ observations are modeled as jointly distributed with the underlying data through a given conditional probability density function. We study the asymptotic behavior of the minimum achievable mean squared error distortion at the CEO in the limit when the number of agents @math and the sum rate @math tend to infinity. We establish a @math convergence of the distortion, an intermediate regime of performance between the exponential behavior in discrete CEO problems [Berger, Zhang, and Viswanathan (1996)], and the @math behavior in Gaussian CEO problems [Viswanathan and Berger (1997)]. Achievability is proved by a layered architecture with scalar quantization, distributed entropy coding, and midrange estimation. The converse is proved using the Bayesian Chazan–Zakai–Ziv bound." ] }
1505.04202
2204358493
In many resource allocation problems, a centralized controller needs to award some resource to a user selected from a collection of distributed users with the goal of maximizing the utility the user would receive from the resource. This can be modeled as the controller computing an extremum of the distributed users’ utilities. The overhead rate necessary to enable the controller to reproduce the users’ local state can be prohibitively high. An approach to reduce this overhead is interactive communication wherein rate savings are achieved by tolerating an increase in delay. In this paper, we consider the design of a simple achievable scheme based on successive refinements of scalar quantization at each user. The optimal quantization policy is computed via a dynamic program and we demonstrate that tolerating a small increase in delay can yield significant rate savings. We then consider two simpler quantization policies to investigate the scaling properties of the rate-delay tradeoffs. Using a combination of these simpler policies, the performance of the optimal policy can be closely approximated with lower computational costs.
The aforementioned literature provides insightful outer bounds against which the performance of distributed quantizer designs can be compared @cite_14 @cite_27 , but the achievable schemes used in the proofs usually require block coding with asymptotically large block lengths, which is not practical. For use in a real system, simpler achievable schemes with low computational complexity and performance close to these limits are needed.
{ "cite_N": [ "@cite_27", "@cite_14" ], "mid": [ "2950734754", "2074982335" ], "abstract": [ "A key aspect of many resource allocation problems is the need for the resource controller to compute a function, such as the max or arg max, of the competing users' metrics. Information must be exchanged between the competing users and the resource controller in order for this function to be computed. In many practical resource controllers the competing users' metrics are communicated to the resource controller, which then computes the desired extremization function. However, in this paper it is shown that information rate savings can be obtained by recognizing that the controller only needs to determine the result of this extremization function. If the extremization function is to be computed losslessly, the rate savings are shown in most cases to be at most 2 bits independent of the number of competing users. Motivated by the small savings in the lossless case, simple achievable schemes for both the lossy and interactive variants of this problem are considered. It is shown that both of these approaches have the potential to realize large rate savings, especially in the case where the number of competing users is large. For the lossy variant, it is shown that the proposed simple achievable schemes are in fact close to the fundamental limit given by the rate distortion function.", "Efficient downlink resource allocation (e.g., subbands in OFDMA LTE) requires channel state information (e.g., subband gains) local to each user be transmitted to the base station (BS). Lossy encoding of the relevant state may result in suboptimal resource allocations by the BS, the performance cost of which may be captured by a suitable distortion measure. This problem is an indirect distributed lossy source coding problem with the function to be computed representing the optimal resource allocation, and the distortion measuring the cost of suboptimal allocations.
In this paper we investigate the use of distributed scalar quantizers for lossy encoding of state, where the BS wishes to compute the index of the user with the largest gain on each subband. We prove the superiority of a heterogeneous (across users) quantizer design over the optimal homogeneous quantizer design, even though the source variables are i.i.d." ] }
1505.04202
2204358493
In many resource allocation problems, a centralized controller needs to award some resource to a user selected from a collection of distributed users with the goal of maximizing the utility the user would receive from the resource. This can be modeled as the controller computing an extremum of the distributed users’ utilities. The overhead rate necessary to enable the controller to reproduce the users’ local state can be prohibitively high. An approach to reduce this overhead is interactive communication wherein rate savings are achieved by tolerating an increase in delay. In this paper, we consider the design of a simple achievable scheme based on successive refinements of scalar quantization at each user. The optimal quantization policy is computed via a dynamic program and we demonstrate that tolerating a small increase in delay can yield significant rate savings. We then consider two simpler quantization policies to investigate the scaling properties of the rate-delay tradeoffs. Using a combination of these simpler policies, the performance of the optimal policy can be closely approximated with lower computational costs.
A central concern of signal processing is to provide optimized, practical quantization algorithms for DFC systems whose performance is close to the rate-distortion limits @cite_9 @cite_13 @cite_17 @cite_33 @cite_11 @cite_5 @cite_6 . Asymptotic results for the sufficiently high-rate, low-distortion regime have been derived by applying high-rate quantization theory @cite_16 , while non-asymptotic results have been obtained from generalizations of Lloyd's algorithm.
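As a concrete reminder of what a Lloyd-style design iterates on, here is a minimal, non-distributed Lloyd-Max loop for a scalar quantizer: codewords move to cell centroids, then cell boundaries move midway between adjacent codewords. This is a generic textbook sketch, not any of the cited algorithms, and the function name and parameters are made up for illustration.

```python
import random

def lloyd_max(samples, levels, iters=50):
    """One-dimensional Lloyd-Max iteration on empirical samples:
    alternately set each codeword to the centroid of its cell, then
    redraw cell boundaries midway between adjacent codewords."""
    samples = sorted(samples)
    lo, hi = samples[0], samples[-1]
    # initialize codewords on a uniform grid over the sample range
    code = [lo + (hi - lo) * (k + 0.5) / levels for k in range(levels)]
    for _ in range(iters):
        bounds = [(code[k] + code[k + 1]) / 2 for k in range(levels - 1)]
        cells = [[] for _ in range(levels)]
        for x in samples:
            k = sum(x > b for b in bounds)   # index of the cell containing x
            cells[k].append(x)
        # centroid step; keep the old codeword if a cell is empty
        code = [sum(c) / len(c) if c else code[k]
                for k, c in enumerate(cells)]
    return code

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]
codebook = lloyd_max(data, levels=4)
mse = sum(min((x - c) ** 2 for c in codebook) for x in data) / len(data)
print(codebook, mse)
```

For a standard Gaussian source, the 4-level empirical MSE should land near the classical Lloyd-Max value (about 0.12), well below the source variance of 1.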
{ "cite_N": [ "@cite_33", "@cite_9", "@cite_17", "@cite_6", "@cite_5", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2103175928", "2027315598", "2163382065", "2023267926", "2049053255", "2020053595", "2139519968", "2267540778" ], "abstract": [ "Quantization is the mapping of continuous quantities into discrete quantities, an operation far more general and flexible than the ubiquitous example of analog-to-digital conversion of scalar amplitude values. By appropriate choice of distortion measures and transmission constraints, quantization can incorporate signal processing such as statistical classification, estimation, and modeling. We here survey several approaches to incorporating such tasks into the quantization and possible extensions to distributed signal processing.", "An important class of engineering problems involves sensing an environment and making estimates based on the phenomena sensed. In the traditional model of this problem, the sensors' observations are available to the estimator without alteration. There is growing interest in distributed sensing systems in which several observations are communicated to the estimator over channels of limited capacity. The observations must be separately encoded so that the target can be estimated with minimum distortion. Two questions are addressed for a special case of this problem wherein there are two sensors which observe noisy data and communicate with a single estimator: 1) if the encoder is unlimited in complexity, what communication rates and distortions can be achieved, 2) if the encoder must be a quantizer (a mapping of a single observation sample into a digital output), how can it be designed for good performance? The first question is treated by the techniques of information theory. It is proved that a given combination of rates and distortion is achievable if there exist degraded versions of the observations that satisfy certain formulas. The second question is treated by two approaches.
In the first, the outputs of the quantizers undergo a second stage of encoding which exploits their correlation to reduce the output rate. Algorithms which design the second stage are presented and tested. The second approach is based on the distributional distance, a measure of dissimilarity between two probability distributions. An algorithm to modify a quantizer for increased distributional distance is derived and tested.", "In a decentralized hypothesis testing network, several peripheral nodes observe an environment and communicate their observations to a central node for the final decision. The presence of capacity constraints introduces theoretical and practical problems. The following problem is addressed: given that the peripheral encoders that satisfy these constraints are scalar quantizers, how should they be designed in order that the central test to be performed on their output indices is most powerful? The scheme is called cooperative design-separate encoding since the quantizers process separate observations but have a common goal; they seek to maximize a system-wide performance measure. The Bhattacharyya distance of the joint index space as such a criterion is suggested, and a design algorithm to optimize arbitrarily many quantizers cyclically is proposed. A simplified version of the algorithm, namely an independent design-separate encoding scheme, where the correlation is either absent or neglected for the sake of simplicity, is outlined. Performances are compared through worked examples.", "We present an algorithm for designing locally optimal vector quantizers for general networks. We discuss the algorithm's implementation and compare the performance of the resulting "network vector quantizers" to traditional vector quantizers (VQs) and to rate-distortion (R-D) bounds where available.
While some special cases of network codes (e.g., multiresolution (MR) and multiple description (MD) codes) have been studied in the literature, we here present a unifying approach that both includes these existing solutions as special cases and provides solutions to previously unsolved examples.", "Distributed functional scalar quantization (DFSQ) theory provides optimality conditions and predicts performance of data acquisition systems in which a computation on acquired data is desired. We address two limitations of previous works: prohibitively expensive decoder design and a restriction to source distributions with bounded support. We show that a much simpler decoder has equivalent asymptotic performance to the conditional expectation estimator studied previously, thus reducing decoder design complexity. The simpler decoder features decoupled communication and computation blocks. Moreover, we extend the DFSQ framework with the simpler decoder to source distributions with unbounded support. Finally, through simulation results, we demonstrate that performance at moderate coding rates is well predicted by the asymptotic analysis, and we give new insight on the rate of convergence.", "The authors consider how much performance advantage a fixed-dimensional vector quantizer can gain over a scalar quantizer. They collect several results from high-resolution or asymptotic (in rate) quantization theory and use them to identify source and system characteristics that contribute to the vector quantizer advantage. One well-known advantage is due to improvement in the space-filling properties of polytopes as the dimension increases. Others depend on the source's memory and marginal density shape. The advantages are used to gain insight into product, transform, lattice, predictive, pyramid, and universal quantizers. Although numerical prediction consistently overestimated gains in low rate (1 bit sample) experiments, the theoretical insights may be useful even at these rates. 
", "Distributed nature of the sensor network architecture introduces unique challenges and opportunities for collaborative networked signal processing techniques that can potentially lead to significant performance gains. Many evolving low-power sensor network scenarios need to have high spatial density to enable reliable operation in the face of component node failures as well as to facilitate high spatial localization of events of interest. This induces a high level of network data redundancy, where spatially proximal sensor readings are highly correlated. We propose a new way of removing this redundancy in a completely distributed manner, i.e., without the sensors needing to talk to one another. Our constructive framework for this problem is dubbed DISCUS (distributed source coding using syndromes) and is inspired by fundamental concepts from information theory. We review the main ideas, provide illustrations, and give the intuition behind the theory that enables this framework. We present a new domain of collaborative information communication and processing through the framework on distributed source coding. This framework enables highly effective and efficient compression across a sensor network without the need to establish inter-node communication, using well-studied and fast error-correcting coding algorithms.", "Communication of quantized information is frequently followed by a computation. We consider situations of distributed functional scalar quantization: distributed scalar quantization of (possibly correlated) sources followed by centralized computation of a function. Under smoothness conditions on the sources and function, companding scalar quantizer designs are developed to minimize mean-squared error (MSE) of the computed function as the quantizer resolution is allowed to grow.
Striking improvements over quantizers designed without consideration of the function are possible and are larger in the entropy-constrained setting than in the fixed-rate setting. As extensions to the basic analysis, we characterize a large class of functions for which regular quantization suffices, consider certain functions for which asymptotic optimality is achieved without arbitrarily fine quantization, and allow limited collaboration between source encoders. In the entropy-constrained setting, a single bit per sample communicated between encoders can have an arbitrarily large effect on functional distortion. In contrast, such communication has very little effect in the fixed-rate setting." ] }
1505.04202
2204358493
In many resource allocation problems, a centralized controller needs to award some resource to a user selected from a collection of distributed users with the goal of maximizing the utility the user would receive from the resource. This can be modeled as the controller computing an extremum of the distributed users’ utilities. The overhead rate necessary to enable the controller to reproduce the users’ local state can be prohibitively high. An approach to reduce this overhead is interactive communication wherein rate savings are achieved by tolerating an increase in delay. In this paper, we consider the design of a simple achievable scheme based on successive refinements of scalar quantization at each user. The optimal quantization policy is computed via a dynamic program and we demonstrate that tolerating a small increase in delay can yield significant rate savings. We then consider two simpler quantization policies to investigate the scaling properties of the rate-delay tradeoffs. Using a combination of these simpler policies, the performance of the optimal policy can be closely approximated with lower computational costs.
For the high-rate, low-distortion scenario, a quantization scheme for the analysis of distributed scalar quantization was considered in @cite_11 . It was shown that, under certain constraints on the objective function and source distributions, the high-resolution approach asymptotically achieves the rate-distortion limits, and the optimized quantizer is regular (a quantizer is called regular if each partition cell is an interval and each output level lies within the corresponding interval). A similar high-resolution approach, but with a simpler decoder design and relaxed requirements on the source distributions, was used in @cite_5 .
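The flavor of such high-resolution results can be checked numerically: for a smooth source density, the MSE of a uniform scalar quantizer with step Δ behaves like Δ²/12 as the rate grows. The snippet below is a self-contained sanity check of that textbook approximation, not code from the cited works.

```python
import random

# High-rate approximation: for a smooth density, the MSE of a uniform
# scalar quantizer with step size `step` approaches step**2 / 12.
random.seed(3)
samples = [random.random() for _ in range(200000)]  # uniform source on [0, 1)

results = []
for bits in (2, 4, 6):
    step = 2.0 ** -bits
    # midpoint reconstruction of the cell containing each sample
    mse = sum((x - ((x // step) + 0.5) * step) ** 2
              for x in samples) / len(samples)
    results.append((bits, mse, step ** 2 / 12))
    print(bits, mse, step ** 2 / 12)
```

For the uniform source the Δ²/12 law holds at every rate, so the empirical MSE tracks the prediction closely even at 2 bits; for non-uniform smooth densities the match improves as the rate grows.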
{ "cite_N": [ "@cite_5", "@cite_11" ], "mid": [ "2049053255", "2267540778" ], "abstract": [ "Distributed functional scalar quantization (DFSQ) theory provides optimality conditions and predicts performance of data acquisition systems in which a computation on acquired data is desired. We address two limitations of previous works: prohibitively expensive decoder design and a restriction to source distributions with bounded support. We show that a much simpler decoder has equivalent asymptotic performance to the conditional expectation estimator studied previously, thus reducing decoder design complexity. The simpler decoder features decoupled communication and computation blocks. Moreover, we extend the DFSQ framework with the simpler decoder to source distributions with unbounded support. Finally, through simulation results, we demonstrate that performance at moderate coding rates is well predicted by the asymptotic analysis, and we give new insight on the rate of convergence.", "Communication of quantized information is frequently followed by a computation. We consider situations of distributed functional scalar quantization: distributed scalar quantization of (possibly correlated) sources followed by centralized computation of a function. Under smoothness conditions on the sources and function, companding scalar quantizer designs are developed to minimize mean-squared error (MSE) of the computed function as the quantizer resolution is allowed to grow. Striking improvements over quantizers designed without consideration of the function are possible and are larger in the entropy-constrained setting than in the fixed-rate setting. As extensions to the basic analysis, we characterize a large class of functions for which regular quantization suffices, consider certain functions for which asymptotic optimality is achieved without arbitrarily fine quantization, and allow limited collaboration between source encoders. 
In the entropy-constrained setting, a single bit per sample communicated between encoders can have an arbitrarily large effect on functional distortion. In contrast, such communication has very little effect in the fixed-rate setting." ] }
1505.04202
2204358493
In many resource allocation problems, a centralized controller needs to award some resource to a user selected from a collection of distributed users with the goal of maximizing the utility the user would receive from the resource. This can be modeled as the controller computing an extremum of the distributed users’ utilities. The overhead rate necessary to enable the controller to reproduce the users’ local state can be prohibitively high. An approach to reduce this overhead is interactive communication wherein rate savings are achieved by tolerating an increase in delay. In this paper, we consider the design of a simple achievable scheme based on successive refinements of scalar quantization at each user. The optimal quantization policy is computed via a dynamic program and we demonstrate that tolerating a small increase in delay can yield significant rate savings. We then consider two simpler quantization policies to investigate the scaling properties of the rate-delay tradeoffs. Using a combination of these simpler policies, the performance of the optimal policy can be closely approximated with lower computational costs.
For the general rate-distortion problem, an algorithm for building optimized distributed quantizers was given wherein the CEO uses the quantized observations to perform hypothesis testing @cite_17 . A two-stage distributed scheme was proposed for the case when the users each have a noisy observation of the same source and the CEO needs to reproduce the source with a bounded expected distortion @cite_9 @cite_13 . A first stage of local quantization is followed by a second stage of encoding the quantized signals based on Slepian-Wolf coding using syndrome codes @cite_13 or index reuse techniques @cite_9 . In their treatment of the CEO problem with non-regular source distributions, Vempaty and Varshney provided an achievability proof that utilized a layered approach of quantization followed by entropy coding @cite_31 .
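To see why the layered "quantize, then entropy code" architecture saves rate over fixed-length indexing, the toy sketch below quantizes a Gaussian source on a uniform 8-level grid and measures the empirical entropy of the indices, which an ideal entropy coder approaches. This illustrates the general idea only, not the constructions in the cited papers; the grid and parameters are arbitrary choices.

```python
import math
import random

def empirical_entropy(indices):
    """Empirical entropy (bits/sample) of a sequence of quantizer
    indices; an ideal entropy coder approaches this rate."""
    counts = {}
    for i in indices:
        counts[i] = counts.get(i, 0) + 1
    n = len(indices)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(2)
# non-uniform source: Gaussian samples quantized on a uniform 8-level grid
samples = [random.gauss(0.0, 1.0) for _ in range(10000)]
step = 1.0
indices = [min(7, max(0, int((x + 4.0) // step))) for x in samples]
h = empirical_entropy(indices)
print(h, math.log2(8))   # entropy-coded rate vs fixed-length cost
```

Because the Gaussian concentrates mass in the central cells, the entropy-coded rate comes in well under the 3 bits/sample a fixed-length index would cost.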
{ "cite_N": [ "@cite_31", "@cite_9", "@cite_13", "@cite_17" ], "mid": [ "2043312033", "2027315598", "2139519968", "2163382065" ], "abstract": [ "We consider the CEO problem for non-regular source distributions (such as uniform or truncated Gaussian). A group of agents observe independently corrupted versions of data and transmit coded versions over rate-limited links to a CEO. The CEO then estimates the underlying data based on the received coded observations. Agents are not allowed to convene before transmitting their observations. This formulation is motivated by the practical problem of a firm’s CEO estimating (non-regular) beliefs about a sequence of events, before acting on them. Agents’ observations are modeled as jointly distributed with the underlying data through a given conditional probability density function. We study the asymptotic behavior of the minimum achievable mean squared error distortion at the CEO in the limit when the number of agents @math and the sum rate @math tend to infinity. We establish a @math convergence of the distortion, an intermediate regime of performance between the exponential behavior in discrete CEO problems [Berger, Zhang, and Viswanathan (1996)], and the @math behavior in Gaussian CEO problems [Viswanathan and Berger (1997)]. Achievability is proved by a layered architecture with scalar quantization, distributed entropy coding, and midrange estimation. The converse is proved using the Bayesian Chazan–Zakai–Ziv bound.", "An important class of engineering problems involves sensing an environment and making estimates based on the phenomena sensed. In the traditional model of this problem, the sensors' observations are available to the estimator without alteration. There is growing interest in distributed sensing systems in which several observations are communicated to the estimator over channels of limited capacity. The observations must be separately encoded so that the target can be estimated with minimum distortion.
Two questions are addressed for a special case of this problem wherein there are two sensors which observe noisy data and communicate with a single estimator: 1) if the encoder is unlimited in complexity, what communication rates and distortions can be achieved, 2) if the encoder must be a quantizer (a mapping of a single observation sample into a digital output), how can it be designed for good performance? The first question is treated by the techniques of information theory. It is proved that a given combination of rates and distortion is achievable if there exist degraded versions of the observations that satisfy certain formulas. The second question is treated by two approaches. In the first, the outputs of the quantizers undergo a second stage of encoding which exploits their correlation to reduce the output rate. Algorithms which design the second stage are presented and tested. The second approach is based on the distributional distance, a measure of dissimilarity between two probability distributions. An algorithm to modify a quantizer for increased distributional distance is derived and tested.", "Distributed nature of the sensor network architecture introduces unique challenges and opportunities for collaborative networked signal processing techniques that can potentially lead to significant performance gains. Many evolving low-power sensor network scenarios need to have high spatial density to enable reliable operation in the face of component node failures as well as to facilitate high spatial localization of events of interest. This induces a high level of network data redundancy, where spatially proximal sensor readings are highly correlated. We propose a new way of removing this redundancy in a completely distributed manner, i.e., without the sensors needing to talk to one another.
Our constructive framework for this problem is dubbed DISCUS (distributed source coding using syndromes) and is inspired by fundamental concepts from information theory. We review the main ideas, provide illustrations, and give the intuition behind the theory that enables this framework. We present a new domain of collaborative information communication and processing through the framework on distributed source coding. This framework enables highly effective and efficient compression across a sensor network without the need to establish inter-node communication, using well-studied and fast error-correcting coding algorithms.", "In a decentralized hypothesis testing network, several peripheral nodes observe an environment and communicate their observations to a central node for the final decision. The presence of capacity constraints introduces theoretical and practical problems. The following problem is addressed: given that the peripheral encoders that satisfy these constraints are scalar quantizers, how should they be designed in order that the central test to be performed on their output indices is most powerful? The scheme is called cooperative design-separate encoding since the quantizers process separate observations but have a common goal; they seek to maximize a system-wide performance measure. The Bhattacharyya distance of the joint index space as such a criterion is suggested, and a design algorithm to optimize arbitrarily many quantizers cyclically is proposed. A simplified version of the algorithm, namely an independent design-separate encoding scheme, where the correlation is either absent or neglected for the sake of simplicity, is outlined. Performances are compared through worked examples." ] }
1505.04202
2204358493
In many resource allocation problems, a centralized controller needs to award some resource to a user selected from a collection of distributed users with the goal of maximizing the utility the user would receive from the resource. This can be modeled as the controller computing an extremum of the distributed users’ utilities. The overhead rate necessary to enable the controller to reproduce the users’ local state can be prohibitively high. An approach to reduce this overhead is interactive communication wherein rate savings are achieved by tolerating an increase in delay. In this paper, we consider the design of a simple achievable scheme based on successive refinements of scalar quantization at each user. The optimal quantization policy is computed via a dynamic program and we demonstrate that tolerating a small increase in delay can yield significant rate savings. We then consider two simpler quantization policies to investigate the scaling properties of the rate-delay tradeoffs. Using a combination of these simpler policies, the performance of the optimal policy can be closely approximated with lower computational costs.
Interactive communication is a scheme that allows message passing over multiple rounds. At each round, the communicating parties are allowed to send messages based on what they have received in previous rounds as well as their local source observation @cite_30 . The interactive communication literature is roughly divided into two categories: communication complexity and interactive information theory.
{ "cite_N": [ "@cite_30" ], "mid": [ "2074493852" ], "abstract": [ "Let (X_i, Y_i), i = 1, 2, ..., be a sequence of independent, identically distributed bivariate random variables and consider the following communication situation. The X component of the process is observed at some location, say A, while the Y component is observed at a different location, say B. The X(Y) component of the process has to be reproduced at location B(A). It is desired to find the amount of information that should be exchanged between the two locations, so that the distortions incurred will not exceed some predetermined tolerance. A \"single-letter characterization\" of a certain region R^K ⊂ R^4 of rates and distortions is given. This region contains (all, and only) the achievable rates and distortions for the special case where block coding is used and the information is conveyed through a one-way link that can switch direction only K times per block." ] }
1505.04202
2204358493
In many resource allocation problems, a centralized controller needs to award some resource to a user selected from a collection of distributed users with the goal of maximizing the utility the user would receive from the resource. This can be modeled as the controller computing an extremum of the distributed users’ utilities. The overhead rate necessary to enable the controller to reproduce the users’ local state can be prohibitively high. An approach to reduce this overhead is interactive communication wherein rate savings are achieved by tolerating an increase in delay. In this paper, we consider the design of a simple achievable scheme based on successive refinements of scalar quantization at each user. The optimal quantization policy is computed via a dynamic program and we demonstrate that tolerating a small increase in delay can yield significant rate savings. We then consider two simpler quantization policies to investigate the scaling properties of the rate-delay tradeoffs. Using a combination of these simpler policies, the performance of the optimal policy can be closely approximated with lower computational costs.
Kaspi determined the two-party information-theoretic limit for lossy compression via interactive communication @cite_30 . This line of research was continued by Ma and Ishwar, who showed by example that the minimum rate for a given distortion constraint can be arbitrarily smaller than the non-interactive minimum rate obtaining the same distortion @cite_23 . In follow-up work, Ma and Ishwar showed, again by example, that for the DFC problem the minimum sum-rate for losslessly computing a function can be smaller than the non-interactive rate; even infinitely many rounds of interaction may still improve the rate region @cite_28 .
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_23" ], "mid": [ "2074493852", "2154175667", "2056521209" ], "abstract": [ "Let (X_i, Y_i), i = 1, 2, ..., be a sequence of independent, identically distributed bivariate random variables and consider the following communication situation. The X component of the process is observed at some location, say A, while the Y component is observed at a different location, say B. The X(Y) component of the process has to be reproduced at location B(A). It is desired to find the amount of information that should be exchanged between the two locations, so that the distortions incurred will not exceed some predetermined tolerance. A \"single-letter characterization\" of a certain region R^K ⊂ R^4 of rates and distortions is given. This region contains (all, and only) the achievable rates and distortions for the special case where block coding is used and the information is conveyed through a one-way link that can switch direction only K times per block.", "A two-terminal interactive distributed source coding problem with alternating messages for function computation at both locations is studied. For any number of messages, a computable characterization of the rate region is provided in terms of single-letter information measures. While interaction is useless in terms of the minimum sum-rate for lossless source reproduction at one or both locations, the gains can be arbitrarily large for function computation even when the sources are independent. For a class of sources and functions, interaction is shown to be useless, even with infinite messages, when a function has to be computed at only one location, but is shown to be useful, if functions have to be computed at both locations. For computing the Boolean AND function of two independent Bernoulli sources at both locations, an achievable infinite-message sum-rate with infinitesimal-rate messages is derived in terms of a 2-D definite integral and a rate-allocation curve. 
The benefit of interaction is highlighted in the multiterminal function computation problem through examples. For networks with a star topology, multiple rounds of interactive coding are shown to decrease the scaling law of the total network rate by an order of magnitude as the network grows.", "In 1985 Kaspi provided a single-letter characterization of the sum-rate-distortion function for a two-way lossy source coding problem in which two terminals send multiple messages back and forth with the goal of reproducing each other's sources. Yet, the question remained whether more messages can strictly improve the sum-rate-distortion function. Viewing the sum-rate as a functional of the distortions and the joint source distribution and leveraging its convex-geometric properties, we construct an example which shows that two messages can strictly improve the one-message (Wyner-Ziv) rate-distortion function. The example also shows that the ratio of the one-message rate to the two-message sum-rate can be arbitrarily large and simultaneously the ratio of the backward rate to the forward rate in the two-message sum-rate can be arbitrarily small." ] }
1505.04202
2204358493
In many resource allocation problems, a centralized controller needs to award some resource to a user selected from a collection of distributed users with the goal of maximizing the utility the user would receive from the resource. This can be modeled as the controller computing an extremum of the distributed users’ utilities. The overhead rate necessary to enable the controller to reproduce the users’ local state can be prohibitively high. An approach to reduce this overhead is interactive communication wherein rate savings are achieved by tolerating an increase in delay. In this paper, we consider the design of a simple achievable scheme based on successive refinements of scalar quantization at each user. The optimal quantization policy is computed via a dynamic program and we demonstrate that tolerating a small increase in delay can yield significant rate savings. We then consider two simpler quantization policies to investigate the scaling properties of the rate-delay tradeoffs. Using a combination of these simpler policies, the performance of the optimal policy can be closely approximated with lower computational costs.
These results motivate us to consider interaction for the DFC problem. In earlier work, we considered the non-interactive DFC problem of computing an extremum of independent users' utilities. We developed distributed scalar quantizers with rate-distortion performance close to the rate-distortion limits @cite_14 @cite_27 . We provided an achievable interactive communication scheme where the CEO communicates a threshold to the users at each round and each user replies with a single bit indicating whether its value is above or below the threshold @cite_27 @cite_34 . This scheme can be thought of as a simple two-bin quantizer selected by the CEO at the beginning of each round; in the present work we extend this by allowing the CEO to select a multi-bin quantizer in each round.
{ "cite_N": [ "@cite_27", "@cite_14", "@cite_34" ], "mid": [ "2950734754", "2074982335", "2100311865" ], "abstract": [ "A key aspect of many resource allocation problems is the need for the resource controller to compute a function, such as the max or arg max, of the competing users metrics. Information must be exchanged between the competing users and the resource controller in order for this function to be computed. In many practical resource controllers the competing users' metrics are communicated to the resource controller, which then computes the desired extremization function. However, in this paper it is shown that information rate savings can be obtained by recognizing that controller only needs to determine the result of this extremization function. If the extremization function is to be computed losslessly, the rate savings are shown in most cases to be at most 2 bits independent of the number of competing users. Motivated by the small savings in the lossless case, simple achievable schemes for both the lossy and interactive variants of this problem are considered. It is shown that both of these approaches have the potential to realize large rate savings, especially in the case where the number of competing users is large. For the lossy variant, it is shown that the proposed simple achievable schemes are in fact close to the fundamental limit given by the rate distortion function.", "Efficient downlink resource allocation (e.g., subbands in OFDMA LTE) requires channel state information (e.g., subband gains) local to each user be transmitted to the base station (BS). Lossy encoding of the relevant state may result in suboptimal resource allocations by the BS, the performance cost of which may be captured by a suitable distortion measure. This problem is an indirect distributed lossy source coding problem with the function to be computed representing the optimal resource allocation, and the distortion measuring the cost of suboptimal allocations. 
In this paper we investigate the use of distributed scalar quantizers for lossy encoding of state, where the BS wishes to compute the index of the user with the largest gain on each subband. We prove the superiority of a heterogeneous (across users) quantizer design over the optimal homogeneous quantizer design, even though the source variables are i.i.d.", "For the resource allocation problem in a multiuser OFDMA system, we propose an interactive communication scheme between the base station and the users with the assumption that this system utilizes a rateless code for data transmission. We describe the problem of minimizing the overhead measured in the number of bits that must be exchanged required by the interactive scheme, and solve it with dynamic programming. We present simulation results showing the reduction of overhead information enabled by the interactive scheme relative to a straightforward one-way scheme in which each user reports its own channel quality." ] }
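The two-bin interactive scheme described above (the CEO broadcasts a threshold, each user answers with one feedback bit) can be simulated as a bisection search for the arg max. This is an illustrative toy, not the authors' optimized quantization policy; the parameter choices are ours:

```python
def interactive_argmax(values, rounds=20, lo=0.0, hi=1.0):
    """Find the index of the largest value via threshold feedback.

    Each round the CEO broadcasts a threshold t; every still-active
    user replies with 1 bit (above/below t). Users above t remain
    active; if nobody is above, the CEO lowers the search interval.
    Returns (candidate index set, total feedback bits used).
    Assumes values lie in [lo, hi]; ties leave several candidates.
    """
    active = set(range(len(values)))
    bits = 0
    for _ in range(rounds):
        if len(active) == 1 or hi - lo < 1e-12:
            break
        t = (lo + hi) / 2  # CEO's broadcast threshold
        above = {i for i in active if values[i] > t}
        bits += len(active)  # one feedback bit per active user
        if above:
            active, lo = above, t   # the max lies above t
        else:
            hi = t                  # the max lies below t
    return active, bits
```

With values [0.1, 0.9, 0.4, 0.7] the candidate set collapses to user 1 after two threshold rounds and six feedback bits, versus reporting all four values at full precision up front.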
1505.04202
2204358493
In many resource allocation problems, a centralized controller needs to award some resource to a user selected from a collection of distributed users with the goal of maximizing the utility the user would receive from the resource. This can be modeled as the controller computing an extremum of the distributed users’ utilities. The overhead rate necessary to enable the controller to reproduce the users’ local state can be prohibitively high. An approach to reduce this overhead is interactive communication wherein rate savings are achieved by tolerating an increase in delay. In this paper, we consider the design of a simple achievable scheme based on successive refinements of scalar quantization at each user. The optimal quantization policy is computed via a dynamic program and we demonstrate that tolerating a small increase in delay can yield significant rate savings. We then consider two simpler quantization policies to investigate the scaling properties of the rate-delay tradeoffs. Using a combination of these simpler policies, the performance of the optimal policy can be closely approximated with lower computational costs.
This interactive coding scheme can be understood as a type of posterior matching @cite_18 . Shayevitz and Feder considered the problem of point-to-point communication over a memoryless channel with noiseless feedback from the receiver. They developed a capacity-achieving transmission scheme in which the transmitter observes, via feedback, the receiver's a-posteriori density of the message and then provides the statistically independent information the receiver is still missing. In the present work, the focus is on minimizing the sum-rate from a collection of sources; however, the feedback from the CEO is used by the users in determining what is transmitted in the next round.
{ "cite_N": [ "@cite_18" ], "mid": [ "2011914761" ], "abstract": [ "In this paper, we introduce a fundamental principle for optimal communication over general memoryless channels in the presence of noiseless feedback, termed posterior matching. Using this principle, we devise a (simple, sequential) generic feedback transmission scheme suitable for a large class of memoryless channels and input distributions, achieving any rate below the corresponding mutual information. This provides a unified framework for optimal feedback communication in which the Horstein scheme (BSC) and the Schalkwijk-Kailath scheme (AWGN channel) are special cases. Thus, as a corollary, we prove that the Horstein scheme indeed attains the BSC capacity, settling a longstanding conjecture. We further provide closed form expressions for the error probability of the scheme over a range of rates, and derive the achievable rates in a mismatch setting where the scheme is designed according to the wrong channel model. Several illustrative examples of the posterior matching scheme for specific channels are given, and the corresponding error probability expressions are evaluated. The proof techniques employed utilize novel relations between information rates and contraction properties of iterated function systems." ] }
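Posterior matching over a binary symmetric channel can be illustrated with a Horstein-style toy: the receiver feeds back its posterior, and the transmitter sends whether the message lies at or above the posterior median. This is a heavily simplified sketch of the idea, not the exact scheme of the cited paper; all parameters and names are ours:

```python
import random

def horstein_bsc(message_idx, n_points=128, p=0.05, rounds=120, seed=0):
    """Toy posterior-matching (Horstein-style) transmission over a BSC.

    Each round the transmitter sends one bit: whether the message lies
    at or above the receiver's posterior median (upper-median rule).
    The receiver, knowing the rule, does a Bayes update on the noisy
    bit. Illustrative sketch only, not the cited paper's exact scheme.
    """
    rng = random.Random(seed)
    post = [1.0 / n_points] * n_points   # receiver posterior (fed back)
    for _ in range(rounds):
        # upper median: smallest index whose CDF strictly exceeds 1/2
        cdf, med = 0.0, n_points - 1
        for i, w in enumerate(post):
            cdf += w
            if cdf > 0.5:
                med = i
                break
        b = 1 if message_idx >= med else 0        # transmitted bit
        y = b ^ (1 if rng.random() < p else 0)    # BSC flips w.p. p
        # Bayes update: weight each candidate by the likelihood of y
        for i in range(n_points):
            bi = 1 if i >= med else 0
            post[i] *= (1 - p) if bi == y else p
        z = sum(post)
        post = [w / z for w in post]
    return max(range(n_points), key=lambda i: post[i])  # MAP decode
```

With p=0 the update zeroes out inconsistent candidates and the scheme reduces to plain bisection, decoding exactly in about log2(n_points) rounds.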
1505.04243
2104167465
In this paper we analyze boosting algorithms in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm (FS @math ) and least squares boosting (LS-Boost( @math )), can be viewed as subgradient descent to minimize the loss function defined as the maximum absolute correlation between the features and residuals. We also propose a modification of FS @math that yields an algorithm for the Lasso, and that may be easily extended to an algorithm that computes the Lasso path for different values of the regularization parameter. Furthermore, we show that these new algorithms for the Lasso may also be interpreted as the same master algorithm (subgradient descent), applied to a regularized version of the maximum absolute correlation loss function. We derive novel, comprehensive computational guarantees for several boosting algorithms in linear regression (including LS-Boost( @math ) and FS @math ) by using techniques of modern first-order methods in convex optimization. Our computational guarantees inform us about the statistical properties of boosting algorithms. In particular they provide, for the first time, a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm with a prespecified learning rate for a fixed but arbitrary number of iterations, for any dataset.
In @cite_24 , the authors also point out that boosting algorithms often lead to a large collection of nonzero coefficients. They suggest reducing the complexity of the model by some form of "post-processing" technique: one such proposal is to apply a regularization on the selected set of coefficients.
{ "cite_N": [ "@cite_24" ], "mid": [ "12010550" ], "abstract": [ "Learning a function of many arguments is viewed from the perspective of high– dimensional numerical quadrature. It is shown that many of the popular ensemble learning procedures can be cast in this framework. In particular randomized methods, including bagging and random forests, are seen to correspond to random Monte Carlo integration methods each based on particular importance sampling strategies. Non random boosting methods are seen to correspond to deterministic quasi Monte Carlo integration techniques. This view helps explain some of their properties and suggests modifications to them that can substantially improve their accuracy while dramatically improving computational performance." ] }
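The incremental forward stagewise algorithm FS_ε analyzed in the abstract above is only a few lines; a minimal pure-Python sketch (data layout and parameter choices are ours):

```python
def forward_stagewise(X, y, eps=0.01, iters=1000):
    """Incremental forward stagewise regression (FS_eps): at each step,
    nudge the coefficient of the feature most correlated with the
    current residual by +/- eps -- the 'slow learning' boosting update.
    X is a list of rows; y is the response vector."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    r = list(y)  # residual
    for _ in range(iters):
        # inner product of each feature with the residual
        corr = [sum(X[i][j] * r[i] for i in range(n)) for j in range(p)]
        j = max(range(p), key=lambda k: abs(corr[k]))
        delta = eps if corr[j] > 0 else -eps
        beta[j] += delta
        for i in range(n):
            r[i] -= delta * X[i][j]
    return beta
```

On an orthogonal design each coefficient ends up within eps of its least-squares value, the slow-learning behavior that the abstract reinterprets as subgradient descent on the maximum absolute correlation loss.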
1505.04243
2104167465
In this paper we analyze boosting algorithms in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm (FS @math ) and least squares boosting (LS-Boost( @math )), can be viewed as subgradient descent to minimize the loss function defined as the maximum absolute correlation between the features and residuals. We also propose a modification of FS @math that yields an algorithm for the Lasso, and that may be easily extended to an algorithm that computes the Lasso path for different values of the regularization parameter. Furthermore, we show that these new algorithms for the Lasso may also be interpreted as the same master algorithm (subgradient descent), applied to a regularized version of the maximum absolute correlation loss function. We derive novel, comprehensive computational guarantees for several boosting algorithms in linear regression (including LS-Boost( @math ) and FS @math ) by using techniques of modern first-order methods in convex optimization. Our computational guarantees inform us about the statistical properties of boosting algorithms. In particular they provide, for the first time, a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm with a prespecified learning rate for a fixed but arbitrary number of iterations, for any dataset.
A parallel line of work in machine learning @cite_43 explores the scope of boosting-like algorithms on @math -regularized versions of different loss functions arising mainly in the context of classification problems. The proposal of @cite_43 , when adapted to the least squares regression problem with @math -regularization penalty, leads to the following optimization problem: for which the authors @cite_43 employ greedy coordinate descent methods. Like the boosting algorithms considered herein, at each iteration the algorithm studied by @cite_43 selects a certain coefficient @math to update, leaving all other coefficients @math unchanged. The amount with which to update the coefficient @math is determined by fully optimizing the loss function with respect to @math , again holding all other coefficients constant (note that one recovers @math if @math ). This way of updating @math leads to a simple soft-thresholding operation @cite_7 and is different from forward stagewise update rules. In contrast, the boosting algorithm @math that we propose here is based on subgradient descent on the dual of the problem , i.e., problem .
{ "cite_N": [ "@cite_43", "@cite_7" ], "mid": [ "2096291962", "191129667" ], "abstract": [ "We derive generalizations of AdaBoost and related gradient-based coordinate descent methods that incorporate sparsity-promoting penalties for the norm of the predictor that is being learned. The end result is a family of coordinate descent algorithms that integrate forward feature induction and back-pruning through regularization and give an automatic stopping criterion for feature induction. We study penalties based on the l1, l2, and l∞ norms of the predictor and introduce mixed-norm penalties that build upon the initial penalties. The mixed-norm regularizers facilitate structural sparsity in parameter space, which is a useful property in multiclass prediction and other related tasks. We report empirical results that demonstrate the power of our approach in building accurate and structurally sparse models.", "Much recent effort has sought asymptotically minimax methods for recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A now rich and complex body of work develops nearly or exactly minimax estimators for an array of interesting problems. Unfortunately, the results have rarely moved into practice, for a variety of reasons, among them being similarity to known methods, computational intractability and lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data: translate the empirical wavelet coefficients towards the origin by an amount √(2 log n)/√n. The proposal differs from those in current use, is computationally practical and is spatially adaptive; it thus avoids several of the previous objections. 
Further, the method is nearly minimax both for a wide variety of loss functions (pointwise error, global error measured in L_p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder and Sobolev classes, and bounded variation. This is a much broader near optimality than anything previously proposed: we draw loose parallels with near optimality in robustness and also with the broad near-eigenfunction properties of wavelets themselves. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity." ] }
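The soft-thresholding update mentioned in the related-work paragraph above has a closed form, and fully minimizing one coordinate of the l1-penalized least-squares objective is exactly a soft-threshold. A minimal pure-Python sketch (residuals are recomputed naively for clarity; the data layout is ours):

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator: argmin_b 0.5*(b - z)**2 + lam*|b|."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def coordinate_descent_lasso(X, y, lam, iters=200):
    """Cyclic coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1.
    Each coordinate is fully minimized (one soft-threshold) while the
    others are held fixed -- the update style discussed above."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j
            r_j = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                   for i in range(n)]
            zj = sum(X[i][j] * r_j[i] for i in range(n))
            nj = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(zj, lam) / nj
    return beta
```

Unlike the ±eps forward stagewise step, this update can set a coefficient exactly to zero, which is where the sparsity comes from.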
1505.03996
1715018960
In the context of using norms for controlling multiagent systems, a vitally important question that has not yet been addressed in the literature is the development of mechanisms for monitoring norm compliance under partial action observability. This paper proposes the reconstruction of unobserved actions to tackle this problem. In particular, we formalize the problem of reconstructing unobserved actions, and propose an information model and algorithms for monitoring norms under partial action observability using two different processes for reconstructing unobserved actions. Our evaluation shows that reconstructing unobserved actions increases significantly the number of norm violations and fulfilments detected.
Previous work on norms for regulating MAS proposed control mechanisms for norms to have an effective influence on agent behaviours @cite_22 . These mechanisms are classified into two main categories @cite_22 : regimentation mechanisms, which make the violation of norms impossible; and enforcement mechanisms, which are applied after the detection of norm violations and fulfilments, reacting upon them.
{ "cite_N": [ "@cite_22" ], "mid": [ "139117100" ], "abstract": [ "The viability of the application of the e-Institution paradigm for obtaining overall desired behavior in open multiagent systems (MAS) lies in the possibility of bringing the norms of the institution to have an actual impact on the MAS. Institutional norms have to be implementedin the society. The paper addresses two possible views on implementing norms, the so-called regimentationof norms, and the enforcementof norms, with particular attention to this last one. Aim of the paper is to provide a theory for the understanding of the notion of enforcement and for the design of enforcement mechanisms in e-Institutions." ] }
1505.03996
1715018960
In the context of using norms for controlling multiagent systems, a vitally important question that has not yet been addressed in the literature is the development of mechanisms for monitoring norm compliance under partial action observability. This paper proposes the reconstruction of unobserved actions to tackle this problem. In particular, we formalize the problem of reconstructing unobserved actions, and propose an information model and algorithms for monitoring norms under partial action observability using two different processes for reconstructing unobserved actions. Our evaluation shows that reconstructing unobserved actions increases significantly the number of norm violations and fulfilments detected.
Regimentation mechanisms prevent agents from performing forbidden actions (or, conversely, force agents to perform obligatory actions) by mediating access to resources and the communication channel, as in Electronic Institutions (EIs) @cite_17 . However, the regimentation of all actions is often difficult or impossible. Furthermore, it is sometimes preferable to allow agents to make flexible decisions about norm compliance @cite_8 . In response to this need, enforcement mechanisms were developed. Proposals on the enforcement of norms can be classified according to the entity that monitors whether norms are fulfilled. Specifically, norm compliance can be monitored either by the agents themselves or by monitoring entities provided by the underlying infrastructure.
{ "cite_N": [ "@cite_8", "@cite_17" ], "mid": [ "1593757267", "2165208522" ], "abstract": [ "The right framework for studying normative issues in infosociety and MAS is that of deliberate or spontaneous social order, and intended or unintended, centralised or decentralised forms of social control. For effectively supporting human cooperation it is necessary to \"incorporate\" social and normative knowledge in intelligent technology; computers should deal with--and thus partially \"understand\"--permissions, obligations, power, roles, commitments, trust, etc. Here only one facet of this problem is considered: the spontaneous and decentralised norm creation, and normative monitoring and intervention. Cognitive aspects of spontaneous conventions, implicit commitments, tacit agreements, and the bottom-up issuing and spreading of norms are discussed. The transition from \"face to face\" normative relationships to some stronger constraints on agents' action, and to institutions and authority, and the possibility of a consequent increase of trust, are explored. In particular, I focus on the transition from two party trust, right, permission, and commitment, to three party relationships, where some witness or some enforcing authority is introduced. In this perspective of 'formalising the informal', i.e., the interpersonal unofficial normative matter, I discuss (also in order to stress dangers of computer-based formalisation and enforcement of rules) the important phenomenon of functional (collaborative) systematic violation of rules in organisation and cooperation, and the possible emergence of a \"convention to violate\".", "The design and development of open multi-agent systems (MAS) is a key aspect in agent research. We advocate that they can be realised as electronic institutions. In this paper we focus on the execution of electronic institutions by introducing AMELI, an infrastructure that mediates agents' interactions while enforcing institutional rules. 
An innovative feature of AMELI is that it is of general purpose (it can interpret any institution specification), and therefore it can be regarded as domain-independent. The combination of ISLANDER [5] and AMELI provides full support for the design and development of electronic institutions." ] }
1505.03996
1715018960
In the context of using norms for controlling multiagent systems, a vitally important question that has not yet been addressed in the literature is the development of mechanisms for monitoring norm compliance under partial action observability. This paper proposes the reconstruction of unobserved actions to tackle this problem. In particular, we formalize the problem of reconstructing unobserved actions, and propose an information model and algorithms for monitoring norms under partial action observability using two different processes for reconstructing unobserved actions. Our evaluation shows that reconstructing unobserved actions increases significantly the number of norm violations and fulfilments detected.
Regarding agent-based enforcement, this approach is characterized by the fact that norm violations and fulfilments are monitored by agents that are involved in an interaction @cite_26 @cite_0 , or by other agents that observe an interaction in which they are not directly involved @cite_13 @cite_23 @cite_1 . The main drawback of proposals based on agent monitoring is that norm monitoring and enforcement must be implemented by agent programmers.
{ "cite_N": [ "@cite_26", "@cite_1", "@cite_0", "@cite_23", "@cite_13" ], "mid": [ "1591644483", "1978708265", "1499185662", "2096143896", "92376239" ], "abstract": [ "Interaction protocols are specific, often standard, constraints on the behaviors of autonomous agents in a multiagent system. Protocols are essential to the functioning of open systems, such as those that arise in most interesting web applications. A variety of common protocols in negotiation and electronic commerce are best treated as commitment protocols, which are defined, or at least analyzed, in terms of the creation, satisfaction, or manipulation of the commitments among the participating agents. When protocols are employed in open environments, such as the Internet, they must be executed by agents that behave more or less autonomously and whose internal designs are not known. In such settings, therefore, there is a risk that the participating agents may fail to comply with the given protocol. Without a rigorous means to verify compliance, the very idea of protocols for interoperation is subverted. We develop an approach for testing whether the behavior of an agent complies with a commitment protocol. Our approach requires the specification of commitment protocols in temporal logic, and involves a novel way of synthesizing and applying ideas from distributed computing and logics of program.", "In this paper, we study the problem of reaching a consensus among all the agents in the networked control systems (NCS) in the presence of misbehaving agents. A reputation-based resilient distributed control algorithm is first proposed for the leader-follower consensus network. The proposed algorithm embeds a resilience mechanism that includes four phases (detection, mitigation, identification, and update) into the control process in a distributed manner. 
At each phase, every agent only uses local and one-hop neighbors' information to identify and isolate the misbehaving agents, and even compensate their effect on the system. We then extend the proposed algorithm to the leaderless consensus network by introducing and adding two recovery schemes (rollback and excitation recovery) into the current framework to guarantee the accurate convergence of the well-behaving agents in NCS. The effectiveness of the proposed method is demonstrated through case studies in multirobot formation control and wireless sensor networks.", "One aspect of the development of e-market services for the facilitation of business-to-business electronic commerce concerns the provision of automated support for contract performance assessment. Assessing the parties' performance of an agreement, once it comes into force, requires reasoning with the contract terms (obligations, rights, powers and other legal relations that obtain between parties) as parties go about conducting their business exchange, sometimes complying and sometimes deviating from their pre-agreed prescribed behaviour. Compliance with prescribed behaviour is typically evaluated individually by each partner to an agreement and where parties' views differ, disputes arise that require some form of resolution.", "In a multiagent system where norms are used to regulate the actions agents ought to execute, some agents may decide not to abide by the norms if this can benefit them. Norm enforcement mechanisms are designed to counteract these benefits and thus the motives for not abiding by the norms. In this work we propose a distributed mechanism through which agents in the multiagent system that do not abide by the norms can be ostracised by their peers. An ostracised agent cannot interact anymore and looses all benefits from future interactions. We describe a model for multiagent systems structured as networks of agents, and a behavioural model for the agents in such systems. 
Furthermore, we provide analytical results which show that there exists an upper bound to the number of potential norm violations when all the agents exhibit certain behaviours. We also provide experimental results showing that both stricter enforcement behaviours and larger percentage of agents exhibiting these behaviours reduce the number of norm violations, and that the network topology influences the number of norm violations. These experiments have been executed under varying scenarios with different values for the number of agents, percentage of enforcers, percentage of violators, network topology, and agent behaviours. Finally, we give examples of applications where the enforcement techniques we provide could be used.", "Behavioral norms are key ingredients that allow agent coordination where societal laws do not sufficiently constrain agent behaviors. Whereas social laws need to be enforced in a top-down manner, norms evolve in a bottom-up manner and are typically more self-enforcing. While effective norms can significantly enhance performance of individual agents and agent societies, there has been little work in multiagent systems on the formation of social norms. We propose a model that supports the emergence of social norms via learning from interaction experiences. In our model, individual agents repeatedly interact with other agents in the society over instances of a given scenario. Each interaction is framed as a stage game. An agent learns its policy to play the game over repeated interactions with multiple agents. We term this mode of learning social learning, which is distinct from an agent learning from repeated interactions against the same player. We are particularly interested in situations where multiple action combinations yield the same optimal payoff. The key research question is to find out if the entire population learns to converge to a consistent norm.
In addition to studying such emergence of social norms among homogeneous learners via social learning, we study the effects of heterogeneous learners, population size, multiple social groups, etc." ] }
1505.03996
1715018960
In the context of using norms for controlling multiagent systems, a vitally important question that has not yet been addressed in the literature is the development of mechanisms for monitoring norm compliance under partial action observability. This paper proposes the reconstruction of unobserved actions to tackle this problem. In particular, we formalize the problem of reconstructing unobserved actions, and propose an information model and algorithms for monitoring norms under partial action observability using two different processes for reconstructing unobserved actions. Our evaluation shows that reconstructing unobserved actions significantly increases the number of norm violations and fulfilments detected.
Regarding , several authors proposed developing entities at the infrastructure level that are in charge of both monitoring and enforcing norms. Cardoso & Oliveira @cite_25 proposed an architecture in which the monitoring and enforcement of norms is performed by a single institutional entity. This centralized implementation becomes a performance bottleneck when dealing with a large number of agents. To address this limitation of centralized approaches, distributed mechanisms for the institutional enforcement of norms were proposed in @cite_21 @cite_16 @cite_18 @cite_6 .
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_6", "@cite_16", "@cite_25" ], "mid": [ "2120854308", "2068310911", "2143297560", "2023870983", "2109380713" ], "abstract": [ "The behaviours of autonomous agents may deviate from those deemed to be for the good of the societal systems of which they are a part. Norms have therefore been proposed as a means to regulate agent behaviours in open and dynamic systems, where these norms specify the obliged, permitted and prohibited behaviours of agents. Regulation can effectively be achieved through use of enforcement mechanisms that result in a net loss of utility for an agent in cases where the agent's behaviour fails to comply with the norms. Recognition of compliance is thus crucial for achieving regulation. In this paper we propose a generic architecture for observation of agent behaviours, and recognition of these behaviours as constituting, or counting as, compliance or violation. The architecture deploys monitors that receive inputs from observers, and processes these inputs together with transition network representations of individual norms. In this way, monitors determine the fulfillment or violation status of norms. The paper also describes a proof of concept implementation and deployment of monitors in electronic contracting environments.", "Software technology is undergoing a transition from monolithic systems, constructed according to a single overall design, into conglomerates of semiautonomous, heterogeneous, and independently designed subsystems, constructed and managed by different organizations, with little, if any, knowledge of each other. Among the problems inherent in such conglomerates, none is more serious than the difficulty to control the activities of the disparate agents operating in it, and the difficulty for such agents to coordinate their activities with each other.
We argue that the nature of coordination and control required for such systems calls for the following principles to be satisfied: (1) coordination policies need to be enforced; (2) the enforcement needs to be decentralized; (3) coordination policies need to be formulated explicitly—rather than being implicit in the code of the agents involved—and they should be enforced by means of a generic, broad spectrum mechanism; and (4) it should be possible to deploy and enforce a policy incrementally, without exacting any cost from agents and activities not subject to it. We describe a mechanism called law-governed interaction (LGI), currently implemented by the Moses toolkit, which has been designed to satisfy these principles. We show that LGI is at least as general as a conventional centralized coordination mechanism (CCM), and that it is more scalable, and generally more efficient, than CCM.
In order to regulate the behaviour of autonomous agents that take part in multiple, related activities, we propose a normative model, the Normative Structure (NS), an artifact that is based on the propagation of normative positions (obligations, prohibitions, permissions), as consequences of agents' actions. Within a NS, conflicts may arise due to the dynamic nature of the MAS and the concurrency of agents' actions. However, ensuring conflict-freedom of a NS at design time is computationally intractable. We show this by formalising the notion of conflict, providing a mapping of NSs into Coloured Petri Nets and borrowing well-known theoretical results from that field. Since online conflict resolution is required, we present a tractable algorithm to be employed distributedly. We then demonstrate that this algorithm is paramount for the distributed enactment of a NS.", "Norms and institutions have been proposed to regulate multi-agent interactions. However, agents are intrinsically autonomous, and may thus decide whether to comply with norms. On the other hand, besides institutional norms, agents may adopt new norms by establishing commitments with other agents. In this paper, we address these issues by considering an electronic institution that monitors the compliance to norms in an evolving normative framework: norms are used both to regulate an existing environment and to define contracts that make agents' commitments explicit. In particular, we consider the creation of virtual organizations in which agents commit to certain cooperation efforts regulated by appropriate norms. The supervision of norm fulfillment is based on the notion of institutional reality, which is constructed by assigning powers to agents enacting institutional roles. Constitutive rules make a connection between the illocutions of those agents and institutional facts, certifying the occurrence of ..." ] }
1505.03851
2104108589
We describe a technique for drawing values from discrete distributions, such as sampling from the random variables of a mixture model, that avoids computing a complete table of partial sums of the relative probabilities. A table of alternate (“butterfly-patterned”) form is faster to compute, making better use of coalesced memory accesses. From this table, complete partial sums are computed on the fly during a binary search. Measurements using an NVIDIA Titan Black GPU show that for a sufficiently large number of clusters or topics (K > 200), this technique alone more than doubles the speed of a latent Dirichlet allocation (LDA) application already highly tuned for GPU execution.
Vose @cite_13 describes a preprocessing algorithm, with proof, that further reduces the preprocessing complexity of the alias method to @math . The tradeoff that permits this improvement is that the preprocessing algorithm makes no attempt to minimize the probability of accessing the array @math .
{ "cite_N": [ "@cite_13" ], "mid": [ "2116940697" ], "abstract": [ "Let xi be a random variable over a finite set with an arbitrary probability distribution. Improvements to a fast method of generating sample values for xi in constant time are suggested. The proposed modification reduces the time required for initialization to O(n). For a simple genetic algorithm, this improvement changes an O(g n ln n) algorithm into an O(g n) algorithm (where g is the number of generations, and n is the population size)." ] }
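As a concrete illustration of the table-based method described above, here is a minimal sketch of Vose's alias construction: O(n) preprocessing followed by O(1) draws. The function names and the small example are illustrative, not taken from the cited paper.

```python
import random

def build_alias(weights):
    """Vose's O(n) preprocessing: build probability and alias tables."""
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]   # average scaled weight is 1
    prob = [0.0] * n
    alias = [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s = small.pop()
        l = large.pop()
        prob[s] = scaled[s]
        alias[s] = l
        # Donate probability mass from the large entry to fill column s.
        scaled[l] = (scaled[l] + scaled[s]) - 1.0
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:   # leftovers are exactly 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def draw(prob, alias, rng=random):
    """O(1) sampling: pick a uniform column, then flip one biased coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

Each draw touches at most two table entries, which is what makes the alias method attractive once the table is built.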
1505.03851
2104108589
We describe a technique for drawing values from discrete distributions, such as sampling from the random variables of a mixture model, that avoids computing a complete table of partial sums of the relative probabilities. A table of alternate (“butterfly-patterned”) form is faster to compute, making better use of coalesced memory accesses. From this table, complete partial sums are computed on the fly during a binary search. Measurements using an NVIDIA Titan Black GPU show that for a sufficiently large number of clusters or topics (K > 200), this technique alone more than doubles the speed of a latent Dirichlet allocation (LDA) application already highly tuned for GPU execution.
Matias et al. @cite_4 describe a technique for preprocessing a set of relative probabilities into a set of trees, after which a sequence of intermixed generate (draw) and update operations can be performed, where an update operation changes just one of the relative probabilities; a single generate operation takes @math expected time, and a single update operation takes @math amortized expected time.
{ "cite_N": [ "@cite_4" ], "mid": [ "2060517203" ], "abstract": [ "We present and analyze an efficient new algorithm for generating a random variate distributed according to a dynamically changing set of weights. The algorithm can generate the random variate and update a weight each in @math expected time. (For all feasible values of @math , @math is at most 5.) The @math expected update time is amortized; in the worst-case the expected update time is @math . The algorithm is simple, practical, and easy to implement." ] }
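The tree structure of Matias et al. achieves tighter bounds than the sketch below; a Fenwick (binary indexed) tree is a simpler, widely used alternative that already supports intermixed draws and single-weight updates in O(log n) each, and illustrates the same generate/update trade-off. This is an illustrative sketch, not the cited paper's data structure.

```python
import random

class DynamicSampler:
    """Fenwick tree over weights: O(log n) draw and O(log n) update."""
    def __init__(self, weights):
        self.n = len(weights)
        self.weights = [0.0] * self.n
        self.tree = [0.0] * (self.n + 1)
        for i, w in enumerate(weights):
            self.update(i, w)

    def update(self, i, w):
        """Set weight i to w, adjusting partial sums along one tree path."""
        delta = w - self.weights[i]
        self.weights[i] = w
        j = i + 1
        while j <= self.n:
            self.tree[j] += delta
            j += j & (-j)

    def total(self):
        s, j = 0.0, self.n
        while j > 0:
            s += self.tree[j]
            j -= j & (-j)
        return s

    def draw(self, rng=random):
        """Draw index i with probability weights[i] / total()."""
        u = rng.random() * self.total()
        idx, bit = 0, 1
        while bit * 2 <= self.n:
            bit *= 2
        while bit:                     # descend, peeling off prefix sums
            nxt = idx + bit
            if nxt <= self.n and self.tree[nxt] <= u:
                u -= self.tree[nxt]
                idx = nxt
            bit >>= 1
        return idx                     # 0-based index
```

The descent replaces the explicit table of partial sums with on-the-fly sums, which is the same idea the host paper's binary search exploits.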
1505.03851
2104108589
We describe a technique for drawing values from discrete distributions, such as sampling from the random variables of a mixture model, that avoids computing a complete table of partial sums of the relative probabilities. A table of alternate (“butterfly-patterned”) form is faster to compute, making better use of coalesced memory accesses. From this table, complete partial sums are computed on the fly during a binary search. Measurements using an NVIDIA Titan Black GPU show that for a sufficiently large number of clusters or topics (K > 200), this technique alone more than doubles the speed of a latent Dirichlet allocation (LDA) application already highly tuned for GPU execution.
Li et al. @cite_19 describe a modified LDA topic modeling algorithm, which they call Metropolis-Hastings-Walker sampling, that uses Walker's alias method but amortizes the cost of constructing the table by drawing from the same table during multiple consecutive sampling iterations of a Metropolis-Hastings sampler; their paper justifies why it is acceptable to use a "slightly stale" alias table (their words) for the purposes of this application.
{ "cite_N": [ "@cite_19" ], "mid": [ "2052261215" ], "abstract": [ "Inference in topic models typically involves a sampling step to associate latent variables with observations. Unfortunately the generative model loses sparsity as the amount of data increases, requiring O(k) operations per word for k topics. In this paper we propose an algorithm which scales linearly with the number of actually instantiated topics kd in the document. For large document collections and in structured hierarchical models kd ll k. This yields an order of magnitude speedup. Our method applies to a wide variety of statistical models such as PDP [16,4] and HDP [19]. At its core is the idea that dense, slowly changing distributions can be approximated efficiently by the combination of a Metropolis-Hastings step, use of sparsity, and amortized constant time sampling via Walker's alias method." ] }
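The reason a stale table is acceptable is the standard Metropolis-Hastings acceptance ratio, which corrects for any mismatch between the stale proposal q and the current target p. A minimal independence-sampler sketch of this idea follows; it is ours for illustration (the `sample_q` helper stands in for an O(1) alias-table draw) and is not the cited paper's sparse LDA implementation.

```python
import random

def mh_draw(current, p, q, sample_q, steps=4, rng=random):
    """Metropolis-Hastings over a discrete target p, proposing from a
    possibly stale distribution q. `sample_q()` draws an index from q
    (e.g. via a previously built alias table); hypothetical helper."""
    i = current
    for _ in range(steps):
        j = sample_q()
        # Acceptance ratio corrects for the p/q mismatch; a >= 1 always accepts.
        a = (p[j] * q[i]) / (p[i] * q[j])
        if rng.random() < a:
            i = j
    return i
```

Even with a proposal built from outdated weights, the chain's stationary distribution remains the current p; staleness only affects mixing speed.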
1505.03825
2949999282
This paper addresses the problem of automatically localizing dominant objects as spatio-temporal tubes in a noisy collection of videos with minimal or even no supervision. We formulate the problem as a combination of two complementary processes: discovery and tracking. The first one establishes correspondences between prominent regions across videos, and the second one associates successive similar object regions within the same video. Interestingly, our algorithm also discovers the implicit topology of frames associated with instances of the same object class across different videos, a role normally left to supervisory information in the form of class labels in conventional image and video understanding methods. Indeed, as demonstrated by our experiments, our method can handle video collections featuring multiple object classes, and substantially outperforms the state of the art in colocalization, even though it tackles a broader problem with much less supervision.
Our approach combines object discovery and tracking. The discovery part establishes correspondences between frames across videos to detect object candidates. Similar approaches have been proposed for salient region detection @cite_31 , image cosegmentation @cite_34 @cite_28 , and image colocalization @cite_5 . Conventional object tracking methods @cite_8 usually require annotations for at least one frame @cite_14 @cite_11 @cite_7 , or object detectors trained for target classes in a supervised manner @cite_20 @cite_10 @cite_12 . Our method does not require such supervision and instead alternates discovery and tracking of object candidates.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_28", "@cite_5", "@cite_31", "@cite_34", "@cite_10", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "2089961441", "2139474895", "1995903777", "2118557299", "2950926078", "2105628432", "2044475502", "2061773916", "2148958980", "", "2098941887" ], "abstract": [ "Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.", "We propose a novel offline tracking algorithm based on model-averaged posterior estimation through patch matching across frames. Contrary to existing online and offline tracking methods, our algorithm is not based on temporally-ordered estimates of target state but attempts to select easy-to-track frames first out of the remaining ones without exploiting temporal coherency of target. The posterior of the selected frame is estimated by propagating densities from the already tracked frames in a recursive manner. The density propagation across frames is implemented by an efficient patch matching technique, which is useful for our algorithm since it does not require motion smoothness assumption. Also, we present a hierarchical approach, where a small set of key frames are tracked first and non-key frames are handled by local key frames. 
Our tracking algorithm is conceptually well-suited for the sequences with abrupt motion, shot changes, and occlusion. We compare our tracking algorithm with existing techniques in real videos with such challenges and illustrate its superior performance qualitatively and quantitatively.", "The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects.", "Joint segmentation of image sets has great importance for object recognition, image classification, and image retrieval. In this paper, we aim to jointly segment a set of images starting from a small number of labeled images or none at all. To allow the images to share segmentation information with each other, we build a network that contains segmented as well as unsegmented images, and extract functional maps between connected image pairs based on image appearance features. These functional maps act as general property transporters between the images and, in particular, are used to transfer segmentations. 
We define and operate in a reduced functional space optimized so that the functional maps approximately satisfy cycle-consistency under composition in the network. A joint optimization framework is proposed to simultaneously generate all segmentation functions over the images so that they both align with local segmentation cues in each particular image, and agree with each other under network transportation. This formulation allows us to extract segmentations even with no training data, but can also exploit such data when available. The collective effect of the joint processing using functional maps leads to accurate information sharing among images and yields superior segmentation results, as shown on the iCoseg, MSRC, and PASCAL data sets.", "This paper addresses unsupervised discovery and localization of dominant objects from a noisy image collection with multiple object classes. The setting of this problem is fully unsupervised, without even image-level annotations or any assumption of a single dominant class. This is far more general than typical colocalization, cosegmentation, or weakly-supervised localization tasks. We tackle the discovery and localization problem using a part-based region matching approach: We use off-the-shelf region proposals to form a set of candidate bounding boxes for objects and object parts. These regions are efficiently matched across images using a probabilistic Hough transform that evaluates the confidence for each candidate correspondence considering both appearance and spatial consistency. Dominant objects are discovered and localized by comparing the scores of candidate regions and selecting those that stand out over other regions containing them. 
Extensive experimental evaluations on standard benchmarks demonstrate that the proposed approach significantly outperforms the current state of the art in colocalization, and achieves robust object discovery in challenging mixed-class datasets.", "This paper proposes a fast and scalable alternating optimization technique to detect regions of interest (ROIs) in cluttered Web images without labels. The proposed approach discovers highly probable regions of object instances by iteratively repeating the following two functions: (1) choose the exemplar set (i.e. a small number of highly ranked reference ROIs) across the dataset and (2) refine the ROIs of each image with respect to the exemplar set. These two subproblems are formulated as ranking in two different similarity networks of ROI hypotheses by link analysis. The experiments with the PASCAL 06 dataset show that our unsupervised localization performance is better than one of state-of-the-art techniques and comparable to supervised methods. Also, we test the scalability of our approach with five objects in Flickr dataset consisting of more than 200K images.", "Joint segmentation of image sets is a challenging problem, especially when there are multiple objects with variable appearance shared among the images in the collection and the set of objects present in each particular image is itself varying and unknown. In this paper, we present a novel method to jointly segment a set of images containing objects from multiple classes. We first establish consistent functional maps across the input images, and introduce a formulation that explicitly models partial similarity across images instead of global consistency. Given the optimized maps between pairs of images, multiple groups of consistent segmentation functions are found such that they align with segmentation cues in the images, agree with the functional maps, and are mutually exclusive. 
The proposed fully unsupervised approach exhibits a significant improvement over the state-of-the-art methods, as shown on the co-segmentation data sets MSRC, Flickr, and PASCAL.", "The majority of existing pedestrian trackers concentrate on maintaining the identities of targets, however systems for remote biometric analysis or activity recognition in surveillance video often require stable bounding-boxes around pedestrians rather than approximate locations. We present a multi-target tracking system that is designed specifically for the provision of stable and accurate head location estimates. By performing data association over a sliding window of frames, we are able to correct many data association errors and fill in gaps where observations are missed. The approach is multi-threaded and combines asynchronous HOG detections with simultaneous KLT tracking and Markov-Chain Monte-Carlo Data Association (MCM-CDA) to provide guaranteed real-time tracking in high definition video. Where previous approaches have used ad-hoc models for data association, we use a more principled approach based on a Minimal Description Length (MDL) objective which accurately models the affinity between observations. We demonstrate by qualitative and quantitative evaluation that the system is capable of providing precise location estimates for large crowds of pedestrians in real-time. To facilitate future performance comparisons, we make a new dataset with hand annotated ground truth head locations publicly available.", "In this paper, we address the problem of automatically detecting and tracking a variable number of persons in complex scenes using a monocular, potentially moving, uncalibrated camera. We propose a novel approach for multiperson tracking-by-detection in a particle filtering framework. 
In addition to final high-confidence detections, our algorithm uses the continuous confidence of pedestrian detectors and online-trained, instance-specific classifiers as a graded observation model. Thus, generic object category knowledge is complemented by instance-specific information. The main contribution of this paper is to explore how these unreliable information sources can be used for robust multiperson tracking. The algorithm detects and tracks a large number of dynamically moving people in complex scenes with occlusions, does not rely on background modeling, requires no camera or ground plane calibration, and only makes use of information from the past. Hence, it imposes very few restrictions and is suitable for online applications. Our experiments show that the method yields good tracking performance in a large variety of highly dynamic scenarios, such as typical surveillance videos, webcam footage, or sports sequences. We demonstrate that our algorithm outperforms other methods that rely on additional information. Furthermore, we analyze the influence of different algorithm components on the robustness.", "", "Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (accurate estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. 
By explicitly allowing the output space to express the needs of the tracker, we are able to avoid the need for an intermediate classification step. Our method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, we introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased performance." ] }
1505.03825
2949999282
This paper addresses the problem of automatically localizing dominant objects as spatio-temporal tubes in a noisy collection of videos with minimal or even no supervision. We formulate the problem as a combination of two complementary processes: discovery and tracking. The first one establishes correspondences between prominent regions across videos, and the second one associates successive similar object regions within the same video. Interestingly, our algorithm also discovers the implicit topology of frames associated with instances of the same object class across different videos, a role normally left to supervisory information in the form of class labels in conventional image and video understanding methods. Indeed, as demonstrated by our experiments, our method can handle video collections featuring multiple object classes, and substantially outperforms the state of the art in colocalization, even though it tackles a broader problem with much less supervision.
Our setting is also related to object segmentation and cosegmentation in videos. For video object segmentation, clusters of long-term point tracks have been used @cite_2 @cite_24 @cite_25 , under the assumption that points on the same object have similar tracks. In @cite_17 @cite_1 , the appearances of potential objects and the background are modeled and combined with motion information. These methods produce results for individual videos and do not investigate relationships between videos and the objects they contain. Video object cosegmentation aims to segment a detailed mask of a common object out of a set of videos. This problem has been addressed with weak supervision such as an object class label per video @cite_29 , or additional labels for a few frames indicating whether they contain the target object @cite_30 .
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_1", "@cite_24", "@cite_2", "@cite_25", "@cite_17" ], "mid": [ "88469699", "2105297725", "1989348325", "2129822853", "1496571393", "2012184117", "2113708607" ], "abstract": [ "We present a spatio-temporal energy minimization formulation for simultaneous video object discovery and co-segmentation across multiple videos containing irrelevant frames. Our approach overcomes a limitation that most existing video co-segmentation methods possess, i.e., they perform poorly when dealing with practical videos in which the target objects are not present in many frames. Our formulation incorporates a spatio-temporal auto-context model, which is combined with appearance modeling for superpixel labeling. The superpixel-level labels are propagated to the frame level through a multiple instance boosting algorithm with spatial reasoning, based on which frames containing the target object are identified. Our method only needs to be bootstrapped with the frame-level labels for a few video frames (e.g., usually 1 to 3) to indicate if they contain the target objects or not. Extensive experiments on four datasets validate the efficacy of our proposed method: 1) object segmentation from a single video on the SegTrack dataset, 2) object co-segmentation from multiple videos on a video co-segmentation dataset, and 3) joint object discovery and co-segmentation from multiple videos containing irrelevant frames on the MOViCS dataset and XJTU-Stevens, a new dataset that we introduce in this paper. The proposed method compares favorably with the state-of-the-art in all of these experiments.", "The ubiquitous availability of Internet video offers the vision community the exciting opportunity to directly learn localized visual concepts from real-world imagery. 
Unfortunately, most such attempts are doomed because traditional approaches are ill-suited, both in terms of their computational characteristics and their inability to robustly contend with the label noise that plagues uncurated Internet content. We present CRANE, a weakly supervised algorithm that is specifically designed to learn under such conditions. First, we exploit the asymmetric availability of real-world training data, where small numbers of positive videos tagged with the concept are supplemented with large quantities of unreliable negative data. Second, we ensure that CRANE is robust to label noise, both in terms of tagged videos that fail to contain the concept as well as occasional negative videos that do. Finally, CRANE is highly parallelizable, making it practical to deploy at large scale without sacrificing the quality of the learned solution. Although CRANE is general, this paper focuses on segment annotation, where we show state-of-the-art pixel-level segmentation results on two datasets, one of which includes a training set of spatiotemporal segments from more than 20,000 videos.", "We present an approach to discover and segment foreground object(s) in video. Given an unannotated video sequence, the method first identifies object-like regions in any frame according to both static and dynamic cues. We then compute a series of binary partitions among those candidate “key-segments” to discover hypothesis groups with persistent appearance and motion. Finally, using each ranked hypothesis in turn, we estimate a pixel-level object labeling across all frames, where (a) the foreground likelihood depends on both the hypothesis's appearance as well as a novel localization prior based on partial shape matching, and (b) the background likelihood depends on cues pulled from the key-segments' (possibly diverse) surroundings observed across the sequence. 
Compared to existing methods, our approach automatically focuses on the persistent foreground regions of interest while resisting oversegmentation. We apply our method to challenging benchmark videos, and show competitive or better results than the state-of-the-art.", "Point trajectories have emerged as a powerful means to obtain high quality and fully unsupervised segmentation of objects in video shots. They can exploit the long term motion difference between objects, but they tend to be sparse due to computational reasons and the difficulty in estimating motion in homogeneous areas. In this paper we introduce a variational method to obtain dense segmentations from such sparse trajectory clusters. Information is propagated with a hierarchical, nonlinear diffusion process that runs in the continuous domain but takes superpixels into account. We show that this process raises the density from 3 to 100 and even increases the average precision of labels.", "Unsupervised learning requires a grouping step that defines which data belong together. A natural way of grouping in images is the segmentation of objects or parts of objects. While pure bottom-up segmentation from static cues is well known to be ambiguous at the object level, the story changes as soon as objects move. In this paper, we present a method that uses long term point trajectories based on dense optical flow. Defining pair-wise distances between these trajectories allows to cluster them, which results in temporally consistent segmentations of moving objects in a video shot. In contrast to multi-body factorization, points and even whole objects may appear or disappear during the shot. We provide a benchmark dataset and an evaluation method for this so far uncovered setting.", "Motion segmentation based on point trajectories can integrate information of a whole video shot to detect and separate moving objects. Commonly, similarities are defined between pairs of trajectories. 
However, pairwise similarities restrict the motion model to translations. Non-translational motion, such as rotation or scaling, is penalized in such an approach. We propose to define similarities on higher order tuples rather than pairs, which leads to hypergraphs. To apply spectral clustering, the hypergraph is transferred to an ordinary graph, an operation that can be interpreted as a projection. We propose a specific nonlinear projection via a regularized maximum operator, and show that it yields significant improvements both compared to pairwise similarities and alternative hypergraph projections.", "We present a technique for separating foreground objects from the background in a video. Our method is fast, fully automatic, and makes minimal assumptions about the video. This enables handling essentially unconstrained settings, including rapidly moving background, arbitrary object motion and appearance, and non-rigid deformations and articulations. In experiments on two datasets containing over 1400 video shots, our method outperforms a state-of-the-art background subtraction technique [4] as well as methods based on clustering point tracks [6, 18, 19]. Moreover, it performs comparably to recent video object segmentation methods based on object proposals [14, 16, 27], while being orders of magnitude faster." ] }
1505.03653
2005076281
Network updates such as policy and routing changes occur frequently in Software Defined Networks (SDN). Updates should be performed consistently, preventing temporary disruptions, and should require as little overhead as possible. Scalability is increasingly becoming an essential requirement in SDN. In this paper we propose to use time-triggered network updates to achieve consistent updates. Our proposed solution requires lower overhead than existing update approaches, without compromising the consistency during the update. We demonstrate that accurate time enables far more scalable consistent updates in SDN than previously available. In addition, it provides the SDN programmer with fine-grained control over the tradeoff between consistency and scalability.
The use of time in distributed applications has been widely analyzed, both in theory and in practice. Analyses of the use of time and synchronized clocks, e.g., by Lamport @cite_4 @cite_5 , date back to the late 1970s and early 1980s. Accurate time has been used in various applications, such as distributed databases @cite_0 , industrial automation systems @cite_7 , automotive networks @cite_35 , and accurate instrumentation and measurement @cite_31 . While the use of accurate time in distributed systems has been widely discussed in the literature, we are not aware of similar analyses of accurate time as a means of performing consistent updates in computer networks.
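The logical-clock idea referenced above can be sketched in a few lines. This is a minimal illustration of Lamport's happened-before clock rules, not code from any of the cited works:

```python
# Minimal sketch of Lamport logical clocks: each process keeps a counter,
# increments it on local events and sends, and on message receipt takes
# the max of its own clock and the sender's timestamp, plus one.
class Process:
    def __init__(self):
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock  # timestamp attached to the message

    def receive(self, msg_timestamp):
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

p, q = Process(), Process()
t = p.send()          # p's clock becomes 1
q.local_event()       # q's clock becomes 1
rc = q.receive(t)     # q's clock becomes max(1, 1) + 1 = 2
```

These rules only give a partial order of events; physical-clock synchronization, as used in the timed-update approach above, additionally bounds how far clocks can drift apart.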
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_7", "@cite_0", "@cite_5", "@cite_31" ], "mid": [ "", "1973501242", "1984963581", "2013409485", "", "2103072081" ], "abstract": [ "", "The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become.", "This paper describes an application of the IEEE 1588 standard to industrial automation. Key application use cases are identified that can benefit from time-based control techniques to improve performance results over traditional control methods. A brief discussion is given of how the 1588 standard may be adapted to suit these applications. Application problems specific to industrial automation are enumerated and candidate solutions described.", "Spanner is Google’s scalable, multiversion, globally distributed, and synchronously replicated database. It is the first system to distribute data at global scale and support externally-consistent distributed transactions. This article describes how Spanner is structured, its feature set, the rationale underlying various design decisions, and a novel time API that exposes clock uncertainty. This API and its implementation are critical to supporting external consistency and a variety of powerful features: nonblocking reads in the past, lock-free snapshot transactions, and atomic schema changes, across all of Spanner.", "", "White Rabbit (WR) is the project name for an ambitious project that uses Ethernet as both a deterministic (synchronous) data transfer and timing network. 
The presented design aims for a general purpose, fieldbus like transmission system, which provides deterministic data and timing to approximately 1000 timing stations. The main advantage over conventional systems is the highly accurate timing (sub-nanosecond range) without restrictions on the traffic schedule and an upper bound for the delivery time of high priority messages. In addition, WR also automatically compensates for transmission delays in the fibre links, which are in the range of 10 km length. It takes advantage of the latest developments on synchronous Ethernet and IEEE 1588 to enable the distribution of accurate timing information to the nodes saving noticeable amounts of bandwidth." ] }
1505.03653
2005076281
Network updates such as policy and routing changes occur frequently in Software Defined Networks (SDN). Updates should be performed consistently, preventing temporary disruptions, and should require as little overhead as possible. Scalability is increasingly becoming an essential requirement in SDN. In this paper we propose to use time-triggered network updates to achieve consistent updates. Our proposed solution requires lower overhead than existing update approaches, without compromising the consistency during the update. We demonstrate that accurate time enables far more scalable consistent updates in SDN than previously available. In addition, it provides the SDN programmer with fine-grained control over the tradeoff between consistency and scalability.
Time-of-day routing @cite_12 routes traffic to different destinations based on the time of day. Path calendaring @cite_45 can be used to configure network paths based on scheduled or foreseen traffic changes. Both of these examples are typically performed at a low rate and do not place demanding requirements on timing accuracy.
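As a toy illustration of what time-of-day routing might look like at a controller, here is a hypothetical schedule-driven path selection; the schedule, path names, and time windows are invented for this sketch and are not taken from the cited works:

```python
from datetime import time

# Hypothetical time-of-day routing: choose a path for a destination
# based on which time window the current wall-clock time falls into.
SCHEDULE = [
    (time(0, 0), time(8, 0), "path_night"),   # off-peak route
    (time(8, 0), time(20, 0), "path_day"),    # peak route
    (time(20, 0), time(23, 59, 59), "path_night"),
]

def route_for(now):
    for start, end, path in SCHEDULE:
        if start <= now < end:
            return path
    return "path_night"  # fallback for the last second of the day

route_for(time(9, 30))   # -> "path_day"
route_for(time(3, 15))   # -> "path_night"
```

Because such schedules change at most a few times per day, even loosely synchronized clocks suffice, which is why these applications do not stress clock accuracy.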
{ "cite_N": [ "@cite_45", "@cite_12" ], "mid": [ "2170644767", "2061819031" ], "abstract": [ "Datacenter WAN traffic consists of high priority transfers that have to be carried as soon as they arrive alongside large transfers with pre-assigned deadlines on their completion (ranging from minutes to hours). The ability to offer guarantees to large transfers is crucial for business needs and impacts overall cost-of-business. State-of-the-art traffic engineering solutions only consider the current time epoch and hence cannot provide pre-facto promises for long-lived transfers. We present Tempus, an online traffic engineering scheme that exploits information on transfer size and deadlines to appropriately pack long-running transfers across network paths and time, thereby leaving enough capacity slack for future high-priority requests. Tempus builds on a tailored approximate solution to a mixed packing-covering linear program, which is parallelizable and scales well in both running time and memory usage. Consequently, Tempus is able to quickly and effectively update its solution when new transfers arrive or unexpected changes happen. These updates involve only small edits to existing transfers. Therefore, as experiments on traces from a large production WAN show, Tempus can offer and keep promises to long-lived transfers well in advance of their actual deadline; the promise on minimal transfer size is comparable with an offline optimal solution and outperforms state-of-the-art solutions by 2-3X.", "A description of a general method for implementing a distributed system with any desired degree of fault tolerance. Reliable clock synchronization and a solution to the \"Byzantine Generals\" problem are assumed." ] }
1505.03653
2005076281
Network updates such as policy and routing changes occur frequently in Software Defined Networks (SDN). Updates should be performed consistently, preventing temporary disruptions, and should require as little overhead as possible. Scalability is increasingly becoming an essential requirement in SDN. In this paper we propose to use time-triggered network updates to achieve consistent updates. Our proposed solution requires lower overhead than existing update approaches, without compromising the consistency during the update. We demonstrate that accurate time enables far more scalable consistent updates in SDN than previously available. In addition, it provides the SDN programmer with fine-grained control over the tradeoff between consistency and scalability.
The authors of @cite_46 proposed an incremental method that improves the scalability of consistent updates by breaking each update into multiple independent rounds, thereby reducing the total overhead consumed in each separate round. The timed approach we present in this paper can improve the incremental method even further, by reducing the overhead consumed in each round.
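A rough sketch of the round-splitting principle described above (our own illustration of the general idea, not the dependency-aware algorithm of @cite_46):

```python
# Split the set of flows to be migrated into k rounds; only one round's
# worth of duplicate (old + new) rules is installed at a time, so the
# peak rule-space overhead shrinks roughly by a factor of k, at the cost
# of a slower overall update.
def rounds(flows, k):
    size = -(-len(flows) // k)  # ceiling division
    return [flows[i:i + size] for i in range(0, len(flows), size)]

flows = [f"flow{i}" for i in range(10)]
for batch in rounds(flows, 3):
    # install new rules for `batch`, wait until the round is consistent,
    # then remove the corresponding old rules before the next round
    pass

len(rounds(flows, 3))  # -> 3 (batch sizes 4, 4, 2)
```

A timed variant could additionally schedule each round to start at a preset time, shrinking the window in which duplicate rules must coexist.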
{ "cite_N": [ "@cite_46" ], "mid": [ "2071580597" ], "abstract": [ "A consistent update installs a new packet-forwarding policy across the switches of a software-defined network in place of an old policy. While doing so, such an update guarantees that every packet entering the network either obeys the old policy or the new one, but not some combination of the two. In this paper, we introduce new algorithms that trade the time required to perform a consistent update against the rule-space overhead required to implement it. We break an update into k rounds that each transfer part of the traffic to the new configuration. The more rounds used, the slower the update, but the smaller the rule-space overhead. To ensure consistency, our algorithm analyzes the dependencies between rules in the old and new policies to determine which rules to add and remove on each round. In addition, we show how to optimize the rule space used by representing the minimization problem as a mixed integer linear program. Moreover, to ensure the largest flows are moved first, while using rule space efficiently, we extend the mixed integer linear program with additional constraints. Our initial experiments show that a 6-round, optimized incremental update decreases rule space overhead from 100% to less than 10%. Moreover, if we cap the maximum rule-space overhead at 5% and assume the traffic flow volume follows Zipf's law, we find that 80% of the traffic may be transferred to the new policy in the first round and 99% in the first 3 rounds." ] }
1505.03873
2950310122
With the widespread availability of cellphones and cameras that have GPS capabilities, it is common for images being uploaded to the Internet today to have GPS coordinates associated with them. In addition to research that tries to predict GPS coordinates from visual features, this also opens up the door to problems that are conditioned on the availability of GPS coordinates. In this work, we tackle the problem of performing image classification with location context, in which we are given the GPS coordinates for images in both the train and test phases. We explore different ways of encoding and extracting features from the GPS coordinates, and show how to naturally incorporate these features into a Convolutional Neural Network (CNN), the current state-of-the-art for most image classification and recognition problems. We also show how it is possible to simultaneously learn the optimal pooling radii for a subset of our features within the CNN framework. To evaluate our model and to help promote research in this area, we identify a set of location-sensitive concepts and annotate a subset of the Yahoo Flickr Creative Commons 100M dataset that has GPS coordinates with these concepts, which we make publicly available. By leveraging location context, we are able to achieve almost a 7% gain in mean average precision.
There is a large body of work that focuses on the problem of image geolocation, such as geolocating static cameras @cite_28 , city-scale location recognition @cite_6 , im2gps @cite_24 @cite_34 , place recognition @cite_13 @cite_12 , landmark recognition @cite_39 @cite_23 @cite_29 @cite_26 , geolocation leveraging geometry information @cite_39 @cite_7 @cite_1 @cite_17 , and geolocation with graph-based representations @cite_41 . More recent works have also tried to supplement images with corresponding geographic data @cite_14 @cite_19 @cite_4 @cite_36 @cite_10 , such as digital elevation maps and land cover survey data, from which we draw inspiration in constructing our features. In contrast to these works, we assume we are given GPS coordinates, and use this information to help improve image classification performance.
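As a simple illustration of how GPS coordinates might be turned into network inputs, here is one hypothetical encoding (not the paper's actual feature set) that maps latitude/longitude to a small continuous vector:

```python
import math

# Hypothetical encoding: map (latitude, longitude) in degrees to a
# 4-dimensional feature vector. Using sin/cos of the angles avoids the
# discontinuity at the +/-180 degree meridian, so nearby locations on
# either side of the antimeridian get similar features.
def gps_features(lat, lon):
    lat_r, lon_r = math.radians(lat), math.radians(lon)
    return [
        math.sin(lat_r),
        math.cos(lat_r),
        math.sin(lon_r),
        math.cos(lon_r),
    ]

gps_features(0.0, 0.0)  # -> [0.0, 1.0, 0.0, 1.0]
```

Such a vector could be concatenated with CNN image features before the classifier layers; the paper's own features (e.g., pooled statistics over nearby data) are richer than this sketch.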
{ "cite_N": [ "@cite_41", "@cite_36", "@cite_29", "@cite_10", "@cite_4", "@cite_39", "@cite_23", "@cite_17", "@cite_26", "@cite_7", "@cite_28", "@cite_6", "@cite_19", "@cite_34", "@cite_12", "@cite_14", "@cite_1", "@cite_24", "@cite_13" ], "mid": [ "2144990732", "2202263281", "2058650633", "2081418428", "2087475273", "2147854204", "1987488988", "", "2136451880", "2125795712", "2135676438", "2134446283", "", "1883248133", "2013270301", "104903125", "1565312575", "2103163130", "1995288918" ], "abstract": [ "Recognizing the location of a query image by matching it to a database is an important problem in computer vision, and one for which the representation of the database is a key issue. We explore new ways for exploiting the structure of a database by representing it as a graph, and show how the rich information embedded in a graph can improve a bag-of-words-based location recognition method. In particular, starting from a graph on a set of images based on visual connectivity, we propose a method for selecting a set of subgraphs and learning a local distance function for each using discriminative techniques. For a query image, each database image is ranked according to these local distance functions in order to place the image in the right part of the graph. In addition, we propose a probabilistic method for increasing the diversity of these ranked database images, again based on the structure of the image graph. We demonstrate that our methods improve performance over standard bag-of-words methods on several existing location recognition datasets.", "Image based geolocation aims to answer the question: where was this ground photograph taken? We present an approach to geolocating a single image based on matching human delineated line segments in the ground image to automatically detected line segments in ortho images. Our approach is based on distance transform matching. 
By observing that the uncertainty of line segments is non-linearly amplified by projective transformations, we develop an uncertainty based representation and incorporate it into a geometric matching framework. We show that our approach is able to rule out a considerable portion of false candidate regions even in a database composed of geographic areas with similar visual appearances.", "This article presents an approach for modeling landmarks based on large-scale, heavily contaminated image collections gathered from the Internet. Our system efficiently combines 2D appearance and 3D geometric constraints to extract scene summaries and construct 3D models. In the first stage of processing, images are clustered based on low-dimensional global appearance descriptors, and the clusters are refined using 3D geometric constraints. Each valid cluster is represented by a single iconic view, and the geometric relationships between iconic views are captured by an iconic scene graph. Using structure from motion techniques, the system then registers the iconic images to efficiently produce 3D models of the different aspects of the landmark. To improve coverage of the scene, these 3D models are subsequently extended using additional, non-iconic views. We also demonstrate the use of iconic images for recognition and browsing. Our experimental results demonstrate the ability to process datasets containing up to 46,000 images in less than 20 hours, using a single commodity PC equipped with a graphics card. This is a significant advance towards Internet-scale operation.", "The recent availability of large amounts of geotagged imagery has inspired a number of data driven solutions to the image geolocalization problem. Existing approaches predict the location of a query image by matching it to a database of georeferenced photographs. While there are many geotagged images available on photo sharing and street view sites, most are clustered around landmarks and urban areas. 
The vast majority of the Earth's land area has no ground level reference photos available, which limits the applicability of all existing image geolocalization methods. On the other hand, there is no shortage of visual and geographic data that densely covers the Earth - we examine overhead imagery and land cover survey data - but the relationship between this data and ground level query photographs is complex. In this paper, we introduce a cross-view feature translation approach to greatly extend the reach of image geolocalization methods. We can often localize a query even if it has no corresponding ground level images in the database. A key idea is to learn the relationship between ground level appearance and overhead appearance and land cover attributes from sparsely available geotagged ground-level images. We perform experiments over a 1600 km2 region containing a variety of scenes and land cover types. For each query, our algorithm produces a probability density over the region of interest.", "Geographic location is a powerful property for organizing large-scale photo collections, but only a small fraction of online photos are geo-tagged. Most work in automatically estimating geo-tags from image content is based on comparison against models of buildings or landmarks, or on matching to large reference collections of geotagged images. These approaches work well for frequently photographed places like major cities and tourist destinations, but fail for photos taken in sparsely photographed places where few reference photos exist. Here we consider how to recognize general geo-informative attributes of a photo, e.g. the elevation gradient, population density, demographics, etc. of where it was taken, instead of trying to estimate a precise geo-tag. We learn models for these attributes using a large (noisy) set of geo-tagged images from Flickr by training deep convolutional neural networks (CNNs). 
We evaluate on over a dozen attributes, showing that while automatically recognizing some attributes is very difficult, others can be automatically estimated with about the same accuracy as a human.", "In this paper we propose a new technique for learning a discriminative codebook for local feature descriptors, specifically designed for scalable landmark classification. The key contribution lies in exploiting the knowledge of correspondences within sets of feature descriptors during code-book learning. Feature correspondences are obtained using structure from motion (SfM) computation on Internet photo collections which serve as the training data. Our codebook is defined by a random forest that is trained to map corresponding feature descriptors into identical codes. Unlike prior forest-based codebook learning methods, we utilize fine-grained descriptor labels and address the challenge of training a forest with an extremely large number of labels. Our codebook is used with various existing feature encoding schemes and also a variant we propose for importance-weighted aggregation of local features. We evaluate our approach on a public dataset of 25 landmarks and our new dataset of 620 landmarks (614K images). Our approach significantly outperforms the state of the art in landmark classification. Furthermore, our method is memory efficient and scalable.", "With recent advances in mobile computing, the demand for visual localization or landmark identification on mobile devices is gaining interest. We advance the state of the art in this area by fusing two popular representations of street-level image data - facade-aligned and viewpoint-aligned - and show that they contain complementary information that can be exploited to significantly improve the recall rates on the city scale. We also improve feature detection in low contrast parts of the street-level data, and discuss how to incorporate priors on a user's position (e.g. 
given by noisy GPS readings or network cells), which previous approaches often ignore. Finally, and maybe most importantly, we present our results according to a carefully designed, repeatable evaluation scheme and make publicly available a set of 1.7 million images with ground truth labels, geotags, and calibration data, as well as a difficult set of cell phone query images. We provide these resources as a benchmark to facilitate further research in the area.", "", "Modeling and recognizing landmarks at world-scale is a useful yet challenging task. There exists no readily available list of worldwide landmarks. Obtaining reliable visual models for each landmark can also pose problems, and efficiency is another challenge for such a large scale system. This paper leverages the vast amount of multimedia data on the Web, the availability of an Internet image search engine, and advances in object recognition and clustering techniques, to address these issues. First, a comprehensive list of landmarks is mined from two sources: (1) 20 million GPS-tagged photos and (2) online tour guide Web pages. Candidate images for each landmark are then obtained from photo sharing Websites or by querying an image search engine. Second, landmark visual models are built by pruning candidate images using efficient image matching and unsupervised clustering techniques. Finally, the landmarks and their visual models are validated by checking authorship of their member images. The resulting landmark recognition engine incorporates 5312 landmarks from 1259 cities in 144 countries. The experiments demonstrate that the engine can deliver satisfactory recognition performance with high efficiency.", "Efficient view registration with respect to a given 3D reconstruction has many applications like inside-out tracking in indoor and outdoor environments, and geo-locating images from large photo collections. We present a fast location recognition technique based on structure from motion point clouds. 
Vocabulary tree-based indexing of features directly returns relevant fragments of 3D models instead of documents from the images database. Additionally, we propose a compressed 3D scene representation which improves recognition rates while simultaneously reducing the computation time and the memory consumption. The design of our method is based on algorithms that efficiently utilize modern graphics processing units to deliver real-time performance for view registration. We demonstrate the approach by matching hand-held outdoor videos to known 3D urban models, and by registering images from online photo collections to the corresponding landmarks.", "A key problem in widely distributed camera networks is locating the cameras. This paper considers three scenarios for camera localization: localizing a camera in an unknown environment, adding a new camera in a region with many other cameras, and localizing a camera by finding correlations with satellite imagery. We find that simple summary statistics (the time course of principal component coefficients) are sufficient to geolocate cameras without determining correspondences between cameras or explicitly reasoning about weather in the scene. We present results from a database of images from 538 cameras collected over the course of a year. We find that for cameras that remain stationary and for which we have accurate image times- tamps, we can localize most cameras to within 50 miles of the known location. In addition, we demonstrate the use of a distributed camera network in the construction a map of weather conditions.", "We look at the problem of location recognition in a large image dataset using a vocabulary tree. This entails finding the location of a query image in a large dataset containing 3times104 streetside images of a city. We investigate how the traditional invariant feature matching approach falls down as the size of the database grows. 
In particular we show that by carefully selecting the vocabulary using the most informative features, retrieval performance is significantly improved, allowing us to increase the number of database images by a factor of 10. We also introduce a generalization of the traditional vocabulary tree search algorithm which improves performance by effectively increasing the branching factor of a fixed vocabulary tree.", "", "Finding an image's exact GPS location is a challenging computer vision problem that has many real-world applications. In this paper, we address the problem of finding the GPS location of images with an accuracy which is comparable to hand-held GPS devices. We leverage a structured data set of about 100,000 images build from Google Maps Street View as the reference images. We propose a localization method in which the SIFT descriptors of the detected SIFT interest points in the reference images are indexed using a tree. In order to localize a query image, the tree is queried using the detected SIFT descriptors in the query image. A novel GPS-tag-based pruning method removes the less reliable descriptors. Then, a smoothing step with an associated voting scheme is utilized; this allows each query descriptor to vote for the location its nearest neighbor belongs to, in order to accurately localize the query image. A parameter called Confidence of Localization which is based on the Kurtosis of the distribution of votes is defined to determine how reliable the localization of a particular image is. In addition, we propose a novel approach to localize groups of images accurately in a hierarchical manner. First, each image is localized individually; then, the rest of the images in the group are matched against images in the neighboring area of the found first match. The final location is determined based on the Confidence of Localization parameter. 
The proposed image group localization method can deal with very unclear queries which are not capable of being geolocated individually.", "Repeated structures such as building facades, fences or road markings often represent a significant challenge for place recognition. Repeated structures are notoriously hard for establishing correspondences using multi-view geometry. Even more importantly, they violate the feature independence assumed in the bag-of-visual-words representation which often leads to over-counting evidence and significant degradation of retrieval performance. In this work we show that repeated structures are not a nuisance but, when appropriately represented, they form an important distinguishing feature for many places. We describe a representation of repeated structures suitable for scalable retrieval. It is based on robust detection of repeated image structures and a simple modification of weights in the bag-of-visual-word model. Place recognition results are shown on datasets of street-level imagery from Pittsburgh and San Francisco demonstrating significant gains in recognition performance compared to the standard bag-of-visual-words baseline and more recently proposed burstiness weighting.", "Given a picture taken somewhere in the world, automatic geo-localization of that image is a task that would be extremely useful e.g. for historical and forensic sciences, documentation purposes, organization of the world's photo material and also intelligence applications. While tremendous progress has been made over the last years in visual location recognition within a single city, localization in natural environments is much more difficult, since vegetation, illumination, seasonal changes make appearance-only approaches impractical. In this work, we target mountainous terrain and use digital elevation models to extract representations for fast visual database lookup. 
We propose an automated approach for very large scale visual localization that can efficiently exploit visual information (contours) and geometric constraints (consistent orientation) at the same time. We validate the system on the scale of a whole country (Switzerland, 40 000 km2) using a new dataset of more than 200 landscape query pictures with ground truth.", "We present a fast, simple location recognition and image localization method that leverages feature correspondence and geometry estimated from large Internet photo collections. Such recovered structure contains a significant amount of useful information about images and image features that is not available when considering images in isolation. For instance, we can predict which views will be the most common, which feature points in a scene are most reliable, and which features in the scene tend to co-occur in the same image. Based on this information, we devise an adaptive, prioritized algorithm for matching a representative set of SIFT features covering a large scene to a query image for efficient localization. Our approach is based on considering features in the scene database, and matching them to query image features, as opposed to more conventional methods that match image features to visual words or database features. We find this approach results in improved performance, due to the richer knowledge of characteristics of the database features compared to query image features. We present experiments on two large city-scale photo collections, showing that our algorithm compares favorably to image retrieval-style approaches to location recognition.", "Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! 
In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earth's surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban/rural classification.", "The aim of this work is to localize a query photograph by finding other images depicting the same place in a large geotagged image database. This is a challenging task due to changes in viewpoint, imaging conditions and the large size of the image database. The contribution of this work is two-fold. First, we cast the place recognition problem as a classification task and use the available geotags to train a classifier for each location in the database in a similar manner to per-exemplar SVMs in object recognition. Second, as only a few positive training examples are available for each location, we propose a new approach to calibrate all the per-location SVM classifiers using only the negative examples. The calibration we propose relies on a significance measure essentially equivalent to the p-values classically used in statistical hypothesis testing. Experiments are performed on a database of 25,000 geotagged street view images of Pittsburgh and demonstrate improved place recognition accuracy of the proposed approach over the previous work." ] }
1505.03873
2950310122
With the widespread availability of cellphones and cameras that have GPS capabilities, it is common for images being uploaded to the Internet today to have GPS coordinates associated with them. In addition to research that tries to predict GPS coordinates from visual features, this also opens up the door to problems that are conditioned on the availability of GPS coordinates. In this work, we tackle the problem of performing image classification with location context, in which we are given the GPS coordinates for images in both the train and test phases. We explore different ways of encoding and extracting features from the GPS coordinates, and show how to naturally incorporate these features into a Convolutional Neural Network (CNN), the current state-of-the-art for most image classification and recognition problems. We also show how it is possible to simultaneously learn the optimal pooling radii for a subset of our features within the CNN framework. To evaluate our model and to help promote research in this area, we identify a set of location-sensitive concepts and annotate a subset of the Yahoo Flickr Creative Commons 100M dataset that has GPS coordinates with these concepts, which we make publicly available. By leveraging location context, we are able to achieve almost a 7% gain in mean average precision.
In addition, several works also explore other aspects of images and location information, such as 3D cars with locations @cite_15 , organizing geotagged photos @cite_11 , structure from motion on Internet photos @cite_38 , recognizing city identity @cite_5 , looking beyond the visible scene @cite_16 , discovering representative geographic visual elements @cite_18 @cite_22 , predicting land cover from images @cite_33 , and annotation enhancement using canonical correlation analysis @cite_20 .
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_22", "@cite_33", "@cite_5", "@cite_15", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "2129201358", "", "2171322814", "2013667252", "1525292367", "1992969267", "2035430745", "2083978868", "2103388840" ], "abstract": [ "There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like \"Notre Dame\" or \"Trevi Fountain.\" This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world's well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.", "", "We present a weakly-supervised visual data mining approach that discovers connections between recurring mid-level visual elements in historic (temporal) and geographic (spatial) image collections, and attempts to capture the underlying visual style. In contrast to existing discovery methods that mine for patterns that remain visually consistent throughout the dataset, our goal is to discover visual elements whose appearance changes due to change in time or location; i.e., exhibit consistent stylistic variations across the label space (date or geo-location). To discover these elements, we first identify groups of patches that are style-sensitive. We then incrementally build correspondences to find the same element across the entire dataset. Finally, we train style-aware regressors that model each element's range of stylistic differences. 
We apply our approach to date and geo-location prediction and show substantial improvement over several baselines that do not model visual style. We also demonstrate the method's effectiveness on the related task of fine-grained classification.", "The primary and novel contribution of this work is the conjecture that large collections of georeferenced photo collections can be used to derive maps of what-is-where on the surface of the earth. We investigate the application of what we term “proximate sensing” to the problem of land cover classification for a large geographic region. We show that our approach is able to achieve almost 75 classification accuracy in a binary land cover labelling problem using images from a photo sharing site in a completely automated fashion. We also investigate 1) how existing geographic knowledge can be used to provide labelled training data in a weakly-supervised manner; 2) the effect of the photographer's intent when he or she captures the photograph; and 3) a method for filtering out non-informative images.", "After hundreds of years of human settlement, each city has formed a distinct identity, distinguishing itself from other cities. In this work, we propose to characterize the identity of a city via an attribute analysis of 2 million geo-tagged images from 21 cities over 3 continents. First, we estimate the scene attributes of these images and use this representation to build a higher-level set of 7 city attributes, tailored to the form and function of cities. Then, we conduct the city identity recognition experiments on the geo-tagged images and identify images with salient city identity on each city attribute. Based on the misclassification rate of the city identity recognition, we analyze the visual similarity among different cities. Finally, we discuss the potential application of computer vision to urban planning.", "Geometry and geography can play an important role in recognition tasks in computer vision. 
To aid in studying connections between geometry and recognition, we introduce NYC3DCars, a rich dataset for vehicle detection in urban scenes built from Internet photos drawn from the wild, focused on densely trafficked areas of New York City. Our dataset is augmented with detailed geometric and geographic information, including full camera poses derived from structure from motion, 3D vehicle annotations, and geographic information from open resources, including road segmentations and directions of travel. NYC3DCars can be used to study new questions about using geometric information in detection tasks, and to explore applications of Internet photos in understanding cities. To demonstrate the utility of our data, we evaluate the use of the geographic information in our dataset to enhance a parts-based detection method, and suggest other avenues for future exploration.", "A common thread that ties together many prior works in scene understanding is their focus on the aspects directly present in a scene such as its categorical classification or the set of objects. In this work, we propose to look beyond the visible elements of a scene; we demonstrate that a scene is not just a collection of objects and their configuration or the labels assigned to its pixels - it is so much more. From a simple observation of a scene, we can tell a lot about the environment surrounding the scene such as the potential establishments near it, the potential crime rate in the area, or even the economic climate. Here, we explore several of these aspects from both the human perception and computer vision perspective. Specifically, we show that it is possible to predict the distance of surrounding establishments such as McDonald's or hospitals even by using scenes located far from them. We go a step further to show that both humans and computers perform well at navigating the environment based only on visual cues from scenes. 
Lastly, we show that it is possible to predict the crime rates in an area simply by looking at a scene without any real-time criminal activity. Simply put, here, we illustrate that it is possible to look beyond the visible scene.", "Photo community sites such as Flickr and Picasa Web Album host a massive amount of personal photos with millions of new photos uploaded every month. These photos constitute an overwhelming source of images that require effective management. There is an increasingly imperative need for semantic annotation of these web images. This paper addresses the problem by considering two kinds of annotation: semantic annotation and geographic annotation. Both are useful for image search and retrieval and for facilitating communities and social networks. This paper proposes a novel method of Logistic Canonical Correlation Regression (LCCR) for the annotation task. This model exploits the canonical correlation between heterogeneous features and an annotation lexicon of interest, and builds a generalized annotation engine based on canonical correlations in order to produce enhanced annotation for web images. We validate the effectiveness of our algorithm using a dataset of over 380,000 images tagged with GPS coordinates.", "We investigate how to organize a large collection of geotagged photos, working with a dataset of about 35 million images collected from Flickr. Our approach combines content analysis based on text tags and image data with structural analysis based on geospatial data. We use the spatial distribution of where people take photos to define a relational structure between the photos that are taken at popular places. We then study the interplay between this structure and the content, using classification methods for predicting such locations from visual, textual and temporal features of the photos. We find that visual and temporal features improve the ability to estimate the location of a photo, compared to using just textual features. 
We illustrate using these techniques to organize a large photo collection, while also revealing various interesting properties about popular cities and landmarks at a global scale." ] }
1505.03873
2950310122
With the widespread availability of cellphones and cameras that have GPS capabilities, it is common for images being uploaded to the Internet today to have GPS coordinates associated with them. In addition to research that tries to predict GPS coordinates from visual features, this also opens up the door to problems that are conditioned on the availability of GPS coordinates. In this work, we tackle the problem of performing image classification with location context, in which we are given the GPS coordinates for images in both the train and test phases. We explore different ways of encoding and extracting features from the GPS coordinates, and show how to naturally incorporate these features into a Convolutional Neural Network (CNN), the current state-of-the-art for most image classification and recognition problems. We also show how it is possible to simultaneously learn the optimal pooling radii for a subset of our features within the CNN framework. To evaluate our model and to help promote research in this area, we identify a set of location-sensitive concepts and annotate a subset of the Yahoo Flickr Creative Commons 100M dataset that has GPS coordinates with these concepts, which we make publicly available. By leveraging location context, we are able to achieve almost a 7% gain in mean average precision.
Also closely related are the numerous works on context, which have been shown to be helpful for various tasks in computer vision @cite_25 @cite_9. We leverage contextual information by considering the GPS coordinates of our images and extracting complementary location features.
{ "cite_N": [ "@cite_9", "@cite_25" ], "mid": [ "2166761907", "2141364309" ], "abstract": [ "There is general consensus that context can be a rich source of information about an object's identity, location and scale. In fact, the structure of many real-world scenes is governed by strong configurational rules akin to those that apply to a single object. Here we introduce a simple framework for modeling the relationship between context and object properties based on the correlation between the statistics of low-level features across the entire scene and the objects that it contains. The resulting scheme serves as an effective procedure for object priming, context driven focus of attention and automatic scale-selection on real-world scenes.", "This paper presents an empirical evaluation of the role of context in a contemporary, challenging object detection task - the PASCAL VOC 2008. Previous experiments with context have mostly been done on home-grown datasets, often with non-standard baselines, making it difficult to isolate the contribution of contextual information. In this work, we present our analysis on a standard dataset, using top-performing local appearance detectors as baseline. We evaluate several different sources of context and ways to utilize it. While we employ many contextual cues that have been used before, we also propose a few novel ones including the use of geographic context and a new approach for using object spatial support." ] }
1505.03703
2155511299
In this paper, we propose a novel unsupervised deep learning model, called PCA-based Convolutional Network (PCN). The architecture of PCN is composed of several feature extraction stages and a nonlinear output stage. Particularly, each feature extraction stage includes two layers: a convolutional layer and a feature pooling layer. In the convolutional layer, the filter banks are simply learned by PCA. In the nonlinear output stage, binary hashing is applied. For the higher convolutional layers, the filter banks are learned from the feature maps that were obtained in the previous stage. To test PCN, we conducted extensive experiments on some challenging tasks, including handwritten digit recognition, face recognition and texture classification. The results show that PCN performs competitively with or even better than state-of-the-art deep learning models. More importantly, since there is no backpropagation for supervised fine-tuning, PCN is much more efficient than existing deep networks.
In the past few years, variations of convolutional networks have been proposed with respect to the pooling and convolution operations. Recently, unsupervised learning has been used to pre-train each stage, alleviating the need for labeled data. Once all the stages are pre-trained, the network is fine-tuned using stochastic gradient descent. Many methods have been proposed to pre-train the filter banks of convolution layers in an unsupervised feature learning mode. Convolutional versions of sparse RBMs @cite_8 @cite_10, sparse coding @cite_14 and predictive sparse decomposition (PSD) @cite_8 @cite_4 @cite_15 @cite_6 have been reported and achieve high accuracy on several benchmarks.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_6", "@cite_15", "@cite_10" ], "mid": [ "", "2188492526", "2546302380", "2169488311", "2140262144", "" ], "abstract": [ "", "In this work we present a system to automatically learn features from audio in an unsupervised manner. Our method first learns an overcomplete dictionary which can be used to sparsely decompose log-scaled spectrograms. It then trains an efficient encoder which quickly maps new inputs to approximations of their sparse representations using the learned dictionary. This avoids expensive iterative procedures usually required to infer sparse codes. We then use these sparse codes as inputs for a linear Support Vector Machine (SVM). Our system achieves 83.4 accuracy in predicting genres on the GTZAN dataset, which is competitive with current state-of-the-art approaches. Furthermore, the use of a simple linear classifier combined with a fast feature extraction system allows our approach to scale well to large datasets.", "In many recent object recognition systems, feature extraction stages are generally composed of a filter bank, a non-linear transformation, and some sort of feature pooling layer. Most systems use only one stage of feature extraction in which the filters are hard-wired, or two stages where the filters in one or both stages are learned in supervised or unsupervised mode. This paper addresses three questions: 1. How does the non-linearities that follow the filter banks influence the recognition accuracy? 2. does learning the filter banks in an unsupervised or supervised manner improve the performance over random filters or hardwired filters? 3. Is there any advantage to using an architecture with two stages of feature extraction, rather than one? We show that using non-linearities that include rectification and local contrast normalization is the single most important ingredient for good accuracy on object recognition benchmarks. 
We show that two stages of feature extraction yield better accuracy than one. Most surprisingly, we show that a two-stage system with random filters can yield an almost 63% recognition rate on Caltech-101, provided that the proper non-linearities and pooling layers are used. Finally, we show that with supervised refinement, the system achieves state-of-the-art performance on the NORB dataset (5.6%) and unsupervised pre-training followed by supervised refinement produces good accuracy on Caltech-101 (> 65%), and the lowest known error rate on the undistorted, unprocessed MNIST dataset (0.53%).", "We propose an unsupervised method for learning multi-stage hierarchies of sparse convolutional features. While sparse coding has become an increasingly popular method for learning visual features, it is most often trained at the patch level. Applying the resulting filters convolutionally results in highly redundant codes because overlapping patches are encoded in isolation. By training convolutionally over large image windows, our method reduces the redundancy between feature vectors at neighboring locations and improves the efficiency of the overall representation. In addition to a linear decoder that reconstructs the image from sparse features, our method trains an efficient feed-forward encoder that predicts quasi-sparse features from the input. While patch-based training rarely produces anything but oriented edge detectors, we show that convolutional training produces highly diverse filters, including center-surround filters, corner detectors, cross detectors, and oriented grating detectors.
We show that using these filters in a multi-stage convolutional network architecture improves performance on a number of visual recognition and detection tasks.", "Several recently-proposed architectures for high-performance object recognition are composed of two main stages: a feature extraction stage that extracts locally-invariant feature vectors from regularly spaced image patches, and a somewhat generic supervised classifier. The first stage is often composed of three main modules: (1) a bank of filters (often oriented edge detectors); (2) a non-linear transform, such as point-wise squashing functions, quantization, or normalization; (3) a spatial pooling operation which combines the outputs of similar filters over neighboring regions. We propose a method that automatically learns such feature extractors in an unsupervised fashion by simultaneously learning the filters and the pooling units that combine multiple filter outputs together. The method automatically generates topographic maps of similar filters that extract features of orientations, scales, and positions. These similar filters are pooled together, producing locally-invariant outputs. The learned feature descriptors give results comparable to SIFT on image recognition tasks for which SIFT is well suited, and better results than SIFT on tasks for which SIFT is less well suited.", "" ] }
1505.03703
2155511299
In this paper, we propose a novel unsupervised deep learning model, called PCA-based Convolutional Network (PCN). The architecture of PCN is composed of several feature extraction stages and a nonlinear output stage. Particularly, each feature extraction stage includes two layers: a convolutional layer and a feature pooling layer. In the convolutional layer, the filter banks are simply learned by PCA. In the nonlinear output stage, binary hashing is applied. For the higher convolutional layers, the filter banks are learned from the feature maps that were obtained in the previous stage. To test PCN, we conducted extensive experiments on some challenging tasks, including handwritten digit recognition, face recognition and texture classification. The results show that PCN performs competitively with or even better than state-of-the-art deep learning models. More importantly, since there is no backpropagation for supervised fine-tuning, PCN is much more efficient than existing deep networks.
Alternatively, some networks similar to ConvNets have been proposed that use pre-fixed filters in the convolution layers and yield good performance on several benchmarks. In @cite_19 @cite_16, Gabor filters were used in the first convolution layer. Meanwhile, wavelet scattering networks (ScatNet) @cite_14 @cite_3 also used pre-fixed convolutional filters, called scattering operators. Using a similar cascade of multiple ConvNet levels, the algorithm achieved impressive results on handwritten digit and texture recognition. A more closely related work is PCANet @cite_5, which simply uses PCA filters, learned in an unsupervised mode, at the convolution layer. Built upon multiple convolution layers, a nonlinear output stage is applied with binary hashing and block-wise histograms. Just a few cascaded convolution layers were demonstrated to achieve new records on several challenging vision tasks, such as face and handwritten digit recognition, and comparable results on texture classification and object recognition.
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_19", "@cite_5", "@cite_16" ], "mid": [ "", "2167383966", "1624854622", "1616262590", "2105464770" ], "abstract": [ "", "An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.", "We introduce a novel set of features for robust object recognition. Each element of this set is a complex feature obtained by combining position- and scale-tolerant edge-detectors over neighboring positions and multiple orientations. Our system's architecture is motivated by a quantitative model of visual cortex. We show that our approach exhibits excellent recognition performance and outperforms several state-of-the-art systems on a variety of image datasets including many different object categories. We also demonstrate that our system is able to learn from very few examples. The performance of the approach constitutes a suggestive plausibility proof for a class of feedforward models of object recognition in cortex.", "In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. 
This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.", "We apply a biologically inspired model of visual object recognition to the multiclass object categorization problem. Our model modifies that of Serre, Wolf, and Poggio. As in that work, we first apply Gabor filters at all positions and scales; feature complexity and position scale invariance are then built up by alternating template matching and max pooling operations. We refine the approach in several biologically plausible ways, using simple versions of sparsification and lateral inhibition. We demonstrate the value of retaining some position and scale information above the intermediate feature level. Using feature selection we arrive at a model that performs better with fewer features. 
Our final model is tested on the Caltech 101 object categories and the UIUC car localization task, in both cases achieving state-of-the-art performance. The results strengthen the case for using this class of model in computer vision." ] }
1505.03365
2950351128
Graph cuts-based algorithms have achieved great success in energy minimization for many computer vision applications. These algorithms provide approximate solutions for multi-label energy functions via a move-making approach. This approach fuses the current solution with a proposal to generate a lower-energy solution. Thus, generating appropriate proposals is necessary for the success of the move-making approach. However, not much research effort has been devoted to the generation of "good" proposals, especially for non-metric energy functions. In this paper, we propose an application-independent and energy-based approach to generate "good" proposals. With these proposals, we present a graph cuts-based move-making algorithm called GA-fusion (fusion with graph approximation-based proposals). Extensive experiments show that our proposal generation is effective across different classes of energy functions. The proposed algorithm outperforms others on both real and synthetic problems.
where @math is an element-wise operator for @math at node @math, and @math denotes the current label assigned to node @math. The label on node @math switches between the current label and @math according to the value of @math. In such a case, the binary function @math is submodular if the original function is metric @cite_7. This condition is relaxed in @cite_1 such that the binary function @math is submodular if every pairwise term satisfies
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "2101309634", "2143516773" ], "abstract": [ "In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.", "Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. The authors consider a wide class of energies with various smoothness constraints. 
Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy." ] }
1505.03365
2950351128
Graph cuts-based algorithms have achieved great success in energy minimization for many computer vision applications. These algorithms provide approximated solutions for multi-label energy functions via a move-making approach. This approach fuses the current solution with a proposal to generate a lower-energy solution. Thus, generating appropriate proposals is necessary for the success of the move-making approach. However, not much research effort has been devoted to the generation of "good" proposals, especially for non-metric energy functions. In this paper, we propose an application-independent and energy-based approach to generate "good" proposals. With these proposals, we present a graph cuts-based move-making algorithm called GA-fusion (fusion with graph approximation-based proposals). Extensive experiments support that our proposal generation is effective across different classes of energy functions. The proposed algorithm outperforms others both on real and synthetic problems.
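The move-making step described above, fusing the current solution with a proposal to obtain a lower-energy labeling, can be sketched in miniature. The snippet below is an illustrative toy, not the paper's GA-fusion: it fuses two labelings of a 1D chain MRF with a Potts pairwise term, solving the per-pixel binary choice exactly by dynamic programming (on general graphs one would use graph cuts or QPBO instead). The function names and the energy form are assumptions made for this example.

```python
# Fusion move on a 1D chain MRF (illustrative sketch, not the paper's method).
# Each pixel keeps either its current label or the proposal's label; on a chain
# the optimal binary choice is found exactly with a Viterbi-style DP.

def energy(labels, unary, lam):
    """Unary terms plus a Potts pairwise penalty lam for unequal neighbors."""
    e = sum(unary[i][l] for i, l in enumerate(labels))
    e += sum(lam * (labels[i] != labels[i + 1]) for i in range(len(labels) - 1))
    return e

def fuse(current, proposal, unary, lam):
    """Return the lowest-energy mix of `current` and `proposal` labels."""
    n = len(current)
    opts = [(current[i], proposal[i]) for i in range(n)]
    INF = float("inf")
    cost = [[INF, INF] for _ in range(n)]   # cost[i][b]: best prefix energy
    back = [[0, 0] for _ in range(n)]       # back-pointers for backtracking
    for b in (0, 1):
        cost[0][b] = unary[0][opts[0][b]]
    for i in range(1, n):
        for b in (0, 1):
            u = unary[i][opts[i][b]]
            for bp in (0, 1):
                pair = lam * (opts[i - 1][bp] != opts[i][b])
                c = cost[i - 1][bp] + pair + u
                if c < cost[i][b]:
                    cost[i][b] = c
                    back[i][b] = bp
    b = 0 if cost[n - 1][0] <= cost[n - 1][1] else 1
    fused = [0] * n
    for i in range(n - 1, -1, -1):
        fused[i] = opts[i][b]
        if i:
            b = back[i][b]
    return fused
```

Because both input labelings lie in the fusion search space, the fused result is guaranteed to have energy no higher than either one, which is the key property that makes move-making converge.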
@math -Expansion is one of the most acclaimed methodologies; however, standard @math -expansion is not applicable if the energy function does not satisfy the required condition. In such a case, the sequence of reduced binary functions is no longer submodular. We may truncate the pairwise terms @cite_14 @cite_2 to optimize these functions, thereby making every pairwise term submodular. This strategy works only when the non-submodular part of the energy function is very small. If the non-submodular part is not negligible, performance degrades seriously @cite_17 .
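The truncation trick mentioned above can be made concrete. A binary pairwise term theta is submodular, and hence graph-cut representable, iff theta(0,0) + theta(1,1) <= theta(0,1) + theta(1,0); truncation alters an offending term just enough to restore this inequality. The exact alteration varies between papers; lowering theta(1,1), as below, is one common illustrative choice, not necessarily the one used in the cited works.

```python
# Truncating a non-submodular binary pairwise term (sketch of the trick
# referenced above; the exact alteration differs between papers).

def is_submodular(t00, t01, t10, t11):
    # A binary pairwise term is graph-cut representable iff
    # theta(0,0) + theta(1,1) <= theta(0,1) + theta(1,0).
    return t00 + t11 <= t01 + t10

def truncate(t00, t01, t10, t11):
    """Lower theta(1,1) just enough to restore submodularity."""
    if is_submodular(t00, t01, t10, t11):
        return t00, t01, t10, t11
    return t00, t01, t10, t01 + t10 - t00
```

Because truncation changes the energy being minimized, the resulting labeling is only approximate, which is why the related work above notes that performance suffers when the non-submodular part is large.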
{ "cite_N": [ "@cite_14", "@cite_17", "@cite_2" ], "mid": [ "2107884096", "2137117160", "2001933992" ], "abstract": [ "Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. However, the tradeoffs among different energy minimization algorithms are still not well understood. In this paper we describe a set of energy minimization benchmarks and use them to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods graph cuts, LBP, and tree-reweighted message passing in addition to the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. Benchmarks, code, images, and results are available at http: vision.middlebury.edu MRF .", "Many computer vision applications rely on the efficient optimization of challenging, so-called non-submodular, binary pairwise MRFs. A promising graph cut based approach for optimizing such MRFs known as \"roof duality\" was recently introduced into computer vision. We study two methods which extend this approach. First, we discuss an efficient implementation of the \"probing\" technique introduced recently by (2006). It simplifies the MRF while preserving the global optimum. 
Our code is 400-700 faster on some graphs than the implementation of the work of (2006). Second, we present a new technique which takes an arbitrary input labeling and tries to improve its energy. We give theoretical characterizations of local minima of this procedure. We applied both techniques to many applications, including image segmentation, new view synthesis, super-resolution, diagram recognition, parameter learning, texture restoration, and image deconvolution. For several applications we see that we are able to find the global minimum very efficiently, and considerably outperform the original roof duality approach. In comparison to existing techniques, such as graph cut, TRW, BP, ICM, and simulated annealing, we nearly always find a lower energy.", "We describe an interactive, computer-assisted framework for combining parts of a set of photographs into a single composite picture, a process we call \"digital photomontage.\" Our framework makes use of two techniques primarily: graph-cut optimization, to choose good seams within the constituent images so that they can be combined as seamlessly as possible; and gradient-domain fusion, a process based on Poisson equations, to further reduce any remaining visible artifacts in the composite. Also central to the framework is a suite of interactive tools that allow the user to specify a variety of high-level image objectives, either globally across the image, or locally through a painting-style interface. Image objectives are applied independently at each pixel location and generally involve a function of the pixel values (such as \"maximum contrast\") drawn from that same location in the set of source images. Typically, a user applies a series of image objectives iteratively in order to create a finished composite. 
The power of this framework lies in its generality; we show how it can be used for a wide variety of applications, including \"selective composites\" (for instance, group photos in which everyone looks their best), relighting, extended depth of field, panoramic stitching, clean-plate production, stroboscopic visualization of movement, and time-lapse mosaics." ] }
1505.03365
2950351128
Graph cuts-based algorithms have achieved great success in energy minimization for many computer vision applications. These algorithms provide approximated solutions for multi-label energy functions via a move-making approach. This approach fuses the current solution with a proposal to generate a lower-energy solution. Thus, generating appropriate proposals is necessary for the success of the move-making approach. However, not much research effort has been devoted to the generation of "good" proposals, especially for non-metric energy functions. In this paper, we propose an application-independent and energy-based approach to generate "good" proposals. With these proposals, we present a graph cuts-based move-making algorithm called GA-fusion (fusion with graph approximation-based proposals). Extensive experiments support that our proposal generation is effective across different classes of energy functions. The proposed algorithm outperforms others both on real and synthetic problems.
Some other approaches generate proposals at runtime (, ). In contrast to , the number of proposals is not limited, since they dynamically generate proposals online. In @cite_27 , proposals are generated by blurring the current solution and by random labeling for the denoising application. However, these methods do not explicitly consider the objective energy during proposal generation, and they are also application-specific. Recently, Ishikawa @cite_6 proposed an application-independent method to generate proposals. This method runs a gradient descent algorithm on the objective energy function. Although it is energy-aware and applicable in some cases, it is still limited to differentiable energy functions. Thus, this method cannot be applied even to the Potts model, which is one of the most popular prior models. In our understanding, this algorithm is only meaningful for ordered labels that represent physical quantities.
{ "cite_N": [ "@cite_27", "@cite_6" ], "mid": [ "2103498186", "2142336599" ], "abstract": [ "We introduce a new technique that can reduce any higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we combine the reduction with the fusion-move and QPBO algorithms to optimize higher-order multi-label problems. While many vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction that limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. To demonstrate the algorithm, we minimize a third-order energy, which allows clique potentials with up to four pixels, in an image restoration problem. The problem uses the fields of experts model, a learned spatial prior of natural images that has been used to test two belief propagation algorithms capable of optimizing higher-order energies. The results show that the algorithm exceeds the BP algorithms in both optimization performance and speed.", "Markov Random Field is now ubiquitous in many formulations of various vision problems. Recently, optimization of higher-order potentials became practical using higher-order graph cuts: the combination of i) the fusion move algorithm, ii) the reduction of higher-order binary energy minimization to first-order, and iii) the QPBO algorithm. In the fusion move, it is crucial for the success and efficiency of the optimization to provide proposals that fits the energies being optimized. For higher-order energies, it is even more so because they have richer class of null potentials. 
In this paper, we focus on the efficiency of the higher-order graph cuts and present a simple technique for generating proposal labelings that makes the algorithm much more efficient, which we empirically show using examples in stereo and image denoising." ] }
1505.03229
267862395
Deep neural networks have been exhibiting splendid accuracies in many visual pattern classification problems. Many of the state-of-the-art methods employ a technique known as data augmentation at the training stage. This paper addresses the issue of the decision rule for classifiers trained with augmented data. Our method is named APAC: Augmented PAttern Classification, a way of classification using the optimal decision rule for augmented data learning. Discussion of methods of data augmentation is not our primary focus. We show clear evidence that APAC gives far better generalization performance than the traditional way of class prediction in several experiments. Our convolutional neural network model with APAC achieved a state-of-the-art accuracy on the MNIST dataset among non-ensemble classifiers. Even our multilayer perceptron model beats some of the convolutional models with recently invented stochastic regularization techniques on the CIFAR-10 dataset.
Data augmentation plays an essential role in boosting the performance of generic object recognition. Krizhevsky et al. used a few types of image processing, such as random cropping, horizontal reflection, and color processing, to create image patches for ImageNet training @cite_3 . More recently, Wu et al. vastly expanded the ImageNet dataset with many types of image processing, including color casting, vignetting, rotation, aspect-ratio change, and lens distortion, on top of standard cropping and flipping @cite_16 . Although these two works use different network architectures and computational hardware, it is still interesting to compare their performance levels. The top-5 prediction error rate of the latter is 5.33%; such a large gap could be implicit evidence that richer data augmentation leads to better generalization.
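The two standard augmentations mentioned above, random cropping and horizontal reflection, can be sketched in a few lines. This is a minimal illustration, not the pipeline of either cited work; real pipelines add color processing, rotation, and the other transforms listed, and the image representation (a list of rows) is an assumption for the example.

```python
import random

def augment(image, crop_h, crop_w):
    """Random crop plus random horizontal flip on an image stored as rows.

    A minimal sketch of the cropping/flipping augmentation discussed above.
    """
    h, w = len(image), len(image[0])
    top = random.randint(0, h - crop_h)       # random crop origin
    left = random.randint(0, w - crop_w)
    patch = [row[left:left + crop_w] for row in image[top:top + crop_h]]
    if random.random() < 0.5:                 # horizontal reflection
        patch = [row[::-1] for row in patch]
    return patch
```

Each training epoch then sees a slightly different version of every image, which is what multiplies the effective size of the dataset.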
{ "cite_N": [ "@cite_16", "@cite_3" ], "mid": [ "1563686443", "1677182931" ], "abstract": [ "We present a state-of-the-art image recognition system, Deep Image, developed using end-to-end deep learning. The key components are a custom-built supercomputer dedicated to deep learning, a highly optimized parallel algorithm using new strategies for data partitioning and communication, larger deep neural network models, novel data augmentation approaches, and usage of multi-scale high-resolution images. Our method achieves excellent results on multiple challenging computer vision benchmarks.", "Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94 top-5 test error on the ImageNet 2012 classification dataset. This is a 26 relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66 [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1 , [26]) on this dataset." ] }
1505.03229
267862395
Deep neural networks have been exhibiting splendid accuracies in many visual pattern classification problems. Many of the state-of-the-art methods employ a technique known as data augmentation at the training stage. This paper addresses the issue of the decision rule for classifiers trained with augmented data. Our method is named APAC: Augmented PAttern Classification, a way of classification using the optimal decision rule for augmented data learning. Discussion of methods of data augmentation is not our primary focus. We show clear evidence that APAC gives far better generalization performance than the traditional way of class prediction in several experiments. Our convolutional neural network model with APAC achieved a state-of-the-art accuracy on the MNIST dataset among non-ensemble classifiers. Even our multilayer perceptron model beats some of the convolutional models with recently invented stochastic regularization techniques on the CIFAR-10 dataset.
Very recently, a website article reported a method named Test-Time Augmentation @cite_8 , where the prediction is made by taking the average of the outputs over many virtual samples, though the algorithm is not fully described.
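Since the cited article does not fully describe the algorithm, the following is only a plausible sketch of output averaging over augmented copies, with hypothetical interfaces: `model` maps an image to a list of class scores, and `transforms` is a list of image-to-image functions.

```python
def tta_predict(model, image, transforms):
    """Average class scores over augmented copies (test-time augmentation).

    `model` maps an image to a list of class scores; `transforms` is a list
    of functions image -> image. Both interfaces are assumptions here.
    """
    outputs = [model(t(image)) for t in transforms]
    n = len(outputs)
    return [sum(scores) / n for scores in zip(*outputs)]
```

The final class is then the argmax of the averaged scores, so a single noisy prediction on one augmented view is less likely to flip the result.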
{ "cite_N": [ "@cite_8" ], "mid": [ "1694178301" ], "abstract": [ "We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties ? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN)." ] }
1505.03229
267862395
Deep neural networks have been exhibiting splendid accuracies in many visual pattern classification problems. Many of the state-of-the-art methods employ a technique known as data augmentation at the training stage. This paper addresses the issue of the decision rule for classifiers trained with augmented data. Our method is named APAC: Augmented PAttern Classification, a way of classification using the optimal decision rule for augmented data learning. Discussion of methods of data augmentation is not our primary focus. We show clear evidence that APAC gives far better generalization performance than the traditional way of class prediction in several experiments. Our convolutional neural network model with APAC achieved a state-of-the-art accuracy on the MNIST dataset among non-ensemble classifiers. Even our multilayer perceptron model beats some of the convolutional models with recently invented stochastic regularization techniques on the CIFAR-10 dataset.
Tangent Prop @cite_20 is a way to avoid over-fitting with implicit use of data augmentation. Virtual samples are used to compute a regularization term defined as the sum of tangent distances, each of which is the distance between an original sample and a slightly deformed one. The classifier's output is thus expected to be stable in the vicinity of the original data points, but not necessarily so in other locations.
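The flavor of such a penalty can be shown with a finite-difference approximation: penalize the squared change of the classifier's output under a small deformation, scaled to approximate a directional derivative. This is only a sketch under that assumption; the original work uses analytic tangent vectors rather than finite differences.

```python
def tangent_penalty(f, x, deform, eps=1e-3):
    """Finite-difference sketch of a Tangent Prop style penalty.

    Approximates ||d f(x) / d(deformation)||^2 by
    ||f(deform(x, eps)) - f(x)||^2 / eps^2.
    `f` maps an input to a list of outputs; `deform` applies a deformation
    of magnitude eps. Both interfaces are hypothetical.
    """
    y0 = f(x)
    y1 = f(deform(x, eps))
    return sum((a - b) ** 2 for a, b in zip(y1, y0)) / eps ** 2
```

Adding this penalty to the training loss pushes the classifier to be locally invariant to the chosen deformations, exactly the stability property described above.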
{ "cite_N": [ "@cite_20" ], "mid": [ "2111494971" ], "abstract": [ "In many machine learning applications, one has access, not only to training data, but also to some high-level a priori knowledge about the desired behavior of the system. For example, it is known in advance that the output of a character recognizer should be invariant with respect to small spatial distortions of the input images (translations, rotations, scale changes, etcetera). We have implemented a scheme that allows a network to learn the derivative of its outputs with respect to distortion operators of our choosing. This not only reduces the learning time and the amount of training data, but also provides a powerful language for specifying what generalizations we wish the network to perform." ] }
1505.03532
2503235127
A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through overlap in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30 GB of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
Extracting spatio-temporal features plays an important role in analyzing scientific and engineering applications, including behavior recognition @cite_11 , bioinformatics @cite_41 , video analysis @cite_54 , and health informatics @cite_13 . Depending on the application, mining spatial features in one time frame and relationships among spatial objects in and across time frames is extremely challenging for three reasons. First, the extent and shape of a feature can be an important indicator of its influence, but because data come in various types (regular and irregular), it is not easy to apply one generic approach to all applications. Second, effectively incorporating the temporal information into the overall analysis is necessary to uncover interesting upcoming events. Finally, processing very large data sets in real time demands an appropriate response to extreme-scale computing and big-data challenges. In this work, we attack this problem by presenting a three-step approach for detecting and tracking spatio-temporal features in the context of blob-filament detection in fusion plasma.
{ "cite_N": [ "@cite_41", "@cite_54", "@cite_13", "@cite_11" ], "mid": [ "2055926450", "1999192586", "2122365325", "2533739470" ], "abstract": [ "The aggregation of proteins as a result of intrinsic or environmental stress may be cytoprotective, but is also linked to pathophysiological states and cellular ageing. We analysed the principles of aggregate formation and the cellular strategies to cope with aggregates in Escherichia coli using fluorescence microscopy of thermolabile reporters, EM tomography and mathematical modelling. Misfolded proteins deposited at the cell poles lead to selective re-localization of the DnaK DnaJ ClpB disaggregating chaperones, but not of GroEL and Lon to these sites. Polar aggregation of cytosolic proteins is mainly driven by nucleoid occlusion and not by an active targeting mechanism. Accordingly, cytosolic aggregation can be efficiently re-targeted to alternative sites such as the inner membrane in the presence of site-specific aggregation seeds. Polar positioning of aggregates allows for asymmetric inheritance of damaged proteins, resulting in higher growth rates of damage-free daughter cells. In contrast, symmetric damage inheritance of randomly distributed aggregates at the inner membrane abrogates this rejuvenation process, indicating that asymmetric deposition of protein aggregates is important for increasing the fitness of bacterial cell populations.", "Previous work on action recognition has focused on adapting hand-designed local features, such as SIFT or HOG, from static images to the video domain. In this paper, we propose using unsupervised feature learning as a way to learn features directly from video data. More specifically, we present an extension of the Independent Subspace Analysis algorithm to learn invariant spatio-temporal features from unlabeled video data. 
We discovered that, despite its simplicity, this method performs surprisingly well when combined with deep learning techniques such as stacking and convolution to learn hierarchical representations. By replacing hand-designed features with our learned features, we achieve classification results superior to all previous published results on the Hollywood2, UCF, KTH and YouTube action recognition datasets. On the challenging Hollywood2 and YouTube action datasets we obtain 53.3 and 75.8 respectively, which are approximately 5 better than the current best published results. Further benefits of this method, such as the ease of training and the efficiency of training and prediction, will also be discussed. You can download our code and learned spatio-temporal features here: http: ai.stanford.edu ∼wzou", "In this study we describe an ambulatory system for estimation of spatio-temporal parameters during long periods of walking. This original method based on wavelet analysis is proposed to compute the values of temporal gait parameters from the angular velocity of lower limbs. Based on a mechanical model, the medio-lateral rotation of the lower limbs during stance and swing, the stride length and velocity are estimated by integration of the angular velocity. Measurement's accuracy was assessed using as a criterion standard the information provided by foot pressure sensors. To assess the accuracy of the method on a broad range of performance for each gait parameter, we gathered data from young and elderly subjects. No significant error was observed for toe-off detection, while a slight systematic delay (10 ms on average) existed between heelstrike obtained from gyroscopes and footswitch. There was no significant difference between actual spatial parameters (stride length and velocity) and their estimated values. Errors for velocity and stride length estimations were 0.06 m s and 0.07 in, respectively. 
This system is light, portable, inexpensive and does not provoke any discomfort to Subjects. It can be carried for long periods of time, thus providing new longitudinal information such as stride-to-stride variability of gait. Several clinical applications can be proposed such as outcome evaluation after total knee or hip replacement, external prosthesis adjustment for amputees, monitoring of rehabilitation progress, gait analysis in neurological diseases, and fall risk estimation in elderly. (C) 2002 Published by Elsevier Science Ltd.", "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior." ] }
1505.03532
2503235127
A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through overlap in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30 GB of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
The definition of a blob varies in the literature depending on the fusion experiment or simulation as well as on the diagnostic information available for measurements @cite_26 . This makes blob detection a challenging task. Figure plots the local normalized density distribution in the regions of interest in one time frame. We can observe two reddish spots in the left portion of the figure, which are associated with blob-filaments and are significantly different from their surrounding neighbors. It is clear that a reddish spot is not a single point but a group of connected points, i.e., a region. Therefore, we formulate the blob detection problem as a region outlier detection problem. Similar to a spatial outlier @cite_8 , a region outlier is a group of spatially connected objects whose non-spatial attribute values are significantly different from those of the other spatial objects in their spatial neighborhood. The figure shows that blobs are region outliers. The number of region outliers detected is determined by pre-defined criteria provided by domain experts.
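The "identify feature cells, then group connected cells" formulation can be sketched as a threshold followed by a flood fill over 4-connected neighbors. This is a simplified illustration, not the paper's detector: it replaces the "significantly different from surroundings" criterion with a single global threshold chosen by the analyst.

```python
from collections import deque

def detect_blobs(grid, threshold):
    """Label connected regions of cells whose value meets `threshold`.

    Sketch of the region-outlier formulation above: a blob is a
    4-connected group of cells above a (here simplified, global) threshold.
    `grid` is a 2D list of normalized density values.
    """
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for i in range(h):
        for j in range(w):
            if seen[i][j] or grid[i][j] < threshold:
                continue
            blob, q = [], deque([(i, j)])   # BFS flood fill from a seed cell
            seen[i][j] = True
            while q:
                r, c = q.popleft()
                blob.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w and not seen[nr][nc] \
                            and grid[nr][nc] >= threshold:
                        seen[nr][nc] = True
                        q.append((nr, nc))
            blobs.append(blob)
    return blobs
```

Each returned group of cells corresponds to one candidate region outlier, matching the intuition that a reddish spot is a connected region rather than a single point.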
{ "cite_N": [ "@cite_26", "@cite_8" ], "mid": [ "1993874889", "1586410151" ], "abstract": [ "A blob-filament (or simply “blob”) is a magnetic-field-aligned plasma structure which is considerably denser than the surrounding background plasma and highly localized in the directions perpendicular to the equilibrium magnetic field B. In experiments and simulations, these intermittent filaments are often formed near the boundary between open and closed field lines, and seem to arise in theory from the saturation process for the dominant edge instabilities and turbulence. Blobs become charge-polarized under the action of an external force which causes unequal drifts on ions and electrons; the resulting polarization-induced E × B drift moves the blobs radially outwards across the scrape-off-layer (SOL). Since confined plasmas generally are subject to radial or outwards expansion forces (e.g., curvature and ∇B forces in toroidal plasmas), blob transport is a general phenomenon occurring in nearly all plasmas. This paper reviews the relationship between the experimental and theoretical results on blob form...", "Spatial outliers represent locations which are significantly different from their neighborhoods even though they may not be significantly different from the entire population. Identification of spatial outliers can lead to the discovery of unexpected, interesting, and implicit knowledge, such as local instability. In this paper, we first provide a general definition of S-outliers for spatial outliers. This definition subsumes the traditional definitions of spatial outliers. Second, we characterize the computation structure of spatial outlier detection methods and present scalable algorithms. Third, we provide a cost model of the proposed algorithms. Finally, we experimentally evaluate our algorithms using a Minneapolis-St. Paul (Twin Cities) traffic data set." ] }
1505.03532
2503235127
A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through overlap in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30 GB of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
The problem is to design an efficient and effective approach to detect and track region outliers of different shapes simultaneously in fusion plasma data streams. By identifying and monitoring these blob-filaments (region outliers), scientists can gain a better understanding of this phenomenon. In addition, a data stream is an ordered sequence of data that arrives continuously and has to be processed online. Due to the high arrival rate of the data, blob detection must finish before the next data chunk arrives @cite_59 . Therefore, another critical problem is to develop a high-performance blob detection approach that meets the real-time requirements.
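The third step, tracking movement through overlap in space, can be sketched as follows: a blob in the current frame continues the previous-frame blob with which it shares the most cells, and a blob with no overlap is treated as newly born. This is a minimal sketch of overlap-based matching, with blobs represented as sets of (row, col) cells; the paper's parallel implementation is not reproduced here.

```python
def match_blobs(prev_blobs, curr_blobs):
    """Match blobs across consecutive frames by maximal cell overlap.

    Returns a dict mapping each current blob's index to the index of the
    previous blob it overlaps most, or None for a newly born blob.
    Blobs are sets of (row, col) cells.
    """
    matches = {}
    for ci, cb in enumerate(curr_blobs):
        best, best_overlap = None, 0
        for pi, pb in enumerate(prev_blobs):
            overlap = len(cb & pb)          # shared cells between frames
            if overlap > best_overlap:
                best, best_overlap = pi, overlap
        matches[ci] = best
    return matches
```

Previous-frame blobs that no current blob maps to can likewise be treated as having died, giving a simple birth/continuation/death model per time step.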
{ "cite_N": [ "@cite_59" ], "mid": [ "2039841137" ], "abstract": [ "In applications, such as sensor networks and power usage monitoring, data are in the form of streams, each of which is an infinite sequence of data points with explicit or implicit timestamps and has special characteristics, such as transiency, uncertainty, dynamic data distribution, multidimensionality, and dynamic relationship. These characteristics introduce new research issues that make outlier detection for stream data more challenging than that for regular (non-stream) data. This paper discusses those research issues for applications where data come from a single stream as well as multiple streams." ] }
1505.03532
2503235127
A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through overlap in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30 GB of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
The problem of outlier detection has been extensively studied and can be generally classified into four categories: distance-based, density-based, clustering-based, and distribution-based approaches @cite_42 @cite_38 .
{ "cite_N": [ "@cite_38", "@cite_42" ], "mid": [ "2122646361", "2137130182" ], "abstract": [ "Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.", "Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. 
Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can identify errors and remove their contaminating effect on the data set and as such to purify the data for processing. The original outlier detection methods were arbitrary but now, principled and systematic techniques are used, drawn from the full gamut of Computer Science and Statistics. In this paper, we introduce a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review." ] }
1505.03532
2503235127
A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended feature, and tracking movement of feature through overlapping in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30GB fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
Density-based methods @cite_12 assign a local outlier factor (LOF) to each sample based on its local density. The LOF measures the degree of outlierness, and samples with high LOF values are identified as outliers. This approach does not require any prior knowledge of the underlying data distribution. However, it has high computational complexity, since pair-wise distances must be computed to obtain each local density value.
{ "cite_N": [ "@cite_12" ], "mid": [ "2144182447" ], "abstract": [ "For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms we show that our approach of finding local outliers can be practical." ] }
1505.03532
2503235127
A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended feature, and tracking movement of feature through overlapping in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30GB fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
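The density-ratio idea behind LOF can be sketched as follows. This is a simplified version that scores each point by the ratio of its neighbors' mean local density to its own, rather than the full reachability-distance formulation of @cite_12 ; the function names and the choice of k are illustrative. The brute-force neighbor search also makes the quadratic pairwise-distance cost noted above explicit.

```python
import math

def knn(points, i, k):
    """Indices of the k nearest neighbors of point i (brute-force pairwise distances)."""
    order = sorted(range(len(points)),
                   key=lambda j: math.dist(points[i], points[j]))
    return order[1:k + 1]  # skip the point itself

def local_density(points, i, k):
    """Inverse of the mean distance to the k nearest neighbors."""
    nbrs = knn(points, i, k)
    mean_d = sum(math.dist(points[i], points[j]) for j in nbrs) / k
    return 1.0 / (mean_d + 1e-12)

def lof_scores(points, k=3):
    """LOF-style score: neighbors' mean density over the point's own density.
    Scores well above 1 indicate outliers; scores near 1 indicate inliers."""
    dens = [local_density(points, i, k) for i in range(len(points))]
    scores = []
    for i in range(len(points)):
        nbrs = knn(points, i, k)
        scores.append(sum(dens[j] for j in nbrs) / (k * dens[i]))
    return scores

# A tight cluster plus one isolated point: the isolated point scores highest.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
scores = lof_scores(pts, k=3)
```

Every call to `lof_scores` sorts all pairwise distances per point, which is why density-based methods struggle to meet streaming deadlines without indexing or approximation.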
Clustering-based methods @cite_3 apply clustering techniques to the sample points to characterize the local data behavior. Since these methods do not focus on outlier detection, outliers are produced only as by-products, and the process is not optimized for outlier detection.
{ "cite_N": [ "@cite_3" ], "mid": [ "2019014808" ], "abstract": [ "In this paper, we present a new definition for outlier: cluster-based local outlier, which is meaningful and provides importance to the local data behavior. A measure for identifying the physical significance of an outlier is designed, which is called cluster-based local outlier factor (CBLOF). We also propose the FindCBLOF algorithm for discovering outliers. The experimental results show that our approach outperformed the existing methods on identifying meaningful and interesting outliers." ] }
1505.03532
2503235127
A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended feature, and tracking movement of feature through overlapping in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30GB fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
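A toy illustration of this by-product behavior: cluster first, then score each point by its distance to the assigned centroid. The plain k-means with deterministic seeding and the distance-to-centroid score below are simplifications of CBLOF @cite_3 , not its exact definition.

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic seeding (evenly spaced input points)."""
    centroids = [points[i * len(points) // k] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(xs) / len(members)
                                     for xs in zip(*members))
    return centroids, labels

def cluster_outlier_scores(points, k=2):
    """Score each point by its distance to the assigned centroid; outliers
    fall out of the clustering as a by-product."""
    centroids, labels = kmeans(points, k)
    return [math.dist(p, centroids[l]) for p, l in zip(points, labels)]

pts = [(0, 0), (0, 1), (1, 0), (1, 1),      # cluster A
       (10, 0), (10, 1), (11, 0), (11, 1),  # cluster B
       (5, 8)]                              # isolated point
scores = cluster_outlier_scores(pts, k=2)
```

The isolated point receives the largest score, but note that the clustering itself is distorted by it (its cluster's centroid is dragged toward the outlier), which is exactly why such methods are not optimized for outlier detection.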
Distribution-based methods @cite_8 apply machine learning techniques to estimate a probability distribution over the data and develop a statistical test to detect outliers. These methods use all dimensions to define a neighborhood for comparison and typically do not distinguish non-spatial attributes from spatial ones.
{ "cite_N": [ "@cite_8" ], "mid": [ "1586410151" ], "abstract": [ "Spatial outliers represent locations which are significantly different from their neighborhoods even though they may not be significantly different from the entire population. Identification of spatial outliers can lead to the discovery of unexpected, interesting, and implicit knowledge, such as local instability. In this paper, we first provide a general definition of S-outliers for spatial outliers. This definition subsumes the traditional definitions of spatial outliers. Second, we characterize the computation structure of spatial outlier detection methods and present scalable algorithms. Third, we provide a cost model of the proposed algorithms. Finally, we experimentally evaluate our algorithms using a Minneapolis-St. Paul (Twin Cities) traffic data set." ] }
1505.03532
2503235127
A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended feature, and tracking movement of feature through overlapping in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30GB fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
A number of distributed outlier detection methods have also been studied in @cite_39 @cite_51 @cite_9 @cite_40 @cite_32 . Most of these methods seek an efficient way to parallelize classical outlier detection techniques such as distance-based outliers @cite_27 @cite_28 , distribution-based outliers @cite_39 , density-based outliers @cite_28 @cite_30 , and PCA-based techniques @cite_60 . However, these methods are not generally applicable to region outlier detection and tracking. In particular, to tackle the high-volume and high-velocity challenges of fusion plasma big data, a specialized outlier detection scheme and suitable high-performance computing techniques are needed to complete blob detection within milliseconds.
{ "cite_N": [ "@cite_30", "@cite_60", "@cite_28", "@cite_9", "@cite_32", "@cite_39", "@cite_40", "@cite_27", "@cite_51" ], "mid": [ "1966147156", "137179224", "2169900105", "", "", "2153610999", "", "1556107440", "" ], "abstract": [ "Efficiently detecting outliers or anomalies is an important problem in many areas of science, medicine and information technology. Applications range from data cleaning to clinical diagnosis, from detecting anomalous defects in materials to fraud and intrusion detection. Over the past decade, researchers in data mining and statistics have addressed the problem of outlier detection using both parametric and non-parametric approaches in a centralized setting. However, there are still several challenges that must be addressed. First, most approaches to date have focused on detecting outliers in a continuous attribute space. However, almost all real-world data sets contain a mixture of categorical and continuous attributes. Categorical attributes are typically ignored or incorrectly modeled by existing approaches, resulting in a significant loss of information. Second, there have not been any general-purpose distributed outlier detection algorithms. Most distributed detection algorithms are designed with a specific domain (e.g. sensor networks) in mind. Third, the data sets being analyzed may be streaming or otherwise dynamic in nature. Such data sets are prone to concept drift, and models of the data must be dynamic as well. To address these challenges, we present a tunable algorithm for distributed outlier detection in dynamic mixed-attribute data sets.", "An automatic duplicating system in which computer fanfold documents are fed by an automatic handling apparatus having a tractor and drive control system for advancing the document across the platen of the processor for the system. 
An arrangement is provided which will control operation of the tractor and drive control system, and use is made of a stepper motor having many, small angled rotational movements which can be digitally controlled. A logic circuit arrangement is provided to energize and control the motor so that precise positioning of an edge of each frame section of the fanfold document can be made relative to a platen registration edge. In addition, the logic circuit is adapted to effect acceleration and deceleration of the motor in a controlled sense.", "An outlier is an observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism. Outlier detection has many applications, such as data cleaning, fraud detection and network intrusion. The existence of outliers can indicate individuals or groups that exhibit a behavior that is very different from most of the individuals of the dataset. In this paper we design two parallel algorithms, the first one is for finding out distance-based outliers based on nested loops along with randomization and the use of a pruning rule. The second parallel algorithm is for detecting density-based local outliers. In both cases data parallelism is used. We show that both algorithms reach near linear speedup. Our algorithms are tested on four real-world datasets coming from the Machine Learning Database Repository at the UCI.", "", "", "Sensor networks have recently found many popular applications in a number of different settings. Sensors at different locations can generate streaming data, which can be analyzed in real-time to identify events of interest. In this paper, we propose a framework that computes in a distributed fashion an approximation of multi-dimensional data distributions in order to enable complex applications in resource-constrained sensor networks.We motivate our technique in the context of the problem of outlier detection. 
We demonstrate how our framework can be extended in order to identify either distance- or density-based outliers in a single pass over the data, and with limited memory requirements. Experiments with synthetic and real data show that our method is efficient and accurate, and compares favorably to other proposed techniques. We also demonstrate the applicability of our technique to other related problems in sensor networks.", "", "Data mining is a new, important and fast growing database application. Outlier (exception) detection is one kind of data mining, which can be applied in a variety of areas like monitoring of credit card fraud and criminal activities in electronic commerce. With the ever-increasing size and attributes (dimensions) of database, previously proposed detection methods for two dimensions are no longer applicable. The time complexity of the Nested-Loop (NL) algorithm (Knorr and Ng, in Proc. 24th VLDB, 1998) is linear to the dimensionality but quadratic to the dataset size, inducing an unacceptable cost for large dataset. A more efficient version (ENL) and its parallel version (PENL) are introduced. In theory, the improvement of performance in PENL is linear to the number of processors, as shown in a performance comparison between ENL and PENL using Bulk Synchronization Parallel (BSP) model. The great improvement is further verified by experiments on a parallel computer system IBM 9076 SP2. The results show that it is a very good choice to mine outliers in a cluster of workstations with a low-cost interconnected by a commodity communication network.", "" ] }
1505.03532
2503235127
A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended feature, and tracking movement of feature through overlapping in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30GB fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
In the first two steps of our proposed approach, we apply distribution-based outlier detection to detect outlier points using only non-spatial attributes, and then leverage fast CCL, which takes the spatial attributes into account, to construct the region outliers. We choose distribution-based outlier detection because it can find outliers efficiently when an accurate approximation of the data distribution can be found @cite_8 @cite_39 . In general, the distribution of stream data may change over time @cite_22 . However, this concern may not apply to fusion experiments, since each experiment lasts only a short time, from a few seconds to hundreds of seconds. We therefore consider the simpler problem of fixed distribution parameters, noting that several fusion devices have shown similar distribution functions of blob events. We can then perform exploratory data analysis offline to compute the best-fitting distribution parameters and build an accurate online distribution model. We leave the more complicated problem of real-time distribution estimation for future work.
{ "cite_N": [ "@cite_22", "@cite_39", "@cite_8" ], "mid": [ "2026493302", "2153610999", "1586410151" ], "abstract": [ "In the statistics community, outlier detection for time series data has been studied for decades. Recently, with advances in hardware and software technology, there has been a large body of work on temporal outlier detection from a computational perspective within the computer science community. In particular, advances in hardware technology have enabled the availability of various forms of temporal data collection mechanisms, and advances in software technology have enabled a variety of data management mechanisms. This has fueled the growth of different kinds of data sets such as data streams, spatio-temporal data, distributed streams, temporal networks, and time series data, generated by a multitude of applications. There arises a need for an organized and detailed study of the work done in the area of outlier detection with respect to such temporal datasets. In this survey, we provide a comprehensive and structured overview of a large set of interesting outlier definitions for various forms of temporal data, novel techniques, and application scenarios in which specific definitions and techniques have been widely used.", "Sensor networks have recently found many popular applications in a number of different settings. Sensors at different locations can generate streaming data, which can be analyzed in real-time to identify events of interest. In this paper, we propose a framework that computes in a distributed fashion an approximation of multi-dimensional data distributions in order to enable complex applications in resource-constrained sensor networks.We motivate our technique in the context of the problem of outlier detection. We demonstrate how our framework can be extended in order to identify either distance- or density-based outliers in a single pass over the data, and with limited memory requirements. 
Experiments with synthetic and real data show that our method is efficient and accurate, and compares favorably to other proposed techniques. We also demonstrate the applicability of our technique to other related problems in sensor networks.", "Spatial outliers represent locations which are significantly different from their neighborhoods even though they may not be significantly different from the entire population. Identification of spatial outliers can lead to the discovery of unexpected, interesting, and implicit knowledge, such as local instability. In this paper, we first provide a general definition of S-outliers for spatial outliers. This definition subsumes the traditional definitions of spatial outliers. Second, we characterize the computation structure of spatial outlier detection methods and present scalable algorithms. Third, we provide a cost model of the proposed algorithms. Finally, we experimentally evaluate our algorithms using a Minneapolis-St. Paul (Twin Cities) traffic data set." ] }
1505.03532
2503235127
A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended feature, and tracking movement of feature through overlapping in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30GB fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
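A minimal sketch of those first two steps, distribution-based point detection followed by connected-component labeling, might look like the following. The normal model with threshold mu + k*sigma, the 4-connectivity, and the min_size cutoff are illustrative assumptions, not the paper's exact parameters.

```python
from collections import deque

def detect_blobs(frame, mu, sigma, k=2.5, min_size=3):
    """Flag cells whose intensity exceeds mu + k*sigma (distribution
    parameters fitted offline), then group flagged cells into region
    outliers via 4-connected component labeling, discarding regions
    smaller than min_size cells."""
    rows, cols = len(frame), len(frame[0])
    flagged = [[frame[r][c] > mu + k * sigma for c in range(cols)]
               for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if flagged[r][c] and not seen[r][c]:
                # BFS flood fill over the 4-neighborhood
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and flagged[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_size:
                    blobs.append(comp)
    return blobs

# 6x6 frame with background intensity ~1.0 and one bright 2x2 region
frame = [[1.0] * 6 for _ in range(6)]
for y, x in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    frame[y][x] = 9.0
blobs = detect_blobs(frame, mu=1.0, sigma=1.0)
```

Both steps are a single pass over the frame, which is what makes the per-chunk millisecond budget plausible; the parallel implementation in the paper additionally partitions the grid across compute nodes.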
Recently, several researchers @cite_35 @cite_18 @cite_34 have developed blob-tracking algorithms that use raw fast-camera data directly with the gas puff imaging (GPI) technique. In @cite_35 @cite_34 , they leverage a contouring method, database techniques, and image analysis software to track the motion of blobs and changes in their structure. After normalizing each frame by an average frame created from roughly one thousand frames around the target time frame, the resulting images are contoured, and the closed contours satisfying certain size constraints are identified as blobs. An ellipse is then fitted to the contour midway between the smallest level contour and the peak. All information about the blobs is added to a SQL database for further analysis. This method is close to our approach, but it cannot be used for real-time blob detection because it computes a time-averaged intensity to normalize the local intensity. Additionally, only closed contours are treated as blobs, which may miss blobs at the edges of the regions of interest. Finally, these methods remain post-run analyses and cannot provide real-time feedback during fusion experiments.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_34" ], "mid": [ "2026158755", "2079038395", "" ], "abstract": [ "Abstract Fast 2-D cameras examine a variety of important aspects of the plasma edge and in-vessel components on the National Spherical Torus Experiment (NSTX). Four Phantom and two Miro visible-light cameras manufactured by Vision Research are used on NSTX for edge studies. Each camera can take several gigabytes (GBs) of data during each plasma pulse. Timely access to this amount of data can itself be a challenge, but analysing all these data using manual frame-by-frame examination is not practical. This paper describes image analysis, database techniques, and visualization methods used to organize the fast camera data and to facilitate physics insights from it. An example is presented of analysing and characterizing the size, movement and dynamics of coherent plasma structures (typically referred to as “blobs”) near the plasma edge. Software tools that generate statistics of blob speed, shape, amplitude, size, and orientation are described. The characteristics of emitted blobs affect plasma confinement and heat loads on plasma facing components, and are thus of particular interest to future machines like ITER.", "This article is part of Ralph Kube's doctoral thesis, which is available in Munin at http: hdl.handle.net 10037 7088", "" ] }
1505.03460
1670890661
Ambient radio frequency (RF) energy harvesting technique has recently been proposed as a potential solution for providing proactive energy replenishment for wireless devices. This paper aims to analyze the performance of a battery-free wireless sensor powered by ambient RF energy harvesting using a stochastic geometry approach. Specifically, we consider the point-to-point uplink transmission of a wireless sensor in a stochastic geometry network, where ambient RF sources, such as mobile transmit devices, access points and base stations, are distributed as a Ginibre @math -determinantal point process (DPP). The DPP is able to capture repulsion among points, and hence, it is more general than the Poisson point process (PPP). We analyze two common receiver architectures: separated receiver and time-switching architectures. For each architecture, we consider the scenarios with and without co-channel interference for information transmission. We derive the expectation of the RF energy harvesting rate in closed form and also compute its variance. Moreover, we perform a worst-case study which derives the upper bound of both power and transmission outage probabilities. Additionally, we provide guidelines on the setting of optimal time-switching coefficient in the case of the time-switching architecture. Numerical results verify the correctness of the analysis and show various tradeoffs between parameter setting. Lastly, we prove that the RF-powered sensor performs better when the distribution of the ambient sources exhibits stronger repulsion.
Stochastic geometry approaches have been applied to analyze RF energy harvesting performance in cellular networks @cite_34 , cognitive radio networks @cite_25 , and relay networks @cite_12 @cite_24 @cite_21 . The authors in @cite_34 investigate tradeoffs among the transmit power and densities of mobiles and wireless charging stations, both distributed as homogeneous Poisson point processes (PPPs). Energy harvesting relay networks have been analyzed most extensively. In @cite_25 , the authors study a cognitive radio network in which the primary and secondary networks are distributed as independent homogeneous PPPs. The secondary network is powered by energy opportunistically harvested from nearby transmitters in the primary network. Under outage probability requirements for both coexisting networks, the maximum throughput of the secondary network is analyzed. The study in @cite_12 analyzes the impact of cooperative density and relay selection in a large-scale network with transmitter-receiver pairs distributed as a PPP. Reference @cite_24 investigates a decode-and-forward relay network with multiple source-destination pairs; under the assumption that the relay nodes are distributed as a PPP, the network outage probability is characterized. The study in @cite_21 investigates the performance of a two-way network-coded cooperative network, where the source, destination, and RF-powered relay nodes are modeled as three independent PPPs.
{ "cite_N": [ "@cite_21", "@cite_24", "@cite_34", "@cite_25", "@cite_12" ], "mid": [ "2135373233", "1993710812", "2068499153", "2122489958", "2015786472" ], "abstract": [ "In this letter, we study the performance of network coding (NC)-aided cooperative communications in large scale networks, where the relays are able to harvest energy emitted by wireless transmissions. In particular, we derive theoretical expressions for key network performance metrics, i.e., the probability of successful data exchange and the network lifetime gain. The proposed analytical expressions are verified via extensive Monte Carlo simulations, demonstrating the potential benefits of the energy harvested by the wireless transmissions.", "In this paper, the application of wireless information and power transfer to cooperative networks is investigated, where the relays in the network are randomly located and based on the decode-forward strategy. For the scenario with one source-destination pair, three different strategies for using the available relays are studied, and their impact on the outage probability and diversity gain is characterized by applying stochastic geometry. By using the assumptions that the path loss exponent is two and that the relay-destination distances are much larger than the source-relay distances, closed form analytical results can be developed to demonstrate that the use of energy harvesting relays can achieve the same diversity gain as the case with conventional self-powered relays. For the scenario with multiple sources, the relays can be viewed as a type of scarce resource, where the sources compete with each other to get help from the relays. Such a competition is modeled as a coalition formation game, and two distributed game theoretic algorithms are developed based on different payoff functions. 
Simulation results are provided to confirm the accuracy of the developed analytical results and facilitate a better performance comparison.", "Microwave power transfer (MPT) delivers energy wirelessly from stations called power beacons (PBs) to mobile devices by microwave radiation. This provides mobiles practically infinite battery lives and eliminates the need of power cords and chargers. To enable MPT for mobile recharging, this paper proposes a new network architecture that overlays an uplink cellular network with randomly deployed PBs for powering mobiles, called a hybrid network. The deployment of the hybrid network under an outage constraint on data links is investigated based on a stochastic-geometry model where single-antenna base stations (BSs) and PBs form independent homogeneous Poisson point processes (PPPs) with densities λb and λp, respectively, and single-antenna mobiles are uniformly distributed in Voronoi cells generated by BSs. In this model, mobiles and PBs fix their transmission power at p and q, respectively; a PB either radiates isotropically, called isotropic MPT, or directs energy towards target mobiles by beamforming, called directed MPT. The model is used to derive the tradeoffs between the network parameters (p, λb, q, λp) under the outage constraint. First, consider the deployment of the cellular network. It is proved that the outage constraint is satisfied so long as the product pλbα 2 is above a given threshold where α is the path-loss exponent. Next, consider the deployment of the hybrid network assuming infinite energy storage at mobiles. It is shown that for isotropic MPT, the product qλpλbα 2 has to be above a given threshold so that PBs are sufficiently dense; for directed MPT, zmqλpλbα 2 with zm denoting the array gain should exceed a different threshold to ensure short distances between PBs and their target mobiles. 
Furthermore, similar results are derived for the case of mobiles having small energy storage.", "Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network.", "Energy harvesting (EH) from ambient radio-frequency (RF) electromagnetic waves is an efficient solution for fully autonomous and sustainable communication networks. 
Most of the related works presented in the literature are based on specific (and small-scale) network structures, which, although giving useful insights on the potential benefits of the RF-EH technology, cannot characterize the performance of general networks. In this paper, we adopt a large-scale approach to the RF-EH technology and characterize the performance of a network with a random number of transmitter-receiver pairs by using stochastic-geometry tools. Specifically, we analyze the outage probability performance and the average harvested energy, when receivers employ the power splitting (PS) technique for \"simultaneous\" information and energy transfer. A non-cooperative scheme, where information and energy are conveyed only via direct links, is firstly considered, and the outage performance of the system as well as the average harvested energy are derived in closed form as a function of the power splitting. For this protocol, an interesting optimization problem which minimizes the transmitted power under outage probability and harvesting constraints is formulated and solved in closed form. In addition, we study a cooperative protocol where sources' transmissions are supported by a random number of potential relays that are randomly distributed in the network. In this case, information and energy can be received at each destination via two independent and orthogonal paths (in the case of relaying). We characterize both performance metrics, when a selection combining scheme is applied at the receivers and a single relay is randomly selected for cooperative diversity." ] }
1505.03460
1670890661
The ambient radio frequency (RF) energy harvesting technique has recently been proposed as a potential solution for providing proactive energy replenishment for wireless devices. This paper aims to analyze the performance of a battery-free wireless sensor powered by ambient RF energy harvesting using a stochastic geometry approach. Specifically, we consider the point-to-point uplink transmission of a wireless sensor in a stochastic geometry network, where ambient RF sources, such as mobile transmit devices, access points and base stations, are distributed as a Ginibre @math -determinantal point process (DPP). The DPP is able to capture repulsion among points, and hence, it is more general than the Poisson point process (PPP). We analyze two common receiver architectures: separated receiver and time-switching architectures. For each architecture, we consider the scenarios with and without co-channel interference for information transmission. We derive the expectation of the RF energy harvesting rate in closed form and also compute its variance. Moreover, we perform a worst-case study which derives the upper bound of both power and transmission outage probabilities. Additionally, we provide guidelines on the setting of the optimal time-switching coefficient in the case of the time-switching architecture. Numerical results verify the correctness of the analysis and show various tradeoffs between parameter settings. Lastly, we prove that the RF-powered sensor performs better when the distribution of the ambient sources exhibits stronger repulsion.
Other than RF energy harvesting, stochastic geometry approaches have also been applied to address other types of energy harvesting systems. Reference @cite_31 investigates the network coverage of a hexagonal cellular network, where the base stations are powered by renewable energy and the mobiles are distributed as a PPP. The authors in @cite_13 explore the network coverage in a relay-assisted cellular network modeled as a PPP. Each relay node adopts an energy harvesting module, whose energy arrival process is assumed to be an independent and identically distributed Poisson process. In @cite_35 , the authors provide a fundamental characterization of the regimes under which a multiple-tier heterogeneous network with generic energy harvesting modules achieves the same performance as one with reliable energy sources. Different from the above studies, our previous work in @cite_17 adopts a determinantal point process model to analyze the downlink transmission performance from an access point to a sensor powered by ambient RF energy.
{ "cite_N": [ "@cite_35", "@cite_31", "@cite_13", "@cite_17" ], "mid": [ "2148058923", "2072373150", "2093779333", "1633315636" ], "abstract": [ "We develop a new tractable model for K-tier heterogeneous cellular networks (HetNets), where each base station (BS) is powered solely by a self-contained energy harvesting module. The BSs across tiers differ in terms of the energy harvesting rate, energy storage capacity, transmit power and deployment density. Since a BS may not always have enough energy, it may need to be kept OFF and allowed to recharge while nearby users are served by neighboring BSs that are ON. We show that the fraction of time a k-th tier BS can be kept ON, termed availability ρ_k, is a fundamental metric of interest. Using tools from random walk theory, fixed point analysis and stochastic geometry, we characterize the set of K-tuples (ρ_1,ρ_2,…ρ_K), termed the availability region, that is achievable by general uncoordinated operational strategies, where the decision to toggle the current ON/OFF state of a BS is taken independently of the other BSs. If the availability vector corresponding to the optimal system performance, e.g., in terms of rate, lies in this availability region, there is no performance loss due to the presence of unreliable energy sources. As a part of our analysis, we model the temporal dynamics of the energy level at each BS as a birth-death process, derive the energy utilization rate, and use hitting/stopping time analysis to prove that there exists a fundamental limit on ρ_k that cannot be surpassed by any uncoordinated strategy.", "Powering a radio access network using renewables such as wind and solar power promises dramatic reduction of the network operation cost and of the networks' carbon footprints. However, the spatial variation of the energy field can lead to fluctuation in power supplied to the network and thereby affects its coverage.
To quantify the effect, the paper considers a cellular downlink network with hexagonal cells and powered by harvesting energy. The network coverage of mobiles is specified by an outage constraint. A novel model of the energy field is developed using stochastic geometry. In the model, fixed maximum energy intensity occurs at Poisson distributed locations, called energy centers; the intensities fall off from the centers following an exponential-decay function of squared distance; the energy intensity at an arbitrary location is given by the decayed intensity from the nearest energy center. First, consider single harvesters deployed on the same sites as base stations (BSs). The mobile outage probability is shown to decrease exponentially with the product of the energy-field parameters: the energy-center density and exponential rate of the energy-decay function. Next, consider distributed harvesters whose generated energy is aggregated and then re-distributed to BSs. As the number of harvesters per aggregator increases, the power supplied to each BS is shown to converge to a constant proportional to the number of harvesters per BS, which counteracts the randomness of the energy field.", "", "The advance in RF energy transfer and harvesting technique over the past decade has enabled wireless energy replenishment for electronic devices, which is deemed as a promising alternative to address the energy bottleneck of conventional battery-powered devices. In this paper, by using a stochastic geometry approach, we aim to analyze the performance of an RF-powered wireless sensor in a downlink simultaneous wireless information and power transfer (SWIPT) system with ambient RF transmitters. Specifically, we consider the point-to-point downlink SWIPT transmission from an access point to a wireless sensor in a network, where ambient RF transmitters are distributed as a Ginibre α-determinantal point process (DPP), which becomes the Poisson point process when α approaches zero.
In the considered network, we focus on analyzing the performance of a sensor equipped with the power-splitting architecture. Under this architecture, we characterize the expected RF energy harvesting rate of the sensor. Moreover, we derive the upper bound of both power and transmission outage probabilities. Numerical results show that our upper bounds are accurate for different values of α." ] }
1505.03509
2112194971
In this paper we study the difficulty of counting nodes in a synchronous dynamic network where nodes share the same identifier, they communicate by using a broadcast with unlimited bandwidth and, at each synchronous round, the network topology may change. To count in such a setting, it has been shown that the presence of a leader is necessary. We focus on a particularly interesting subset of dynamic networks, namely Persistent Distance G(PD)h, in which each node has a fixed distance from the leader across rounds and such distance is at most h. In these networks the dynamic diameter D is at most 2h. We prove that the number of rounds for counting in G(PD)2 is at least logarithmic with respect to the network size |V|. Thanks to this result, we show that counting on any dynamic anonymous network with D constant w.r.t. |V| takes at least D + Ω(log |V|) rounds, where Ω(log |V|) represents the additional cost to be paid for handling anonymity. At the best
The question concerning what can be computed on top of static anonymous networks has been pioneered by Angluin in @cite_13 and by Yamashita and Kameda @cite_1 . In the domain of non-anonymous dynamic networks the counting problem has been addressed in the following contexts: (i) dynamicity governed by node churn in the context of distributed query execution @cite_17 @cite_15 , (ii) dynamicity governed by a random adversary in the context of peer-to-peer networks @cite_8 and (iii) dynamicity governed by a worst-case adversary in the context of @math -interval connectivity @cite_7 @cite_10 .
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_10", "@cite_1", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "2120957127", "2106737683", "2120741723", "1964154267", "2022378993", "1968191405", "1976404012" ], "abstract": [ "We investigate to what extent flooding and routing are possible if the graph is allowed to change unpredictably at each time step. We study what minimal requirements are necessary so that a node may correctly flood or route a message in a network whose links may change arbitrarily at any given point, subject to the condition that the underlying graph is connected. We look at algorithmic constraints such as limited storage, no knowledge of an upper bound on the number of nodes, and no usage of identifiers. We look at flooding as well as routing to some existing specified destination and give algorithms.", "As the size of distributed systems keeps growing, the peer to peer communication paradigm has been identified as the key to scalability. Peer to peer overlay networks are characterized by their self-organizing capabilities, resilience to failure and fully decentralized control. In a peer to peer overlay, no entity has a global knowledge of the system. As much as this property is essential to ensure scalability, monitoring the system under such circumstances is a complex task. Yet, estimating the size of the system is core functionality for many distributed applications, for parameter setting or monitoring purposes. In this paper, we propose a comparative study of three algorithms that estimate in a fully decentralized way the size of a peer to peer overlay. Candidate approaches are generally applicable irrespective of the underlying structure of the peer to peer overlay. The paper reports the head-to-head comparison of system size estimation algorithms.
The simulations have been conducted using the same simulation framework and inputs and highlight the differences in cost and accuracy of the estimation between the algorithms both in static and dynamic settings.", "In this paper we investigate distributed computation in dynamic networks in which the network topology changes from round to round. We consider a worst-case model in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. The model captures mobile networks and wireless networks, in which mobility and interference render communication unpredictable. In contrast to much of the existing work on dynamic networks, we do not assume that the network eventually stops changing; we require correctness and termination even in networks that change continually. We introduce a stability property called T-interval connectivity (for T >= 1), which stipulates that for every T consecutive rounds there exists a stable connected spanning subgraph. For T = 1 this means that the graph is connected in every round, but changes arbitrarily between rounds. We show that in 1-interval connected graphs it is possible for nodes to determine the size of the network and compute any computable function of their initial inputs in O(n^2) rounds using messages of size O(log n + d), where d is the size of the input to a single node. Further, if the graph is T-interval connected for T > 1, the computation can be sped up by a factor of T, and any function can be computed in O(n + n^2/T) rounds using messages of size O(log n + d). We also give two lower bounds on the token dissemination problem, which requires the nodes to disseminate k pieces of information to all the nodes in the network.
The T-interval connected dynamic graph model is a novel model, which we believe opens new avenues for research in the theory of distributed computing in wireless, mobile and dynamic networks.", "", "This paper studies the problem of answering aggregation queries, satisfying the interval validity semantics, in a distributed system prone to continuous arrival and departure of participants. The interval validity semantics states that the query answer must be calculated considering contributions of at least all processes that remained in the distributed system for the whole query duration. Satisfying this semantics in systems experiencing unbounded churn is impossible due to the lack of connectivity and path stability between processes. This paper presents a novel architecture, namely Virtual Tree, for building and maintaining a structured overlay network with guaranteed connectivity and path stability in settings characterized by bounded churn rate. The architecture includes a simple query answering algorithm that provides interval valid answers. The overlay network generated by the Virtual Tree architecture is a tree-shaped topology with virtual nodes constituted by clusters of processes and virtual links constituted by multiple communication links connecting processes located in adjacent virtual nodes. We formally prove a bound on the churn rate for interval valid queries in a distributed system where communication latencies are bounded by a constant unknown by processes. 
Finally, we carry out an extensive experimental evaluation that shows the degree of robustness of the overlay network generated by the virtual tree architecture under different churn rates.", "This paper attempts to get at some of the fundamental properties of distributed computing by means of the following question: “How much does each processor in a network of processors need to know about its own identity, the identities of other processors, and the underlying connection network in order for the network to be able to carry out useful functions?” The approach we take is to require that the processors be designed without any knowledge (or only very broad knowledge) of the networks they are to be used in, and furthermore, that all processors with the same number of communication ports be identical. Given a particular network function, e.g., setting up a spanning tree, we ask whether processors may be designed so that when they are embedded in any connected network and started in some initial configuration, they are guaranteed to accomplish the desired function.", "Massive-scale self-administered networks like Peer-to-Peer and Sensor Networks have data distributed across thousands of participant hosts. These networks are highly dynamic with short-lived hosts being the norm rather than an exception. In recent years, researchers have investigated best-effort algorithms to efficiently process aggregate queries (e.g., sum, count, average, minimum and maximum) on these networks. Unfortunately, query semantics for best-effort algorithms are ill-defined, making it hard to reason about guarantees associated with the result returned. In this paper, we specify a correctness condition, Single-Site Validity, with respect to which the above algorithms are best-effort. We present a class of algorithms that guarantee validity in dynamic networks. 
Experiments on real-life and synthetic network topologies validate performance of our algorithms, revealing the hitherto unknown price of validity." ] }
1505.03509
2112194971
In this paper we study the difficulty of counting nodes in a synchronous dynamic network where nodes share the same identifier, they communicate by using a broadcast with unlimited bandwidth and, at each synchronous round, the network topology may change. To count in such a setting, it has been shown that the presence of a leader is necessary. We focus on a particularly interesting subset of dynamic networks, namely Persistent Distance G(PD)h, in which each node has a fixed distance from the leader across rounds and such distance is at most h. In these networks the dynamic diameter D is at most 2h. We prove that the number of rounds for counting in G(PD)2 is at least logarithmic with respect to the network size |V|. Thanks to this result, we show that counting on any dynamic anonymous network with D constant w.r.t. |V| takes at least D + Ω(log |V|) rounds, where Ω(log |V|) represents the additional cost to be paid for handling anonymity. At the best
Counting in anonymous dynamic networks: In @cite_4 , the authors propose a gossip-based protocol to compute aggregation functions in a dynamic network by exploiting an invariant defined over the whole set of processes. The network graph considered by @cite_4 is governed by a fair random adversary. The first work investigating the problem of counting in an anonymous network with a worst-case adversary is @cite_6 . The authors provided an algorithm that, under the assumption of a fixed upper bound on the maximum node degree, computes an upper bound on the size of the network. Building on this result, @cite_3 proposes an exact counting algorithm under the same assumption. Finally, @cite_14 provides a counting algorithm for 1-interval connected networks assuming each node is equipped with a local degree detector, i.e. an oracle able to predict the degree of the node in each graph generated by the adversary. Both @cite_3 and @cite_14 terminate in an exponential number of rounds.
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_4", "@cite_6" ], "mid": [ "2025284419", "1857402595", "2110100895", "2172782997" ], "abstract": [ "In this paper we investigate the problem of counting the size of a network where processes are anonymous (i.e., they share the same identifier) and the network topology constantly changes, controlled by an adversary able to look at internal process states and add and remove edges in order to contrast the convergence of the algorithm to the correct count. It is easy to show that, if the adversary can generate graphs without any constraint on the connectivity (i.e. it can generate topologies where there exist nodes not able to influence the others), counting is impossible. In this paper we consider a synchronous round-based computation and the dynamicity is governed by a worst-case adversary that generates a sequence of graphs, one for each round, with the only constraint that each graph must be connected (1-interval connectivity property). It has been conjectured that counting in a finite time against such an adversary is impossible and the existing solutions consider that each process has some knowledge about network topologies generated by the adversary, i.e. at each round, each node has a degree less than D. Along the path of proving the validity (or not) of the conjecture, this paper presents an algorithm that counts in a finite time against the worst-case adversary assuming each process is equipped with an oracle. The latter provides a process at each round r with an estimation of the process degree in the graph generated by the adversary at round r. To the best of our knowledge, this is the first counting algorithm (terminating in a finite time) where processes exploit the minimal knowledge about the behavior of the adversary.
Interestingly, such an oracle can be implemented in a wide range of real systems.", "This paper addresses the problem of counting the size of a network where (i) processes have the same identifier (anonymous nodes) and (ii) the network topology constantly changes (dynamic network). Changes are driven by a powerful adversary that can look at internal process states and add and remove edges in order to contrast the convergence of the algorithm to the correct count. The paper proposes two leader-based counting algorithms. Such algorithms are based on a technique that mimics an energy-transfer between network nodes. The first algorithm assumes that the adversary cannot generate either disconnected network graphs or network graphs where nodes have degree greater than D. In such an algorithm, the leader can count the size of the network and detect the counting termination in a finite time (i.e., a conscious counting algorithm). The second algorithm assumes that the adversary only keeps the network graph connected at any time and we prove that the leader can still converge to a correct count in a finite number of rounds, but it is not conscious when this convergence happens.", "Over the last decade, we have seen a revolution in connectivity between computers, and a resulting paradigm shift from centralized to highly distributed systems. With massive scale also comes massive instability, as node and link failures become the norm rather than the exception. For such highly volatile systems, decentralized gossip-based protocols are emerging as an approach to maintaining simplicity and scalability while achieving fault-tolerant information dissemination. In this paper, we study the problem of computing aggregates with gossip-style protocols.
Our first contribution is an analysis of simple gossip-based protocols for the computation of sums, averages, random samples, quantiles, and other aggregate functions, and we show that our protocols converge exponentially fast to the true answer when using uniform gossip. Our second contribution is the definition of a precise notion of the speed with which a node's data diffuses through the network. We show that this diffusion speed is at the heart of the approximation guarantees for all of the above problems. We analyze the diffusion speed of uniform gossip in the presence of node and link failures, as well as for flooding-based mechanisms. The latter expose interesting connections to random walks on graphs.", "Contribution. We study the fundamental naming and counting problems in networks that are anonymous, unknown, and possibly dynamic. Network dynamicity is modeled by the 1-interval connectivity model [KLO10]. We first prove that on static networks with broadcast counting is impossible to solve without a leader and that naming is impossible to solve even with a leader and even if nodes know n. These impossibilities carry over to dynamic networks as well. With a leader we solve counting in linear time. Then we focus on dynamic networks with broadcast. We show that if nodes know an upper bound on the maximum degree that will ever appear then they can obtain an upper bound on n. Finally, we replace broadcast with one-to-each, in which a node may send a different message to each of its neighbors. This variation is then proved to be computationally equivalent to a full-knowledge model with unique names." ] }
1505.03290
2207021687
We describe algorithms for computing eigenpairs (eigenvalue--eigenvector) of a complex @math matrix @math . These algorithms are numerically stable, strongly accurate, and theoretically efficient (i.e., polynomial-time). We do not believe they outperform in practice the algorithms currently used for this computational problem. The merit of our paper is to give a positive answer to a long-standing open problem in numerical linear algebra.
Homotopy continuation methods go back, at least, to the work of Lahaye @cite_37 . A detailed survey of their use to solve polynomial equations is in @cite_27 . A more explicit focus on eigenvalue computations appears in @cite_21 @cite_32 @cite_13 @cite_30 , but we do not know of any serious attempt to implement them.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_21", "@cite_32", "@cite_27", "@cite_13" ], "mid": [ "2023799556", "2901269463", "2045349809", "2023512872", "1676198905", "2015632604" ], "abstract": [ "The eigenvalue problem of a matrix A can be considered a system of polynomial equations @math in complex variables @math and @math . In this paper, a homotopy continuation algorithm for solving eigenvalue problems of real nonsymmetric matrices is developed based on this point. Different from current homotopy continuation methods for real nonsymmetric matrices, this algorithm makes use of the homotopy which consists of real polynomials. Hence when a complex path is followed, its conjugate path is obtained as a by-product. Moreover, techniques including a double step inverse iteration are developed to eliminate complex computations completely. A great amount of operations is saved with these complex-to-real conversions.The algorithm employed the strategy of “divide and conquer,” which makes most of the eigen-paths almost straight lines and extremely easy to follow. Although some of the paths may contain bifurcation, the situation is handled with a complexification process whic...", "", "Abstract The homotopy method is used to find all eigenpairs of symmetric matrices. A special homotopy is constructed for Jacobi matrices. It is shown that there are exactly n distinct smooth curves connecting trivial solutions to desired eigenpairs. These curves are solutions of a certain ordinary differential equation with different initial values. Hence, they can be followed numerically. Incorporated with sparse matrix techniques, this method might be used to solve eigenvalue problems for large scale matrices.", "Abstract Generalized eigenvalue problems can be considered as a system of polynomials. The homotopy continuation method is used to find all the isolated zeros of the polynomial system which corresponds to the eigenpairs of the generalized eigenvalue problem. 
A special homotopy is constructed in such a way that there are exactly n distinct smooth curves connecting trivial solutions to desired eigenpairs. Since the curves followed by the general homotopy curve-following scheme are computed independently of one another, the algorithm is a likely candidate for exploiting the advantages of parallel processing for generalized eigenvalue problems.", "Publisher Summary This chapter presents the numerical solution of polynomial systems by homotopy continuation methods. Solving polynomial systems is an area where numerical computations arise almost naturally. Given the complexity of the problem, standard machine arithmetic is used to obtain efficient programs.", "The eigenvalues of a matrix A are the zeros of its characteristic polynomial f[lambda] = det[A - [lambda]I]. With Hyman's method of determinant evaluation, a new homotopy continuation method, the homotopy-determinant method, is developed in this paper for finding all eigenvalues of a real upper Hessenberg matrix. In contrast to other homotopy continuation methods, the homotopy-determinant method calculates eigenvalues without computing their corresponding eigenvectors. Like all homotopy methods, our method solves the eigenvalue problem by following eigenvalue paths of a real homotopy whose regularity is established to the extent necessary. The inevitable bifurcation and possible path jumping are handled by effective processes. 18 refs., 4 figs., 1 tab." ] }
1505.03290
2207021687
We describe algorithms for computing eigenpairs (eigenvalue--eigenvector) of a complex @math matrix @math . These algorithms are numerically stable, strongly accurate, and theoretically efficient (i.e., polynomial-time). We do not believe they outperform in practice the algorithms currently used for this computational problem. The merit of our paper is to give a positive answer to a long-standing open problem in numerical linear algebra.
In the early 1990s Shub and Smale set up a program to understand the cost of solving square systems of complex polynomial equations using homotopy methods. In a collection of articles @cite_42 @cite_34 @cite_0 @cite_23 @cite_39 , known as the Bézout series, they put in place many of the notions and techniques that occur in this article. The Bézout series did not, however, conclusively settle the understanding of the cost above, and in 1998 Smale proposed it as the 17th in his list of problems for the mathematicians of the 21st century @cite_33 . The problem is not yet considered fully solved by the community but significant advances appear in @cite_5 @cite_29 @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_29", "@cite_42", "@cite_39", "@cite_0", "@cite_23", "@cite_5", "@cite_34" ], "mid": [ "2120280171", "2044413922", "2061860086", "2089003198", "2063917062", "2592503220", "1965654736", "2129280636", "1590743980" ], "abstract": [ "The 17th of the problems proposed by Steve Smale for the 21st century asks for the existence of a deterministic algorithm computing an approximate solution of a system of n complex polynomials in n unknowns in time polynomial, on the average, in the size N of the input system. A partial solution to this problem was given by Carlos Beltrán and Luis Miguel Pardo who exhibited a randomized algorithm doing so. In this paper we further extend this result in several directions. Firstly, we exhibit a linear homotopy algorithm that efficiently implements a nonconstructive idea of", "V. I. Arnold, on behalf of the International Mathematical Union has written to a number of mathematicians with a suggestion that they describe some great problems for the next century. This report is my response. Arnold's invitation is inspired in part by Hilbert's list of 1900 (see e.g. [Browder, 1976]) and I have used that list to help design this essay. I have listed 18 problems, chosen with these criteria:", "We prove a new complexity bound, polynomial on the average, for the problem of finding an approximate zero of systems of polynomial equations. The average number of Newton steps required by this method is almost linear in the size of the input (dense encoding). We show that the method can also be used to approximate several or all the solutions of non-degenerate systems, and prove that this last task can be done in running time which is linear in the Bezout number of the system and polynomial in the size of the input, on the average.", "", "We show that there are algorithms which find an approximate zero of a system of polynomial equations and which function in polynomial time on the average.
The number of arithmetic operations is cN 4s , where N is the input size and c a universal constant", "", "We estimate the probability that a given number of projective Newton steps applied to a linear homotopy of a system of n homogeneous polynomial equations in @math complex variables of fixed degrees will find all the roots of the system. We also extend the framework of our analysis to cover the classical implicit function theorem and revisit the condition number in this context. Further complexity theory is developed.", "Smale’s 17th Problem asks: “Can a zero of n complex polynomial equations in n unknowns be found approximately, on the average, in polynomial time with a uniform algorithm?”. We give a positive answer to this question. Namely, we describe a uniform probabilistic algorithm that computes an approximate zero of systems of polynomial equations f : Cn −→ Cn, performing a number of arithmetic operations which is polynomial in the size of the input, on the average.", "In this paper we study volume estimates in the space of systems of n homogeneous polynomial equations of fixed degrees d_i with respect to a natural Hermitian structure on the space of such systems invariant under the action of the unitary group. We show that the average number of real roots of real systems is D^{1/2} where D = Π d_i is the Bézout number. We estimate the volume of the subspace of badly conditioned problems and show that volume is bounded by a small degree polynomial in n, N and D times the reciprocal of the condition number to the fourth power. Here N is the dimension of the space of systems." ] }
1505.03290
2207021687
We describe algorithms for computing eigenpairs (eigenvalue--eigenvector) of a complex @math matrix @math . These algorithms are numerically stable, strongly accurate, and theoretically efficient (i.e., polynomial-time). We do not believe they outperform in practice the algorithms currently used for this computational problem. The merit of our paper is to give a positive answer to a long-standing open problem in numerical linear algebra.
The results in these papers cannot be directly used for the eigenpair problem since instances of the latter are ill-posed as polynomial systems. But the intervening ideas can be reshaped to attempt a tailor-made analysis for the eigenpair problem. A major step in this direction was done by Armentano in his PhD thesis (see @cite_3 and its precedent @cite_24 ), where the condition number @math for the eigenpair problem was exhaustively studied. A further step was taken in @cite_9 where @math was used to analyze a randomized algorithm for the Hermitian eigenpair problem.
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_3" ], "mid": [ "", "2038332805", "2016553765" ], "abstract": [ "", "We describe and analyze a randomized homotopy algorithm for the Hermitian eigenvalue problem. Given an @math n×n Hermitian matrix @math A, the algorithm returns, almost surely, a pair @math (?,v) which approximates, in a very strong sense, an eigenpair of @math A. We prove that the expected cost of this algorithm, where the expectation is both over the random choices of the algorithm and a probability distribution on the input matrix @math A, is @math O(n6), that is, cubic on the input size. Our result relies on a cost assumption for some pseudorandom number generators whose rationale is argued by us.", "A unitarily invariant projective framework is introduced to analyze the complexity of path-following methods for the eigenvalue problem. A condition number, and its relation to the distance to ill-posedness, is given. A Newton map appropriate for this context is defined, and a version of Smale’s ( )-theorem is proven. The main result of this paper bounds the complexity of path-following methods in terms of the length of the path in the condition metric." ] }
1505.03358
1532457925
The dynamics of attention in social media tend to obey power laws. Attention concentrates on a relatively small number of popular items, neglecting the vast majority of content produced by the crowd. Although popularity can be an indication of the perceived value of an item within its community, previous research has hinted at the fact that popularity is distinct from intrinsic quality. As a result, content with low visibility but high quality lurks in the tail of the popularity distribution. This phenomenon can be particularly evident in the case of photo-sharing communities, where valuable photographers who are not highly engaged in online social interactions contribute high-quality pictures that remain unseen. We propose to use a computer vision method to surface beautiful pictures from the immense pool of near-zero-popularity items, and we test it on a large dataset of creative-commons photos on Flickr. By gathering a large crowdsourced ground truth of aesthetics scores for Flickr images, we show that our method retrieves photos whose median perceived beauty score is equal to that of the most popular ones, and whose average is lower by only 1.5%.
Computational aesthetics is the branch of computer vision that studies how to automatically score images in terms of their photographic beauty. (datta) and (ke2006design) designed the first compositional features to distinguish amateur from professional photos. Computational aesthetics researchers have been developing dedicated discriminative visual features and attributes @cite_42 @cite_27 , generic semantic features @cite_7 @cite_36 , topic-specific models @cite_22 @cite_8 and effective learning frameworks @cite_16 to improve the quality of aesthetics predictors. Aesthetic features have also been used to infer higher-level properties of images and videos, such as image affective value @cite_23 , image memorability @cite_30 , video creativity @cite_9 , and video interestingness @cite_57 @cite_59 . To our knowledge, this is the first time that image aesthetic predictors are used to expose high-quality content from low-popularity images in the context of social media.
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_7", "@cite_8", "@cite_36", "@cite_9", "@cite_42", "@cite_57", "@cite_27", "@cite_23", "@cite_59", "@cite_16" ], "mid": [ "2009627211", "2104915826", "2048835603", "2024595555", "2078807908", "2025510654", "2007773296", "", "2080754665", "2003856922", "2150578721", "2157922518" ], "abstract": [ "When glancing at a magazine, or browsing the Internet, we are continuously being exposed to photographs. Despite of this overflow of visual information, humans are extremely good at remembering thousands of pictures along with some of their visual details. But not all images are equal in memory. Some stitch to our minds, and other are forgotten. In this paper we focus on the problem of predicting how memorable an image will be. We show that memorability is a stable property of an image that is shared across different viewers. We introduce a database for which we have measured the probability that each picture will be remembered after a single view. We analyze image features and labels that contribute to making an image memorable, and we train a predictor based on global image descriptors. We find that predicting image memorability is a task that can be addressed with current computer vision techniques. Whereas making memorable images is a challenging task in visualization and photography, this work is a first attempt to quantify this useful quality of images.", "Traditionally, distinguishing between high quality professional photos and low quality amateurish photos is a human task. To automatically assess the quality of a photo that is consistent with humans perception is a challenging topic in computer vision. Various differences exist between photos taken by professionals and amateurs because of the use of photography techniques. Previous methods mainly use features extracted from the entire image. 
In this paper, based on professional photography techniques, we first extract the subject region from a photo, and then formulate a number of high-level semantic features based on this subject and background division. We test our features on a large and diverse photo database, and compare our method with the state of the art. Our method performs significantly better, with a classification rate of 93% versus 72% by the best existing method. In addition, we conduct the first study on high-level video quality assessment. Our system achieves a precision of over 95% at a reasonable recall rate for both photo and video assessments. We also show excellent application results in web image search re-ranking.", "In this paper, we automatically assess the aesthetic properties of images. In the past, this problem has been addressed by hand-crafting features which would correlate with best photographic practices (e.g. “Does this image respect the rule of thirds?”) or with photographic techniques (e.g. “Is this image a macro?”). We depart from this line of research and propose to use generic image descriptors to assess aesthetic quality. We experimentally show that the descriptors we use, which aggregate statistics computed from low-level local features, implicitly encode the aesthetic properties explicitly used by state-of-the-art methods and outperform them by a significant margin.
It contains over 250,000 images along with a rich variety of meta-data including a large number of aesthetic scores for each image, semantic labels for over 60 categories as well as labels related to photographic style. We show the advantages of AVA with respect to existing databases in terms of scale, diversity, and heterogeneity of annotations. We then describe several key insights into aesthetic preference afforded by AVA. Finally, we demonstrate, through three applications, how the large scale of AVA can be leveraged to improve performance on existing preference tasks.", "The notion of creativity, as opposed to related concepts such as beauty or interestingness, has not been studied from the perspective of automatic analysis of multimedia content. Meanwhile, short online videos shared on social media platforms, or micro-videos, have arisen as a new medium for creative expression. In this paper we study creative micro-videos in an effort to understand the features that make a video creative, and to address the problem of automatic detection of creative content. Defining creative videos as those that are novel and have aesthetic value, we conduct a crowdsourcing experiment to create a dataset of over 3, 800 micro-videos labelled as creative and non-creative. We propose a set of computational features that we map to the components of our definition of creativity, and conduct an analysis to determine which of these features correlate most with creative video. Finally, we evaluate a supervised approach to automatically detect creative video, with promising results, showing that it is necessary to model both aesthetic value and novelty to achieve optimal classification accuracy.", "Aesthetic quality classification plays an important role in how people organize large photo collections. 
In particular, color harmony is a key factor in the various aspects that determine the perceived quality of a photo, and it should be taken into account to improve the performance of automatic aesthetic quality classification. However, the existing models of color harmony take only simple color patterns into consideration–e.g., patches consisting of a few colors–and thus cannot be used to assess photos with complicated color arrangements. In this work, we tackle the challenging problem of evaluating the color harmony of photos with a particular focus on aesthetic quality classification. A key point is that a photograph can be seen as a collection of local regions with color variations that are relatively simple. This led us to develop a method for assessing the aesthetic quality of a photo based on the photo's color harmony. We term the method ‘bags-of-color-patterns.’ Results of experiments on a large photo collection with user-provided aesthetic quality scores show that our aesthetic quality classification method, which explicitly takes into account the color harmony of a photo, outperforms the existing methods. Results also show that the classification performance is improved by combining our color harmony feature with blur, edges, and saliency features that reflect the aesthetics of the photos.", "", "With the rise in popularity of digital cameras, the amount of visual data available on the web is growing exponentially. Some of these pictures are extremely beautiful and aesthetically pleasing, but the vast majority are uninteresting or of low quality. This paper demonstrates a simple, yet powerful method to automatically select high aesthetic quality images from large image collections. Our aesthetic quality estimation method explicitly predicts some of the possible image cues that a human might use to evaluate an image and then uses them in a discriminative approach. 
These cues or high level describable image attributes fall into three broad types: 1) compositional attributes related to image layout or configuration, 2) content attributes related to the objects or scene types depicted, and 3) sky-illumination attributes related to the natural lighting conditions. We demonstrate that an aesthetics classifier trained on these describable attributes can provide a significant improvement over baseline methods for predicting human quality judgments. We also demonstrate our method for predicting the “interestingness” of Flickr photos, and introduce a novel problem of estimating query specific “interestingness”.", "Images can affect people on an emotional level. Since the emotions that arise in the viewer of an image are highly subjective, they are rarely indexed. However there are situations when it would be helpful if images could be retrieved based on their emotional content. We investigate and develop methods to extract and combine low-level features that represent the emotional content of an image, and use these for image emotion classification. Specifically, we exploit theoretical and empirical concepts from psychology and art theory to extract image features that are specific to the domain of artworks with emotional expression. For testing and training, we use three data sets: the International Affective Picture System (IAPS); a set of artistic photography from a photo sharing site (to investigate whether the conscious use of colors and textures displayed by the artists improves the classification); and a set of peer rated abstract paintings to investigate the influence of the features and ratings on pictures without contextual content. Improved classification results are obtained on the International Affective Picture System (IAPS), compared to state of the art work.", "The amount of videos available on the Web is growing explosively. 
While some videos are very interesting and receive high rating from viewers, many of them are less interesting or even boring. This paper conducts a pilot study on the understanding of human perception of video interestingness, and demonstrates a simple computational method to identify more interesting videos. To this end we first construct two datasets of Flickr and YouTube videos respectively. Human judgements of interestingness are collected and used as the ground-truth for training computational models. We evaluate several off-the-shelf visual and audio features that are potentially useful for predicting interestingness on both datasets. Results indicate that audio and visual features are equally important and the combination of both modalities shows very promising results.", "Visual quality (VisQ) representation is a fundamental step in the learning of a VisQ prediction model for photos. It not only reflects how we understand VisQ but also determines the label type. Existing studies apply a scalar value (i.e., a categorical label or a score) to represent VisQ. As VisQ is a subjective property, only a scalar value is insufficient to represent human's perceived VisQ of a photo. This study represents VisQ by a distribution on pre-defined ordinal basic ratings in order to capture the subjectivity of VisQ better. When using the new representation, the label type is structural instead of scalar. Conventional learning algorithms cannot be directly applied in model learning. Meanwhile, for many photos, the numbers of users involved in the evaluation are limited, making some labels unreliable. In this study, a new algorithm called support vector distribution regression (SVDR) is presented to deal with the structural output learning. Two independent learning strategies (reliability-sensitive learning and label refinement) are proposed to alleviate the difficulty of insufficient involved users for rating. 
Combining SVDR with the two learning strategies, two separate structural-output regression algorithms (i.e., reliability-sensitive SVDR and label refinement-based SVDR) are produced. Experimental results demonstrate the effectiveness of our introduced learning strategies and learning algorithms." ] }
1505.03358
1532457925
The dynamics of attention in social media tend to obey power laws. Attention concentrates on a relatively small number of popular items, neglecting the vast majority of content produced by the crowd. Although popularity can be an indication of the perceived value of an item within its community, previous research has hinted at the fact that popularity is distinct from intrinsic quality. As a result, content with low visibility but high quality lurks in the tail of the popularity distribution. This phenomenon can be particularly evident in the case of photo-sharing communities, where valuable photographers who are not highly engaged in online social interactions contribute high-quality pictures that remain unseen. We propose to use a computer vision method to surface beautiful pictures from the immense pool of near-zero-popularity items, and we test it on a large dataset of creative-commons photos on Flickr. By gathering a large crowdsourced ground truth of aesthetics scores for Flickr images, we show that our method retrieves photos whose median perceived beauty score is equal to that of the most popular ones, and whose average is lower by only 1.5%.
Existing aesthetic ground truths are often derived from photo contest websites, such as DPChallenge.com @cite_49 or Photo.net @cite_6 , where (semi-)professional photographers can rate the quality of their peers' images. The average quality and style of the images in such datasets is far higher than the typical picture quality in photo-sharing sites, making them unsuitable for training aesthetic models. Hybrid datasets @cite_54 that add lower-quality images to overcome this issue are also not good for training @cite_36 . In addition, social signals such as Flickr interestingness (the Flickr interestingness algorithm is secret, but it considers some metrics of social feedback; for more details refer to https://www.flickr.com/explore/interesting) @cite_59 are often used as a proxy for aesthetics in that type of dataset. However, no quantitative evidence is given that either Flickr interestingness or the popularity of the photographers is a good proxy for image quality, which is exactly the research question we address. Crowdsourcing constitutes a reliable way to collect ground truths on image features @cite_37 ; however, the only attempt to do so in the context of aesthetics has been limited in scope (faces) and very small-scale @cite_56 .
{ "cite_N": [ "@cite_37", "@cite_36", "@cite_54", "@cite_6", "@cite_56", "@cite_59", "@cite_49" ], "mid": [ "2013103455", "2078807908", "1997095443", "1511924373", "2157505157", "2150578721", "2170658603" ], "abstract": [ "Crowdsourcing has the potential to become a preferred tool to study image aesthetic appeal preferences of users. Nevertheless, some reliability issues still exist, partially due to the sometimes doubtful commitment of paid workers to perform the rating task properly. In this paper we compare the reliability in scoring image aesthetic appeal of both a paid and a volunteer crowd. We recruit our volunteers through Facebook and our paid users via Microworkers. We conclude that, whereas volunteer participants are more likely to leave the rating task unfinished, when they complete it they do so more reliably than paid users.", "With the ever-expanding volume of visual content available, the ability to organize and navigate such content by aesthetic preference is becoming increasingly important. While still in its nascent stage, research into computational models of aesthetic preference already shows great potential. However, to advance research, realistic, diverse and challenging databases are needed. To this end, we introduce a new large-scale database for conducting Aesthetic Visual Analysis: AVA. It contains over 250,000 images along with a rich variety of meta-data including a large number of aesthetic scores for each image, semantic labels for over 60 categories as well as labels related to photographic style. We show the advantages of AVA with respect to existing databases in terms of scale, diversity, and heterogeneity of annotations. We then describe several key insights into aesthetic preference afforded by AVA. 
Finally, we demonstrate, through three applications, how the large scale of AVA can be leveraged to improve performance on existing preference tasks.", "Automatically assessing photo quality from the perspective of visual aesthetics is of great interest in high-level vision research and has drawn much attention in recent years. In this paper, we propose content-based photo quality assessment using regional and global features. Under this framework, subject areas, which draw the most attentions of human eyes, are first extracted. Then regional features extracted from subject areas and the background regions are combined with global features to assess the photo quality. Since professional photographers may adopt different photographic techniques and may have different aesthetic criteria in mind when taking different types of photos (e.g. landscape versus portrait), we propose to segment regions and extract visual features in different ways according to the categorization of photo content. Therefore we divide the photos into seven categories based on their content and develop a set of new subject area extraction methods and new visual features, which are specially designed for different categories. This argument is supported by extensive experimental comparisons of existing photo quality assessment approaches as well as our new regional and global features over different categories of photos. Our new features significantly outperform the state-of-the-art methods. Another contribution of this work is to construct a large and diversified benchmark database for the research of photo quality assessment. It includes 17, 613 photos with manually labeled ground truth.", "Aesthetics, in the world of art and photography, refers to the principles of the nature and appreciation of beauty. Judging beauty and other aesthetic qualities of photographs is a highly subjective task. Hence, there is no unanimously agreed standard for measuring aesthetic value. 
In spite of the lack of firm rules, certain features in photographic images are believed, by many, to please humans more than certain others. In this paper, we treat the challenge of automatically inferring aesthetic quality of pictures using their visual content as a machine learning problem, with a peer-rated online photo sharing Website as data source. We extract certain visual features based on the intuition that they can discriminate between aesthetically pleasing and displeasing images. Automated classifiers are built using support vector machines and classification trees. Linear regression on polynomial terms of the features is also applied to infer numerical aesthetics ratings. The work attempts to explore the relationship between emotions which pictures arouse in people, and their low-level content. Potential applications include content-based image retrieval and digital photography.", "Automatically assessing the subjective quality of a photo is a challenging area in visual computing. Previous works study the aesthetic quality assessment on a general set of photos regardless of the photo's content and mainly use features extracted from the entire image. In this work, we focus on a specific genre of photos: consumer photos with faces. This group of photos constitutes an important part of consumer photo collections. We first conduct an online study on Mechanical Turk to collect ground-truth and subjective opinions for a database of consumer photos with faces. We then extract technical features, perceptual features, and social relationship features to represent the aesthetic quality of a photo, by focusing on face-related regions. Experiments show that our features perform well for categorizing or predicting the aesthetic quality.", "The amount of videos available on the Web is growing explosively. While some videos are very interesting and receive high rating from viewers, many of them are less interesting or even boring. 
This paper conducts a pilot study on the understanding of human perception of video interestingness, and demonstrates a simple computational method to identify more interesting videos. To this end we first construct two datasets of Flickr and YouTube videos respectively. Human judgements of interestingness are collected and used as the ground-truth for training computational models. We evaluate several off-the-shelf visual and audio features that are potentially useful for predicting interestingness on both datasets. Results indicate that audio and visual features are equally important and the combination of both modalities shows very promising results.", "We propose a principled method for designing high level features for photo quality assessment. Our resulting system can classify between high quality professional photos and low quality snapshots. Instead of using the bag of low-level features approach, we first determine the perceptual factors that distinguish between professional photos and snapshots. Then, we design high level semantic features to measure the perceptual differences. We test our features on a large and diverse dataset and our system is able to achieve a classification rate of 72% on this difficult task. Since our system is able to achieve a precision of over 90% in low recall scenarios, we show excellent results in a web image search application." ] }
1505.02985
2275179653
In the planted bisection model a random graph G(n,p_+,p_-) with n vertices is created by partitioning the vertices randomly into two classes of equal size (up to plus or minus 1). Any two vertices that belong to the same class are linked by an edge with probability p_+ and any two that belong to different classes with probability p_- < p_+ independently. We derive an asymptotic formula for the minimum bisection width under the assumption that d_+ - d_- > c * sqrt(d_+ ln(d_+)) for a certain constant c>0.
Determining the minimum bisection width of a graph is NP-hard @cite_29 and there is evidence that the problem does not even admit a PTAS @cite_3 . On the positive side, it is possible to approximate the minimum bisection width within a factor of @math for graphs on @math vertices in polynomial time @cite_7 .
{ "cite_N": [ "@cite_29", "@cite_7", "@cite_3" ], "mid": [ "", "2169528477", "2091602684" ], "abstract": [ "", "Hierarchical graph decompositions play an important role in the design of approximation and online algorithms for graph problems. This is mainly due to the fact that the results concerning the approximation of metric spaces by tree metrics (e.g. [10,11,14,16]) depend on hierarchical graph decompositions. In this line of work a probability distribution over tree graphs is constructed from a given input graph, in such a way that the tree distances closely resemble the distances in the original graph. This allows it, to solve many problems with a distance-based cost function on trees, and then transfer the tree solution to general undirected graphs with only a logarithmic loss in the performance guarantee. The results about oblivious routing [30,22] in general undirected graphs are based on hierarchical decompositions of a different type in the sense that they are aiming to approximate the bottlenecks in the network (instead of the point-to-point distances). We call such decompositions cut-based decompositions. It has been shown that they also can be used to design approximation and online algorithms for a wide variety of different problems, but at the current state of the art the performance guarantee goes down by an O(log2n log log n)-factor when making the transition from tree networks to general graphs. In this paper we show how to construct cut-based decompositions that only result in a logarithmic loss in performance, which is asymptotically optimal. Remarkably, one major ingredient of our proof is a distance-based decomposition scheme due to Fakcharoenphol, Rao and Talwar [16]. This shows an interesting relationship between these seemingly different decomposition techniques. 
The main applications of the new decomposition are an optimal O(log n)-competitive algorithm for oblivious routing in general undirected graphs, and an O(log n)-approximation for Minimum Bisection, which improves the O(log^{1.5} n) approximation by Feige and Krauthgamer [17].", "Assuming that NP @math @math BPTIME( @math ), we show that graph min-bisection, dense @math -subgraph, and bipartite clique have no polynomial time approximation scheme (PTAS). We give a reduction from the minimum distance of code (MDC) problem. Starting with an instance of MDC, we build a quasi-random probabilistically checkable proof (PCP) that suffices to prove the desired inapproximability results. In a quasi-random PCP, the query pattern of the verifier looks random in a certain precise sense. Among the several new techniques we introduce, the most interesting one gives a way of certifying that a given polynomial belongs to a given linear subspace of polynomials. As is important for our purpose, the certificate itself happens to be another polynomial, and it can be checked probabilistically by reading a constant number of its values." ] }
1505.02985
2275179653
In the planted bisection model a random graph G(n,p_+,p_-) with n vertices is created by partitioning the vertices randomly into two classes of equal size (up to plus or minus 1). Any two vertices that belong to the same class are linked by an edge with probability p_+ and any two that belong to different classes with probability p_- < p_+ independently. We derive an asymptotic formula for the minimum bisection width under the assumption that d_+ - d_- > c * sqrt(d_+ ln(d_+)) for a certain constant c>0.
Finally, there has been recent progress on determining the minimum bisection width of the Erdős–Rényi random graph. Although its precise asymptotics remain unknown in the case of bounded average degrees @math , it was proved in @cite_13 that the main correction term corresponds to the “Parisi formula” in the Sherrington-Kirkpatrick model @cite_35 . Additionally, regarding the case of very sparse random graphs, there is a sharp threshold for the minimum bisection width to be linear in @math @cite_14 .
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_13" ], "mid": [ "2116466689", "2080818361", "1739872454" ], "abstract": [ "In this chapter we prove the Parisi formula, which gives the limiting value of the free energy per site for the Sherrington-Kirkpatrick model at each temperature, starting with the famous result of Guerra that the Parisi formula provided a bound for this free energy. We make full use of Poisson-Dirichlet cascades. We then use the same techniques to prove in a strong sense that in presence of an external field, the overlaps are positive.", "Consider partitions of the vertex set of a graph G into two sets with sizes differing by at most 1: the bisection width of G is the minimum over all such partitions of the number of ‘‘cross edges’’ between the parts. We are interested in sparse random graphs Ž . G with edge probability c n. We show that, if c ln 4, then the bisection width is n n, c n with high probability; while if c ln 4, then it is equal to 0 with high probability. There are corresponding threshold results for partitioning into any fixed number of parts. 2001 John Wiley & Sons, Inc. Random Struct. Alg., 18, 31 38, 2001", "For Erdős–Renyi random graphs with average degree γγ, and uniformly random γγ-regular graph on nn vertices, we prove that with high probability the size of both the Max-Cut and maximum bisection are n(γ4+P∗γ4−−√+o(γ−−√))+o(n)n(γ4+P∗γ4+o(γ))+o(n) while the size of the minimum bisection is n(γ4−P∗γ4−−√+o(γ−−√))+o(n)n(γ4−P∗γ4+o(γ))+o(n). Our derivation relates the free energy of the anti-ferromagnetic Ising model on such graphs to that of the Sherrington–Kirkpatrick model, with P∗≈0.7632P∗≈0.7632 standing for the ground state energy of the latter, expressed analytically via Parisi’s formula." ] }
1505.02419
281284504
Compositional embedding models build a representation (or embedding) for a linguistic structure based on its component word embeddings. We propose a Feature-rich Compositional Embedding Model (FCM) for relation extraction that is expressive, generalizes to new domains, and is easy to implement. The key idea is to combine (unlexicalized) hand-crafted features with learned word embeddings. The model is able to directly tackle the difficulties met by traditional compositional embedding models, such as handling arbitrary types of sentence annotations and utilizing global information for composition. We test the proposed model on two relation extraction tasks, and demonstrate that our model outperforms both previous compositional models and traditional feature-rich models on the ACE 2005 relation extraction task, and the SemEval 2010 relation classification task. The combination of our model and a log-linear classifier with hand-crafted features gives state-of-the-art results.
In order to build a representation (embedding) for a sentence based on its component word embeddings and structural information, recent work on compositional models (stemming from the deep learning community) has designed model structures that mimic the structure of the input. For example, these models can take the order of the words into account (as in Convolutional Neural Networks (CNNs)) @cite_23 or build on an input tree (as in Recursive Neural Networks (RNNs) or the Semantic Matching Energy Function) @cite_10 @cite_3 . Several results (cf. ) have suggested that learning features in this way (as opposed to hand-designing them) is a promising avenue for supplanting traditional feature engineering approaches.
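A toy sketch of the tree-structured composition described above, in the spirit of Recursive Neural Networks: a parent vector is a nonlinear function of its children's vectors, so the model mirrors the parse tree. The vocabulary, parse, and weight matrix here are illustrative and untrained, not taken from any cited model.

```python
import numpy as np

# Toy Recursive-Neural-Network-style composition over a binary parse.
# Leaves look up word embeddings; internal nodes apply a shared
# nonlinear composition to the concatenation of their children.

rng = np.random.default_rng(1)
d = 3
emb = {w: rng.normal(size=d) for w in ["the", "movie", "was", "great"]}
W = rng.normal(size=(d, 2 * d)) / np.sqrt(2 * d)

def compose(node):
    """node is either a word (leaf) or a (left, right) pair of subtrees."""
    if isinstance(node, str):
        return emb[node]
    left, right = node
    return np.tanh(W @ np.concatenate([compose(left), compose(right)]))

# Hand-specified binary parse: ((the movie) (was great))
root = compose((("the", "movie"), ("was", "great")))
print(root.shape)
```

Because the same `W` is applied at every internal node, the representation is sensitive to the tree's shape, which is the structural inductive bias these models exploit.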
{ "cite_N": [ "@cite_3", "@cite_10", "@cite_23" ], "mid": [ "2951131188", "2251939518", "2158899491" ], "abstract": [ "Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval, to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced. The network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components. We empirically show that it reaches competitive performance in link prediction on standard datasets from the literature.", "Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. 
Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.", "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements." ] }
1505.02419
281284504
Compositional embedding models build a representation (or embedding) for a linguistic structure based on its component word embeddings. We propose a Feature-rich Compositional Embedding Model (FCM) for relation extraction that is expressive, generalizes to new domains, and is easy-to-implement. The key idea is to combine both (unlexicalized) hand-crafted features with learned word embeddings. The model is able to directly tackle the difficulties met by traditional compositional embeddings models, such as handling arbitrary types of sentence annotations and utilizing global information for composition. We test the proposed model on two relation extraction tasks, and demonstrate that our model outperforms both previous compositional models and traditional feature rich models on the ACE 2005 relation extraction task, and the SemEval 2010 relation classification task. The combination of our model and a log-linear classifier with hand-crafted features gives state-of-the-art results.
A different approach to combining traditional linguistic features and embeddings is to hand-engineer features over word embeddings and add them to log-linear models. Such approaches have achieved state-of-the-art results on many tasks, including NER, chunking, dependency parsing, semantic role labeling, and relation extraction @cite_1 @cite_15 @cite_26 @cite_16 @cite_28 @cite_4 . Prior work considered features similar to ours for semantic role labeling.
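A minimal sketch of this feature-combination idea, with toy data and made-up embeddings (not any cited system): concatenate a hand-crafted indicator feature with word-embedding features and feed the result to a log-linear (logistic regression) classifier.

```python
import numpy as np

# Toy log-linear classifier over hand-crafted + embedding features.
# The verbs, indicator, labels, and random "pretrained" embeddings are
# illustrative assumptions for the sketch.

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=4) for w in ["acquired", "bought", "visited", "met"]}

def featurize(verb, same_np):
    handcrafted = np.array([1.0 if same_np else 0.0])  # toy indicator feature
    return np.concatenate([handcrafted, emb[verb]])    # hand-crafted + embedding

X = np.stack([featurize("acquired", False), featurize("bought", False),
              featurize("visited", True), featurize("met", True)])
y = np.array([1, 1, 0, 0])  # toy labels: 1 = "ownership"-like relation

w = np.zeros(X.shape[1])
for _ in range(500):                       # plain gradient descent, logistic loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

preds = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
print(preds)
```

The point of the sketch is only the feature layout: the model remains log-linear, and the embeddings enter as ordinary dense features alongside the sparse hand-crafted ones.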
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_28", "@cite_1", "@cite_15", "@cite_16" ], "mid": [ "2128634885", "2250646484", "2125553157", "130850236", "2158139315", "2132529109" ], "abstract": [ "We present a simple and effective semisupervised method for training dependency parsers. We focus on the problem of lexical representation, introducing features that incorporate word clusters derived from a large unannotated corpus. We demonstrate the effectiveness of the approach in a series of dependency parsing experiments on the Penn Treebank and Prague Dependency Treebank, and we show that the cluster-based features yield substantial gains in performance across a wide range of conditions. For example, in the case of English unlabeled second-order parsing, we improve from a baseline accuracy of 92.02 to 93.16 , and in the case of Czech unlabeled second-order parsing, we improve from a baseline accuracy of 86.13 to 87.13 . In addition, we demonstrate that our method also improves performance when small amounts of training data are available, and can roughly halve the amount of supervised data required to reach a desired level of performance.", "Relation Extraction (RE) is the task of extracting semantic relationships between entities in text. Recent studies on relation extraction are mostly supervised. The clear drawback of supervised methods is the need of training data: labeled data is expensive to obtain, and there is often a mismatch between the training data and the data the system will be applied to. This is the problem of domain adaptation. In this paper, we propose to combine (i) term generalization approaches such as word clustering and latent semantic analysis (LSA) and (ii) structured kernels to improve the adaptability of relation extractors to new text genres domains. 
The empirical evaluation on ACE 2005 domains shows that a suitable combination of syntax and lexical generalization is very promising for domain adaptation.", "We present a simple semi-supervised relation extraction system with large-scale word clustering. We focus on systematically exploring the effectiveness of different cluster-based features. We also propose several statistical methods for selecting clusters at an appropriate level of granularity. When training on different sizes of data, our semi-supervised approach consistently outperformed a state-of-the-art supervised baseline system.", "We present a technique for augmenting annotated training data with hierarchical word clusters that are automatically derived from a large unannotated corpus. Cluster membership is encoded in features that are incorporated in a discriminatively trained tagging model. Active learning is used to select training examples. We evaluate the technique for named-entity tagging. Compared with a state-of-the-art HMM-based name finder, the presented technique requires only 13 as much annotated data to achieve the same level of performance. Given a large annotated training set of 1,000,000 words, the technique achieves a 25 reduction in error over the state-of-the-art HMM trained on the same material.", "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. 
You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http: metaoptimize.com projects wordreprs", "State-of-the-art semantic role labelling systems require large annotated corpora to achieve full performance. Unfortunately, such corpora are expensive to produce and often do not generalize well across domains. Even in domain, errors are often made where syntactic information does not provide sufficient cues. In this paper, we mitigate both of these problems by employing distributional word representations gathered from unlabelled data. While straight-forward word representations of predicates and arguments improve performance, we show that further gains are achieved by composing representations that model the interaction between predicate and argument, and capture full argument spans." ] }
1505.02419
281284504
Compositional embedding models build a representation (or embedding) for a linguistic structure based on its component word embeddings. We propose a Feature-rich Compositional Embedding Model (FCM) for relation extraction that is expressive, generalizes to new domains, and is easy-to-implement. The key idea is to combine both (unlexicalized) hand-crafted features with learned word embeddings. The model is able to directly tackle the difficulties met by traditional compositional embeddings models, such as handling arbitrary types of sentence annotations and utilizing global information for composition. We test the proposed model on two relation extraction tasks, and demonstrate that our model outperforms both previous compositional models and traditional feature rich models on the ACE 2005 relation extraction task, and the SemEval 2010 relation classification task. The combination of our model and a log-linear classifier with hand-crafted features gives state-of-the-art results.
In order to better utilize the dependency annotations, recent work has built models along dependency paths @cite_29 @cite_14 , which shares a motivation similar to that of the On-path features in our work.
{ "cite_N": [ "@cite_29", "@cite_14" ], "mid": [ "2107620532", "2251837567" ], "abstract": [ "In sentence modeling and classification, convolutional neural network approaches have recently achieved state-of-the-art results, but all such efforts process word vectors sequentially and neglect long-distance dependencies. To exploit both deep learning and linguistic structures, we propose a tree-based convolutional neural network model which exploit various long-distance relationships between words. Our model improves the sequential baselines on all three sentiment and question classification tasks, and achieves the highest published accuracy on TREC.", "Accurate scoring of syntactic structures such as head-modifier arcs in dependency parsing typically requires rich, highdimensional feature representations. A small subset of such features is often selected manually. This is problematic when features lack clear linguistic meaning as in embeddings or when the information is blended across features. In this paper, we use tensors to map high-dimensional feature vectors into low dimensional representations. We explicitly maintain the parameters as a low-rank tensor to obtain low dimensional representations of words in their syntactic roles, and to leverage modularity in the tensor for easy training with online algorithms. Our parser consistently outperforms the Turbo and MST parsers across 14 different languages. We also obtain the best published UAS results on 5 languages. 1" ] }
1505.02192
262788691
Epidemic processes are common out-of-equilibrium phenomena of broad interdisciplinary interest. Recently, dynamic message-passing (DMP) has been proposed as an efficient algorithm for simulating epidemic models on networks, and in particular for estimating the probability that a given node will become infectious at a particular time. To date, DMP has been applied exclusively to models with one-way state changes, as opposed to models like SIS and SIRS where nodes can return to previously inhabited states. Because many real-world epidemics can exhibit such recurrent dynamics, we propose a DMP algorithm for complex, recurrent epidemic models on networks. Our approach takes correlations between neighboring nodes into account while preventing causal signals from backtracking to their immediate source, and thus avoids echo chamber effects'' where a pair of adjacent nodes each amplify the probability that the other is infectious. We demonstrate that this approach well approximates results obtained from Monte Carlo simulation and that its accuracy is often superior to the pair approximation (which also takes second-order correlations into account). Moreover, our approach is more computationally efficient than the pair approximation, especially for complex epidemic models: the number of variables in our DMP approach grows as @math where @math is the number of edges and @math is the number of states, as opposed to @math for the pair approximation. We suspect that the resulting reduction in computational effort, as well as the conceptual simplicity of DMP, will make it a useful tool in epidemic modeling, especially for high-dimensional inference tasks.
Systems of differential equations for , such as , do not appear to have a closed analytic form due to their nonlinearities. On the other hand, we can compute quantities such as epidemic thresholds by linearizing around a stationary point, such as @math where the initial outbreak is small. Given a perturbation @math , the linear stability of the system, i.e., whether or not @math diverges in time, is governed by the eigenvalues of the Jacobian matrix @math of the right-hand side of at the stationary point @math . The Jacobian for at @math is $J_{(j \to i),(k \to j')} = -\mu \, \delta_{kj} \, \delta_{ij'} + \lambda \, (1-\rho_j^*) \, B_{(j \to i),(k \to j')}$, where $B_{(j \to i),(k \to j')} = \delta_{jj'} \, (1-\delta_{ik})$. This definition of @math is another way of saying that the edge @math influences edges @math for @math , but does not backtrack to @math . This corresponds to our assumption that infections, for instance, do not bounce from @math to @math and back again and create an echo chamber effect. For this reason, @math is also known in the literature as the non-backtracking matrix @cite_24 or the Hashimoto matrix @cite_16 .
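The linear-stability computation above can be sketched numerically. The graph, the rates `lam` (transmission) and `mu` (recovery), and the variable names are illustrative assumptions; the matrix `B` follows the non-backtracking definition given here, and the Jacobian at the disease-free point reduces to `-mu*I + lam*B`.

```python
import numpy as np

# Build the non-backtracking (Hashimoto) matrix B for a small undirected
# graph and check linear stability of the disease-free stationary point.
# B[(j->i),(k->j')] = delta_{jj'} (1 - delta_{ik}): the walk k -> j -> i
# is allowed only if it does not backtrack (k != i).

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]          # triangle plus a pendant node
directed = edges + [(v, u) for (u, v) in edges]   # both orientations of each edge
index = {e: i for i, e in enumerate(directed)}
m = len(directed)

B = np.zeros((m, m))
for (j, i) in directed:            # row: edge j -> i
    for (k, jp) in directed:       # column: edge k -> j'
        if jp == j and k != i:     # feeds in, without backtracking
            B[index[(j, i)], index[(k, jp)]] = 1.0

lam, mu = 0.3, 0.5                 # illustrative transmission / recovery rates
J = -mu * np.eye(m) + lam * B      # Jacobian at the disease-free point rho* = 0

leading = np.max(np.linalg.eigvals(J).real)
print("largest real part of eig(J):", leading)  # negative => outbreak dies out
```

Since `J` is an affine function of `B`, the stability condition is equivalent to comparing `lam/mu` against the inverse of the leading eigenvalue of `B`, which is where the non-backtracking spectrum enters the epidemic threshold.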
{ "cite_N": [ "@cite_24", "@cite_16" ], "mid": [ "2024514015", "1523808012" ], "abstract": [ "Spectral algorithms are classic approaches to clustering and community detection in networks. However, for sparse networks the standard versions of these algorithms are suboptimal, in some cases completely failing to detect communities even when other algorithms such as belief propagation can do so. Here, we present a class of spectral algorithms based on a nonbacktracking walk on the directed edges of the graph. The spectrum of this operator is much better-behaved than that of the adjacency matrix or other commonly used matrices, maintaining a strong separation between the bulk eigenvalues and the eigenvalues relevant to community structure even in the sparse case. We show that our algorithm is optimal for graphs generated by the stochastic block model, detecting communities all of the way down to the theoretical limit. We also show the spectrum of the nonbacktracking operator for some real-world networks, illustrating its advantages over traditional spectral clustering.", "Publisher Summary This chapter focuses on zeta functions of finite graphs and representations of p-adic groups. It discusses two different subjects: first is a combinatorial problem in algebraic graph theory, and the other is arithmetic of discrete subgroups of p-adic groups and their representations. The chapter presents the notation and basic definitions in graph theory. It also presents a generalization of the definition of zeta function. Spectrum of a finite multigraph is analyzed in the chapter. Moreover, the chapter also describes harmonic functions and the Hodge decomposition. The chapter also presents the computation of zeta functions Zx(u) for some well known families of graphs. These computations give many examples of graphs that are not Ramanujan graphs." ] }
1505.02192
262788691
Epidemic processes are common out-of-equilibrium phenomena of broad interdisciplinary interest. Recently, dynamic message-passing (DMP) has been proposed as an efficient algorithm for simulating epidemic models on networks, and in particular for estimating the probability that a given node will become infectious at a particular time. To date, DMP has been applied exclusively to models with one-way state changes, as opposed to models like SIS and SIRS where nodes can return to previously inhabited states. Because many real-world epidemics can exhibit such recurrent dynamics, we propose a DMP algorithm for complex, recurrent epidemic models on networks. Our approach takes correlations between neighboring nodes into account while preventing causal signals from backtracking to their immediate source, and thus avoids echo chamber effects'' where a pair of adjacent nodes each amplify the probability that the other is infectious. We demonstrate that this approach well approximates results obtained from Monte Carlo simulation and that its accuracy is often superior to the pair approximation (which also takes second-order correlations into account). Moreover, our approach is more computationally efficient than the pair approximation, especially for complex epidemic models: the number of variables in our DMP approach grows as @math where @math is the number of edges and @math is the number of states, as opposed to @math for the pair approximation. We suspect that the resulting reduction in computational effort, as well as the conceptual simplicity of DMP, will make it a useful tool in epidemic modeling, especially for high-dimensional inference tasks.
Similarly, just as the leading eigenvector of @math was recently shown to be a good measure of importance or ``centrality'' of a node @cite_50 , it may be helpful in identifying ``superspreaders'': nodes where an initial infection will generate the largest outbreak, and be the most likely to lead to a widespread epidemic.
{ "cite_N": [ "@cite_50" ], "mid": [ "2058526570" ], "abstract": [ "Eigenvector centrality is a common measure of the importance of nodes in a network. Here we show that under common conditions the eigenvector centrality displays a localization transition that causes most of the weight of the centrality to concentrate on a small number of nodes in the network. In this regime the measure is no longer useful for distinguishing among the remaining nodes and its efficacy as a network metric is impaired. As a remedy, we propose an alternative centrality measure based on the nonbacktracking matrix, which gives results closely similar to the standard eigenvector centrality in dense networks where the latter is well behaved, but avoids localization and gives useful results in regimes where the standard centrality fails." ] }
1505.02377
2259994334
Metric learning aims to embed one metric space into another to benefit tasks like classification and clustering. Although a greatly distorted metric space has a high degree of freedom to fit training data, it is prone to overfitting and numerical inaccuracy. This paper presents bounded-distortion metric learning (BDML), a new metric learning framework which amounts to finding an optimal Mahalanobis metric space with a bounded-distortion constraint. An efficient solver based on the multiplicative weights update method is proposed. Moreover, we generalize BDML to pseudo-metric learning and devise the semidefinite relaxation and a randomized algorithm to approximately solve it. We further provide theoretical analysis to show that distortion is a key ingredient for stability and generalization ability of our BDML algorithm. Extensive experiments on several benchmark datasets yield promising results.
Metric learning algorithms can be categorized according to different criteria, such as Mahalanobis @cite_21 @cite_10 @cite_20 and non-Mahalanobis @cite_36 @cite_40 @cite_38 methods; probabilistic @cite_39 @cite_5 and non-probabilistic @cite_23 @cite_37 methods; unsupervised @cite_6 , supervised @cite_41 and semi-supervised @cite_21 methods; and global @cite_17 and local @cite_35 methods.
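Most of the Mahalanobis methods cited above share the same distance form, $d_M(x,y)^2 = (x-y)^\top M (x-y)$ with $M$ positive semidefinite. The sketch below (with an arbitrary illustrative matrix, not a learned one) shows the factorization $M = L^\top L$ that makes a learned Mahalanobis distance an ordinary Euclidean distance after the linear map $L$, which is exactly the map that distortion bounds constrain.

```python
import numpy as np

# Mahalanobis distance and its equivalent "linear map then Euclidean" form.
# L is an arbitrary illustrative matrix; M = L^T L is PSD by construction.

rng = np.random.default_rng(2)
L = rng.normal(size=(2, 2))
M = L.T @ L  # positive semidefinite by construction

def mahalanobis_sq(x, y, M):
    diff = x - y
    return float(diff @ M @ diff)

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
d1 = mahalanobis_sq(x, y, M)
d2 = float(np.sum((L @ x - L @ y) ** 2))  # same value via the map L
print(d1, d2)
```

The two computed values agree, which is why learning `M` (or `L`) can be read as learning a linear embedding of the input space.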
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_35", "@cite_36", "@cite_41", "@cite_21", "@cite_6", "@cite_39", "@cite_40", "@cite_23", "@cite_5", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "2160814101", "1544007523", "2091632079", "2140376886", "2110654099", "2117154949", "2053186076", "2144935315", "2113307832", "2116731705", "2122189848", "2003677307", "2152010828", "2104752854" ], "abstract": [ "Multi-metric learning techniques learn local metric tensors in different parts of a feature space. With such an approach, even simple classifiers can be competitive with the state-of-the-art because the distance measure locally adapts to the structure of the data. The learned distance measure is, however, non-metric, which has prevented multi-metric learning from generalizing to tasks such as dimensionality reduction and regression in a principled way. We prove that, with appropriate changes, multi-metric learning corresponds to learning the structure of a Riemannian manifold. We then show that this structure gives us a principled way to perform dimensionality reduction and regression according to the learned metrics. Algorithmically, we provide the first practical algorithm for computing geodesics according to the learned metrics, as well as algorithms for computing exponential and logarithmic maps on the Riemannian manifold. Together, these tools let many Euclidean algorithms take advantage of multi-metric learning. We illustrate the approach on regression and dimensionality reduction tasks that involve predicting measurements of the human body from shape data.", "The learning of appropriate distance metrics is a critical problem in image classification and retrieval. In this work, we propose a boosting-based technique, termed BOOSTMETRIC, for learning a Mahalanobis distance metric. One of the primary difficulties in learning such a metric is to ensure that the Mahalanobis matrix remains positive semidefinite. 
Semidefinite programming is sometimes used to enforce this constraint, but does not scale well. BOOSTMETRIC is instead based on a key observation that any positive semidefinite matrix can be decomposed into a linear positive combination of trace-one rank-one matrices. BOOSTMETRIC thus uses rank-one positive semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting method is easy to implement, does not require tuning, and can accommodate various types of constraints. Experiments on various datasets show that the proposed algorithm compares favorably to those state-of-the-art methods in terms of classification accuracy and running time.", "Nearest neighbour classification expects the class conditional probabilities to be locally constant, and suffers from bias in high dimensions. We propose a locally adaptive form of nearest neighbour classification to try to ameliorate this curse of dimensionality. We use a local linear discriminant analysis to estimate an effective metric for computing neighbourhoods. We determine the local decision boundaries from centroid information, and then shrink neighbourhoods in directions orthogonal to these local decision boundaries, and elongate them parallel to the boundaries. Thereafter, any neighbourhood-based classifier can be employed, using the modified neighbourhoods. The posterior probabilities tend to be more homogeneous in the modified neighbourhoods. We also propose a method for global dimension reduction, that combines local dimension information. In a number of examples, the methods demonstrate the potential for substantial improvements over nearest neighbour classification.", "In this paper, we introduce two novel metric learning algorithms, Χ2-LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. 
The two approaches achieve this goal in fundamentally different ways: Χ2-LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear Χ2-distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach's robustness, speed, parallelizability and insensitivity towards the single additional hyper-parameter. On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of Χ2-LMNN, obtain best results in 19 out of 20 learning settings.", "The main theme of this paper is to develop a novel eigenvalue optimization framework for learning a Mahalanobis metric. Within this context, we introduce a novel metric learning approach called DML-eig which is shown to be equivalent to a well-known eigenvalue optimization problem called minimizing the maximal eigenvalue of a symmetric matrix (Overton, 1988; Lewis and Overton, 1996). Moreover, we formulate LMNN (, 2005), one of the state-of-the-art metric learning methods, as a similar eigenvalue optimization problem. This novel framework not only provides new insights into metric learning but also opens new avenues to the design of efficient metric learning algorithms. Indeed, first-order algorithms are developed for DML-eig and LMNN which only need the computation of the largest eigenvector of a matrix per iteration. Their convergence characteristics are rigorously established. Various experiments on benchmark data sets show the competitive performance of our new approaches. In addition, we report an encouraging result on a difficult and challenging face verification data set called Labeled Faces in the Wild (LFW).", "Many algorithms rely critically on being given a good metric over their inputs. 
For instance, data can often be clustered in many \"plausible\" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \"similar.\" For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝn, learns a distance metric over ℝn that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance.", "Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? 
Our mental representations of the world are formed by processing large numbers of sensory in", "In this paper we propose a novel method for learning a Mahalanobis distance measure to be used in the KNN classification algorithm. The algorithm directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and fast classification. Unlike other methods, our classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them. The performance of the method is demonstrated on several data sets, both for metric learning and linear dimensionality reduction.", "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.", "In this paper, we examine the generalization error of regularized distance metric learning. We show that with appropriate constraints, the generalization error of regularized distance metric learning could be independent from the dimensionality, making it suitable for handling high dimensional data. In addition, we present an efficient online learning algorithm for regularized distance metric learning. 
Our empirical studies with data classification and face recognition show that the proposed algorithm is (i) effective for distance metric learning when compared to the state-of-the-art methods, and (ii) efficient and robust for high dimensional data.", "We describe a latent variable model for supervised dimensionality reduction and distance metric learning. The model discovers linear projections of high dimensional data that shrink the distance between similarly labeled inputs and expand the distance between differently labeled ones. The model's continuous latent variables locate pairs of examples in a latent space of lower dimensionality. The model differs significantly from classical factor analysis in that the posterior distribution over these latent variables is not always multivariate Gaussian. Nevertheless we show that inference is completely tractable and derive an Expectation-Maximization (EM) algorithm for parameter estimation. We also compare the model to other approaches in distance metric learning. The model's main advantage is its simplicity: at each iteration of the EM algorithm, the distance metric is re-estimated by solving an unconstrained least-squares problem. Experiments show that these simple updates are highly effective.", "We describe and analyze an online algorithm for supervised learning of pseudo-metrics. The algorithm receives pairs of instances and predicts their similarity according to a pseudo-metric. The pseudo-metrics we use are quadratic forms parameterized by positive semi-definite matrices. The core of the algorithm is an update rule that is based on successive projections onto the positive semi-definite cone and onto half-space constraints imposed by the examples. We describe an efficient procedure for performing these projections, derive a worst case mistake bound on the similarity predictions, and discuss a dual version of the algorithm in which it is simple to incorporate kernel operators. 
The online algorithm also serves as a building block for deriving a large-margin batch algorithm. We demonstrate the merits of the proposed approach by conducting experiments on MNIST dataset and on document filtering.", "Many learning algorithms use a metric defined over the input space as a principal tool, and their performance critically depends on the quality of this metric. We address the problem of learning metrics using side-information in the form of equivalence constraints. Unlike labels, we demonstrate that this type of side-information can sometimes be automatically obtained without the need of human intervention. We show how such side-information can be used to modify the representation of the data, leading to improved clustering and classification.Specifically, we present the Relevant Component Analysis (RCA) algorithm, which is a simple and efficient algorithm for learning a Mahalanobis metric. We show that RCA is the solution of an interesting optimization problem, founded on an information theoretic basis. If dimensionality reduction is allowed within RCA, we show that it is optimally accomplished by a version of Fisher's linear discriminant that uses constraints. Moreover, under certain Gaussian assumptions, RCA can be viewed as a Maximum Likelihood estimation of the within class covariance matrix. We conclude with extensive empirical evaluations of RCA, showing its advantage over alternative methods.", "We present an algorithm for learning a quadratic Gaussian metric (Mahalanobis distance) for use in classification tasks. Our method relies on the simple geometric intuition that a good metric is one under which points in the same class are simultaneously near each other and far from points in the other classes. We construct a convex optimization problem whose solution generates such a metric by trying to collapse all examples in the same class to a single point and push examples in other classes infinitely far away. 
We show that when the metric we learn is used in simple classifiers, it yields substantial improvements over standard alternatives on a variety of problems. We also discuss how the learned metric may be used to obtain a compact low dimensional feature representation of the original input space, allowing more efficient classification with very little reduction in performance." ] }
1505.02377
2259994334
Metric learning aims to embed one metric space into another to benefit tasks like classification and clustering. Although a greatly distorted metric space has a high degree of freedom to fit training data, it is prone to overfitting and numerical inaccuracy. This paper presents bounded-distortion metric learning (BDML), a new metric learning framework which amounts to finding an optimal Mahalanobis metric space with a bounded-distortion constraint. An efficient solver based on the multiplicative weights update method is proposed. Moreover, we generalize BDML to pseudo-metric learning and devise the semidefinite relaxation and a randomized algorithm to approximately solve it. We further provide theoretical analysis to show that distortion is a key ingredient for stability and generalization ability of our BDML algorithm. Extensive experiments on several benchmark datasets yield promising results.
Based on the type of constraints, we can also classify them into pairwise and triplet-wise ones. Pairwise methods @cite_21 @cite_3 often add constraints to enforce that distances between pairs of dissimilar points are larger than a given threshold. Representative methods in the triplet group are the large-margin nearest neighbor @cite_28 and its variants @cite_36 . They exploit local triplet constraints to ensure that the distance between any point and its different-class neighbour is at least one unit margin larger than the distance between it and its same-class neighbour. Intuitively, if these triplet constraints are well satisfied, the empirical loss of kNN would be small.
{ "cite_N": [ "@cite_28", "@cite_21", "@cite_36", "@cite_3" ], "mid": [ "2106053110", "2117154949", "2140376886", "" ], "abstract": [ "The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.", "Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many \"plausible\" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. 
For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \"similar.\" For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝn, learns a distance metric over ℝn that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance.", "In this paper, we introduce two novel metric learning algorithms, Χ2-LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: Χ2-LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear Χ2-distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach's robustness, speed, parallelizability and insensitivity towards the single additional hyper-parameter. On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of Χ2-LMNN, obtain best results in 19 out of 20 learning settings.", "" ] }
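The triplet constraints described in the record above (a point should be at least one unit margin closer to its same-class neighbour than to its different-class neighbour) can be sketched as a hinge loss. The point layout and margin value below are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def mahalanobis_sq(L, a, b):
    """Squared distance ||L(a - b)||^2 induced by the linear map L."""
    d = L @ (a - b)
    return float(d @ d)

def triplet_hinge_loss(L, triplets, margin=1.0):
    """LMNN-style loss: each anchor should be at least `margin` closer
    to its same-class neighbour than to its different-class neighbour."""
    loss = 0.0
    for anchor, same, diff in triplets:
        loss += max(0.0, margin
                    + mahalanobis_sq(L, anchor, same)
                    - mahalanobis_sq(L, anchor, diff))
    return loss

# Toy layout (assumed for illustration): same-class neighbour near,
# different-class neighbour far, so the constraint is satisfied.
anchor = np.array([0.0, 0.0])
same   = np.array([0.1, 0.0])   # same-class neighbour
diff   = np.array([3.0, 0.0])   # different-class neighbour
L = np.eye(2)                    # start from the Euclidean metric

print(triplet_hinge_loss(L, [(anchor, same, diff)]))  # 0.0: margin satisfied
print(triplet_hinge_loss(L, [(anchor, diff, same)]))  # > 0: constraint violated
```

Minimizing this loss over `L` (subject to regularization) is the basic shape of the triplet-based methods; LMNN itself additionally casts the problem as a semidefinite program over the Mahalanobis matrix.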
1505.02377
2259994334
Metric learning aims to embed one metric space into another to benefit tasks like classification and clustering. Although a greatly distorted metric space has a high degree of freedom to fit training data, it is prone to overfitting and numerical inaccuracy. This paper presents bounded-distortion metric learning (BDML), a new metric learning framework which amounts to finding an optimal Mahalanobis metric space with a bounded-distortion constraint. An efficient solver based on the multiplicative weights update method is proposed. Moreover, we generalize BDML to pseudo-metric learning and devise the semidefinite relaxation and a randomized algorithm to approximately solve it. We further provide theoretical analysis to show that distortion is a key ingredient for stability and generalization ability of our BDML algorithm. Extensive experiments on several benchmark datasets yield promising results.
Metric learning is closely related to metric embedding, an important topic in theoretical computer science that has played an important role in designing approximation algorithms. One line of research focuses on how to embed a finite metric space into normed spaces with a low distortion @cite_25 @cite_26 , i.e., preserving the structure of the original metric space. Metric learning is also related to manifold learning @cite_38 and kernel learning @cite_27 . Learning a distance metric function amounts to learning a kernel function that measures the similarity between points.
{ "cite_N": [ "@cite_38", "@cite_27", "@cite_26", "@cite_25" ], "mid": [ "2160814101", "2145295623", "1499673022", "" ], "abstract": [ "Multi-metric learning techniques learn local metric tensors in different parts of a feature space. With such an approach, even simple classifiers can be competitive with the state-of-the-art because the distance measure locally adapts to the structure of the data. The learned distance measure is, however, non-metric, which has prevented multi-metric learning from generalizing to tasks such as dimensionality reduction and regression in a principled way. We prove that, with appropriate changes, multi-metric learning corresponds to learning the structure of a Riemannian manifold. We then show that this structure gives us a principled way to perform dimensionality reduction and regression according to the learned metrics. Algorithmically, we provide the first practical algorithm for computing geodesics according to the learned metrics, as well as algorithms for computing exponential and logarithmic maps on the Riemannian manifold. Together, these tools let many Euclidean algorithms take advantage of multi-metric learning. We illustrate the approach on regression and dimensionality reduction tasks that involve predicting measurements of the human body from shape data.", "Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space---classical model selection problems in machine learning. 
In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm---using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.", "A variable dwell ignition system comprising a circuit responsive to changes in engine RPM for causing the dwell time of the ignition system to be varied accordingly. The circuit is adapted to be connected to a sensor coil to receive ignition timing signals developed thereacross which are generated in timed relationship to the engine operation. The circuit provides for changing the direct current (DC) level about which the ignition signals are generated such that a threshold level is varied. As the threshold level is varied the dwell time of the system is varied respectively.", "" ] }
1505.02388
2950276493
Monitoring breathing rates and patterns helps in the diagnosis and potential avoidance of various health problems. Current solutions for respiratory monitoring, however, are usually invasive and or limited to medical facilities. In this paper, we propose a novel respiratory monitoring system, UbiBreathe, based on ubiquitous off-the-shelf WiFi-enabled devices. Our experiments show that the received signal strength (RSS) at a WiFi-enabled device held on a person's chest is affected by the breathing process. This effect extends to scenarios when the person is situated on the line-of-sight (LOS) between the access point and the device, even without holding it. UbiBreathe leverages these changes in the WiFi RSS patterns to enable ubiquitous non-invasive respiratory rate estimation, as well as apnea detection. We propose the full architecture and design for UbiBreathe, incorporating various modules that help reliably extract the hidden breathing signal from a noisy WiFi RSS. The system handles various challenges such as noise elimination, interfering humans, sudden user movements, as well as detecting abnormal breathing situations. Our implementation of UbiBreathe using off-the-shelf devices in a wide range of environmental conditions shows that it can estimate different breathing rates with less than 1 breaths per minute (bpm) error. In addition, UbiBreathe can detect apnea with more than 96 accuracy in both the device-on-chest and hands-free scenarios. This highlights its suitability for a new class of anywhere respiratory monitoring.
Due to skin sensitivity and other issues that inhibit attaching sensors to the body, RF respiration monitoring devices have been proposed, including microwave Doppler radars @cite_33 @cite_40 , ultra-wideband (UWB) radars @cite_23 @cite_41 , and ISM-band systems @cite_30 @cite_3 . These systems can provide high accuracy in respiration rate detection due to the special frequency bands used and/or dense device deployment. However, their main drawbacks are the limited range of the high-frequency devices used, as well as their high cost. In addition, @cite_3 uses special ZigBee devices with high-gain directional antennas.
{ "cite_N": [ "@cite_30", "@cite_33", "@cite_41", "@cite_3", "@cite_40", "@cite_23" ], "mid": [ "1984554603", "2098649495", "2134639649", "2073718740", "", "2020365485" ], "abstract": [ "Remote measurements of the cardiac pulse can provide comfortable physiological assessment without electrodes. However, attempts so far are non-automated, susceptible to motion artifacts and typically expensive. In this paper, we introduce a new methodology that overcomes these problems. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking along with blind source separation of the color channels into independent components. Using Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to an FDA-approved finger blood volume pulse (BVP) sensor and achieved high accuracy and correlation even in the presence of movement artifacts. Furthermore, we applied this technique to perform heart rate measurements from three participants simultaneously. This is the first demonstration of a low-cost accurate video-based method for contact-free heart rate measurements that is automated, motion-tolerant and capable of performing concomitant measurements on more than one person at a time.", "A CMOS Doppler radar sensor has been developed and used to measure motion due to heart and respiration. The quadrature direct-conversion radar transceiver has been fully integrated in 0.25-mum CMOS, the baseband analog signal conditioning has been developed on a printed circuit board, and digital signal processing has been performed in Matlab. The theoretical signal-to-noise ratio (SNR) is derived based on the radar equation, the direct-conversion receiver's properties, oscillator phase noise, range correlation, and receiver noise. Heart and respiration signatures and rates have been measured at ranges from 0.5 to 2.0 m on 22 human subjects wearing normal T-shirts. 
The theoretical SNR expression was validated with this study. The heart rates found with the radar sensor were compared with a three-lead electrocardiogram, and they were within 5 beats min with 95 confidence for 16 of 22 subjects at a 0.5-m range and 11 of 22 subjects at a 1.0-m range. The respiration rates found with the radar sensor were compared with those found using a piezoelectric respiratory effort belt, and the respiration rates were within five respirations per minute for 18 of 22 subjects at a 0.5-m range, 17 of 22 subjects at a 1.0-m range, and 19 of 22 subjects at a 1.5-m range.", "Ultra-wide Band (UWB) technology is a new, useful and safe technology in the fleld of wireless body networks. This paper focuses on the feasibility of estimating vital signs | speciflcally breathing rate and heartbeat frequency | from the spectrum of recorded waveforms, using an impulse-radio (IR) UWB radar. To this end, an analytical model is developed to perform and interpret the spectral analysis. Both the harmonics and the intermodulation between respiration and heart signals are addressed. Simulations have been performed to demonstrate how they afiect the detection of vital signs and also to analyze the in∞uence of the pulse waveform. A fllter to cancel out breathing harmonics is also proposed to improve heart rate detection. The results of the experiments are presented under difierent scenarios which demonstrate the accuracy of the proposed technique for determining respiration and heartbeat rates. It has been shown that an IR-UWB radar can meet the requirements of typical biomedical applications such as non-invasive heart and respiration rate monitoring.", "Respiratory rate is an important vital sign that can indicate progression of illness but to also predict rapid decline in health. For the purpose, non-contact monitoring systems are becoming more popular due to the self-evident increase in patient comfort. 
As a cost effective solution for non-invasive breathing monitoring, utilizing the received signal strength measurements of inexpensive transceivers has been proposed. However, the applicability of the available solutions is limited since they rely on numerous sensors. In this work, considerable improvement is made, and a respiratory rate monitoring system based on a single commercial off-the-shelf transmitter-receiver pair is presented. Methods that enable estimation and enhance the accuracy are presented and their effects are evaluated. Moreover, it is empirically demonstrated that the performance of the system is comparable to the accuracy of a high-end device for 3-4 orders of magnitude less price; achieving mean absolute error of 0.12 breaths per minute in the most realistic scenario of the experiments.", "", "It has been shown that remote monitoring of pulmonary activity can be achieved using ultra-wideband (UWB) systems, which shows promise in home healthcare, rescue, and security applications. In this paper, we first present a multi-ray propagation model for UWB signal, which is traveling through the human thorax and is reflected on the air dry-skin fat muscle interfaces. A geometry-based statistical channel model is then developed for simulating the reception of UWB signals in the indoor propagation environment. This model enables replication of time-varying multipath profiles due to the displacement of a human chest. Subsequently, a UWB distributed cognitive radar system (UWB-DCRS) is developed for the robust detection of chest cavity motion and the accurate estimation of respiration rate. The analytical framework can serve as a basis in the planning and evaluation of future measurement programs. We also provide a case study on how the antenna beamwidth affects the estimation of respiration rate based on the proposed propagation models and system architecture." ] }
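The common signal-processing core of these RSS-based systems (UbiBreathe included) is that breathing appears as a weak low-frequency periodic component riding on the received signal strength, so a spectral peak inside the human breathing band gives the rate. The sampling rate, band limits, and synthetic trace below are assumptions made for this sketch, not parameters from any cited paper.

```python
import numpy as np

def breathing_rate_bpm(rss, fs, band=(0.1, 0.5)):
    """Estimate breathing rate (breaths/min) from an RSS trace.

    Detrends the signal, then picks the dominant FFT frequency inside
    `band` (Hz), a typical human breathing range."""
    t = np.arange(len(rss)) / fs
    # Remove slow linear drift so it does not dominate the spectrum.
    rss = rss - np.polyval(np.polyfit(t, rss, 1), t)
    spectrum = np.abs(np.fft.rfft(rss))
    freqs = np.fft.rfftfreq(len(rss), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak

# Synthetic 60 s trace (assumed): 0.25 Hz breathing component + drift + noise.
fs = 10.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
rss = (-40 + 0.02 * t
       + 0.8 * np.sin(2 * np.pi * 0.25 * t)
       + 0.1 * rng.standard_normal(t.size))

print(breathing_rate_bpm(rss, fs))  # ~15 breaths per minute
```

A real system additionally needs the robustness modules the abstract describes (noise elimination, interfering humans, sudden movements); this sketch only covers the rate-estimation step.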
1505.02097
2951304565
Consider the following three important problems in statistical inference, namely, constructing confidence intervals for (1) the error of a high-dimensional ( @math ) regression estimator, (2) the linear regression noise level, and (3) the genetic signal-to-noise ratio of a continuous-valued trait (related to the heritability). All three problems turn out to be closely related to the little-studied problem of performing inference on the @math -norm of the signal in high-dimensional linear regression. We derive a novel procedure for this, which is asymptotically correct when the covariates are multivariate Gaussian and produces valid confidence intervals in finite samples as well. The procedure, called EigenPrism, is computationally fast and makes no assumptions on coefficient sparsity or knowledge of the noise level. We investigate the width of the EigenPrism confidence intervals, including a comparison with a Bayesian setting in which our interval is just 5 wider than the Bayes credible interval. We are then able to unify the three aforementioned problems by showing that the EigenPrism procedure with only minor modifications is able to make important contributions to all three. We also investigate the robustness of coverage and find that the method applies in practice and in finite samples much more widely than just the case of multivariate Gaussian covariates. Finally, we apply EigenPrism to a genetic dataset to estimate the genetic signal-to-noise ratio for a number of continuous phenotypes.
When @math , ordinary least squares (OLS) theory gives us inference for @math and thus also for @math . When @math , the problem of estimating @math has been studied in @cite_5 . @cite_5 uses the method of moments on two statistics to estimate @math and @math without assumptions on @math , and with the same multivariate Gaussian random-design assumptions used here. @cite_5 also derives asymptotic distributional results, but does not explore the estimation of the parameters of the asymptotic distributions, nor the coverage of any CI derived from them. The main contribution of our work is to provide tight CIs which achieve nominal coverage even in finite samples.
{ "cite_N": [ "@cite_5" ], "mid": [ "2119491313" ], "abstract": [ "The residual variance and the proportion of explained variation are important quantities in many statistical models and model fitting procedures. They play an important role in regression diagnostics and model selection procedures, as well as in determining the performance limits in many problems. In this paper we propose new method-of-moments-based estimators for the residual variance, the proportion of explained variation and other related quantities, such as the l2 signal strength. The proposed estimators are consistent and asymptotically normal in high-dimensional linear models with Gaussian predictors and errors, where the number of predictors d is proportional to the number of observations n; in fact, consistency holds even in settings where d n → ∞. Existing results on residual variance estimation in high-dimensional linear models depend on sparsity in the underlying signal. Our results require no sparsity assumptions and imply that the residual variance and the proportion of explained variation can be consistently estimated even when d>n and the underlying signal itself is nonestimable. Numerical work suggests that some of our distributional assumptions may be relaxed. A real-data analysis involving gene expression data and single nucleotide polymorphism data illustrates the performance of the proposed methods." ] }
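The method-of-moments idea in the cited work can be illustrated in the simplest isotropic case: for y = Xβ + ε with X having iid N(0,1) entries, E||y||² = n(σ² + τ²) and E||Xᵀy||² = n d σ² + n(n+d+1) τ², where τ² = ||β||², so two sample moments identify both parameters with no sparsity assumption. The constants follow from Wishart second-moment identities under this idealized design; this is a sketch of the idea, not the cited paper's full estimator.

```python
import numpy as np

def moment_estimates(X, y):
    """Method-of-moments estimates of (noise variance, signal strength)
    under y = X beta + eps, X iid N(0,1), eps ~ N(0, sigma^2 I)."""
    n, d = X.shape
    m1 = y @ y / n                       # estimates sigma^2 + tau^2
    m2 = np.sum((X.T @ y) ** 2) / n**2   # estimates (d/n) sigma^2 + ((n+d+1)/n) tau^2
    # Solve the 2x2 linear system in (sigma^2, tau^2):
    tau2 = (n * m2 - d * m1) / (n + 1)
    sigma2 = m1 - tau2
    return sigma2, tau2

# Simulated check under the assumed design: n = 2000, d = 1000,
# sigma^2 = 1, tau^2 = ||beta||^2 = 2 (beta is dense; no sparsity needed).
rng = np.random.default_rng(1)
n, d = 2000, 1000
X = rng.standard_normal((n, d))
beta = rng.standard_normal(d)
beta *= np.sqrt(2.0) / np.linalg.norm(beta)   # scale so ||beta||^2 = 2
y = X @ beta + rng.standard_normal(n)

sigma2_hat, tau2_hat = moment_estimates(X, y)
print(sigma2_hat, tau2_hat)  # both close to (1, 2)
```

EigenPrism itself goes further, building finite-sample confidence intervals rather than point estimates, but the identifiability of (σ², τ²) from quadratic functionals of y is the shared starting point.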
1505.02097
2951304565
Consider the following three important problems in statistical inference, namely, constructing confidence intervals for (1) the error of a high-dimensional ( @math ) regression estimator, (2) the linear regression noise level, and (3) the genetic signal-to-noise ratio of a continuous-valued trait (related to the heritability). All three problems turn out to be closely related to the little-studied problem of performing inference on the @math -norm of the signal in high-dimensional linear regression. We derive a novel procedure for this, which is asymptotically correct when the covariates are multivariate Gaussian and produces valid confidence intervals in finite samples as well. The procedure, called EigenPrism, is computationally fast and makes no assumptions on coefficient sparsity or knowledge of the noise level. We investigate the width of the EigenPrism confidence intervals, including a comparison with a Bayesian setting in which our interval is just 5 wider than the Bayes credible interval. We are then able to unify the three aforementioned problems by showing that the EigenPrism procedure with only minor modifications is able to make important contributions to all three. We also investigate the robustness of coverage and find that the method applies in practice and in finite samples much more widely than just the case of multivariate Gaussian covariates. Finally, we apply EigenPrism to a genetic dataset to estimate the genetic signal-to-noise ratio for a number of continuous phenotypes.
Inference for high-dimensional regression error, the noise level, and the genetic variance decomposition has each been individually well studied, so we review some relevant works here. To begin with, many authors have studied high-dimensional regression error for specific coefficient estimators, such as the Lasso, often providing conditions under which this regression error asymptotes to 0 (see for example @cite_18 @cite_4 ). To our knowledge the only author who has considered inference for a general estimator is @cite_15 , who does so using the Johnson--Lindenstrauss Lemma and assuming noiselessness, that is, @math in the linear model. Thus the problem studied there is quite different from that addressed here, as we allow for noise in the linear model. Furthermore, because the Johnson--Lindenstrauss Lemma is not distribution-specific, it is conservative and thus Ward's bounds are in general conservative, while we will show that in most cases our CIs will be quite tight.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_15" ], "mid": [ "2271626458", "1968694834", "2032618720" ], "abstract": [ "We study the fundamental problems of variance and risk estimation in high dimensional statistical modeling. In particular, we consider the problem of learning a coefficient vector Theta 0 is an element of Rp from noisy linear observations y = X Theta 0 + w is an element of Rn (p > n) and the popular estimation procedure of solving the '1-penalized least squares objective known as the LASSO or Basis Pursuit DeNoising (BPDN). In this context, we develop new estimators for the '2 estimation risk k Theta b- Theta 0k2 and the variance of the noise when distributions of Theta 0 and w are unknown. These can be used to select the regularization parameter optimally. Our approach combines Stein's unbiased risk estimate [Ste81] and the recent results of [BM12a] [BM12b] on the analysis of approximate message passing and the risk of LASSO. We establish high-dimensional consistency of our estimators for sequences of matrices X of increasing dimensions, with independent Gaussian entries. We establish validity for a broader class of Gaussian designs, conditional on a certain conjecture from statistical physics. To the best of our knowledge, this result is the first that provides an asymptotically consistent risk estimator for the LASSO solely based on data. In addition, we demonstrate through simulations that our variance estimation outperforms several existing methods in the literature.", "We consider the asymptotic behavior of regression estimators that minimize the residual sum of squares plus a penalty proportional to Σ ∥β j ∥γ for some y > 0. These estimators include the Lasso as a special case when γ = 1. Under appropriate conditions, we show that the limiting distributions can have positive probability mass at 0 when the true value of the parameter is 0. 
We also consider asymptotics for nearly singular designs.", "Compressed sensing (CS) decoding algorithms can efficiently recover an N -dimensional real-valued vector x to within a factor of its best k-term approximation by taking m = O(klogN k) measurements y = Phix. If the sparsity or approximate sparsity level of x were known, then this theoretical guarantee would imply quality assurance of the resulting CS estimate. However, because the underlying sparsity of the signal x is unknown, the quality of a CS estimate x using m measurements is not assured. It is nevertheless shown in this paper that sharp bounds on the error ||x - x ||lN2 can be achieved with almost no effort. More precisely, suppose that a maximum number of measurements m is preimposed. One can reserve 10 log p of these m measurements and compute a sequence of possible estimates ( xj)j=1p to x from the m -10logp remaining measurements; the errors ||x - xj ||lN2 for j = 1, ..., p can then be bounded with high probability. As a consequence, numerical upper and lower bounds on the error between x and the best k-term approximation to x can be estimated for p values of k with almost no cost. This observation has applications outside CS as well." ] }
1505.02097
2951304565
Consider the following three important problems in statistical inference, namely, constructing confidence intervals for (1) the error of a high-dimensional ( @math ) regression estimator, (2) the linear regression noise level, and (3) the genetic signal-to-noise ratio of a continuous-valued trait (related to the heritability). All three problems turn out to be closely related to the little-studied problem of performing inference on the @math -norm of the signal in high-dimensional linear regression. We derive a novel procedure for this, which is asymptotically correct when the covariates are multivariate Gaussian and produces valid confidence intervals in finite samples as well. The procedure, called EigenPrism, is computationally fast and makes no assumptions on coefficient sparsity or knowledge of the noise level. We investigate the width of the EigenPrism confidence intervals, including a comparison with a Bayesian setting in which our interval is just 5 wider than the Bayes credible interval. We are then able to unify the three aforementioned problems by showing that the EigenPrism procedure with only minor modifications is able to make important contributions to all three. We also investigate the robustness of coverage and find that the method applies in practice and in finite samples much more widely than just the case of multivariate Gaussian covariates. Finally, we apply EigenPrism to a genetic dataset to estimate the genetic signal-to-noise ratio for a number of continuous phenotypes.
There has also been a lot of recent interest in estimating the noise level @math in high-dimensional regression problems. @cite_19 introduced a refitted cross-validation method that estimates @math assuming sparsity and a model selection procedure that misses none of the correct variables. @cite_12 introduced the scaled Lasso for estimating @math using an iterative procedure that includes the Lasso. @cite_20 also use an @math penalty to estimate the noise level, but in a finite mixture of regressions model. @cite_18 use the Lasso and Stein's unbiased risk estimate to produce an estimator for @math . All of these works prove consistency of their estimators, but under conditions on the sparsity of the coefficient vector. Indeed, it can be shown that such a condition is needed when @math is treated as fixed (which it is not in the present paper). Under the same sparsity conditions, @cite_19 and @cite_12 also provide asymptotic distributional results for their estimators, allowing for the construction of asymptotic CIs. What distinguishes our treatment of this problem from the existing literature is that our estimator and CI for @math make no assumptions on the sparsity or structure of @math .
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_20", "@cite_12" ], "mid": [ "", "2271626458", "2138142480", "2154972590" ], "abstract": [ "", "We study the fundamental problems of variance and risk estimation in high dimensional statistical modeling. In particular, we consider the problem of learning a coefficient vector Theta 0 is an element of Rp from noisy linear observations y = X Theta 0 + w is an element of Rn (p > n) and the popular estimation procedure of solving the '1-penalized least squares objective known as the LASSO or Basis Pursuit DeNoising (BPDN). In this context, we develop new estimators for the '2 estimation risk k Theta b- Theta 0k2 and the variance of the noise when distributions of Theta 0 and w are unknown. These can be used to select the regularization parameter optimally. Our approach combines Stein's unbiased risk estimate [Ste81] and the recent results of [BM12a] [BM12b] on the analysis of approximate message passing and the risk of LASSO. We establish high-dimensional consistency of our estimators for sequences of matrices X of increasing dimensions, with independent Gaussian entries. We establish validity for a broader class of Gaussian designs, conditional on a certain conjecture from statistical physics. To the best of our knowledge, this result is the first that provides an asymptotically consistent risk estimator for the LASSO solely based on data. In addition, we demonstrate through simulations that our variance estimation outperforms several existing methods in the literature.", "We consider a finite mixture of regressions (FMR) model for high-dimensional inhomogeneous data where the number of covariates may be much larger than sample size. We propose an l 1-penalized maximum likelihood estimator in an appropriate parameterization. This kind of estimation belongs to a class of problems where optimization and theory for non-convex functions is needed. 
This distinguishes itself very clearly from high-dimensional estimation with convex loss- or objective functions as, for example, with the Lasso in linear or generalized linear models. Mixture models represent a prime and important example where non-convexity arises.", "Scaled sparse linear regression jointly estimates the regression coefficients and noise level in a linear model. It chooses an equilibrium with a sparse regression method by iteratively estimating the noise level via the mean residual square and scaling the penalty in proportion to the estimated noise level. The iterative algorithm costs little beyond the computation of a path or grid of the sparse regression estimator for penalty levels above a proper threshold. For the scaled lasso, the algorithm is a gradient descent in a convex minimization of a penalized joint loss function for the regression coefficients and noise level. Under mild regularity conditions, we prove that the scaled lasso simultaneously yields an estimator for the noise level and an estimated coefficient vector satisfying certain oracle inequalities for prediction, the estimation of the noise level and the regression coefficients. These inequalities provide sufficient conditions for the consistency and asymptotic normality of the noise-level estimator, including certain cases where the number of variables is of greater order than the sample size. Parallel results are provided for least-squares estimation after model selection by the scaled lasso. Numerical results demonstrate the superior performance of the proposed methods over an earlier proposal of joint convex minimization. Copyright 2012, Oxford University Press." ] }
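The scaled-lasso idea described above — alternating between a Lasso fit and a noise-level estimate from the mean residual square, with the penalty scaled in proportion to the current estimate — can be sketched as follows. This is a minimal illustration, not the cited implementation: the universal penalty level `lam0` and the plain coordinate-descent solver are simplifying assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=50):
    """Plain coordinate-descent Lasso: min (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual with coordinate j removed, then soft-threshold.
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

def scaled_lasso_sigma(X, y, lam0=None, tol=1e-4, max_iter=10):
    """Scaled-lasso-style noise estimate: iterate sigma <- ||y - X b||/sqrt(n)
    with the Lasso penalty scaled by the current sigma."""
    n, p = X.shape
    if lam0 is None:
        lam0 = np.sqrt(2 * np.log(p) / n)  # assumed universal penalty level
    sigma = np.std(y)
    for _ in range(max_iter):
        b = lasso_cd(X, y, lam0 * sigma)
        new_sigma = np.linalg.norm(y - X @ b) / np.sqrt(n)
        if abs(new_sigma - sigma) < tol:
            sigma = new_sigma
            break
        sigma = new_sigma
    return sigma
```

On a sparse simulated design the iteration settles near the true noise level, though with some upward bias from shrinkage.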
1505.02097
2951304565
Consider the following three important problems in statistical inference, namely, constructing confidence intervals for (1) the error of a high-dimensional ( @math ) regression estimator, (2) the linear regression noise level, and (3) the genetic signal-to-noise ratio of a continuous-valued trait (related to the heritability). All three problems turn out to be closely related to the little-studied problem of performing inference on the @math -norm of the signal in high-dimensional linear regression. We derive a novel procedure for this, which is asymptotically correct when the covariates are multivariate Gaussian and produces valid confidence intervals in finite samples as well. The procedure, called EigenPrism, is computationally fast and makes no assumptions on coefficient sparsity or knowledge of the noise level. We investigate the width of the EigenPrism confidence intervals, including a comparison with a Bayesian setting in which our interval is just 5 wider than the Bayes credible interval. We are then able to unify the three aforementioned problems by showing that the EigenPrism procedure with only minor modifications is able to make important contributions to all three. We also investigate the robustness of coverage and find that the method applies in practice and in finite samples much more widely than just the case of multivariate Gaussian covariates. Finally, we apply EigenPrism to a genetic dataset to estimate the genetic signal-to-noise ratio for a number of continuous phenotypes.
We note that neuroscientists have also worked on estimating a signal-to-noise ratio, namely the explainable variance in functional MRI. That problem is made especially challenging by correlations in the noise, which sets it apart from the i.i.d. noise setting considered in this paper. For this related problem, @cite_9 are able to construct an unbiased estimator in the random effects framework by permuting the measurement vector in such a way as to leave the noise covariance structure unchanged.
{ "cite_N": [ "@cite_9" ], "mid": [ "1538452572" ], "abstract": [ "This note explores the connections and differences between three commonly used methods for constructing minimax lower bounds in nonparametric estimation problems: Le Cam’s, Assouad’s and Fano’s. Two connections are established between Le Cam’s and Assouad’s and between Assouad’s and Fano’s. The three methods are then compared in the context of two estimation problems for a smooth class of densities on [0,1]. The two estimation problems are for the integrated squared first derivatives and for the density function itself." ] }
1505.02433
1840825582
This paper contributes a novel embedding model which measures the probability of each belief @math in a large-scale knowledge repository via simultaneously learning distributed representations for entities ( @math and @math ), relations ( @math ), and the words in relation mentions ( @math ). It facilitates knowledge completion by means of simple vector operations to discover new beliefs. Given an imperfect belief, we can not only infer the missing entities, predict the unknown relations, but also tell the plausibility of the belief, just leveraging the learnt embeddings of remaining evidences. To demonstrate the scalability and the effectiveness of our model, we conduct experiments on several large-scale repositories which contain millions of beliefs from WordNet, Freebase and NELL, and compare it with other cutting-edge approaches via competing the performances assessed by the tasks of entity inference, relation prediction and triplet classification with respective metrics. Extensive experimental results show that the proposed model outperforms the state-of-the-arts with significant improvements.
Unstructured @cite_11 is a naive model which exploits the occurrence information of the head and the tail entities without considering the relation between them. It defines a scoring function @math , and this model obviously cannot discriminate between a pair of entities linked by different relations. Therefore, Unstructured is commonly regarded as the baseline approach.
{ "cite_N": [ "@cite_11" ], "mid": [ "2127795553" ], "abstract": [ "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples." ] }
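The point that the Unstructured baseline cannot discriminate between relations is easy to see in code: its score depends only on the two entity embeddings, so every relation between the same pair gets the same score. A minimal sketch (the entity vectors are arbitrary illustrations):

```python
import numpy as np

def unstructured_score(h, t):
    """Baseline score ||h - t||: the relation embedding is never used."""
    return np.linalg.norm(h - t)

h = np.array([1.0, 0.0])
t = np.array([0.0, 1.0])
# Identical score no matter which relation links h and t.
score = unstructured_score(h, t)
```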
1505.02433
1840825582
This paper contributes a novel embedding model which measures the probability of each belief @math in a large-scale knowledge repository via simultaneously learning distributed representations for entities ( @math and @math ), relations ( @math ), and the words in relation mentions ( @math ). It facilitates knowledge completion by means of simple vector operations to discover new beliefs. Given an imperfect belief, we can not only infer the missing entities, predict the unknown relations, but also tell the plausibility of the belief, just leveraging the learnt embeddings of remaining evidences. To demonstrate the scalability and the effectiveness of our model, we conduct experiments on several large-scale repositories which contain millions of beliefs from WordNet, Freebase and NELL, and compare it with other cutting-edge approaches via competing the performances assessed by the tasks of entity inference, relation prediction and triplet classification with respective metrics. Extensive experimental results show that the proposed model outperforms the state-of-the-arts with significant improvements.
The Single Layer Model, proposed by @cite_19, aims to alleviate the shortcomings of the Distance Model by means of the non-linearity of a single-layer neural network @math , in which @math . The linear output layer then gives the scoring function: @math .
{ "cite_N": [ "@cite_19" ], "mid": [ "2127426251" ], "abstract": [ "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively." ] }
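A minimal sketch of a single-layer scoring function of the kind described above. The concrete parameterization — relation-specific weight matrices `W_h` and `W_t`, bias `b`, output vector `u`, and a tanh non-linearity — is one common choice and should be read as an assumption rather than the paper's exact form:

```python
import numpy as np

def single_layer_score(h, t, W_h, W_t, b, u):
    """Single-layer score u^T tanh(W_h h + W_t t + b).
    All of W_h, W_t, b, u are relation-specific parameters."""
    return u @ np.tanh(W_h @ h + W_t @ t + b)
```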
1505.02433
1840825582
This paper contributes a novel embedding model which measures the probability of each belief @math in a large-scale knowledge repository via simultaneously learning distributed representations for entities ( @math and @math ), relations ( @math ), and the words in relation mentions ( @math ). It facilitates knowledge completion by means of simple vector operations to discover new beliefs. Given an imperfect belief, we can not only infer the missing entities, predict the unknown relations, but also tell the plausibility of the belief, just leveraging the learnt embeddings of remaining evidences. To demonstrate the scalability and the effectiveness of our model, we conduct experiments on several large-scale repositories which contain millions of beliefs from WordNet, Freebase and NELL, and compare it with other cutting-edge approaches via competing the performances assessed by the tasks of entity inference, relation prediction and triplet classification with respective metrics. Extensive experimental results show that the proposed model outperforms the state-of-the-arts with significant improvements.
The Bilinear Model @cite_13 @cite_8 is another attempt to fix the weak interaction between the head and tail entities in the Distance Model, using a relation-specific bilinear form: @math .
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "2123228027", "2101802482" ], "abstract": [ "We consider the problem of learning probabilistic models for complex relational structures between various types of objects. A model can help us \"understand\" a dataset of relational facts in at least two ways, by finding interpretable structure in the data, and by supporting predictions, or inferences about whether particular unobserved relations are likely to be true. Often there is a tradeoff between these two aims: cluster-based models yield more easily interpretable representations, while factorization-based approaches have given better predictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relations in a nonparametric Bayesian clustering framework. Inference is fully Bayesian but scales well to large data sets. The model simultaneously discovers interpretable clusters and yields predictive performance that matches or beats previous probabilistic models for relational data.", "Many data such as social networks, movie preferences or knowledge bases are multi-relational, in that they describe multiple relations between entities. While there is a large body of work focused on modeling these data, modeling these multiple types of relations jointly remains challenging. Further, existing approaches tend to breakdown when the number of these types grows. In this paper, we propose a method for modeling large multi-relational datasets, with possibly thousands of relations. Our model is based on a bilinear structure, which captures various orders of interaction of the data, and also shares sparse latent factors across different relations. We illustrate the performance of our approach on standard tensor-factorization datasets where we attain, or outperform, state-of-the-art results. 
Finally, a NLP application demonstrates our scalability and the ability of our model to learn efficient and semantically meaningful verb representations." ] }
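The relation-specific bilinear form described above reduces to one matrix product per relation; a sketch:

```python
import numpy as np

def bilinear_score(h, t, W_r):
    """Bilinear score h^T W_r t, where W_r is a relation-specific matrix.
    Head and tail now interact through every entry of W_r."""
    return h @ W_r @ t
```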
1505.02433
1840825582
This paper contributes a novel embedding model which measures the probability of each belief @math in a large-scale knowledge repository via simultaneously learning distributed representations for entities ( @math and @math ), relations ( @math ), and the words in relation mentions ( @math ). It facilitates knowledge completion by means of simple vector operations to discover new beliefs. Given an imperfect belief, we can not only infer the missing entities, predict the unknown relations, but also tell the plausibility of the belief, just leveraging the learnt embeddings of remaining evidences. To demonstrate the scalability and the effectiveness of our model, we conduct experiments on several large-scale repositories which contain millions of beliefs from WordNet, Freebase and NELL, and compare it with other cutting-edge approaches via competing the performances assessed by the tasks of entity inference, relation prediction and triplet classification with respective metrics. Extensive experimental results show that the proposed model outperforms the state-of-the-arts with significant improvements.
Neural Tensor Network (NTN) @cite_19 designs a general scoring function: @math , which combines the Single Layer and Bilinear Models. This model is more expressive as the second-order correlations are also considered in the non-linear transformation function, but the computational complexity is rather high.
{ "cite_N": [ "@cite_19" ], "mid": [ "2127426251" ], "abstract": [ "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively." ] }
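A sketch of an NTN-style scoring function combining the bilinear tensor term with the single-layer term, as described above. Tensor shapes and the tanh non-linearity follow the commonly cited formulation; parameter names are illustrative:

```python
import numpy as np

def ntn_score(h, t, W, V, b, u):
    """NTN score u^T tanh(h^T W[1:k] t + V [h; t] + b).
    W has shape (k, d, d): one bilinear slice per output dimension.
    V has shape (k, 2d); b and u have shape (k,)."""
    bilinear = np.einsum('i,kij,j->k', h, W, t)  # h^T W[m] t for each slice m
    linear = V @ np.concatenate([h, t])
    return u @ np.tanh(bilinear + linear + b)
```

The per-slice bilinear terms are where the model's high computational cost comes from: each relation carries k full d-by-d matrices.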
1505.02433
1840825582
This paper contributes a novel embedding model which measures the probability of each belief @math in a large-scale knowledge repository via simultaneously learning distributed representations for entities ( @math and @math ), relations ( @math ), and the words in relation mentions ( @math ). It facilitates knowledge completion by means of simple vector operations to discover new beliefs. Given an imperfect belief, we can not only infer the missing entities, predict the unknown relations, but also tell the plausibility of the belief, just leveraging the learnt embeddings of remaining evidences. To demonstrate the scalability and the effectiveness of our model, we conduct experiments on several large-scale repositories which contain millions of beliefs from WordNet, Freebase and NELL, and compare it with other cutting-edge approaches via competing the performances assessed by the tasks of entity inference, relation prediction and triplet classification with respective metrics. Extensive experimental results show that the proposed model outperforms the state-of-the-arts with significant improvements.
TransE @cite_11 is a canonical model different from all the other prior arts, which embeds relations into the same vector space as entities by regarding the relation @math as a translation from @math to @math , i.e. @math . It works well on beliefs with the ONE-TO-ONE mapping property but performs badly with multi-mapping beliefs. Given a series of facts associated with a ONE-TO-MANY relation @math , e.g. @math , TransE tends to represent the embeddings of entities on the MANY-side as extremely close to each other with very little discrimination.
{ "cite_N": [ "@cite_11" ], "mid": [ "2127795553" ], "abstract": [ "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples." ] }
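TransE's translation principle can be written directly as a dissimilarity score, and the sketch also makes the ONE-TO-MANY problem visible: for a fixed head and relation, any tail that scores zero must equal h + r, so distinct MANY-side entities are pushed toward the same point.

```python
import numpy as np

def transe_score(h, r, t, p=2):
    """TransE dissimilarity ||h + r - t||; lower means a more plausible triplet."""
    return np.linalg.norm(h + r - t, ord=p)

# If (h, r, t1) and (h, r, t2) both score ~0, then t1 ~= t2 ~= h + r:
# the MANY-side embeddings collapse onto one another.
```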
1505.02433
1840825582
This paper contributes a novel embedding model which measures the probability of each belief @math in a large-scale knowledge repository via simultaneously learning distributed representations for entities ( @math and @math ), relations ( @math ), and the words in relation mentions ( @math ). It facilitates knowledge completion by means of simple vector operations to discover new beliefs. Given an imperfect belief, we can not only infer the missing entities, predict the unknown relations, but also tell the plausibility of the belief, just leveraging the learnt embeddings of remaining evidences. To demonstrate the scalability and the effectiveness of our model, we conduct experiments on several large-scale repositories which contain millions of beliefs from WordNet, Freebase and NELL, and compare it with other cutting-edge approaches via competing the performances assessed by the tasks of entity inference, relation prediction and triplet classification with respective metrics. Extensive experimental results show that the proposed model outperforms the state-of-the-arts with significant improvements.
TransM @cite_21 exploits the structure of the whole knowledge graph, and adjusts the learning rate, which is specific to each relation based on the multiple mapping property of the relation.
{ "cite_N": [ "@cite_21" ], "mid": [ "2172684358" ], "abstract": [ "Many knowledge repositories nowadays contain billions of triplets, i.e. (head-entity, relationship, tail-entity), as relation instances. These triplets form a directed graph with entities as nodes and relationships as edges. However, this kind of symbolic and discrete storage structure makes it difficult for us to exploit the knowledge to enhance other intelligenceacquired applications (e.g. the QuestionAnswering System), as many AI-related algorithms prefer conducting computation on continuous data. Therefore, a series of emerging approaches have been proposed to facilitate knowledge computing via encoding the knowledge graph into a low-dimensional embedding space. TransE is the latest and most promising approach among them, and can achieve a higher performance with fewer parameters by modeling the relationship as a transitional vector from the head entity to the tail entity. Unfortunately, it is not flexible enough to tackle well with the various mapping properties of triplets, even though its authors spot the harm on performance. In this paper, we thus propose a superior model called TransM to leverage the structure of the knowledge graph via pre-calculating the distinct weight for each training triplet according to its relational mapping property. In this way, the optimal function deals with each triplet depending on its own weight. We carry out extensive experiments to compare TransM with the state-of-the-art method TransE and other prior arts. The performance of each approach is evaluated within two different application scenarios on several benchmark datasets. Results show that the model we proposed significantly outperforms the former ones with lower parameter complexity as TransE." ] }
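A hedged sketch of TransM-style per-relation weighting: relations with heavier multi-mapping (more tails per head or heads per tail, computed from the training graph) receive smaller weights, so their triplets constrain the entity embeddings less strongly. The exact formula below, including the +1 inside the logarithm, is an illustrative assumption, not the paper's definition.

```python
import numpy as np
from collections import defaultdict

def relation_weights(triplets):
    """Per-relation weights from mapping properties (TransM-style sketch).
    tph = average tails per head, hpt = average heads per tail; heavier
    multi-mapping -> smaller weight. The 1/log(hpt + tph + 1) form is an
    assumed stand-in for the paper's pre-calculated weights."""
    per_head = defaultdict(lambda: defaultdict(set))
    per_tail = defaultdict(lambda: defaultdict(set))
    for h, r, t in triplets:
        per_head[r][h].add(t)
        per_tail[r][t].add(h)
    weights = {}
    for r in per_head:
        tph = np.mean([len(s) for s in per_head[r].values()])
        hpt = np.mean([len(s) for s in per_tail[r].values()])
        weights[r] = 1.0 / np.log(hpt + tph + 1.0)
    return weights
```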
1505.02433
1840825582
This paper contributes a novel embedding model which measures the probability of each belief @math in a large-scale knowledge repository via simultaneously learning distributed representations for entities ( @math and @math ), relations ( @math ), and the words in relation mentions ( @math ). It facilitates knowledge completion by means of simple vector operations to discover new beliefs. Given an imperfect belief, we can not only infer the missing entities, predict the unknown relations, but also tell the plausibility of the belief, just leveraging the learnt embeddings of remaining evidences. To demonstrate the scalability and the effectiveness of our model, we conduct experiments on several large-scale repositories which contain millions of beliefs from WordNet, Freebase and NELL, and compare it with other cutting-edge approaches via competing the performances assessed by the tasks of entity inference, relation prediction and triplet classification with respective metrics. Extensive experimental results show that the proposed model outperforms the state-of-the-arts with significant improvements.
TransH @cite_1 is, to the best of the authors' knowledge, the state-of-the-art approach. It improves on TransE by modeling a relation as a hyperplane together with a translation operation on it, which makes it more flexible in modeling beliefs with multi-mapping properties.
{ "cite_N": [ "@cite_1" ], "mid": [ "2283196293" ], "abstract": [ "We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up." ] }
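TransH's hyperplane mechanism is a few lines of vector algebra: project head and tail onto the relation's hyperplane with unit normal w_r, then apply the translation d_r within that hyperplane. A sketch:

```python
import numpy as np

def transh_score(h, t, w_r, d_r):
    """TransH dissimilarity: project h and t onto the hyperplane with
    normal w_r, then measure ||h_perp + d_r - t_perp||."""
    w = w_r / np.linalg.norm(w_r)  # enforce a unit normal
    h_perp = h - (w @ h) * w
    t_perp = t - (w @ t) * w
    return np.linalg.norm(h_perp + d_r - t_perp)
```

Because only the projections enter the score, two MANY-side tails can share a projection on one relation's hyperplane while remaining distinct vectors overall.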
1505.02433
1840825582
This paper contributes a novel embedding model which measures the probability of each belief @math in a large-scale knowledge repository via simultaneously learning distributed representations for entities ( @math and @math ), relations ( @math ), and the words in relation mentions ( @math ). It facilitates knowledge completion by means of simple vector operations to discover new beliefs. Given an imperfect belief, we can not only infer the missing entities, predict the unknown relations, but also tell the plausibility of the belief, just leveraging the learnt embeddings of remaining evidences. To demonstrate the scalability and the effectiveness of our model, we conduct experiments on several large-scale repositories which contain millions of beliefs from WordNet, Freebase and NELL, and compare it with other cutting-edge approaches via competing the performances assessed by the tasks of entity inference, relation prediction and triplet classification with respective metrics. Extensive experimental results show that the proposed model outperforms the state-of-the-arts with significant improvements.
Because unstructured texts and structured beliefs live in different feature spaces, one key challenge in connecting natural language and knowledge is to project their features into the same space and merge them for knowledge completion. @cite_3 have recently proposed JRME to jointly learn embedding representations for both relations and mentions in order to predict unknown relations between entities in NELL. However, their method is limited to the relation prediction task (see section ), as the correlations between entities and relations are ignored. We therefore desire a comprehensive model that simultaneously considers entities, relations and even the relation mentions, and that integrates these heterogeneous resources to support multiple subtasks of knowledge completion, such as entity inference , relation prediction and triplet classification .
{ "cite_N": [ "@cite_3" ], "mid": [ "1777908065" ], "abstract": [ "This paper contributes a joint embedding model for predicting relations between a pair of entities in the scenario of relation inference. It differs from most stand-alone approaches which separately operate on either knowledge bases or free texts. The proposed model simultaneously learns low-dimensional vector representations for both triplets in knowledge repositories and the mentions of relations in free texts, so that we can leverage the evidence both resources to make more accurate predictions. We use NELL to evaluate the performance of our approach, compared with cutting-edge methods. Results of extensive experiments show that our model achieves significant improvement on relation extraction." ] }
1505.02441
2949618814
Location based services (LBS) have become very popular in recent years. They range from map services (e.g., Google Maps) that store geographic locations of points of interests, to online social networks (e.g., WeChat, Sina Weibo, FourSquare) that leverage user geographic locations to enable various recommendation functions. The public query interfaces of these services may be abstractly modeled as a kNN interface over a database of two dimensional points on a plane: given an arbitrary query point, the system returns the k points in the database that are nearest to the query point. In this paper we consider the problem of obtaining approximate estimates of SUM and COUNT aggregates by only querying such databases via their restrictive public interfaces. We distinguish between interfaces that return location information of the returned tuples (e.g., Google Maps), and interfaces that do not return location information (e.g., Sina Weibo). For both types of interfaces, we develop aggregate estimation algorithms that are based on novel techniques for precisely computing or approximately estimating the Voronoi cell of tuples. We discuss a comprehensive set of real-world experiments for testing our algorithms, including experiments on Google Maps, WeChat, and Sina Weibo.
Aggregate Estimation over Hidden Web Repositories: There has been a considerable amount of prior work on aggregate estimation over static hidden databases. @cite_4 provided an unbiased estimator for COUNT and SUM aggregates over static databases with form-based interfaces. @cite_13 @cite_0 @cite_23 @cite_5 describe efficient techniques for obtaining random samples from hidden web databases that can then be used for aggregate estimation. Recent works such as @cite_1 @cite_19 propose more sophisticated sampling techniques to reduce the variance of the aggregate estimation. For hidden databases with keyword interfaces, prior work has studied estimating the size of search engines @cite_20 @cite_21 @cite_9 or a corpus @cite_16 @cite_11 .
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_21", "@cite_1", "@cite_0", "@cite_19", "@cite_23", "@cite_5", "@cite_16", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "", "1967932772", "2043499927", "", "2129817180", "", "2018816493", "", "", "2102815161", "2151512210", "" ], "abstract": [ "", "Many websites (e.g., WedMD.com, CNN.com) provide keyword search interfaces over a large corpus of documents. Meanwhile, many third parties (e.g., investors, analysts) are interested in learning big-picture analytical information over such a document corpus, but have no direct way of accessing it other than using the highly restrictive web search interface. In this paper, we study how to enable third-party data analytics over a search engine's corpus without the cooperation of its owner - specifically, by issuing a small number of search queries through the web interface. Almost all existing techniques require a pre-constructed query pool - i.e., a small yet comprehensive collection of queries which, if all issued through the search interface, can recall almost all documents in the corpus. The problem with this requirement is that a good'' query pool can only be constructed by someone with very specific knowledge (e.g., size, topic, special terms used, etc.) of the corpus, essentially leading to a chicken-and-egg problem. In this paper, we develop QG-SAMPLER and QG-ESTIMATOR, the first practical pool-free techniques for sampling and aggregate (e.g., SUM, COUNT, AVG) estimation over a search engine's corpus, respectively. Extensive real-world experiments show that our algorithms perform on-par with the state-of-the-art pool-based techniques equipped with a carefully tailored query pool, and significantly outperforms the latter when the query pool is a mismatch.", "Search engines over document corpora typically provide keyword-search interfaces. Examples include search engines over the web as well as those over enterprise and government websites. 
The corpus of such a search engine forms a rich source of information of analytical interest to third parties, but the only available access is by issuing search queries through its interface. To support data analytics over a search engine's corpus, one needs to address two main problems, the sampling of documents (for offline analytics) and the direct (online) estimation of aggregates, while issuing a small number of queries through the keyword-search interface. Existing work on sampling produces samples with unknown bias and may incur an extremely high query cost. Existing aggregate estimation technique suffers from a similar problem, as the estimation error and query cost can both be large for certain aggregates. We propose novel techniques which produce unbiased samples as well as unbiased aggregate estimates with small variances while incurring a query cost an order of magnitude smaller than the existing techniques. We present theoretical analysis and extensive experiments to illustrate the effectiveness of our approach.", "", "A large number of online databases are hidden behind form-like interfaces which allow users to execute search queries by specifying selection conditions in the interface. Most of these interfaces return restricted answers (e.g., only top-k of the selected tuples), while many of them also accompany each answer with the COUNT of the selected tuples. In this paper, we propose techniques which leverage the COUNT information to efficiently acquire unbiased samples of the hidden database. We also discuss variants for interfaces which do not provide COUNT information. We conduct extensive experiments to illustrate the efficiency and accuracy of our techniques.", "", "Recently, there has been growing interest in random sampling from online hidden databases. 
These databases reside behind form-like web interfaces which allow users to execute search queries by specifying the desired values for certain attributes, and the system responds by returning a few (e.g., top-k) tuples that satisfy the selection conditions, sorted by a suitable scoring function. In this paper, we consider the problem of uniform random sampling over such hidden databases. A key challenge is to eliminate the skew of samples incurred by the selective return of highly ranked tuples. To address this challenge, all state-of-the-art samplers share a common approach: they do not use overflowing queries. This is done in order to avoid favoring highly ranked tuples and thus incurring high skew in the retrieved samples. However, not considering overflowing queries substantially impacts sampling efficiency. In this paper, we propose novel sampling techniques which do leverage overflowing queries. As a result, we are able to significantly improve sampling efficiency over the state-of-the-art samplers, while at the same time substantially reduce the skew of generated samples. We conduct extensive experiments over synthetic and real-world databases to illustrate the superiority of our techniques over the existing ones.", "", "", "A large part of the data on the World Wide Web is hidden behind form-like interfaces. These interfaces interact with a hidden back-end database to provide answers to user queries. Generating a uniform random sample of this hidden database by using only the publicly available interface gives us access to the underlying data distribution. In this paper, we propose a random walk scheme over the query space provided by the interface to sample such databases. We discuss variants where the query space is visualized as a fixed and random ordering of attributes. We also propose techniques to further improve the sample quality by using a probabilistic rejection based approach. 
We conduct extensive experiments to illustrate the accuracy and efficiency of our techniques.", "We revisit a problem introduced by Bharat and Broder almost a decade ago: how to sample random pages from the corpus of documents indexed by a search engine, using only the search engine’s public interface? Such a primitive is particularly useful in creating objective benchmarks for search engines. The technique of Bharat and Broder suffers from a well-recorded bias: it favors long documents. In this paper we introduce two novel sampling algorithms: a lexicon-based algorithm and a random walk algorithm. Our algorithms produce biased samples, but each sample is accompanied by a weight, which represents its bias. The samples, in conjunction with the weights, are then used to simulate near-uniform samples. To this end, we resort to four well-known Monte Carlo simulation methods: rejection sampling, importance sampling, the Metropolis-Hastings algorithm, and the Maximum Degree method. The limited access to search engines force our algorithms to use bias weights that are only “approximate”. We characterize analytically the effect of approximate bias weights on Monte Carlo methods and conclude that our algorithms are guaranteed to produce near-uniform samples from the search engine’s corpus. Our study of approximate Monte Carlo methods could be of independent interest. Experiments on a corpus of 2.4 million documents substantiate our analytical findings and show that our algorithms do not have significant bias towards long documents. We use our algorithms to collect fresh comparative statistics about the corpora of the Google, MSN Search, and Yahoo! search engines.", "" ] }
1505.02325
2264297172
Choosing a hard-to-guess secret is a prerequisite in many security applications. Whether it is a password for user authentication or a secret key for a cryptographic primitive, picking it requires the user to trade-off usability costs with resistance against an adversary: a simple password is easier to remember but is also easier to guess; likewise, a shorter cryptographic key may require fewer computational and storage resources but it is also easier to attack. A fundamental question is how one can optimally resolve this trade-off. A big challenge is the fact that an adversary can also utilize the knowledge of such usability vs. security trade-offs to strengthen its attack. In this paper, we propose a game-theoretic framework for analyzing the optimal trade-offs in the face of strategic adversaries. We consider two types of adversaries: those limited in their number of tries, and those that are ruled by the cost of making individual guesses. For each type, we derive the mutually-optimal decisions as Nash Equilibria, the strategically pessimistic decisions as maximin, and optimal commitments as Strong Stackelberg Equilibria of the game. We establish that when the adversaries are faced with a capped number of guesses, the user's optimal trade-off is a uniform randomization over a subset of the secret domain. On the other hand, when the attacker strategy is ruled by the cost of making individual guesses, Nash Equilibria may completely fail to provide the user with any level of security, signifying the crucial role of credible commitment for such cases. We illustrate our results using numerical examples based on real-world samples and discuss some policy implications of our work.
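As a toy illustration of the capped-guesses setting (the numbers and function below are illustrative, not taken from the paper): an adversary limited to k guesses best-responds by guessing the k secrets the picker is most likely to choose, so the picker limits its exposure by flattening its distribution.

```python
def attacker_success(picker_dist, k):
    """Success probability of a k-guess adversary that best-responds by
    guessing the k secrets the picker is most likely to choose."""
    return sum(sorted(picker_dist, reverse=True)[:k])

# Hypothetical picker over six secrets, skewed toward "easy" ones.
skewed = [0.4, 0.3, 0.1, 0.1, 0.05, 0.05]
# Uniform randomization over the same six secrets.
uniform = [1 / 6] * 6
```

With k = 2, the skewed picker is compromised 70% of the time while the uniform picker is compromised only one third of the time, illustrating why uniform randomization over a subset of the secret domain emerges as the user's optimal trade-off.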
Game and decision theory has been applied in other cybersecurity contexts with promising potential @cite_23 @cite_31 . The first part of our work (Capped-Guesses) is, in its abstract form, similar to the security game model analyzed in @cite_16 . In their model, the defender has limited resources to cover a wide range of targets, while an adversary chooses a single target to attack. If targets are thought of as secrets, the defender in their model is akin to the guesser in our work, and their adversary is our picker. Our Capped-Guesses model is therefore the "complement" of their model: the results they develop for their defender translate to our guesser. The focus of this paper, however, is on the picker.
{ "cite_N": [ "@cite_31", "@cite_16", "@cite_23" ], "mid": [ "1592657892", "", "568984285" ], "abstract": [ "Covering attack detection, malware response, algorithm and mechanism design, privacy, and risk management, this comprehensive work applies unique quantitative models derived from decision, control, and game theories to understanding diverse network security problems. It provides the reader with a system-level theoretical understanding of network security, and is essential reading for researchers interested in a quantitative approach to key incentive and resource allocation issues in the field. It also provides practitioners with an analytical foundation that is useful for formalising decision-making processes in network security.", "", "Global threats of terrorism, drug-smuggling, and other crimes have led to a significant increase in research on game theory for security. Game theory provides a sound mathematical approach to deploy limited security resources to maximize their effectiveness. A typical approach is to randomize security schedules to avoid predictability, with the randomization using artificial intelligence techniques to take into account the importance of different targets and potential adversary reactions. This book distills the forefront of this research to provide the first and only study of long-term deployed applications of game theory for security for key organizations such as the Los Angeles International Airport police and the U.S. Federal Air Marshals Service. The author and his research group draw from their extensive experience working with security officials to intelligently allocate limited security resources to protect targets, outlining the applications of these algorithms in research and the real world. The book also includes professional perspectives from security experts Erroll G. Southers; Lieutenant Commander Joe DiRenzo III, U.S. Coast Guard; Lieutenant Commander Ben Maule, U.S. Coast Guard; Erik Jensen, U.S. 
Coast Guard; and Lieutenant Fred S. Bertsch IV, U.S. Coast Guard." ] }